module ml.neural_tree#
Short summary#
module mlstatpy.ml.neural_tree
Conversion from tree to neural network.
Classes#
class | truncated documentation
---|---
BaseNeuralTreeNet | Classifier or regressor following scikit-learn API.
NeuralTreeNet | Node ensemble.
NeuralTreeNetClassifier | Classifier following scikit-learn API.
NeuralTreeNetRegressor | Regressor following scikit-learn API.
Functions#
function | truncated documentation
---|---
label_class_to_softmax_output | Converts a binary class label into a matrix with two columns of probabilities.
Properties#
property | truncated documentation
---|---
BaseNeuralTreeNet._repr_html_ | HTML representation of estimator. This is redundant with the logic of _repr_mimebundle_. The latter should …
NeuralTreeNetClassifier._repr_html_ | HTML representation of estimator. This is redundant with the logic of _repr_mimebundle_. The latter should …
NeuralTreeNetRegressor._repr_html_ | HTML representation of estimator. This is redundant with the logic of _repr_mimebundle_. The latter should …
NeuralTreeNet.shape | Returns the shape of the coefficients.
NeuralTreeNet.training_weights | Returns the weights.
Static Methods#
staticmethod | truncated documentation
---|---
NeuralTreeNet._create_from_tree_compact | Implements strategy "compact". See create_from_tree.
NeuralTreeNet._create_from_tree_one | Implements strategy "one". See create_from_tree.
NeuralTreeNet.create_from_tree | Creates a NeuralTreeNet instance from a DecisionTreeClassifier.
BaseNeuralTreeNet.onnx_converter | Converts this model into ONNX.
NeuralTreeNetClassifier.onnx_converter | Converts this model into ONNX.
NeuralTreeNetRegressor.onnx_converter | Converts this model into ONNX.
BaseNeuralTreeNet.onnx_shape_calculator | Shape calculator when converting this model into ONNX. See :epkg:`sklearn-onnx`.
NeuralTreeNetClassifier.onnx_shape_calculator | Shape calculator when converting this model into ONNX. See :epkg:`sklearn-onnx`.
NeuralTreeNetRegressor.onnx_shape_calculator | Shape calculator when converting this model into ONNX. See :epkg:`sklearn-onnx`.
Methods#
method | truncated documentation
---|---
NeuralTreeNet.__getitem__ | Retrieves node and attributes for node i.
NeuralTreeNet.__len__ | Returns the number of nodes.
NeuralTreeNet.__repr__ | usual
NeuralTreeNet._common_loss_dloss | Common beginning to methods loss, dlossds, dlossdw.
NeuralTreeNet._get_output_node_attr | Retrieves the output nodes. nb_last is the number of expected outputs.
NeuralTreeNet._update_members | Updates internal members.
NeuralTreeNet.append | Appends a node into the graph.
NeuralTreeNet.clear | Clears all nodes.
BaseNeuralTreeNet.decision_function | Returns the classification probabilities.
NeuralTreeNetClassifier.decision_function | Returns the classification probabilities.
NeuralTreeNetRegressor.decision_function | Returns the classification probabilities.
NeuralTreeNet.dlossds | Computes the loss derivative against the inputs.
NeuralTreeNet.fill_cache | Creates a cache with intermediate results.
BaseNeuralTreeNet.fit | Trains the estimator.
NeuralTreeNetClassifier.fit | Trains the estimator.
NeuralTreeNetRegressor.fit | Trains the estimator.
NeuralTreeNet.gradient_backward | Computes the gradient in X.
NeuralTreeNet.loss | Computes the loss due to prediction error. Returns a float.
NeuralTreeNetClassifier.predict | Returns the predicted classes.
NeuralTreeNetRegressor.predict | Returns the predicted values.
NeuralTreeNetClassifier.predict_proba | Returns the classification probabilities.
NeuralTreeNet.to_dot | Exports the neural network into dot.
NeuralTreeNet.update_training_weights | Updates weights.
Documentation#
Conversion from tree to neural network.
- class mlstatpy.ml.neural_tree.BaseNeuralTreeNet(estimator, optimizer=None, max_iter=100, early_th=None, verbose=False, lr=None, lr_schedule=None, l1=0.0, l2=0.0, momentum=0.9)#
Bases: BaseEstimator
Classifier or regressor following scikit-learn API.
- Parameters:
estimator – instance of NeuralTreeNet
X – training set
y – training labels
optimizer – optimizer, by default SGDOptimizer
max_iter – maximum number of iterations
early_th – early stopping threshold
verbose – increases verbosity
lr – to overwrite learning_rate_init if optimizer is None (unused otherwise)
lr_schedule – to overwrite lr_schedule if optimizer is None (unused otherwise)
l1 – L1 regularization if optimizer is None (unused otherwise)
l2 – L2 regularization if optimizer is None (unused otherwise)
momentum – used if optimizer is None
- __init__(estimator, optimizer=None, max_iter=100, early_th=None, verbose=False, lr=None, lr_schedule=None, l1=0.0, l2=0.0, momentum=0.9)#
- decision_function(X)#
Returns the classification probabilities.
- Parameters:
X – inputs
- Returns:
probabilities
- fit(X, y, sample_weights=None)#
Trains the estimator.
- Parameters:
X – input features
y – expected classes (binary)
sample_weights – sample weights
- Returns:
self
- static onnx_converter()#
Converts this model into ONNX.
- static onnx_shape_calculator()#
Shape calculator when converting this model into ONNX. See :epkg:`sklearn-onnx`.
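These two static methods are the pieces :epkg:`sklearn-onnx` needs to register a custom converter. Below is a minimal sketch of how such a registration might look, assuming the static methods return the callables expected by update_registered_converter; the alias string is an arbitrary name chosen here for illustration.
<<<
import numpy
from skl2onnx import to_onnx, update_registered_converter
from mlstatpy.ml.neural_tree import NeuralTreeNetClassifier

# Hedged sketch: register the converter and shape calculator with skl2onnx.
# The alias "MlstatpyNeuralTreeNetClassifier" is a made-up name for this example.
update_registered_converter(
    NeuralTreeNetClassifier,
    "MlstatpyNeuralTreeNetClassifier",
    NeuralTreeNetClassifier.onnx_shape_calculator(),
    NeuralTreeNetClassifier.onnx_converter())

# Once registered, a fitted classifier clf could be converted with:
# onx = to_onnx(clf, X[:1].astype(numpy.float32))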
- class mlstatpy.ml.neural_tree.NeuralTreeNet(dim, empty=True)#
Bases: _TrainingAPI
Node ensemble.
- Parameters:
dim – space dimension
empty – empty network, otherwise an identity node is added
<<<
import numpy
from mlstatpy.ml.neural_tree import NeuralTreeNode, NeuralTreeNet

w1 = numpy.array([-0.5, 0.8, -0.6])

neu = NeuralTreeNode(w1[1:], bias=w1[0], activation='sigmoid')
net = NeuralTreeNet(2, empty=True)
net.append(neu, numpy.arange(2))

ide = NeuralTreeNode(numpy.array([1.]), bias=numpy.array([0.]), activation='identity')
net.append(ide, numpy.arange(2, 3))

X = numpy.abs(numpy.random.randn(10, 2))
pred = net.predict(X)
print(pred)
>>>
[[0.024 1.468 0.204 0.204]
 [0.617 0.655 0.401 0.401]
 [1.045 1.32  0.388 0.388]
 [1.844 0.177 0.705 0.705]
 [0.007 0.382 0.327 0.327]
 [1.463 0.579 0.58  0.58 ]
 [0.688 0.184 0.485 0.485]
 [0.847 0.6   0.454 0.454]
 [1.017 0.387 0.52  0.52 ]
 [0.178 0.413 0.353 0.353]]
- __getitem__(i)#
Retrieves node and attributes for node i.
- __init__(dim, empty=True)#
- __len__()#
Returns the number of nodes.
- __repr__()#
usual
- _common_loss_dloss(X, y, cache=None)#
Common beginning to methods loss, dlossds, dlossdw.
- static _create_from_tree_compact(tree, k=1.0)#
Implements strategy "compact". See create_from_tree.
- static _create_from_tree_one(tree, k=1.0)#
Implements strategy "one". See create_from_tree.
- _get_output_node_attr(nb_last)#
Retrieves the output nodes. nb_last is the number of expected outputs.
- _predict_one(X)#
- _update_members(node=None, attr=None)#
Updates internal members.
- append(node, inputs)#
Appends a node into the graph.
- Paramètres:
node – node to add
inputs – index of input nodes
- clear()#
Clears all nodes.
- static create_from_tree(tree, k=1.0, arch='one')#
Creates a NeuralTreeNet instance from a DecisionTreeClassifier.
- Parameters:
tree – DecisionTreeClassifier
k – slant of the sigmoid
arch – architecture, see below
- Returns:
a NeuralTreeNet
The function only works for binary problems. Available architectures:
- "one": the method adds nodes with one output, there is no specific definition of layers,
- "compact": the method adds two nodes, the first computes the thresholds, the second one computes the leaves output, a final node merges all outputs into one.
See notebook Un arbre de décision en réseaux de neurones for examples.
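A minimal sketch of how create_from_tree could be used on a binary classification tree; the dataset, the value of k and the columns sliced from predict are choices made here for illustration, not taken from the library documentation.
<<<
import numpy
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier
from mlstatpy.ml.neural_tree import NeuralTreeNet

# binary problem only, as stated above
X, y = make_classification(n_samples=100, n_features=4, n_classes=2, random_state=0)
X = X.astype(numpy.float64)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# k controls the slant of the sigmoid replacing each threshold
net = NeuralTreeNet.create_from_tree(tree, k=10.0, arch="compact")

# predict returns the inputs followed by every node output;
# the last two columns are assumed to hold the class probabilities
proba_tree = tree.predict_proba(X)
proba_net = net.predict(X)[:, -2:]
print(numpy.abs(proba_tree - proba_net).max())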
- dlossds(X, y, cache=None)#
Computes the loss derivative against the inputs.
- fill_cache(X)#
Creates a cache with intermediate results.
- gradient_backward(graddx, X, inputs=False, cache=None)#
Computes the gradient in X.
- Parameters:
graddx – existing gradient against the inputs
X – input at which the gradient is computed
inputs – if False, derivative against the coefficients, otherwise against the inputs
cache – cache of intermediate results to avoid recomputation
- Returns:
gradient
- loss(X, y, cache=None)#
Computes the loss due to prediction error. Returns a float.
- property shape#
Returns the shape of the coefficients.
- property training_weights#
Returns the weights.
- update_training_weights(X, add=True)#
Updates weights.
- Parameters:
grad – vector to add to the weights, such as a gradient
add – addition or replacement
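Taken together, fill_cache, loss, dlossds, gradient_backward and update_training_weights form a small low-level training API. The following is a hedged sketch of one manual gradient step on a network similar to the one built in the example above; the per-observation usage, the target shape and the sign convention of the update are assumptions, not documented behaviour.
<<<
import numpy
from mlstatpy.ml.neural_tree import NeuralTreeNode, NeuralTreeNet

# same single sigmoid node as in the example above
w1 = numpy.array([-0.5, 0.8, -0.6])
neu = NeuralTreeNode(w1[1:], bias=w1[0], activation='sigmoid')
net = NeuralTreeNet(2, empty=True)
net.append(neu, numpy.arange(2))

X = numpy.abs(numpy.random.randn(10, 2))
y = (X[:, 0] > X[:, 1]).astype(numpy.float64)

# one observation: cache intermediate results, evaluate the loss,
# differentiate it against the inputs, then push the gradient
# back to the coefficients
cache = net.fill_cache(X[0])
print("loss:", net.loss(X[0], y[0], cache=cache))
dloss = net.dlossds(X[0], y[0], cache=cache)
grad = net.gradient_backward(dloss, X[0], inputs=False, cache=cache)

# assumed convention: subtract a fraction of the gradient
net.update_training_weights(-0.1 * grad, add=True)
print("weights:", net.training_weights)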
- class mlstatpy.ml.neural_tree.NeuralTreeNetClassifier(estimator, optimizer=None, max_iter=100, early_th=None, verbose=False, lr=None, lr_schedule=None, l1=0.0, l2=0.0, momentum=0.9)#
Bases: ClassifierMixin, BaseNeuralTreeNet
Classifier following scikit-learn API.
- Parameters:
estimator – instance of NeuralTreeNet
X – training set
y – training labels
optimizer – optimizer, by default SGDOptimizer
max_iter – maximum number of iterations
early_th – early stopping threshold
verbose – increases verbosity
lr – to overwrite learning_rate_init if optimizer is None (unused otherwise)
lr_schedule – to overwrite lr_schedule if optimizer is None (unused otherwise)
l1 – L1 regularization if optimizer is None (unused otherwise)
l2 – L2 regularization if optimizer is None (unused otherwise)
momentum – used if optimizer is None
- __init__(estimator, optimizer=None, max_iter=100, early_th=None, verbose=False, lr=None, lr_schedule=None, l1=0.0, l2=0.0, momentum=0.9)#
- predict(X)#
Returns the predicted classes.
- Parameters:
X – inputs
- Returns:
classes
- predict_proba(X)#
Returns the classification probabilities.
- Parameters:
X – inputs
- Returns:
probabilities
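A hedged sketch of the intended end-to-end workflow: convert a fitted decision tree into a NeuralTreeNet, wrap it in NeuralTreeNetClassifier and fine-tune it with fit. The dataset and the hyperparameters below are arbitrary choices for illustration.
<<<
import numpy
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier
from mlstatpy.ml.neural_tree import NeuralTreeNet, NeuralTreeNetClassifier

X, y = make_classification(n_samples=200, n_features=5, n_classes=2, random_state=1)
X = X.astype(numpy.float64)

# start from a decision tree and convert it into a neural network
tree = DecisionTreeClassifier(max_depth=3, random_state=1).fit(X, y)
net = NeuralTreeNet.create_from_tree(tree, k=10.0, arch="compact")

# wrap it into a scikit-learn compatible classifier and fine-tune the weights
clf = NeuralTreeNetClassifier(net, max_iter=20, lr=1e-3)
clf.fit(X, y)

print(clf.predict_proba(X[:3]))   # two columns of probabilities
print(clf.predict(X[:3]))         # predicted classes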
- class mlstatpy.ml.neural_tree.NeuralTreeNetRegressor(estimator, optimizer=None, max_iter=100, early_th=None, verbose=False, lr=None, lr_schedule=None, l1=0.0, l2=0.0, momentum=0.9)#
Bases: RegressorMixin, BaseNeuralTreeNet
Regressor following scikit-learn API.
- Parameters:
estimator – instance of NeuralTreeNet
X – training set
y – training labels
optimizer – optimizer, by default SGDOptimizer
max_iter – maximum number of iterations
early_th – early stopping threshold
verbose – increases verbosity
lr – to overwrite learning_rate_init if optimizer is None (unused otherwise)
lr_schedule – to overwrite lr_schedule if optimizer is None (unused otherwise)
l1 – L1 regularization if optimizer is None (unused otherwise)
l2 – L2 regularization if optimizer is None (unused otherwise)
momentum – used if optimizer is None
- __init__(estimator, optimizer=None, max_iter=100, early_th=None, verbose=False, lr=None, lr_schedule=None, l1=0.0, l2=0.0, momentum=0.9)#
- predict(X)#
Returns the predicted values.
- Parameters:
X – inputs
- Returns:
predictions
- mlstatpy.ml.neural_tree.label_class_to_softmax_output(y_label)#
Converts a binary class label into a matrix with two columns of probabilities.
<<<
import numpy
from mlstatpy.ml.neural_tree import label_class_to_softmax_output

y_label = numpy.array([0, 1, 0, 0])
soft_y = label_class_to_softmax_output(y_label)
print(soft_y)
>>>
[[1. 0.]
 [0. 1.]
 [1. 0.]
 [1. 0.]]