module ml.neural_tree

Inheritance diagram of mlstatpy.ml.neural_tree

Short summary

module mlstatpy.ml.neural_tree

Conversion from tree to neural network.


Classes

NeuralTreeNet – Node ensemble.

Functions

label_class_to_softmax_output – Converts a binary class label into a matrix with two columns of probabilities.

Properties

shape – Returns the shape of the coefficients.
training_weights – Returns the weights.

Static Methods

_create_from_tree_compact – Implements strategy 'compact'. See create_from_tree.
_create_from_tree_one – Implements strategy 'one'. See create_from_tree.
create_from_tree – Creates a NeuralTreeNet instance from a DecisionTreeClassifier.

Methods

__getitem__ – Retrieves node and attributes for node i.
__init__ – Constructor.
__len__ – Returns the number of nodes.
__repr__ – Usual representation.
_common_loss_dloss – Common beginning to methods loss, dlossds, dlossdw.
_get_output_node_attr – Retrieves the output nodes; nb_last is the number of expected outputs.
_predict_one – Computes the prediction for one observation.
_update_members – Updates internal members.
append – Appends a node to the graph.
clear – Clears all nodes.
copy – Copies the network.
dlossds – Computes the loss derivative against the inputs.
fill_cache – Creates a cache with intermediate results.
gradient_backward – Computes the gradient in X.
loss – Computes the loss due to prediction error; returns a float.
predict – Computes the predictions.
to_dot – Exports the neural network into DOT (Graphviz).
update_training_weights – Updates weights.

Documentation

Conversion from tree to neural network.


class mlstatpy.ml.neural_tree.NeuralTreeNet(dim, empty=True)

Bases: mlstatpy.ml._neural_tree_api._TrainingAPI

Node ensemble.

<<<

import numpy
from mlstatpy.ml.neural_tree import NeuralTreeNode, NeuralTreeNet

# one sigmoid node: the first coefficient is the bias, the rest are weights
w1 = numpy.array([-0.5, 0.8, -0.6])

neu = NeuralTreeNode(w1[1:], bias=w1[0], activation='sigmoid')
net = NeuralTreeNet(2, empty=True)
net.append(neu, numpy.arange(2))  # the node reads inputs 0 and 1

# identity node reading the previous node's output (index 2,
# since indices 0 and 1 are taken by the inputs)
ide = NeuralTreeNode(numpy.array([1.]),
                     bias=numpy.array([0.]),
                     activation='identity')

net.append(ide, numpy.arange(2, 3))

X = numpy.abs(numpy.random.randn(10, 2))
pred = net.predict(X)  # columns: the two inputs, then every node's output
print(pred)

>>>

    [[0.719 0.458 0.45  0.45 ]
     [1.167 0.775 0.492 0.492]
     [0.005 0.348 0.331 0.331]
     [0.676 0.847 0.385 0.385]
     [0.762 1.457 0.318 0.318]
     [0.174 1.452 0.226 0.226]
     [0.814 0.975 0.393 0.393]
     [0.238 0.972 0.291 0.291]
     [0.781 0.303 0.486 0.486]
     [0.868 1.035 0.395 0.395]]

Parameters
  • dim – space dimension

  • empty – if True, the network is left empty, otherwise an identity node is added

__getitem__(i)

Retrieves node and attributes for node i.

__init__(dim, empty=True)
Parameters
  • dim – space dimension

  • empty – if True, the network is left empty, otherwise an identity node is added

__len__()

Returns the number of nodes.

__repr__()

Usual representation.

_common_loss_dloss(X, y, cache=None)

Common beginning to methods loss, dlossds, dlossdw.


static _create_from_tree_compact(tree, k=1.0)

Implements strategy 'compact'. See create_from_tree.

static _create_from_tree_one(tree, k=1.0)

Implements strategy 'one'. See create_from_tree.

_get_output_node_attr(nb_last)

Retrieves the output nodes. nb_last is the number of expected outputs.


_predict_one(X)

Computes the prediction for one observation.

_update_members(node=None, attr=None)

Updates internal members.

append(node, inputs)

Appends a node to the graph.

Parameters
  • node – node to add

  • inputs – indices of the input nodes


clear()

Clears all nodes.

static create_from_tree(tree, k=1.0, arch='one')

Creates a NeuralTreeNet instance from a DecisionTreeClassifier.

Parameters
  • tree – decision tree to convert

  • k – slant of the sigmoid functions

  • arch – architecture, 'one' or 'compact'

Returns

NeuralTreeNet

The function only works for binary problems. Available architectures:

  • 'one': the method adds nodes with one output; there is no specific definition of layers,

  • 'compact': the method adds two nodes, the first computes the threshold, the second one computes the leaves output, a final node merges all outputs into one.

See notebook Un arbre de décision en réseaux de neurones for examples.

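As a hedged illustration (not taken from the library's documentation), the sketch below trains a small scikit-learn tree on a toy binary problem and converts it; the dataset and variable names are assumptions:

import numpy
from sklearn.tree import DecisionTreeClassifier
from mlstatpy.ml.neural_tree import NeuralTreeNet

# toy binary classification problem (assumption: any 2-class dataset works)
rnd = numpy.random.RandomState(0)
X = rnd.randn(100, 2)
y = (X[:, 0] + X[:, 1] > 0).astype(numpy.int64)

tree = DecisionTreeClassifier(max_depth=2)
tree.fit(X, y)

# convert the fitted tree; 'compact' is the layered architecture
net = NeuralTreeNet.create_from_tree(tree, k=1.0, arch='compact')
print(net.predict(X[:3]))

Since the hard thresholds become sigmoids, the network is expected to approximate rather than exactly replicate the tree's decision function.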

dlossds(X, y, cache=None)

Computes the loss derivative against the inputs.


fill_cache(X)

Creates a cache with intermediate results.


gradient_backward(graddx, X, inputs=False, cache=None)

Computes the gradient in X.

Parameters
  • graddx – existing gradient against the inputs

  • X – point at which the gradient is computed

  • inputs – if False, derivative against the coefficients, otherwise against the inputs

  • cache – cached intermediate results, to avoid recomputing them

Returns

gradient

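A hedged sketch of how the training methods compose, reusing net, X, y from the conversion sketch above and assuming the API processes one observation at a time:

from mlstatpy.ml.neural_tree import label_class_to_softmax_output

soft_y = label_class_to_softmax_output(y)   # expected outputs, two columns
x, expected = X[0], soft_y[0]               # a single observation

cache = net.fill_cache(x)                            # intermediate results
dloss = net.dlossds(x, expected, cache=cache)        # loss derivative against the inputs
grad = net.gradient_backward(dloss, x, cache=cache)  # derivative against the coefficients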

loss(X, y, cache=None)

Computes the loss due to prediction error. Returns a float.

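Continuing the same hedged sketch, the loss for that single observation would be a number:

print(net.loss(x, expected))  # float, per the description above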

property shape

Returns the shape of the coefficients.

to_dot(X=None)

Exports the neural network into DOT (Graphviz).

Parameters

X – input used as an example

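A minimal usage sketch, reusing net from above and assuming Graphviz is available to render the output:

dot = net.to_dot()            # DOT description of the network
with open("net.dot", "w") as f:
    f.write(dot)              # render with: dot -Tpng net.dot -o net.png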

property training_weights

Returns the weights.

update_training_weights(X, add=True)

Updates weights.

Parameters
  • X – vector to add to the weights, such as a gradient

  • add – if True, the vector is added to the weights, otherwise it replaces them

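Put together with the gradient computed earlier, one plain gradient-descent step might look like this hedged sketch (the 0.1 learning rate is an arbitrary assumption):

net.update_training_weights(-0.1 * grad)  # w <- w - 0.1 * grad, since add=True
print(net.loss(x, expected))              # the loss should usually decrease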

mlstatpy.ml.neural_tree.label_class_to_softmax_output(y_label)

Converts a binary class label into a matrix with two columns of probabilities.

<<<

import numpy
from mlstatpy.ml.neural_tree import label_class_to_softmax_output

y_label = numpy.array([0, 1, 0, 0])
soft_y = label_class_to_softmax_output(y_label)
print(soft_y)

>>>

    [[1. 0.]
     [0. 1.]
     [1. 0.]
     [1. 0.]]
