module ml._neural_tree_node#

Inheritance diagram of mlstatpy.ml._neural_tree_node

Short summary#

module mlstatpy.ml._neural_tree_node

Conversion from tree to neural network.

source on GitHub

Classes#

class

truncated documentation

NeuralTreeNode

One node in a neural network.

Properties#

property

truncated documentation

bias

Returns the bias.

input_weights

Returns the weights.

ndim

Returns the input dimension.

ndim_out

Returns the output dimension.

training_weights

Returns the weights stored in the neuron.

Static Methods#

staticmethod

truncated documentation

_dleakyrelu

Derivative of the Leaky Relu function.

_drelu

Derivative of the Relu function.

_dsigmoid

Derivative of the sigmoid function.

_dsoftmax

Derivative of the softmax function.

_leakyrelu

Leaky Relu function.

_relu

Relu function.

_softmax

Softmax function.

get_activation_dloss_function

Returns the derivative of the default loss function based on the activation function. It returns a function …

get_activation_function

Returns the activation function. It returns a function y=f(x).

get_activation_gradient_function

Returns the derivative of the activation function. It returns a function y=f'(x). About the sigmoid: …

get_activation_loss_function

Returns a default loss function based on the activation function. It returns a function g=loss(x,y).

Methods#

method

truncated documentation

__eq__

__getstate__

usual

__init__

__repr__

usual

__setstate__

usual

_common_loss_dloss

Common beginning to methods loss, dlossds, dlossdw.

_predict

Computes inputs of the activation function.

_set_fcts

dlossds

Computes the loss derivative due to prediction error.

fill_cache

Creates a cache with intermediate results. lX is the results before the activation function, aX

gradient_backward

Computes the gradients at point X.

loss

Computes the loss. Returns a float.

predict

Computes neuron outputs.

update_training_weights

Updates weights.

Documentation#

Conversion from tree to neural network.

source on GitHub

class mlstatpy.ml._neural_tree_node.NeuralTreeNode(weights, bias=None, activation='sigmoid', nodeid=-1, tag=None)#

Bases: _TrainingAPI

One node in a neural network.

Parameters:
  • weights – weights

  • bias – bias; if None, a random value is drawn

  • activation – activation function

  • nodeid – node id

  • tag – unused, only records how this node was created

source on GitHub
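The constructor and the main properties can be exercised as below. A minimal sketch, assuming weights accepts a 1D numpy array and predict a vector of the same dimension; shapes and the import path are taken from this page and not otherwise verified.

    import numpy
    from mlstatpy.ml._neural_tree_node import NeuralTreeNode

    # a single neuron with two inputs, an explicit bias and a sigmoid activation
    node = NeuralTreeNode(numpy.array([0.5, -0.3]), bias=0.1, activation='sigmoid')

    x = numpy.array([1.0, 2.0])
    print(node.ndim, node.ndim_out)   # input and output dimensions
    print(node.training_weights)      # weights stored in the neuron
    print(node.predict(x))            # neuron output: sigmoid(w . x + bias)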

__eq__(obj)#

Return self==value.

__getstate__()#

usual

__hash__ = None#
__init__(weights, bias=None, activation='sigmoid', nodeid=-1, tag=None)#
__repr__()#

usual

__setstate__(state)#

usual

_common_loss_dloss(X, y, cache=None)#

Common beginning to methods loss, dlossds, dlossdw.

source on GitHub

static _dleakyrelu(x)#

Derivative of the Leaky Relu function.

static _drelu(x)#

Derivative of the Relu function.

static _dsigmoid(x)#

Derivative of the sigmoid function.

static _dsoftmax(x)#

Derivative of the softmax function.

static _leakyrelu(x)#

Leaky Relu function.

_predict(X)#

Computes inputs of the activation function.

static _relu(x)#

Relu function.

_set_fcts()#
static _softmax(x)#

Softmax function.

property bias#

Returns the bias.

dlossds(X, y, cache=None)#

Computes the loss derivative due to prediction error.

source on GitHub

fill_cache(X)#

Creates a cache with intermediate results: lX is the result before the activation function, aX is the result after the activation function, i.e. the prediction.

source on GitHub

static get_activation_dloss_function(activation)#

Returns the derivative of the default loss function based on the activation function. It returns a function computing df(x,y)/dw, the derivative of the loss with respect to the weights w.

source on GitHub

static get_activation_function(activation)#

Returns the activation function. It returns a function y=f(x).

source on GitHub
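For instance, a hedged sketch retrieving the sigmoid as a standalone callable; the accepted activation names are assumed to be the ones used by the constructor ('sigmoid', 'relu', 'leakyrelu', 'softmax').

    import numpy
    from mlstatpy.ml._neural_tree_node import NeuralTreeNode

    # assumed to return a callable y = f(x) operating on numpy arrays
    f = NeuralTreeNode.get_activation_function('sigmoid')
    x = numpy.array([-2.0, 0.0, 2.0])
    print(f(x))  # elementwise 1 / (1 + exp(-x))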

static get_activation_gradient_function(activation)#

Returns the derivative of the activation function. It returns a function y=f'(x). About the sigmoid:

f(x) = \frac{1}{1 + e^{-x}}

f'(x) = \frac{e^{-x}}{(1 + e^{-x})^2} = f(x)(1 - f(x))

source on GitHub
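The identity f'(x) = f(x)(1 - f(x)) can be checked numerically; the snippet below uses plain numpy only and does not call the library.

    import numpy

    x = numpy.linspace(-4.0, 4.0, 9)
    f = 1.0 / (1.0 + numpy.exp(-x))                   # sigmoid f(x)
    df = numpy.exp(-x) / (1.0 + numpy.exp(-x)) ** 2   # derivative, first form
    assert numpy.allclose(df, f * (1.0 - f))          # equals f(x) * (1 - f(x))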

static get_activation_loss_function(activation)#

Returns a default loss function based on the activation function. It returns a function g=loss(x,y).

source on GitHub

gradient_backward(graddx, X, inputs=False, cache=None)#

Computes the gradients at point X.

Parameters:
  • graddx – existing gradient with respect to the inputs

  • X – point at which the gradient is computed

  • inputs – if False, the gradient is computed with respect to the coefficients, otherwise with respect to the inputs

  • cache – cache of intermediate results

Returns:

gradient

source on GitHub
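A typical backward pass chains fill_cache, loss, dlossds and gradient_backward. The sketch below is an assumption about how these pieces fit together for a single neuron; argument shapes and return values are not verified here.

    import numpy
    from mlstatpy.ml._neural_tree_node import NeuralTreeNode

    node = NeuralTreeNode(numpy.array([0.5, -0.3]), bias=0.1, activation='sigmoid')
    X = numpy.array([1.0, 2.0])
    y = numpy.array([1.0])

    cache = node.fill_cache(X)               # assumed to return the created cache (lX, aX)
    err = node.loss(X, y, cache=cache)       # scalar loss for this sample
    dlds = node.dlossds(X, y, cache=cache)   # derivative of the loss w.r.t. the prediction
    grad = node.gradient_backward(dlds, X, inputs=False, cache=cache)  # w.r.t. the coefficients
    print(err, grad)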

property input_weights#

Returns the weights.

loss(X, y, cache=None)#

Computes the loss. Returns a float.

source on GitHub

property ndim#

Returns the input dimension.

property ndim_out#

Returns the output dimension.

predict(X)#

Computes neuron outputs.

property training_weights#

Returns the weights stored in the neuron.

update_training_weights(X, add=True)#

Updates weights.

Parameters:
  • X – vector to add to the weights, such as a gradient

  • add – if True, adds the vector to the weights, otherwise replaces them

source on GitHub
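Continuing the sketch shown after gradient_backward, the computed gradient can be folded back into the neuron with a plain gradient-descent step; the learning rate and the sign convention are assumptions, not part of this API.

    # hedged continuation of the gradient_backward sketch above
    learning_rate = 0.1
    node.update_training_weights(-learning_rate * grad, add=True)  # add the scaled step to the weights
    print(node.training_weights)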