module training.sgd_learning_rate#

Inheritance diagram of onnxcustom.training.sgd_learning_rate

Short summary#

module onnxcustom.training.sgd_learning_rate

Helper for onnxruntime-training.

source on GitHub

Classes#

  • BaseLearningRate – Class handling the learning rate update after every iteration of a gradient. Two methods need to be overwritten …

  • LearningRateSGD – Implements the learning rate the same way as sklearn.linear_model.SGDRegressor.

  • LearningRateSGDNesterov – Implements the learning rate the same way as sklearn.linear_model.SGDRegressor.

Properties#

  • needs_grad (on all three classes) – Returns True if the gradient update needs to retain past gradients.

  • value (on all three classes) – Returns the current learning rate.

Static Methods#

  • select (on all three classes) – Returns an instance of a given class initialized with kwargs.

Methods#

  • __init__ (on all three classes)

  • __repr_extended__ (on all three classes)

  • _call_iobinding (on all three classes)

  • build_onnx_function (LearningRateSGD, LearningRateSGDNesterov)

  • init_learning_rate (on all three classes) – Initializes the learning rate at the beginning of the training.

  • loop (on all three classes) – Loops over learning rate values, n values in total.

  • update_learning_rate (on all three classes) – Updates the learning rate at the end of an iteration.

  • update_weights (on all three classes) – Updates weights based on the algorithm this class is setting up.

Documentation#

Helper for onnxruntime-training.

source on GitHub

class onnxcustom.training.sgd_learning_rate.BaseLearningRate#

Bases: BaseLearningOnnx

Class handling the learning rate update after every gradient iteration. Two methods need to be overwritten: init_learning_rate and update_learning_rate. The first one starts the loop, the second one returns the next learning rate value.

source on GitHub
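For illustration, a minimal sketch of that protocol: a constant-rate subclass overriding the two required methods. It assumes the base value property reads a value_ attribute, as the created attributes of LearningRateSGD below suggest; the class name is hypothetical.

from onnxcustom.training.sgd_learning_rate import BaseLearningRate

class ConstantLearningRate(BaseLearningRate):
    "Hypothetical subclass keeping the learning rate constant."

    def __init__(self, eta=0.01):
        BaseLearningRate.__init__(self)
        self.eta = eta

    def init_learning_rate(self):
        # starts the loop
        self.value_ = self.eta
        return self

    def update_learning_rate(self, t):
        # returns the next value, constant in this sketch
        self.value_ = self.eta
        return self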

__init__()#
__repr_extended__()#
_call_iobinding(sess, bind)#
init_learning_rate()#

Initializes the learning rate at the beginning of the training.

Returns:

self

source on GitHub

loop(n=1000)#

Loops over learning rate values, n values in total.

Parameters:
  • n – number of requested iterations

Returns:

iterator

source on GitHub
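A short usage sketch, assuming loop yields the successive learning-rate values:

from onnxcustom.training.sgd_learning_rate import LearningRateSGD

lr = LearningRateSGD(eta0=0.01, learning_rate='invscaling')
for value in lr.loop(n=5):
    print(value)  # one learning rate per requested iteration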

property needs_grad#

Returns True if the gradient update needs to retain past gradients.

source on GitHub

static select(class_name, **kwargs)#

Returns an instance of a given class initialized with kwargs.

Parameters:
  • class_name – an instance of BaseLearningRate, or a string among the class names listed below; it can also be a float, in which case class LearningRateSGD is used

Returns:

instance of BaseLearningRate

Possible values for class_name:
  • ‘SGD’ or ‘LearningRateSGD’: see LearningRateSGD

source on GitHub
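A short sketch based on the possible values listed above (how the float argument is interpreted beyond selecting LearningRateSGD is not specified here):

from onnxcustom.training.sgd_learning_rate import BaseLearningRate, LearningRateSGD

lr = BaseLearningRate.select('SGD', eta0=0.1)
assert isinstance(lr, LearningRateSGD)
lr2 = BaseLearningRate.select(0.1)  # a float also selects LearningRateSGD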

update_learning_rate(t)#

Updates the learning rate at the end of an iteration.

Parameters:
  • t – iteration number

Returns:

self

source on GitHub

update_weights(device, statei, gradienti, batch_size, velocity=None)#

Updates weights based on the algorithm this class is setting up.

Parameters:
  • device – device

  • statei – current weight

  • gradienti – gradient

  • batch_size – batch size

  • velocity – same shape as the gradient

source on GitHub
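For intuition, a rough NumPy rendering of a plain SGD step; the library performs the update through an ONNX graph bound to the given device, and the scaling of the gradient by batch_size is an assumption of this sketch:

import numpy as np

def sgd_step(value, statei, gradienti, batch_size):
    # plain SGD rule; dividing by batch_size is an assumption
    return statei - value * gradienti / batch_size

w = np.array([1.0, -2.0], dtype=np.float32)
g = np.array([10.0, 4.0], dtype=np.float32)
print(sgd_step(0.01, w, g, batch_size=10))  # [ 0.99  -2.004]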

property value#

Returns the current learning rate.

class onnxcustom.training.sgd_learning_rate.LearningRateSGD(eta0=0.01, alpha=0.0001, power_t=0.25, learning_rate='invscaling')#

Bases: BaseLearningRate

Implements the learning rate the same way as sklearn.linear_model.SGDRegressor.

Parameters:
  • eta0 – initial learning rate for the ‘constant’, ‘invscaling’ or ‘adaptive’ schedules.

  • alpha – constant that multiplies the regularization term; the higher the value, the stronger the regularization. Also used to compute the learning rate when learning_rate is set to ‘optimal’.

  • power_t – exponent for inverse scaling learning rate

  • learning_rate – learning rate schedule:

    • ‘constant’: eta = eta0

    • ‘optimal’: eta = 1.0 / (alpha * (t + t0)) where t0 is chosen by a heuristic proposed by Leon Bottou; this number is multiplied by a constant C to make the first value equal to eta0

    • ‘invscaling’: eta = eta0 / pow(t, power_t)

Created attributes:
  • eta0_: initial eta0

  • optimal_init_: used when learning_rate == ‘optimal’

  • value_: value to be returned by property value

source on GitHub
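A worked example of the ‘invscaling’ schedule, eta = eta0 / pow(t, power_t), with the default parameters:

eta0, power_t = 0.01, 0.25
for t in [1, 16, 256]:
    print(t, eta0 / t ** power_t)  # 0.01, 0.005, 0.0025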

__init__(eta0=0.01, alpha=0.0001, power_t=0.25, learning_rate='invscaling')#
build_onnx_function(opset, device, n_tensors)#

This class computes a function represented as an ONNX graph. This method builds the graph and creates the InferenceSession that runs it.

Parameters:
  • opset – opset to use

  • device – C_OrtDevice

  • args – additional arguments

source on GitHub
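For intuition only, a minimal sketch of building and running a tiny ONNX function with onnx.helper and InferenceSession; the graph below (one gradient step, W_new = W - lr * G) is hypothetical and not the library's actual graph:

import numpy as np
from onnx import TensorProto, helper
from onnxruntime import InferenceSession

node_mul = helper.make_node('Mul', ['lr', 'G'], ['scaled'])
node_sub = helper.make_node('Sub', ['W', 'scaled'], ['W_new'])
graph = helper.make_graph(
    [node_mul, node_sub], 'sgd_step',
    [helper.make_tensor_value_info('lr', TensorProto.FLOAT, [1]),
     helper.make_tensor_value_info('W', TensorProto.FLOAT, None),
     helper.make_tensor_value_info('G', TensorProto.FLOAT, None)],
    [helper.make_tensor_value_info('W_new', TensorProto.FLOAT, None)])
model = helper.make_model(graph, opset_imports=[helper.make_opsetid('', 15)])
sess = InferenceSession(model.SerializeToString(),
                        providers=['CPUExecutionProvider'])
w = np.array([1.0, 2.0], dtype=np.float32)
g = np.array([0.5, -0.5], dtype=np.float32)
print(sess.run(None, {'lr': np.array([0.01], dtype=np.float32),
                      'W': w, 'G': g})[0])  # [0.995 2.005]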

init_learning_rate()#

Initializes the learning rate at the beginning of the training.

Returns:

self

source on GitHub

property needs_grad#

Returns True if the gradient update needs to retain past gradients.

source on GitHub

update_learning_rate(t)#

Updates the learning rate at the end of an iteration.

Parameters:
  • t – iteration number

Returns:

self

source on GitHub

update_weights(n_bind, device, statei, gradienti, batch_size, velocity=None)#

Updates weights based on the algorithm this class is setting up.

Parameters:
  • device – device

  • statei – current weight

  • gradienti – gradient

  • batch_size – batch size

  • velocity – same shape as the gradient

source on GitHub

property value#

Returns the current learning rate.

class onnxcustom.training.sgd_learning_rate.LearningRateSGDNesterov(eta0=0.01, alpha=0.0001, power_t=0.25, learning_rate='invscaling', momentum=0.9, nesterov=True)#

Bases: LearningRateSGD

Implements the learning rate the same way as sklearn.linear_model.SGDRegressor.

Parameters:
  • eta0 – initial learning rate for the ‘constant’, ‘invscaling’ or ‘adaptive’ schedules.

  • alpha – constant that multiplies the regularization term; the higher the value, the stronger the regularization. Also used to compute the learning rate when learning_rate is set to ‘optimal’.

  • power_t – exponent for inverse scaling learning rate

  • learning_rate – learning rate schedule:

    • ‘constant’: eta = eta0

    • ‘optimal’: eta = 1.0 / (alpha * (t + t0)) where t0 is chosen by a heuristic proposed by Leon Bottou; this number is multiplied by a constant C to make the first value equal to eta0

    • ‘invscaling’: eta = eta0 / pow(t, power_t)

  • momentum – float, default=0.9; value of momentum used, must be larger than or equal to 0.

  • nesterov – bool, default=True; whether to use Nesterov’s momentum or not. Not using it is equivalent to class LearningRateSGD.

Created attributes:
  • eta0_: initial eta0

  • optimal_init_: used when learning_rate == ‘optimal’

  • value_: value to be returned by property value

updates = [
    self.momentum * velocity - self.learning_rate * grad
    for velocity, grad in zip(self.velocities, grads)]
self.velocities = updates

if self.nesterov:
    updates_nesterov = [
        self.momentum * velocity - self.learning_rate * grad
        for velocity, grad in zip(self.velocities, grads)]
    return updates, updates_nesterov  # new gradient and velocities
else:
    return updates  # new gradient

source on GitHub
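The update rule above can be checked with a short NumPy rendering (variable names are illustrative; the library runs this as an ONNX graph):

import numpy as np

momentum, learning_rate, nesterov = 0.9, 0.01, True
velocity = np.zeros(3, dtype=np.float32)
grad = np.array([0.5, -1.0, 0.25], dtype=np.float32)

velocity = momentum * velocity - learning_rate * grad    # new velocity
if nesterov:
    update = momentum * velocity - learning_rate * grad  # look-ahead step
else:
    update = velocity
print(velocity, update)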

__init__(eta0=0.01, alpha=0.0001, power_t=0.25, learning_rate='invscaling', momentum=0.9, nesterov=True)#
build_onnx_function(opset, device, n_tensors)#

This class computes a function represented as an ONNX graph. This method builds the graph and creates the InferenceSession that runs it.

Parameters:
  • opset – opset to use

  • device – C_OrtDevice

  • args – additional arguments

source on GitHub

init_learning_rate()#

Initializes the learning rate at the beginning of the training.

Returns:

self

source on GitHub

property needs_grad#

Returns True if the gradient update needs to retain past gradients.

source on GitHub

update_learning_rate(t)#

Updates the learning rate at the end of an iteration.

Parameters:
  • t – iteration number

Returns:

self

source on GitHub

update_weights(n_bind, device, statei, gradienti, batch_size, velocity=None)#

Updates weights based on the algorithm this class is setting up.

Parameters:
  • device – device

  • statei – current weight

  • gradienti – gradient

  • batch_size – batch size

  • velocity – same shape as the gradient

source on GitHub