module onnx_conv.onnx_ops.onnx_gradient_op#

Inheritance diagram of mlprodict.onnx_conv.onnx_ops.onnx_gradient_op

Short summary#

module mlprodict.onnx_conv.onnx_ops.onnx_gradient_op

Custom operators for gradient computation.

source on GitHub

Classes#

class

truncated documentation

OnnxBroadcastGradientArgs_1

Defines a custom operator for BroadcastGradientArgs. Returns the reduction axes for computing gradients of s0 op s1 …

OnnxFusedMatMul_1

MatMul and Gemm without a C.

OnnxSoftmaxGrad_13

Gradient of Softmax. SoftmaxGrad computes Y * (dY - ReduceSum(Y * dY)). ONNX does not have a dot product, …

OnnxYieldOp_1

Defines a custom operator for YieldOp.

Properties#

property

truncated documentation

onnx_prefix

outputs

Returns the outputs of the node.

Methods#

method

truncated documentation

__init__

Documentation#

Custom operators for gradient computation.

source on GitHub

mlprodict.onnx_conv.onnx_ops.onnx_gradient_op.OnnxBroadcastGradientArgs#

alias of OnnxBroadcastGradientArgs_1

class mlprodict.onnx_conv.onnx_ops.onnx_gradient_op.OnnxBroadcastGradientArgs_1(a_shape, b_shape, op_version=None, **kwargs)#

Bases: OnnxOperator

Defines a custom operator for BroadcastGradientArgs. Returns the reduction axes for computing gradients of s0 op s1 with broadcast. The output axes are deterministic, ordered from last to first. Output is an empty vector when no reduction is necessary for the corresponding input.

source on GitHub

Parameters:
  • a_shape – The 1st input shape as Tensor.

  • b_shape – The 2nd input shape as Tensor.

  • op_version – opset version

  • kwargs – additional parameters

source on GitHub

__init__(a_shape, b_shape, op_version=None, **kwargs)#
Parameters:
  • a_shape – The 1st input shape as Tensor.

  • b_shape – The 2nd input shape as Tensor.

  • op_version – opset version

  • kwargs – additional parameters

source on GitHub
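The operator's contract is easy to state in plain NumPy terms. The sketch below is not mlprodict's implementation, only an illustration of the semantics described above: given two broadcast-compatible shapes, it returns, for each input, the axes of the broadcast output along which that input's gradient must be reduced, walking the axes from last to first.

    def broadcast_gradient_args(a_shape, b_shape):
        # Left-pad the shorter shape with 1s, mirroring ONNX/NumPy broadcasting.
        rank = max(len(a_shape), len(b_shape))
        a = [1] * (rank - len(a_shape)) + list(a_shape)
        b = [1] * (rank - len(b_shape)) + list(b_shape)
        a_axes, b_axes = [], []
        # Deterministic walk from the last axis to the first, as documented.
        for axis in range(rank - 1, -1, -1):
            if a[axis] != b[axis]:
                # The broadcast side (size 1) must reduce its gradient there.
                (a_axes if a[axis] == 1 else b_axes).append(axis)
        # An empty list means no reduction is necessary for that input.
        return a_axes, b_axes

    # For s0=(3, 1, 4) and s1=(2, 4): the gradient of the first input is
    # reduced over axis 1, the gradient of the second over axis 0.
    print(broadcast_gradient_args((3, 1, 4), (2, 4)))  # ([1], [0])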

mlprodict.onnx_conv.onnx_ops.onnx_gradient_op.OnnxFusedMatMul#

alias of OnnxFusedMatMul_1

class mlprodict.onnx_conv.onnx_ops.onnx_gradient_op.OnnxFusedMatMul_1(X, Y, transA=0, transB=0, op_version=None, **kwargs)#

Bases: OnnxOperator

MatMul and Gemm without a C.

source on GitHub

Parameters:
  • X – first matrix

  • Y – second matrix

  • transA – transpose first matrix

  • transB – transpose second matrix

  • op_version – opset version

  • kwargs – additional parameters

source on GitHub

__init__(X, Y, transA=0, transB=0, op_version=None, **kwargs)#
Parameters:
  • X – first matrix

  • Y – second matrix

  • transA – transpose first matrix

  • transB – transpose second matrix

  • op_version – opset version

  • kwargs – additional parameters

source on GitHub
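A minimal NumPy sketch of the computation, assuming (as with batched matrix multiplication in ONNX Runtime) that the transpose flags apply to the last two axes of each operand:

    import numpy

    def fused_matmul(X, Y, transA=0, transB=0):
        # Optionally transpose the last two axes of each operand, then multiply.
        A = numpy.swapaxes(X, -1, -2) if transA else X
        B = numpy.swapaxes(Y, -1, -2) if transB else Y
        # Gemm without a C: a plain matrix product, no bias term added.
        return numpy.matmul(A, B)

    X = numpy.random.rand(4, 3).astype(numpy.float32)
    Y = numpy.random.rand(4, 5).astype(numpy.float32)
    print(fused_matmul(X, Y, transA=1).shape)  # (3, 5)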

mlprodict.onnx_conv.onnx_ops.onnx_gradient_op.OnnxSoftmaxGrad#

alias of OnnxSoftmaxGrad_13

class mlprodict.onnx_conv.onnx_ops.onnx_gradient_op.OnnxSoftmaxGrad_13(grad, prob, op_version=None, **kwargs)#

Bases: OnnxOperator

Gradient of Softmax. SoftmaxGrad computes Y * (dY - ReduceSum(Y * dY)). ONNX does not have a dot product, which can be simulated as a pointwise multiplication (“Mul”) followed by a “ReduceSum”. Unfortunately, the treatment of “axis” is different in “SoftmaxGrad” and “ReduceSum”. If axis=k for SoftmaxGrad, we need to specify [k, …, n-1] as the axes of reduction for “ReduceSum”, after accounting for negative-axis specification. An alternative solution would be to Flatten inputs to 2D and then reshape the output back to the original shape. Hopefully, many of these ops can be optimized away in the common case of statically-known shapes.

source on GitHub

Parameters:
  • grad – gradient

  • prob – probabilities

  • op_version – opset version

  • kwargs – additional parameters

source on GitHub

__init__(grad, prob, op_version=None, **kwargs)#
Parameters:
  • grad – gradient

  • prob – probabilities

  • op_version – opset version

  • kwargs – additional parameters

source on GitHub
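The formula is easy to check in NumPy. The sketch below assumes the common case of a softmax computed over the last axis only; as noted above, an axis k in the middle would require reducing over [k, …, n-1] instead.

    import numpy

    def softmax_grad(dY, Y, axis=-1):
        # Y * (dY - ReduceSum(Y * dY)), reducing along the softmax axis with
        # keepdims=True so the subtraction broadcasts back over that axis.
        s = numpy.sum(Y * dY, axis=axis, keepdims=True)
        return Y * (dY - s)

    # Sanity check: probabilities sum to 1, so a constant upstream gradient
    # produces a zero gradient through the softmax.
    Y = numpy.array([[0.2, 0.3, 0.5]])
    dY = numpy.ones_like(Y)
    print(softmax_grad(dY, Y))  # [[0. 0. 0.]]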

mlprodict.onnx_conv.onnx_ops.onnx_gradient_op.OnnxYieldOp#

alias of OnnxYieldOp_1

class mlprodict.onnx_conv.onnx_ops.onnx_gradient_op.OnnxYieldOp_1(X, non_differentiable_outputs=None, full_shape_outputs=None, op_version=None, **kwargs)#

Bases: OnnxOperator

Defines a custom operator for YieldOp.

source on GitHub

Parameters:
  • X – array or OnnxOperatorMixin

  • non_differentiable_outputs – the indices of the module outputs that don’t have a gradient.

  • full_shape_outputs – the indices of the module outputs that must have full shape.

  • op_version – opset version

  • kwargs – additional parameters

source on GitHub

__init__(X, non_differentiable_outputs=None, full_shape_outputs=None, op_version=None, **kwargs)#
Parameters:
  • X – array or OnnxOperatorMixin

  • non_differentiable_outputs – the indices of the module outputs that don’t have a gradient.

  • full_shape_outputs – the indices of the module outputs that must have full shape.

  • op_version – opset version

  • kwargs – additional parameters

source on GitHub
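Like the other classes on this page, OnnxYieldOp_1 follows the OnnxOperator construction pattern, so building a node is a single constructor call. The snippet below is only a hypothetical sketch: output_names is the usual OnnxOperator keyword and is assumed here to be accepted through **kwargs.

    from mlprodict.onnx_conv.onnx_ops.onnx_gradient_op import OnnxYieldOp_1

    # Yield the tensor 'X' back to the caller of a training graph: no output
    # is marked non-differentiable, and output 0 must keep its full shape.
    node = OnnxYieldOp_1(
        'X',
        non_differentiable_outputs=[],
        full_shape_outputs=[0],
        output_names=['Y'])  # output_names: assumed OnnxOperator kwarg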