module onnxrt.ops_cpu.op_sum#

Inheritance diagram of mlprodict.onnxrt.ops_cpu.op_sum

Short summary#

module mlprodict.onnxrt.ops_cpu.op_sum

Runtime operator.


Classes#

Sum: Element-wise sum of each of the input tensors (with Numpy-style broadcasting support). All inputs and outputs …

Properties#

args_default: Returns the list of arguments as well as the list of parameters with the default values (close to the signature). …

args_default_modified: Returns the list of modified parameters.

args_mandatory: Returns the list of mandatory arguments.

args_optional: Returns the list of optional arguments.

atts_value: Returns all parameters in a dictionary.

Methods#

__init__

_run: Should be overwritten.

to_python: Returns a python code equivalent to this operator.

Documentation#

Runtime operator.


class mlprodict.onnxrt.ops_cpu.op_sum.Sum(onnx_node, desc=None, **options)#

Bases: OpRun

Element-wise sum of each of the input tensors (with Numpy-style broadcasting support). All inputs and outputs must have the same data type. This operator supports multidirectional (i.e., Numpy-style) broadcasting; for more details please check Broadcasting in ONNX.
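
To make the multidirectional broadcasting concrete, here is a small numpy-only sketch of the same semantics (an illustration, not the runtime's source): inputs with shapes (2, 1) and (1, 3) broadcast to a common (2, 3) output.

    import numpy

    # Sum with Numpy-style broadcasting: (2, 1) and (1, 3) inputs
    # broadcast to a common (2, 3) shape before being added.
    a = numpy.array([[1.], [2.]], dtype=numpy.float32)       # shape (2, 1)
    b = numpy.array([[10., 20., 30.]], dtype=numpy.float32)  # shape (1, 3)

    # Equivalent of Sum(a, b): same data type in and out,
    # the output takes the broadcast shape (2, 3).
    result = a + b
    print(result)   # [[11. 21. 31.]
                    #  [12. 22. 32.]]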

Inputs

Between 1 and 2147483647 inputs.

  • data_0 (variadic, heterogeneous) T: List of tensors for sum.

Outputs

  • sum (heterogeneous) T: Output tensor.

Type Constraints

  • T: tensor(float16), tensor(float), tensor(double), tensor(bfloat16). Constrain input and output types to float tensors.

Version

Onnx name: Sum

This version of the operator has been available since version 13.

Runtime implementation: Sum

__init__(onnx_node, desc=None, **options)#
_run(*args, attributes=None, verbose=0, fLOG=None)#

Should be overwritten.

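The docstring above comes from the base class; the Sum override essentially adds its inputs together. A minimal sketch of such an implementation, written here as a standalone function and assuming numpy array inputs (an illustration, not the actual source):

    import numpy

    def _run_sum(*args):
        # Element-wise sum of all input tensors; numpy handles the
        # Numpy-style broadcasting between differently shaped inputs.
        result = args[0]
        for tensor in args[1:]:
            result = result + tensor
        # OpRun._run implementations return a tuple of outputs.
        return (result, )

    out, = _run_sum(numpy.array([1., 2.]), numpy.array([3., 4.]))
    print(out)  # [4. 6.]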

to_python(inputs)#

Returns a python code equivalent to this operator.

Parameters:

inputs – input names

Returns:

imports, python code, both as strings

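For completeness, a hedged end-to-end sketch of how this runtime operator is typically reached: build a one-node ONNX graph with the standard onnx helpers and run it with mlprodict's Python runtime (OnnxInference). The tensor names 'X', 'Y', 'Z' and the graph name are arbitrary, and depending on the installed onnx/mlprodict versions an explicit opset may need to be passed to make_model.

    import numpy
    from onnx import TensorProto
    from onnx.helper import (
        make_node, make_graph, make_model, make_tensor_value_info)
    from mlprodict.onnxrt import OnnxInference

    # One-node graph: Z = Sum(X, Y).
    X = make_tensor_value_info('X', TensorProto.FLOAT, [None, 2])
    Y = make_tensor_value_info('Y', TensorProto.FLOAT, [None, 2])
    Z = make_tensor_value_info('Z', TensorProto.FLOAT, [None, 2])
    model = make_model(make_graph(
        [make_node('Sum', ['X', 'Y'], ['Z'])], 'sum_example', [X, Y], [Z]))

    # The python runtime maps the Sum node onto
    # mlprodict.onnxrt.ops_cpu.op_sum.Sum and calls its _run method.
    oinf = OnnxInference(model, runtime='python')
    res = oinf.run({'X': numpy.array([[1., 2.]], dtype=numpy.float32),
                    'Y': numpy.array([[3., 4.]], dtype=numpy.float32)})
    print(res['Z'])  # [[4. 6.]]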