Classes#

Summary#

class

class parent

truncated documentation

Abs

Abs === Absolute takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the absolute is, y = …

Acos

Acos ==== Calculates the arccosine (inverse of cosine) of the given input tensor, element-wise. Inputs

Acosh

Acosh ===== Calculates the hyperbolic arccosine of the given input tensor element-wise. Inputs

Adagrad

Adagrad (ai.onnx.preview.training) ================================== Compute one iteration of ADAGRAD, a stochastic gradient …

Adam

Adam (ai.onnx.preview.training) =============================== Compute one iteration of Adam, a stochastic gradient based …

Add

Add === Performs element-wise binary addition (with Numpy-style broadcasting support). This operator supports **multidirectional …

AdjacencyGraphDisplay

Structure which contains the necessary information to display a graph using an adjacency matrix.

And

And === Returns the tensor resulting from performing the and logical operation elementwise on the input tensors A and …

ArgMax_11

ArgMax_12

ArgMax ====== Computes the indices of the max elements of the input tensor’s element along the provided axis. The resulting …

ArgMax_12

ArgMax ====== Computes the indices of the max elements of the input tensor’s element along the provided axis. The resulting …

ArgMin_11

ArgMin_12

ArgMin ====== Computes the indices of the min elements of the input tensor’s element along the provided axis. The resulting …

ArgMin_12

ArgMin ====== Computes the indices of the min elements of the input tensor’s element along the provided axis. The resulting …

ArrayFeatureExtractor

ArrayFeatureExtractor (ai.onnx.ml) ================================== Select elements of the input tensor based on the …

ArrayZipMapDictionary

Mocks an array without changing the data it receives. The notebook Time processing for every ONNX nodes in a graph illustrates the weaknesses …

Asin

Asin ==== Calculates the arcsine (inverse of sine) of the given input tensor, element-wise. Inputs

Asinh

Asinh ===== Calculates the hyperbolic arcsine of the given input tensor element-wise. Inputs

Atan

Atan ==== Calculates the arctangent (inverse of tangent) of the given input tensor, element-wise. Inputs

Atanh

Atanh ===== Calculates the hyperbolic arctangent of the given input tensor element-wise. Inputs

AttributeGraph

Class wrapping a function to make it simple as a parameter.

AutoAction

Extends the API to automatically look for exporters.

AutoType

Extends the API to automatically look for exporters.

AveragePool

AveragePool =========== AveragePool consumes an input tensor X and applies average pooling across the tensor according …

BatchNormalization_14

BatchNormalization ================== Carries out batch normalization as described in the paper https://arxiv.org/abs/1502.03167. …

BatchNormalization_14

BatchNormalization ================== Carries out batch normalization as described in the paper https://arxiv.org/abs/1502.03167. …

BatchNormalization_9

Bernoulli

Bernoulli ========= Draws binary random numbers (0 or 1) from a Bernoulli distribution. The input tensor should be a tensor …

BiGraph

BiGraph representation.

Binarizer

Binarizer (ai.onnx.ml) ====================== Maps the values of the input tensor to either 0 or 1, element-wise, based …

BitShift

BitShift ======== Bitwise shift operator performs element-wise operation. For each input element, if the attribute “direction” …

BlackmanWindow

Returns \omega_n = 0.42 - 0.5 \cos \left( \frac{2\pi n}{N-1} \right) + 0.08 \cos \left( \frac{4\pi n}{N-1} \right)
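
A minimal numpy sketch of the quoted formula (illustration only, not the runtime class itself):

```python
import numpy

def blackman_window(N):
    # w_n = 0.42 - 0.5*cos(2*pi*n/(N-1)) + 0.08*cos(4*pi*n/(N-1)), n = 0..N-1
    n = numpy.arange(N, dtype=numpy.float64)
    return (0.42
            - 0.5 * numpy.cos(2 * numpy.pi * n / (N - 1))
            + 0.08 * numpy.cos(4 * numpy.pi * n / (N - 1)))
```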

BroadcastGradientArgs

BroadcastGradientArgs (mlprodict) ================================= Version Onnx name: BroadcastGradientArgs

BroadcastGradientArgsSchema

Defines a schema for operators added in this package such as BroadcastGradientArgs.

CDist

CDist (mlprodict) ================= Version Onnx name: CDist

CDistSchema

Defines a schema for operators added in this package such as TreeEnsembleClassifierDouble.

CachedEinsum

Stores all the necessary information to cache the preprocessing of an einsum equation.

Cast

Cast ==== The operator casts the elements of a given input tensor to a data type specified by the ‘to’ argument and returns …

CastLike

CastLike ======== The operator casts the elements of a given input tensor (the first input) to the same data type as the …

CategoryMapper

CategoryMapper (ai.onnx.ml) =========================== Converts strings to integers and vice versa. Two sequences of …

Ceil

Ceil ==== Ceil takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the ceil is, y = ceil(x), …

Celu

Celu ==== Continuously Differentiable Exponential Linear Units: Perform the linear unit element-wise on the input tensor …

CheckerContext

Class hosting information about a graph.

CheckerContextDefaultRegistry

Registry.

Clip_11

Clip ==== Clip operator limits the given input within an interval. The interval is specified by the inputs ‘min’ and ‘max’. …

Clip_11

Clip ==== Clip operator limits the given input within an interval. The interval is specified by the inputs ‘min’ and ‘max’. …

Clip_6

CodeNodeVisitor

Defines a visitor which walks through the syntax tree of the code.

CodeNodeVisitor

Visits the code, implements verification rules.

CodeTranslator

Class which converts a Python function into something else. It must implement methods visit and depart.

CommonExpand

CommonGRU

CommonLSTM

CommonRNN

CommonReshape

CommonSplit

Runtime for operator Split.

CompilationError

Raised when a compilation error was detected.

ComplexAbs

ComplexAbs (mlprodict) ====================== Version Onnx name: ComplexAbs

ComplexAbsSchema

Defines a schema for operators added in this package such as ComplexAbs.

Compress

Compress ======== Selects slices from an input tensor along a given axis where condition evaluates to True for each axis …

Concat

Concat ====== Concatenate a list of tensors into a single tensor. All input tensors must have the same shape, except for …

ConcatFromSequence

ConcatFromSequence ================== Concatenate a sequence of tensors into a single tensor. All input tensors must have …

ConstantOfShape

ConstantOfShape =============== Generate a tensor with given value and shape. Attributes

Constant_11

Constant_12

Constant ======== This operator produces a constant tensor. Exactly one of the provided attributes, either value, sparse_value, …

Constant_12

Constant ======== This operator produces a constant tensor. Exactly one of the provided attributes, either value, sparse_value, …

Constant_9

Conv

Conv ==== The convolution operator consumes an input tensor and a filter, and computes the output. Attributes

Conv

C++ implementation of operator Conv for ReferenceEvaluator. See following example.

ConvDouble

Implements double runtime for operator Conv. The code is inspired from conv.cc

ConvFloat

Implements float runtime for operator Conv. The code is inspired from conv.cc

ConvTranspose

ConvTranspose ============= The convolution transpose operator consumes an input tensor and a filter, and computes the …

ConvTransposeDouble

Implements double runtime for operator ConvTranspose. The code is inspired from conv_transpose.cc

ConvTransposeFloat

Implements float runtime for operator ConvTranspose. The code is inspired from conv_transpose.cc

Cos

Cos === Calculates the cosine of the given input tensor, element-wise. Inputs

Cosh

Cosh ==== Calculates the hyperbolic cosine of the given input tensor element-wise. Inputs

CumSum

CumSum ====== Performs cumulative sum of the input elements along the given axis. By default, it will do the sum inclusively …

CustomScorerTransform

Wraps a scoring function into a transformer. Function register_scorers must be called to register the converter …

DEBUG

DEBUG (mlprodict) ================= Version Onnx name: DEBUG

DEBUGSchema

Defines a schema for operators added in this package such as Solve.

DFT

DFT === Computes the discrete Fourier transform of input. Attributes

DefaultNone

Default value for parameters when the parameter is not set but the operator has a default behaviour for it.

DepthToSpace

DepthToSpace ============ DepthToSpace rearranges (permutes) data from depth into blocks of spatial data. This is the reverse …

DequantizeLinear

DequantizeLinear ================ The linear dequantization operator. It consumes a quantized tensor, a scale, and a zero …

Det

Det === Det calculates determinant of a square matrix or batches of square matrices. Det takes one input tensor of shape …

DetectedVariable

Wrapper around a Variable to detect inputs and outputs of a graph.

DictVectorizer

DictVectorizer (ai.onnx.ml) =========================== Uses an index mapping to convert a dictionary to an array. Given …

Div

Div === Performs element-wise binary division (with Numpy-style broadcasting support). This operator supports **multidirectional …

DropoutBase

Dropout_12

Dropout ======= Dropout takes an input floating-point tensor, an optional input ratio (floating-point scalar) and an optional …

Dropout_12

Dropout ======= Dropout takes an input floating-point tensor, an optional input ratio (floating-point scalar) and an optional …

Dropout_7

DynamicQuantizeLinear

DynamicQuantizeLinear ===================== A Function to fuse calculation for Scale, Zero Point and FP32->8Bit conversion …

Einsum

Einsum ====== An einsum of the form term1, term2 -> output-term produces an output tensor using the following equation …
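
The equation form "term1, term2 -> output-term" can be checked against numpy.einsum; a small sketch (not the runtime class itself):

```python
import numpy

a = numpy.random.rand(2, 3)
b = numpy.random.rand(3, 4)
# "ij,jk->ik" is an einsum of the form term1, term2 -> output-term:
# output[i, k] = sum over j of a[i, j] * b[j, k], i.e. a matrix product here.
c = numpy.einsum("ij,jk->ik", a, b)
assert numpy.allclose(c, a @ b)
```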

EinsumSubOp

Defines a sub operation used in Einsum decomposition.

Elu

Elu === Elu takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the function f(x) = alpha * (exp(x) - 1.) for x < 0, …

Equal

Equal ===== Returns the tensor resulting from performing the equal logical operation elementwise on the input tensors …

Erf

Erf === Computes the error function of the given input tensor element-wise. Inputs

ExistingVariable

Temporary name.

Exp

Exp === Calculates the exponential of the given input tensor, element-wise. Inputs

Expand_13

Expand ====== Broadcast the input tensor following the given shape and the broadcast rule. The broadcast rule is similar …

Expand_13

Expand ====== Broadcast the input tensor following the given shape and the broadcast rule. The broadcast rule is similar …

ExpectedAssertionError

Expected failure.

Expression

Expression (mlprodict) ====================== Version Onnx name: Expression

ExpressionSchema

Defines a schema for operators added in this package such as ComplexAbs.

EyeLike

EyeLike ======= Generate a 2D tensor (matrix) with ones on the diagonal and zeros everywhere else. Only 2D tensors are …

FFT

FFT (mlprodict) =============== Version Onnx name: FFT

FFT2D

FFT2D (mlprodict) ================= Version Onnx name: FFT2D

FFT2DSchema

Defines a schema for operators added in this package such as FFT.

FFTSchema

Defines a schema for operators added in this package such as FFT.

FctVersion

Identifies a version of a function based on its arguments and its parameters.

FeatureVectorizer

Very similar to Concat.

Flatten

Flatten ======= Flattens the input tensor into a 2D matrix. If input tensor has shape (d_0, d_1, …

Float32InfError

Raised when a float is out of range and cannot be converted into a float32.

Floor

Floor ===== Floor takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the floor is, y = floor(x), …

FusedMatMul

FusedMatMul (mlprodict) ======================= Version Onnx name: FusedMatMul

FusedMatMulSchema

Defines a schema for operators added in this package such as FusedMatMul.

GPT2TokenizerTransformer

Wraps GPT2Tokenizer

GRU

GRU === Computes a one-layer GRU. This operator is usually supported via some custom implementation such as CuDNN. Notations: …

Gather

Gather ====== Given data tensor of rank r >= 1, and indices tensor of rank q, gather entries of the axis dimension …

GatherDouble

Implements runtime for operator Gather. The code is inspired from tfidfvectorizer.cc

GatherElements

GatherElements ============== GatherElements takes two inputs data and indices of the same rank r >= 1 and an optional …

GatherFloat

Implements runtime for operator Gather. The code is inspired from tfidfvectorizer.cc

GatherInt64

Implements runtime for operator Gather. The code is inspired from tfidfvectorizer.cc

GatherND

Python runtime for function GatherND.

Gemm

Gemm ==== General Matrix multiplication: https://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms#Level_3 A’ = transpose(A) …
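
A plain numpy sketch of the operator's well-known definition, Y = alpha * A' * B' + beta * C with A' = transpose(A) when transA is set (default attribute values assumed):

```python
import numpy

def gemm(A, B, C=0.0, alpha=1.0, beta=1.0, transA=0, transB=0):
    # A' = transpose(A) if transA else A, same for B'.
    Ap = A.T if transA else A
    Bp = B.T if transB else B
    return alpha * (Ap @ Bp) + beta * C
```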

GlobalAveragePool

GlobalAveragePool ================= GlobalAveragePool consumes an input tensor X and applies average pooling across the …

GlobalMaxPool

GlobalMaxPool ============= GlobalMaxPool consumes an input tensor X and applies max pooling across the values in the …

GraphBuilder

Helpers to build graph.

GraphEinsumSubOp

Class gathering all nodes produced to explicit einsum operators.

Greater

Greater ======= Returns the tensor resulting from performing the greater logical operation elementwise on the input tensors …

GreaterOrEqual

GreaterOrEqual ============== Returns the tensor resulting from performing the greater_equal logical operation elementwise …

GridSample

GridSample ========== Given an input X and a flow-field grid, computes the output Y using X values and pixel locations …

GridSampleDouble

Implements double runtime for operator GridSample. The code is inspired from pool.cc

GridSampleFloat

Implements float runtime for operator GridSample. The code is inspired from pool.cc

HammingWindow

Returns \omega_n = \alpha - \beta \cos \left( \frac{\pi n}{N-1} \right) where N is the window length. …

HannWindow

Returns \omega_n = \sin^2\left( \frac{\pi n}{N-1} \right) where N is the window length. See hann_window

HardSigmoid

HardSigmoid =========== HardSigmoid takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the …

Hardmax

Hardmax ======= The operator computes the hardmax values for the given input: Hardmax(element in input, axis) = 1 if …

Identity

Identity ======== Identity operator Inputs

If

If == If conditional Attributes

ImperfectPythonCode

Raised if the code shows errors.

Imputer

Imputer (ai.onnx.ml) ==================== Replaces inputs that equal one value with another, leaving all other elements …

InferenceSession

Wrappers around InferenceSession from onnxruntime.

InferenceSession2

Overwrites class InferenceSession to capture the standard output and error.

InputDetectedVariable

Instance of DetectedVariable. Only for inputs.

Inverse

Inverse (mlprodict) =================== Version Onnx name: Inverse

InverseSchema

Defines a schema for operators added in this package such as Inverse.

IsInf

IsInf ===== Map infinity to true and other values to false. Attributes

IsNaN

IsNaN ===== Returns which elements of the input are NaN. Inputs

LRN

LRN === Local Response Normalization proposed in the [AlexNet paper](https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf). …

LSTM

LSTM ==== Computes a one-layer LSTM. This operator is usually supported via some custom implementation such as CuDNN. …

LabelEncoder

LabelEncoder (ai.onnx.ml) ========================= Maps each element in the input tensor to another value. The mapping …

LayerNormalization

LayerNormalization ================== This is layer normalization defined in ONNX as function. The overall computation …

LeakyRelu

LeakyRelu ========= LeakyRelu takes input data (Tensor<T>) and an argument alpha, and produces one output data (Tensor<T>) …

Less

Less ==== Returns the tensor resulting from performing the less logical operation elementwise on the input tensors A

LessOrEqual

LessOrEqual =========== Returns the tensor resulting from performing the less_equal logical operation elementwise on …

LexicalScopeContext

Construct an instance with the lexical scope from the parent graph to allow lookup of names from that scope via this_or_ancestor_graph_has. …

LinearClassifier

LinearClassifier (ai.onnx.ml) ============================= Linear classifier Attributes

LinearRegressor

LinearRegressor (ai.onnx.ml) ============================ Generalized linear regression evaluation. If targets is set …

Log

Log === Calculates the natural log of the given input tensor, element-wise. Inputs

LogSoftmax

LogSoftmax ========== The operator computes the log of softmax values for the given input: LogSoftmax(input, axis) = …

Loop

Loop ==== Generic Looping construct. This loop has multiple termination conditions: 1) Trip count. Iteration count specified …

LpNormalization

LpNormalization =============== Given a matrix, apply Lp-normalization along the provided axis. Attributes

MLAction

Base class for every action.

MLActionAdd

Addition

MLActionBinary

Any binary operation.

MLActionCast

Cast into another type.

MLActionConcat

Concatenates a number of arrays into an array.

MLActionCst

Constant

MLActionFunction

A function.

MLActionFunctionCall

Any function call.

MLActionIfElse

If-else condition.

MLActionReturn

Returns a result.

MLActionSign

Sign of an expression: 1=positive, 0=negative.

MLActionTensorAdd

Tensor addition.

MLActionTensorDiv

Tensor division.

MLActionTensorDot

Scalar product.

MLActionTensorMul

Tensor multiplication.

MLActionTensorSub

Tensor subtraction.

MLActionTensorTake

Extracts an element of the tensor.

MLActionTensorVector

Tensor operation.

MLActionTestEqual

Operator ==.

MLActionTestInf

Operator <.

MLActionUnary

Any unary operation.

MLActionVar

Variable. The constant is only needed to guess the variable type.

MLModel

Base class for every machine learned model

MLNumType

Base class for numerical types.

MLNumTypeBool

A numpy.bool.

MLNumTypeFloat32

A numpy.float32.

MLNumTypeFloat64

A numpy.float64.

MLNumTypeInt32

A numpy.int32.

MLNumTypeInt64

A numpy.int64.

MLNumTypeSingle

int32 or float32

MLTensor

Defines a tensor with a dimension and a single type for what it contains.

MLType

Base class for every type.

MatMul

MatMul ====== Matrix product that behaves like numpy.matmul: https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.matmul.html

Max

Max === Element-wise max of each of the input tensors (with Numpy-style broadcasting support). All inputs and outputs …

MaxPool

MaxPool ======= MaxPool consumes an input tensor X and applies max pooling across the tensor according to kernel sizes, …

MaxPoolDouble

Implements double runtime for operator MaxPool. The code is inspired from pool.cc

MaxPoolFloat

Implements float runtime for operator MaxPool. The code is inspired from pool.cc

Mean

Mean ==== Element-wise mean of each of the input tensors (with Numpy-style broadcasting support). All inputs and outputs …

Min

Min === Element-wise min of each of the input tensors (with Numpy-style broadcasting support). All inputs and outputs …

MissingInputError

Raised when an input is missing.

MissingOperatorError

Missing operator.

MissingVariableError

Raised when a variable is missing.

MockVariableName

A string.

MockVariableNameShape

A string and a shape.

MockVariableNameShapeType

A string and a shape and a type.

MockWrappedLightGbmBoosterClassifier

Mocked lightgbm.

Mod

Mod === Performs element-wise binary modulus (with Numpy-style broadcasting support). The sign of the remainder is the …

Momentum

Momentum (ai.onnx.preview.training) =================================== Compute one iteration of stochastic gradient update …

Mul

Mul === Performs element-wise binary multiplication (with Numpy-style broadcasting support). This operator supports **multidirectional …

MultiOnnxVar

Class used to return multiple OnnxVar at the same time.

MurmurHash3

MurmurHash3 (mlprodict) ======================= Version Onnx name: MurmurHash3

MurmurHash3Schema

Defines a schema for operators added in this package such as MurmurHash3.

NDArray

Used to annotate ONNX numpy functions.
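
A hedged sketch of an ONNX numpy function annotated with NDArray, assuming the decorator onnxnumpy_default and the numpy_onnx_impl module exposed by mlprodict.npy (names may differ across versions):

```python
from typing import Any
import numpy
from mlprodict.npy import onnxnumpy_default, NDArray   # assumed entry points
import mlprodict.npy.numpy_onnx_impl as npnx

@onnxnumpy_default
def onnx_log1p(x: NDArray[Any, numpy.float32]) -> NDArray[Any, numpy.float32]:
    "log(1 + x) written with the ONNX-backed numpy-like API."
    return npnx.log(x + numpy.float32(1))

print(onnx_log1p(numpy.array([0.0, 1.0], dtype=numpy.float32)))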

NDArraySameType

Shortcut to simplify signature description.

NDArraySameTypeSameShape

Shortcut to simplify signature description.

NDArrayType

Shortcut to simplify signature description.

NDArrayTypeSameShape

Shortcut to simplify signature description.

Neg

Neg === Neg takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where each element flipped sign, …

NegativeLogLikelihoodLoss

Python runtime for function NegativeLogLikelihoodLoss.

NewOperatorSchema

Defines a schema for operators added in this package such as TreeEnsembleRegressorDouble.

NodeResultName

Defines a result name for a node.

NonMaxSuppression

NonMaxSuppression ================= Filter out boxes that have high intersection-over-union (IOU) overlap with previously …

NonZero

NonZero ======= Returns the indices of the elements that are non-zero (in row-major order - by dimension). NonZero behaves …

Normalizer

Normalizer (ai.onnx.ml) ======================= Normalize the input. There are three normalization modes, which have …

Not

Not === Returns the negation of the input tensor element-wise. Inputs

NotImplementedShapeInferenceError

Shape Inference can be implemented but is currently not.

NumpyCode

Converts an ONNX operators into numpy code.

OneHot

OneHot ====== Produces a one-hot tensor based on inputs. The locations represented by the index values in the ‘indices’ …

OneHotEncoder

The ONNX specification does not mention the possibility of changing the output type: sparse, dense, float, double. …

OnnxBackendAssertionError

Expected failure.

OnnxBackendMissingNewOnnxOperatorException

Raised when onnxruntime or mlprodict does not implement a new operator defined in the latest onnx. …

OnnxBackendTest

Definition of a backend test. It starts with a folder; this folder must contain one onnx file, then a subfolder …

OnnxBroadcastGradientArgs_1

Defines a custom operator for BroadcastGradientArgs. Returns the reduction axes for computing gradients of s0 op s1 …

OnnxBroadcastGradientArgs_1

Defines a custom operator for BroadcastGradientArgs. Returns the reduction axes for computing gradients of s0 op s1 …

OnnxCheckError

Raised when a model fails check.

OnnxComplexAbs_1

Defines a custom operator for ComplexAbs.

OnnxComplexAbs_1

Defines a custom operator for ComplexAbs.

OnnxExisting

Wrapper around OnnxIdentity to specify this operator is not part of the subgraph it is used in.

OnnxFFT2D_1

Defines a custom operator for FFT2D.

OnnxFFT2D_1

Defines a custom operator for FFT2D.

OnnxFFT_1

Defines a custom operator for FFT.

OnnxFFT_1

Defines a custom operator for FFT.

OnnxFusedMatMul_1

MatMul and Gemm without a C.

OnnxFusedMatMul_1

MatMul and Gemm without a C.

OnnxInference

Loads an ONNX file or object or stream. Computes the output of the ONNX graph. Several runtimes …
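
A minimal usage sketch, assuming the constructor accepts an onnx.ModelProto and run takes a dictionary of named inputs (the "python" runtime is assumed to be available):

```python
import numpy
from onnx import TensorProto, helper
from mlprodict.onnxrt import OnnxInference

# A tiny graph computing Y = X + X.
X = helper.make_tensor_value_info("X", TensorProto.FLOAT, [None, 2])
Y = helper.make_tensor_value_info("Y", TensorProto.FLOAT, [None, 2])
node = helper.make_node("Add", ["X", "X"], ["Y"])
model = helper.make_model(helper.make_graph([node], "tiny", [X], [Y]),
                          opset_imports=[helper.make_opsetid("", 13)])

oinf = OnnxInference(model, runtime="python")
print(oinf.run({"X": numpy.ones((3, 2), dtype=numpy.float32)}))
```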

OnnxInference2

onnxruntime API

OnnxInferenceBackend

ONNX backend following the pattern from onnx/backend/base.py. …

OnnxInferenceBackendMicro

Same backend as OnnxInferenceBackend but runtime is OnnxMicroRuntime.

OnnxInferenceBackendOrt

Same backend as OnnxInferenceBackend but runtime is onnxruntime1.

OnnxInferenceBackendPyC

Same backend as OnnxInferenceBackend but runtime is python_compiled.

OnnxInferenceBackendPyEval

Same backend as OnnxInferenceBackend but runtime is OnnxShapeInference.

OnnxInferenceBackendRep

Computes the prediction for an ONNX graph loaded with OnnxInference.

OnnxInferenceBackendShape

Same backend as OnnxInferenceBackend but runtime is OnnxShapeInference.

OnnxInferenceExport

Implements methods to export an instance of OnnxInference into json, dot, text, python. …

OnnxInferenceNode

A node to execute.

OnnxKind

Describes a result type.

OnnxLoadFactory

Automatically creating all operators from onnx packages takes time. That’s why function loadop only creates …

OnnxMicroRuntime

Implements a micro runtime for ONNX graphs. It does not implement all the operator types.

OnnxNotebook

Defines magic commands to help with notebooks

OnnxNumpyCompiler

Implements a class which runs an onnx graph.

OnnxNumpyFunction

Class wrapping a function built with OnnxNumpyCompiler.

OnnxNumpyFunctionInferenceSession

Overwrites OnnxNumpyFunction to run an instance of InferenceSession from onnxruntime.

OnnxNumpyFunctionOnnxInference

Overwrites OnnxNumpyFunction to run an instance of OnnxInference.

OnnxOperator

Ancestor to every ONNX operator exposed in mlprodict.npy.xops and mlprodict.npy.xops_ml.

OnnxOperatorBase

Base class for OnnxOperator, OnnxOperatorItem, OnnxOperatorTuple.

OnnxOperatorFunction

This operator is used to insert an existing ONNX function into the ONNX graph being built.

OnnxOperatorItem

Accessor to one of the output returned by a OnnxOperator.

OnnxOperatorTuple

Class used to return multiple OnnxVar at the same time.

OnnxPipeline

The pipeline overwrites method fit; it trains and converts every step into ONNX before training the next step …

OnnxRFFT_1

Defines a custom operator for FFT.

OnnxRFFT_1

Defines a custom operator for FFT.

OnnxRuntimeMissingNewOnnxOperatorException

Raised when a new operator was added but cannot be found.

OnnxShapeInference

Implements a micro runtime for ONNX graphs. It does not implement all the operator types.

OnnxSoftmaxGrad_13

Gradient of Softmax. SoftmaxGrad computes Y * ( dY - ReduceSum(Y * dY)). ONNX does not have a dot product, …

OnnxSoftmaxGrad_13

Gradient of Softmax. SoftmaxGrad computes Y * ( dY - ReduceSum(Y * dY)). ONNX does not have a dot product, …

OnnxSpeedupClassifier

Trains with scikit-learn, transform with ONNX.

OnnxSpeedupCluster

Trains with scikit-learn, transform with ONNX.

OnnxSpeedupRegressor

Trains with scikit-learn, transform with ONNX.

OnnxSpeedupTransformer

Trains with scikit-learn, transform with ONNX.

OnnxSubEstimator

This operator is used to call the converter of a model to insert the node coming from the conversion into a bigger …

OnnxSubOnnx

This operator is used to insert an existing ONNX graph into the ONNX graph being built.

OnnxTokenizer_1

Defines a custom operator not defined by ONNX specifications but in onnxruntime.

OnnxTokenizer_1

Defines a custom operator not defined by ONNX specifications but in onnxruntime.

OnnxTransformer

Calls onnxruntime or the runtime implemented in this package to transform input based on an ONNX graph. It …
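
A hedged scikit-learn-style sketch, assuming OnnxTransformer is exposed in mlprodict.sklapi and receives the serialized model through a parameter named onnx_bytes:

```python
import numpy
from onnx import TensorProto, helper
from mlprodict.sklapi import OnnxTransformer   # assumed location

# Tiny graph computing Y = |X|.
X = helper.make_tensor_value_info("X", TensorProto.FLOAT, [None, 2])
Y = helper.make_tensor_value_info("Y", TensorProto.FLOAT, [None, 2])
graph = helper.make_graph([helper.make_node("Abs", ["X"], ["Y"])], "g", [X], [Y])
onx_bytes = helper.make_model(
    graph, opset_imports=[helper.make_opsetid("", 13)]).SerializeToString()

tr = OnnxTransformer(onnx_bytes=onx_bytes)
tr.fit()                                       # nothing to train, loads the graph
print(tr.transform(numpy.array([[-1.0, 2.0]], dtype=numpy.float32)))
```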

OnnxTranslator

Class which converts a Python function into an ONNX function. It must implement methods visit and depart. …

OnnxVar

Variables used in onnx computation.

OnnxVarGraph

Overloads OnnxVar to handle graph attribute.

OnnxWholeSession

Runs the prediction for a single ONNX graph; it lets the runtime handle the graph logic as well.

OnnxYieldOp_1

Defines a custom operator for YieldOp.

OnnxYieldOp_1

Defines a custom operator for YieldOp.

OpFunction

Runs a custom function.

OpRun

Ancestor to all operators in this subfolder. The runtime for every node can be checked against ONNX unit tests. …

OpRunArg

Ancestor to all unary operators in this subfolder which produce the position of extrema (ArgMax, …). Checks …

OpRunBinary

Ancestor to all binary operators in this subfolder. Checks that inputs type are the same.

OpRunBinaryComparison

Ancestor to all binary operators in this subfolder comparing tensors.

OpRunBinaryNum

Ancestor to all binary operators in this subfolder. Checks that inputs type are the same.

OpRunBinaryNumpy

Implements the inplace logic. numpy_fct is a binary numpy function which takes two matrices and has an argument …

OpRunClassifierProb

Ancestor to all binary operators in this subfolder. Checks that inputs type are the same.

OpRunCustom

Automates some methods for custom operators defined outside mlprodict.

OpRunExtended

Base class to cache C++ implementation based on inputs.

OpRunOnnxEmpty

Unique operator for an empty runtime.

OpRunOnnxRuntime

Unique operator which calls onnxruntime to compute predictions for one operator.

OpRunReduceNumpy

Implements the reduce logic. It must have a parameter axes.

OpRunUnary

Ancestor to all unary operators in this subfolder. Checks that inputs type are the same.

OpRunUnaryNum

Ancestor to all unary and numerical operators in this subfolder. Checks that inputs type are the same.

OperatorSchema

Defines a schema for operators added in this package such as TreeEnsembleRegressorDouble.

OptionalGetElement

OptionalGetElement ================== If the input is a tensor or sequence type, it returns the input. If the input is …

OptionalHasElement

OptionalHasElement ================== Returns true if (1) the input is an optional-type and contains an element, or, (2) …

Or

Or == Returns the tensor resulting from performing the or logical operation elementwise on the input tensors A and …

OutputDetectedVariable

Instance of DetectedVariable. Only for outputs.

PRelu

PRelu ===== PRelu takes input data (Tensor<T>) and slope tensor as input, and produces one output data (Tensor<T>) where …

Pad_1

Pad_18

Pad === Given a tensor containing the data to be padded (data), a tensor containing the number of start and end pad …

Pad_18

Pad === Given a tensor containing the data to be padded (data), a tensor containing the number of start and end pad …

Pow

Pow === Pow takes input data (Tensor<T>) and exponent Tensor, and produces one output data (Tensor<T>) where the function …

QLinearConv

QLinearConv =========== The convolution operator consumes a quantized input tensor, its scale and zero point, a quantized …

QLinearConvInt8

Implements int8 runtime for operator QLinearConv. The code is inspired from qlinearconv.cc

QLinearConvUInt8

Implements uint8 runtime for operator QLinearConvUInt8. The code is inspired from qlinearconv.cc

QuantizeLinear

QuantizeLinear ============== The linear quantization operator. It consumes a high precision tensor, a scale, and a zero …

QuantizedBiasTensor

Instantiates a quantized tensor (uint8) with bias from a float tensor.

QuantizedTensor

Instantiates a quantized tensor (uint8) from a float tensor.

RFFT

RFFT (mlprodict) ================ Version Onnx name: RFFT

RFFTSchema

Defines a schema for operators added in this package such as FFT.

RNN_14

RNN === Computes a one-layer simple RNN. This operator is usually supported via some custom implementation such as CuDNN. …

RNN_14

RNN === Computes a one-layer simple RNN. This operator is usually supported via some custom implementation such as CuDNN. …

RNN_7

RandomNormal

RandomNormal ============ Generate a tensor with random values drawn from a normal distribution. The shape of the tensor …

RandomNormalLike

RandomNormalLike ================ Generate a tensor with random values drawn from a normal distribution. The shape of …

RandomUniform

RandomUniform ============= Generate a tensor with random values drawn from a uniform distribution. The shape of the tensor …

RandomUniformLike

RandomUniformLike ================= Generate a tensor with random values drawn from a uniform distribution. The shape …

Range

Range ===== Generate a tensor containing a sequence of numbers that begins at start and extends by increments of delta
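
For the common numeric case the semantics match numpy.arange; a short illustration:

```python
import numpy

start, limit, delta = numpy.float32(1), numpy.float32(10), numpy.float32(3)
# A sequence beginning at start and growing by increments of delta, up to limit.
print(numpy.arange(start, limit, delta))   # [1. 4. 7.]
```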

Reciprocal

Reciprocal ========== Reciprocal takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the reciprocal …

ReduceL1_1

ReduceL1_18

ReduceL1 ======== Computes the L1 norm of the input tensor’s element along the provided axes. The resulting tensor has …

ReduceL1_18

ReduceL1 ======== Computes the L1 norm of the input tensor’s element along the provided axes. The resulting tensor has …

ReduceL2_1

ReduceL2_18

ReduceL2 ======== Computes the L2 norm of the input tensor’s element along the provided axes. The resulting tensor has …

ReduceL2_18

ReduceL2 ======== Computes the L2 norm of the input tensor’s element along the provided axes. The resulting tensor has …

ReduceLogSumExp_1

ReduceLogSumExp_18

ReduceLogSumExp =============== Computes the log sum exponent of the input tensor’s element along the provided axes. The …

ReduceLogSumExp_18

ReduceLogSumExp =============== Computes the log sum exponent of the input tensor’s element along the provided axes. The …

ReduceLogSum_1

ReduceLogSum_18

ReduceLogSum ============ Computes the log sum of the input tensor’s element along the provided axes. The resulting tensor …

ReduceLogSum_18

ReduceLogSum ============ Computes the log sum of the input tensor’s element along the provided axes. The resulting tensor …

ReduceMax_1

ReduceMax_18

ReduceMax ========= Computes the max of the input tensor’s element along the provided axes. The resulting tensor has the …

ReduceMax_18

ReduceMax ========= Computes the max of the input tensor’s element along the provided axes. The resulting tensor has the …

ReduceMean_1

ReduceMean_18

ReduceMean ========== Computes the mean of the input tensor’s element along the provided axes. The resulting tensor has …

ReduceMean_18

ReduceMean ========== Computes the mean of the input tensor’s element along the provided axes. The resulting tensor has …

ReduceMin_1

ReduceMin_18

ReduceMin ========= Computes the min of the input tensor’s element along the provided axes. The resulting tensor has the …

ReduceMin_18

ReduceMin ========= Computes the min of the input tensor’s element along the provided axes. The resulting tensor has the …

ReduceProd_1

ReduceProd_18

ReduceProd ========== Computes the product of the input tensor’s element along the provided axes. The resulting tensor …

ReduceProd_18

ReduceProd ========== Computes the product of the input tensor’s element along the provided axes. The resulting tensor …

ReduceSumSquare_1

ReduceSumSquare_18

ReduceSumSquare =============== Computes the sum square of the input tensor’s element along the provided axes. The resulting …

ReduceSumSquare_18

ReduceSumSquare =============== Computes the sum square of the input tensor’s element along the provided axes. The resulting …

ReduceSum_1

ReduceSum_11

ReduceSum_13

ReduceSum ========= Computes the sum of the input tensor’s element along the provided axes. The resulting tensor has the …

ReduceSum_13

ReduceSum ========= Computes the sum of the input tensor’s element along the provided axes. The resulting tensor has the …

RefAttrName

Implements a link between a parameter of a function and an attribute in node.

Relu

Relu ==== Relu takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the rectified linear function, …

Reshape_13

Reshape_14

Reshape ======= Reshape the input tensor similar to numpy.reshape. First input is the data tensor, second input is a shape …

Reshape_14

Reshape ======= Reshape the input tensor similar to numpy.reshape. First input is the data tensor, second input is a shape …

Reshape_5

Resize

Resize ====== Resize the input tensor. In general, it calculates every value in the output tensor as a weighted average …

RoiAlign

RoiAlign ======== Region of Interest (RoI) align operation described in the [Mask R-CNN paper](https://arxiv.org/abs/1703.06870). …

RoiAlignDouble

Implements double runtime for operator RoiAlign. The code is inspired from pool.cc

RoiAlignFloat

Implements float runtime for operator RoiAlign. The code is inspired from pool.cc

Round

Round ===== Round takes one input Tensor and rounds the values, element-wise, meaning it finds the nearest integer for …

RuntimeBadResultsError

Raised when the results are too different from scikit-learn.

RuntimeNonMaxSuppression

Implements runtime for operator NonMaxSuppression. The code is inspired from non_max_suppression.cc

RuntimeSVMClassifierDouble

Implements runtime for operator SVMClassifierDouble. The code is inspired from svm_classifier.cc

RuntimeSVMClassifierFloat

Implements runtime for operator SVMClassifier. The code is inspired from svm_classifier.cc

RuntimeSVMRegressorDouble

Implements Double runtime for operator SVMRegressor. The code is inspired from svm_regressor.cc

RuntimeSVMRegressorFloat

Implements float runtime for operator SVMRegressor. The code is inspired from svm_regressor.cc

RuntimeTfIdfVectorizer

Implements runtime for operator TfIdfVectorizer. The code is inspired from tfidfvectorizer.cc

RuntimeTreeEnsembleClassifierDouble

Implements runtime for operator TreeEnsembleClassifier. The code is inspired from tree_ensemble_classifier.cc

RuntimeTreeEnsembleClassifierFloat

Implements runtime for operator TreeEnsembleClassifier. The code is inspired from tree_ensemble_classifier.cc

RuntimeTreeEnsembleClassifierPDouble

Implements double runtime for operator TreeEnsembleClassifier. The code is inspired from tree_ensemble_classifier.cc

RuntimeTreeEnsembleClassifierPFloat

Implements float runtime for operator TreeEnsembleClassifier. The code is inspired from tree_ensemble_classifier.cc

RuntimeTreeEnsembleRegressorDouble

Implements double runtime for operator TreeEnsembleRegressor. The code is inspired from tree_ensemble_regressor.cc

RuntimeTreeEnsembleRegressorFloat

Implements float runtime for operator TreeEnsembleRegressor. The code is inspired from tree_ensemble_regressor.cc

RuntimeTreeEnsembleRegressorPDouble

Implements double runtime for operator TreeEnsembleRegressor. The code is inspired from tree_ensemble_regressor.cc

RuntimeTreeEnsembleRegressorPFloat

Implements float runtime for operator TreeEnsembleRegressor. The code is inspired from tree_ensemble_regressor.cc

RuntimeTypeError

Raised when a type of a variable is unexpected.

STFT

STFT ==== Computes the Short-time Fourier Transform of the signal. Attributes

SVMClassifier

SVMClassifier (ai.onnx.ml) ========================== Support Vector Machine classifier Attributes

SVMClassifierCommon

SVMClassifierDouble

SVMClassifierDouble (mlprodict) =============================== Version Onnx name: SVMClassifierDouble

SVMClassifierDoubleSchema

Defines a schema for operators added in this package such as SVMClassifierDouble.

SVMRegressor

SVMRegressor (ai.onnx.ml) ========================= Support Vector Machine regression prediction and one-class SVM anomaly …

SVMRegressorCommon

SVMRegressorDouble

SVMRegressorDouble (mlprodict) ============================== Version Onnx name: SVMRegressorDouble

SVMRegressorDoubleSchema

Defines a schema for operators added in this package such as SVMRegressorDouble.

Scaler

Scaler (ai.onnx.ml) =================== Rescale input data, for example to standardize features by removing the mean and …

Scan

Scan ==== Scan can be used to iterate over one or more scan_input tensors, constructing zero or more scan_output tensors. …

ScatterElements

ScatterElements =============== ScatterElements takes three inputs data, updates, and indices of the same rank r …

ScatterND

ScatterND ========= ScatterND takes three inputs data tensor of rank r >= 1, indices tensor of rank q >= 1, and updates

Schema

Wrapper around a schema.

Selu

Selu ==== Selu takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the scaled exponential …

SentencePieceTokenizerTransformer

Wraps SentencePieceTokenizer

SequenceAt

SequenceAt ========== Outputs a tensor copy from the tensor at ‘position’ in ‘input_sequence’. Accepted range for ‘position’ …

SequenceConstruct

SequenceConstruct ================= Construct a tensor sequence containing ‘inputs’ tensors. All tensors in ‘inputs’ must …

SequenceEmpty

SequenceEmpty ============= Construct an empty tensor sequence, with given data type. Attributes

SequenceInsert

SequenceInsert ============== Outputs a tensor sequence that inserts ‘tensor’ into ‘input_sequence’ at ‘position’. ‘tensor’ …

ShapeConstraint

One constraint.

ShapeConstraintList

A list of ShapeConstraint.

ShapeContainer

Stores all inferred shapes as ShapeResult. Attributes:

ShapeInferenceDimensionError

Raised when the shape cannot continue due to unknown dimension.

ShapeInferenceException

Raised when shape inference fails.

ShapeInferenceMissing

Raised when an operator is missing.

ShapeResult

Contains information about shape and type of a result in an onnx graph.

Shape_1

Shape_15

Shape ===== Takes a tensor as input and outputs a 1D int64 tensor containing the shape of the input tensor. Optional …

Shape_15

Shape ===== Takes a tensor as input and outputs a 1D int64 tensor containing the shape of the input tensor. Optional …

Shrink

Shrink ====== Shrink takes one input data (Tensor<numeric>) and produces one Tensor output, having same datatype and shape …

Sigmoid

Sigmoid ======= Sigmoid takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the sigmoid function, …

Sign

Sign ==== Calculate the sign of the given input tensor element-wise. If input > 0, output 1. if input < 0, output -1. …

SimplifiedOnnxInference

Simple wrapper around InferenceSession which imitates OnnxInference. It only enables CPUExecutionProvider. …

Sin

Sin === Calculates the sine of the given input tensor, element-wise. Inputs

Sinh

Sinh ==== Calculates the hyperbolic sine of the given input tensor element-wise. Inputs

Size

Size ==== Takes a tensor as input and outputs an int64 scalar equal to the total number of elements of the input …

SliceCommon

Slice_1

Slice_10

Slice ===== Produces a slice of the input tensor along multiple axes. Similar to numpy: https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html

Slice_10

Slice ===== Produces a slice of the input tensor along multiple axes. Similar to numpy: https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html

SoftmaxCrossEntropyLoss

Python runtime for function SoftmaxCrossEntropyLoss.

SoftmaxGradSchema

Defines a schema for operators added in this package such as SoftmaxGrad_13.

SoftmaxGrad_13

SoftmaxGrad computes dX = Y * ( dY - ReduceSum(Y * dY)). ONNX does not have a dot product, which can be …
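
A plain numpy transcription of the quoted formula (the reduction axis handling here is an assumption for illustration):

```python
import numpy

def softmax_grad(Y, dY, axis=-1):
    # dX = Y * (dY - ReduceSum(Y * dY)), with the reduction kept on `axis`.
    s = (Y * dY).sum(axis=axis, keepdims=True)
    return Y * (dY - s)
```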

SoftmaxGrad_13

SoftmaxGrad computes dX = Y * ( dY - ReduceSum(Y * dY)). ONNX does not have a dot product, which can be …

Softmax_1

Softmax_13

Softmax ======= The operator computes the normalized exponential values for the given input: Softmax(input, axis) = …

Softmax_13

Softmax ======= The operator computes the normalized exponential values for the given input: Softmax(input, axis) = …

Softplus

Softplus ======== Softplus takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the softplus …

Softsign

Softsign ======== Calculates the softsign (x/(1+|x|)) of the given input tensor element-wise. Inputs

Solve

Solve (mlprodict) ================= Version Onnx name: Solve

SolveSchema

Defines a schema for operators added in this package such as Solve.

SpaceToDepth

SpaceToDepth ============ SpaceToDepth rearranges blocks of spatial data into depth. More specifically, this op outputs …

Split_11

Runtime for operator Split.

Split_13

Runtime for operator Split.

Split_13

Runtime for operator Split.

Split_2

Runtime for operator Split.

Sqrt

Sqrt ==== Square root takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the square root …

Squeeze_1

Squeeze_11

Squeeze_13

Squeeze ======= Remove single-dimensional entries from the shape of a tensor. Takes an input axes with a list of axes …

Squeeze_13

Squeeze ======= Remove single-dimensional entries from the shape of a tensor. Takes an input axes with a list of axes …

StringNormalizer

The operator is not really thread-safe as python cannot play with two locales at the same time. Stop words should …

Sub

Sub === Performs element-wise binary subtraction (with Numpy-style broadcasting support). This operator supports **multidirectional …

Sum

Sum === Element-wise sum of each of the input tensors (with Numpy-style broadcasting support). All inputs and outputs …

Tan

Tan === Calculates the tangent of the given input tensor, element-wise. Inputs

Tanh

Tanh ==== Calculates the hyperbolic tangent of the given input tensor element-wise. Inputs

TemplateBenchmarkClassifier

asv test for a classifier. Full template can be found in common_asv_skl.py. …

TemplateBenchmarkClassifierRawScore

asv test for a classifier. Full template can be found in common_asv_skl.py. …

TemplateBenchmarkClustering

asv example for a clustering algorithm. Full template can be found in common_asv_skl.py. …

TemplateBenchmarkMultiClassifier

asv example for a classifier. Full template can be found in common_asv_skl.py. …

TemplateBenchmarkOutlier

asv example for an outlier detector. Full template can be found in common_asv_skl.py. …

TemplateBenchmarkRegressor

asv example for a regressor. Full template can be found in common_asv_skl.py. …

TemplateBenchmarkTrainableTransform

asv example for a trainable transform. Full template can be found in common_asv_skl.py. …

TemplateBenchmarkTransform

asv example for a transform. Full template can be found in common_asv_skl.py. …

TemplateBenchmarkTransformPositive

asv example for a transform. Full template can be found in common_asv_skl.py. …

Tf2OnnxConvert

Applies the converter on an ONNX graph.

TfIdfVectorizer

TfIdfVectorizer =============== This transform extracts n-grams from the input sequence and save them as a vector. Input …

ThresholdedRelu

ThresholdedRelu =============== ThresholdedRelu takes one input data (Tensor<T>) and produces one output data (Tensor<T>) …

Tokenizer

See Tokenizer.

TokenizerSchema

Defines a schema for operators added in this package such as TreeEnsembleClassifierDouble.

TokenizerTransformerBase

Base class for SentencePieceTokenizerTransformer and GPT2TokenizerTransformer.

TopK_1

TopK_10

TopK_11

TopK ==== Retrieve the top-K largest or smallest elements along a specified axis. Given an input tensor of shape [a_1, …

TopK_11

TopK ==== Retrieve the top-K largest or smallest elements along a specified axis. Given an input tensor of shape [a_1, …

Transpose

Transpose ========= Transpose the input tensor similar to numpy.transpose. For example, when perm=(1, 0, 2), given an …
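
The entry points to numpy.transpose; a quick check of the perm=(1, 0, 2) example:

```python
import numpy

x = numpy.arange(24).reshape(2, 3, 4)
y = numpy.transpose(x, (1, 0, 2))   # axis 1 first, then axis 0, then axis 2
print(x.shape, "->", y.shape)       # (2, 3, 4) -> (3, 2, 4)
```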

TreeEnsembleClassifierCommon

TreeEnsembleClassifierDouble

TreeEnsembleClassifierDouble (mlprodict) ======================================== Version Onnx name: TreeEnsembleClassifierDouble

TreeEnsembleClassifierDoubleSchema

Defines a schema for operators added in this package such as TreeEnsembleClassifierDouble.

TreeEnsembleClassifier_1

TreeEnsembleClassifier_3

TreeEnsembleClassifier (ai.onnx.ml) =================================== Tree Ensemble classifier. Returns the top class …

TreeEnsembleClassifier_3

TreeEnsembleClassifier (ai.onnx.ml) =================================== Tree Ensemble classifier. Returns the top class …

TreeEnsembleRegressorCommon

TreeEnsembleRegressorDouble

Runtime for the custom operator TreeEnsembleRegressorDouble.

TreeEnsembleRegressorDoubleSchema

Defines a schema for operators added in this package such as TreeEnsembleRegressorDouble.

TreeEnsembleRegressor_1

TreeEnsembleRegressor_3

TreeEnsembleRegressor (ai.onnx.ml) ================================== Tree Ensemble regressor. Returns the regressed …

TreeEnsembleRegressor_3

TreeEnsembleRegressor (ai.onnx.ml) ================================== Tree Ensemble regressor. Returns the regressed …

Trilu

Trilu ===== Given a 2-D matrix or batches of 2-D matrices, returns the upper or lower triangular part of the tensor(s). …

UndefinedSchema

Undefined schema.

Unique

Unique ====== Find the unique elements of a tensor. When an optional attribute ‘axis’ is provided, unique subtensors sliced …

Unsqueeze_1

Unsqueeze_11

Unsqueeze_13

Unsqueeze ========= Insert single-dimensional entries to the shape of an input tensor (data). Takes one required input …

Unsqueeze_13

Unsqueeze ========= Insert single-dimensional entries to the shape of an input tensor (data). Takes one required input …

Variable

An input or output to an ONNX graph.

Where

Where ===== Return elements, either from X or Y, depending on condition. Where behaves like [numpy.where](https://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html) …

WrappedLightGbmBooster

A booster can be a classifier or a regressor. Trick to wrap it in a minimal function.

WrappedLightGbmBoosterClassifier

Trick to wrap a LGBMClassifier into a class.

XGBClassifierConverter

Converter for XGBClassifier.

XGBConverter

Common methods for converters.

XGBRegressorConverter

Converter class.

Xor

Xor === Returns the tensor resulting from performing the xor logical operation elementwise on the input tensors A and …

YieldOp

YieldOp (mlprodict) =================== Version Onnx name: YieldOp

YieldOpSchema

Defines a schema for operators added in this package such as ComplexAbs.

ZipMap

The class does not output a dictionary as specified in ONNX specifications but an ArrayZipMapDictionary

ZipMapDictionary

Custom dictionary class, much faster for this runtime; it implements a subset of the same methods.

_ArgMax

Base class for runtime for operator ArgMax. …

_ArgMin

Base class for runtime for operator ArgMin. …

_ClassifierCommon

Label strings are not natively implemented in the C++ runtime. The class stores the string labels, replaces them by …

_CombineModels

_CommonAsvSklBenchmark

Common tests to all benchmarks testing converted scikit-learn models. See benchmark attributes. …

_CommonAsvSklBenchmarkClassifier

Common class for a classifier.

_CommonAsvSklBenchmarkClassifierRawScore

Common class for a classifier.

_CommonAsvSklBenchmarkClustering

Common class for a clustering algorithm.

_CommonAsvSklBenchmarkMultiClassifier

Common class for a multi-classifier.

_CommonAsvSklBenchmarkOutlier

Common class for outlier detection.

_CommonAsvSklBenchmarkRegressor

Common class for a regressor.

_CommonAsvSklBenchmarkTrainableTransform

Common class for a trainable transformer.

_CommonAsvSklBenchmarkTransform

Common class for a transformer.

_CommonAsvSklBenchmarkTransformPositive

Common class for a transformer for positive features.

_CommonQuantizeLinear

_CommonRandom

Common methods to all random operators.

_CommonTopK

This class hides a parameter used as a threshold above which the parallelisation is started: th_para.

_CommonWindow

_CustomSchema

For operators defined outside onnx.

_GraphBuilder

Graph builder. It takes a graph structure made with instances of OnnxOperatorBase. The main method is to_onnx. …

_MyEncoder

_NDArrayAlias

Ancestor to custom signature.

_OnnxPipelineStepSpeedup

Speeds up inference by replacing methods transform or predict with a runtime for ONNX.

_ParamEncoder

_Softmax

_StaticVariables

Holds static variables.

_WrapperLogger

Wrappers around class logging.Logger to take indentation into account.

_WrapperPrint

Wrappers around print to help debugging.

_created_classes

Class to store all dynamic classes created by wrappers.

_inline_mapping

Overwrites class dictionary to debug more easily.

if_then_else

Overloads class OnnxVarGraph.

tf_op

Decorator to register any new converter.

wrapper_onnxnumpy

Intermediate wrapper to store a pointer to the compiler (type: OnnxNumpyCompiler).

wrapper_onnxnumpy_np

Intermediate wrapper to store a pointer to the compiler (type: OnnxNumpyCompiler) supporting multiple signatures. …