module onnxrt.ops_cpu.op_lrn#
Short summary#
module mlprodict.onnxrt.ops_cpu.op_lrn
Runtime operator.
Classes#

class | truncated documentation
---|---
LRN | Local Response Normalization proposed in the [AlexNet paper](https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf). …
Properties#

property | truncated documentation
---|---
 | Returns the list of arguments as well as the list of parameters with the default values (close to the signature). …
 | Returns the list of modified parameters.
 | Returns the list of optional arguments.
 | Returns the list of optional arguments.
 | Returns all parameters in a dictionary.
Methods#

method | truncated documentation
---|---
__init__ |
_run | Should be overwritten.
Documentation#
Runtime operator.
- class mlprodict.onnxrt.ops_cpu.op_lrn.LRN(onnx_node, desc=None, **options)#
Bases:
OpRun
Local Response Normalization proposed in the [AlexNet paper](https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf). It normalizes over local input regions. The local region is defined across the channels. For an element X[n, c, d1, …, dk] in a tensor of shape (N x C x D1 x D2, …, Dk), its region is {X[n, i, d1, …, dk] | max(0, c - floor((size - 1) / 2)) <= i <= min(C - 1, c + ceil((size - 1) / 2))}.
square_sum[n, c, d1, …, dk] = sum(X[n, i, d1, …, dk] ^ 2), where max(0, c - floor((size - 1) / 2)) <= i <= min(C - 1, c + ceil((size - 1) / 2)).
Y[n, c, d1, …, dk] = X[n, c, d1, …, dk] / (bias + alpha / size * square_sum[n, c, d1, …, dk] ) ^ beta
Attributes

- alpha (FLOAT): Scaling parameter. Default value is 9.999999747378752e-05.
- beta (FLOAT): The exponent. Default value is 0.75.
- bias (FLOAT): Default value is 1.0.
- size (INT, required): The number of channels to sum over. The default value cannot be automatically retrieved.
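The formulas and defaults above can be sketched as a plain NumPy reference implementation. This is a minimal sketch, not mlprodict's actual runtime code: `lrn_reference` is a hypothetical helper name, and the input is assumed to have shape (N, C, D1, ..., Dk) with normalization across the channel axis.

```python
import numpy as np

def lrn_reference(X, size, alpha=0.0001, beta=0.75, bias=1.0):
    """Hypothetical LRN reference over the channel axis of X, shape (N, C, D1, ..., Dk)."""
    C = X.shape[1]
    square_sum = np.zeros_like(X)
    for c in range(C):
        # Region: max(0, c - floor((size-1)/2)) <= i <= min(C-1, c + ceil((size-1)/2))
        lo = max(0, c - (size - 1) // 2)
        hi = min(C - 1, c + size // 2)  # size // 2 == ceil((size - 1) / 2)
        square_sum[:, c] = np.sum(X[:, lo:hi + 1] ** 2, axis=1)
    return X / (bias + alpha / size * square_sum) ** beta
```

With the defaults and a constant input, each channel is divided by `(1 + 0.0001 / size * square_sum) ** 0.75`, so the effect is mild unless activations are large, which matches the role LRN played in AlexNet.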
Inputs
X (heterogeneous) - T: Input data tensor from the previous operator; dimensions for the image case are (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and the width of the data. For the non-image case, the dimensions are in the form of (N x C x D1 x D2 … Dn), where N is the batch size. Optionally, if dimension denotation is in effect, the operation expects the input data tensor to arrive with the dimension denotation of [DATA_BATCH, DATA_CHANNEL, DATA_FEATURE, DATA_FEATURE …].
Outputs
Y (heterogeneous) - T: Output tensor, which has the same shape and type as the input tensor.
Type Constraints
T (tensor(float16), tensor(float), tensor(double), tensor(bfloat16)): Constrain input and output types to float tensors.
Version
Onnx name: LRN
This version of the operator has been available since version 13.
Runtime implementation:
LRN
- __init__(onnx_node, desc=None, **options)#
- _run(x, attributes=None, verbose=0, fLOG=None)#
Should be overwritten.