.. _l-onnx-doc-LRN:

===
LRN
===

.. contents::
    :local:

.. _l-onnx-op-lrn-13:

LRN - 13
========

**Version**

* **name**: `LRN (GitHub) <https://github.com/onnx/onnx/blob/main/docs/Operators.md#LRN>`_
* **domain**: **main**
* **since_version**: **13**
* **function**: False
* **support_level**: SupportType.COMMON
* **shape inference**: True

This version of the operator has been available **since version 13**.

**Summary**

Local Response Normalization proposed in the `AlexNet paper
<https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf>`_.
It normalizes over local input regions. The local region is defined across
the channels. For an element ``X[n, c, d1, ..., dk]`` in a tensor of shape
``(N x C x D1 x D2 x ... x Dk)``, its region is
``{X[n, i, d1, ..., dk] | max(0, c - floor((size - 1) / 2)) <= i <= min(C - 1, c + ceil((size - 1) / 2))}``.

::

    square_sum[n, c, d1, ..., dk] = sum(X[n, i, d1, ..., dk] ^ 2),
    where max(0, c - floor((size - 1) / 2)) <= i <= min(C - 1, c + ceil((size - 1) / 2))

    Y[n, c, d1, ..., dk] = X[n, c, d1, ..., dk] / (bias + alpha / size * square_sum[n, c, d1, ..., dk]) ^ beta

**Attributes**

* **alpha**:
  Scaling parameter. Default value is ``9.999999747378752e-05``.
* **beta**:
  The exponent. Default value is ``0.75``.
* **bias**:
  Additive constant in the normalization denominator (see the formula
  above). Default value is ``1.0``.
* **size** (required):
  The number of channels to sum over.

**Inputs**

* **X** (heterogeneous) - **T**:
  Input data tensor from the previous operator; dimensions for the image
  case are (N x C x H x W), where N is the batch size, C is the number
  of channels, and H and W are the height and width of the data. For the
  non-image case, the dimensions are of the form (N x C x D1 x D2 ... Dn),
  where N is the batch size. Optionally, if dimension denotation is in
  effect, the operation expects the input data tensor to arrive with the
  dimension denotation [DATA_BATCH, DATA_CHANNEL, DATA_FEATURE,
  DATA_FEATURE ...].

**Outputs**

* **Y** (heterogeneous) - **T**:
  Output tensor, which has the same shape and type as the input tensor.

**Type Constraints**

* **T** in (
  tensor(bfloat16),
  tensor(double),
  tensor(float),
  tensor(float16)
  ):
  Constrain input and output types to float tensors.

**Examples**

**test_lrn**

::

    import math

    import numpy as np
    import onnx

    # "expect" is the helper used by the ONNX backend tests to run the
    # node and compare its output against the expected values.

    alpha = 0.0002
    beta = 0.5
    bias = 2.0
    nsize = 3
    node = onnx.helper.make_node(
        "LRN",
        inputs=["x"],
        outputs=["y"],
        alpha=alpha,
        beta=beta,
        bias=bias,
        size=nsize,
    )
    x = np.random.randn(5, 5, 5, 5).astype(np.float32)
    square_sum = np.zeros((5, 5, 5, 5)).astype(np.float32)
    for n, c, h, w in np.ndindex(x.shape):
        square_sum[n, c, h, w] = sum(
            x[
                n,
                max(0, c - int(math.floor((nsize - 1) / 2))) : min(
                    5, c + int(math.ceil((nsize - 1) / 2)) + 1
                ),
                h,
                w,
            ]
            ** 2
        )
    y = x / ((bias + (alpha / nsize) * square_sum) ** beta)
    expect(node, inputs=[x], outputs=[y], name="test_lrn")

**test_lrn_default**

::

    import math

    import numpy as np
    import onnx

    # Only the required "size" attribute is set on the node; alpha, beta
    # and bias are listed at their default values to compute the expected
    # output with the same reference loop as above.
    alpha = 0.0001
    beta = 0.75
    bias = 1.0
    nsize = 3
    node = onnx.helper.make_node("LRN", inputs=["x"], outputs=["y"], size=3)
    x = np.random.randn(5, 5, 5, 5).astype(np.float32)
    square_sum = np.zeros((5, 5, 5, 5)).astype(np.float32)
    for n, c, h, w in np.ndindex(x.shape):
        square_sum[n, c, h, w] = sum(
            x[
                n,
                max(0, c - int(math.floor((nsize - 1) / 2))) : min(
                    5, c + int(math.ceil((nsize - 1) / 2)) + 1
                ),
                h,
                w,
            ]
            ** 2
        )
    y = x / ((bias + (alpha / nsize) * square_sum) ** beta)
    expect(node, inputs=[x], outputs=[y], name="test_lrn_default")
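The reference loops above recompute the channel window for every element. As
a minimal vectorized sketch (my own illustration, not part of the ONNX
codebase; the function name ``lrn_ref`` and the prefix-sum formulation are
assumptions), the same result can be obtained with one cumulative sum over
the channel axis:

::

    import math

    import numpy as np


    def lrn_ref(x, size, alpha=0.0001, beta=0.75, bias=1.0):
        """LRN over axis 1 (channels) of an (N, C, ...) array, per the formula above."""
        C = x.shape[1]
        sq = x.astype(np.float64) ** 2
        # Prefix sums over the channel axis with a leading zero slice, so the
        # window sum for channels [lo, hi) is csum[:, hi] - csum[:, lo].
        csum = np.concatenate(
            [np.zeros_like(sq[:, :1]), np.cumsum(sq, axis=1)], axis=1
        )
        lo = np.maximum(0, np.arange(C) - math.floor((size - 1) / 2))
        hi = np.minimum(C, np.arange(C) + math.ceil((size - 1) / 2) + 1)  # exclusive
        square_sum = csum[:, hi] - csum[:, lo]
        return (x / (bias + (alpha / size) * square_sum) ** beta).astype(x.dtype)


    x = np.random.randn(2, 7, 4, 4).astype(np.float32)
    y = lrn_ref(x, size=3)  # same shape and dtype as x

Accumulating the squares in float64 keeps the prefix-sum differences from
losing precision when the channel axis is long; the result is cast back to
the input dtype at the end.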
**Differences**

The only change from version 1 is in the type constraints: version 13 adds
``tensor(bfloat16)`` to the types allowed for **T**. The summary, attributes,
inputs and outputs are otherwise identical.
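As a minimal sketch of what that change permits (my own illustration built
from standard ``onnx.helper`` calls; the graph and tensor names are
hypothetical), an opset-13 model may declare bfloat16 inputs and outputs
for LRN:

::

    import onnx
    from onnx import TensorProto, helper

    # LRN over a bfloat16 tensor; tensor(bfloat16) joined constraint T in opset 13.
    x = helper.make_tensor_value_info("x", TensorProto.BFLOAT16, [1, 5, 5, 5])
    y = helper.make_tensor_value_info("y", TensorProto.BFLOAT16, [1, 5, 5, 5])
    node = helper.make_node("LRN", inputs=["x"], outputs=["y"], size=3)
    graph = helper.make_graph([node], "lrn_bfloat16", [x], [y])
    model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 13)])
    onnx.checker.check_model(model)  # structural validation of the model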
.. _l-onnx-op-lrn-1:

LRN - 1
=======

**Version**

* **name**: `LRN (GitHub) <https://github.com/onnx/onnx/blob/main/docs/Operators.md#LRN>`_
* **domain**: **main**
* **since_version**: **1**
* **function**: False
* **support_level**: SupportType.COMMON
* **shape inference**: True

This version of the operator has been available **since version 1**.

**Summary**

Local Response Normalization proposed in the `AlexNet paper
<https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf>`_.
It normalizes over local input regions. The local region is defined across
the channels. For an element ``X[n, c, d1, ..., dk]`` in a tensor of shape
``(N x C x D1 x D2 x ... x Dk)``, its region is
``{X[n, i, d1, ..., dk] | max(0, c - floor((size - 1) / 2)) <= i <= min(C - 1, c + ceil((size - 1) / 2))}``.

::

    square_sum[n, c, d1, ..., dk] = sum(X[n, i, d1, ..., dk] ^ 2),
    where max(0, c - floor((size - 1) / 2)) <= i <= min(C - 1, c + ceil((size - 1) / 2))

    Y[n, c, d1, ..., dk] = X[n, c, d1, ..., dk] / (bias + alpha / size * square_sum[n, c, d1, ..., dk]) ^ beta

**Attributes**

* **alpha**:
  Scaling parameter. Default value is ``9.999999747378752e-05``.
* **beta**:
  The exponent. Default value is ``0.75``.
* **bias**:
  Additive constant in the normalization denominator (see the formula
  above). Default value is ``1.0``.
* **size** (required):
  The number of channels to sum over.

**Inputs**

* **X** (heterogeneous) - **T**:
  Input data tensor from the previous operator; dimensions for the image
  case are (N x C x H x W), where N is the batch size, C is the number
  of channels, and H and W are the height and width of the data. For the
  non-image case, the dimensions are of the form (N x C x D1 x D2 ... Dn),
  where N is the batch size. Optionally, if dimension denotation is in
  effect, the operation expects the input data tensor to arrive with the
  dimension denotation [DATA_BATCH, DATA_CHANNEL, DATA_FEATURE,
  DATA_FEATURE ...].

**Outputs**

* **Y** (heterogeneous) - **T**:
  Output tensor, which has the same shape and type as the input tensor.

**Type Constraints**

* **T** in (
  tensor(double),
  tensor(float),
  tensor(float16)
  ):
  Constrain input and output types to float tensors.
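The region definition in the summaries above clips the channel window at
both ends, so edge channels sum over fewer than ``size`` channels. To make
that concrete, here is a small sketch (the helper ``channel_window`` is
hypothetical, written only to mirror the formula):

::

    import math


    def channel_window(c, C, size):
        """Inclusive channel indices summed for channel c, per the formula above."""
        lo = max(0, c - math.floor((size - 1) / 2))
        hi = min(C - 1, c + math.ceil((size - 1) / 2))
        return list(range(lo, hi + 1))


    # With C = 5 channels and size = 3:
    assert channel_window(0, 5, 3) == [0, 1]     # clipped at the bottom
    assert channel_window(2, 5, 3) == [1, 2, 3]  # full window
    assert channel_window(4, 5, 3) == [3, 4]     # clipped at the top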