.. _l-onnx-doc-ScatterND:

=========
ScatterND
=========

.. contents::
    :local:

.. _l-onnx-op-scatternd-18:

ScatterND - 18
==============

**Version**

* **name**: `ScatterND (GitHub) `_
* **domain**: **main**
* **since_version**: **18**
* **function**: False
* **support_level**: SupportType.COMMON
* **shape inference**: True

This version of the operator has been available **since version 18**.

**Summary**

ScatterND takes three inputs: a `data` tensor of rank r >= 1, an `indices` tensor of rank q >= 1,
and an `updates` tensor of rank q + r - indices.shape[-1] - 1. The output of the operation
is produced by creating a copy of the input `data` and then updating its values to the values
specified by `updates` at the index positions specified by `indices`. Its output shape
is the same as the shape of `data`.

`indices` is an integer tensor. Let k denote indices.shape[-1], the last dimension in the shape of `indices`.
`indices` is treated as a (q-1)-dimensional tensor of k-tuples, where each k-tuple is a partial index into `data`.
Hence, k can be at most the rank of `data`. When k equals rank(data), each update entry specifies an
update to a single element of the tensor. When k is less than rank(data), each update entry specifies an
update to a slice of the tensor. Index values are allowed to be negative, following the usual
convention of counting backwards from the end, but they are expected to be in the valid range.

`updates` is treated as a (q-1)-dimensional tensor of replacement-slice-values. Thus, the
first (q-1) dimensions of updates.shape must match the first (q-1) dimensions of indices.shape.
The remaining dimensions of `updates` correspond to the dimensions of the
replacement-slice-values. Each replacement-slice-value is an (r-k)-dimensional tensor,
corresponding to the trailing (r-k) dimensions of `data`. Thus, the shape of `updates`
must equal indices.shape[0:q-1] ++ data.shape[k:r-1], where ++ denotes the concatenation
of shapes.
The `output` is calculated via the following equation:

::

    output = np.copy(data)
    update_indices = indices.shape[:-1]
    for idx in np.ndindex(update_indices):
        output[indices[idx]] = updates[idx]

The order of iteration in the above loop is not specified. In particular, `indices` should not
have duplicate entries: that is, if idx1 != idx2, then indices[idx1] != indices[idx2]. This
ensures that the output value does not depend on the iteration order.

`reduction` allows specification of an optional reduction operation, which is applied to all
values in the `updates` tensor into `output` at the specified `indices`. In cases where
`reduction` is set to "none", `indices` should not have duplicate entries: that is, if
idx1 != idx2, then indices[idx1] != indices[idx2]. This ensures that the output value does not
depend on the iteration order. When `reduction` is set to some reduction function `f`, `output`
is calculated as follows:

::

    output = np.copy(data)
    update_indices = indices.shape[:-1]
    for idx in np.ndindex(update_indices):
        output[indices[idx]] = f(output[indices[idx]], updates[idx])

where `f` is +, *, max, or min, as specified.

This operator is the inverse of GatherND.

(Opset 18 change): Adds max/min to the set of allowed reduction ops.
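The pseudocode above maps directly onto NumPy. The following is a minimal sketch of the semantics, assuming NumPy only; the function name `scatter_nd` and the combined handling of all reduction modes are illustrative, not the official ONNX reference implementation:

```python
import numpy as np

def scatter_nd(data, indices, updates, reduction="none"):
    # Copy data, then walk the first (q-1) dimensions of indices;
    # each k-tuple indices[idx] selects the element or slice to update.
    output = np.copy(data)
    for idx in np.ndindex(indices.shape[:-1]):
        target = tuple(indices[idx])
        if reduction == "add":
            output[target] += updates[idx]
        elif reduction == "mul":
            output[target] *= updates[idx]
        elif reduction == "max":
            output[target] = np.maximum(output[target], updates[idx])
        elif reduction == "min":
            output[target] = np.minimum(output[target], updates[idx])
        else:  # "none": plain assignment; duplicate indices leave the result unspecified
            output[target] = updates[idx]
    return output

# Example 1 from the text: k = 1 = rank(data), so each update targets one element.
data = np.array([1, 2, 3, 4, 5, 6, 7, 8])
indices = np.array([[4], [3], [1], [7]], dtype=np.int64)
updates = np.array([9, 10, 11, 12])
print(scatter_nd(data, indices, updates))  # [ 1 11  3 10  9  6  7 12]
```

With `reduction` set, duplicate index tuples are well defined: each colliding update is folded into the output with the chosen function instead of overwriting it.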
Example 1:

::

    data = [1, 2, 3, 4, 5, 6, 7, 8]
    indices = [[4], [3], [1], [7]]
    updates = [9, 10, 11, 12]
    output = [1, 11, 3, 10, 9, 6, 7, 12]

Example 2:

::

    data = [[[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
            [[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
            [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]],
            [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]]
    indices = [[0], [2]]
    updates = [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
               [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]]]
    output = [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
              [[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
              [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
              [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]]

**Attributes**

* **reduction**:
  Type of reduction to apply: none (default), add, mul, max, min.
  'none': no reduction applied. 'add': reduction using the addition
  operation. 'mul': reduction using the multiplication operation.
  'max': reduction using the maximum operation. 'min': reduction using
  the minimum operation. Default value is ``'none'``.

**Inputs**

* **data** (heterogeneous) - **T**:
  Tensor of rank r >= 1.
* **indices** (heterogeneous) - **tensor(int64)**:
  Tensor of rank q >= 1.
* **updates** (heterogeneous) - **T**:
  Tensor of rank q + r - indices_shape[-1] - 1.

**Outputs**

* **output** (heterogeneous) - **T**:
  Tensor of rank r >= 1.

**Type Constraints**

* **T** in (
  tensor(bfloat16),
  tensor(bool),
  tensor(complex128),
  tensor(complex64),
  tensor(double),
  tensor(float),
  tensor(float16),
  tensor(int16),
  tensor(int32),
  tensor(int64),
  tensor(int8),
  tensor(string),
  tensor(uint16),
  tensor(uint32),
  tensor(uint64),
  tensor(uint8)
  ):
  Constrain input and output types to any tensor type.
**Examples**

**_scatternd**

::

    node = onnx.helper.make_node(
        "ScatterND",
        inputs=["data", "indices", "updates"],
        outputs=["y"],
    )
    data = np.array(
        [
            [[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
            [[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
            [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]],
            [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]],
        ],
        dtype=np.float32,
    )
    indices = np.array([[0], [2]], dtype=np.int64)
    updates = np.array(
        [
            [[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
            [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
        ],
        dtype=np.float32,
    )
    # Expecting output as np.array(
    #     [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
    #      [[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
    #      [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
    #      [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]], dtype=np.float32)
    output = scatter_nd_impl(data, indices, updates)
    expect(
        node,
        inputs=[data, indices, updates],
        outputs=[output],
        name="test_scatternd",
    )

**_scatternd_add**

::

    node = onnx.helper.make_node(
        "ScatterND",
        inputs=["data", "indices", "updates"],
        outputs=["y"],
        reduction="add",
    )
    data = np.array(
        [
            [[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
            [[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
            [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]],
            [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]],
        ],
        dtype=np.float32,
    )
    indices = np.array([[0], [0]], dtype=np.int64)
    updates = np.array(
        [
            [[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
            [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
        ],
        dtype=np.float32,
    )
    # Expecting output as np.array(
    #     [[[7, 8, 9, 10], [13, 14, 15, 16], [18, 17, 16, 15], [16, 15, 14, 13]],
    #      [[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
    #      [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
    #      [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]], dtype=np.float32)
    output = scatter_nd_impl(data, indices, updates, reduction="add")
    expect(
        node,
        inputs=[data, indices, updates],
        outputs=[output],
        name="test_scatternd_add",
    )

**_scatternd_multiply**

::

    node = onnx.helper.make_node(
        "ScatterND",
        inputs=["data", "indices", "updates"],
        outputs=["y"],
        reduction="mul",
    )
    data = np.array(
        [
            [[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
            [[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
            [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]],
            [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]],
        ],
        dtype=np.float32,
    )
    indices = np.array([[0], [0]], dtype=np.int64)
    updates = np.array(
        [
            [[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
            [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
        ],
        dtype=np.float32,
    )
    # Expecting output as np.array(
    #     [[[5, 10, 15, 20], [60, 72, 84, 96], [168, 147, 126, 105], [128, 96, 64, 32]],
    #      [[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
    #      [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
    #      [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]], dtype=np.float32)
    output = scatter_nd_impl(data, indices, updates, reduction="mul")
    expect(
        node,
        inputs=[data, indices, updates],
        outputs=[output],
        name="test_scatternd_multiply",
    )

**_scatternd_max**

::

    node = onnx.helper.make_node(
        "ScatterND",
        inputs=["data", "indices", "updates"],
        outputs=["y"],
        reduction="max",
    )
    data = np.array(
        [
            [[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
            [[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
            [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]],
            [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]],
        ],
        dtype=np.float32,
    )
    indices = np.array([[0], [0]], dtype=np.int64)
    updates = np.array(
        [
            [[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
            [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
        ],
        dtype=np.float32,
    )
    # Expecting output as np.array(
    #     [[[5, 5, 5, 5], [6, 6, 7, 8], [8, 7, 7, 7], [8, 8, 8, 8]],
    #      [[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
    #      [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
    #      [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]], dtype=np.float32)
    output = scatter_nd_impl(data, indices, updates, reduction="max")
    expect(
        node,
        inputs=[data, indices, updates],
        outputs=[output],
        name="test_scatternd_max",
    )

**_scatternd_min**

::

    node = onnx.helper.make_node(
        "ScatterND",
        inputs=["data", "indices", "updates"],
        outputs=["y"],
        reduction="min",
    )
    data = np.array(
        [
            [[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
            [[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
            [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]],
            [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]],
        ],
        dtype=np.float32,
    )
    indices = np.array([[0], [0]], dtype=np.int64)
    updates = np.array(
        [
            [[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
            [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
        ],
        dtype=np.float32,
    )
    # Expecting output as np.array(
    #     [[[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 3, 2, 1]],
    #      [[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
    #      [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
    #      [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]], dtype=np.float32)
    output = scatter_nd_impl(data, indices, updates, reduction="min")
    expect(
        node,
        inputs=[data, indices, updates],
        outputs=[output],
        name="test_scatternd_min",
    )

**Differences**
Compared to ScatterND-16, ScatterND-18 adds max and min to the set of allowed
reduction operations: the separate pseudocode blocks for `reduction` set to "add"
and "mul" are replaced by a single block parameterized over a reduction function
`f` (one of +, *, max, min), the summary notes the opset 18 change, and the
**reduction** attribute now lists none (default), add, mul, max, min.
.. _l-onnx-op-scatternd-16:

ScatterND - 16
==============

**Version**

* **name**: `ScatterND (GitHub) `_
* **domain**: **main**
* **since_version**: **16**
* **function**: False
* **support_level**: SupportType.COMMON
* **shape inference**: True

This version of the operator has been available **since version 16**.

**Summary**

ScatterND takes three inputs: a `data` tensor of rank r >= 1, an `indices` tensor of rank q >= 1,
and an `updates` tensor of rank q + r - indices.shape[-1] - 1. The output of the operation
is produced by creating a copy of the input `data` and then updating its values to the values
specified by `updates` at the index positions specified by `indices`. Its output shape
is the same as the shape of `data`.

`indices` is an integer tensor. Let k denote indices.shape[-1], the last dimension in the shape of `indices`.
`indices` is treated as a (q-1)-dimensional tensor of k-tuples, where each k-tuple is a partial index into `data`.
Hence, k can be at most the rank of `data`. When k equals rank(data), each update entry specifies an
update to a single element of the tensor. When k is less than rank(data), each update entry specifies an
update to a slice of the tensor. Index values are allowed to be negative, following the usual
convention of counting backwards from the end, but they are expected to be in the valid range.

`updates` is treated as a (q-1)-dimensional tensor of replacement-slice-values. Thus, the
first (q-1) dimensions of updates.shape must match the first (q-1) dimensions of indices.shape.
The remaining dimensions of `updates` correspond to the dimensions of the
replacement-slice-values. Each replacement-slice-value is an (r-k)-dimensional tensor,
corresponding to the trailing (r-k) dimensions of `data`. Thus, the shape of `updates`
must equal indices.shape[0:q-1] ++ data.shape[k:r-1], where ++ denotes the concatenation
of shapes.
The `output` is calculated via the following equation:

::

    output = np.copy(data)
    update_indices = indices.shape[:-1]
    for idx in np.ndindex(update_indices):
        output[indices[idx]] = updates[idx]

The order of iteration in the above loop is not specified. In particular, `indices` should not
have duplicate entries: that is, if idx1 != idx2, then indices[idx1] != indices[idx2]. This
ensures that the output value does not depend on the iteration order.

`reduction` allows specification of an optional reduction operation, which is applied to all
values in the `updates` tensor into `output` at the specified `indices`. In cases where
`reduction` is set to "none", `indices` should not have duplicate entries: that is, if
idx1 != idx2, then indices[idx1] != indices[idx2]. This ensures that the output value does not
depend on the iteration order. When `reduction` is set to "add", `output` is calculated as
follows:

::

    output = np.copy(data)
    update_indices = indices.shape[:-1]
    for idx in np.ndindex(update_indices):
        output[indices[idx]] += updates[idx]

When `reduction` is set to "mul", `output` is calculated as follows:

::

    output = np.copy(data)
    update_indices = indices.shape[:-1]
    for idx in np.ndindex(update_indices):
        output[indices[idx]] *= updates[idx]

This operator is the inverse of GatherND.
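The "add" pseudocode above can be sketched concretely in NumPy; the values here are illustrative. With a reduction, duplicate index tuples are well defined, and both updates accumulate into the same location:

```python
import numpy as np

data = np.array([1.0, 2.0, 3.0])
indices = np.array([[1], [1]], dtype=np.int64)  # duplicate entries, allowed with "add"
updates = np.array([10.0, 20.0])

# reduction="add": fold every update into the output instead of overwriting.
output = np.copy(data)
for idx in np.ndindex(indices.shape[:-1]):
    output[tuple(indices[idx])] += updates[idx]

print(output)  # [ 1. 32.  3.]  -- data[1] receives 2 + 10 + 20
```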
Example 1:

::

    data = [1, 2, 3, 4, 5, 6, 7, 8]
    indices = [[4], [3], [1], [7]]
    updates = [9, 10, 11, 12]
    output = [1, 11, 3, 10, 9, 6, 7, 12]

Example 2:

::

    data = [[[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
            [[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
            [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]],
            [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]]
    indices = [[0], [2]]
    updates = [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
               [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]]]
    output = [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
              [[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
              [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
              [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]]

**Attributes**

* **reduction**:
  Type of reduction to apply: none (default), add, mul. 'none': no
  reduction applied. 'add': reduction using the addition operation.
  'mul': reduction using the multiplication operation. Default value
  is ``'none'``.

**Inputs**

* **data** (heterogeneous) - **T**:
  Tensor of rank r >= 1.
* **indices** (heterogeneous) - **tensor(int64)**:
  Tensor of rank q >= 1.
* **updates** (heterogeneous) - **T**:
  Tensor of rank q + r - indices_shape[-1] - 1.

**Outputs**

* **output** (heterogeneous) - **T**:
  Tensor of rank r >= 1.

**Type Constraints**

* **T** in (
  tensor(bfloat16),
  tensor(bool),
  tensor(complex128),
  tensor(complex64),
  tensor(double),
  tensor(float),
  tensor(float16),
  tensor(int16),
  tensor(int32),
  tensor(int64),
  tensor(int8),
  tensor(string),
  tensor(uint16),
  tensor(uint32),
  tensor(uint64),
  tensor(uint8)
  ):
  Constrain input and output types to any tensor type.

**Differences**
00ScatterND takes three inputs data tensor of rank r >= 1, indices tensor of rank q >= 1,ScatterND takes three inputs data tensor of rank r >= 1, indices tensor of rank q >= 1,
11and updates tensor of rank q + r - indices.shape[-1] - 1. The output of the operationand updates tensor of rank q + r - indices.shape[-1] - 1. The output of the operation
22is produced by creating a copy of the input data, and then updating its value to valuesis produced by creating a copy of the input data, and then updating its value to values
33specified by updates at specific index positions specified by indices. Its output shapespecified by updates at specific index positions specified by indices. Its output shape
44is the same as the shape of data. Note that indices should not have duplicate entries.is the same as the shape of data.
5That is, two or more updates for the same index-location is not supported.
65
76indices is an integer tensor. Let k denote indices.shape[-1], the last dimension in the shape of indices.indices is an integer tensor. Let k denote indices.shape[-1], the last dimension in the shape of indices.
87 indices is treated as a (q-1)-dimensional tensor of k-tuples, where each k-tuple is a partial-index into data. indices is treated as a (q-1)-dimensional tensor of k-tuples, where each k-tuple is a partial-index into data.
98Hence, k can be a value at most the rank of data. When k equals rank(data), each update entry specifies anHence, k can be a value at most the rank of data. When k equals rank(data), each update entry specifies an
109update to a single element of the tensor. When k is less than rank(data) each update entry specifies anupdate to a single element of the tensor. When k is less than rank(data) each update entry specifies an
1110update to a slice of the tensor. Index values are allowed to be negative, as per the usualupdate to a slice of the tensor. Index values are allowed to be negative, as per the usual
1211convention for counting backwards from the end, but are expected in the valid range.convention for counting backwards from the end, but are expected in the valid range.
1312
1413updates is treated as a (q-1)-dimensional tensor of replacement-slice-values. Thus, theupdates is treated as a (q-1)-dimensional tensor of replacement-slice-values. Thus, the
1514first (q-1) dimensions of updates.shape must match the first (q-1) dimensions of indices.shape.first (q-1) dimensions of updates.shape must match the first (q-1) dimensions of indices.shape.
1615The remaining dimensions of updates correspond to the dimensions of theThe remaining dimensions of updates correspond to the dimensions of the
1716replacement-slice-values. Each replacement-slice-value is a (r-k) dimensional tensor,replacement-slice-values. Each replacement-slice-value is a (r-k) dimensional tensor,
1817corresponding to the trailing (r-k) dimensions of data. Thus, the shape of updatescorresponding to the trailing (r-k) dimensions of data. Thus, the shape of updates
1918must equal indices.shape[0:q-1] ++ data.shape[k:r-1], where ++ denotes the concatenationmust equal indices.shape[0:q-1] ++ data.shape[k:r-1], where ++ denotes the concatenation
2019of shapes.of shapes.
2120
2221The output is calculated via the following equation:The output is calculated via the following equation:
23
2422 output = np.copy(data) output = np.copy(data)
2523 update_indices = indices.shape[:-1] update_indices = indices.shape[:-1]
2624 for idx in np.ndindex(update_indices): for idx in np.ndindex(update_indices):
2725 output[indices[idx]] = updates[idx] output[indices[idx]] = updates[idx]
28
2926The order of iteration in the above loop is not specified.The order of iteration in the above loop is not specified.
3027In particular, indices should not have duplicate entries: that is, if idx1 != idx2, then indices[idx1] != indices[idx2].In particular, indices should not have duplicate entries: that is, if idx1 != idx2, then indices[idx1] != indices[idx2].
3128This ensures that the output value does not depend on the iteration order.This ensures that the output value does not depend on the iteration order.
3229
3330This operator is the inverse of GatherND.reduction allows specification of an optional reduction operation, which is applied to all values in updates
34
3531Example 1:tensor into output at the specified indices.
36::
37
3832 data = [1, 2, 3, 4, 5, 6, 7, 8]In cases where reduction is set to "none", indices should not have duplicate entries: that is, if idx1 != idx2,
3933 indices = [[4], [3], [1], [7]]then indices[idx1] != indices[idx2]. This ensures that the output value does not depend on the iteration order.
4034 updates = [9, 10, 11, 12]When reduction is set to "add", output is calculated as follows:
35 output = np.copy(data)
36 update_indices = indices.shape[:-1]
37 for idx in np.ndindex(update_indices):
38 output[indices[idx]] += updates[idx]
39When reduction is set to "mul", output is calculated as follows:
40 output = np.copy(data)
41 update_indices = indices.shape[:-1]
42 for idx in np.ndindex(update_indices):
43 output[indices[idx]] *= updates[idx]
44This operator is the inverse of GatherND.
45Example 1:
46::
47
48 data = [1, 2, 3, 4, 5, 6, 7, 8]
49 indices = [[4], [3], [1], [7]]
50 updates = [9, 10, 11, 12]
4151 output = [1, 11, 3, 10, 9, 6, 7, 12] output = [1, 11, 3, 10, 9, 6, 7, 12]
4252
4353Example 2:Example 2:
4454::::
4555
4656 data = [[[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]], data = [[[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
4757 [[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]], [[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
4858 [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]], [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]],
4959 [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]] [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]]
5060 indices = [[0], [2]] indices = [[0], [2]]
5161 updates = [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]], updates = [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
5262 [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]]] [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]]]
5363 output = [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]], output = [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
5464 [[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]], [[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
5565 [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]], [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
5666 [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]] [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]]
5767
68**Attributes**
69
70* **reduction**:
71 Type of reduction to apply: none (default), add, mul. 'none': no
72 reduction applied. 'add': reduction using the addition operation.
73 'mul': reduction using the multiplication operation. Default value is 'none'.
74
**Inputs**

* **data** (heterogeneous) - **T**:
  Tensor of rank r >= 1.
* **indices** (heterogeneous) - **tensor(int64)**:
  Tensor of rank q >= 1.
* **updates** (heterogeneous) - **T**:
  Tensor of rank q + r - indices_shape[-1] - 1.

**Outputs**

* **output** (heterogeneous) - **T**:
  Tensor of rank r >= 1.

**Type Constraints**

* **T** in (
  tensor(bfloat16),
  tensor(bool),
  tensor(complex128),
  tensor(complex64),
  tensor(double),
  tensor(float),
  tensor(float16),
  tensor(int16),
  tensor(int32),
  tensor(int64),
  tensor(int8),
  tensor(string),
  tensor(uint16),
  tensor(uint32),
  tensor(uint64),
  tensor(uint8)
  ):
  Constrain input and output types to any tensor type.
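The summary states that ScatterND is the inverse of GatherND. A small NumPy sketch (both helper names are ours, and this `gather_nd` only covers the q = 2 case used here) illustrates the round trip for k < rank(data): gathering some slices and scattering them back at the same indices leaves the tensor unchanged.

```python
import numpy as np

def gather_nd(data, indices):
    # Look up one element or slice of `data` per k-tuple (q = 2 case only).
    return np.array([data[tuple(i)] for i in indices.reshape(-1, indices.shape[-1])])

def scatter_nd(data, indices, updates):
    # Reference ScatterND with reduction="none".
    output = np.copy(data)
    for idx in np.ndindex(indices.shape[:-1]):
        output[tuple(indices[idx])] = updates[idx]
    return output

data = np.arange(24).reshape(2, 3, 4)
indices = np.array([[0, 1], [1, 2]])      # k = 2 < rank(data) = 3: row slices
gathered = gather_nd(data, indices)       # shape (2, 4)
roundtrip = scatter_nd(data, indices, gathered)
assert np.array_equal(roundtrip, data)    # scatter undoes gather
```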
.. _l-onnx-op-scatternd-13:

ScatterND - 13
==============

**Version**

* **name**: `ScatterND (GitHub) `_
* **domain**: **main**
* **since_version**: **13**
* **function**: False
* **support_level**: SupportType.COMMON
* **shape inference**: True

This version of the operator has been available **since version 13**.

**Summary**

ScatterND takes three inputs `data` tensor of rank r >= 1, `indices` tensor of rank q >= 1,
and `updates` tensor of rank q + r - indices.shape[-1] - 1. The output of the operation
is produced by creating a copy of the input `data`, and then updating its values at the
index positions specified by `indices` to the values specified by `updates`. Its output
shape is the same as the shape of `data`. Note that `indices` should not have duplicate
entries: two or more `updates` for the same index location are not supported.

`indices` is an integer tensor. Let k denote indices.shape[-1], the last dimension in the
shape of `indices`. `indices` is treated as a (q-1)-dimensional tensor of k-tuples, where
each k-tuple is a partial index into `data`. Hence, k can be at most the rank of `data`.
When k equals rank(data), each update entry specifies an update to a single element of the
tensor. When k is less than rank(data), each update entry specifies an update to a slice
of the tensor. Index values are allowed to be negative, as per the usual convention for
counting backwards from the end, but are expected to be in the valid range.

`updates` is treated as a (q-1)-dimensional tensor of replacement-slice-values. Thus, the
first (q-1) dimensions of updates.shape must match the first (q-1) dimensions of
indices.shape. The remaining dimensions of `updates` correspond to the dimensions of the
replacement-slice-values. Each replacement-slice-value is an (r-k)-dimensional tensor,
corresponding to the trailing (r-k) dimensions of `data`.
Thus, the shape of `updates` must equal indices.shape[0:q-1] ++ data.shape[k:r-1], where
++ denotes the concatenation of shapes.

The `output` is calculated via the following equation:

::

    output = np.copy(data)
    update_indices = indices.shape[:-1]
    for idx in np.ndindex(update_indices):
        output[indices[idx]] = updates[idx]

The order of iteration in the above loop is not specified. In particular, indices should
not have duplicate entries: that is, if idx1 != idx2, then indices[idx1] != indices[idx2].
This ensures that the output value does not depend on the iteration order.

This operator is the inverse of GatherND.

Example 1:

::

    data = [1, 2, 3, 4, 5, 6, 7, 8]
    indices = [[4], [3], [1], [7]]
    updates = [9, 10, 11, 12]
    output = [1, 11, 3, 10, 9, 6, 7, 12]

Example 2:

::

    data = [[[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
            [[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
            [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]],
            [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]]
    indices = [[0], [2]]
    updates = [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
               [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]]]
    output = [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
              [[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
              [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
              [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]]

**Inputs**

* **data** (heterogeneous) - **T**:
  Tensor of rank r >= 1.
* **indices** (heterogeneous) - **tensor(int64)**:
  Tensor of rank q >= 1.
* **updates** (heterogeneous) - **T**:
  Tensor of rank q + r - indices_shape[-1] - 1.

**Outputs**

* **output** (heterogeneous) - **T**:
  Tensor of rank r >= 1.
**Type Constraints**

* **T** in (
  tensor(bfloat16),
  tensor(bool),
  tensor(complex128),
  tensor(complex64),
  tensor(double),
  tensor(float),
  tensor(float16),
  tensor(int16),
  tensor(int32),
  tensor(int64),
  tensor(int8),
  tensor(string),
  tensor(uint16),
  tensor(uint32),
  tensor(uint64),
  tensor(uint8)
  ):
  Constrain input and output types to any tensor type.

**Differences**
The only change from ScatterND-11 is in the type constraints: **T** gains
``tensor(bfloat16)``. The summary text, inputs, outputs, and the remaining
allowed types are identical.
.. _l-onnx-op-scatternd-11:

ScatterND - 11
==============

**Version**

* **name**: `ScatterND (GitHub) `_
* **domain**: **main**
* **since_version**: **11**
* **function**: False
* **support_level**: SupportType.COMMON
* **shape inference**: True

This version of the operator has been available **since version 11**.

**Summary**

ScatterND takes three inputs `data` tensor of rank r >= 1, `indices` tensor of rank q >= 1,
and `updates` tensor of rank q + r - indices.shape[-1] - 1. The output of the operation
is produced by creating a copy of the input `data`, and then updating its values at the
index positions specified by `indices` to the values specified by `updates`. Its output
shape is the same as the shape of `data`. Note that `indices` should not have duplicate
entries: two or more `updates` for the same index location are not supported.

`indices` is an integer tensor. Let k denote indices.shape[-1], the last dimension in the
shape of `indices`. `indices` is treated as a (q-1)-dimensional tensor of k-tuples, where
each k-tuple is a partial index into `data`. Hence, k can be at most the rank of `data`.
When k equals rank(data), each update entry specifies an update to a single element of the
tensor. When k is less than rank(data), each update entry specifies an update to a slice
of the tensor. Index values are allowed to be negative, as per the usual convention for
counting backwards from the end, but are expected to be in the valid range.

`updates` is treated as a (q-1)-dimensional tensor of replacement-slice-values. Thus, the
first (q-1) dimensions of updates.shape must match the first (q-1) dimensions of
indices.shape. The remaining dimensions of `updates` correspond to the dimensions of the
replacement-slice-values. Each replacement-slice-value is an (r-k)-dimensional tensor,
corresponding to the trailing (r-k) dimensions of `data`.
Thus, the shape of `updates` must equal indices.shape[0:q-1] ++ data.shape[k:r-1], where
++ denotes the concatenation of shapes.

The `output` is calculated via the following equation:

::

    output = np.copy(data)
    update_indices = indices.shape[:-1]
    for idx in np.ndindex(update_indices):
        output[indices[idx]] = updates[idx]

The order of iteration in the above loop is not specified. In particular, indices should
not have duplicate entries: that is, if idx1 != idx2, then indices[idx1] != indices[idx2].
This ensures that the output value does not depend on the iteration order.

This operator is the inverse of GatherND.

Example 1:

::

    data = [1, 2, 3, 4, 5, 6, 7, 8]
    indices = [[4], [3], [1], [7]]
    updates = [9, 10, 11, 12]
    output = [1, 11, 3, 10, 9, 6, 7, 12]

Example 2:

::

    data = [[[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
            [[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
            [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]],
            [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]]
    indices = [[0], [2]]
    updates = [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
               [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]]]
    output = [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
              [[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
              [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
              [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]]

**Inputs**

* **data** (heterogeneous) - **T**:
  Tensor of rank r >= 1.
* **indices** (heterogeneous) - **tensor(int64)**:
  Tensor of rank q >= 1.
* **updates** (heterogeneous) - **T**:
  Tensor of rank q + r - indices_shape[-1] - 1.

**Outputs**

* **output** (heterogeneous) - **T**:
  Tensor of rank r >= 1.
**Type Constraints**

* **T** in (
  tensor(bool),
  tensor(complex128),
  tensor(complex64),
  tensor(double),
  tensor(float),
  tensor(float16),
  tensor(int16),
  tensor(int32),
  tensor(int64),
  tensor(int8),
  tensor(string),
  tensor(uint16),
  tensor(uint32),
  tensor(uint64),
  tensor(uint8)
  ):
  Constrain input and output types to any tensor type.
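The shape rule stated in the summary can be checked on a concrete case. In the sketch below (all names are local to the sketch), ``data.shape[k:r-1]`` is read as the trailing (r-k) dimensions of `data`, i.e. an inclusive upper bound, matching the statement that each replacement-slice-value is an (r-k)-dimensional tensor:

```python
import numpy as np

data = np.zeros((4, 5, 6))                     # r = 3
indices = np.zeros((2, 3, 2), dtype=np.int64)  # q = 3, k = indices.shape[-1] = 2
k = indices.shape[-1]

# updates.shape = indices.shape[0:q-1] ++ trailing (r-k) dims of data
expected_updates_shape = indices.shape[:-1] + data.shape[k:]
assert expected_updates_shape == (2, 3, 6)

# rank(updates) = q + r - indices.shape[-1] - 1 = 3 + 3 - 2 - 1 = 3
assert len(expected_updates_shape) == indices.ndim + data.ndim - k - 1
```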