# GatherND

## GatherND - 13

**Version**

• name: GatherND

• domain: main

• since_version: 13

• function: False

• support_level: SupportType.COMMON

• shape inference: True

This version of the operator has been available since version 13.

**Summary**

Given data tensor of rank r >= 1, indices tensor of rank q >= 1, and batch_dims integer b, this operator gathers slices of data into an output tensor of rank q + r - indices_shape[-1] - 1 - b.

indices is a q-dimensional integer tensor, best thought of as a (q-1)-dimensional tensor of index-tuples into data, where each element defines a slice of data.

batch_dims (denoted as b) is an integer indicating the number of batch dimensions, i.e., the leading b dimensions of the data and indices tensors represent batches, and the gather starts from the (b+1)-th dimension.

Some salient points about the inputs’ rank and shape:

1. r >= 1 and q >= 1 must hold. There is no dependency condition to be met between ranks r and q.

2. The first b dimensions of the shape of indices tensor and data tensor must be equal.

3. b < min(q, r) is to be honored.

4. indices_shape[-1] must have a value between 1 (inclusive) and r-b (inclusive).

5. All values in indices are expected to be within bounds [-s, s-1] along an axis of size s, i.e., -data_shape[i] <= indices[...,i] <= data_shape[i] - 1. It is an error if any index value is out of bounds.

The output is computed as follows:

The output tensor is obtained by mapping each index-tuple in the indices tensor to the corresponding slice of the input data.

1. If indices_shape[-1] > r-b => error condition

2. If indices_shape[-1] == r-b: since the rank of indices is q, indices can be thought of as N (q-b-1)-dimensional tensors containing 1-D tensors of dimension r-b, where N is the product of the sizes of the batch dimensions of indices_shape. Let us call each such (r-b)-length tensor an indices_slice. Each scalar value corresponding to data[0:b-1, indices_slice] is filled into the corresponding location of the (q-b-1)-dimensional tensor to form the output tensor (Example 1 below).

3. If indices_shape[-1] < r-b: since the rank of indices is q, indices can be thought of as N (q-b-1)-dimensional tensors containing 1-D tensors of dimension < r-b. Let us call each such tensor an indices_slice. Each tensor slice corresponding to data[0:b-1, indices_slice, :] is filled into the corresponding location of the (q-b-1)-dimensional tensor to form the output tensor (Examples 2, 3, 4 and 5 below).

This operator is the inverse of ScatterND.
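The rules above can be sketched as a small NumPy reference implementation. This `gather_nd` helper is a hypothetical sketch for illustration, not the official ONNX implementation; it assumes the inputs already satisfy the rank and shape constraints listed above:

```python
import numpy as np

def gather_nd(data, indices, batch_dims=0):
    # Hypothetical sketch of the GatherND rules above; not the official ONNX code.
    k = indices.shape[-1]
    assert 1 <= k <= data.ndim - batch_dims, "indices_shape[-1] out of range"
    assert data.shape[:batch_dims] == indices.shape[:batch_dims]

    # Collapse the b batch dimensions into a single leading axis of size N.
    batch_shape = data.shape[:batch_dims]
    n = int(np.prod(batch_shape))  # product of an empty tuple is 1
    data_flat = data.reshape((n,) + data.shape[batch_dims:])
    idx_flat = indices.reshape((n,) + indices.shape[batch_dims:])

    outputs = []
    for b in range(n):
        idx = idx_flat[b].reshape(-1, k)  # one k-tuple per gathered element/slice
        # NumPy tuple indexing handles negative indices in [-s, s-1] natively;
        # a full tuple (k == r-b) yields a scalar, a partial one yields a slice.
        gathered = np.stack([data_flat[b][tuple(t)] for t in idx])
        outputs.append(gathered.reshape(idx_flat[b].shape[:-1] + gathered.shape[1:]))
    out = np.stack(outputs)
    return out.reshape(batch_shape + out.shape[1:])
```

Running it on the examples below reproduces each listed output, e.g. Example 1 gives `[0, 3]` and Example 5 gives `[[2, 3], [4, 5]]`.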

Example 1

batch_dims = 0

data = [[0,1],[2,3]] # data_shape = [2, 2]

indices = [[0,0],[1,1]] # indices_shape = [2, 2]

output = [0,3] # output_shape = [2]

Example 2

batch_dims = 0

data = [[0,1],[2,3]] # data_shape = [2, 2]

indices = [[1],[0]] # indices_shape = [2, 1]

output = [[2,3],[0,1]] # output_shape = [2, 2]

Example 3

batch_dims = 0

data = [[[0,1],[2,3]],[[4,5],[6,7]]] # data_shape = [2, 2, 2]

indices = [[0,1],[1,0]] # indices_shape = [2, 2]

output = [[2,3],[4,5]] # output_shape = [2, 2]

Example 4

batch_dims = 0

data = [[[0,1],[2,3]],[[4,5],[6,7]]] # data_shape = [2, 2, 2]

indices = [[[0,1]],[[1,0]]] # indices_shape = [2, 1, 2]

output = [[[2,3]],[[4,5]]] # output_shape = [2, 1, 2]

Example 5

batch_dims = 1

data = [[[0,1],[2,3]],[[4,5],[6,7]]] # data_shape = [2, 2, 2]

indices = [[1],[0]] # indices_shape = [2, 1]

output = [[2,3],[4,5]] # output_shape = [2, 2]
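Example 5 can be checked directly in NumPy: with batch_dims = 1, each row of indices indexes only into the matching batch of data (an illustrative snippet, not official ONNX code):

```python
import numpy as np

data = np.array([[[0, 1], [2, 3]], [[4, 5], [6, 7]]])  # shape (2, 2, 2)
indices = np.array([[1], [0]])                         # shape (2, 1)

# With batch_dims = 1, indices[b] selects a slice from data[b] only.
out = np.stack([data[b][tuple(indices[b])] for b in range(data.shape[0])])
print(out)  # [[2 3] [4 5]], i.e. output_shape = [2, 2]
```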

**Attributes**

• batch_dims: The number of batch dimensions. Gathering starts at dimension batch_dims of data, i.e., indexing applies to data[batch_dims:]. Default value is 0.

**Inputs**

• data (heterogeneous) - T: Tensor of rank r >= 1.

• indices (heterogeneous) - tensor(int64): Tensor of rank q >= 1. All index values are expected to be within bounds [-s, s-1] along axis of size s. It is an error if any of the index values are out of bounds.

**Outputs**

• output (heterogeneous) - T: Tensor of rank q + r - indices_shape[-1] - 1.

**Type Constraints**

• T in ( tensor(bfloat16), tensor(bool), tensor(complex128), tensor(complex64), tensor(double), tensor(float), tensor(float16), tensor(int16), tensor(int32), tensor(int64), tensor(int8), tensor(string), tensor(uint16), tensor(uint32), tensor(uint64), tensor(uint8) ): Constrain input and output types to any tensor type.

**Examples**

**_int32**

```python
import numpy as np
import onnx

# gather_nd_impl and expect are helpers from the ONNX node-test utilities.
node = onnx.helper.make_node(
    "GatherND",
    inputs=["data", "indices"],
    outputs=["output"],
)

data = np.array([[0, 1], [2, 3]], dtype=np.int32)
indices = np.array([[0, 0], [1, 1]], dtype=np.int64)
output = gather_nd_impl(data, indices, 0)
expected_output = np.array([0, 3], dtype=np.int32)
assert np.array_equal(output, expected_output)
expect(
    node,
    inputs=[data, indices],
    outputs=[output],
    name="test_gathernd_example_int32",
)
```

**_float32**

```python
node = onnx.helper.make_node(
    "GatherND",
    inputs=["data", "indices"],
    outputs=["output"],
)

data = np.array([[[0, 1], [2, 3]], [[4, 5], [6, 7]]], dtype=np.float32)
indices = np.array([[[0, 1]], [[1, 0]]], dtype=np.int64)
output = gather_nd_impl(data, indices, 0)
expected_output = np.array([[[2, 3]], [[4, 5]]], dtype=np.float32)
assert np.array_equal(output, expected_output)
expect(
    node,
    inputs=[data, indices],
    outputs=[output],
    name="test_gathernd_example_float32",
)
```

**_int32_batchdim_1**

```python
node = onnx.helper.make_node(
    "GatherND",
    inputs=["data", "indices"],
    outputs=["output"],
    batch_dims=1,
)

data = np.array([[[0, 1], [2, 3]], [[4, 5], [6, 7]]], dtype=np.int32)
indices = np.array([[1], [0]], dtype=np.int64)
output = gather_nd_impl(data, indices, 1)
expected_output = np.array([[2, 3], [4, 5]], dtype=np.int32)
assert np.array_equal(output, expected_output)
expect(
    node,
    inputs=[data, indices],
    outputs=[output],
    name="test_gathernd_example_int32_batch_dim1",
)
```

**Differences with GatherND - 12**

The only change from GatherND - 12 is the addition of tensor(bfloat16) to the allowed types for T.

## GatherND - 12

**Version**

• name: GatherND

• domain: main

• since_version: 12

• function: False

• support_level: SupportType.COMMON

• shape inference: True

This version of the operator has been available since version 12.

**Summary**

Given data tensor of rank r >= 1, indices tensor of rank q >= 1, and batch_dims integer b, this operator gathers slices of data into an output tensor of rank q + r - indices_shape[-1] - 1 - b.

indices is a q-dimensional integer tensor, best thought of as a (q-1)-dimensional tensor of index-tuples into data, where each element defines a slice of data.

batch_dims (denoted as b) is an integer indicating the number of batch dimensions, i.e., the leading b dimensions of the data and indices tensors represent batches, and the gather starts from the (b+1)-th dimension.

Some salient points about the inputs’ rank and shape:

1. r >= 1 and q >= 1 must hold. There is no dependency condition to be met between ranks r and q.

2. The first b dimensions of the shape of indices tensor and data tensor must be equal.

3. b < min(q, r) is to be honored.

4. indices_shape[-1] must have a value between 1 (inclusive) and r-b (inclusive).

5. All values in indices are expected to be within bounds [-s, s-1] along an axis of size s, i.e., -data_shape[i] <= indices[...,i] <= data_shape[i] - 1. It is an error if any index value is out of bounds.

The output is computed as follows:

The output tensor is obtained by mapping each index-tuple in the indices tensor to the corresponding slice of the input data.

1. If indices_shape[-1] > r-b => error condition

2. If indices_shape[-1] == r-b: since the rank of indices is q, indices can be thought of as N (q-b-1)-dimensional tensors containing 1-D tensors of dimension r-b, where N is the product of the sizes of the batch dimensions of indices_shape. Let us call each such (r-b)-length tensor an indices_slice. Each scalar value corresponding to data[0:b-1, indices_slice] is filled into the corresponding location of the (q-b-1)-dimensional tensor to form the output tensor (Example 1 below).

3. If indices_shape[-1] < r-b: since the rank of indices is q, indices can be thought of as N (q-b-1)-dimensional tensors containing 1-D tensors of dimension < r-b. Let us call each such tensor an indices_slice. Each tensor slice corresponding to data[0:b-1, indices_slice, :] is filled into the corresponding location of the (q-b-1)-dimensional tensor to form the output tensor (Examples 2, 3, 4 and 5 below).

This operator is the inverse of ScatterND.

Example 1

batch_dims = 0

data = [[0,1],[2,3]] # data_shape = [2, 2]

indices = [[0,0],[1,1]] # indices_shape = [2, 2]

output = [0,3] # output_shape = [2]

Example 2

batch_dims = 0

data = [[0,1],[2,3]] # data_shape = [2, 2]

indices = [[1],[0]] # indices_shape = [2, 1]

output = [[2,3],[0,1]] # output_shape = [2, 2]

Example 3

batch_dims = 0

data = [[[0,1],[2,3]],[[4,5],[6,7]]] # data_shape = [2, 2, 2]

indices = [[0,1],[1,0]] # indices_shape = [2, 2]

output = [[2,3],[4,5]] # output_shape = [2, 2]

Example 4

batch_dims = 0

data = [[[0,1],[2,3]],[[4,5],[6,7]]] # data_shape = [2, 2, 2]

indices = [[[0,1]],[[1,0]]] # indices_shape = [2, 1, 2]

output = [[[2,3]],[[4,5]]] # output_shape = [2, 1, 2]

Example 5

batch_dims = 1

data = [[[0,1],[2,3]],[[4,5],[6,7]]] # data_shape = [2, 2, 2]

indices = [[1],[0]] # indices_shape = [2, 1]

output = [[2,3],[4,5]] # output_shape = [2, 2]

**Attributes**

• batch_dims: The number of batch dimensions. Gathering starts at dimension batch_dims of data, i.e., indexing applies to data[batch_dims:]. Default value is 0.

**Inputs**

• data (heterogeneous) - T: Tensor of rank r >= 1.

• indices (heterogeneous) - tensor(int64): Tensor of rank q >= 1. All index values are expected to be within bounds [-s, s-1] along axis of size s. It is an error if any of the index values are out of bounds.

**Outputs**

• output (heterogeneous) - T: Tensor of rank q + r - indices_shape[-1] - 1.

**Type Constraints**

• T in ( tensor(bool), tensor(complex128), tensor(complex64), tensor(double), tensor(float), tensor(float16), tensor(int16), tensor(int32), tensor(int64), tensor(int8), tensor(string), tensor(uint16), tensor(uint32), tensor(uint64), tensor(uint8) ): Constrain input and output types to any tensor type.

**Differences with GatherND - 11**

GatherND - 12 introduced the batch_dims attribute (default 0). Accordingly, the output rank formula became q + r - indices_shape[-1] - 1 - b, the constraints b < min(q, r) and equal leading batch dimensions of data and indices were added, indices_shape[-1] is now bounded by r-b instead of r, and Example 5 (batch_dims = 1) was added.

## GatherND - 11

**Version**

• name: GatherND

• domain: main

• since_version: 11

• function: False

• support_level: SupportType.COMMON

• shape inference: True

This version of the operator has been available since version 11.

**Summary**

Given data tensor of rank r >= 1, and indices tensor of rank q >= 1, this operator gathers slices of data into an output tensor of rank q + r - indices_shape[-1] - 1.

indices is a q-dimensional integer tensor, best thought of as a (q-1)-dimensional tensor of index-tuples into data, where each element defines a slice of data.

Some salient points about the inputs’ rank and shape:

1. r >= 1 and q >= 1 must hold. There is no dependency condition to be met between ranks r and q.

2. indices_shape[-1] must have a value between 1 (inclusive) and r (inclusive).

3. All values in indices are expected to be within bounds [-s, s-1] along an axis of size s, i.e., -data_shape[i] <= indices[...,i] <= data_shape[i] - 1. It is an error if any index value is out of bounds.

The output is computed as follows:

The output tensor is obtained by mapping each index-tuple in the indices tensor to the corresponding slice of the input data.

1. If indices_shape[-1] > r => error condition

2. If indices_shape[-1] == r: since the rank of indices is q, indices can be thought of as a (q-1)-dimensional tensor containing 1-D tensors of dimension r. Let us call each such r-length tensor an indices_slice. Each scalar value corresponding to data[indices_slice] is filled into the corresponding location of the (q-1)-dimensional tensor to form the output tensor (Example 1 below).

3. If indices_shape[-1] < r: since the rank of indices is q, indices can be thought of as a (q-1)-dimensional tensor containing 1-D tensors of dimension < r. Let us call each such tensor an indices_slice. Each tensor slice corresponding to data[indices_slice, :] is filled into the corresponding location of the (q-1)-dimensional tensor to form the output tensor (Examples 2, 3, and 4 below).

This operator is the inverse of ScatterND.

Example 1

data = [[0,1],[2,3]] # data_shape = [2, 2]

indices = [[0,0],[1,1]] # indices_shape = [2, 2]

output = [0,3] # output_shape = [2]

Example 2

data = [[0,1],[2,3]] # data_shape = [2, 2]

indices = [[1],[0]] # indices_shape = [2, 1]

output = [[2,3],[0,1]] # output_shape = [2, 2]

Example 3

data = [[[0,1],[2,3]],[[4,5],[6,7]]] # data_shape = [2, 2, 2]

indices = [[0,1],[1,0]] # indices_shape = [2, 2]

output = [[2,3],[4,5]] # output_shape = [2, 2]

Example 4

data = [[[0,1],[2,3]],[[4,5],[6,7]]] # data_shape = [2, 2, 2]

indices = [[[0,1]],[[1,0]]] # indices_shape = [2, 1, 2]

output = [[[2,3]],[[4,5]]] # output_shape = [2, 1, 2]
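Since this version has no batch dimensions, the examples reduce to plain NumPy advanced indexing: splitting the last axis of indices into per-axis index arrays reproduces Example 3, for instance (an illustrative snippet, not official ONNX code):

```python
import numpy as np

data = np.array([[[0, 1], [2, 3]], [[4, 5], [6, 7]]])  # data_shape = [2, 2, 2]
indices = np.array([[0, 1], [1, 0]])                   # indices_shape = [2, 2]

# Each row of indices is an index-tuple into the first two axes of data;
# transposing turns the rows into per-axis index arrays for NumPy.
out = data[tuple(indices.T)]
print(out)  # [[2 3] [4 5]], i.e. output_shape = [2, 2]
```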

**Inputs**

• data (heterogeneous) - T: Tensor of rank r >= 1.

• indices (heterogeneous) - tensor(int64): Tensor of rank q >= 1. All index values are expected to be within bounds [-s, s-1] along axis of size s. It is an error if any of the index values are out of bounds.

**Outputs**

• output (heterogeneous) - T: Tensor of rank q + r - indices_shape[-1] - 1.

**Type Constraints**

• T in ( tensor(bool), tensor(complex128), tensor(complex64), tensor(double), tensor(float), tensor(float16), tensor(int16), tensor(int32), tensor(int64), tensor(int8), tensor(string), tensor(uint16), tensor(uint32), tensor(uint64), tensor(uint8) ): Constrain input and output types to any tensor type.