.. _l-onnx-doc-SpaceToDepth:

============
SpaceToDepth
============

.. contents::
    :local:

.. _l-onnx-op-spacetodepth-13:

SpaceToDepth - 13
=================

**Version**

* **name**: `SpaceToDepth (GitHub) `_
* **domain**: **main**
* **since_version**: **13**
* **function**: False
* **support_level**: SupportType.COMMON
* **shape inference**: True

This version of the operator has been available **since version 13**.

**Summary**

SpaceToDepth rearranges blocks of spatial data into depth. More specifically,
this op outputs a copy of the input tensor where values from the height and
width dimensions are moved to the depth dimension.

**Attributes**

* **blocksize** (required):
  Blocks of [blocksize, blocksize] are moved.

**Inputs**

* **input** (heterogeneous) - **T**:
  Input tensor of [N, C, H, W], where N is the batch axis, C is the channel
  or depth, H is the height and W is the width.

**Outputs**

* **output** (heterogeneous) - **T**:
  Output tensor of [N, C * blocksize * blocksize, H/blocksize, W/blocksize].

**Type Constraints**

* **T** in (
  tensor(bfloat16),
  tensor(bool),
  tensor(complex128),
  tensor(complex64),
  tensor(double),
  tensor(float),
  tensor(float16),
  tensor(int16),
  tensor(int32),
  tensor(int64),
  tensor(int8),
  tensor(string),
  tensor(uint16),
  tensor(uint32),
  tensor(uint64),
  tensor(uint8)
  ):
  Constrain input and output types to all tensor types.
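The rearrangement described above can be sketched in plain NumPy. This is a
reference sketch, not the runtime implementation; the function name
``space_to_depth`` is ours, not part of the ONNX API:

::

    import numpy as np

    def space_to_depth(x: np.ndarray, blocksize: int) -> np.ndarray:
        """Reference sketch of SpaceToDepth for an [N, C, H, W] tensor."""
        n, c, h, w = x.shape
        assert h % blocksize == 0 and w % blocksize == 0
        # Split H and W into (blocks, blocksize) pairs...
        tmp = x.reshape(n, c, h // blocksize, blocksize,
                        w // blocksize, blocksize)
        # ...move the two blocksize axes in front of the channel axis...
        tmp = tmp.transpose(0, 3, 5, 1, 2, 4)
        # ...and fold them into the depth dimension.
        return tmp.reshape(n, c * blocksize ** 2,
                           h // blocksize, w // blocksize)

    x = np.arange(2 * 3 * 4 * 6, dtype=np.float32).reshape(2, 3, 4, 6)
    y = space_to_depth(x, 2)
    print(y.shape)  # (2, 12, 2, 3)

The output shape follows the formula above: C goes from 3 to
3 * 2 * 2 = 12 while H and W are each divided by the blocksize.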
**Examples**

**default**

::

    b, c, h, w = shape = (2, 2, 6, 6)
    blocksize = 2
    node = onnx.helper.make_node(
        'SpaceToDepth',
        inputs=['x'],
        outputs=['y'],
        blocksize=blocksize,
    )
    x = np.random.random_sample(shape).astype(np.float32)
    tmp = np.reshape(
        x, [b, c, h // blocksize, blocksize, w // blocksize, blocksize])
    tmp = np.transpose(tmp, [0, 3, 5, 1, 2, 4])
    y = np.reshape(
        tmp, [b, c * (blocksize**2), h // blocksize, w // blocksize])
    expect(node, inputs=[x], outputs=[y], name='test_spacetodepth')

**_example**

::

    node = onnx.helper.make_node(
        'SpaceToDepth',
        inputs=['x'],
        outputs=['y'],
        blocksize=2,
    )

    # (1, 1, 4, 6) input tensor
    x = np.array([[[[0, 6, 1, 7, 2, 8],
                    [12, 18, 13, 19, 14, 20],
                    [3, 9, 4, 10, 5, 11],
                    [15, 21, 16, 22, 17, 23]]]]).astype(np.float32)

    # (1, 4, 2, 3) output tensor
    y = np.array([[[[0, 1, 2],
                    [3, 4, 5]],
                   [[6, 7, 8],
                    [9, 10, 11]],
                   [[12, 13, 14],
                    [15, 16, 17]],
                   [[18, 19, 20],
                    [21, 22, 23]]]]).astype(np.float32)

    expect(node, inputs=[x], outputs=[y], name='test_spacetodepth_example')

**Differences**
The only change from version 1 is the addition of ``tensor(bfloat16)`` to the
**T** type constraint. The summary, attributes, inputs and outputs are
unchanged.
.. _l-onnx-op-spacetodepth-1:

SpaceToDepth - 1
================

**Version**

* **name**: `SpaceToDepth (GitHub) `_
* **domain**: **main**
* **since_version**: **1**
* **function**: False
* **support_level**: SupportType.COMMON
* **shape inference**: True

This version of the operator has been available **since version 1**.

**Summary**

SpaceToDepth rearranges blocks of spatial data into depth. More specifically,
this op outputs a copy of the input tensor where values from the height and
width dimensions are moved to the depth dimension.

**Attributes**

* **blocksize** (required):
  Blocks of [blocksize, blocksize] are moved.

**Inputs**

* **input** (heterogeneous) - **T**:
  Input tensor of [N, C, H, W], where N is the batch axis, C is the channel
  or depth, H is the height and W is the width.

**Outputs**

* **output** (heterogeneous) - **T**:
  Output tensor of [N, C * blocksize * blocksize, H/blocksize, W/blocksize].

**Type Constraints**

* **T** in (
  tensor(bool),
  tensor(complex128),
  tensor(complex64),
  tensor(double),
  tensor(float),
  tensor(float16),
  tensor(int16),
  tensor(int32),
  tensor(int64),
  tensor(int8),
  tensor(string),
  tensor(uint16),
  tensor(uint32),
  tensor(uint64),
  tensor(uint8)
  ):
  Constrain input and output types to all tensor types.
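Both versions compute the same rearrangement, so both can be checked against
the same invertibility property: unfolding depth back into the spatial axes
(the DepthToSpace direction) recovers the input exactly. A minimal NumPy
sketch; the helper names are ours, not part of the ONNX API:

::

    import numpy as np

    def space_to_depth(x: np.ndarray, b: int) -> np.ndarray:
        n, c, h, w = x.shape
        tmp = x.reshape(n, c, h // b, b, w // b, b).transpose(0, 3, 5, 1, 2, 4)
        return tmp.reshape(n, c * b * b, h // b, w // b)

    def depth_to_space(y: np.ndarray, b: int) -> np.ndarray:
        # Inverse rearrangement: unfold depth back into the spatial axes.
        n, cbb, hb, wb = y.shape
        c = cbb // (b * b)
        tmp = y.reshape(n, b, b, c, hb, wb).transpose(0, 3, 4, 1, 5, 2)
        return tmp.reshape(n, c, hb * b, wb * b)

    x = np.random.random_sample((2, 2, 6, 6)).astype(np.float32)
    roundtrip = depth_to_space(space_to_depth(x, 2), 2)
    print(np.array_equal(roundtrip, x))  # True

Since the op only permutes elements, the round trip is exact with no
floating-point error.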