.. _l-onnx-doc-ReduceLogSumExp:

===============
ReduceLogSumExp
===============

.. contents::
    :local:

.. _l-onnx-op-reducelogsumexp-13:

ReduceLogSumExp - 13
====================

**Version**

* **name**: ReduceLogSumExp (GitHub)
* **domain**: **main**
* **since_version**: **13**
* **function**: False
* **support_level**: SupportType.COMMON
* **shape inference**: True

This version of the operator has been available **since version 13**.

**Summary**

Computes the log sum exponent of the input tensor's elements along the
provided axes. The resulting tensor has the same rank as the input if
keepdims equals 1. If keepdims equals 0, then the resulting tensor has the
reduced dimension pruned.

The above behavior is similar to numpy, with the exception that numpy
defaults keepdims to False instead of True.

**Attributes**

* **axes**:
  A list of integers, along which to reduce. The default is to reduce
  over all the dimensions of the input tensor. Accepted range is [-r,
  r-1] where r = rank(data).
* **keepdims**:
  Keep the reduced dimension or not, default 1 means keep reduced
  dimension. Default value is ``1``.

**Inputs**

* **data** (heterogeneous) - **T**:
  An input tensor.

**Outputs**

* **reduced** (heterogeneous) - **T**:
  Reduced output tensor.

**Type Constraints**

* **T** in (
  tensor(bfloat16),
  tensor(double),
  tensor(float),
  tensor(float16),
  tensor(int32),
  tensor(int64),
  tensor(uint32),
  tensor(uint64)
  ):
  Constrain input and output types to high-precision numeric tensors.
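The keepdims semantics described in the summary can be checked with a small
numpy sketch (numpy's reductions mirror the operator's behavior here; this is
an illustration, not the ONNX reference implementation):

```python
import numpy as np

data = np.array([[1.0, 2.0], [3.0, 4.0]])

# keepdims=1: the reduced axis is kept with size 1, so rank is preserved.
kept = np.log(np.sum(np.exp(data), axis=1, keepdims=True))
print(kept.shape)    # (2, 1)

# keepdims=0: the reduced axis is pruned from the result.
pruned = np.log(np.sum(np.exp(data), axis=1, keepdims=False))
print(pruned.shape)  # (2,)
```

Note that ONNX defaults keepdims to 1, the opposite of numpy's default.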
**Examples**

**_do_not_keepdims**

::

    shape = [3, 2, 2]
    axes = [1]
    keepdims = 0
    node = onnx.helper.make_node(
        'ReduceLogSumExp',
        inputs=['data'],
        outputs=['reduced'],
        axes=axes,
        keepdims=keepdims)

    data = np.array(
        [[[5, 1], [20, 2]], [[30, 1], [40, 2]], [[55, 1], [60, 2]]],
        dtype=np.double)
    reduced = np.log(np.sum(
        np.exp(data), axis=tuple(axes), keepdims=keepdims == 1))
    # print(reduced)
    # [[20., 2.31326175]
    #  [40.00004578, 2.31326175]
    #  [60.00671387, 2.31326175]]
    expect(node, inputs=[data], outputs=[reduced],
           name='test_reduce_log_sum_exp_do_not_keepdims_example')

    np.random.seed(0)
    data = np.random.uniform(-10, 10, shape).astype(np.double)
    reduced = np.log(np.sum(
        np.exp(data), axis=tuple(axes), keepdims=keepdims == 1))
    expect(node, inputs=[data], outputs=[reduced],
           name='test_reduce_log_sum_exp_do_not_keepdims_random')

**_keepdims**

::

    shape = [3, 2, 2]
    axes = [1]
    keepdims = 1
    node = onnx.helper.make_node(
        'ReduceLogSumExp',
        inputs=['data'],
        outputs=['reduced'],
        axes=axes,
        keepdims=keepdims)

    data = np.array(
        [[[5, 1], [20, 2]], [[30, 1], [40, 2]], [[55, 1], [60, 2]]],
        dtype=np.double)
    reduced = np.log(np.sum(
        np.exp(data), axis=tuple(axes), keepdims=keepdims == 1))
    # print(reduced)
    # [[[20., 2.31326175]]
    #  [[40.00004578, 2.31326175]]
    #  [[60.00671387, 2.31326175]]]
    expect(node, inputs=[data], outputs=[reduced],
           name='test_reduce_log_sum_exp_keepdims_example')

    np.random.seed(0)
    data = np.random.uniform(-10, 10, shape).astype(np.double)
    reduced = np.log(np.sum(
        np.exp(data), axis=tuple(axes), keepdims=keepdims == 1))
    expect(node, inputs=[data], outputs=[reduced],
           name='test_reduce_log_sum_exp_keepdims_random')

**_default_axes_keepdims**

::

    shape = [3, 2, 2]
    axes = None
    keepdims = 1
    node = onnx.helper.make_node(
        'ReduceLogSumExp',
        inputs=['data'],
        outputs=['reduced'],
        keepdims=keepdims)

    data = np.array(
        [[[5, 1], [20, 2]], [[30, 1], [40, 2]], [[55, 1], [60, 2]]],
        dtype=np.double)
    reduced = np.log(np.sum(
        np.exp(data), axis=axes, keepdims=keepdims == 1))
    # print(reduced)
    # [[[60.00671387]]]
    expect(node, inputs=[data], outputs=[reduced],
           name='test_reduce_log_sum_exp_default_axes_keepdims_example')

    np.random.seed(0)
    data = np.random.uniform(-10, 10, shape).astype(np.double)
    reduced = np.log(np.sum(
        np.exp(data), axis=axes, keepdims=keepdims == 1))
    expect(node, inputs=[data], outputs=[reduced],
           name='test_reduce_log_sum_exp_default_axes_keepdims_random')

**_negative_axes_keepdims**

::

    shape = [3, 2, 2]
    axes = [-2]
    keepdims = 1
    node = onnx.helper.make_node(
        'ReduceLogSumExp',
        inputs=['data'],
        outputs=['reduced'],
        axes=axes,
        keepdims=keepdims)

    data = np.array(
        [[[5, 1], [20, 2]], [[30, 1], [40, 2]], [[55, 1], [60, 2]]],
        dtype=np.double)
    reduced = np.log(np.sum(
        np.exp(data), axis=tuple(axes), keepdims=keepdims == 1))
    # print(reduced)
    # [[[20., 2.31326175]]
    #  [[40.00004578, 2.31326175]]
    #  [[60.00671387, 2.31326175]]]
    expect(node, inputs=[data], outputs=[reduced],
           name='test_reduce_log_sum_exp_negative_axes_keepdims_example')

    np.random.seed(0)
    data = np.random.uniform(-10, 10, shape).astype(np.double)
    reduced = np.log(np.sum(
        np.exp(data), axis=tuple(axes), keepdims=keepdims == 1))
    expect(node, inputs=[data], outputs=[reduced],
           name='test_reduce_log_sum_exp_negative_axes_keepdims_random')

**Differences**

Compared to version 11, version 13 adds ``tensor(bfloat16)`` to the type
constraint **T** and fixes the wording of the summary ("If keepdims equals 0,
then the resulting tensor has the reduced dimension pruned").
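As a side note, the straightforward ``np.log(np.sum(np.exp(data)))`` used in
the test snippets above can overflow to ``inf`` for large inputs (around 710
and above in double precision). A sketch of the standard max-shift trick that
a backend might use; the function name is ours, not an ONNX API:

```python
import numpy as np

def stable_logsumexp(data, axes, keepdims=True):
    """Numerically stable log-sum-exp over the given axes (illustrative only)."""
    axes = tuple(axes)
    m = np.max(data, axis=axes, keepdims=True)   # shift by the per-slice max...
    shifted = np.exp(data - m)                   # ...so exp() cannot overflow
    out = m + np.log(np.sum(shifted, axis=axes, keepdims=True))
    if not keepdims:
        out = np.squeeze(out, axis=axes)
    return out

big = np.array([[1000.0, 1000.0]])
print(stable_logsumexp(big, [1]))  # ~[[1000.693]]; the naive form returns inf
```

The shift is exact: log sum exp(x) = m + log sum exp(x - m) for any m, and
choosing m as the maximum keeps every exponent at or below zero.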
.. _l-onnx-op-reducelogsumexp-11:

ReduceLogSumExp - 11
====================

**Version**

* **name**: ReduceLogSumExp (GitHub)
* **domain**: **main**
* **since_version**: **11**
* **function**: False
* **support_level**: SupportType.COMMON
* **shape inference**: True

This version of the operator has been available **since version 11**.

**Summary**

Computes the log sum exponent of the input tensor's elements along the
provided axes. The resulting tensor has the same rank as the input if
keepdims equals 1. If keepdims equals 0, then the resulting tensor has the
reduced dimension pruned.

The above behavior is similar to numpy, with the exception that numpy
defaults keepdims to False instead of True.

**Attributes**

* **axes**:
  A list of integers, along which to reduce. The default is to reduce
  over all the dimensions of the input tensor. Accepted range is [-r,
  r-1] where r = rank(data).
* **keepdims**:
  Keep the reduced dimension or not, default 1 means keep reduced
  dimension. Default value is ``1``.

**Inputs**

* **data** (heterogeneous) - **T**:
  An input tensor.

**Outputs**

* **reduced** (heterogeneous) - **T**:
  Reduced output tensor.

**Type Constraints**

* **T** in (
  tensor(double),
  tensor(float),
  tensor(float16),
  tensor(int32),
  tensor(int64),
  tensor(uint32),
  tensor(uint64)
  ):
  Constrain input and output types to high-precision numeric tensors.

**Differences**

Compared to version 1, version 11 adds the accepted range "[-r, r-1] where
r = rank(data)" to the **axes** attribute, allowing negative axis values.
.. _l-onnx-op-reducelogsumexp-1:

ReduceLogSumExp - 1
===================

**Version**

* **name**: ReduceLogSumExp (GitHub)
* **domain**: **main**
* **since_version**: **1**
* **function**: False
* **support_level**: SupportType.COMMON
* **shape inference**: True

This version of the operator has been available **since version 1**.

**Summary**

Computes the log sum exponent of the input tensor's elements along the
provided axes. The resulting tensor has the same rank as the input if
keepdims equals 1. If keepdims equals 0, then the resulting tensor has the
reduced dimension pruned.

The above behavior is similar to numpy, with the exception that numpy
defaults keepdims to False instead of True.

**Attributes**

* **axes**:
  A list of integers, along which to reduce. The default is to reduce
  over all the dimensions of the input tensor.
* **keepdims**:
  Keep the reduced dimension or not, default 1 means keep reduced
  dimension. Default value is ``1``.

**Inputs**

* **data** (heterogeneous) - **T**:
  An input tensor.

**Outputs**

* **reduced** (heterogeneous) - **T**:
  Reduced output tensor.

**Type Constraints**

* **T** in (
  tensor(double),
  tensor(float),
  tensor(float16),
  tensor(int32),
  tensor(int64),
  tensor(uint32),
  tensor(uint64)
  ):
  Constrain input and output types to high-precision numeric tensors.
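Across all opset versions, omitting **axes** reduces over every dimension of
the input. A minimal numpy sketch of these default semantics (the function
and its attribute handling are our illustration, not onnx code):

```python
import numpy as np

def reduce_logsumexp(data, axes=None, keepdims=1):
    # axes=None means reduce over all dimensions, as in every opset version;
    # keepdims defaults to 1, matching the ONNX attribute default.
    axis = tuple(axes) if axes is not None else None
    return np.log(np.sum(np.exp(data), axis=axis, keepdims=keepdims == 1))

x = np.array([[[5.0, 1.0], [20.0, 2.0]]])          # shape (1, 2, 2)
full = reduce_logsumexp(x)                         # all axes reduced, shape (1, 1, 1)
one = reduce_logsumexp(x, axes=[-2], keepdims=0)   # axis 1 reduced, shape (1, 2)
print(full.shape, one.shape)
```

Negative axes, as in the last call, are only valid from opset 11 onward.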