.. _l-onnx-doc-Mul:

===
Mul
===

.. contents::
    :local:

.. _l-onnx-op-mul-14:

Mul - 14
========

**Version**

* **name**: `Mul (GitHub) `_
* **domain**: **main**
* **since_version**: **14**
* **function**: False
* **support_level**: SupportType.COMMON
* **shape inference**: True

This version of the operator has been available **since version 14**.

**Summary**

Performs element-wise binary multiplication (with Numpy-style broadcasting
support).

This operator supports **multidirectional (i.e., Numpy-style) broadcasting**;
for more details please check `Broadcasting in ONNX `_.

(Opset 14 change): Extend supported types to include uint8, int8, uint16,
and int16.

**Inputs**

* **A** (heterogeneous) - **T**:
  First operand.
* **B** (heterogeneous) - **T**:
  Second operand.

**Outputs**

* **C** (heterogeneous) - **T**:
  Result; has the same element type as the two inputs.

**Type Constraints**

* **T** in (
  tensor(bfloat16),
  tensor(double),
  tensor(float),
  tensor(float16),
  tensor(int16),
  tensor(int32),
  tensor(int64),
  tensor(int8),
  tensor(uint16),
  tensor(uint32),
  tensor(uint64),
  tensor(uint8)
  ):
  Constrain input and output types to all numeric tensors.

**Examples**

**default**

::

    node = onnx.helper.make_node(
        'Mul',
        inputs=['x', 'y'],
        outputs=['z'],
    )

    x = np.array([1, 2, 3]).astype(np.float32)
    y = np.array([4, 5, 6]).astype(np.float32)
    z = x * y  # expected output [4., 10., 18.]
    expect(node, inputs=[x, y], outputs=[z],
           name='test_mul_example')

    x = np.random.randn(3, 4, 5).astype(np.float32)
    y = np.random.randn(3, 4, 5).astype(np.float32)
    z = x * y
    expect(node, inputs=[x, y], outputs=[z],
           name='test_mul')

    x = np.random.randint(4, size=(3, 4, 5), dtype=np.uint8)
    y = np.random.randint(24, size=(3, 4, 5), dtype=np.uint8)
    z = x * y
    expect(node, inputs=[x, y], outputs=[z],
           name='test_mul_uint8')

**_mul_broadcast**

::

    node = onnx.helper.make_node(
        'Mul',
        inputs=['x', 'y'],
        outputs=['z'],
    )

    x = np.random.randn(3, 4, 5).astype(np.float32)
    y = np.random.randn(5).astype(np.float32)
    z = x * y
    expect(node, inputs=[x, y], outputs=[z],
           name='test_mul_bcast')

**Differences**
Compared with Mul-13, opset 14 extends the supported types with the
low-precision integer tensors ``tensor(int8)``, ``tensor(int16)``,
``tensor(uint8)`` and ``tensor(uint16)``; the type-constraint description
accordingly changes from "high-precision numeric tensors" to "all numeric
tensors", and the summary gains the note about the opset 14 type extension.
The semantics are otherwise unchanged.
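With the opset 14 extension to 8- and 16-bit integers, overflow is worth keeping in mind. The operator text above does not spell out wraparound semantics; the sketch below uses NumPy, whose fixed-width integer multiplication wraps modulo ``2**n``, as an illustration of one common runtime convention rather than a statement of the spec:

```python
import numpy as np

# Element-wise Mul on uint8 inputs, mirroring the test_mul_uint8 example
# above. NumPy's fixed-width unsigned integers wrap modulo 2**8 on
# overflow; the ONNX operator text itself does not specify overflow
# behavior, so treat this as an illustration, not the spec.
x = np.array([3, 100, 200], dtype=np.uint8)
y = np.array([4, 3, 2], dtype=np.uint8)
z = x * y  # 200 * 2 = 400 wraps to 400 - 256 = 144
print(z)   # [ 12  44 144]
```

The result keeps the uint8 element type, consistent with "Result; has the same element type as the two inputs".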
.. _l-onnx-op-mul-13:

Mul - 13
========

**Version**

* **name**: `Mul (GitHub) `_
* **domain**: **main**
* **since_version**: **13**
* **function**: False
* **support_level**: SupportType.COMMON
* **shape inference**: True

This version of the operator has been available **since version 13**.

**Summary**

Performs element-wise binary multiplication (with Numpy-style broadcasting
support).

This operator supports **multidirectional (i.e., Numpy-style) broadcasting**;
for more details please check `Broadcasting in ONNX `_.

**Inputs**

* **A** (heterogeneous) - **T**:
  First operand.
* **B** (heterogeneous) - **T**:
  Second operand.

**Outputs**

* **C** (heterogeneous) - **T**:
  Result; has the same element type as the two inputs.

**Type Constraints**

* **T** in (
  tensor(bfloat16),
  tensor(double),
  tensor(float),
  tensor(float16),
  tensor(int32),
  tensor(int64),
  tensor(uint32),
  tensor(uint64)
  ):
  Constrain input and output types to high-precision numeric tensors.

**Differences**
Compared with Mul-7, opset 13 only adds ``tensor(bfloat16)`` to the
supported types; the semantics are unchanged.
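Multidirectional broadcasting, as used from opset 7 onward, means either operand (not just B) may be expanded along its size-1 dimensions. NumPy's ``*`` operator follows the same rules, so it can serve as a minimal reference sketch:

```python
import numpy as np

# Multidirectional (Numpy-style) broadcasting: both operands expand.
# a has shape (3, 1), b has shape (1, 4); the product has shape (3, 4),
# with each output element a[i, 0] * b[0, j].
a = np.array([[1.0], [2.0], [3.0]], dtype=np.float32)        # (3, 1)
b = np.array([[10.0, 20.0, 30.0, 40.0]], dtype=np.float32)   # (1, 4)
c = a * b
print(c.shape)   # (3, 4)
print(c[2, 3])   # 3.0 * 40.0 = 120.0
```

Note that neither input here could be called "the" broadcast operand; that is what distinguishes this scheme from the unidirectional broadcasting of opsets 1 and 6 below.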
.. _l-onnx-op-mul-7:

Mul - 7
=======

**Version**

* **name**: `Mul (GitHub) `_
* **domain**: **main**
* **since_version**: **7**
* **function**: False
* **support_level**: SupportType.COMMON
* **shape inference**: True

This version of the operator has been available **since version 7**.

**Summary**

Performs element-wise binary multiplication (with Numpy-style broadcasting
support).

This operator supports **multidirectional (i.e., Numpy-style) broadcasting**;
for more details please check `Broadcasting in ONNX `_.

**Inputs**

* **A** (heterogeneous) - **T**:
  First operand.
* **B** (heterogeneous) - **T**:
  Second operand.

**Outputs**

* **C** (heterogeneous) - **T**:
  Result; has the same element type as the two inputs.

**Type Constraints**

* **T** in (
  tensor(double),
  tensor(float),
  tensor(float16),
  tensor(int32),
  tensor(int64),
  tensor(uint32),
  tensor(uint64)
  ):
  Constrain input and output types to high-precision numeric tensors.

**Differences**
Opset 7 replaces the legacy, unidirectional broadcasting scheme of Mul-6
(controlled by the ``broadcast`` and ``axis`` attributes, with suffix
matching by default) with attribute-free **multidirectional (Numpy-style)
broadcasting**. The ``axis`` and ``broadcast`` attributes are removed, and
the input and output descriptions no longer mention broadcast-dependent
sizes. The supported types are unchanged.
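The removed unidirectional scheme can still be emulated on top of Numpy-style broadcasting by padding B's shape with size-1 dimensions around the ``axis`` anchor. A minimal sketch, assuming the pre-opset-7 semantics described in the Mul-6 section below (the helper ``legacy_mul`` is illustrative only, not part of any ONNX API):

```python
import numpy as np

def legacy_mul(a, b, axis=None):
    """Emulate pre-opset-7 unidirectional broadcast for Mul.

    b's shape is aligned with a's starting at `axis` (suffix matching
    when `axis` is None), then padded with size-1 dimensions so that
    plain NumPy broadcasting reproduces the old behavior.
    Illustrative helper only; not part of the ONNX API.
    """
    if axis is None:                 # suffix matching is the default
        axis = a.ndim - b.ndim
    shape = (1,) * axis + b.shape + (1,) * (a.ndim - axis - b.ndim)
    return a * b.reshape(shape)

# shape(A) = (2, 3, 4, 5), shape(B) = (3, 4), with axis=1 --
# one of the legacy example cases listed for Mul-6.
a = np.ones((2, 3, 4, 5), dtype=np.float32)
b = np.full((3, 4), 2.0, dtype=np.float32)
c = legacy_mul(a, b, axis=1)
print(c.shape)   # (2, 3, 4, 5)
```

The suffix-matching default covers cases such as ``shape(B) = (4, 5)`` with no ``axis`` given.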
.. _l-onnx-op-mul-6:

Mul - 6
=======

**Version**

* **name**: `Mul (GitHub) `_
* **domain**: **main**
* **since_version**: **6**
* **function**: False
* **support_level**: SupportType.COMMON
* **shape inference**: True

This version of the operator has been available **since version 6**.

**Summary**

Performs element-wise binary multiplication (with limited broadcast support).

If necessary, the right-hand-side argument will be broadcast to match the
shape of the left-hand-side argument. When broadcasting is specified, the
second tensor can either be of element size 1 (including a scalar tensor and
any tensor with rank equal to or smaller than the first tensor), or have its
shape be a contiguous subset of the first tensor's shape. The start of the
mutually equal shape is specified by the attribute ``axis``; if it is not
set, suffix matching is assumed. 1-dim expansion doesn't work yet.

For example, the following tensor shapes are supported (with broadcast=1)::

    shape(A) = (2, 3, 4, 5), shape(B) = (,), i.e. B is a scalar tensor
    shape(A) = (2, 3, 4, 5), shape(B) = (1, 1), i.e. B is a 1-element tensor
    shape(A) = (2, 3, 4, 5), shape(B) = (5,)
    shape(A) = (2, 3, 4, 5), shape(B) = (4, 5)
    shape(A) = (2, 3, 4, 5), shape(B) = (3, 4), with axis=1
    shape(A) = (2, 3, 4, 5), shape(B) = (2,), with axis=0

Attribute `broadcast=1` needs to be passed to enable broadcasting.

**Attributes**

* **axis**:
  If set, defines the broadcast dimensions. See doc for details.
* **broadcast**:
  Pass 1 to enable broadcasting. Default value is ``0``.

**Inputs**

* **A** (heterogeneous) - **T**:
  First operand; should share the type with the second operand.
* **B** (heterogeneous) - **T**:
  Second operand. With broadcasting it can be of smaller size than A. If
  broadcasting is disabled it should be of the same size.
**Outputs**

* **C** (heterogeneous) - **T**:
  Result; has the same dimensions and type as A.

**Type Constraints**

* **T** in (
  tensor(double),
  tensor(float),
  tensor(float16),
  tensor(int32),
  tensor(int64),
  tensor(uint32),
  tensor(uint64)
  ):
  Constrain input and output types to high-precision numeric tensors.

**Differences**
Compared with Mul-1, opset 6 removes the legacy ``consumed_inputs``
optimization attribute and extends the supported types from the float
tensors (``tensor(float16)``, ``tensor(float)``, ``tensor(double)``) to the
high-precision numeric tensors, adding ``tensor(int32)``, ``tensor(int64)``,
``tensor(uint32)`` and ``tensor(uint64)``. The broadcasting semantics are
unchanged.
.. _l-onnx-op-mul-1:

Mul - 1
=======

**Version**

* **name**: `Mul (GitHub) `_
* **domain**: **main**
* **since_version**: **1**
* **function**: False
* **support_level**: SupportType.COMMON
* **shape inference**: False

This version of the operator has been available **since version 1**.

**Summary**

Performs element-wise binary multiplication (with limited broadcast support).

If necessary, the right-hand-side argument will be broadcast to match the
shape of the left-hand-side argument. When broadcasting is specified, the
second tensor can either be of element size 1 (including a scalar tensor and
any tensor with rank equal to or smaller than the first tensor), or have its
shape be a contiguous subset of the first tensor's shape. The start of the
mutually equal shape is specified by the attribute ``axis``; if it is not
set, suffix matching is assumed. 1-dim expansion doesn't work yet.

For example, the following tensor shapes are supported (with broadcast=1)::

    shape(A) = (2, 3, 4, 5), shape(B) = (,), i.e. B is a scalar tensor
    shape(A) = (2, 3, 4, 5), shape(B) = (1, 1), i.e. B is a 1-element tensor
    shape(A) = (2, 3, 4, 5), shape(B) = (5,)
    shape(A) = (2, 3, 4, 5), shape(B) = (4, 5)
    shape(A) = (2, 3, 4, 5), shape(B) = (3, 4), with axis=1
    shape(A) = (2, 3, 4, 5), shape(B) = (2,), with axis=0

Attribute `broadcast=1` needs to be passed to enable broadcasting.

**Attributes**

* **axis**:
  If set, defines the broadcast dimensions. See doc for details.
* **broadcast**:
  Pass 1 to enable broadcasting. Default value is ``0``.
* **consumed_inputs**:
  Legacy optimization attribute.

**Inputs**

* **A** (heterogeneous) - **T**:
  First operand; should share the type with the second operand.
* **B** (heterogeneous) - **T**:
  Second operand. With broadcasting it can be of smaller size than A. If
  broadcasting is disabled it should be of the same size.
**Outputs**

* **C** (heterogeneous) - **T**:
  Result; has the same dimensions and type as A.

**Type Constraints**

* **T** in (
  tensor(double),
  tensor(float),
  tensor(float16)
  ):
  Constrain input and output types to float tensors.