.. _l-onnx-doccom.microsoft-DequantizeWithOrder:

===================================
com.microsoft - DequantizeWithOrder
===================================

.. contents::
    :local:

.. _l-onnx-opcom-microsoft-dequantizewithorder-1:

DequantizeWithOrder - 1 (com.microsoft)
=======================================

**Version**

* **name**: `DequantizeWithOrder (GitHub) `_
* **domain**: **com.microsoft**
* **since_version**: **1**
* **function**:
* **support_level**:
* **shape inference**:

This version of the operator has been available
**since version 1 of domain com.microsoft**.

**Summary**

Dequantize an input matrix stored in a specific cublasLt layout. The ``to``
attribute specifies the output type, float16 or float32.

**Attributes**

* **order_input** (required):
  cublasLt order of the input matrix. See the schema of QuantizeWithOrder
  for the order definition. Default value is ``?``.
* **order_output** (required):
  cublasLt order of the output matrix. Default value is ``?``.
* **to** (required):
  The output data type; only TensorProto_DataType_FLOAT (1) and
  TensorProto_DataType_FLOAT16 (10) are supported. Default value is ``?``.

**Inputs**

* **input** (heterogeneous) - **Q**:
  Input tensor of shape (ROWS, COLS). If less than 2-D, it is broadcast to
  (1, X). If 3-D, it is treated as (B, ROWS, COLS).
* **scale_input** (heterogeneous) - **S**:
  Scale of the input.

**Outputs**

* **output** (heterogeneous) - **F**:
  Output tensor.

**Examples**
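Below is a minimal sketch of building a model that contains this operator with
``onnx.helper``. Several details are assumptions for illustration, not part of
this schema: the order code ``ORDER_ROW = 1`` (the authoritative values are
given by the QuantizeWithOrder schema), the ``int8`` element type for the
``Q`` constraint, the scalar shape of ``scale_input``, and the default-domain
opset version. Running the model requires an onnxruntime build that registers
the ``com.microsoft`` CUDA contrib kernels.

.. code-block:: python

    from onnx import TensorProto, helper

    # Assumed order code mirroring cublasLtOrder_t (1 = CUBLASLT_ORDER_ROW);
    # check the QuantizeWithOrder schema for the authoritative values.
    ORDER_ROW = 1

    node = helper.make_node(
        "DequantizeWithOrder",
        inputs=["input", "scale_input"],
        outputs=["output"],
        domain="com.microsoft",
        order_input=ORDER_ROW,
        order_output=ORDER_ROW,
        to=TensorProto.FLOAT,  # 1 = float32; TensorProto.FLOAT16 (10) for half
    )

    graph = helper.make_graph(
        [node],
        "dequantize_with_order_example",
        inputs=[
            # int8 is an assumed element type for the Q constraint
            helper.make_tensor_value_info("input", TensorProto.INT8, [4, 8]),
            # scale_input is assumed here to be a scalar scale
            helper.make_tensor_value_info("scale_input", TensorProto.FLOAT, []),
        ],
        outputs=[
            helper.make_tensor_value_info("output", TensorProto.FLOAT, [4, 8]),
        ],
    )

    model = helper.make_model(
        graph,
        opset_imports=[
            helper.make_opsetid("", 18),              # default domain (assumed version)
            helper.make_opsetid("com.microsoft", 1),  # contrib domain, since_version 1
        ],
    )

    # Running it requires onnxruntime with the CUDA execution provider,
    # which implements this contrib op:
    #
    #   import numpy as np
    #   import onnxruntime as ort
    #   sess = ort.InferenceSession(
    #       model.SerializeToString(), providers=["CUDAExecutionProvider"])
    #   (out,) = sess.run(None, {
    #       "input": np.arange(32, dtype=np.int8).reshape(4, 8),
    #       "scale_input": np.float32(0.05),
    #   })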