Tools#

Quantization#

The main functions.

onnxruntime.quantization.quantize.quantize_dynamic(model_input: Path, model_output: Path, op_types_to_quantize=None, per_channel=False, reduce_range=False, weight_type=QuantType.QInt8, nodes_to_quantize=None, nodes_to_exclude=None, optimize_model=True, use_external_data_format=False, extra_options=None)#

Given an ONNX model, creates a quantized ONNX model and saves it to a file.

Parameters:
  • model_input – file path of model to quantize

  • model_output – file path of quantized model

  • op_types_to_quantize – specify the operator types to quantize, e.g. ['Conv'] to quantize only Conv. All supported operators are quantized by default.

  • per_channel – quantize weights per channel

  • reduce_range – quantize weights with 7 bits. This may improve accuracy for some models running on non-VNNI machines, especially in per-channel mode.

  • weight_type – quantization data type of weight. Please refer to https://onnxruntime.ai/docs/performance/quantization.html for more details on data type selection

  • nodes_to_quantize – List of node names to quantize. When this list is not None, only the nodes in this list are quantized. Example: ['Conv__224', 'Conv__252']

  • nodes_to_exclude – List of node names to exclude. When this list is not None, the nodes in it are excluded from quantization.

  • optimize_model – Deprecating soon! Optimize the model before quantization. NOT recommended: optimization changes the computation graph, making debugging of quantization loss difficult.

  • use_external_data_format – option for large (>2 GB) models. Set to False by default.

  • extra_options

    Key-value dictionary of additional options. Currently used:

    extra.Sigmoid.nnapi = True/False : Default is False.

    ActivationSymmetric = True/False : symmetrize calibration data for activations (default is False).

    WeightSymmetric = True/False : symmetrize calibration data for weights (default is True).

    EnableSubgraph = True/False : Default is False. If enabled, subgraphs will be quantized. Only dynamic mode is currently supported; more modes will be supported in the future.

    ForceQuantizeNoInputCheck = True/False : By default, latent operators such as MaxPool and Transpose are not quantized if their input is not already quantized. Set to True to force such operators to always quantize their input and therefore produce quantized output. This behavior can still be disabled per node via nodes_to_exclude.

    MatMulConstBOnly = True/False : Default is True for dynamic mode. If enabled, only MatMul nodes with a constant B input are quantized.
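
A minimal usage sketch of dynamic quantization (the model file names are placeholders):

```python
from onnxruntime.quantization import QuantType
from onnxruntime.quantization.quantize import quantize_dynamic

# Hypothetical paths; point these at your own model files.
quantize_dynamic(
    model_input="model.onnx",
    model_output="model.quant.onnx",
    weight_type=QuantType.QInt8,                # 8-bit signed weights
    per_channel=False,
    extra_options={"MatMulConstBOnly": True},   # quantize only MatMul with a constant B
)
```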

onnxruntime.quantization.quantize.quantize_static(model_input, model_output, calibration_data_reader: CalibrationDataReader, quant_format=QuantFormat.QDQ, op_types_to_quantize=None, per_channel=False, reduce_range=False, activation_type=QuantType.QInt8, weight_type=QuantType.QInt8, nodes_to_quantize=None, nodes_to_exclude=None, optimize_model=True, use_external_data_format=False, calibrate_method=CalibrationMethod.MinMax, extra_options=None)#

Given an ONNX model and a calibration data reader, creates a quantized ONNX model and saves it to a file. From release 1.11 it is recommended to use the QuantFormat.QDQ format with activation_type = QuantType.QInt8 and weight_type = QuantType.QInt8. If the model targets GPU/TRT, symmetric activations and weights are required. If the model targets CPU, asymmetric activations and symmetric weights are recommended for a balance of performance and accuracy.

Parameters:
  • model_input – file path of model to quantize

  • model_output – file path of quantized model

  • calibration_data_reader – a calibration data reader. It enumerates calibration data and generates inputs for the original model.

  • quant_format – QuantFormat{QOperator, QDQ}. The QOperator format quantizes the model with quantized operators directly. The QDQ format quantizes the model by inserting QuantizeLinear/DeQuantizeLinear nodes on the tensors.

  • activation_type – quantization data type of activation. Please refer to https://onnxruntime.ai/docs/performance/quantization.html for more details on data type selection

  • calibrate_method – The calibration methods currently supported are MinMax and Entropy; pass CalibrationMethod.MinMax or CalibrationMethod.Entropy.

  • op_types_to_quantize – specify the operator types to quantize, e.g. ['Conv'] to quantize only Conv. All supported operators are quantized by default.

  • per_channel – quantize weights per channel

  • reduce_range – quantize weights with 7 bits. This may improve accuracy for some models running on non-VNNI machines, especially in per-channel mode.

  • weight_type – quantization data type of weight. Please refer to https://onnxruntime.ai/docs/performance/quantization.html for more details on data type selection

  • nodes_to_quantize – List of node names to quantize. When this list is not None, only the nodes in this list are quantized. Example: ['Conv__224', 'Conv__252']

  • nodes_to_exclude – List of node names to exclude. When this list is not None, the nodes in it are excluded from quantization.

  • optimize_model – Deprecating soon! Optimize the model before quantization. NOT recommended: optimization changes the computation graph, making debugging of quantization loss difficult.

  • use_external_data_format – option for large (>2 GB) models. Set to False by default.

  • extra_options

    Key-value dictionary of additional options. Currently used:

    extra.Sigmoid.nnapi = True/False : Default is False.

    ActivationSymmetric = True/False : symmetrize calibration data for activations (default is False).

    WeightSymmetric = True/False : symmetrize calibration data for weights (default is True).

    EnableSubgraph = True/False : Default is False. If enabled, subgraphs will be quantized. Only dynamic mode is currently supported; more modes will be supported in the future.

    ForceQuantizeNoInputCheck = True/False : By default, latent operators such as MaxPool and Transpose are not quantized if their input is not already quantized. Set to True to force such operators to always quantize their input and therefore produce quantized output. This behavior can still be disabled per node via nodes_to_exclude.

    MatMulConstBOnly = True/False : Default is False for static mode. If enabled, only MatMul nodes with a constant B input are quantized.

    AddQDQPairToWeight = True/False : Default is False, which quantizes the floating-point weight and feeds it to a single inserted DeQuantizeLinear node. If True, the weight stays in floating point and both QuantizeLinear and DeQuantizeLinear nodes are inserted for it.

    OpTypesToExcludeOutputQuantization = list of op types : Default is []. If any op types are specified, the outputs of ops with those types are not quantized.

    DedicatedQDQPair = True/False : Default is False. When inserting QDQ pairs, multiple nodes can share a single QDQ pair as their input. If True, an identical, dedicated QDQ pair is created for each node.

    QDQOpTypePerChannelSupportToAxis = dictionary : Default is {}. Sets the channel axis for specific op types, for example {'MatMul': 1}. Effective only when per-channel quantization is supported for the op type and per_channel is True. If an op type supports per-channel quantization but no channel axis is explicitly specified, the default channel axis is used.

    CalibTensorRangeSymmetric = True/False : Default is False. If enabled, the final tensor range computed during calibration is made symmetric around the central point 0.

    CalibMovingAverage = True/False : Default is False. If enabled, a moving average of the minimum and maximum values is computed when the MinMax calibration method is selected.

    CalibMovingAverageConstant = float : Default is 0.01. Constant smoothing factor used when computing the moving average of the minimum and maximum values. Effective only when the MinMax calibration method is selected and CalibMovingAverage is set to True.
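
A hedged usage sketch of static quantization; the tiny in-memory data reader, the input name "input", and the tensor shape are placeholders for real calibration data:

```python
import numpy as np
from onnxruntime.quantization import CalibrationDataReader, QuantFormat, QuantType
from onnxruntime.quantization.quantize import quantize_static

class RandomDataReader(CalibrationDataReader):
    """Feeds a handful of random samples as calibration data (placeholder shape)."""
    def __init__(self, num_samples=8):
        self._samples = iter(
            {"input": np.random.rand(1, 3, 224, 224).astype(np.float32)}
            for _ in range(num_samples)
        )

    def get_next(self):
        return next(self._samples, None)  # None signals the end of calibration data

quantize_static(
    model_input="model.onnx",
    model_output="model.quant.onnx",
    calibration_data_reader=RandomDataReader(),
    quant_format=QuantFormat.QDQ,
    activation_type=QuantType.QInt8,
    weight_type=QuantType.QInt8,
    per_channel=True,
    extra_options={"CalibMovingAverage": True},
)
```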

onnxruntime.quantization.shape_inference.quant_pre_process(input_model_path: str, output_model_path: str, skip_optimization: bool = False, skip_onnx_shape: bool = False, skip_symbolic_shape: bool = False, auto_merge: bool = False, int_max: int = 2147483647, guess_output_rank: bool = False, verbose: int = 0, save_as_external_data: bool = False, all_tensors_to_one_file: bool = False, external_data_location: str = './', external_data_size_threshold: int = 1024) → None#

Shape inference and model optimization, in preparation for quantization.

Parameters:
  • input_model_path – Path to the input model file

  • output_model_path – Path to the output model file

  • skip_optimization – Skip model optimization step if true. This may result in ONNX shape inference failure for some models.

  • skip_onnx_shape – Skip ONNX shape inference. Symbolic shape inference is most effective with transformer based models. Skipping all shape inferences may reduce the effectiveness of quantization, as a tensor with unknown shape can not be quantized.

  • skip_symbolic_shape – Skip symbolic shape inference. Symbolic shape inference is most effective with transformer based models. Skipping all shape inferences may reduce the effectiveness of quantization, as a tensor with unknown shape can not be quantized.

  • auto_merge – For symbolic shape inference, automatically merge symbolic dims when conflict happens.

  • int_max – For symbolic shape inference, the maximum integer value to be treated as boundless, for ops like Slice

  • guess_output_rank – Guess output rank to be the same as input 0 for unknown ops

  • verbose – Log detailed inference information. 0: off, 1: warnings, 3: detailed

  • save_as_external_data – Saving an ONNX model to external data

  • all_tensors_to_one_file – Saving all the external data to one file

  • external_data_location – The file location to save the external file

  • external_data_size_threshold – The size threshold for external data
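
A minimal sketch of running this preprocessing step before quantization (file names are placeholders):

```python
from onnxruntime.quantization.shape_inference import quant_pre_process

# Hypothetical file names; run this before quantize_static/quantize_dynamic so the
# quantizer sees an optimized model with inferred shapes.
quant_pre_process(
    input_model_path="model.onnx",
    output_model_path="model.preprocessed.onnx",
    skip_symbolic_shape=False,  # keep symbolic shape inference (helps transformer models)
    auto_merge=True,            # merge conflicting symbolic dims automatically
)
```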

Calibration:

class onnxruntime.quantization.calibrate.CalibrationDataReader#
__abstractmethods__ = frozenset({'get_next'})#
__iter__()#
__next__()#
classmethod __subclasshook__(subclass)#

Abstract classes can override this to customize issubclass().

This is invoked early on by abc.ABCMeta.__subclasscheck__(). It should return True, False or NotImplemented. If it returns NotImplemented, the normal algorithm is used. Otherwise, it overrides the normal algorithm (and the outcome is cached).

abstract get_next() → dict#

Generate the input data dict for an onnxruntime.InferenceSession run.
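
A hedged sketch of a concrete reader, assuming the usual contract that get_next returns one feed dict per sample and None once the data is exhausted, and that iterating a reader (via the __iter__/__next__ members listed above) stops at that point; the input name "input" and the sample shape are placeholders:

```python
import numpy as np
from onnxruntime.quantization import CalibrationDataReader

class ListDataReader(CalibrationDataReader):
    def __init__(self, samples):
        # samples: a list of numpy arrays shaped like the model input
        self._iter = iter([{"input": s} for s in samples])

    def get_next(self):
        # Return a feed dict for InferenceSession.run, or None when the
        # calibration data is exhausted.
        return next(self._iter, None)

# Readers can also be consumed through the iterator protocol:
reader = ListDataReader([np.zeros((1, 3, 224, 224), np.float32)])
for feed in reader:
    pass  # each `feed` maps input names to numpy arrays
```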

The parameter classes.

class onnxruntime.quantization.quant_utils.QuantFormat(value)#

An enumeration.

class onnxruntime.quantization.quant_utils.QuantizationMode(value)#

An enumeration.

class onnxruntime.quantization.quant_utils.QuantType(value)#

An enumeration.
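
A small sketch showing the enumeration members referenced in the descriptions above, imported from the modules named in these headings:

```python
from onnxruntime.quantization.quant_utils import QuantFormat, QuantType
from onnxruntime.quantization.calibrate import CalibrationMethod

# Members mentioned in the function documentation above.
print(QuantFormat.QDQ, QuantFormat.QOperator)
print(QuantType.QInt8, QuantType.QUInt8)           # signed / unsigned 8-bit
print(CalibrationMethod.MinMax, CalibrationMethod.Entropy)
```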