Inference#

Python Wrapper for InferenceSession#

class onnxruntime.InferenceSession(path_or_bytes, sess_options=None, providers=None, provider_options=None, **kwargs)#

This is the main class used to run a model.

Parameters:
  • path_or_bytes – filename or serialized ONNX or ORT format model in a byte string

  • sess_options – session options

  • providers – Optional sequence of providers in order of decreasing precedence. Values can either be provider names or tuples of (provider name, options dict). If not provided, then all available providers are used with the default precedence.

  • provider_options – Optional sequence of options dicts corresponding to the providers listed in ‘providers’.

The model type will be inferred unless explicitly set in the SessionOptions. To explicitly set:

so = onnxruntime.SessionOptions()
# so.add_session_config_entry('session.load_model_format', 'ONNX') or
so.add_session_config_entry('session.load_model_format', 'ORT')

A file extension of ‘.ort’ will be inferred as an ORT format model. All other filenames are assumed to be ONNX format models.

‘providers’ can contain either bare provider names or (provider name, options dict) tuples. When any options are given in ‘providers’, ‘provider_options’ should not be used.

The list of providers is ordered by precedence. For example [‘CUDAExecutionProvider’, ‘CPUExecutionProvider’] means execute a node using CUDAExecutionProvider if capable, otherwise execute using CPUExecutionProvider.
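
For example, a session that prefers CUDA and falls back to CPU could be created as follows. This is a minimal sketch; the model path and the availability of CUDAExecutionProvider in the installed package are assumptions made for illustration.

import onnxruntime

# Bare names and (provider name, options dict) tuples can be mixed;
# 'device_id' is a CUDA execution provider option.
sess = onnxruntime.InferenceSession(
    "model.onnx",
    providers=[
        ("CUDAExecutionProvider", {"device_id": 0}),
        "CPUExecutionProvider",
    ],
)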

__class__#

alias of type

__delattr__(name, /)#

Implement delattr(self, name).

__dir__()#

Default dir() implementation.

__eq__(value, /)#

Return self==value.

__format__(format_spec, /)#

Default object formatter.

__ge__(value, /)#

Return self>=value.

__getattribute__(name, /)#

Return getattr(self, name).

__gt__(value, /)#

Return self>value.

__hash__()#

Return hash(self).

__init__(path_or_bytes, sess_options=None, providers=None, provider_options=None, **kwargs)#
Parameters:
  • path_or_bytes – filename or serialized ONNX or ORT format model in a byte string

  • sess_options – session options

  • providers – Optional sequence of providers in order of decreasing precedence. Values can either be provider names or tuples of (provider name, options dict). If not provided, then all available providers are used with the default precedence.

  • provider_options – Optional sequence of options dicts corresponding to the providers listed in ‘providers’.

The model type will be inferred unless explicitly set in the SessionOptions. To explicitly set:

so = onnxruntime.SessionOptions()
# so.add_session_config_entry('session.load_model_format', 'ONNX') or
so.add_session_config_entry('session.load_model_format', 'ORT')

A file extension of ‘.ort’ will be inferred as an ORT format model. All other filenames are assumed to be ONNX format models.

‘providers’ can contain either bare provider names or (provider name, options dict) tuples. When any options are given in ‘providers’, ‘provider_options’ should not be used.

The list of providers is ordered by precedence. For example [‘CUDAExecutionProvider’, ‘CPUExecutionProvider’] means execute a node using CUDAExecutionProvider if capable, otherwise execute using CPUExecutionProvider.

__init_subclass__()#

This method is called when a class is subclassed.

The default implementation does nothing. It may be overridden to extend subclasses.

__le__(value, /)#

Return self<=value.

__lt__(value, /)#

Return self<value.

__ne__(value, /)#

Return self!=value.

__new__(**kwargs)#
__reduce__()#

Helper for pickle.

__reduce_ex__(protocol, /)#

Helper for pickle.

__repr__()#

Return repr(self).

__setattr__(name, value, /)#

Implement setattr(self, name, value).

__sizeof__()#

Size of object in memory, in bytes.

__str__()#

Return str(self).

__subclasshook__()#

Abstract classes can override this to customize issubclass().

This is invoked early on by abc.ABCMeta.__subclasscheck__(). It should return True, False or NotImplemented. If it returns NotImplemented, the normal algorithm is used. Otherwise, it overrides the normal algorithm (and the outcome is cached).

_create_inference_session(providers, provider_options, disabled_optimizers=None)#
_reset_session(providers, provider_options)#

Release the underlying session object.

disable_fallback()#

Disable the session.run() fallback mechanism.

enable_fallback()#

Enable the session.run() fallback mechanism. If session.run() fails due to an internal execution provider failure, the execution providers enabled for this session are reset. If a GPU is enabled, fall back to CUDAExecutionProvider; otherwise fall back to CPUExecutionProvider.

end_profiling()#

End profiling and return the name of the file that holds the results.

Results are written to a file only if profiling was enabled via onnxruntime.SessionOptions.enable_profiling.
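
A minimal profiling sketch, assuming a placeholder model path and input name:

import numpy as np
import onnxruntime

so = onnxruntime.SessionOptions()
so.enable_profiling = True          # see SessionOptions.enable_profiling
so.profile_file_prefix = "my_run"   # the current time is appended to this prefix

sess = onnxruntime.InferenceSession("model.onnx", sess_options=so,
                                    providers=["CPUExecutionProvider"])
sess.run(None, {"input": np.zeros((1, 3), dtype=np.float32)})  # placeholder input name/shape

profile_file = sess.end_profiling()  # path of the JSON trace that was written
print(profile_file)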

get_inputs()#

Return the inputs metadata as a list of onnxruntime.NodeArg.

get_modelmeta()#

Return the metadata. See onnxruntime.ModelMetadata.

get_outputs()#

Return the outputs metadata as a list of onnxruntime.NodeArg.

get_overridable_initializers()#

Return the inputs (including initializers) metadata as a list of onnxruntime.NodeArg.

get_profiling_start_time_ns()#

Return the profiling start time in nanoseconds. The value is comparable to time.monotonic_ns() (available since Python 3.3). On some platforms this timer may not be nanosecond-precise; for instance, on Windows and macOS the precision is roughly 100 ns.

get_provider_options()#

Return registered execution providers’ configurations.

get_providers()#

Return list of registered execution providers.

get_session_options()#

Return the session options. See onnxruntime.SessionOptions.

io_binding()#

Return an onnxruntime.IOBinding object.

run(output_names, input_feed, run_options=None)#

Compute the predictions.

Parameters:
  • output_names – name of the outputs

  • input_feed – dictionary { input_name: input_value }

  • run_options – See onnxruntime.RunOptions.

Returns:

list of results, every result is either a numpy array, a sparse tensor, a list or a dictionary.

sess.run([output_name], {input_name: x})
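
A slightly fuller sketch, discovering input/output names from the session metadata; the model path and input shape are placeholders:

import numpy as np
import onnxruntime

sess = onnxruntime.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

input_name = sess.get_inputs()[0].name
output_name = sess.get_outputs()[0].name

x = np.random.rand(1, 3).astype(np.float32)   # shape chosen for illustration only
results = sess.run([output_name], {input_name: x})
print(results[0])
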
run_with_iobinding(iobinding, run_options=None)#

Compute the predictions.

Parameters:
  • iobinding – the IOBinding object with the graph inputs/outputs bound.

  • run_options – See onnxruntime.RunOptions.
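
A minimal CPU-only sketch of the binding flow, with a placeholder model path; the same pattern applies to device-resident inputs and outputs:

import numpy as np
import onnxruntime

sess = onnxruntime.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

io_binding = sess.io_binding()
x = np.random.rand(1, 3).astype(np.float32)            # illustrative shape
io_binding.bind_cpu_input(sess.get_inputs()[0].name, x)
io_binding.bind_output(sess.get_outputs()[0].name)      # let onnxruntime allocate the output

sess.run_with_iobinding(io_binding)
result = io_binding.copy_outputs_to_cpu()[0]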

run_with_ort_values(output_names, input_dict_ort_values, run_options=None)#

Compute the predictions.

Parameters:
  • output_names – name of the outputs

  • input_dict_ort_values – dictionary { input_name: input_ort_value }. See the OrtValue class for how to create an OrtValue from a numpy array or SparseTensor.

  • run_options – See onnxruntime.RunOptions.

Returns:

a list of OrtValue

sess.run_with_ort_values([output_name], {input_name: x_ort})
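
For example, a sketch with a placeholder model path and shape (see onnxruntime.OrtValue for other ways to construct values, e.g. on a specific device):

import numpy as np
import onnxruntime

sess = onnxruntime.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = sess.get_inputs()[0].name
output_name = sess.get_outputs()[0].name

x_ort = onnxruntime.OrtValue.ortvalue_from_numpy(
    np.random.rand(1, 3).astype(np.float32))           # illustrative shape

outputs = sess.run_with_ort_values([output_name], {input_name: x_ort})
y = outputs[0].numpy()
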
run_with_ortvaluevector(run_options, feed_names, feeds, fetch_names, fetches, fetch_devices)#

Compute the predictions similarly to the other run_*() methods, but with minimal C++/Python conversion overhead.

Parameters:
  • run_options – See onnxruntime.RunOptions.

  • feed_names – list of input names.

  • feeds – list of input OrtValue.

  • fetch_names – list of output names.

  • fetches – list of output OrtValue.

  • fetch_devices – list of output devices.

set_providers(providers=None, provider_options=None)#

Register the input list of execution providers. The underlying session is re-created.

Parameters:
  • providers – Optional sequence of providers in order of decreasing precedence. Values can either be provider names or tuples of (provider name, options dict). If not provided, then all available providers are used with the default precedence.

  • provider_options – Optional sequence of options dicts corresponding to the providers listed in ‘providers’.

‘providers’ can contain either bare provider names or (provider name, options dict) tuples. When any options are given in ‘providers’, ‘provider_options’ should not be used.

The list of providers is ordered by precedence. For example [‘CUDAExecutionProvider’, ‘CPUExecutionProvider’] means execute a node using CUDAExecutionProvider if capable, otherwise execute using CPUExecutionProvider.
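
For example, switching an existing session to CUDA with explicit options (assuming the CUDA execution provider is available in the installed package):

sess.set_providers(
    ["CUDAExecutionProvider", "CPUExecutionProvider"],
    provider_options=[{"device_id": 0}, {}],
)
# or equivalently, with the options given inline:
sess.set_providers([("CUDAExecutionProvider", {"device_id": 0}), "CPUExecutionProvider"])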

C Class InferenceSession#

class onnxruntime.capi._pybind_state.InferenceSession(self: onnxruntime.capi.onnxruntime_pybind11_state.InferenceSession, arg0: onnxruntime.capi.onnxruntime_pybind11_state.SessionOptions, arg1: str, arg2: bool, arg3: bool)#

This is the main class used to run a model.

__init__(self: onnxruntime.capi.onnxruntime_pybind11_state.InferenceSession, arg0: onnxruntime.capi.onnxruntime_pybind11_state.SessionOptions, arg1: str, arg2: bool, arg3: bool) None#
end_profiling(self: onnxruntime.capi.onnxruntime_pybind11_state.InferenceSession) str#
property get_profiling_start_time_ns#
get_provider_options(self: onnxruntime.capi.onnxruntime_pybind11_state.InferenceSession) Dict[str, Dict[str, str]]#
get_providers(self: onnxruntime.capi.onnxruntime_pybind11_state.InferenceSession) List[str]#
initialize_session(self: onnxruntime.capi.onnxruntime_pybind11_state.InferenceSession, arg0: List[str], arg1: List[Dict[str, str]], arg2: Set[str]) None#

Load a model saved in ONNX or ORT format.

property inputs_meta#
property model_meta#
property outputs_meta#
property overridable_initializers#
run(self: onnxruntime.capi.onnxruntime_pybind11_state.InferenceSession, arg0: List[str], arg1: Dict[str, object], arg2: onnxruntime.capi.onnxruntime_pybind11_state.RunOptions) List[object]#
run_with_iobinding(self: onnxruntime.capi.onnxruntime_pybind11_state.InferenceSession, arg0: onnxruntime::SessionIOBinding, arg1: onnxruntime.capi.onnxruntime_pybind11_state.RunOptions) None#
run_with_ort_values(self: onnxruntime.capi.onnxruntime_pybind11_state.InferenceSession, arg0: dict, arg1: List[str], arg2: onnxruntime.capi.onnxruntime_pybind11_state.RunOptions) std::vector<OrtValue, std::allocator<OrtValue> >#
run_with_ortvaluevector(self: onnxruntime.capi.onnxruntime_pybind11_state.InferenceSession, arg0: onnxruntime.capi.onnxruntime_pybind11_state.RunOptions, arg1: List[str], arg2: std::vector<OrtValue, std::allocator<OrtValue> >, arg3: List[str], arg4: std::vector<OrtValue, std::allocator<OrtValue> >, arg5: List[onnxruntime.capi.onnxruntime_pybind11_state.OrtDevice]) None#
property session_options#

RunOptions#

class onnxruntime.capi._pybind_state.RunOptions(self: onnxruntime.capi.onnxruntime_pybind11_state.RunOptions)#

Configuration information for a single Run.

__init__(self: onnxruntime.capi.onnxruntime_pybind11_state.RunOptions) None#
add_run_config_entry(self: onnxruntime.capi.onnxruntime_pybind11_state.RunOptions, arg0: str, arg1: str) None#

Set a single run configuration entry as a pair of strings.

get_run_config_entry(self: onnxruntime.capi.onnxruntime_pybind11_state.RunOptions, arg0: str) str#

Get a single run configuration value using the given configuration key.

property log_severity_level#

Log severity level for a particular Run() invocation. 0:Verbose, 1:Info, 2:Warning, 3:Error, 4:Fatal. Default is 2.

property log_verbosity_level#

VLOG level if DEBUG build and run_log_severity_level is 0. Applies to a particular Run() invocation. Default is 0.

property logid#

To identify logs generated by a particular Run() invocation.

property only_execute_path_to_fetches#

Only execute the nodes needed by the fetch list.

property synchronize_execution_providers#

Synchronize execution providers after executing session.

property terminate#

Set to True to terminate any currently executing calls that are using this RunOptions instance. The individual calls will exit gracefully and return an error status.

property training_mode#

Choose whether to run in training or inferencing mode.
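
A short sketch of configuring a single call through RunOptions; sess, output_name, input_name and x are placeholders carried over from the run() example above:

import onnxruntime

ro = onnxruntime.RunOptions()
ro.logid = "batch-42"            # tag log lines produced by this Run() invocation
ro.log_severity_level = 1        # 0:Verbose, 1:Info, 2:Warning, 3:Error, 4:Fatal
ro.only_execute_path_to_fetches = True

results = sess.run([output_name], {input_name: x}, run_options=ro)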

SessionOptions#

class onnxruntime.capi._pybind_state.SessionOptions(self: onnxruntime.capi.onnxruntime_pybind11_state.SessionOptions)#

Configuration information for a session.

__init__(self: onnxruntime.capi.onnxruntime_pybind11_state.SessionOptions) None#
add_external_initializers(self: onnxruntime.capi.onnxruntime_pybind11_state.SessionOptions, arg0: list, arg1: list) None#
add_free_dimension_override_by_denotation(self: onnxruntime.capi.onnxruntime_pybind11_state.SessionOptions, arg0: str, arg1: int) None#

Specify the dimension size for each denotation associated with an input’s free dimension.

add_free_dimension_override_by_name(self: onnxruntime.capi.onnxruntime_pybind11_state.SessionOptions, arg0: str, arg1: int) None#

Specify values of named dimensions within model inputs.
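
For example, to pin free dimensions before creating the session; the dimension name 'batch_size' is a placeholder that depends on the model, while 'DATA_BATCH' is one of the standard ONNX dimension denotations:

import onnxruntime

so = onnxruntime.SessionOptions()
so.add_free_dimension_override_by_name("batch_size", 1)        # fix a named symbolic dimension
so.add_free_dimension_override_by_denotation("DATA_BATCH", 1)  # fix every dim denoted DATA_BATCH
sess = onnxruntime.InferenceSession("model.onnx", sess_options=so,
                                    providers=["CPUExecutionProvider"])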

add_initializer(self: onnxruntime.capi.onnxruntime_pybind11_state.SessionOptions, arg0: str, arg1: object) None#
add_session_config_entry(self: onnxruntime.capi.onnxruntime_pybind11_state.SessionOptions, arg0: str, arg1: str) None#

Set a single session configuration entry as a pair of strings.

property enable_cpu_mem_arena#

Enable the memory arena on CPU. The arena may pre-allocate memory for future usage. Set this option to False if you do not want that. Default is True.

property enable_mem_pattern#

Enable the memory pattern optimization. Default is true.

property enable_mem_reuse#

Enable the memory reuse optimization. Default is true.

property enable_profiling#

Enable profiling for this session. Default is false.

property execution_mode#

Sets the execution mode. Default is sequential.

property execution_order#

Sets the execution order. Default is basic topological order.

get_session_config_entry(self: onnxruntime.capi.onnxruntime_pybind11_state.SessionOptions, arg0: str) str#

Get a single session configuration value using the given configuration key.

property graph_optimization_level#

Graph optimization level for this session.

property inter_op_num_threads#

Sets the number of threads used to parallelize the execution of the graph (across nodes). Default is 0 to let onnxruntime choose.

property intra_op_num_threads#

Sets the number of threads used to parallelize the execution within nodes. Default is 0 to let onnxruntime choose.
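
A small sketch combining the execution and threading options above; the values shown are illustrative, not recommendations:

import onnxruntime

so = onnxruntime.SessionOptions()
so.execution_mode = onnxruntime.ExecutionMode.ORT_PARALLEL
so.graph_optimization_level = onnxruntime.GraphOptimizationLevel.ORT_ENABLE_EXTENDED
so.intra_op_num_threads = 4   # threads used within a node's kernel
so.inter_op_num_threads = 2   # threads used across independent nodes (parallel mode)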

property log_severity_level#

Log severity level. Applies to session load, initialization, etc. 0:Verbose, 1:Info, 2:Warning, 3:Error, 4:Fatal. Default is 2.

property log_verbosity_level#

VLOG level if DEBUG build and session_log_severity_level is 0. Applies to session load, initialization, etc. Default is 0.

property logid#

Logger id to use for session output.

property optimized_model_filepath#

File path to serialize the optimized model to. The optimized model is not serialized unless optimized_model_filepath is set. The serialized model format defaults to ONNX unless:

  • add_session_config_entry is used to set ‘session.save_model_format’ to ‘ORT’, or

  • there is no ‘session.save_model_format’ config entry and optimized_model_filepath ends in ‘.ort’ (case insensitive)
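
For example, to write the optimized graph in ORT format (both file paths are placeholders):

import onnxruntime

so = onnxruntime.SessionOptions()
so.optimized_model_filepath = "model.optimized.ort"
so.add_session_config_entry("session.save_model_format", "ORT")
onnxruntime.InferenceSession("model.onnx", sess_options=so,
                             providers=["CPUExecutionProvider"])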

property profile_file_prefix#

The prefix of the profile file. The current time will be appended to the file name.

register_custom_ops_library(self: onnxruntime.capi.onnxruntime_pybind11_state.SessionOptions, arg0: str) None#

Specify the path to the shared library containing the custom op kernels required to run a model.

property use_deterministic_compute#

Whether to use deterministic compute. Default is false.

Python Wrapper for SessionIOBinding#

class onnxruntime.SessionIOBinding(self: onnxruntime.capi.onnxruntime_pybind11_state.SessionIOBinding, arg0: onnxruntime.capi.onnxruntime_pybind11_state.InferenceSession)#
__init__(self: onnxruntime.capi.onnxruntime_pybind11_state.SessionIOBinding, arg0: onnxruntime.capi.onnxruntime_pybind11_state.InferenceSession) None#
bind_input(*args, **kwargs)#

Overloaded function.

  1. bind_input(self: onnxruntime.capi.onnxruntime_pybind11_state.SessionIOBinding, arg0: str, arg1: object) -> None

  2. bind_input(self: onnxruntime.capi.onnxruntime_pybind11_state.SessionIOBinding, arg0: str, arg1: onnxruntime.capi.onnxruntime_pybind11_state.OrtDevice, arg2: object, arg3: List[int], arg4: int) -> None

bind_ortvalue_input(self: onnxruntime.capi.onnxruntime_pybind11_state.SessionIOBinding, arg0: str, arg1: onnxruntime.capi.onnxruntime_pybind11_state.OrtValue) None#
bind_ortvalue_output(self: onnxruntime.capi.onnxruntime_pybind11_state.SessionIOBinding, arg0: str, arg1: onnxruntime.capi.onnxruntime_pybind11_state.OrtValue) None#
bind_output(*args, **kwargs)#

Overloaded function.

  1. bind_output(self: onnxruntime.capi.onnxruntime_pybind11_state.SessionIOBinding, arg0: str, arg1: onnxruntime.capi.onnxruntime_pybind11_state.OrtDevice, arg2: object, arg3: List[int], arg4: int) -> None

  2. bind_output(self: onnxruntime.capi.onnxruntime_pybind11_state.SessionIOBinding, arg0: str, arg1: onnxruntime.capi.onnxruntime_pybind11_state.OrtDevice) -> None

clear_binding_inputs(self: onnxruntime.capi.onnxruntime_pybind11_state.SessionIOBinding) None#
clear_binding_outputs(self: onnxruntime.capi.onnxruntime_pybind11_state.SessionIOBinding) None#
copy_outputs_to_cpu(self: onnxruntime.capi.onnxruntime_pybind11_state.SessionIOBinding) List[object]#
get_outputs(self: onnxruntime.capi.onnxruntime_pybind11_state.SessionIOBinding) onnxruntime.capi.onnxruntime_pybind11_state.OrtValueVector#
synchronize_inputs(self: onnxruntime.capi.onnxruntime_pybind11_state.SessionIOBinding) None#
synchronize_outputs(self: onnxruntime.capi.onnxruntime_pybind11_state.SessionIOBinding) None#

C Class SessionIOBinding#

class onnxruntime.capi._pybind_state.SessionIOBinding(self: onnxruntime.capi.onnxruntime_pybind11_state.SessionIOBinding, arg0: onnxruntime.capi.onnxruntime_pybind11_state.InferenceSession)#
__init__(self: onnxruntime.capi.onnxruntime_pybind11_state.SessionIOBinding, arg0: onnxruntime.capi.onnxruntime_pybind11_state.InferenceSession) None#
bind_input(*args, **kwargs)#

Overloaded function.

  1. bind_input(self: onnxruntime.capi.onnxruntime_pybind11_state.SessionIOBinding, arg0: str, arg1: object) -> None

  2. bind_input(self: onnxruntime.capi.onnxruntime_pybind11_state.SessionIOBinding, arg0: str, arg1: onnxruntime.capi.onnxruntime_pybind11_state.OrtDevice, arg2: object, arg3: List[int], arg4: int) -> None

bind_ortvalue_input(self: onnxruntime.capi.onnxruntime_pybind11_state.SessionIOBinding, arg0: str, arg1: onnxruntime.capi.onnxruntime_pybind11_state.OrtValue) None#
bind_ortvalue_output(self: onnxruntime.capi.onnxruntime_pybind11_state.SessionIOBinding, arg0: str, arg1: onnxruntime.capi.onnxruntime_pybind11_state.OrtValue) None#
bind_output(*args, **kwargs)#

Overloaded function.

  1. bind_output(self: onnxruntime.capi.onnxruntime_pybind11_state.SessionIOBinding, arg0: str, arg1: onnxruntime.capi.onnxruntime_pybind11_state.OrtDevice, arg2: object, arg3: List[int], arg4: int) -> None

  2. bind_output(self: onnxruntime.capi.onnxruntime_pybind11_state.SessionIOBinding, arg0: str, arg1: onnxruntime.capi.onnxruntime_pybind11_state.OrtDevice) -> None

clear_binding_inputs(self: onnxruntime.capi.onnxruntime_pybind11_state.SessionIOBinding) None#
clear_binding_outputs(self: onnxruntime.capi.onnxruntime_pybind11_state.SessionIOBinding) None#
copy_outputs_to_cpu(self: onnxruntime.capi.onnxruntime_pybind11_state.SessionIOBinding) List[object]#
get_outputs(self: onnxruntime.capi.onnxruntime_pybind11_state.SessionIOBinding) onnxruntime.capi.onnxruntime_pybind11_state.OrtValueVector#
synchronize_inputs(self: onnxruntime.capi.onnxruntime_pybind11_state.SessionIOBinding) None#
synchronize_outputs(self: onnxruntime.capi.onnxruntime_pybind11_state.SessionIOBinding) None#

Other classes#

OrtAllocatorType#

class onnxruntime.capi._pybind_state.OrtAllocatorType(self: onnxruntime.capi.onnxruntime_pybind11_state.OrtAllocatorType, value: int)#

Members:

INVALID

ORT_DEVICE_ALLOCATOR

ORT_ARENA_ALLOCATOR

__eq__(self: object, other: object) bool#
__getstate__(self: object) int#
__hash__(self: object) int#
__index__(self: onnxruntime.capi.onnxruntime_pybind11_state.OrtAllocatorType) int#
__init__(self: onnxruntime.capi.onnxruntime_pybind11_state.OrtAllocatorType, value: int) None#
__int__(self: onnxruntime.capi.onnxruntime_pybind11_state.OrtAllocatorType) int#
__members__ = {'INVALID': <OrtAllocatorType.INVALID: -1>, 'ORT_ARENA_ALLOCATOR': <OrtAllocatorType.ORT_ARENA_ALLOCATOR: 1>, 'ORT_DEVICE_ALLOCATOR': <OrtAllocatorType.ORT_DEVICE_ALLOCATOR: 0>}#
__ne__(self: object, other: object) bool#
__repr__(self: object) str#
__setstate__(self: onnxruntime.capi.onnxruntime_pybind11_state.OrtAllocatorType, state: int) None#
__str__()#

name(self: handle) -> str

property name#

ExecutionOrder#

class onnxruntime.capi._pybind_state.ExecutionOrder(self: onnxruntime.capi.onnxruntime_pybind11_state.ExecutionOrder, value: int)#

Members:

DEFAULT

PRIORITY_BASED

__eq__(self: object, other: object) bool#
__getstate__(self: object) int#
__hash__(self: object) int#
__index__(self: onnxruntime.capi.onnxruntime_pybind11_state.ExecutionOrder) int#
__init__(self: onnxruntime.capi.onnxruntime_pybind11_state.ExecutionOrder, value: int) None#
__int__(self: onnxruntime.capi.onnxruntime_pybind11_state.ExecutionOrder) int#
__members__ = {'DEFAULT': <ExecutionOrder.DEFAULT: 0>, 'PRIORITY_BASED': <ExecutionOrder.PRIORITY_BASED: 1>}#
__ne__(self: object, other: object) bool#
__repr__(self: object) str#
__setstate__(self: onnxruntime.capi.onnxruntime_pybind11_state.ExecutionOrder, state: int) None#
__str__()#

name(self: handle) -> str

property name#

ExecutionMode#

class onnxruntime.capi._pybind_state.ExecutionMode(self: onnxruntime.capi.onnxruntime_pybind11_state.ExecutionMode, value: int)#

Members:

ORT_SEQUENTIAL

ORT_PARALLEL

__eq__(self: object, other: object) bool#
__getstate__(self: object) int#
__hash__(self: object) int#
__index__(self: onnxruntime.capi.onnxruntime_pybind11_state.ExecutionMode) int#
__init__(self: onnxruntime.capi.onnxruntime_pybind11_state.ExecutionMode, value: int) None#
__int__(self: onnxruntime.capi.onnxruntime_pybind11_state.ExecutionMode) int#
__members__ = {'ORT_PARALLEL': <ExecutionMode.ORT_PARALLEL: 1>, 'ORT_SEQUENTIAL': <ExecutionMode.ORT_SEQUENTIAL: 0>}#
__ne__(self: object, other: object) bool#
__repr__(self: object) str#
__setstate__(self: onnxruntime.capi.onnxruntime_pybind11_state.ExecutionMode, state: int) None#
__str__()#

name(self: handle) -> str

property name#

GraphInfo#

class onnxruntime.capi._pybind_state.GraphInfo(self: onnxruntime.capi.onnxruntime_pybind11_state.GraphInfo)#

Information about split graphs for the frontend.

__init__(self: onnxruntime.capi.onnxruntime_pybind11_state.GraphInfo) None#

GraphOptimizationLevel#

class onnxruntime.capi._pybind_state.GraphOptimizationLevel(self: onnxruntime.capi.onnxruntime_pybind11_state.GraphOptimizationLevel, value: int)#

Members:

ORT_DISABLE_ALL

ORT_ENABLE_BASIC

ORT_ENABLE_EXTENDED

ORT_ENABLE_ALL

__eq__(self: object, other: object) bool#
__getstate__(self: object) int#
__hash__(self: object) int#
__index__(self: onnxruntime.capi.onnxruntime_pybind11_state.GraphOptimizationLevel) int#
__init__(self: onnxruntime.capi.onnxruntime_pybind11_state.GraphOptimizationLevel, value: int) None#
__int__(self: onnxruntime.capi.onnxruntime_pybind11_state.GraphOptimizationLevel) int#
__members__ = {'ORT_DISABLE_ALL': <GraphOptimizationLevel.ORT_DISABLE_ALL: 0>, 'ORT_ENABLE_ALL': <GraphOptimizationLevel.ORT_ENABLE_ALL: 99>, 'ORT_ENABLE_BASIC': <GraphOptimizationLevel.ORT_ENABLE_BASIC: 1>, 'ORT_ENABLE_EXTENDED': <GraphOptimizationLevel.ORT_ENABLE_EXTENDED: 2>}#
__ne__(self: object, other: object) bool#
__repr__(self: object) str#
__setstate__(self: onnxruntime.capi.onnxruntime_pybind11_state.GraphOptimizationLevel, state: int) None#
__str__()#

name(self: handle) -> str

property name#