# onnxruntime markdown documentation rendered with Sphinx
The full documentation is available at onnxruntime.ai/docs, including the Python API. The following pages render the markdown documentation.
## Overview
- ONNX Runtime Roadmap
- Privacy
- Build ONNX Runtime Server on Linux
- How to Use ONNX Runtime Server for Prediction
- FAQ
- Supported Operators and Data Types
- Execution Providers
  - Operators implemented by CPUExecutionProvider
  - Operators implemented by CUDAExecutionProvider
  - Operators implemented by DmlExecutionProvider
## Versions
## Contributing
- ONNX Runtime coding conventions and standards
  - Global Variables
  - Thread Local variables
  - No undefined symbols
  - Default visibility and how to export a symbol
  - Static initialization order problem
- Guidelines for creating a good pull request
  - Get the test data
  - Compile onnx_test_runner and run the tests
- Notes on Threading in ORT
- Python Dev Notes
## C API
- How to update ONNX
- ORT API Guidelines
- Scope the impact to a minimum
- Static library order matters
- Don’t call target_link_libraries on static libraries
- Every Linux program (and shared library) should link to libpthread and libatomic
- Don’t use the “-pthread” flag directly.
- CUDA projects should use the new cmake CUDA approach
- Basics of cross-compiling
- How to determine the host CPU architecture (the one cmake is running on)
- How to determine what CPU architecture (or architectures) you are building for
- ONNXRuntime Extensions
- Contrib Operator Schemas
- com.microsoft