Training Tutorial
The tutorial assumes an ONNX graph has already been saved and introduces two ways to train this model, assuming a gradient can be computed for every node of the graph.
The first part looks into the first API of onnxruntime-training, based on the class TrainingSession. This class assumes the loss function is part of the graph to train; the tutorial shows how to do that.
The second part relies on the class TrainingAgent. It builds a new ONNX graph to compute the gradient. This design gives the user more freedom but requires writing more code to implement the whole training loop.
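To make the extra freedom concrete, this hedged numpy sketch shows what such a gradient graph computes for the same squared-error loss: the user drives the forward pass, the backward pass and the weight update explicitly. The data, learning rate and iteration count are illustrative.

```python
import numpy

rng = numpy.random.default_rng(0)
X = rng.normal(size=(5, 2)).astype(numpy.float32)
label = rng.normal(size=(5, 1)).astype(numpy.float32)
coef = numpy.zeros((2, 1), dtype=numpy.float32)

for _ in range(500):
    # forward pass: prediction and loss = sum((Y - label)^2)
    Y = X @ coef
    diff = Y - label
    loss = float((diff ** 2).sum())
    # backward pass: d loss / d coef = 2 * X^T (Y - label)
    grad = 2 * X.T @ diff
    # explicit gradient-descent update, done by the user
    coef -= 0.01 * grad

print(loss)
```

With TrainingAgent, the forward and backward computations are carried out by the generated ONNX graphs instead of numpy, but the structure of the loop stays in the user's hands.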
Both parts rely on classes implemented by this package (onnxcustom) to simplify the code.
The tutorial was tested with the following versions:
<<<
import sys
import numpy
import scipy
import onnx
import onnxruntime
import lightgbm
import xgboost
import sklearn
import onnxconverter_common
import onnxmltools
import skl2onnx
import pyquickhelper
import mlprodict
import onnxcustom
import torch
print("python {}".format(sys.version_info))
mods = [numpy, scipy, sklearn, lightgbm, xgboost,
        onnx, onnxmltools, onnxruntime, onnxcustom,
        onnxconverter_common,
        skl2onnx, mlprodict, pyquickhelper,
        torch]
mods = [(m.__name__, m.__version__) for m in mods]
mx = max(len(_[0]) for _ in mods) + 1
for name, vers in sorted(mods):
    print("{}{}{}".format(name, " " * (mx - len(name)), vers))
>>>
python sys.version_info(major=3, minor=9, micro=1, releaselevel='final', serial=0)
lightgbm 3.3.2
mlprodict 0.8.1674
numpy 1.22.1
onnx 1.10.2
onnxconverter_common 1.10.0
onnxcustom 0.4.274
onnxmltools 1.10.0
onnxruntime 1.11.993+cpu
pyquickhelper 1.11.3701
scipy 1.7.3
skl2onnx 1.10.4
sklearn 1.0.2
torch 1.10.2+cu102
xgboost 1.5.2