.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "gyexamples/plot_gbegin_cst.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        Click :ref:`here <sphx_glr_download_gyexamples_plot_gbegin_cst.py>`
        to download the full example code

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_gyexamples_plot_gbegin_cst.py:


Store arrays in one onnx graph
==============================

Once a model is converted, it can be useful to store an array as a
constant in the graph and retrieve it through an output. This allows
the user to store training parameters or other information such as a
vocabulary. The last sections show how to remove an output or to
promote an intermediate result to an output.

.. contents::
    :local:

Train and convert a model
+++++++++++++++++++++++++

The model is trained with :epkg:`scikit-learn` and converted with
:epkg:`skl2onnx`, but it could be produced by any other converter
library or downloaded from the :epkg:`ONNX Zoo`.

.. GENERATED FROM PYTHON SOURCE LINES 21-44

.. code-block:: default


    import pprint
    import numpy
    from onnx import load
    from onnxruntime import InferenceSession
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from skl2onnx import to_onnx
    from skl2onnx.helpers.onnx_helper import (
        add_output_initializer, select_model_inputs_outputs)

    data = load_iris()
    X, y = data.data.astype(numpy.float32), data.target
    X_train, X_test, y_train, y_test = train_test_split(X, y)
    model = LogisticRegression(penalty='elasticnet', C=2.,
                               solver='saga', l1_ratio=0.5)
    model.fit(X_train, y_train)

    onx = to_onnx(model, X_train[:1],
                  target_opset={'': 14, 'ai.onnx.ml': 2},
                  options={'zipmap': False})


.. rst-class:: sphx-glr-script-out

.. code-block:: none

    somewhere/workspace/onnxcustom/onnxcustom_UT_39_std/_venv/lib/python3.9/site-packages/sklearn/linear_model/_sag.py:350: ConvergenceWarning: The max_iter was reached which means the coef_ did not converge
      warnings.warn(


.. GENERATED FROM PYTHON SOURCE LINES 45-48

Add training parameter
++++++++++++++++++++++

.. GENERATED FROM PYTHON SOURCE LINES 48-54

.. code-block:: default


    new_onx = add_output_initializer(
        onx, ['C', 'l1_ratio'],
        [numpy.array([model.C]), numpy.array([model.l1_ratio])])


.. GENERATED FROM PYTHON SOURCE LINES 55-57

Inference
+++++++++

.. GENERATED FROM PYTHON SOURCE LINES 57-66

.. code-block:: default


    sess = InferenceSession(new_onx.SerializeToString(),
                            providers=['CPUExecutionProvider'])
    print("output names:", [o.name for o in sess.get_outputs()])
    res = sess.run(None, {'X': X_test[:2]})
    print("outputs")
    pprint.pprint(res)


.. rst-class:: sphx-glr-script-out

.. code-block:: none

    output names: ['label', 'probabilities', 'C', 'l1_ratio']
    outputs
    [array([1, 1], dtype=int64),
     array([[0.00451057, 0.84404397, 0.15144546],
            [0.01676851, 0.86485994, 0.11837152]], dtype=float32),
     array([2.]),
     array([0.5])]


.. GENERATED FROM PYTHON SOURCE LINES 67-77

The major drawback of this solution is that it increases the prediction
time, as onnxruntime copies the constants for every prediction. It is
possible either to store those constants in a separate ONNX graph or to
remove them.
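The first option can be sketched as follows. This snippet is not part of
the generated example: it builds a second, standalone ONNX graph with the
:epkg:`onnx` helpers whose only purpose is to return the training
parameters, so the prediction graph keeps only the outputs it needs. The
file name ``training_parameters.onnx`` is arbitrary.

.. code-block:: python

    import numpy
    from onnx import TensorProto, numpy_helper
    from onnx.helper import (
        make_graph, make_model, make_node, make_tensor_value_info)
    from onnxruntime import InferenceSession

    # Parameters to store, mirroring the values added above.
    params = {'C': numpy.array([2.]), 'l1_ratio': numpy.array([0.5])}

    nodes, outputs = [], []
    for name, value in params.items():
        # Each parameter becomes a Constant node feeding a graph output.
        nodes.append(make_node(
            'Constant', [], [name],
            value=numpy_helper.from_array(value, name=name + '_value')))
        outputs.append(make_tensor_value_info(
            name, TensorProto.DOUBLE, value.shape))

    # The graph has no input: running it simply returns the constants.
    params_onx = make_model(
        make_graph(nodes, 'training_parameters', [], outputs))

    sess_params = InferenceSession(params_onx.SerializeToString(),
                                   providers=['CPUExecutionProvider'])
    print(sess_params.run(None, {}))

    # The graph can be stored next to the prediction model.
    with open("training_parameters.onnx", "wb") as f:
        f.write(params_onx.SerializeToString())

With such a layout, the parameters are loaded only when they are actually
needed and the prediction graph is left untouched. The second option,
removing the extra outputs, is shown below.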
Select outputs
++++++++++++++

The next function removes unneeded outputs from a model, not only the
constants. The next model only keeps the probabilities.

.. GENERATED FROM PYTHON SOURCE LINES 77-90

.. code-block:: default


    simple_onx = select_model_inputs_outputs(new_onx, ['probabilities'])

    sess = InferenceSession(simple_onx.SerializeToString(),
                            providers=['CPUExecutionProvider'])
    print("output names:", [o.name for o in sess.get_outputs()])
    res = sess.run(None, {'X': X_test[:2]})
    print("outputs")
    pprint.pprint(res)

    # Function *select_model_inputs_outputs* can also promote an intermediate
    # result to an output.


.. rst-class:: sphx-glr-script-out

.. code-block:: none

    output names: ['probabilities']
    outputs
    [array([[0.00451057, 0.84404397, 0.15144546],
            [0.01676851, 0.86485994, 0.11837152]], dtype=float32)]


.. GENERATED FROM PYTHON SOURCE LINES 91-96

So far this example has only manipulated the ONNX graph in memory and
never saved or loaded a model. This can be done with the following
snippets of code.

Save a model
++++++++++++

.. GENERATED FROM PYTHON SOURCE LINES 96-100

.. code-block:: default


    with open("simplified_model.onnx", "wb") as f:
        f.write(simple_onx.SerializeToString())


.. GENERATED FROM PYTHON SOURCE LINES 101-103

Load a model
++++++++++++

.. GENERATED FROM PYTHON SOURCE LINES 103-113

.. code-block:: default


    model = load("simplified_model.onnx")

    sess = InferenceSession(model.SerializeToString(),
                            providers=['CPUExecutionProvider'])
    print("output names:", [o.name for o in sess.get_outputs()])
    res = sess.run(None, {'X': X_test[:2]})
    print("outputs")
    pprint.pprint(res)


.. rst-class:: sphx-glr-script-out

.. code-block:: none

    output names: ['probabilities']
    outputs
    [array([[0.00451057, 0.84404397, 0.15144546],
            [0.01676851, 0.86485994, 0.11837152]], dtype=float32)]


.. rst-class:: sphx-glr-timing

   **Total running time of the script:** ( 0 minutes  0.094 seconds)


.. _sphx_glr_download_gyexamples_plot_gbegin_cst.py:

.. only:: html

    .. container:: sphx-glr-footer sphx-glr-footer-example

        .. container:: sphx-glr-download sphx-glr-download-python

            :download:`Download Python source code: plot_gbegin_cst.py <plot_gbegin_cst.py>`

        .. container:: sphx-glr-download sphx-glr-download-jupyter

            :download:`Download Jupyter notebook: plot_gbegin_cst.ipynb <plot_gbegin_cst.ipynb>`

.. only:: html

    .. rst-class:: sphx-glr-signature

        `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_