{"cells": [{"cell_type": "markdown", "id": "95f7b5dd", "metadata": {}, "source": ["# Loss function in ONNX\n", "\n", "The following notebook show how to translate common loss function into ONNX."]}, {"cell_type": "code", "execution_count": 1, "id": "5d607e74", "metadata": {}, "outputs": [{"data": {"text/html": ["
\n", ""], "text/plain": [""]}, "execution_count": 2, "metadata": {}, "output_type": "execute_result"}], "source": ["from jyquickhelper import add_notebook_menu\n", "add_notebook_menu()"]}, {"cell_type": "code", "execution_count": 2, "id": "ca4a486a", "metadata": {}, "outputs": [], "source": ["from mlprodict.plotting.text_plot import onnx_simple_text_plot\n", "%load_ext mlprodict"]}, {"cell_type": "markdown", "id": "4a0a7baf", "metadata": {}, "source": ["## Square loss\n", "\n", "The first example shows how to use [onnx](https://github.com/onnx/onnx) API to represent the square loss function $E(X,Y) = \\sum_i(x_i-y_i)^2$ where $X=(x_i)$ and $Y=(y_i)$."]}, {"cell_type": "markdown", "id": "9a89aaa1", "metadata": {}, "source": ["### numpy function"]}, {"cell_type": "code", "execution_count": 3, "id": "0d1f4997", "metadata": {}, "outputs": [{"data": {"text/plain": ["array([0.5], dtype=float32)"]}, "execution_count": 4, "metadata": {}, "output_type": "execute_result"}], "source": ["import numpy\n", "\n", "\n", "def square_loss(X, Y):\n", " return numpy.sum((X - Y) ** 2, keepdims=1)\n", "\n", "\n", "x = numpy.array([0, 1, 2], dtype=numpy.float32)\n", "y = numpy.array([0.5, 1, 2.5], dtype=numpy.float32)\n", "square_loss(x, y)"]}, {"cell_type": "markdown", "id": "18d432b6", "metadata": {}, "source": ["### onnx version\n", "\n", "Following example is based on [onnx Python API](https://github.com/onnx/onnx/blob/main/docs/PythonAPIOverview.md), described with more detailed at [Introduction to onnx Python API](http://www.xavierdupre.fr/app/onnxcustom/helpsphinx/tutorials/tutorial_onnx/python.html)."]}, {"cell_type": "code", "execution_count": 4, "id": "6b75b7f0", "metadata": {}, "outputs": [], "source": ["from onnx.helper import make_node, make_graph, make_model, make_tensor_value_info\n", "from onnx import TensorProto\n", "\n", "nodes = [make_node('Sub', ['X', 'Y'], ['diff']),\n", " make_node('Mul', ['diff', 'diff'], ['diff2']),\n", " make_node('ReduceSum', ['diff2'], ['loss'])]\n", "\n", "graph = make_graph(nodes, 'square_loss',\n", " [make_tensor_value_info('X', TensorProto.FLOAT, [None]),\n", " make_tensor_value_info('Y', TensorProto.FLOAT, [None])],\n", " [make_tensor_value_info('loss', TensorProto.FLOAT, [None])])\n", "model = make_model(graph)\n", "del model.opset_import[:]\n", "opset = model.opset_import.add()\n", "opset.domain = ''\n", "opset.version = 14"]}, {"cell_type": "code", "execution_count": 5, "id": "47e630fe", "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["opset: domain='' version=14\n", "input: name='X' type=dtype('float32') shape=(0,)\n", "input: name='Y' type=dtype('float32') shape=(0,)\n", "Sub(X, Y) -> diff\n", " Mul(diff, diff) -> diff2\n", " ReduceSum(diff2) -> loss\n", "output: name='loss' type=dtype('float32') shape=(0,)\n"]}], "source": ["print(onnx_simple_text_plot(model))"]}, {"cell_type": "code", "execution_count": 6, "id": "dce31928", "metadata": {}, "outputs": [{"data": {"text/html": ["
\n", ""], "text/plain": [""]}, "execution_count": 7, "metadata": {}, "output_type": "execute_result"}], "source": ["%onnxview model"]}, {"cell_type": "markdown", "id": "8acb4fe8", "metadata": {}, "source": ["Let's check it gives the same results."]}, {"cell_type": "code", "execution_count": 7, "id": "0ffcf1a8", "metadata": {}, "outputs": [{"data": {"text/plain": ["[array([0.5], dtype=float32)]"]}, "execution_count": 8, "metadata": {}, "output_type": "execute_result"}], "source": ["from onnxruntime import InferenceSession\n", "sess = InferenceSession(model.SerializeToString())\n", "sess.run(None, {'X': x, 'Y': y})"]}, {"cell_type": "markdown", "id": "7e587692", "metadata": {}, "source": ["### second API from sklearn-onnx\n", "\n", "The previous API is quite verbose. [sklearn-onnx](https://onnx.ai/sklearn-onnx/) implements a more simple API to do it where every onnx operator is made available as a class. It was developped to speed up the implementation of converters for scikit-learn (see [sklearn-onnx](https://onnx.ai/sklearn-onnx/auto_tutorial/plot_icustom_converter.html))."]}, {"cell_type": "code", "execution_count": 8, "id": "4d123a45", "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["opset: domain='' version=14\n", "input: name='X' type=dtype('float32') shape=(0,)\n", "input: name='Y' type=dtype('float32') shape=(0,)\n", "Sub(X, Y) -> Su_C0\n", " Mul(Su_C0, Su_C0) -> Mu_C0\n", " ReduceSum(Mu_C0) -> Re_reduced0\n", "output: name='Re_reduced0' type=dtype('float32') shape=(1,)\n"]}], "source": ["from skl2onnx.algebra.onnx_ops import OnnxSub, OnnxMul, OnnxReduceSum\n", "\n", "diff = OnnxSub('X', 'Y')\n", "nodes = OnnxReduceSum(OnnxMul(diff, diff))\n", "model = nodes.to_onnx({'X': x, 'Y': y})\n", "\n", "print(onnx_simple_text_plot(model))"]}, {"cell_type": "code", "execution_count": 9, "id": "9bd6537a", "metadata": {}, "outputs": [{"data": {"text/plain": ["[array([0.5], dtype=float32)]"]}, "execution_count": 10, "metadata": {}, "output_type": "execute_result"}], "source": ["sess = InferenceSession(model.SerializeToString())\n", "sess.run(None, {'X': x, 'Y': y})"]}, {"cell_type": "markdown", "id": "e3073fb0", "metadata": {}, "source": ["As the previous example, this function only allows float32 arrays. It fails for any other type."]}, {"cell_type": "code", "execution_count": 10, "id": "1cd93361", "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["[ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Unexpected input data type. Actual: (tensor(double)) , expected: (tensor(float))\n"]}], "source": ["try:\n", " sess.run(None, {'X': x.astype(numpy.float64), \n", " 'Y': y.astype(numpy.float64)})\n", "except Exception as e:\n", " print(e)"]}, {"cell_type": "markdown", "id": "848a6e45", "metadata": {}, "source": ["### numpy API\n", "\n", "Second example is much more simple than the first one but it requires to know [ONNX operators](https://github.com/onnx/onnx/blob/main/docs/Operators.md). The most difficult type is about writing the signature. 
{"cell_type": "markdown", "id": "848a6e45", "metadata": {}, "source": ["### numpy API\n", "\n", "The second example is much simpler than the first one but it requires knowing the [ONNX operators](https://github.com/onnx/onnx/blob/main/docs/Operators.md). The most difficult part is writing the signature. In the following example, the function takes two arrays of the same type `T` and returns an array of the same type, `T` being any element type (float32, float64, int64, ...)."]}, {"cell_type": "code", "execution_count": 11, "id": "80a0e035", "metadata": {}, "outputs": [{"data": {"text/plain": ["array([0.5], dtype=float32)"]}, "execution_count": 12, "metadata": {}, "output_type": "execute_result"}], "source": ["from mlprodict.npy import onnxnumpy_np, NDArrayType\n", "import mlprodict.npy.numpy_onnx_impl as npnx\n", "\n", "@onnxnumpy_np(runtime='onnxruntime',\n", " signature=NDArrayType((\"T:all\", \"T\"), dtypes_out=('T',)))\n", "def onnx_square_loss(X, Y):\n", " return npnx.sum((X - Y) ** 2, keepdims=1)\n", "\n", "onnx_square_loss(x, y)"]}, {"cell_type": "markdown", "id": "fa274cae", "metadata": {}, "source": ["This API compiles an ONNX graph for every element type, so it works with float64 as well."]}, {"cell_type": "code", "execution_count": 12, "id": "b750a1ee", "metadata": {}, "outputs": [{"data": {"text/plain": ["array([0.5])"]}, "execution_count": 13, "metadata": {}, "output_type": "execute_result"}], "source": ["onnx_square_loss(x.astype(numpy.float64), y.astype(numpy.float64))"]}, {"cell_type": "markdown", "id": "1464244f", "metadata": {}, "source": ["That's why method `to_onnx` requires the element type to be specified before it can return the associated ONNX graph."]}, {"cell_type": "code", "execution_count": 13, "id": "9cc9ab3f", "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["opset: domain='' version=15\n", "input: name='X' type=dtype('float64') shape=()\n", "input: name='Y' type=dtype('float64') shape=()\n", "init: name='init' type=dtype('int64') shape=(0,) -- array([2], dtype=int64)\n", "Sub(X, Y) -> out_sub_0\n", " Pow(out_sub_0, init) -> out_pow_0\n", " ReduceSum(out_pow_0, keepdims=1) -> y\n", "output: name='y' type=dtype('float64') shape=()\n"]}], "source": ["onx = onnx_square_loss.to_onnx(key=numpy.float64)\n", "print(onnx_simple_text_plot(onx))"]}, {"cell_type": "markdown", "id": "f3a6e13c", "metadata": {}, "source": ["## log loss\n", "\n", "The log loss is defined as follows: $L(y, s) = (1 - y)\\log(1 - p(s)) + y \\log(p(s))$ where $p(s) = sigmoid(s) = \\frac{1}{1 + \\exp(-s)}$. Let's start with the numpy version."]}, {"cell_type": "markdown", "id": "fe59f7e2", "metadata": {}, "source": ["### numpy function"]}, {"cell_type": "code", "execution_count": 14, "id": "0d836772", "metadata": {}, "outputs": [{"name": "stderr", "output_type": "stream", "text": [":5: RuntimeWarning: divide by zero encountered in log\n", " ls = (1 - y) * numpy.log(1 - ps) + y * numpy.log(ps)\n"]}, {"data": {"text/plain": ["array([-inf], dtype=float32)"]}, "execution_count": 15, "metadata": {}, "output_type": "execute_result"}], "source": ["from scipy.special import expit\n", "\n", "def log_loss(y, s):\n", " ps = expit(-s)\n", " ls = (1 - y) * numpy.log(1 - ps) + y * numpy.log(ps)\n", " return numpy.sum(ls, keepdims=1)\n", "\n", "y = numpy.array([0, 1, 0, 1], dtype=numpy.float32)\n", "s = numpy.array([1e-50, 1e50, 0, 1], dtype=numpy.float32)\n", "log_loss(y, s)"]}, {"cell_type": "markdown", "id": "94d04fb8", "metadata": {}, "source": ["The function may return unexpected values because `log(0)` does not exist. 
The trick is usually to clip the value."]}, {"cell_type": "code", "execution_count": 15, "id": "72bc97ca", "metadata": {}, "outputs": [{"data": {"text/plain": ["array([-16.515066], dtype=float32)"]}, "execution_count": 16, "metadata": {}, "output_type": "execute_result"}], "source": ["def log_loss_clipped(y, s, eps=1e-6):\n", " ps = numpy.clip(expit(-s), eps, 1-eps)\n", " ls = (1 - y) * numpy.log(1 - ps) + y * numpy.log(ps)\n", " return numpy.sum(ls, keepdims=1)\n", "\n", "log_loss_clipped(y, s)"]}, {"cell_type": "markdown", "id": "48732418", "metadata": {}, "source": ["### numpy to onnx with onnx operators"]}, {"cell_type": "code", "execution_count": 16, "id": "27a13e36", "metadata": {"scrolled": false}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["opset: domain='' version=15\n", "input: name='Y' type=dtype('float32') shape=(0,)\n", "input: name='S' type=dtype('float32') shape=(0,)\n", "init: name='Su_Subcst' type=dtype('float32') shape=(1,) -- array([1.], dtype=float32)\n", "init: name='Cl_Clipcst' type=dtype('float32') shape=(1,) -- array([1.e-06], dtype=float32)\n", "init: name='Cl_Clipcst1' type=dtype('float32') shape=(1,) -- array([0.999999], dtype=float32)\n", "Identity(Su_Subcst) -> Su_Subcst1\n", "Neg(S) -> Ne_Y0\n", " Sigmoid(Ne_Y0) -> Si_Y0\n", " Clip(Si_Y0, Cl_Clipcst, Cl_Clipcst1) -> Cl_output0\n", " Sub(Su_Subcst1, Cl_output0) -> Su_C02\n", " Log(Su_C02) -> Lo_output0\n", "Sub(Su_Subcst, Y) -> Su_C0\n", " Mul(Su_C0, Lo_output0) -> Mu_C0\n", "Log(Cl_output0) -> Lo_output02\n", " Mul(Y, Lo_output02) -> Mu_C02\n", " Add(Mu_C0, Mu_C02) -> Ad_C0\n", " ReduceSum(Ad_C0, keepdims=1) -> Re_reduced0\n", "output: name='Re_reduced0' type=dtype('float32') shape=(1,)\n"]}], "source": ["from skl2onnx.algebra.onnx_ops import (\n", " OnnxClip, OnnxSigmoid, OnnxLog, OnnxAdd, OnnxSub, OnnxMul, OnnxNeg)\n", "\n", "eps = numpy.array([1e-6], dtype=numpy.float32)\n", "one = numpy.array([1], dtype=numpy.float32)\n", "\n", "ps = OnnxClip(OnnxSigmoid(OnnxNeg('S')), eps, 1-eps)\n", "ls1 = OnnxMul(OnnxSub(one, 'Y'), OnnxLog(OnnxSub(one, ps)))\n", "ls2 = OnnxMul('Y', OnnxLog(ps))\n", "nodes = OnnxReduceSum(OnnxAdd(ls1, ls2), keepdims=1)\n", "model = nodes.to_onnx({'Y': y, 'S': s})\n", "\n", "print(onnx_simple_text_plot(model))"]}, {"cell_type": "code", "execution_count": 17, "id": "c4bc9615", "metadata": {}, "outputs": [{"data": {"text/html": ["
\n", ""], "text/plain": [""]}, "execution_count": 18, "metadata": {}, "output_type": "execute_result"}], "source": ["%onnxview model"]}, {"cell_type": "code", "execution_count": 18, "id": "7cbe7cc7", "metadata": {}, "outputs": [{"data": {"text/plain": ["[array([-16.515068], dtype=float32)]"]}, "execution_count": 19, "metadata": {}, "output_type": "execute_result"}], "source": ["sess = InferenceSession(model.SerializeToString())\n", "sess.run(None, {'Y': y, 'S': s})"]}, {"cell_type": "markdown", "id": "bc335862", "metadata": {}, "source": ["Same results."]}, {"cell_type": "markdown", "id": "4803d9e5", "metadata": {}, "source": ["### Back to onnx API\n", "\n", "Coding the previous graph would take too much time but it is still possible to build it from the ONNX graph we just got."]}, {"cell_type": "code", "execution_count": 19, "id": "02887c2d", "metadata": {"scrolled": false}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["import numpy\n", "from onnx import numpy_helper, TensorProto\n", "from onnx.helper import (\n", " make_model, make_node, set_model_props, make_tensor, make_graph,\n", " make_tensor_value_info)\n", "\n", "\n", "def create_model():\n", " '''\n", " Converted ``OnnxReduceSum``.\n", "\n", " * producer: skl2onnx\n", " * version: 0\n", " * description: \n", " '''\n", " # subgraphs\n", "\n", " # containers\n", " print('[containers]') # verbose\n", " initializers = []\n", " nodes = []\n", " inputs = []\n", " outputs = []\n", "\n", " # opsets\n", " print('[opsets]') # verbose\n", " opsets = {'': 15}\n", " target_opset = 15 # subgraphs\n", " print('[subgraphs]') # verbose\n", "\n", " # initializers\n", " print('[initializers]') # verbose\n", "\n", " list_value = [1.0]\n", " value = numpy.array(list_value, dtype=numpy.float32)\n", "\n", " tensor = numpy_helper.from_array(value, name='i0')\n", " initializers.append(tensor)\n", "\n", " list_value = [9.999999974752427e-07]\n", " value = numpy.array(list_value, dtype=numpy.float32)\n", "\n", " tensor = numpy_helper.from_array(value, name='i1')\n", " initializers.append(tensor)\n", "\n", " list_value = [0.9999989867210388]\n", " value = numpy.array(list_value, dtype=numpy.float32)\n", "\n", " tensor = numpy_helper.from_array(value, name='i2')\n", " initializers.append(tensor)\n", "\n", " # inputs\n", " print('[inputs]') # verbose\n", "\n", " value = make_tensor_value_info('Y', 1, [None])\n", " inputs.append(value)\n", "\n", " value = make_tensor_value_info('S', 1, [None])\n", " inputs.append(value)\n", "\n", " # outputs\n", " print('[outputs]') # verbose\n", "\n", " value = make_tensor_value_info('Re_reduced0', 1, [1])\n", " outputs.append(value)\n", "\n", " # nodes\n", " print('[nodes]') # verbose\n", "\n", " node = make_node(\n", " 'Neg',\n", " ['S'],\n", " ['r0'],\n", " name='n0', domain='')\n", " nodes.append(node)\n", "\n", " node = make_node(\n", " 'Sub',\n", " ['i0', 'Y'],\n", " ['r1'],\n", " name='n1', domain='')\n", " nodes.append(node)\n", "\n", " node = make_node(\n", " 'Identity',\n", " ['i0'],\n", " ['r2'],\n", " name='n2', domain='')\n", " nodes.append(node)\n", "\n", " node = make_node(\n", " 'Sigmoid',\n", " ['r0'],\n", " ['r3'],\n", " name='n3', domain='')\n", " nodes.append(node)\n", "\n", " node = make_node(\n", " 'Clip',\n", " ['r3', 'i1', 'i2'],\n", " ['r4'],\n", " name='n4', domain='')\n", " nodes.append(node)\n", "\n", " node = make_node(\n", " 'Sub',\n", " ['r2', 'r4'],\n", " ['r5'],\n", " name='n5', domain='')\n", " nodes.append(node)\n", "\n", " node = make_node(\n", " 'Log',\n", " ['r4'],\n", 
" ['r6'],\n", " name='n6', domain='')\n", " nodes.append(node)\n", "\n", " node = make_node(\n", " 'Log',\n", " ['r5'],\n", " ['r7'],\n", " name='n7', domain='')\n", " nodes.append(node)\n", "\n", " node = make_node(\n", " 'Mul',\n", " ['Y', 'r6'],\n", " ['r8'],\n", " name='n8', domain='')\n", " nodes.append(node)\n", "\n", " node = make_node(\n", " 'Mul',\n", " ['r1', 'r7'],\n", " ['r9'],\n", " name='n9', domain='')\n", " nodes.append(node)\n", "\n", " node = make_node(\n", " 'Add',\n", " ['r9', 'r8'],\n", " ['r10'],\n", " name='n10', domain='')\n", " nodes.append(node)\n", "\n", " node = make_node(\n", " 'ReduceSum',\n", " ['r10'],\n", " ['Re_reduced0'],\n", " name='n11', keepdims=1, domain='')\n", " nodes.append(node)\n", "\n", " # graph\n", " print('[graph]') # verbose\n", " graph = make_graph(nodes, 'OnnxReduceSum', inputs, outputs, initializers)\n", " # '8'\n", "\n", " onnx_model = make_model(graph)\n", " onnx_model.ir_version = 8\n", " onnx_model.producer_name = 'skl2onnx'\n", " onnx_model.producer_version = ''\n", " onnx_model.domain = 'ai.onnx'\n", " onnx_model.model_version = 0\n", " onnx_model.doc_string = ''\n", " set_model_props(onnx_model, {})\n", "\n", " # opsets\n", " print('[opset]') # verbose\n", " del onnx_model.opset_import[:] # pylint: disable=E1101\n", " for dom, value in opsets.items():\n", " op_set = onnx_model.opset_import.add()\n", " op_set.domain = dom\n", " op_set.version = value\n", "\n", " return onnx_model\n", "\n", "\n", "onnx_model = create_model()\n", "\n"]}], "source": ["from mlprodict.onnx_tools.onnx_export import export2onnx\n", "from mlprodict.onnx_tools.onnx_manipulations import onnx_rename_names\n", "print(export2onnx(onnx_rename_names(model)))"]}, {"cell_type": "markdown", "id": "e3c56dcc", "metadata": {}, "source": ["### numpy to onnx with numpy API"]}, {"cell_type": "code", "execution_count": 20, "id": "aaa31f99", "metadata": {"scrolled": false}, "outputs": [{"data": {"text/plain": ["array([-16.515068], dtype=float32)"]}, "execution_count": 21, "metadata": {}, "output_type": "execute_result"}], "source": ["@onnxnumpy_np(runtime='onnxruntime',\n", " signature=NDArrayType((\"T:all\", \"T\"), dtypes_out=('T',)),\n", " op_version=15)\n", "def onnx_log_loss(y, s, eps=1e-6):\n", "\n", " one = numpy.array([1], dtype=s.dtype)\n", " ceps = numpy.array([eps], dtype=s.dtype)\n", " \n", " ps = npnx.clip(npnx.expit(-s), ceps, one-ceps)\n", " ls = (one - y) * npnx.log(one - ps) + y * npnx.log(ps)\n", " return npnx.sum(ls, keepdims=1)\n", "\n", "onnx_log_loss(y, s, eps=1e-6)"]}, {"cell_type": "code", "execution_count": 21, "id": "b0c797bb", "metadata": {}, "outputs": [{"data": {"text/plain": ["array([-11.909897], dtype=float32)"]}, "execution_count": 22, "metadata": {}, "output_type": "execute_result"}], "source": ["onnx_log_loss(y, s, eps=1e-4)"]}, {"cell_type": "markdown", "id": "73872dc9", "metadata": {}, "source": ["The implementation is slightly different from the numpy implementation. `1 - y` cannot be used because 1 is an integer and the function needs to know if it is a integer 32 or 64. `numpy.array([1], dtype=s.dtype) - y` is better in this case to avoid any ambiguity on the type of constant `1`. That may be revisited in the future. The named argument is part of the ONNX graph as an initializer. An new graph is generated every time the function sees a new value. 
That explains why the following instruction cannot return a single ONNX graph as there is more than one:"]}, {"cell_type": "code", "execution_count": 22, "id": "675a0bf7", "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["Unable to find signature with key= among [FctVersion((numpy.float32,numpy.float32), (1e-06,)), FctVersion((numpy.float32,numpy.float32), (0.0001,))] found=[(FctVersion((numpy.float32,numpy.float32), (1e-06,)), ), (FctVersion((numpy.float32,numpy.float32), (0.0001,)), )].\n"]}], "source": ["try:\n", " onnx_log_loss.to_onnx(key=numpy.float32)\n", "except Exception as e:\n", " print(e)"]}, {"cell_type": "markdown", "id": "9ec35ead", "metadata": {}, "source": ["Let's see the list of available graphs:"]}, {"cell_type": "code", "execution_count": 23, "id": "1bf3c9c5", "metadata": {}, "outputs": [{"data": {"text/plain": ["[FctVersion((numpy.float32,numpy.float32), (1e-06,)),\n", " FctVersion((numpy.float32,numpy.float32), (0.0001,))]"]}, "execution_count": 24, "metadata": {}, "output_type": "execute_result"}], "source": ["list(onnx_log_loss.signed_compiled)"]}, {"cell_type": "markdown", "id": "a1628f0c", "metadata": {}, "source": ["Let's pick the first one."]}, {"cell_type": "code", "execution_count": 24, "id": "f1308149", "metadata": {}, "outputs": [], "source": ["from mlprodict.npy import FctVersion\n", "onx = onnx_log_loss.to_onnx(key=FctVersion((numpy.float32,numpy.float32), (1e-06,)))"]}, {"cell_type": "code", "execution_count": 25, "id": "ab735b64", "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["opset: domain='' version=15\n", "input: name='y' type=dtype('float32') shape=()\n", "input: name='s' type=dtype('float32') shape=()\n", "init: name='init' type=dtype('float32') shape=(0,) -- array([1.e-06], dtype=float32)\n", "init: name='init_1' type=dtype('float32') shape=(0,) -- array([0.999999], dtype=float32)\n", "init: name='init_2' type=dtype('float32') shape=(0,) -- array([1.], dtype=float32)\n", "Neg(s) -> out_neg_0\n", " Sigmoid(out_neg_0) -> out_sig_0\n", " Clip(out_sig_0, init, init_1) -> out_cli_0\n", " Sub(init_2, out_cli_0) -> out_sub_0\n", " Log(out_sub_0) -> out_log_0_1\n", " Log(out_cli_0) -> out_log_0\n", " Mul(y, out_log_0) -> out_mul_0\n", "Sub(init_2, y) -> out_sub_0_1\n", " Mul(out_sub_0_1, out_log_0_1) -> out_mul_0_1\n", " Add(out_mul_0_1, out_mul_0) -> out_add_0\n", " ReduceSum(out_add_0, keepdims=1) -> z\n", "output: name='z' type=dtype('float32') shape=()\n"]}], "source": ["print(onnx_simple_text_plot(onx))"]}, {"cell_type": "markdown", "id": "264bae63", "metadata": {}, "source": ["### not a loss but a lag, something difficult to write with onnx"]}, {"cell_type": "code", "execution_count": 26, "id": "5af594a9", "metadata": {}, "outputs": [{"data": {"text/plain": ["array([[ 4., 4.],\n", " [ 8., 18.]], dtype=float32)"]}, "execution_count": 27, "metadata": {}, "output_type": "execute_result"}], "source": ["@onnxnumpy_np(runtime='onnxruntime',\n", " signature=NDArrayType((\"T:all\", ), dtypes_out=('T',)))\n", "def lagged(x, lag=2):\n", " return x[lag:] - x[:-lag]\n", "\n", "x = numpy.array([[0, 1], [2, 3], [4, 5], [10, 21]], dtype=numpy.float32)\n", "lagged(x)"]},
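{"cell_type": "markdown", "id": "c0ffee05", "metadata": {}, "source": ["As a quick check, the compiled function should return the same values as the plain numpy slicing it was traced from."]}, {"cell_type": "code", "execution_count": null, "id": "c0ffee06", "metadata": {}, "outputs": [], "source": ["# the ONNX version and the original numpy slicing should match\n", "lagged(x), x[2:] - x[:-2]"]},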
{"cell_type": "code", "execution_count": 27, "id": "897e0254", "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["opset: domain='' version=15\n", "input: name='x' type=dtype('float32') shape=()\n", "init: name='init' type=dtype('int64') shape=(0,) -- array([0], dtype=int64)\n", "init: name='init_2' type=dtype('int64') shape=(0,) -- array([-2], dtype=int64)\n", "init: name='init_4' type=dtype('int64') shape=(0,) -- array([2], dtype=int64)\n", "Shape(x) -> out_sha_0\n", " Gather(out_sha_0, init) -> out_gat_0\n", " Slice(x, init_4, out_gat_0, init) -> out_sli_0_1\n", "Slice(x, init, init_2, init) -> out_sli_0\n", " Sub(out_sli_0_1, out_sli_0) -> y\n", "output: name='y' type=dtype('float32') shape=()\n"]}], "source": ["print(onnx_simple_text_plot(lagged.to_onnx(key=numpy.float32)))"]}, {"cell_type": "code", "execution_count": 28, "id": "9356da21", "metadata": {}, "outputs": [{"data": {"text/html": ["\n", ""], "text/plain": [""]}, "execution_count": 29, "metadata": {}, "output_type": "execute_result"}], "source": ["%onnxview lagged.to_onnx(key=numpy.float32)"]},
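{"cell_type": "markdown", "id": "c0ffee07", "metadata": {}, "source": ["The graph returned by `to_onnx` is a regular ONNX model, so it can also be loaded directly with onnxruntime. This is a small additional check; the input name `x` comes from the graph displayed above."]}, {"cell_type": "code", "execution_count": null, "id": "c0ffee08", "metadata": {}, "outputs": [], "source": ["# run the generated graph outside of mlprodict\n", "sess_lag = InferenceSession(\n", " lagged.to_onnx(key=numpy.float32).SerializeToString())\n", "sess_lag.run(None, {'x': x})"]},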
\n", ""], "text/plain": [""]}, "execution_count": 29, "metadata": {}, "output_type": "execute_result"}], "source": ["%onnxview lagged.to_onnx(key=numpy.float32)"]}, {"cell_type": "code", "execution_count": 29, "id": "43acde20", "metadata": {}, "outputs": [], "source": []}], "metadata": {"kernelspec": {"display_name": "Python 3", "language": "python", "name": "python3"}, "language_info": {"codemirror_mode": {"name": "ipython", "version": 3}, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.5"}}, "nbformat": 4, "nbformat_minor": 5}