Funny discrepancies#

The sigmoid function is sig(x) = \frac{1}{1 + e^{-x}}. For very small or very large values, implementations have to approximate it, and they do not all do it the same way. It is usually a tradeoff between precision and computation time… It is always a tradeoff.
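As an illustration of why an approximation is needed at all, here is a small sketch (not any library's actual implementation): in float32, the naive formula overflows for very negative inputs, while an algebraically equivalent form survives.

```python
import numpy

x = numpy.array([-100.0], dtype=numpy.float32)

# Naive formula: e^{-x} overflows float32 for very negative x,
# so the denominator becomes inf and 1 / (1 + inf) collapses to 0.
with numpy.errstate(over='ignore'):
    naive = 1 / (1 + numpy.exp(-x))

# Equivalent form e^{x} / (1 + e^{x}): for x < 0, e^{x} underflows
# gracefully to a tiny (possibly subnormal) value instead of overflowing.
stable = numpy.exp(x) / (1 + numpy.exp(x))

print(naive, stable)
```

Both expressions are mathematically identical; only their floating-point behaviour differs, which is exactly the kind of discrepancy this example explores.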

Precision#

This section compares the precision of a few implementations of the sigmoid function. The custom implementation relies on a Taylor expansion of the exponential function: e^x \sim 1 + x + \frac{x^2}{2} + ... + \frac{x^n}{n!} + o(x^n).

import time
import numpy
import pandas
from tqdm import tqdm
from scipy.special import expit

from skl2onnx.algebra.onnx_ops import OnnxSigmoid
from skl2onnx.common.data_types import FloatTensorType
from onnxruntime import InferenceSession
from mlprodict.onnxrt import OnnxInference
from onnxcustom import get_max_opset
import matplotlib.pyplot as plt

one = numpy.array([1], dtype=numpy.float64)


def taylor_approximation_exp(x, degre=50):
    # Partial sum of the Taylor series of exp(x):
    # 1 + x + x^2/2! + ... + x^degre/degre!
    y = numpy.ones(x.shape, dtype=x.dtype)
    a = numpy.ones(x.shape, dtype=x.dtype)
    for i in range(1, degre + 1):
        a *= x / i
        y += a
    return y


def taylor_sigmoid(x, degre=50):
    # sig(x) = 1 / (1 + e^{-x}), with e^{-x} replaced by its Taylor expansion.
    den = one + taylor_approximation_exp(-x, degre)
    return one / den
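As a sanity check before running the precision experiment, a self-contained version of the Taylor-based sigmoid (with the constant term of the series included) can be compared against scipy's expit on a few moderate values. The helper names below are local to this sketch:

```python
import numpy
from scipy.special import expit


def taylor_exp(x, degree=50):
    # Partial sum 1 + x + x^2/2! + ... + x^degree/degree! of exp's series.
    y = numpy.ones(x.shape, dtype=x.dtype)
    term = numpy.ones(x.shape, dtype=x.dtype)
    for i in range(1, degree + 1):
        term *= x / i
        y += term
    return y


def taylor_sigmoid(x, degree=50):
    # sig(x) = 1 / (1 + e^{-x}) with the Taylor approximation of e^{-x}.
    return 1.0 / (1.0 + taylor_exp(-x, degree))


x = numpy.array([-5.0, 0.0, 5.0], dtype=numpy.float64)
print(taylor_sigmoid(x))
print(expit(x))
```

With 50 terms the two agree to many digits on this range; the discrepancies only appear far in the tails, which is what the experiment below measures.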


opset = get_max_opset()
N = 300
min_values = [-20 + float(i) * 10 / N for i in range(N)]
data = numpy.array([0], dtype=numpy.float32)

node = OnnxSigmoid('X', op_version=opset, output_names=['Y'])
onx = node.to_onnx({'X': FloatTensorType()},
                   {'Y': FloatTensorType()},
                   target_opset=opset)
rts = ['numpy', 'python', 'onnxruntime', 'taylor20', 'taylor40']

oinf = OnnxInference(onx)
sess = InferenceSession(onx.SerializeToString())

graph = []
for mv in tqdm(min_values):
    data[0] = mv
    for rt in rts:
        lab = ""
        if rt == 'numpy':
            y = expit(data)
        elif rt == 'python':
            y = oinf.run({'X': data})['Y']
            # scaled by 1.2 so the curves do not overlap
            y *= 1.2
            lab = "x1.2"
        elif rt == 'onnxruntime':
            y = sess.run(None, {'X': data})[0]
        elif rt == 'taylor40':
            y = taylor_sigmoid(data, 40)
            # scaled by 0.8 so the curves do not overlap
            y *= 0.8
            lab = "x0.8"
        elif rt == 'taylor20':
            y = taylor_sigmoid(data, 20)
            # scaled by 0.6 so the curves do not overlap
            y *= 0.6
            lab = "x0.6"
        else:
            raise AssertionError(f"Unknown runtime {rt!r}.")
        value = y[0]
        graph.append(dict(rt=rt + lab, x=mv, y=value))

Graph.

_, ax = plt.subplots(1, 1, figsize=(12, 4))
df = pandas.DataFrame(graph)
piv = df.pivot(index='x', columns='rt', values='y')
print(piv.T.head())
piv.plot(ax=ax, logy=True)
Figure: plot funny sigmoid.
x               -20.000000    -19.966667  ...  -10.066667  -10.033333
rt                                        ...
numpy         2.061154e-09  2.131016e-09  ...    0.000042    0.000044
onnxruntime  -5.960464e-08 -5.960464e-08  ...    0.000042    0.000044
pythonx1.2    2.473385e-09  2.557219e-09  ...    0.000051    0.000053
taylor20x0.6  2.211963e-09  2.274888e-09  ...    0.000026    0.000026
taylor40x0.8  1.648965e-09  1.704854e-09  ...    0.000034    0.000035

[5 rows x 300 columns]

<AxesSubplot:xlabel='x'>

log(sig(x)) = -\log(1 + e^{-x}). When x is very negative, e^{-x} dominates the denominator, so log(sig(x)) \sim x. That explains the straight lines on the log-scale graph. We also see that onnxruntime is less precise for these values. What is the benefit?
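This asymptotic behaviour is easy to verify numerically with scipy's expit; a short sketch:

```python
import numpy
from scipy.special import expit

# On the range studied above, log(sigmoid(x)) is essentially x:
# sigmoid(x) = 1 / (1 + e^{-x}) ~ e^{x} when x is very negative.
x = numpy.linspace(-20, -10, 5)
print(numpy.log(expit(x)))
print(x)
```

The two printed arrays match to several digits, which is why each sigmoid curve looks like a straight line on the log-scale plot.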

Computation time#

graph = []
for mv in tqdm(min_values):
    data = numpy.array([mv] * 10000, dtype=numpy.float32)
    for rt in rts:
        begin = time.perf_counter()
        if rt == 'numpy':
            y = expit(data)
        elif rt == 'python':
            y = oinf.run({'X': data})['Y']
        elif rt == 'onnxruntime':
            y = sess.run(None, {'X': data})[0]
        elif rt == 'taylor40':
            y = taylor_sigmoid(data, 40)
        elif rt == 'taylor20':
            y = taylor_sigmoid(data, 20)
        else:
            raise AssertionError(f"Unknown runtime {rt!r}.")
        duration = time.perf_counter() - begin
        graph.append(dict(rt=rt, x=mv, y=duration))

_, ax = plt.subplots(1, 1, figsize=(12, 4))
df = pandas.DataFrame(graph)
piv = df.pivot(index='x', columns='rt', values='y')
piv.plot(ax=ax, logy=True)
Figure: plot funny sigmoid.

<AxesSubplot:xlabel='x'>

Conclusion#

The implementation from onnxruntime is faster but much less accurate at the extremes. That explains why probabilities may differ significantly when an observation is far from every classification border: in that case, the onnxruntime implementation of the sigmoid returns 0 where scipy's expit returns a small but smooth value. The probabilities of a logistic regression are obtained by transforming the raw scores with the sigmoid function and then normalizing. If the raw scores are very negative, the sum of probabilities becomes zero with onnxruntime and the normalization fails.
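A minimal sketch of that failure mode (normalized_probabilities is a hypothetical helper written for this illustration, not scikit-learn's actual code): when every per-class sigmoid output is flushed to zero, the normalizing sum is zero and the division yields NaN.

```python
import numpy


def normalized_probabilities(sigmoid_scores):
    # One-vs-rest style probabilities: divide each per-class sigmoid
    # output by the sum over all classes.
    return sigmoid_scores / sigmoid_scores.sum()


# With a smooth sigmoid, tiny but nonzero outputs still normalize fine.
smooth = numpy.array([2.1e-9, 1.3e-9, 0.9e-9])
print(normalized_probabilities(smooth))

# If the runtime's sigmoid flushes every score to exactly zero,
# the sum is zero and 0/0 produces NaN: the normalization fails.
flushed = numpy.zeros(3)
with numpy.errstate(invalid='ignore'):
    print(normalized_probabilities(flushed))  # prints [nan nan nan]
```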

# plt.show()

Total running time of the script: ( 0 minutes 4.594 seconds)

Gallery generated by Sphinx-Gallery