Forward backward on a neural network on GPU#

This example extends Train a linear regression with onnxruntime-training on GPU in details to train a neural network from scikit-learn on GPU. It reuses the code introduced in Train a linear regression with forward backward.

A neural network with scikit-learn#

import warnings
import numpy
from pandas import DataFrame
from onnxruntime import get_device
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error
from onnxcustom.plotting.plotting_onnx import plot_onnxs
from mlprodict.onnx_conv import to_onnx
from onnxcustom.utils.orttraining_helper import get_train_initializer
from onnxcustom.utils.onnx_helper import onnx_rename_weights
from onnxcustom.training.optimizers_partial import (
    OrtGradientForwardBackwardOptimizer)


X, y = make_regression(1000, n_features=10, bias=2)
X = X.astype(numpy.float32)
y = y.astype(numpy.float32)
X_train, X_test, y_train, y_test = train_test_split(X, y)

nn = MLPRegressor(hidden_layer_sizes=(10, 10), max_iter=100,
                  solver='sgd', learning_rate_init=5e-5,
                  n_iter_no_change=1000, batch_size=10, alpha=0,
                  momentum=0, nesterovs_momentum=False)

with warnings.catch_warnings():
    warnings.simplefilter('ignore')
    nn.fit(X_train, y_train)

print(nn.loss_curve_)

Out:

[15741.759466145833, 15646.286002604167, 15188.633118489583, 10890.362529296875, 2008.2755806477865, 404.16611053466795, 296.830053507487, 231.28159403483073, 191.87960815429688, 160.59067667643228, 131.52240915934246, 103.56474431355794, 78.44901677449545, 56.540050710042316, 41.54491817474365, 30.79826987584432, 23.609168230692546, 18.790011100769043, 15.5633185450236, 13.407399171193441, 11.781751489639282, 10.525839281082153, 9.51689311504364, 8.711665391921997, 7.988088870048523, 7.3841437753041586, 6.8041431554158525, 6.276253747940063, 5.769704437255859, 5.439427502155304, 5.037844480673472, 4.731258577505748, 4.430074865023295, 4.179087524414062, 3.930739171504974, 3.6941020472844444, 3.5069108414649963, 3.3016651364167533, 3.139927323659261, 2.9522842359542847, 2.820296740134557, 2.637568337917328, 2.4974668852488198, 2.3768914858500163, 2.245424398581187, 2.124481121699015, 2.022125627199809, 1.9291385988394418, 1.846208575765292, 1.7612703891595205, 1.6915117859840394, 1.6177992228666942, 1.5695338654518127, 1.507955147822698, 1.4328755231698354, 1.3896148673693338, 1.33270177145799, 1.293214137951533, 1.251545769572258, 1.2043571933110555, 1.158904716571172, 1.1235395208994547, 1.0839746056000392, 1.0606326242287953, 1.0281214362382889, 0.9934756517410278, 0.9651632934808732, 0.936517926454544, 0.9107225416103999, 0.8884509921073913, 0.8664345022042592, 0.843952547510465, 0.8207182945807775, 0.802018105884393, 0.7805648517608642, 0.7634014737606049, 0.7465715259313583, 0.7272158151865006, 0.707720771531264, 0.698722014327844, 0.6836602787176768, 0.6691327826182047, 0.6593955908219019, 0.6408519826332728, 0.6349814549088478, 0.6239300717910131, 0.613356528977553, 0.6014353368679682, 0.5914698906242848, 0.5812680383523305, 0.5684995593627294, 0.5572033104797204, 0.5482728111743927, 0.5404503508408864, 0.5288031204541525, 0.5193262363473574, 0.5126724702119827, 0.5004917035500208, 0.4964644128084183, 0.49088449199994405]

Score:

print("mean_squared_error=%r" % mean_squared_error(y_test, nn.predict(X_test)))

Out:

mean_squared_error=1.5291642

Conversion to ONNX#

onx = to_onnx(nn, X_train[:1].astype(numpy.float32), target_opset=15)
plot_onnxs(onx)

Out:

<AxesSubplot:>

Initializers to train

weights = list(sorted(get_train_initializer(onx)))
print(weights)

Out:

['coefficient', 'coefficient1', 'coefficient2', 'intercepts', 'intercepts1', 'intercepts2']
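The six initializers correspond to the three layers of the network: two hidden layers of 10 units plus the output layer, each contributing a coefficient matrix and an intercept vector. A minimal sketch of that bookkeeping, using only the layer sizes from the example above:

```python
# Layer sizes for MLPRegressor(hidden_layer_sizes=(10, 10)) trained on
# 10 features with a single regression output: 10 -> 10 -> 10 -> 1.
layer_sizes = [10, 10, 10, 1]

# Each consecutive pair of layers contributes one coefficient matrix and
# one intercept vector, hence 2 * (len(layer_sizes) - 1) initializers.
n_layers = len(layer_sizes) - 1
shapes = [(layer_sizes[i], layer_sizes[i + 1]) for i in range(n_layers)]
n_trainable = 2 * n_layers

print(shapes)       # [(10, 10), (10, 10), (10, 1)]
print(n_trainable)  # 6, matching the six initializers listed above
```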

Training graph with forward backward#

device = "cuda" if get_device().upper() == 'GPU' else 'cpu'

print("device=%r get_device()=%r" % (device, get_device()))

Out:

device='cpu' get_device()='CPU'

The training session. The first attempt fails for a surprising reason: class TrainingAgent expects the list of weights to train to be in alphabetical order. That means the initializers in onx.graph.initializer must be sorted by name, otherwise the process could crash unless the issue is caught earlier and raised as the following exception.

try:
    train_session = OrtGradientForwardBackwardOptimizer(
        onx, device=device, verbose=1,
        warm_start=False, max_iter=100, batch_size=10)
    train_session.fit(X, y)
except ValueError as e:
    print(e)

Out:

List of weights to train must be sorted but ['coefficient', 'intercepts', 'coefficient1', 'intercepts1', 'coefficient2', 'intercepts2'] is not. You shoud use function onnx_rename_weights to do that before calling this class.
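The check behind this exception can be sketched with plain Python: the weight names follow the graph order, which is not alphabetical here.

```python
# Weight names in graph order, as reported by the exception above.
names = ['coefficient', 'intercepts', 'coefficient1',
         'intercepts1', 'coefficient2', 'intercepts2']

# The training agent requires graph order == alphabetical order.
# Here 'intercepts' appears before 'coefficient1' in the graph,
# but sorts after it alphabetically.
is_sorted = names == sorted(names)
print(is_sorted)  # False
```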

Function onnx_rename_weights does not change the order of the initializers; it renames them so that their alphabetical order matches their order in the graph. Class TrainingAgent can then work.
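A minimal sketch of the renaming idea, assuming names are prefixed with their position in the graph, as suggested by the I0_ prefix visible in the optimizer repr printed after training (the real onnx_rename_weights also rewrites the graph nodes referencing these initializers):

```python
# Initializer names in graph order.
names = ['coefficient', 'intercepts', 'coefficient1',
         'intercepts1', 'coefficient2', 'intercepts2']

# Prefix each name with its index so alphabetical order matches graph
# order. With fewer than ten weights a plain digit is enough; more
# weights would require zero-padding ('I03_...') to keep sorting stable.
renamed = ['I%d_%s' % (i, n) for i, n in enumerate(names)]
print(renamed == sorted(renamed))  # True: order is now alphabetical
```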

onx = onnx_rename_weights(onx)
train_session = OrtGradientForwardBackwardOptimizer(
    onx, device=device, verbose=1,
    learning_rate=5e-5, warm_start=False, max_iter=100, batch_size=10)
train_session.fit(X, y)

Out:

  0%|          | 0/100 [00:00<?, ?it/s]
 25%|##5       | 25/100 [00:04<00:13,  5.67it/s]
 50%|#####     | 50/100 [00:08<00:08,  5.65it/s]
 75%|#######5  | 75/100 [00:13<00:04,  5.66it/s]
100%|##########| 100/100 [00:17<00:00,  5.65it/s]

OrtGradientForwardBackwardOptimizer(model_onnx='ir_version...', weights_to_train="['I0_coeff...", loss_output_name='loss', max_iter=100, training_optimizer_name='SGDOptimizer', batch_size=10, learning_rate=LearningRateSGD(eta0=5e-05, alpha=0.0001, power_t=0.25, learning_rate='invscaling'), value=1.5811388300841898e-05, device='cpu', warm_start=False, verbose=1, validation_every=10, learning_loss=SquareLearningLoss(), enable_logging=False, weight_name=None, learning_penalty=NoLearningPenalty(), exc=True)

Let’s see the weights.

state_tensors = train_session.get_state()

And the loss.

print(train_session.train_losses_)

df = DataFrame({'ort losses': train_session.train_losses_,
                'skl losses:': nn.loss_curve_})
df.plot(title="Train loss against iterations", logy=True)

Out:

[14757.961, 2795.818, 365.69388, 212.84926, 171.69537, 149.75917, 139.6222, 120.68387, 110.04181, 99.32865, 81.38614, 74.74188, 64.52174, 64.46679, 55.463505, 51.162663, 47.52701, 43.98369, 41.565464, 38.424576, 36.407345, 34.78314, 34.932644, 32.182434, 29.657887, 27.84437, 30.799726, 26.41726, 25.97414, 24.410994, 22.159937, 22.646172, 21.223955, 21.442669, 19.928272, 18.194735, 20.56925, 18.521254, 18.958378, 18.67849, 18.179785, 17.945662, 17.127691, 15.49264, 18.64062, 15.970777, 16.294214, 15.174623, 14.231288, 14.736508, 14.139644, 14.6062975, 14.243106, 13.60184, 13.264147, 13.002558, 12.521936, 11.905208, 12.200501, 11.854286, 11.862492, 12.131491, 11.218429, 11.432032, 12.222882, 11.359291, 10.914027, 10.063329, 10.464721, 11.315568, 10.867605, 10.200568, 9.220118, 10.455598, 9.499249, 10.590823, 10.703003, 9.602959, 8.820037, 9.074763, 10.023335, 8.701785, 8.484463, 9.125918, 8.803666, 8.444618, 8.187214, 8.259434, 9.376672, 8.141559, 9.066541, 8.915678, 8.554631, 8.278072, 7.6390476, 6.9461083, 8.041873, 7.7092333, 8.817824, 7.824349]

<AxesSubplot:title={'center':'Train loss against iterations'}>

The convergence rates differ because the two classes do not update the learning rate the same way.
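scikit-learn was run with its default constant schedule (learning_rate_init=5e-5 on every batch), while the optimizer repr above reports learning_rate='invscaling' with eta0=5e-05 and power_t=0.25. Assuming the schedule follows scikit-learn's documented invscaling formula eta = eta0 / t ** power_t, a short sketch shows how the rate decays:

```python
# scikit-learn's 'invscaling' schedule: eta = eta0 / t ** power_t.
# Hyperparameters taken from the optimizer repr above.
eta0, power_t = 5e-5, 0.25

def invscaling(t, eta0=eta0, power_t=power_t):
    return eta0 / t ** power_t

# The rate shrinks as iterations accumulate; at t=100 this formula
# gives ~1.5811e-05, consistent with value=1.5811388300841898e-05
# reported in the repr after 100 iterations.
print([invscaling(t) for t in (1, 10, 100)])
```

A decaying rate takes smaller steps late in training than scikit-learn's constant rate, which explains the different loss profiles.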

# import matplotlib.pyplot as plt
# plt.show()

Total running time of the script: ( 0 minutes 32.859 seconds)

Gallery generated by Sphinx-Gallery