Examples

  1. Compute a distance between two graphs

  2. Stochastic Gradient Descent applied to linear regression

Compute a distance between two graphs

See Distance between two graphs.

<<<

import copy
from mlstatpy.graph import GraphDistance

# We define two graphs as lists of edges.
graph1 = [("a", "b"), ("b", "c"), ("b", "X"), ("X", "c"),
          ("c", "d"), ("d", "e"), ("0", "b")]
graph2 = [("a", "b"), ("b", "c"), ("b", "X"), ("X", "c"),
          ("c", "t"), ("t", "d"), ("d", "e"), ("d", "g")]

# We convert them into GraphDistance objects.
graph1 = GraphDistance(graph1)
graph2 = GraphDistance(graph2)

distance, graph = graph1.distance_matching_graphs_paths(graph2, use_min=False)

print("distance", distance)
print("common paths:", graph)

>>>

    distance 0.3318250377073907
    common paths: 0
    X
    a
    b
    c
    d
    e
    00
    11
    g
    t
    a -> b []
    b -> c []
    b -> X []
    X -> c []
    c -> d []
    d -> e []
    0 -> b []
    00 -> a []
    00 -> 0 []
    e -> 11 []
    c -> 2a.t []
    2a.t -> d []
    d -> 2a.g []
    2a.g -> 11 []

(original entry: graph_distance.py:docstring of mlstatpy.graph.graph_distance.GraphDistance, line 3)
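For comparison, a much simpler baseline distance between two graphs is the Jaccard distance between their edge sets. Unlike `GraphDistance`, which matches paths and merges the two graphs, this baseline ignores structure beyond individual edges, but it is a quick sanity check (this sketch is illustrative and not part of mlstatpy):

```python
# The same two graphs as above, defined as lists of edges.
graph1 = [("a", "b"), ("b", "c"), ("b", "X"), ("X", "c"),
          ("c", "d"), ("d", "e"), ("0", "b")]
graph2 = [("a", "b"), ("b", "c"), ("b", "X"), ("X", "c"),
          ("c", "t"), ("t", "d"), ("d", "e"), ("d", "g")]

# Jaccard distance: 1 - |intersection| / |union| of the edge sets.
e1, e2 = set(graph1), set(graph2)
jaccard_distance = 1 - len(e1 & e2) / len(e1 | e2)
print("jaccard distance", jaccard_distance)  # 5 common edges out of 10
```

The two graphs share 5 of their 10 distinct edges, so the Jaccard distance is 0.5, in the same spirit as (but not equal to) the path-based distance computed above.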

Stochastic Gradient Descent applied to linear regression

The following example shows how to optimize a simple linear regression.

<<<

import numpy
from mlstatpy.optim import SGDOptimizer


def fct_loss(c, X, y):
    # squared loss over the whole dataset
    return numpy.linalg.norm(X @ c - y) ** 2


def fct_grad(c, x, y, i=0):
    # scaled gradient of the squared loss for one sample (x, y)
    return x * (x @ c - y) * 0.1


coef = numpy.array([0.5, 0.6, -0.7])
X = numpy.random.randn(10, 3)
y = X @ coef

sgd = SGDOptimizer(numpy.random.randn(3))
sgd.train(X, y, fct_loss, fct_grad, max_iter=15, verbose=True)
print('optimized coefficients:', sgd.coef)

>>>

    0/15: loss: 65.11 lr=0.1 max(coef): 1.5 l1=0/3.3 l2=0/4
    1/15: loss: 17.25 lr=0.0302 max(coef): 0.76 l1=0.24/1.5 l2=0.025/0.93
    2/15: loss: 8.623 lr=0.0218 max(coef): 0.81 l1=0.28/1.2 l2=0.031/0.75
    3/15: loss: 6.718 lr=0.018 max(coef): 0.98 l1=0.13/1.5 l2=0.0075/1.1
    4/15: loss: 3.777 lr=0.0156 max(coef): 0.98 l1=0.038/1.8 l2=0.0005/1.3
    5/15: loss: 2.199 lr=0.014 max(coef): 0.94 l1=0.019/1.9 l2=0.00017/1.4
    6/15: loss: 1.323 lr=0.0128 max(coef): 0.88 l1=0.00098/2 l2=4.1e-07/1.4
    7/15: loss: 0.8787 lr=0.0119 max(coef): 0.82 l1=0.084/2 l2=0.0032/1.3
    8/15: loss: 0.6625 lr=0.0111 max(coef): 0.77 l1=0.016/1.9 l2=0.00013/1.3
    9/15: loss: 0.5546 lr=0.0105 max(coef): 0.74 l1=0.095/1.9 l2=0.0043/1.2
    10/15: loss: 0.4725 lr=0.00995 max(coef): 0.72 l1=0.045/1.9 l2=0.00087/1.2
    11/15: loss: 0.4206 lr=0.00949 max(coef): 0.7 l1=0.12/1.9 l2=0.0057/1.2
    12/15: loss: 0.3671 lr=0.00909 max(coef): 0.69 l1=0.11/1.9 l2=0.0043/1.2
    13/15: loss: 0.329 lr=0.00874 max(coef): 0.68 l1=0.0038/1.9 l2=8e-06/1.2
    14/15: loss: 0.3041 lr=0.00842 max(coef): 0.68 l1=0.075/1.9 l2=0.0029/1.2
    15/15: loss: 0.2774 lr=0.00814 max(coef): 0.67 l1=0.042/1.9 l2=0.00075/1.2
    optimized coefficients: [ 0.662  0.674 -0.529]

(original entry: sgd.py:docstring of mlstatpy.optim.sgd.SGDOptimizer, line 34)
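The update rule behind the example can be sketched in a few lines of plain numpy. This is not the `SGDOptimizer` implementation (which also decays the learning rate, as the `lr` column above shows); it is a minimal illustration, with an assumed fixed learning rate, of the per-sample update `c <- c - lr * grad(loss, one sample)`:

```python
import numpy

rng = numpy.random.RandomState(0)
coef = numpy.array([0.5, 0.6, -0.7])   # true coefficients
X = rng.randn(50, 3)
y = X @ coef                           # noiseless targets

c = rng.randn(3)                       # random starting point
lr = 0.02                              # fixed learning rate (assumption)
for epoch in range(20):
    # visit the samples in a random order each epoch
    for i in rng.permutation(X.shape[0]):
        grad = 2 * X[i] * (X[i] @ c - y[i])  # gradient of (x.c - y)^2
        c -= lr * grad
print("estimated coefficients:", c)
```

Because the targets are noiseless, the iterates converge to the true coefficients; with noisy data they would only hover around the least-squares solution, which is why schedules that shrink the learning rate (as `SGDOptimizer` does) are used in practice.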