module benchmark.benchmark_perf¶
Short summary¶
module pymlbenchmark.benchmark.benchmark_perf
Implements a benchmark about performance.
Classes¶
class | truncated documentation
---|---
BenchPerf | Factorizes code to compare two implementations. See example Benchmark of PolynomialFeatures + partial_fit of SGDClassifier.
BenchPerfTest | Defines a bench perf test. See example Benchmark of PolynomialFeatures + partial_fit of SGDClassifier.
Methods¶
method | truncated documentation
---|---
__repr__ | usual
data | Generates one testing dataset.
dump_error | Dumps everything which is needed to investigate an error. Everything is pickled in the current folder or dump_folder …
enumerate_run_benchs | Runs the benchmark.
enumerate_tests | Enumerates all possible options.
fct_filter_test | Tells whether the test defined by conf is valid or not.
fcts | Returns the functions to test, it produces a dictionary …
profile | Checks whether a profiler applies to this set of parameters, then profiles function fct.
validate | Runs validations after the test was done to make sure it was valid.
Documentation¶
Implements a benchmark about performance.
- class pymlbenchmark.benchmark.benchmark_perf.BenchPerf(pbefore, pafter, btest, filter_test=None, profilers=None)¶
Bases:
object
Factorizes code to compare two implementations. See example Benchmark of PolynomialFeatures + partial_fit of SGDClassifier.
- Parameters:
pbefore – parameters before calling fct, dictionary {name: [list of values]}; these parameters are sent to the instance of BenchPerfTest to test
pafter – parameters after calling fct, dictionary {name: [list of values]}; these parameters are sent to method BenchPerfTest.fcts
btest – instance of BenchPerfTest
filter_test – function which tells if a configuration must be tested or not, None to test them all
profilers – list of profilers to run
Every parameter that specifies a function is called through a method, which the user can overwrite.
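A minimal construction sketch; MyTest is a hypothetical user-defined subclass of BenchPerfTest (a sketch of one is given further down this page), and the grids follow the {name: [list of values]} convention above:

```python
from pymlbenchmark.benchmark.benchmark_perf import BenchPerf

# MyTest is a hypothetical user-defined subclass of BenchPerfTest.
bench = BenchPerf(
    pbefore=dict(dim=[5, 10]),        # forwarded to the BenchPerfTest to test
    pafter=dict(N=[10, 100, 1000]),   # forwarded to BenchPerfTest.fcts
    btest=MyTest,                     # the BenchPerfTest to run
    filter_test=lambda **conf: conf["N"] >= conf["dim"],  # optional pruning
)
```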
- __init__(pbefore, pafter, btest, filter_test=None, profilers=None)¶
- Parameters:
pbefore – parameters before calling fct, dictionary {name: [list of values]}; these parameters are sent to the instance of BenchPerfTest to test
pafter – parameters after calling fct, dictionary {name: [list of values]}; these parameters are sent to method BenchPerfTest.fcts
btest – instance of BenchPerfTest
filter_test – function which tells if a configuration must be tested or not, None to test them all
profilers – list of profilers to run
Every parameter that specifies a function is called through a method, which the user can overwrite.
- __repr__()¶
usual
- enumerate_run_benchs(repeat=10, verbose=False, stop_if_error=True, validate=True, number=1)¶
Runs the benchmark.
- Parameters:
repeat – number of repetitions of the same call with different datasets
verbose – if True, use tqdm
stop_if_error – by default, it stops when method validate fails; if False, the function stores the exception
validate – compare the outputs against the baseline
number – number of times to call the same function; the method then measures this number of calls
- Returns:
yields dictionaries with all the metrics
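A usage sketch, assuming bench is a BenchPerf instance such as the one built above; collecting the yielded dictionaries into a pandas DataFrame is one convenient way to inspect the metrics:

```python
import pandas

# Each yielded dictionary mixes the tested parameters with the
# measured metrics for one configuration.
rows = list(bench.enumerate_run_benchs(repeat=10, number=1, verbose=True))
df = pandas.DataFrame(rows)
print(df.head())
```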
- enumerate_tests(options)¶
Enumerates all possible options.
- Parameters:
options – dictionary {name: list of values}
- Returns:
list of dictionaries {name: value}
The function applies the method fct_filter_test.
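Conceptually, this expands the cartesian product of the option lists and then drops combinations rejected by fct_filter_test; a plain-Python sketch of the expansion:

```python
from itertools import product

options = dict(N=[10, 100], dim=[5, 10])
keys = list(options)
combos = [dict(zip(keys, values))
          for values in product(*(options[k] for k in keys))]
# combos == [{'N': 10, 'dim': 5}, {'N': 10, 'dim': 10},
#            {'N': 100, 'dim': 5}, {'N': 100, 'dim': 10}]
```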
- fct_filter_test(**conf)¶
Tells whether the test defined by conf is valid or not.
- Parameters:
conf – dictionary {name: value}
- Returns:
boolean
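The check presumably delegates to the filter_test callable given to the constructor; a hypothetical predicate could prune configurations that make no sense:

```python
def filter_test(**conf):
    # Hypothetical rule: skip configurations with fewer
    # observations than features.
    return conf.get("N", 0) >= conf.get("dim", 0)
```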
- profile(kwargs, fct)¶
Checks whether a profiler applies to this set of parameters, then profiles function fct.
- Parameters:
kwargs – dictionary of parameters
fct – function to measure
- class pymlbenchmark.benchmark.benchmark_perf.BenchPerfTest(**kwargs)¶
Bases:
object
Defines a bench perf test. See example Benchmark of PolynomialFeatures + partial_fit of SGDClassifier.
Conventions for N, dim
Throughout the package, N refers to the number of observations and dim to the dimension, i.e. the number of features.
- __init__(**kwargs)¶
- data(**opts)¶
Generates one testing dataset.
- Returns:
dataset, usually a list of arrays such as X, y
- dump_error(msg, **kwargs)¶
Dumps everything which is needed to investigate an error. Everything is pickled in the current folder, or in dump_folder if attribute dump_folder was defined. This folder is created if it does not exist.
- Parameters:
msg – message
kwargs – needed data to investigate
- Returns:
filename
- fcts(**opts)¶
Returns the functions to test; it produces a dictionary {name: fct} where name is the name of the function and fct is the function to benchmark.
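An override sketch, assuming two hypothetical implementations to compare; the dictionary keys become the names reported with the metrics:

```python
from pymlbenchmark.benchmark.benchmark_perf import BenchPerfTest

class MyTest(BenchPerfTest):  # hypothetical subclass
    def fcts(self, **opts):
        # One entry per implementation; the callables receive
        # the arrays produced by the data method.
        return {
            "numpy_sum": lambda X: X.sum(axis=1),
            "python_sum": lambda X: [sum(row) for row in X],
        }
```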
- validate(results, **kwargs)¶
Runs validations after the test was done to make sure it was valid.
- Parameters:
results – results to validate, list of tuples (parameters, results)
kwargs – additional information in case errors must be traced
The function either raises an exception or returns nothing.
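Putting the pieces together, a hedged end-to-end sketch; every class and parameter name below is hypothetical, and the assumption that pbefore parameters reach the constructor while pafter parameters reach data follows the parameter descriptions above:

```python
import numpy
from pymlbenchmark.benchmark.benchmark_perf import BenchPerf, BenchPerfTest


class SumTest(BenchPerfTest):
    "Hypothetical test comparing two ways of summing rows."

    def __init__(self, dim=4, **opts):
        # pbefore parameters (here dim) are assumed to reach the constructor.
        BenchPerfTest.__init__(self, **opts)
        self.dim = dim

    def data(self, N=10, **opts):
        # pafter parameters (here N) are assumed to select the dataset size.
        return (numpy.random.randn(N, self.dim),)

    def fcts(self, **opts):
        # One entry per implementation to time.
        return {
            "numpy_sum": lambda X: X.sum(axis=1),
            "python_sum": lambda X: [sum(row) for row in X],
        }

    def validate(self, results, **kwargs):
        # results is a list of tuples (parameters, results); raise to signal
        # an invalid run. A real test would compare the outputs of both
        # implementations against a baseline.
        for params, res in results:
            if res is None:
                raise AssertionError("Missing output for %r." % (params,))


bench = BenchPerf(pbefore=dict(dim=[4, 16]),
                  pafter=dict(N=[100, 1000]),
                  btest=SumTest)
rows = list(bench.enumerate_run_benchs(repeat=5, number=1))
```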