module ml.competitions#

Short summary#

module ensae_teaching_cs.ml.competitions

Computes metrics for a competition.
Functions#

function | truncated documentation
---|---
AUC | Computes the AUC.
main_codalab_wrapper | Adapts the template available at evaluate.py …
private_codalab_wrapper | Wraps the function following the guidelines User_Building a Scoring Program for a Competition. …
Documentation#
Computes metrics for a competition.
- ensae_teaching_cs.ml.competitions.AUC(answers, scores)#
Computes the AUC.
- Parameters:
answers – expected answers: 0 (false) or 1 (true)
scores – score obtained for class 1
- Returns:
number
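The implementation is not shown here; a minimal sketch of what AUC(answers, scores) computes, assuming answers are 0/1 labels and scores are the scores for class 1 (pairwise formulation: the probability that a randomly chosen positive is ranked above a randomly chosen negative, ties counted as half):

```python
def auc(answers, scores):
    # Split the scores by expected class.
    pos = [s for a, s in zip(answers, scores) if a == 1]
    neg = [s for a, s in zip(answers, scores) if a == 0]
    # Count positive/negative pairs where the positive is ranked higher;
    # a tie contributes 0.5.
    total = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                total += 1.0
            elif p == n:
                total += 0.5
    return total / (len(pos) * len(neg))
```

For example, `auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])` returns 0.75: of the four positive/negative pairs, three rank the positive above the negative.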
- ensae_teaching_cs.ml.competitions.main_codalab_wrapper(fct, metric_name, argv, truth_file='truth.txt', submission_file='answer.txt', output_file='scores.txt')#
Adapts the template available at evaluate.py
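The CodaLab evaluate.py template conventionally receives `argv = [input_dir, output_dir]`, with the truth under `input_dir/ref`, the submission under `input_dir/res`, and the score written to `output_dir/scores.txt` as `metric_name: value`. A hypothetical sketch of a wrapper following that convention (`codalab_evaluate` is an illustrative name, not the library's API):

```python
import os

def codalab_evaluate(fct, metric_name, argv,
                     truth_file="truth.txt",
                     submission_file="answer.txt",
                     output_file="scores.txt"):
    # CodaLab convention: argv = [input_dir, output_dir].
    input_dir, output_dir = argv[0], argv[1]
    # Truth lives in <input_dir>/ref, the submission in <input_dir>/res.
    with open(os.path.join(input_dir, "ref", truth_file)) as f:
        truth = [float(line) for line in f if line.strip()]
    with open(os.path.join(input_dir, "res", submission_file)) as f:
        answers = [float(line) for line in f if line.strip()]
    value = fct(truth, answers)
    # Write "metric_name: value" where the platform expects it.
    os.makedirs(output_dir, exist_ok=True)
    with open(os.path.join(output_dir, output_file), "w") as f:
        f.write("%s: %f\n" % (metric_name, value))
    return value
```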
- ensae_teaching_cs.ml.competitions.private_codalab_wrapper(fct, metric_name, fold1, fold2, f1='answer.txt', f2='answer.txt', output='scores.txt', use_print=False)#
Wraps the function following the guidelines User_Building a Scoring Program for a Competition. It replicates the example available at competition-examples/hello_world.
- Parameters:
fct – function to wrap
metric_name – metric name
fold1 – folder which contains the truth
fold2 – folder which contains the submitted data
f1 – filename for the truth
f2 – filename for the produced answers
output – filename of the produced output containing the computed score
use_print – display intermediate results
- Returns:
metric
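A hypothetical local sketch matching this signature, assuming the truth is read from fold1/f1, the produced answers from fold2/f2, and the score file is written into fold1 (where the output lands is an assumption; the real implementation may differ):

```python
import os

def private_wrapper(fct, metric_name, fold1, fold2,
                    f1="answer.txt", f2="answer.txt",
                    output="scores.txt", use_print=False):
    # Read the truth from fold1/f1 and the produced answers from fold2/f2.
    with open(os.path.join(fold1, f1)) as f:
        truth = [float(line) for line in f if line.strip()]
    with open(os.path.join(fold2, f2)) as f:
        answers = [float(line) for line in f if line.strip()]
    value = fct(truth, answers)
    if use_print:
        print(metric_name, value)
    # Store the metric in "metric_name: value" form (location assumed).
    with open(os.path.join(fold1, output), "w") as f:
        f.write("%s: %f\n" % (metric_name, value))
    return value
```

Usage: `private_wrapper(AUC, "AUC", truth_folder, answer_folder, f1="truth.txt")` would compute the metric on the two folders and return it.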