# ROC

A few graphs about ROC curves on the Iris dataset.

## ROC with scikit-learn

We start from scikit-learn's Receiver Operating Characteristic (ROC) example.
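A minimal sketch of what such an example looks like, assuming the multi-class Iris problem is reduced to a binary one (Iris virginica vs. the rest) and scored with a logistic regression; the model choice and variable names are assumptions, not the original notebook's code:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import auc, roc_curve
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
y = (y == 2).astype(int)  # binary problem: Iris virginica vs. the rest

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]  # score = probability of class 1

# one (FPR, TPR) point per threshold on the scores
fpr, tpr, thresholds = roc_curve(y_test, scores)
print("AUC =", auc(fpr, tpr))
```

Plotting `tpr` against `fpr` gives the ROC curve; the area under it (AUC) summarises it in one number.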

## ROC - TPR / FPR

We do the same with the ROC class this module provides.

• TPR = True Positive Rate
• FPR = False Positive Rate

The TPR can be seen as the distribution function of the score for positive examples, and the FPR as the same for negative examples.

By default, this function draws the curve with only 10 points, but we can ask for more.
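The two remarks above can be sketched together: for a threshold $s$, TPR($s$) is the fraction of positive scores above $s$ and FPR($s$) the same fraction for negative scores, and the number of points on the curve is just the number of thresholds we evaluate. The scores below are synthetic and the function name `roc_points` is an assumption for illustration, not this module's API:

```python
import numpy as np

rng = np.random.RandomState(0)
pos = rng.normal(1.0, 1.0, size=500)   # scores of positive examples
neg = rng.normal(-1.0, 1.0, size=500)  # scores of negative examples

def roc_points(pos, neg, n_points=10):
    """Return (FPR, TPR) evaluated on n_points evenly spaced thresholds."""
    thresholds = np.linspace(min(pos.min(), neg.min()),
                             max(pos.max(), neg.max()), n_points)
    tpr = np.array([(pos > s).mean() for s in thresholds])  # P(score > s | +)
    fpr = np.array([(neg > s).mean() for s in thresholds])  # P(score > s | -)
    return fpr, tpr

fpr10, tpr10 = roc_points(pos, neg, n_points=10)    # a coarse curve
fpr100, tpr100 = roc_points(pos, neg, n_points=100)  # ask for more points
```

Both rates decrease as the threshold rises, which is why the curve is traced from the top-right corner down to the origin.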

We can also ask to draw bootstrapped curves to get a sense of the confidence interval.

## ROC - score distribution

This is another representation of the metrics FPR and TPR. $P(X^+ < s)$ is the probability that the score of a positive example is less than $s$, and $P(X^- > s)$ is the probability that the score of a negative example is higher than $s$. We assume in this case that the higher the score, the better.

When the curves intersect at score $s^*$, the error rates for positive and negative examples are equal. The confusion matrix for this particular score $s^*$ is:
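A small sketch of this representation, with synthetic scores: the two error curves $P(X^+ < s)$ and $P(X^- > s)$ are evaluated on a grid, $s^*$ is taken where they are closest, and the confusion matrix is computed at that threshold. The grid and distributions are assumptions for illustration:

```python
import numpy as np

rng = np.random.RandomState(0)
pos = rng.normal(1.0, 1.0, size=5000)   # scores of positive examples
neg = rng.normal(-1.0, 1.0, size=5000)  # scores of negative examples

grid = np.linspace(-4, 4, 801)
err_pos = np.array([(pos < s).mean() for s in grid])  # P(X+ < s)
err_neg = np.array([(neg > s).mean() for s in grid])  # P(X- > s)

# intersection: where the two error rates are (almost) equal
s_star = grid[np.argmin(np.abs(err_pos - err_neg))]

# confusion matrix at threshold s*
tp = (pos >= s_star).sum(); fn = (pos < s_star).sum()
fp = (neg >= s_star).sum(); tn = (neg < s_star).sum()
print(f"s* = {s_star:.2f}")
print(f"TP={tp} FN={fn}")
print(f"FP={fp} TN={tn}")
```

At $s^*$ the two off-diagonal cells (FN and FP) represent roughly the same error rate, which is what the intersection of the two curves expresses.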

## ROC - recall / precision

In this representation, we plot recall against precision and show the score threshold along the curve.
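A minimal sketch of this representation with scikit-learn's `precision_recall_curve`: each point of the curve corresponds to one score threshold. The scores are synthetic, only to show the shape of the API:

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

rng = np.random.RandomState(0)
scores = np.concatenate([rng.normal(1, 1, 500), rng.normal(-1, 1, 500)])
labels = np.concatenate([np.ones(500, dtype=int), np.zeros(500, dtype=int)])

precision, recall, thresholds = precision_recall_curve(labels, scores)
# precision and recall have one more entry than thresholds: the last
# point (precision=1, recall=0) is added so the curve ends on the axis
print(len(precision), len(recall), len(thresholds))
```

Annotating a few points of the curve with the corresponding entries of `thresholds` makes the score visible along the curve.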