Elegant SciPy by Juan Nunez-Iglesias


Author: Juan Nunez-Iglesias
Language: English
Format: epub
Publisher: O'Reilly Media
Published: 2017-08-17T04:00:00+00:00


import numpy as np
pred = np.array([0, 1, 0, 0, 1, 1, 1, 0, 1, 1])

You can check how well you’ve done by comparing it to a vector of ground truth: classifications obtained by inspecting each message by hand.

gt = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

Now, classification is hard for computers, so the values in pred and gt don’t match up exactly. At positions where pred is 0 and gt is 0, the prediction has correctly identified a message as nonspam. This is called a true negative. Conversely, at positions where both values are 1, the predictor has correctly identified a spam message and found a true positive.
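As a quick sketch (not the book’s own code), the two kinds of correct outcome can be counted with NumPy boolean masks on the pred and gt arrays above:

```python
import numpy as np

pred = np.array([0, 1, 0, 0, 1, 1, 1, 0, 1, 1])
gt = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

# True negatives: predicted nonspam (0) and actually nonspam (0)
tn = np.sum((pred == 0) & (gt == 0))  # 3

# True positives: predicted spam (1) and actually spam (1)
tp = np.sum((pred == 1) & (gt == 1))  # 4
```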

Then, there are two kinds of errors. If we let a spam message (where gt is 1) through to the user’s inbox (pred is 0), we’ve made a false negative error. If we predict a legitimate message (gt is 0) to be spam (pred is 1), we’ve made a false positive prediction. (An email from the director of my scientific institute once landed in my spam folder. The reason? His announcement of a postdoc talk competition started with “You could win $500!”)
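The two kinds of error can be counted the same way; this is a sketch along the same lines as above, not code from the book:

```python
import numpy as np

pred = np.array([0, 1, 0, 0, 1, 1, 1, 0, 1, 1])
gt = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

# False negatives: spam (gt is 1) let through to the inbox (pred is 0)
fn = np.sum((pred == 0) & (gt == 1))  # 1

# False positives: legitimate mail (gt is 0) flagged as spam (pred is 1)
fp = np.sum((pred == 1) & (gt == 0))  # 2
```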

If we want to measure how well we are doing, we have to count the above kinds of errors using a contingency matrix. (This is also sometimes called a confusion matrix. The name is apt.) For this, we place the prediction labels along the rows and the ground truth labels along the columns. Then we count the number of times they correspond. So, for example, since there are 4 true positives (where pred and gt are both 1), the matrix will have a value of 4 at position (1, 1).

Generally, the entry at row i, column j of the contingency matrix counts the number of positions where pred is i and gt is j.
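That general rule can be sketched as a small function; this is an illustrative implementation under the row/column convention described above, not necessarily the book’s own:

```python
import numpy as np

def contingency(pred, gt):
    """Contingency matrix: rows are predicted labels, columns are true labels."""
    cont = np.zeros((2, 2), dtype=int)
    for i in [0, 1]:
        for j in [0, 1]:
            # Count positions where the prediction is i and the truth is j
            cont[i, j] = np.sum((pred == i) & (gt == j))
    return cont

pred = np.array([0, 1, 0, 0, 1, 1, 1, 0, 1, 1])
gt = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
print(contingency(pred, gt))
# [[3 1]
#  [2 4]]
```

The diagonal holds the correct calls (true negatives at (0, 0), true positives at (1, 1)); the off-diagonal entries are the two kinds of error.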





