Coding the Perceptron Offset Trick

A Quick Look at Making the Theory Real

In my previous blog post I explained the logic behind the offset trick, where you fold the bias value b into your data set and weight vector for the Perceptron, and walked through a theoretical example. In this post, I will show the code style I would actually use if I were doing that assignment again and wanted to incorporate the b value that way rather than track it separately through the iterations. It would look something like this:

Making the theory real.
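Since the code itself isn't reproduced in this text, here is a minimal NumPy sketch of what the approach could look like. The specific points, labels, variable names, and iteration cap are my own assumptions for illustration, not the original assignment's values; the only things taken from the post are the augmentation step, the weight vector z with an extra dimension, and the final split back into w and b.

```python
import numpy as np

# Hypothetical data points x and y from the earlier example,
# labeled +1 and -1 respectively (values are made up here).
x = np.array([1.0, 3.0])
y = np.array([2.0, 1.0])

# Stack the points into a single data set (usually already done for you).
data = np.vstack([x, y])
labels = np.array([1, -1])

# The offset trick: append a column of 1s so b rides along with the weights.
b_adjustment = np.ones((data.shape[0], 1))
data = np.hstack([data, b_adjustment])

# Weight vector z needs one extra dimension to match the augmented data.
z = np.zeros(data.shape[1])

# Standard Perceptron updates; note there is no separate b to track.
for _ in range(100):  # assumed cap on passes over the data
    errors = 0
    for point, label in zip(data, labels):
        if label * np.dot(z, point) <= 0:  # misclassified (or on boundary)
            z += label * point
            errors += 1
    if errors == 0:
        break

# Carve the bias back off the end to return the expected (w, b) format.
w, b = z[:-1], z[-1]
```

The inner check is just `label * np.dot(z, point) <= 0`; the trailing 1 in each augmented point means the bias is added by the dot product itself.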

Note that the first part, stacking the data points into a single data set, is usually already done for you; I showed it here to keep it in line with my previous example. Also, in this example, our point x had a label of +1 and our point y had a label of -1 (not explicitly shown). The keys here are:

  1. Creating the b_adjustment vector and appending it to your data set
  2. Modifying your weight vector (z in my code example) so it has the same number of dimensions as the augmented data set
  3. When iterating through the data set (not shown) to check whether each data point is classified correctly, you no longer have to add b explicitly, because the matrix multiplication now adds it automatically
  4. However, when you're done, you need to return the w and b values in the expected format: carve the b value off the end of the weight vector for w, and return that final value on its own for b
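The last two points can be sketched in a few lines, assuming an augmented weight vector z whose final entry holds the bias (the values here are arbitrary, just to show the two scores agree):

```python
import numpy as np

# Hypothetical augmented weight vector: the last entry is the bias b.
z = np.array([2.0, -1.0, 0.5])

# Key 3: a point augmented with a trailing 1 picks up b automatically.
point = np.array([3.0, 4.0])
point_aug = np.append(point, 1.0)
score_aug = np.dot(z, point_aug)       # bias included via the dot product

# Key 4: carve w and b back out for the expected return format.
w, b = z[:-1], z[-1]
score_explicit = np.dot(w, point) + b  # same value, bias added by hand
```

Both scores come out identical, which is the whole point: the augmented dot product and the explicit w·x + b computation are interchangeable.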

Functionally, this accomplishes exactly the same thing as keeping an explicit b value, adjusting it with each iteration, and tracking it separately. It just means you skip that extra bookkeeping (and so it may even have a slightly positive performance impact). The pain points are adjusting your data at the beginning of the function, and then remembering to take the right subset of the final weight vector so you return the expected w and b values.
