Week 3 content of Scribble AI project
Welcome back! I hope we were all able to finish the Neuron class last week and that the tests passed. With that done, we now move up a couple of layers of abstraction: first a Layer, and then an entire feed-forward Neural Network.
So, a Layer in a neural network is just a list of neurons. We typically have three types of layers:
1) Input Layer: This simply holds the initial inputs that we are given
2) Hidden Layers: These make up the bulk of a network. We can have any number of hidden layers and they are in the middle.
3) Output Layer: This is the last layer that outputs what the Neural Network thinks the answer is.
Here is an image of a hidden layer highlighted.
https://drive.google.com/file/d/1IC-dA_4hrsYmRAiYqVEeYEnDvjimi7Oh/view?usp=sharing
So, to recap:
A single layer contains many neurons
The layer has to keep track of its neurons
A layer must have a forward functionality that goes through all of its neurons and calls their forward functions. After that, it must keep a list of all of its neurons' outputs
class Layer:
    def __init__(self, num_inputs, num_neurons):
        self.num_inputs = num_inputs
        self.num_neurons = num_neurons
        # Creating the neurons:
        # here, create the number of neurons required and store them in a list
        self.outputs = []

    def forward(self, inputs):
        # Take the inputs, pass them to each neuron's forward function,
        # and store the outputs in the self.outputs list
        pass
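If you get stuck filling in the blanks, here is one possible sketch. Note that the Neuron class below is only a minimal stand-in (random weights, a weighted sum plus a bias, no activation function) so that the sketch runs on its own; you should use your own Neuron class from last week instead.

```python
import random

# Minimal stand-in for last week's Neuron class -- your own version will
# differ; all the Layer needs is a forward(inputs) method returning a number.
class Neuron:
    def __init__(self, num_inputs):
        self.weights = [random.uniform(-1, 1) for _ in range(num_inputs)]
        self.bias = 0.0

    def forward(self, inputs):
        # Weighted sum of the inputs plus the bias
        return sum(w * x for w, x in zip(self.weights, inputs)) + self.bias

class Layer:
    def __init__(self, num_inputs, num_neurons):
        self.num_inputs = num_inputs
        self.num_neurons = num_neurons
        # One Neuron per slot, each expecting num_inputs values
        self.neurons = [Neuron(num_inputs) for _ in range(num_neurons)]
        self.outputs = []

    def forward(self, inputs):
        # Ask every neuron for its output and remember the results
        self.outputs = [neuron.forward(inputs) for neuron in self.neurons]
        return self.outputs
```

Having forward both store the outputs in self.outputs and return them is just one design choice; what matters is that the layer keeps track of its neurons and their latest outputs.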
After we are done with the layers, the next level of abstraction is an entire network. Just like a single layer houses multiple neurons, a network will house multiple layers.
The Network should:
Contain a list of every layer it has (1 input layer, multiple hidden layers, 1 output layer)
Similarly to Neurons and Layers, a Network must also have a forward functionality that goes through every one of its layers and calls their forward functions.
class NeuralNetwork:
    def __init__(self, num_inputs, num_hidden_layers, num_hidden_layer_neurons, num_outputs):
        self.num_inputs = num_inputs
        self.num_hidden_layers = num_hidden_layers
        self.num_hidden_layer_neurons = num_hidden_layer_neurons
        self.num_outputs = num_outputs
        # Now that we have all the required variables, go ahead and create the layers.
        # Always remember that we do NOT need to create a layer for the inputs: the
        # initial inputs that we get make up the input layer. So, we start from the
        # first hidden layer and create layers all the way up to the last (output) layer.
        self.layers = []
        # Create the appropriate number of hidden layers, each with the appropriate
        # number of neurons. At the end, create the output layer.

    def forward(self, inputs):
        # Take the inputs and pass them to each layer in the network.
        # Tip: use a for loop and one variable to keep track of a single layer's outputs,
        # and keep updating that variable with each layer's outputs.
        # At the end, whatever is in that variable will be the output of the last layer.
        pass
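Again, here is one possible way to fill in the blanks. The Neuron and Layer classes are repeated in compact, stand-in form only so this sketch runs on its own; swap in your own classes from the previous weeks.

```python
import random

# Compact stand-ins so the sketch is self-contained; use your own versions.
class Neuron:
    def __init__(self, num_inputs):
        self.weights = [random.uniform(-1, 1) for _ in range(num_inputs)]
        self.bias = 0.0

    def forward(self, inputs):
        return sum(w * x for w, x in zip(self.weights, inputs)) + self.bias

class Layer:
    def __init__(self, num_inputs, num_neurons):
        self.neurons = [Neuron(num_inputs) for _ in range(num_neurons)]
        self.outputs = []

    def forward(self, inputs):
        self.outputs = [n.forward(inputs) for n in self.neurons]
        return self.outputs

class NeuralNetwork:
    def __init__(self, num_inputs, num_hidden_layers, num_hidden_layer_neurons, num_outputs):
        self.layers = []
        # The first hidden layer takes the raw inputs...
        prev_size = num_inputs
        for _ in range(num_hidden_layers):
            self.layers.append(Layer(prev_size, num_hidden_layer_neurons))
            # ...and every later layer takes the previous layer's outputs
            prev_size = num_hidden_layer_neurons
        # The output layer has one neuron per possible answer
        self.layers.append(Layer(prev_size, num_outputs))

    def forward(self, inputs):
        outputs = inputs
        for layer in self.layers:
            # Each layer's outputs become the next layer's inputs
            outputs = layer.forward(outputs)
        return outputs
```

Notice the prev_size variable: it is what guarantees that each layer's number of inputs matches the previous layer's number of neurons, which is exactly the sizing rule discussed below.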
While creating the layers, you MUST take extra care with the number of inputs. Say you are training your network to tell apart two classes, A = [10, 15, 20] and B = [100, 150, 200]. Here, if we input something like [9, 22, 19], we would want our Neural Network to output A, since that input is closer to A than it is to B. The initial number of inputs is 3 (since both A and B have three unique features, or numbers, that define them), and the number of outputs (i.e. the number of neurons in the output layer) should be 2: either A or B. So, the input layer (the very first layer) should be of size 3, whereas the output layer (the very last layer, which actually outputs the network's decision) should be of size 2.

In the middle, the hidden layers can, quite literally, be of any size and number. We could even have 10 hidden layers with 100 neurons each if we wanted to. However, the number of inputs of a hidden layer must match the number of outputs of the previous layer. So, in our example, the first hidden layer should take 3 inputs, since our input layer consists of 3 values. That first hidden layer can then output, for example, 100 different numbers, in which case the second hidden layer must be able to take an input of size 100. When creating the layers, experiment with this and make sure that the number of inputs of one layer matches the number of outputs of the previous layer. Please ask us any questions you have about this. It is very important to understand.
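To make the sizing rule concrete, here is a small sketch that lists each layer's (num_inputs, num_neurons) pair for the example above (3 inputs, two hidden layers of 100 neurons, 2 outputs) and checks the chaining rule. The variable names are just for illustration.

```python
# Input size, hidden layer widths, and output size for the A-vs-B example
sizes = [3, 100, 100, 2]

# Each layer's shape: it takes the previous size in and produces the next size out
layer_shapes = list(zip(sizes, sizes[1:]))
print(layer_shapes)  # [(3, 100), (100, 100), (100, 2)]

# The rule: a layer's num_inputs equals the previous layer's num_neurons
for prev, nxt in zip(layer_shapes, layer_shapes[1:]):
    assert prev[1] == nxt[0]
```

If an assertion here failed, the corresponding network would crash (or silently misbehave) the moment one layer's outputs were fed to the next, so this is worth checking before you start wiring layers together.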
I hope this week's content will be of use to you on your journey through this project. I am including a Python file that contains the code from this week's content.
Make sure you edit the Neuron class of this file to include whatever changes you made last week.