Implementing a neural network in vanilla JS
Of course the first question is: why? And the first answer is: you learn best when you approach a subject from a different direction.
We want to implement a neural network that learns a simple AND gate:
const inputs = [
[0, 0],
[1, 0],
[1, 1],
[0, 1]
];
const outputs = [0, 0, 1, 0];
Then we will also implement a seemingly unnecessary function whose entire purpose is to print color to the console. I'm joking, of course, because it is not unnecessary. Color is a way of presenting information. Roughly 40% of our brain is involved in visual processing. Therefore visualizing information is the most important thing in the world. And so we print to the console in color.
function ct(text, color) {
const colors = {
red: '\x1b[31m',
green: '\x1b[32m',
yellow: '\x1b[33m',
blue: '\x1b[34m',
magenta: '\x1b[35m',
cyan: '\x1b[36m',
white: '\x1b[37m'
};
return `${colors[color]}${text}\x1b[0m`;
}
Indeed, here is the beautiful result: it is much easier to understand the neural network this way.
This piece of code doesn't need much explanation - it's straightforward. But here we have the smallest neural network in the entire cosmos: only two weights and one bias:
let weights = [Math.random(), Math.random()];
let bias = Math.random();
const learningRate = 0.2;
const iterations = 1000;
const inputs = [
[0, 0],
[1, 0],
[1, 1],
[0, 1]
];
const outputs = [0, 0, 1, 0];
Here we train it nicely. As simple as it can be: we multiply the inputs by the weights, add the bias, and pass the result through a sigmoid.
Then we calculate the distance from the truth, scale it by the sigmoid's derivative, and update each parameter according to the learning rate. Standard and basic.
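The loop below calls sigmoid and sigmoidDerivative, which the snippets above never define. A standard pair of definitions would look like this (note the common convention: the derivative is written in terms of the sigmoid's output, which is exactly how the training loop uses it):

```javascript
// Squashes any number into the range (0, 1)
function sigmoid(x) {
  return 1 / (1 + Math.exp(-x));
}

// Derivative of the sigmoid, expressed via its output y = sigmoid(x):
// sigmoid'(x) = y * (1 - y)
function sigmoidDerivative(y) {
  return y * (1 - y);
}
```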
for (let i = 0; i < iterations; i++) {
inputs.forEach((input, index) => {
let output = sigmoid(input[0] * weights[0] + input[1] * weights[1] + bias);
let error = outputs[index] - output;
let dOutput = error * sigmoidDerivative(output);
weights[0] += input[0] * dOutput * learningRate;
weights[1] += input[1] * dOutput * learningRate;
bias += dOutput * learningRate;
});
}
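To make the update rule concrete, here is one training step worked by hand. The starting values are hypothetical (fixed instead of random, so the numbers are reproducible): weights [0.5, 0.5], bias 0, input [1, 1], target 1.

```javascript
function sigmoid(x) { return 1 / (1 + Math.exp(-x)); }
function sigmoidDerivative(y) { return y * (1 - y); }

let w = [0.5, 0.5];
let b = 0;
const lr = 0.2;
const input = [1, 1];
const target = 1;

// Forward pass: 1*0.5 + 1*0.5 + 0 = 1, so sigmoid(1) ≈ 0.7311
const out = sigmoid(input[0] * w[0] + input[1] * w[1] + b);
const error = target - out;                  // ≈ 0.2689, we undershot
const dOut = error * sigmoidDerivative(out); // ≈ 0.2689 * 0.1966 ≈ 0.0529

// Each weight (and the bias) is nudged toward predicting 1 for [1, 1]
w[0] += input[0] * dOut * lr; // ≈ 0.5106
w[1] += input[1] * dOut * lr; // ≈ 0.5106
b += dOut * lr;               // ≈ 0.0106
```

Run this over all four inputs a thousand times and the parameters settle where only [1, 1] pushes the weighted sum above the decision threshold.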
Here we beautifully print our stunning weights:
console.log('NN simplified parameters:');
console.log(`${ct('bias:', 'cyan')} ${ct(bias.toFixed(4), 'cyan')}`);
console.log(`${ct('weights:', 'magenta')} ${ct(weights.map(w => w.toFixed(4)).join(', '), 'magenta')}`);
And that is our entire neural network. Quite small...
Then we use it and print the colorful output we saw before:
function predict(input) {
let output = input[0] * weights[0] + input[1] * weights[1] + bias;
let sigmoidOutput = sigmoid(output);
console.log('');
console.log(`${ct('Prediction process for input:', 'white')} [${input.join(', ')}]`);
console.log(`${input[0]} ${ct('*', 'red')} ${ct(weights[0].toFixed(4), 'magenta')} ${ct('+', 'red')} ${input[1]} ${ct('*', 'red')} ${ct(weights[1].toFixed(4), 'magenta')} ${ct('+', 'red')} ${ct(bias.toFixed(4), 'cyan')} ${ct('=', 'red')} ${ct(output.toFixed(4), 'yellow')}`);
console.log(`${ct('Activation function:', 'white')} sigmoid(${output.toFixed(4)}) = ${sigmoidOutput.toFixed(4)} = ${ct(sigmoidOutput > 0.5 ? 1 : 0, 'yellow')}`);
return sigmoidOutput > 0.5 ? 1 : 0;
}
// Test predictions
predict([0, 0]);
predict([1, 0]);
predict([1, 1]);
predict([0, 1]);
Ok, so what did we learn? We chose the smallest task imaginable for a neural network - but we used all the components, printed colorful output, and thus refined a basic intuition for what a neural network actually is, how it works, and how the numbers behave. We implemented weights, biases, an activation function, training, prediction...
Great fun. In the next post we will implement a slightly larger neural network for a slightly larger task, this time in vanilla Python - without any library, which means no NumPy and certainly no PyTorch. Even though I have already built large networks - you must always go back to basics and feel the bare metal up close, hands on.