The stupidest artificial intelligence that can be created: the perceptron
Our first neural network is so simple that it has only two weights. That's great, because it lets us start as basic as possible.
The entire intelligence of the network will be an AND gate: 1 AND 1 is 1, and everything else (0,0 / 0,1 / 1,0) is 0. Let's go:
inputs = [[0, 0], [0, 1], [1, 0], [1, 1]]
the_correct_outputs = [0, 0, 0, 1]
We will initialize our only two weights to random values (don't forget to import random first):
import random

weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
The most important line of code in our entire network is this:
output = input[0] * weights[0] + input[1] * weights[1]
We just multiply each input by its weight and add the results together. That's it, you now know how a neural network works.
Training it means doing exactly that: multiply the inputs by the weights, be disappointed with the output, and therefore change the weights a bit.
When I say "a bit", in this case I mean 0.1:
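For example, with two made-up weights (the numbers here are arbitrary, chosen only to show the arithmetic):

```python
# Hypothetical weights, chosen only for illustration
weights = [0.5, 0.25]

# Forward pass for the input [1, 1]: multiply each input by its weight and sum
sample = [1, 1]
output = sample[0] * weights[0] + sample[1] * weights[1]
print(output)  # 1 * 0.5 + 1 * 0.25 = 0.75
```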
learning_rate = 0.1
So let's change/correct/update the weights over 100 passes through the data, each time nudging them by the error scaled down to a tenth:
# Training loop
for _ in range(100):
    for i in range(len(inputs)):
        # Calculate the linear combination of inputs and weights
        output = inputs[i][0] * weights[0] + inputs[i][1] * weights[1]
        # Calculate the error
        error = the_correct_outputs[i] - output
        # Update weights based on the error
        weights[0] += learning_rate * error * inputs[i][0]
        weights[1] += learning_rate * error * inputs[i][1]
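If you want to run the training yourself, here is everything so far as one self-contained script (the comments are mine; in my runs, with this data and learning rate, both weights settle at roughly 0.36 each):

```python
import random

# The four possible inputs of an AND gate and their correct outputs
inputs = [[0, 0], [0, 1], [1, 0], [1, 1]]
the_correct_outputs = [0, 0, 0, 1]

# Two random starting weights, one per input
weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
learning_rate = 0.1

# Training loop: 100 passes over all four examples
for _ in range(100):
    for i in range(len(inputs)):
        # Forward pass: weighted sum of the two inputs
        output = inputs[i][0] * weights[0] + inputs[i][1] * weights[1]
        # How far off were we?
        error = the_correct_outputs[i] - output
        # Nudge each weight in proportion to its input and the error
        weights[0] += learning_rate * error * inputs[i][0]
        weights[1] += learning_rate * error * inputs[i][1]

print("Learned weights:", weights)
```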
Congratulations, you have just seen, in the simplest possible way, how a neural network works. Let's run it and enjoy the intelligence we just created:
# predict!
def predict(sample, weights):
    # Same weighted sum as in training, rounded to a hard 0-or-1 answer
    output = sample[0] * weights[0] + sample[1] * weights[1]
    return round(output)

test_inputs = [[0, 0], [1, 1], [0, 1], [1, 0]]
for sample in test_inputs:
    predicted_output = predict(sample, weights)
    print("Input:", sample, "Predicted Output:", predicted_output)
And the results:
Input: [0, 0] Predicted Output: 0
Input: [1, 1] Predicted Output: 1
Input: [0, 1] Predicted Output: 0
Input: [1, 0] Predicted Output: 0
Our next network will also need a bias and an activation function.
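To preview where that's going, here is a sketch of my own (not part of this article's code) of the same loop with a bias weight and a step activation, which turns this into the classic perceptron:

```python
import random

inputs = [[0, 0], [0, 1], [1, 0], [1, 1]]
targets = [0, 0, 0, 1]

# Two weights plus one bias, all starting random
weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
bias = random.uniform(-1, 1)
learning_rate = 0.1

def step(x):
    # Activation function: fire (1) only if the weighted sum crosses zero
    return 1 if x > 0 else 0

# More passes than strictly needed, to be safe from any random start
for _ in range(1000):
    for x, target in zip(inputs, targets):
        output = step(x[0] * weights[0] + x[1] * weights[1] + bias)
        error = target - output
        weights[0] += learning_rate * error * x[0]
        weights[1] += learning_rate * error * x[1]
        bias += learning_rate * error  # the bias updates like a weight whose input is always 1
```

The step function makes the output a hard 0 or 1 during training, so no rounding is needed afterwards.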