We describe a computing architecture capable of simulating networks with billions of spiking neurons on an ordinary Apple MacBook Air equipped with an M2 processor, 24 GB of unified memory and a 4 TB solid-state drive. We use an event-based propagation approach that processes packets of N spikes from the M neurons in the system on each processing cycle. Each neuron has C binary input connections, where C can be 128 or more. During the propagation phase, we increment the activation values of all targets of the N neurons that fired. In the second step, we use the histogram of activation values to determine the firing threshold on the fly and select the N neurons that will fire in the next packet. We note that this active selection process could be related to oscillatory activity in the brain, which may serve to fix the percentage of neurons that fire on each cycle. Critically, there are no restrictions on network architecture, since each neuron can connect directly to any other neuron, allowing both feed-forward and recurrent connections. With M = 2^32 neurons, this allows 2^64 possible connections, although actual connectivity is extremely sparse. Even with off-the-shelf hardware, the simulator can continuously propagate packets with thousands of spikes and millions of connections dozens of times a second. Remarkably, all this is possible within a power budget of just 37 watts, close to the power consumed by the human brain. The work demonstrates that brain-scale simulations are possible using current hardware, but this requires fundamentally rethinking how simulations are implemented.
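To make the two-phase cycle concrete, the following is a minimal toy-scale sketch of the idea as described above: a packet of N spikes is propagated by incrementing the activations of each firing neuron's targets, then the histogram of activation values is used to pick a threshold that yields roughly N firing neurons for the next packet. All parameters (M, C, N), the random connectivity, and the reset-on-fire policy are illustrative assumptions, not the paper's actual data structures or scale.

```python
import numpy as np

# Illustrative toy parameters (the paper targets up to M = 2^32 neurons).
M = 100_000      # total neurons
C = 128          # binary connections per neuron
N = 1_000        # spikes per packet
rng = np.random.default_rng(0)

# Hypothetical connectivity: for each neuron, the indices of the C neurons it projects to.
targets = rng.integers(0, M, size=(M, C))

activation = np.zeros(M, dtype=np.int32)
packet = rng.integers(0, M, size=N)          # initial packet of N firing neurons

for cycle in range(10):
    # Phase 1 (propagation): each spike in the packet increments the
    # activation value of every neuron it targets.
    np.add.at(activation, targets[packet].ravel(), 1)

    # Phase 2 (selection): build the histogram of activation values and
    # derive, on the fly, the smallest threshold whose at-or-above
    # population does not exceed N.
    counts = np.bincount(activation)
    at_or_above = np.cumsum(counts[::-1])[::-1]   # at_or_above[v] = #neurons with activation >= v
    ok = np.flatnonzero(at_or_above <= N)
    threshold = int(ok[0]) if ok.size else int(activation.max())

    # Neurons at or above threshold form the next packet (capped at N).
    fire = np.flatnonzero(activation >= threshold)
    packet = fire if fire.size <= N else rng.choice(fire, size=N, replace=False)

    # Reset fired neurons (one simple policy; the paper's rule may differ).
    activation[packet] = 0
```

At this scale the whole state fits comfortably in memory; the interesting engineering in the paper is making the same cycle work when M and the connection tables exceed RAM and must be streamed from the solid-state drive.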