d993: SpiNNaker: world’s largest neuromorphic supercomputer

“SpiNNaker: world’s largest neuromorphic supercomputer switched on” – reddit discussion

SpiNNaker: Spiking Neural Network Architecture

SpiNNaker is designed to support spiking neural networks, which resemble biological neural networks much more closely than conventional ANNs do. It is funded by the Human Brain Project. What do you guys think? Will this kind of architecture (neuromorphic computing) enable much progress in the spiking neural network field?


‘Human brain’ supercomputer with 1 million processors – https://www.manchester.ac.uk/discover/news/human-brain-supercomputer-with-1million-processors-switched-on-for-first-time/


SpiNNaker Project – The SpiNNaker Chip.

The basic building block of the SpiNNaker machine is the SpiNNaker multicore System-on-Chip. The chip is a Globally Asynchronous Locally Synchronous (GALS) system with 18 ARM968 processor nodes residing in synchronous islands, surrounded by a light-weight, packet-switched asynchronous communications infrastructure.

The figure to the right shows that each SpiNNaker chip contains two silicon dies: the SpiNNaker die itself and a 128 MByte SDRAM (Synchronous Dynamic Random Access Memory) die, which is physically mounted on top of the SpiNNaker die and stitch-bonded to it.

The micro-architecture assumes that processors are ‘free’: the real cost of computing is energy. This is why we use energy-efficient ARM9 embedded processors and Mobile DDR (Double Data Rate) SDRAM, in both cases sacrificing some performance for greatly enhanced power efficiency.
[Figure: SpiNNaker Chip]

The figure to the left shows a plot of the SpiNNaker die, with the 18 identical processing subsystems located in the periphery, and the Network-on-Chip and shared components in the centre. At start-up, following self-test, one of the cores is elected to a special role as Monitor Core and thereafter performs system management tasks. Normally, 16 cores are used to support the application and one is reserved as a spare for fault tolerance and manufacturing yield-enhancement purposes.

Inter-processor communication is based on an efficient multicast infrastructure inspired by neurobiology. It uses a packet-switched network to emulate the very high connectivity of biological systems. The packets are source-routed, i.e., they only carry information about the issuer and the network infrastructure is responsible for delivering them to their destinations.

The heart of the communications infrastructure is a bespoke multicast router that is able to replicate packets where necessary to implement the multicast function associated with sending the same packet to several different destinations.
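The router's multicast behaviour can be sketched in a few lines. This is an illustrative model only, not the hardware's actual table format: the entry layout and route encoding below are simplified assumptions (the real router uses a ternary CAM of key/mask entries, and unmatched packets are "default routed" straight through the chip).

```python
# Toy model of a SpiNNaker-style multicast router.
# Assumptions: simplified entry format; route is a bitmask where
# bits 0-5 select inter-chip links and higher bits select local cores.
class RoutingEntry:
    def __init__(self, key, mask, route):
        self.key = key      # routing-key pattern to match
        self.mask = mask    # which bits of the packet key are significant
        self.route = route  # output ports; several set bits = multicast

def route_packet(table, packet_key, arrival_link):
    """Return the bitmask of output ports for a packet."""
    for entry in table:
        if packet_key & entry.mask == entry.key:
            return entry.route  # replicated to every set bit
    # No match: default-route out of the link opposite the arrival link.
    return 1 << ((arrival_link + 3) % 6)

# One entry that multicasts to links 0 and 1 plus local core 0.
table = [RoutingEntry(key=0x1000, mask=0xF000, route=0b11 | (1 << 6))]
print(bin(route_packet(table, 0x1234, arrival_link=0)))  # matched entry
print(bin(route_packet(table, 0x5678, arrival_link=2)))  # default-routed
```

The key point mirrored here is that a single incoming packet can fan out to many ports at once, which is how the network emulates the one-to-many connectivity of biological axons.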

SpiNNaker chips have six bidirectional, inter-chip links that allow networks of various topologies. Inter-chip communication uses self-timed channels, which, although costly in wires, are significantly more power efficient than synchronous links of similar bandwidth.
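With six links per chip, large machines are typically wired as a triangular mesh with wrap-around (a torus). A minimal sketch of the neighbour relation, assuming 2D (x, y) chip coordinates and the usual six link directions:

```python
# Six inter-chip link directions on SpiNNaker's triangular torus mesh.
# Assumption: chips are addressed by (x, y) with toroidal wrap-around.
LINK_DELTAS = {
    "E":  (1, 0),  "NE": (1, 1),   "N": (0, 1),
    "W":  (-1, 0), "SW": (-1, -1), "S": (0, -1),
}

def neighbours(x, y, width, height):
    """The chip reachable over each of the six links."""
    return {link: ((x + dx) % width, (y + dy) % height)
            for link, (dx, dy) in LINK_DELTAS.items()}

print(neighbours(0, 0, 8, 8))  # corner chip wraps west and south
```

The wrap-around means every chip has exactly six neighbours, so routing and fault-tolerance logic need no edge-of-machine special cases.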

The SpiNNaker die area is 102 mm² (10.386 mm × 9.786 mm). It was originally taped-out in December 2010. The first batch of fully-functional packaged chips was delivered on May 20th, 2011.

See also:

https://arxiv.org/abs/1810.06835 – SpiNNTools: The Execution Engine for the SpiNNaker Platform

SpiNNTools: The Execution Engine for the SpiNNaker Platform

Distributed systems are becoming more commonplace, as computers typically contain multiple computation processors. The SpiNNaker architecture is such a distributed architecture, containing millions of cores connected with a unique communication network, making it one of the largest neuromorphic computing platforms in the world. Utilising these processors efficiently usually requires expert knowledge of the architecture to generate executable code. This work introduces a set of tools (SpiNNTools) that can map computational work described as a graph into executable code that runs on this novel machine. The SpiNNaker architecture is highly scalable, which in turn produces unique challenges in loading data, executing the mapped problem and retrieving the results. In this paper we describe these challenges in detail and the solutions implemented.
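To make the mapping problem concrete: the core task is assigning graph vertices (e.g. neuron populations, each needing some number of cores) onto chips that each offer 16 application cores. The sketch below is not the SpiNNTools API — it is a toy greedy placer under that assumption, just to show the shape of the problem.

```python
# Toy sketch of graph-to-machine placement (NOT the SpiNNTools API).
# Assumption: each chip offers 16 application cores; vertices are
# placed first-fit onto the first chip with enough free cores.
def place(vertices, chips, cores_per_chip=16):
    """Greedy first-fit placement; returns {vertex: (chip, first_core)}."""
    free = {chip: cores_per_chip for chip in chips}
    placement = {}
    for vertex, cores_needed in vertices.items():
        for chip in chips:
            if free[chip] >= cores_needed:
                placement[vertex] = (chip, cores_per_chip - free[chip])
                free[chip] -= cores_needed
                break
        else:
            raise RuntimeError(f"no room for {vertex}")
    return placement

# Hypothetical populations sized in cores:
vertices = {"pop_A": 10, "pop_B": 10, "pop_C": 4}
print(place(vertices, chips=[(0, 0), (0, 1)]))
```

Real placers must additionally minimise network traffic between heavily connected vertices and route around faulty cores, which is where the expert knowledge the abstract mentions comes in.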