
Which?

The question of which neural model(s) should be used for hardware simulation has always been contentious. Is a very simple neuron model (e.g. leaky integrate-and-fire) adequate, or are dynamics beyond first order required? Some models (such as the Izhikevich model) have potentially sophisticated dynamics, but either involve biologically unrealistic parameter sensitivity or have been implemented in very approximate ways that make formal treatment difficult. In hardware, however, one thing has by and large been constant: the need to pick a neural model to implement on-chip.

Some interesting ones we are looking at:

1) Adaptive Exponential Integrate-and-Fire: similar properties to Izhikevich, with better stability (a discrete-time sketch is given below).
2) Hodgkin-Huxley: the reference standard for neurobiology, though challenging(!) to implement.
3) Mihalas-Niebur: intriguingly, has dynamics similar to Izhikevich but a closed-form solution.
4) Neuron with non-linear dendrites and binary synapses.
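To give a flavour of what a discrete-time update for the first of these looks like, here is a minimal forward-Euler sketch of the Adaptive Exponential Integrate-and-Fire model (Brette & Gerstner 2005). The parameter values are textbook examples and the function name is ours; this is a reference prototype, not SpiNNaker library code.

```python
import math

# Example AdEx parameters in consistent units (nF, uS, mV, ms, nA).
C, g_L, E_L = 0.281, 0.030, -70.6
V_T, delta_T = -50.4, 2.0
a, b, tau_w = 0.004, 0.0805, 144.0
V_reset, V_spike = -70.6, -40.0
dt = 1.0  # ms, one simulation time step

def adex_step(V, w, I):
    """Advance one neuron by one time step; returns (V, w, spiked)."""
    dV = (-g_L * (V - E_L)
          + g_L * delta_T * math.exp((V - V_T) / delta_T)
          - w + I) / C
    dw = (a * (V - E_L) - w) / tau_w
    V += dt * dV
    w += dt * dw
    if V >= V_spike:                 # spike: reset membrane, bump adaptation
        return V_reset, w + b, True
    return V, w, False
```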

{Please add any others you want to try here}

What?

We have introduced the SpiNNaker chip in previous CapoCaccia workshops: a universal neuromorphic ("neuromimetic") device that does not fix the model in advance, but is instead programmable to support a large number of potential models. In previous workshops we focussed on using it with existing neural models already built for it, most notably the aforementioned LIF and Izhikevich models. Following a comprehensive rewrite of the tool chain, we can now offer the ability to write your own model for the platform without excessive low-level coding. Most of the "machinery" for creating neural models is now in a set of libraries, so you need only write the core update routines that drive the model and a short Python module (also built on libraries) that describes the model to the configuration tool ("PACMAN").
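As a purely illustrative sketch of that split between "core update routines" and a "short Python module that describes the model": none of the class or field names below come from the actual SpiNNaker tool chain or PACMAN, they only show the shape of the job you would be doing.

```python
from dataclasses import dataclass

@dataclass
class ModelDescriptor:
    """What a configuration tool needs to know about a model (illustrative only)."""
    name: str
    state_variables: dict   # name -> initial value; lives in core data memory
    parameters: dict        # fixed constants shared by the neurons on a core

# A hypothetical leaky integrate-and-fire description.
my_lif = ModelDescriptor(
    name="my_lif",
    state_variables={"v": -65.0},
    parameters={"tau_m": 20.0, "v_rest": -65.0, "v_reset": -70.0,
                "v_thresh": -50.0, "r_m": 10.0, "dt": 1.0},
)

def lif_update(state, params, i_syn):
    """Core update routine: advance one neuron by one time step (forward Euler)."""
    p = params
    v = state["v"] + (p["dt"] / p["tau_m"]) * (p["v_rest"] - state["v"] + p["r_m"] * i_syn)
    if v >= p["v_thresh"]:
        state["v"] = p["v_reset"]
        return True          # spike this time step
    state["v"] = v
    return False
```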

Coming soon: Description of the tool chain and neural model creation

There are a few limitations to bear in mind:

1) Neuron models run a discrete-time update; that is, there is a simulation time step.
2) We assume you will run multiple neurons per core. That might be as few as 10 or as many as 200, but with fewer you will likely get poor performance, and with more the simulation will likely break, either by running out of memory or out of time per time step.
3) You have 32kB of instruction memory and 64kB of data memory to work with, per core. The data memory in particular must hold all the state variables of every neuron that have to be updated each time step.
4) Synapses reside in a separate DRAM; you have 256MB per chip for synapses. This data is only available when a spike arrives that terminates on that synapse.
5) Your model's code must be fast enough (i.e. require a small enough number of instructions per update) that you can evaluate all your neurons within about half of a time step, so the more complex the model, the fewer neurons per core. There are no hard-and-fast rules here; this always requires some tuning (a rough cycle budget is sketched below).
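To make point 5 concrete, here is a back-of-envelope budget. The 200MHz clock is the speed of the ARM cores on the chip; the 1ms time step and the neuron counts are just example assumptions, not fixed by the platform.

```python
# Rough per-neuron cycle budget, assuming half the time step is spent on
# neuron updates (the rest goes to synapse processing, spike handling, etc.).
clock_hz = 200e6          # ARM core clock
timestep_s = 1e-3         # 1 ms simulation time step (a common choice)
neuron_budget = 0.5       # spend at most ~half the step on neuron updates

for n_neurons in (10, 100, 200):
    cycles_per_step = clock_hz * timestep_s * neuron_budget
    cycles_per_neuron = cycles_per_step / n_neurons
    print(f"{n_neurons:3d} neurons/core -> ~{cycles_per_neuron:,.0f} cycles per neuron per step")

# Data memory: with 64kB of data memory, 200 neurons with, say, 16 state
# variables of 4 bytes each need 200 * 16 * 4 = 12.8kB, leaving room for the rest.
```

The same arithmetic read the other way round tells you roughly how many neurons per core a model of a given cost can sustain.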

We will explain all of this, and more, in a series of tutorials during the first week, for anyone not already familiar with it. We will also explain the new tool chain and help with installations during that week.

Coming soon: Link to SpiNNaker software tools installation

Why?

So why would anyone want to build new models? Some thoughts:

1) They offer dynamics that allow much more interesting/functional simulations.
2) You are interested in benchmarking different models against each other in the same simulation.
3) You're developing a new chip and want to prototype whether a new model you've got is interesting or a waste of time.
4) You have a new model but no clear data on what it might do, and want to find out.

{Feel free to add to this list!}

By the end of the workshop, we hope to have implemented several models and benchmarked them against each other.
