Memristive nanodevices are a powerful candidate for future neuromorphic computing systems and a potential tool for investigating human brain memory in terms of plasticity and learning. We recently demonstrated (la_barbera_2015.pdf) that fundamental processes observed in biological synapses can be successfully reproduced and controlled through the operation of emerging memristive devices. While various plasticity mechanisms had already been demonstrated in memristive systems, we showed that more synaptic features can be embedded in a single memory component by exploiting the basic physics of filamentary resistive switching. We are currently interested in exploiting the intrinsic, non-linear, dynamic device volatility to explore learning strategies across different circuit topologies and levels of processing through large-scale memristive circuit modeling.


The main objective of this workgroup was to exploit our memristive synaptic model in a spike-based system for neuromorphic tasks, highlighting the rich panel of functionalities and performance required for the material implementation of bio-inspired circuits.


In machine learning as well as in neuromorphic computing, applications focus on achieving high performance in pattern recognition and feature extraction. A core element of any neural network is the non-linear transformation of input data into a different representation that enables feature extraction. Similarly, Reservoir Computing (RC) addresses the exploitation of random dynamical systems for data processing: it relies on an untrained dynamical system being able to map different input classes to different internal representations. The novel idea proposed in this workgroup is to use a class of filamentary-type memristive devices as the dynamical system for pre-processing the MNIST data, exploiting the non-linear state change of the synaptic connection, whose relaxation is described by a dynamic time constant, to achieve the separation property and fading memory needed for processing. The key learning element is thus the synaptic dynamics: by controlling the device's physical volatility, one controls the Short-Term to Long-Term Plasticity transition.

At the system level, we explored two approaches: a classical random reservoir architecture and a cross-bar reservoir approach. The first exploits random dynamic connections in recurrent networks in which integrate-and-fire neurons are combined with memristive devices. Thanks to their intrinsic volatility, which tunes the transition between the Short-Term and Long-Term Plasticity regimes, memristors could make it possible to extract features in a higher dimension and to forget data. This strategy was not successful in terms of feature-extraction performance; possible reasons are the limited dimension of the reservoir and the use of a single instance of the classifier. An ensemble of n reservoir-computing classifiers could be a solution for achieving better performance, but at a high computational complexity and cost.
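The device volatility underlying both approaches can be sketched as a minimal rate model: each presynaptic spike gives a bounded, non-linear conductance increment, and the conductance then relaxes toward its resting value with a time constant, providing the fading memory of the reservoir. All parameter values below (`TAU`, `DG`, the bounds) are illustrative assumptions, not the fitted device model.

```python
import numpy as np

# Illustrative (assumed) parameters of a volatile filamentary synapse:
# each presynaptic spike potentiates the conductance g, which then
# relaxes back toward g_min with time constant TAU (short-term plasticity).
G_MIN, G_MAX = 0.1, 1.0   # bounded conductance range (arbitrary units)
TAU = 20e-3               # relaxation time constant in seconds (assumed)
DG = 0.3                  # fractional potentiation per spike (assumed)

def update_conductance(g, spike, dt):
    """One time step of the volatile synapse state."""
    # exponential decay toward the resting (volatile) state: fading memory
    g = G_MIN + (g - G_MIN) * np.exp(-dt / TAU)
    if spike:
        # non-linear, bounded potentiation on each input spike
        g += DG * (G_MAX - g)
    return g

# Demo: a short spike burst drives the synapse up; silence lets it fade.
dt = 1e-3
g = G_MIN
trace = []
for t in range(100):
    spike = (t < 20 and t % 4 == 0)   # 5 spikes, then 80 ms of silence
    g = update_conductance(g, spike, dt)
    trace.append(g)

print(round(max(trace), 3), round(trace[-1], 3))
```

With a strong enough or long enough burst the conductance saturates near `G_MAX` (a Long-Term-Plasticity-like regime), while sparse activity stays in the volatile Short-Term regime and is forgotten.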
The random-reservoir strategy above was mainly contributed by Sergio Solinas. The second strategy, with main contributions from Christopher Bennett and Damir Vodenicarevic, was to arrange such filamentary-type memristors in a cross-bar system. By exploiting the non-linear transformation of the input data (separation) due to the intrinsic relaxation time constant, the current state of the network is affected only by the previous states up to a certain time (fading memory). The system allows two simultaneous, geometrically equivalent inputs at both the row and the column neurons. The objective is to implement massive coincidence detection, incrementing Long-Term Plasticity only when both inputs are present. A spike-train input was designed for the rows and the columns that presents time slices of a given matrix (as rows and columns, respectively); coincident spikes are then passed into the memristive synaptic model. These spike trains are stored and executed over all orientations, and the per-row outputs are passed to a multilayer perceptron acting as the read-out. Currently, LIF neurons are kept at the output, but it may be interesting to remove them and deliver a raw current output, which may be simpler for the MLP.
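The row/column coincidence-detection idea can be sketched as follows. This is a toy simplification under stated assumptions: the threshold-based slice encoding, the number of slices, and the bounded weight update `dw * (1 - w)` are all hypothetical stand-ins for the actual spike-train design and memristive model, chosen only to show how a weight at (i, j) grows when its row and column inputs coincide.

```python
import numpy as np

def crossbar_coincidence(image, n_slices=4, dw=0.25):
    """Toy row/column coincidence detection on a memristive cross-bar.

    The image is streamed as row spikes and column spikes; the weight
    at (i, j) undergoes bounded LTP only when its row neuron and its
    column neuron spike in the same time slice (coincidence).
    """
    n_rows, n_cols = image.shape
    w = np.zeros((n_rows, n_cols))            # memristive weight matrix
    for s in range(n_slices):
        # hypothetical encoding: a row/column spikes in slice s if any
        # of its pixels exceeds the slice threshold
        thresh = s / n_slices
        row_spikes = image.max(axis=1) > thresh
        col_spikes = image.max(axis=0) > thresh
        # coincidence matrix: True where both inputs spike together
        coincident = np.outer(row_spikes, col_spikes)
        # bounded LTP increment only at coincident crossings
        w += dw * coincident * (1.0 - w)
    return w

img = np.zeros((4, 4))
img[1, 2] = 1.0                               # a single bright pixel
w = crossbar_coincidence(img)
print(w[1, 2], w[0, 0])
```

Only the crossing whose row and column both carry activity is potentiated; the per-row sums of `w` would then serve as the features fed to the MLP read-out.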

Next steps

Results show that this approach successfully performs the classification task, with 60% of the MNIST digits recognized as true positives (TP). Some algorithmic optimizations need to be adopted to improve this performance. We also demonstrated, through a MATLAB simulation, that the system can correctly recognize a noisy version of the MNIST data set with 70% TP. Here we used a class of filamentary-type memristive devices as a dynamical system for pre-processing the MNIST data set; it would be interesting to also use temporal data generated by biologically realistic sensors, such as musical notes recorded by a silicon cochlea or visual data generated by a DVS. Another possible improvement would be to include device variability, which is a potential ingredient for building an ensemble of n classifiers to achieve better performance.


  • 1) Memristive cross-bar system + reservoir-computing synaptic dynamics approach: Christopher
  • 2) Reservoir-computing memristive synaptic dynamics approach: Sergio, Christopher
  • 3) Feature extractor for readout (linear classifier and multi-layer perceptron): Damir, Christopher
  • 4) Input data: MNIST digits, cochlea / auditory sensor data: Qian, Antonio
  • 5) Brainstorming discussions: Fabien, Christopher, Sergio, Damir, Julien, Quin, Paolo, Naous, Mostafa, Abu, Manuel, Fabio, Paul, Yulia, ...


First week:

  • "Introduction" meeting: we gave a general introduction to memristive devices and to a particular class of CBRAM, the Electro-Chemical Metallization cells. We presented our latest results and the device "ingredients" that allow us to play with the basic physics of filamentary resistive switching to implement synaptic features in a single memory component.
  • "Kick-off" meeting: we discussed possible strategies for exploiting our synaptic model in a spike-based system for extracting features by playing with different plasticity regimes (STP, LTP, ...). The idea was then to start working in parallel in different subgroups.

Second week:

  • "Brainstorming" meeting with group members who arrived at the workshop in the second week, in which we presented the workgroup's objectives fixed during the first week and discussed possible improvements and future directions.
  • "Individual subgroup meetings" to follow the work in progress, collect the individual contributions, and gather results.
  • "Final meeting" to prepare final presentation & demo and to discuss next steps.


Filamentary Switching: Synaptic Plasticity through Device Volatility la_barbera_2015.pdf

Volatile Memristive Devices as Short-Term Memory in a Neuromorphic Learning Architecture burger_volatile_memristive_devices_STM.pdf

Last modified on 05/09/15 13:21:21
