Prof. Jason Eshraghian Presents Tutorial at IEEE AICAS 2022 (Incheon, Korea)
Jason Eshraghian is delivering an extended tutorial, “Training Spiking Neural Networks Using Lessons from Deep Learning”, at the IEEE Artificial Intelligence Circuits and Systems Conference in Incheon, Korea this June. See more here.
The extended 1.5-hour session will cover the fundamentals of spiking neural networks, resource-constrained SNN-hardware co-design, and a hands-on session where we train an SNN from scratch.
Tapeout success on the eFabless / Google SkyWater MPW6 shuttle!
Our SNN accelerator has cleared all tapeout checks and is on the way to being fabricated.
Prof. Jason Eshraghian Presents “Brain-Inspired AI using Neuromorphic Algorithms and Hardware” at the Institute for Intelligent Systems Research and Innovation (IISRI), Deakin University
New Paper: “Memristor-based binarized spiking neural networks” published in IEEE Nanotechnology Magazine
Read more here.
New Preprint & Code on Quantization-Aware Training with SNNs
We have a new preprint along with code that simplifies the process of training quantized spiking neural networks in snnTorch.

Quantization-aware training of SNNs with periodic schedules
We propose several techniques to smooth the process of training QSNNs. One of them is combining cosine annealing with periodic boosting of the learning rate, which gives the network repeated opportunities to escape poor local minima and keep searching for better solutions.
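As a rough illustration (our own minimal sketch, not the paper's exact setup), PyTorch's cosine annealing with warm restarts captures the flavor of such a periodic schedule: each restart boosts the learning rate back up, giving the quantized network another chance to escape a poor minimum. The model and hyperparameters below are placeholders.

```python
import torch

model = torch.nn.Linear(784, 10)  # placeholder stand-in for a quantized SNN
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Cosine-anneal the learning rate, restarting ("boosting") every 10 epochs
scheduler = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(
    optimizer, T_0=10, T_mult=1, eta_min=1e-5
)

for epoch in range(50):
    # ... one epoch of training would go here ...
    optimizer.step()
    scheduler.step()  # lr decays within each cycle, then jumps back up
```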
Github link to code here.
Preprint link here.
New snnTorch Tutorial: Population Codes with SNNs
We have a new snnTorch tutorial/notebook on population coding.
Biologically, the average neuronal firing rate is roughly 0.1–1 Hz, which is far slower than the reaction times of animals and humans.
But if we pool multiple neurons together and count their spikes collectively, we can measure a firing rate for the whole population within a very short window of time. For example, 500 neurons each firing at just 1 Hz collectively emit about 500 spikes per second, so even a 10 ms window captures roughly 5 spikes.
As it turns out, population codes are also a handy trick for deep learning. Having more output neurons provides far more ‘pathways’ for errors to backprop through.
It also lets us take a more ‘parallel’ approach to training SNNs by swapping sequential steps through time for matrix-vector multiplications.
Here, we use a large pool of output neurons (instead of the usual ~10) to obtain decent results in a single simulated time step.
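A minimal sketch of the idea (our own toy example with made-up sizes, not the tutorial's exact code), assuming 10 classes decoded by 50 output neurons each over one time step:

```python
import torch
import snntorch as snn

num_classes, pop_per_class = 10, 50
fc = torch.nn.Linear(784, num_classes * pop_per_class)  # 500 output neurons
lif = snn.Leaky(beta=0.9)

x = torch.rand(128, 784)            # one batch, one simulated time step
mem = lif.init_leaky()
spk, mem = lif(fc(x), mem)          # spk: [128, 500]

# Sum spikes within each class's population to get class scores
scores = spk.view(-1, num_classes, pop_per_class).sum(dim=-1)
pred = scores.argmax(dim=-1)        # predicted class per sample
```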
Link to the tutorial here.
Population Codes in SNNs
Most Popular Article Dec. 2021 in IEEE Transactions on Biomedical Circuits and Systems: “Hardware Implementation of Deep Network Accelerators Towards Healthcare and Biomedical Applications”
Read the paper here.
New Paper: “How to build a memristive integrate-and-fire model for spiking neuronal signal generation” published in IEEE Transactions on Circuits and Systems I: Regular Papers
Read the paper here.
snnTorch at Facebook’s PyTorch 2021 Dev Day

snnTorch at PyTorch Developer Day
Check out the lineup of speakers here.
New Preprint: Training Spiking Neural Networks Using Lessons From Deep Learning
How can we train biologically plausible spiking neural networks with the deep learning hammer? Our perspective/tutorial/review aims to tackle this question. We explore how the neural code affects learning via gradient descent, the interplay between the STDP learning rule and the backpropagation-through-time algorithm, and step through online learning with SNNs.

Computational graph of a spiking neuron
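As a hedged sketch of what this looks like in practice (our own minimal example with dummy data and placeholder hyperparameters), here is a surrogate-gradient SNN unrolled through time and trained with backprop-through-time in snnTorch:

```python
import torch
import snntorch as snn
from snntorch import surrogate, utils

# Two-layer SNN; fast_sigmoid is one choice of surrogate gradient
spike_grad = surrogate.fast_sigmoid()
net = torch.nn.Sequential(
    torch.nn.Linear(784, 100),
    snn.Leaky(beta=0.9, spike_grad=spike_grad, init_hidden=True),
    torch.nn.Linear(100, 10),
    snn.Leaky(beta=0.9, spike_grad=spike_grad, init_hidden=True, output=True),
)
optimizer = torch.optim.Adam(net.parameters(), lr=5e-4)
loss_fn = torch.nn.CrossEntropyLoss()

x = torch.rand(25, 128, 784)            # [time, batch, features], dummy data
targets = torch.randint(0, 10, (128,))

utils.reset(net)                        # clear hidden states between batches
spk_rec = []
for t in range(x.size(0)):              # unroll the network through time
    spk, mem = net(x[t])
    spk_rec.append(spk)

# Rate-coded objective: total spike count per class across all time steps
loss = loss_fn(torch.stack(spk_rec).sum(dim=0), targets)
optimizer.zero_grad()
loss.backward()                         # BPTT through the surrogate gradient
optimizer.step()
```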
This preprint goes hand-in-hand with our recently updated snnTorch interactive tutorial series that goes from designing simple spiking neurons to training large-scale networks.
Link to the preprint here.
Link to the SNN tutorial series here.
snnTorch GitHub link here.
Prof. Jason Eshraghian Presents “Deep Learning with snnTorch” Tutorial at ICONS 2021
The tutorial Jason gave at the International Conference on Neuromorphic Systems on training spiking neural networks using modern deep learning frameworks is online – check it out at the link here.
Jason Eshraghian Awarded Fulbright Future Fellowship
Jason was awarded a Fulbright Future Fellowship to continue working with Professor Wei Lu at the University of Michigan in developing RRAM-based neuromorphic accelerators.
A New Approach to SRAM PUFs
Building secure hardware is now more important than ever, and we’ve just published a new approach to building silicon-based physically unclonable functions.
This chip was taped out back in 2019, which was sadly the last time I was in Korea! The project was led by colleagues Seungbum Baek and Prof. Jongphil Hong at Chungbuk National University.
Link to paper here.

When you turn on SRAM (the chunky, but super fast memory on a CPU), each cell seemingly randomly jumps to a high or low voltage: a 1 or a 0. This randomness is determined by physical variations in each memory cell.
When building nanoscale transistors, it’s unsurprisingly challenging to make these cells perfectly identical. This is the stuff of nightmares when designing processors and analog ICs. But it’s perfect for physically unclonable functions.
Normally, each SRAM cell can only generate a single response bit. But by cross-coupling cells, we can extract multiple responses from the same hardware. So we get the speed and power benefits of SRAM, with the added bonus of a huge challenge-response space.
This huge space of unique identifiers means that malicious attackers trying to crack a system will have a pretty bad time.
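A toy simulation of the underlying principle (ours, not the paper's circuit): each cell's power-up value is fixed by manufacturing mismatch, so repeated power-ups of the same chip reproduce nearly the same fingerprint, while a different chip yields an uncorrelated one.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_chip(n_cells=256):
    # Per-cell manufacturing mismatch: a positive bias means the cell
    # tends to power up as a 1, a negative bias as a 0
    return rng.normal(0.0, 1.0, n_cells)

def power_up(chip, noise=0.1):
    # Power-up read: fixed mismatch plus a little thermal noise
    return (chip + rng.normal(0.0, noise, chip.size) > 0).astype(int)

chip_a, chip_b = make_chip(), make_chip()
r1, r2 = power_up(chip_a), power_up(chip_a)
print("same chip, two power-ups:", (r1 != r2).mean())               # near 0
print("two different chips:     ", (r1 != power_up(chip_b)).mean()) # near 0.5
```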
4 Papers Accepted to IEEE ISCAS 2021
We had 4 papers accepted for IEEE ISCAS 2021! All of the work is related to AI hardware and using spikes to reduce overhead.

arXiv link here.
“AI Reflections in 2020” Published in Nature Machine Intelligence

Image credit: AAUB/DigitalVisionVectors/Getty
Nature Machine Intelligence invited me to contribute to their Feature Article, “AI reflections in 2020”, where I discussed my research on copyrighting & patenting the work generated by AI.
But the implications go far beyond generative algorithms. As a society, we’re only recently realising the value of our data. The important questions of “who owns our data?” and “why isn’t it licensed for commercial use, just as copyrightable works are?” still need to be answered.
Link to article here.
Best Live Demonstration Award at the IEEE International Conference on Electronics, Circuits and Systems (ICECS) 2020
Our retina-controlled upper-limb prosthesis system, led by Coen Arrow, won the Best Live Demonstration Award at the 2020 IEEE International Conference on Electronics, Circuits and Systems.
Using the sparse spiking signals generated by retinal photoreceptor cells in real time could potentially assist with rehabilitation, complementing EMG signals to achieve high-precision feedback on a constrained power budget.
We somehow pulled this off remotely across three continents, with Coen Arrow at the University of Western Australia, Hancong Wu and Kia Nazarpour at the University of Edinburgh, and collaborators at the University of Michigan.
Code for the retina simulator can be found here.
New Paper: “Hardware Implementation of Deep Network Accelerators Towards Healthcare and Biomedical Applications” Published in IEEE Transactions on Biomedical Circuits and Systems
We published our perspective on integrating neural net accelerators into the clinical & outpatient workflow in the IEEE Transactions on Biomedical Circuits and Systems journal.
Neural nets are extremely resource-intensive. Training GPT-3, for example, is estimated to have consumed on the order of a gigawatt-hour of energy. For continuous, portable monitoring, this simply isn’t an option.
This paper dives into the hardware constraints we currently face in using deep learning in healthcare. We go into the viability of using neuromorphic computing, spiking neural nets, and in-memory computing in alleviating these constraints.
This was a cross-continental collaboration between Australia, the USA, and Europe with Mostafa Rahimi Azghadi, Corey Lammie, Melika Payvand, Elisa Donati, Bernabé Linares-Barranco and Giacomo Indiveri.
Read more here.

IEEE ECCTD 2020 Keynote Address: CMOS-Memristor Nanoelectronics for Neuromorphic Computing

The keynote explores how CMOS-memristor nanoelectronics can be applied to neuromorphic systems for brain-inspired computation. The powerful capabilities of these neuromorphic processors can address a plethora of real-world challenges, from data-driven healthcare to neurostimulation and AI-generated artwork, as we make a profound shift away from the sequential processing of von Neumann machines towards parallel, interconnected neural-inspired structures.
Watch the recording here: http://ecctd2020.eu/node/22
Editorial in Nature Machine Intelligence
One neural network is good for recognising patterns, but two are great for creating them. Nature Machine Intelligence did a neat write-up about our legal research on determining who owns artificially generated patterns, images, videos, and artwork.

Picture produced with the generative AI art tool Artbreeder, by Jacob Huth