Led by PI Prof. Aiming Yan and Co-PIs Nobby Kobayashi, Jairo Velasco Jr., David Lederman, Wei Chen, Huizhong Xu, Sung-Mo Steve Kang, and Jason Eshraghian.
News Feature: “Brain-Inspired AI Code Library Notches Milestone” – Prof. Jason Eshraghian in Tech Briefs
“A new open source code library, snnTorch, has surpassed 100,000 downloads and is used in a wide variety of projects — from NASA satellite tracking efforts to optimizing chips for AI.”
See the full article here.
OpenNeuromorphic Talk: “Neuromorphic Intermediate Representation”
Presented by Jens Pedersen, Bernhard Vogginger, Felix Bauer, and Jason Eshraghian. See the recording here.
NSF Award: $900,000 to Develop the Neuromorphic Integrated Circuits Education Network
Led by PI Prof. Shantanu Chakrabartty along with Co-PIs Prof. Andreas Andreou and Prof. Jason Eshraghian. See the press release here.
New Paper: “Exploiting deep learning accelerators for neuromorphic workloads” led by Ph.D. Candidate Vincent Sun in collaboration with Graphcore, published in Neuromorphic Computing and Engineering
See the full paper here.
Abstract
Spiking neural networks (SNNs) have achieved orders of magnitude improvement in terms of energy consumption and latency when performing inference with deep learning workloads. Error backpropagation is presently regarded as the most effective method for training SNNs, but in a twist of irony, when training on modern graphics processing units (GPUs) this becomes more expensive than non-spiking networks. The emergence of Graphcore’s Intelligence Processing Units (IPUs) balances the parallelized nature of deep learning workloads with the sequential, reusable, and sparsified nature of operations prevalent when training SNNs. IPUs adopt multi-instruction multi-data (MIMD) parallelism by running individual processing threads on smaller data blocks, which is a natural fit for the sequential, non-vectorized steps required to solve spiking neuron dynamical state equations. We present an IPU-optimized release of our custom SNN Python package, snnTorch, which exploits fine-grained parallelism by utilizing low-level, pre-compiled custom operations to accelerate irregular and sparse data access patterns that are characteristic of training SNN workloads. We provide a rigorous performance assessment across a suite of commonly used spiking neuron models, and propose methods to further reduce training run-time via half-precision training. By amortizing the cost of sequential processing into vectorizable population codes, we ultimately demonstrate the potential for integrating domain-specific accelerators with the next generation of neural networks.
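For context, the sequential state update at the heart of this workload looks roughly like the loop below: a minimal snnTorch sketch with illustrative shapes and values, not the IPU-optimized release itself.

```python
import torch
import snntorch as snn

num_steps, batch, features = 25, 8, 100
cur_in = torch.rand(num_steps, batch, features)  # synthetic input current

lif = snn.Leaky(beta=0.9)      # leaky integrate-and-fire neuron; beta = membrane decay
mem = lif.init_leaky()         # initialize membrane potential

spk_rec = []
for step in range(num_steps):  # time steps are inherently sequential
    spk, mem = lif(cur_in[step], mem)  # integrate input, spike on threshold crossing
    spk_rec.append(spk)

spikes = torch.stack(spk_rec)  # [num_steps, batch, features]
```

Because each step depends on the previous membrane state, the loop resists vectorization over time, which is why the IPU's per-thread MIMD model is a natural fit.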
New snnTorch Tutorial: Exoplanet Hunter by Undergraduate Students Ruhai Lin, Aled dela Cruz, and Karina Aguilar
See the tutorial here.
The transit method is a widely used and successful technique for detecting exoplanets. When an exoplanet transits its host star, it causes a temporary reduction in the star’s light flux (brightness). Compared to other techniques, the transit method has discovered the largest number of planets.
Astronomers use telescopes equipped with photometers or spectrophotometers to continuously monitor the brightness of a star over time. Repeated observations of multiple transits allow astronomers to gather more detailed information about the exoplanet, such as its atmosphere and the presence of moons.
Space telescopes like NASA’s Kepler and TESS (Transiting Exoplanet Survey Satellite) have been instrumental in discovering thousands of exoplanets using the transit method. Without the Earth’s atmosphere in the way, there is less interference, and more precise measurements are possible. The transit method continues to be a key tool in advancing our understanding of exoplanetary systems. For more information about the transit method, you can visit NASA’s Exoplanet Exploration page.
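As a toy illustration of the idea (not code from the tutorial), a transit shows up as a statistically significant dip in an otherwise flat light curve:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
time = np.linspace(0.0, 10.0, 1000)             # observation time (days)
flux = 1.0 + rng.normal(0.0, 0.001, time.size)  # normalized stellar brightness
flux[(time > 4.0) & (time < 4.2)] -= 0.01       # simulated 1% transit dip

# Flag samples that dip well below the noise floor (5 sigma here).
noise = np.std(flux[time < 3.0])                # estimate noise away from the transit
dip = np.median(flux) - flux
in_transit = dip > 5 * noise

print(f"Transit spans t = {time[in_transit].min():.2f} to {time[in_transit].max():.2f} days")
```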
New Paper: “To spike or not to spike: A digital hardware perspective on deep learning acceleration” led by Dr. Fabrizio Ottati in IEEE JETCAS
Find the paper on IEEE Xplore here.
Abstract:
As deep learning models scale, they become increasingly competitive across domains spanning from computer vision to natural language processing; however, this happens at the expense of efficiency, since they require increasingly more memory and computing power. The power efficiency of the biological brain outperforms any large-scale deep learning (DL) model; thus, neuromorphic computing tries to mimic brain operations, such as spike-based information processing, to improve the efficiency of DL models. Despite the benefits of the brain, such as efficient information transmission, dense neuronal interconnects, and the co-location of computation and memory, the available biological substrate has severely constrained the evolution of biological brains. Electronic hardware does not have the same constraints; therefore, while modeling spiking neural networks (SNNs) might uncover one piece of the puzzle, the design of efficient hardware backends for SNNs needs further investigation, potentially taking inspiration from the available work done on the artificial neural networks (ANNs) side. As such, when is it wise to look at the brain while designing new hardware, and when should it be ignored? To answer this question, we quantitatively compare the digital hardware acceleration techniques and platforms of ANNs and SNNs. As a result, we provide the following insights: (i) ANNs currently process static data more efficiently, (ii) applications targeting data produced by neuromorphic sensors, such as event-based cameras and silicon cochleas, need more investigation since the behavior of these sensors might naturally fit the SNN paradigm, and (iii) hybrid approaches combining SNNs and ANNs might lead to the best solutions and should be investigated further at the hardware level, accounting for both efficiency and loss optimization.
New Paper: “Capturing the Pulse: A State-of-the-Art Review on Camera-Based Jugular Vein Assessment” led by Ph.D. Candidate Coen Arrow in Biomedical Optics Express
See the full paper here.
Abstract
Heart failure is associated with a rehospitalisation rate of up to 50% within six months. Elevated central venous pressure may serve as an early warning sign. While invasive procedures are used to measure central venous pressure for guiding treatment in hospital, this becomes impractical upon discharge. A non-invasive estimation technique exists, where the clinician visually inspects the pulsation of the jugular veins in the neck, but it is less reliable due to human limitations. Video and signal processing technologies may offer a high-fidelity alternative. This state-of-the-art review analyses existing literature on camera-based methods for jugular vein assessment. We summarize key design considerations and suggest avenues for future research. Our review highlights the neck as a rich imaging target beyond the jugular veins, capturing comprehensive cardiac signals, and outlines factors affecting signal quality and measurement accuracy. Addressing an often quoted limitation in the field, we also propose minimum reporting standards for future studies.
Invited Talk: Kraw Lecture Series “Bridging the Gap Between Artificial Intelligence and Natural Intelligence” by Prof. Jason Eshraghian
See the recording here.
The Kraw Lecture Series in Silicon Valley is made possible by a generous gift from UC Santa Cruz alumnus George Kraw (Cowell ‘71, history and Russian literature) and Raphael Shannon Kraw. The lecture series features acclaimed UC Santa Cruz scientists and technologists who are grappling with some of the biggest questions of our time.
Abstract: The brain is the perfect place to look for inspiration to develop more efficient neural networks. Indeed, the inner workings of our synapses and neurons offer a glimpse at what the future of deep learning might look like. Our brains are constantly adapting, our neurons processing all that we know, mistakes we’ve made, failed predictions—all working to anticipate what will happen next with incredible speed. Our brains are also amazingly efficient. Training large-scale neural networks can cost more than $10 million in energy expense, yet the human brain does remarkably well on a power budget of 20 watts.
We can apply the computational principles that underpin the brain, and use them to engineer more efficient systems that adapt to ever-changing environments. There is an interplay between neural-inspired algorithms, how they can be deployed on low-power microelectronics, and how the brain provides a blueprint for this process.
New Paper: “Neuromorphic cytometry: implementation on cell counting and size estimation” led by Ziyao Zhang and Omid Kavehei in Neuromorphic Computing and Engineering
See the full paper here.
Abstract
Imaging flow cytometry (FC) is a powerful analytic tool that combines the principles of conventional FC with rich spatial information, allowing more profound insight into single-cell analysis. However, offering such high-resolution, full-frame feedback can restrain processing speed and has become a significant trade-off during development. In addition, the dynamic range (DR) offered by conventional photosensors can only capture limited fluorescence signals, which compromises the detection of high-velocity fluorescent objects. Neuromorphic photo-sensing focuses on the events of interest via individually firing pixels to reduce data redundancy and latency. With its inherent high DR, this architecture has the potential to drastically elevate performance in throughput and sensitivity to fluorescent targets. Herein, we present an early demonstration of neuromorphic cytometry, demonstrating the feasibility of adopting an event-based resolution in describing spatiotemporal feedback on microscale objects and, for the first time, including cytometric-like functions in object counting and size estimation to measure 8 µm and 15 µm microparticles and a human monocytic cell line (THP-1). Our work has achieved highly consistent outputs with a widely adopted flow cytometer (CytoFLEX) in detecting microparticles. Moreover, the capacity of an event-based photosensor in registering fluorescent signals was evaluated by recording 6 µm fluorescein isothiocyanate-marked particles in different lighting conditions, revealing superior performance compared to a standard photosensor. Although the current platform cannot deliver multiparametric measurements on cells, future endeavours will include further functionalities and increase the measurement parameters (granularity, cell condition, fluorescence analysis) to enrich cell interpretation.
Brain-Inspired Machine Learning at UCSC: Class Tape-out Success
This quarter, I introduced Brain-Inspired Machine Learning as a course at the University of California, Santa Cruz. And while machine learning is cool and all, it’s only as good as the hardware it runs on.
31 students & first-time chip designers all took the lead on building DRC/LVS-clean neuromorphic circuits. Students came from grad & undergrad backgrounds across various corners of the university: ECE, CSE, Math, Computational Media, Bioengineering, Psychology, etc. Many had never even taken an ECE 101 class, and started learning from scratch 2 weeks ago.
Their designs are now all being manufactured together in the Sky130 Process. Each design is compiled onto the same piece of silicon with TinyTapeout, thanks to Matt Venn and Uri Shaked.
We spent Friday night grinding in my lab while blaring metalcore tunes. All students managed to clear all checks. The final designs do a heap of cool things, from accelerating sparse matrix multiplies and denoising events to simulating reservoir networks. I naturally had to squeeze in a Hodgkin-Huxley neuron in the 6 hours before the deadline (pictured).
Not sure if it’s the cost of living, or the mountain lions on campus, but damn. UCSC students have some serious grit.

Prof. Jason Eshraghian, Prof. Charlotte Frenkel and Prof. Rajit Manohar Present Tutorial at ESSCIRC/ESSDERC (Lisbon, Portugal)
The tutorial titled “Open-Source Neuromorphic Circuit Design” ran in-person at the IEEE European Solid-State Circuits/Devices Conference, alongside co-presenters Prof. Charlotte Frenkel (TU Delft) and Prof. Rajit Manohar (Yale University). The notebooks from the live demo session have been uploaded to GitHub at this link.
Tutorial Overview: As a bio-inspired alternative to conventional machine-learning accelerators, neuromorphic circuits promise energy savings for extreme-edge scenarios. While still considered an emerging approach, neuromorphic chip design is now being included in worldwide research roadmaps: the community is growing fast and is currently catalyzed by the development of open-source design tools and platforms. In this tutorial, we will survey the diversity of the open-source neuromorphic chip design landscape, from digital and mixed-signal small-scale proofs-of-concept to large-scale platforms. We will also provide a hands-on overview of the associated design challenges and guidelines, from which we will extract upcoming trends and promising use cases.
New Paper: “Training spiking neural networks using lessons from deep learning” in the Proceedings of the IEEE
My baby was finally accepted for publication. Available open-access on IEEE Xplore.
Prof. Jason Eshraghian Presenting Invited Talk at the Memristec Summer School, Dresden, Germany
Prof. Jason Eshraghian will present “A Hands-On Approach to Open-Source Memristive Neuromorphic Systems” at the DFG priority programme Memristec Summer School in Dresden, Germany.
This interdisciplinary Summer School on memristive systems will bring students together and enable discussion with experts from the fields of fabrication, characterization, and the theory of memristors.
Prof. Jason Eshraghian and Fabrizio Ottati to Present Tutorial at ICONS 2023 (Santa Fe, NM, USA)
The tutorial titled “From Training to Deployment: Build a Spiking FPGA on a $300 Budget” will run in-person at the International Conference on Neuromorphic Systems in Santa Fe, NM, USA.
Tutorial Abstract:
So you want to use natural intelligence to improve artificial intelligence? The human brain is a great place to look to improve modern neural networks. The computational cost of deep learning exceeds millions of dollars to train large-scale models, and yet, our brains somehow achieve remarkable feats within a power budget of approximately 10-20 watts. While we may be far from having a complete understanding of the brain, we are at a point where a set of design principles has enabled us to build ultra-efficient deep learning tools. Most of these are linked back to event-driven spiking neural networks (SNNs).
In a cruel twist of irony, most of our SNNs are trained and battle-tested on GPUs, which are far from optimized for spike-based workloads. The neuromorphic hardware that is out there for research and/or commercial use (a lot of love going to you, Intel Labs and SynSense) is considerably more expensive than a consumer-grade GPU, or requires a few more steps than a single-click purchase off Amazon. The drawback of rich feature sets is that, at some point in the abstraction, such tools become inflexible. How can we move towards using low-cost hardware that sits on our desk, or fits in a PCIe slot in our desktops, and accelerates SNNs?
This tutorial will take a hands-on approach to learning how to train SNNs for hardware deployment, and to running these models on a low-cost FPGA for inference. With the advent of open-source neuromorphic training libraries and electronic design automation tools, we will conduct hands-on coding sessions to train SNNs, and attendees will subsequently learn how to deploy their designs to off-the-shelf hardware (namely, FPGAs), using the AMD Xilinx Kria KV260 starter kit as the acceleration platform. To port the SNN model to the hardware platform, we will use hls4nm, a software tool being developed by Mr. Fabrizio Ottati, showing how to reap the rewards of training SNNs on hardware that you can own, break apart, and put back together.
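To give a flavor of the training half of the pipeline, below is a minimal snnTorch surrogate-gradient training step on dummy data. Layer sizes and hyperparameters are placeholders rather than the tutorial's actual model, and the FPGA porting step via hls4nm is not shown.

```python
import torch
import torch.nn as nn
import snntorch as snn
import snntorch.functional as SF
from snntorch import surrogate, utils

# Tiny fully connected SNN; surrogate gradients let backprop flow through spikes.
spike_grad = surrogate.fast_sigmoid()
net = nn.Sequential(
    nn.Linear(784, 128),
    snn.Leaky(beta=0.9, spike_grad=spike_grad, init_hidden=True),
    nn.Linear(128, 10),
    snn.Leaky(beta=0.9, spike_grad=spike_grad, init_hidden=True, output=True),
)

loss_fn = SF.ce_rate_loss()               # cross-entropy on output spike rates
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

x = torch.rand(32, 784)                   # dummy batch of flattened images
targets = torch.randint(0, 10, (32,))

utils.reset(net)                          # clear hidden states between batches
spk_rec = [net(x)[0] for _ in range(25)]  # present the input for 25 time steps
loss = loss_fn(torch.stack(spk_rec), targets)

optimizer.zero_grad()
loss.backward()                           # backprop through time over the spike record
optimizer.step()
```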
Telluride Workshop: Open Source Neuromorphic Hardware, Software and Wetware
Prof. Jason Eshraghian & Dr. Peng Zhou were topic area leaders at the Telluride Neuromorphic Engineering & Cognition Workshop. Tasks addressed included:
- porting open silicon (hardware) to neuromorphic engineering,
- linking in-vitro neural networks (wetware) to neuromorphic computing, and
- modelling and training those with spiking neural networks using neuromorphic software.
A project highlight was the development of the Neuromorphic Intermediate Representation (NIR), an intermediate representation that translates various neuromorphic and physics-driven models based on continuous-time ODEs into different formats. This makes it much easier to take a model trained in one library and map it to a large variety of backends.
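For illustration, here is a rough sketch of the intended workflow with the `nir` Python package. The node types and parameter names follow the NIR specification as I understand it, so treat the exact calls as assumptions rather than authoritative API usage.

```python
import numpy as np
import nir

# A two-node graph: an affine map feeding a leaky integrate-and-fire neuron.
graph = nir.NIRGraph(
    nodes={
        "affine": nir.Affine(weight=np.eye(2), bias=np.zeros(2)),
        "lif": nir.LIF(
            tau=np.full(2, 1e-2),    # membrane time constant
            r=np.ones(2),            # membrane resistance
            v_leak=np.zeros(2),      # leak (resting) potential
            v_threshold=np.ones(2),  # firing threshold
        ),
    },
    edges=[("affine", "lif")],
)

nir.write("model.nir", graph)        # serialize the graph to disk
restored = nir.read("model.nir")     # reload it in another backend's toolchain
```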
Ruijie Zhu and Prof. Jason Eshraghian Present Invited Talk “Scaling up SNNs with SpikeGPT” at the Intel Neuromorphic Research Centre
Abstract: If we had a dollar for every time we heard “It will never scale!”, then neuromorphic engineers would be billionaires. This presentation will be centered on SpikeGPT, the first large-scale language model (LLM) using spiking neural nets (SNNs), and possibly the largest SNN that has been trained using error backpropagation.
The need for lightweight language models is more pressing than ever, especially now that we are becoming increasingly reliant on them, from word processors and search engines to code troubleshooting and academic grant writing. Our dependence on a single LLM means that every user is potentially pooling sensitive data into a single database, which poses significant security risks if breached.
SpikeGPT was built to move towards addressing the privacy and energy-consumption challenges we presently run into using Transformer blocks. Our approach decomposes self-attention into a recurrent form that is compatible with spiking neurons, along with dynamical weight matrices where the dynamics are learnable, rather than the parameters as in conventional deep learning.
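To give a flavor of that recurrent decomposition (a toy sketch of the general idea, not SpikeGPT's actual implementation), attention over the full token history can be replaced by exponentially decayed running sums that are updated once per step:

```python
import torch

def toy_recurrent_attention(keys, values, receptance, decay):
    # Two decayed running sums replace the softmax over the entire history,
    # so each step costs O(d) instead of attending over all previous tokens.
    # keys / values / receptance: [T, d]; decay: [d] with entries in (0, 1).
    T, d = keys.shape
    num = torch.zeros(d)          # running sum of exp(k) * v
    den = torch.zeros(d)          # running sum of exp(k)
    outputs = []
    for t in range(T):
        num = decay * num + torch.exp(keys[t]) * values[t]
        den = decay * den + torch.exp(keys[t])
        outputs.append(torch.sigmoid(receptance[t]) * num / den)
    return torch.stack(outputs)   # [T, d]; a spiking layer can threshold this

out = toy_recurrent_attention(torch.randn(16, 8), torch.randn(16, 8),
                              torch.randn(16, 8), torch.full((8,), 0.9))
```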
We will provide an overview of what SpikeGPT does, how it works, and what it took to train it successfully. We will also provide a demo on how users can download pre-trained models available on HuggingFace so that listeners are able to experiment with them.
Link to the talk can be found here.
Sustainability Award of the GAMM Annual Meeting 2023 Awarded to Dr. Alexander Henkes, Prof. Henning Wessels and Prof. Jason Eshraghian
The research team from TU Braunschweig and University of California, Santa Cruz was honoured with the Sustainability Award at the annual conference of the Association of Applied Mathematics and Mechanics (GAMM) in Dresden. The scientists have developed an AI method that can help maintain bridges more efficiently and extend their service life.
Read more here.
New Preprint: “Brain-inspired learning in artificial neural networks: A Review” led by Ph.D. Candidate Samuel Schmidgall
Abstract: Artificial neural networks (ANNs) have emerged as an essential tool in machine learning, achieving remarkable success across diverse domains, including image and speech generation, game playing, and robotics. However, there exist fundamental differences between ANNs’ operating mechanisms and those of the biological brain, particularly concerning learning processes. This paper presents a comprehensive review of current brain-inspired learning representations in artificial neural networks. We investigate the integration of more biologically plausible mechanisms, such as synaptic plasticity, to enhance these networks’ capabilities. Moreover, we delve into the potential advantages and challenges accompanying this approach. Ultimately, we pinpoint promising avenues for future research in this rapidly advancing field, which could bring us closer to understanding the essence of intelligence.
Link to the preprint here.
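As a toy example of one biologically plausible mechanism the review covers (illustrative only, not code from the paper), a simple Hebbian rule strengthens synapses whose pre- and post-synaptic units are co-active:

```python
import torch

def hebbian_update(weights, pre, post, lr=1e-3):
    # Toy Hebbian plasticity: dW is proportional to post x pre activity,
    # i.e. "neurons that fire together wire together". Purely local: no backprop.
    # weights: [n_out, n_in]; pre: [batch, n_in]; post: [batch, n_out].
    return weights + lr * post.T @ pre

W = torch.zeros(10, 100)
pre = torch.rand(32, 100)   # pre-synaptic activity
post = torch.rand(32, 10)   # post-synaptic activity
W = hebbian_update(W, pre, post)
```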

Preprint Update: Training Spiking Neural Networks Using Lessons from Deep Learning
We submitted this extensive (and opinionated) guide to training spiking neural networks to the Proceedings of the IEEE 18 months ago. During this time, the preprint has reached 100+ citations, snnTorch has cracked 80,000 downloads, and it has helped numerous people enter the field of neuromorphic computing… and much of the content that was true 18 months ago has significantly changed.
While we continue to wait for the peer review process to do its thing, I’ve taken the liberty to revamp the preprint to reflect the rapidly changing world of training and using SNNs.
The latest version includes “Practical Notes” with black-magic tricks that have helped us improve the performance of SNNs, code snippets that reduce verbose explanations, and a fresh account of some of the latest goings-on in the neuroscience-inspired deep learning world.
Thank you to Gregor Lenz, Xinxin Wang and Max Ward for working through this >50-page monster.
Preprint link here.