Prof. Jason Eshraghian and Dr. Fabrizio Ottati Present Tutorial at ISFPGA (Monterey, CA)

Fabrizio Ottati and I will be running a tutorial tomorrow (Sunday, 3 March) at the International Symposium on Field-Programmable Gate Arrays (ISFPGA) in Monterey, CA titled: “Who needs neuromorphic hardware? Deploying SNNs to FPGAs via HLS”.

snn-to-fpga

We’ll go through both software and hardware: training SNNs with quantization-aware techniques applied to weights and neuron states, and then showing how to go from an snnTorch model straight onto AMD/Xilinx FPGAs for low-power, flexible deployment.

GitHub repo: https://github.com/open-neuromorphic/fpga-snntorch

Tutorial summary: https://www.isfpga.org/workshops-tutorials/#t2
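
As a taste of the quantization-aware side of the tutorial, here is a minimal sketch of fake-quantizing weights (and, by extension, neuron states) with a straight-through estimator in plain PyTorch. The bit widths and rounding scheme are illustrative assumptions, not the tutorial's exact recipe.

```python
import torch

def fake_quant(x: torch.Tensor, num_bits: int = 8) -> torch.Tensor:
    # Symmetric uniform fake-quantization with a straight-through estimator:
    # the forward pass snaps values onto an integer grid, while the backward
    # pass treats rounding as identity so full-precision weights keep learning.
    qmax = 2 ** (num_bits - 1) - 1
    scale = x.detach().abs().max().clamp(min=1e-8) / qmax
    x_q = torch.round(x / scale).clamp(-qmax - 1, qmax) * scale
    return x + (x_q - x).detach()

# Apply to weights inside the forward pass during training; the same trick
# can be applied to the membrane potential for stateful quantization.
w = torch.randn(128, 784, requires_grad=True)
w_q = fake_quant(w, num_bits=4)
```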

OpenNeuromorphic Talk: “Neuromorphic Intermediate Representation”

In this workshop, we will show you how to move models from your favourite framework directly to neuromorphic hardware with 1-2 lines of code. We will present the technology behind this, the Neuromorphic Intermediate Representation (NIR), and demonstrate how we can use it to run a live spiking convnet on the Speck chip.
Presented by Jens Pedersen, Bernhard Vogginger, Felix Bauer, and Jason Eshraghian. See the recording here.
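
To give a sense of what “1-2 lines of code” looks like in practice, here is a minimal sketch of exporting an snnTorch model to NIR and saving it for a hardware backend. The export helper's name and location are assumptions to check against your installed snnTorch version; `nir.write` follows the `nir` package's documented interface.

```python
import torch
import torch.nn as nn
import snntorch as snn
import nir

# Toy snnTorch model to export (sizes are placeholders).
net = nn.Sequential(
    nn.Linear(784, 10),
    snn.Leaky(beta=0.9, init_hidden=True, output=True),
)

# Export to a framework-agnostic NIR graph and save it to disk.
# NOTE: helper name/location assumed; check your snnTorch version.
from snntorch.export_nir import export_to_nir

nir_graph = export_to_nir(net, torch.randn(1, 784))
nir.write("model.nir", nir_graph)

# A backend toolchain (e.g. the one targeting the Speck chip) then loads
# the same graph with its own from_nir-style loader.
```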

Invited Talk: Kraw Lecture Series “Bridging the Gap Between Artificial Intelligence and Natural Intelligence” by Prof. Jason Eshraghian

See the recording here.

The Kraw Lecture Series in Silicon Valley is made possible by a generous gift from UC Santa Cruz alumnus George Kraw (Cowell ‘71, history and Russian literature) and Raphael Shannon Kraw. The lecture series features acclaimed UC Santa Cruz scientists and technologists who are grappling with some of the biggest questions of our time.

Abstract: The brain is the perfect place to look for inspiration to develop more efficient neural networks. Indeed, the inner workings of our synapses and neurons offer a glimpse at what the future of deep learning might look like. Our brains are constantly adapting, our neurons processing all that we know, mistakes we’ve made, failed predictions—all working to anticipate what will happen next with incredible speed. Our brains are also amazingly efficient. Training large-scale neural networks can cost more than $10 million in energy expense, yet the human brain does remarkably well on a power budget of 20 watts.

We can apply the computational principles that underpin the brain and use them to engineer more efficient systems that adapt to ever-changing environments. There is an interplay between neural-inspired algorithms, how they can be deployed on low-power microelectronics, and how the brain provides a blueprint for this process.

Prof. Jason Eshraghian, Prof. Charlotte Frenkel and Prof. Rajit Manohar Present Tutorial at ESSCIRC/ESSDERC (Lisbon, Portugal)

The tutorial titled “Open-Source Neuromorphic Circuit Design” ran in-person at the IEEE European Solid-State Circuits/Devices Conference, alongside co-presenters Prof. Charlotte Frenkel (TU Delft) and Prof. Rajit Manohar (Yale University). The notebooks from the live demo session have been uploaded to GitHub at this link.

Tutorial Overview: As a bio-inspired alternative to conventional machine-learning accelerators, neuromorphic circuits promise significant energy savings in extreme-edge scenarios. While still considered an emerging approach, neuromorphic chip design is now included in worldwide research roadmaps: the community is growing fast and is currently catalyzed by the development of open-source design tools and platforms. In this tutorial, we will survey the diversity of the open-source neuromorphic chip design landscape, from digital and mixed-signal small-scale proofs-of-concept to large-scale platforms. We will also provide a hands-on overview of the associated design challenges and guidelines, from which we will extract upcoming trends and promising use cases.

Prof. Jason Eshraghian Presenting Invited Talk at the Memristec Summer School, Dresden, Germany

Prof. Jason Eshraghian will present “A Hands-On Approach to Open-Source Memristive Neuromorphic Systems” at the DFG priority programme Memristec Summer School in Dresden, Germany.

This interdisciplinary Summer School on memristive systems will bring students together and enable discussion with experts from the fields of memristor fabrication, characterization, and theory.


Prof. Jason Eshraghian and Fabrizio Ottati to Present Tutorial at ICONS 2023 (Santa Fe, NM, USA)

The tutorial titled “From Training to Deployment: Build a Spiking FPGA on a $300 Budget” will run in-person at the International Conference on Neuromorphic Systems in Santa Fe, NM, USA.

Tutorial Abstract: 

So you want to use natural intelligence to improve artificial intelligence? The human brain is a great place to look to improve modern neural networks. Training large-scale deep learning models costs millions of dollars, and yet our brains somehow achieve remarkable feats within a power budget of approximately 10-20 watts. While we may be far from a complete understanding of the brain, we are at a point where a set of design principles has enabled us to build ultra-efficient deep learning tools. Most of these link back to event-driven spiking neural networks (SNNs).

In a cruel twist of irony, most of our SNNs are trained and battle-tested on GPUs, which are far from optimized for spike-based workloads. The neuromorphic hardware available for research and/or commercial use (a lot of love going to you, Intel Labs and SynSense) is considerably more expensive than a consumer-grade GPU, or requires a few more steps than a single-click purchase off Amazon. The drawback of rich feature sets is that, at some point in the abstraction, such tools become inflexible. How can we move towards using low-cost hardware that sits on our desk, or fits in a PCIe slot in our desktop, and accelerates SNNs?

This tutorial will take a hands-on approach to training SNNs for hardware deployment and running these models on a low-cost FPGA for inference. With the advent of open-source neuromorphic training libraries and electronic design automation tools, we will conduct hands-on coding sessions to train SNNs, and attendees will subsequently learn how to deploy their designs to off-the-shelf hardware (namely, FPGAs), using the AMD Xilinx Kria KV260 starter kit as the acceleration platform. To port the SNN model to the hardware platform, we will use hls4nm, a software tool being developed by Mr. Fabrizio Ottati, showing how to reap the rewards of training SNNs on hardware that you can own, break apart, and put back together.
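
As a rough preview of the training half of the tutorial, here is a minimal snnTorch training sketch. The dataset, layer sizes, and hyperparameters are placeholders, and the hls4nm deployment step is not shown.

```python
import torch
import torch.nn as nn
import snntorch as snn
from snntorch import surrogate, utils
from snntorch import functional as SF

num_steps = 25  # simulation time steps per sample
beta = 0.9      # membrane potential decay per step

net = nn.Sequential(
    nn.Linear(784, 128),
    snn.Leaky(beta=beta, spike_grad=surrogate.fast_sigmoid(), init_hidden=True),
    nn.Linear(128, 10),
    snn.Leaky(beta=beta, spike_grad=surrogate.fast_sigmoid(),
              init_hidden=True, output=True),
)

optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = SF.ce_rate_loss()  # cross-entropy on output spike rates

x = torch.rand(64, 784)          # placeholder batch
y = torch.randint(0, 10, (64,))  # placeholder labels

utils.reset(net)  # zero the hidden states before each sequence
spk_rec = [net(x)[0] for _ in range(num_steps)]  # collect output spikes

loss = loss_fn(torch.stack(spk_rec), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```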

Telluride Workshop: Open Source Neuromorphic Hardware, Software and Wetware

Prof. Jason Eshraghian & Dr. Peng Zhou were topic area leaders at the Telluride Neuromorphic Engineering & Cognition Workshop, leading the topic area on open-source neuromorphic hardware, software, and wetware.

A project highlight was the development of the Neuromorphic Intermediate Representation (NIR), an intermediate representation that translates neuromorphic and physics-driven models based on continuous-time ODEs between different formats. This makes it much easier to map models trained in one library onto a large variety of backends.
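
Because NIR describes computation as parameterized continuous-time primitives rather than framework-specific layers, a graph can also be assembled by hand. Below is a minimal sketch in the style of the `nir` reference implementation; the node constructors and the `from_list` helper are assumptions to verify against the package version you have installed.

```python
import numpy as np
import nir

# A tiny network: an affine map feeding one leaky integrate-and-fire node.
# Each node carries the parameters of its continuous-time ODE, so any
# backend can integrate (or discretize) it however it sees fit.
graph = nir.NIRGraph.from_list(
    nir.Affine(weight=np.array([[1.0]]), bias=np.array([0.0])),
    nir.LIF(
        tau=np.array([0.02]),         # membrane time constant
        r=np.array([1.0]),            # resistance
        v_leak=np.array([0.0]),       # leak (resting) potential
        v_threshold=np.array([1.0]),  # firing threshold
    ),
)
nir.write("lif.nir", graph)
```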

Ruijie Zhu and Prof. Jason Eshraghian Present Invited Talk “Scaling up SNNs with SpikeGPT” at the Intel Neuromorphic Research Centre

spikegpt-architecture

Abstract: If we had a dollar for every time we heard “It will never scale!”, then neuromorphic engineers would be billionaires. This presentation will be centered on SpikeGPT, the first large-scale language model (LLM) using spiking neural nets (SNNs), and possibly the largest SNN that has been trained using error backpropagation.

The need for lightweight language models is more pressing than ever, especially now that we are becoming increasingly reliant on them for everything from word processors and search engines to code troubleshooting and academic grant writing. Our dependence on a single LLM means that every user is potentially pooling sensitive data into a single database, which poses significant security risks in the event of a breach.

SpikeGPT was built as a step towards addressing the privacy and energy-consumption challenges we presently run into with Transformer blocks. Our approach decomposes self-attention into a recurrent form that is compatible with spiking neurons, along with dynamical weight matrices where the dynamics are learned, rather than the parameters as in conventional deep learning.
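
To make the recurrent form concrete, below is a simplified, illustrative sketch of an RWKV-style token-mixing step of the kind SpikeGPT builds on, with a hard threshold standing in for the spiking neuron. The variable names and the omission of numerical-stability tricks are simplifications, not the published model's code.

```python
import torch

def recurrent_mixing_step(k_t, v_t, num, den, w, u):
    # One token of a recurrent replacement for self-attention: running
    # exponentially weighted sums make the per-token cost O(1) in sequence
    # length instead of O(T) for pairwise attention.
    #   k_t, v_t : key/value vectors for the current token
    #   num, den : recurrent numerator/denominator state
    #   w, u     : learned per-channel decay and current-token bonus
    out = (num + torch.exp(u + k_t) * v_t) / (den + torch.exp(u + k_t))
    num = torch.exp(-torch.exp(w)) * num + torch.exp(k_t) * v_t
    den = torch.exp(-torch.exp(w)) * den + torch.exp(k_t)
    spk = (out > 0.5).float()  # illustrative threshold; training would use
                               # a surrogate gradient through this step
    return spk, num, den
```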

We will provide an overview of what SpikeGPT does, how it works, and what it took to train it successfully. We will also provide a demo on how users can download pre-trained models available on HuggingFace so that listeners are able to experiment with them.

A link to the talk can be found here.

Prof. Jason Eshraghian and Prof. Charlotte Frenkel to Present Tutorial at ISCAS 2023 (Monterey, CA, USA)

The tutorial titled “How to Build Open-Source Neuromorphic Hardware and Algorithms” will run in-person at the IEEE International Symposium on Circuits and Systems in Monterey, CA, USA.

Tutorial Overview: The brain is the perfect place to look for inspiration to develop more efficient neural networks. While training large-scale deep learning models costs millions of dollars, our brains are somehow equipped to process an abundance of signals from our sensory periphery within a power budget of approximately 10-20 watts. The brain’s incredible efficiency can be attributed to how biological neurons encode data in the time domain as spiking action potentials.

This tutorial will take a hands-on approach to training spiking neural networks (SNNs) and designing neuromorphic accelerators that can process these models. With the advent of open-source neuromorphic training libraries and electronic design automation tools, we will conduct hands-on coding sessions to train SNNs, and attendees will subsequently design a lightweight neuromorphic accelerator in the SKY130 process. Participants will be equipped with practical skills for applying principles of neuroscience to deep learning and hardware acceleration in building the next generation of machine intelligence.
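
At the core of the hands-on session is the leaky integrate-and-fire (LIF) neuron, whose discretized dynamics reduce to a few lines. Here is a minimal sketch with illustrative parameters:

```python
import torch

def lif_step(x, mem, beta=0.9, threshold=1.0):
    # Discretized LIF update: decay the membrane potential, integrate the
    # input current, then spike and (softly) reset wherever the threshold
    # is crossed.
    mem = beta * mem + x
    spk = (mem > threshold).float()
    mem = mem - spk * threshold  # reset by subtraction
    return spk, mem

mem = torch.zeros(10)
for _ in range(100):
    spk, mem = lif_step(torch.rand(10), mem)
```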

Prof. Jason Eshraghian Presents Invited Talk at the NICE Workshop 2023 (San Antonio, TX, USA)

Jason Eshraghian is delivering an invited talk, “All Aboard the Open-Source Neuromorphic Hype Train,” at the 2023 Neuro-Inspired Computing Elements (NICE) Workshop in San Antonio, Texas, USA in April.

The presentation will give an overview of neuromorphic hardware developed in open-source processes and highlight the Tiny Neuromorphic Tape-out project that will take place at the Telluride Neuromorphic Cognition and Engineering Workshop.

Prof. Jason Eshraghian Presents Invited Talk at FOSSi Latch-Up 2023 (Santa Barbara, CA, USA)

Jason Eshraghian gave an invited talk, “Open Source Brain-Inspired Neuromorphic Software and Hardware,” at the 2023 Free and Open Source Silicon (FOSSi) Latch-Up Conference at UC Santa Barbara, CA, USA.

The presentation gave an overview of how open-source tooling has been used to propose and implement neuromorphic solutions and applications, and highlighted the Tiny Neuromorphic Tape-out project taking place at the Telluride Neuromorphic Cognition and Engineering Workshop.

The recording is available on YouTube.

New Preprint: “OpenSpike: An OpenRAM SNN Accelerator” led by Undergraduate Researcher Farhad Modaresi Accepted for ISCAS 2023 in Monterey, CA

Farhad Modaresi has led the design and tape-out of a fully open-sourced spiking neural network accelerator in the Skywater 130 process. The design is based on memory macros generated using OpenRAM.

Many of the advances in deep learning this past decade can be attributed to the open-source movement, where researchers have been able to reproduce and iterate upon open code bases. With the advent of open PDKs (SkyWater), EDA toolchains, and memory compilers (OpenRAM, by co-author Matthew Guthaus), we hope to bring the same rapid pace of development to neuromorphic hardware.
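
For a sense of the flow, OpenRAM generates a memory macro from a short Python configuration file. The sketch below follows the shape of OpenRAM's documented example configs, but the exact parameter set for the SkyWater 130 flow should be checked against the OpenRAM documentation.

```python
# myconfig.py -- illustrative OpenRAM configuration sketch
word_size = 32    # bits per word
num_words = 1024  # words in the macro
num_rw_ports = 1  # single read/write port

tech_name = "sky130"      # SkyWater 130 nm PDK
process_corners = ["TT"]  # corners to characterize
supply_voltages = [1.8]
temperatures = [25]

output_path = "macro"
output_name = "sram_32x1024"
```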

Check out the preprint here: https://arxiv.org/abs/2302.01015

GitHub repo with RTL you are welcome to steal: https://github.com/sfmth/OpenSpike

OpenSpike Schematic and Layout

Prof. Jason Eshraghian Presents Tutorial at IEEE AICAS 2022 (Incheon, Korea)

Jason Eshraghian is delivering an extended tutorial, “Training Spiking Neural Networks Using Lessons from Deep Learning,” at the IEEE Artificial Intelligence Circuits and Systems Conference in Incheon, Korea this June. See more here.

The extended 1.5-hour session will include the fundamentals of spiking neural networks, resource-constrained SNN-hardware co-design, and a hands-on session in which attendees train an SNN from scratch.
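
As a flavor of the fundamentals covered, one of the first hands-on steps is encoding static data into spike trains, for example with snnTorch's rate encoder:

```python
import torch
from snntorch import spikegen

data = torch.rand(64, 784)                  # inputs normalized to [0, 1]
spikes = spikegen.rate(data, num_steps=25)  # Bernoulli spike train per step
print(spikes.shape)                         # torch.Size([25, 64, 784])
```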

IEEE ECCTD 2020 Keynote Address: CMOS-Memristor Nanoelectronics for Neuromorphic Computing

Professor Sung-Mo Kang and Jason Eshraghian delivered the keynote address for the IEEE European Conference on Circuit Theory and Design titled “CMOS-Memristor Nanoelectronics for Neuromorphic Computing”.
The talk takes a journey from the early stages of CMOS design to current memristor nanoelectronics, with critical views on devices, interconnect, and technology for achieving multidimensional design goals: reliability, throughput performance, energy consumption, and manufacturing cost.

These principles are applied to neuromorphic systems for brain-inspired computation. The powerful capabilities of these neuromorphic processors can be applied to a plethora of real-world challenges, from data-driven healthcare and neurostimulation to AI-generated artwork, as we make a profound shift away from the sequential processing of von Neumann machines towards parallel, interconnected neural-inspired structures.

Watch the recording here: http://ecctd2020.eu/node/22