New Preprint: “Brain-inspired learning in artificial neural networks: A Review,” led by Ph.D. Candidate Samuel Schmidgall

Abstract: Artificial neural networks (ANNs) have emerged as an essential tool in machine learning, achieving remarkable success across diverse domains, including image and speech generation, game playing, and robotics. However, there exist fundamental differences between ANNs’ operating mechanisms and those of the biological brain, particularly concerning learning processes. This paper presents a comprehensive review of current brain-inspired learning representations in artificial neural networks. We investigate the integration of more biologically plausible mechanisms, such as synaptic plasticity, to enhance these networks’ capabilities. Moreover, we delve into the potential advantages and challenges accompanying this approach. Ultimately, we pinpoint promising avenues for future research in this rapidly advancing field, which could bring us closer to understanding the essence of intelligence.

Link to the preprint here.

[Figure: SNN overview]

Preprint Update: Training Spiking Neural Networks Using Lessons from Deep Learning

We submitted this extensive (and opinionated) guide to training spiking neural networks to the Proceedings of the IEEE 18 months ago. Since then, the preprint has passed 100 citations, snnTorch has cracked 80,000 downloads, and the guide has helped numerous people enter the field of neuromorphic computing… and much of what was true 18 months ago has since changed significantly.

While we continue to wait for the peer review process to do its thing, I’ve taken the liberty of revamping the preprint to reflect the rapidly changing world of training and using SNNs.

The latest version includes “Practical Notes” with black-magic tricks that have helped us improve the performance of SNNs, code snippets that replace verbose explanations, and a fresh account of the latest goings-on in the neuroscience-inspired deep learning world.
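For a taste of that style, here is a minimal sketch of the kind of snippet the guide favors, built on snnTorch’s leaky integrate-and-fire neuron. This is our own illustrative example, not an excerpt from the preprint, and the hyperparameters are arbitrary:

```python
import torch
import snntorch as snn
from snntorch import surrogate

# A leaky integrate-and-fire (LIF) neuron with a surrogate gradient,
# so the non-differentiable spike still admits backpropagation.
lif = snn.Leaky(beta=0.9, spike_grad=surrogate.fast_sigmoid())

mem = lif.init_leaky()            # membrane potential state
cur_in = torch.rand(10, 1) * 0.5  # toy input current over 10 timesteps

for step in range(10):
    spk, mem = lif(cur_in[step], mem)  # emits a spike when the membrane crosses threshold
```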

Thank you to Gregor Lenz, Xinxin Wang, and Max Ward for working through this >50-page monster.

Preprint link here.

Prof. Jason Eshraghian and Prof. Charlotte Frenkel to Present Tutorial at ISCAS 2023 (Monterey, CA, USA)

The tutorial, titled “How to Build Open-Source Neuromorphic Hardware and Algorithms,” will run in person at the IEEE International Symposium on Circuits and Systems in Monterey, CA, USA.

Tutorial Overview: The brain is the perfect place to look for inspiration when developing more efficient neural networks. While training large-scale deep learning models can cost millions of dollars in compute, our brains somehow process an abundance of signals from our sensory periphery within a power budget of approximately 10-20 watts. Much of the brain’s incredible efficiency can be attributed to how biological neurons encode data in the time domain, as spiking action potentials.
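To make the time-domain encoding concrete, here is a short sketch of rate coding with snnTorch’s spikegen module; the input shape and number of timesteps below are placeholder assumptions, not tutorial material:

```python
import torch
from snntorch import spikegen

# Rate coding: treat each normalized input value as a firing probability
# and sample a Bernoulli spike train across timesteps.
data = torch.rand(1, 784)                         # e.g., a flattened, normalized image
spike_train = spikegen.rate(data, num_steps=100)  # shape: (100, 1, 784)
```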

This tutorial will take a hands-on approach to training spiking neural networks (SNNs) and designing neuromorphic accelerators that can process these models. With the advent of open-source neuromorphic training libraries and electronic design automation tools, we will conduct hands-on coding sessions to train SNNs, and attendees will subsequently design a lightweight neuromorphic accelerator in the SKY130 process. Participants will leave with practical skills for applying principles of neuroscience to deep learning and hardware acceleration in building the next generation of machine intelligence.
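For a flavor of what the coding sessions involve, the sketch below shows one surrogate-gradient training step in snnTorch. The network architecture, loss function, and hyperparameters are illustrative assumptions rather than the tutorial’s actual material:

```python
import torch
import torch.nn as nn
import snntorch as snn
from snntorch import surrogate, utils
import snntorch.functional as SF

num_steps = 25  # timesteps per forward pass (arbitrary choice)

# A small fully connected SNN: 784 inputs (e.g., flattened MNIST), 10 output classes.
net = nn.Sequential(
    nn.Linear(784, 128),
    snn.Leaky(beta=0.9, spike_grad=surrogate.fast_sigmoid(), init_hidden=True),
    nn.Linear(128, 10),
    snn.Leaky(beta=0.9, spike_grad=surrogate.fast_sigmoid(), init_hidden=True, output=True),
)

optimizer = torch.optim.Adam(net.parameters(), lr=2e-3)
loss_fn = SF.ce_rate_loss()  # cross-entropy on output spike rates

def train_step(data, targets):
    utils.reset(net)  # clear membrane potentials between samples
    spk_rec = []
    for _ in range(num_steps):    # present the static input at every timestep
        spk_out, mem_out = net(data)
        spk_rec.append(spk_out)
    loss = loss_fn(torch.stack(spk_rec), targets)
    optimizer.zero_grad()
    loss.backward()   # surrogate gradient stands in for d(spike)/d(membrane)
    optimizer.step()
    return loss.item()
```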