Neuromorphic Computing Group

Brain-Inspired Systems at UC Santa Cruz


New Paper: “Optically Tunable Electrical Oscillations in Oxide-Based Memristors for Neuromorphic Computing” led by Collaborator Dr. Shimul K. Nath

optical memristor
Abstract: The application of hardware-based neural networks can be enhanced by integrating sensory neurons and synapses that enable direct input from external stimuli. Here, we report direct optical control of an oscillatory neuron based on volatile threshold switching in V3O5. The devices exhibit electroforming-free operation with switching parameters that can be tuned by optical illumination. Using temperature-dependent electrical measurements, conductive atomic force microscopy (C-AFM), in-situ thermal imaging, and lumped-element modelling, we show that the changes in switching parameters, including threshold and hold voltages, arise from an overall conductivity increase of the oxide film due to the contribution of both photo-conductive and bolometric characteristics of V3O5, which eventually affects the oscillation dynamics. Furthermore, our investigation reveals V3O5 as a new bolometric material with a remarkable temperature coefficient of resistivity (TCR) as high as −4.6% K−1 at 423 K. We show the utility of the optically tuneable device response and spiking frequency by demonstrating in-sensor reservoir computing with reduced computational effort and an optical encoding layer for a spiking neural network, respectively, using a simulated array of devices.
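For readers unfamiliar with the figure of merit, the TCR quoted above is defined as TCR = (1/R)(dR/dT), usually reported in % per kelvin. The short sketch below shows how it might be estimated from a measured resistance-temperature curve; the numbers are synthetic placeholders, not data from the paper.

```python
import numpy as np

# Synthetic resistance-vs-temperature data (illustrative only, not from the paper).
T = np.array([373.0, 398.0, 423.0, 448.0])   # temperature in K
R = np.array([1.00e5, 5.5e4, 3.0e4, 1.7e4])  # resistance in ohms

# TCR(T) = (1/R) * dR/dT, expressed in % per kelvin.
dR_dT = np.gradient(R, T)
tcr_percent_per_K = 100.0 * dR_dT / R

for Ti, tcr in zip(T, tcr_percent_per_K):
    print(f"T = {Ti:.0f} K, TCR = {tcr:+.2f} % K^-1")
```

A negative TCR simply reflects that the film's resistance drops as it heats up, which is what makes the material useful as a bolometric (thermally sensitive) element.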

New snnTorch Tutorial: Spiking-Tactile MNIST by Undergraduate Students Dylan Louie, Hannah Cohen-Sandler, and Shatoparba Banerjee

See the tutorial here.

The next tutorial from UCSC’s Brain-Inspired Machine Learning class is by Dylan J. Louie, Hannah Cohen Sandler, and Shatoparba Banerjee.

They show how to train an SNN for tactile sensing using the Spiking-Tactile MNIST Neuromorphic Dataset. This dataset was developed in Benjamin C.K. Tee‘s lab at NUS. It consists of handwritten digits obtained by human participants writing on a neuromorphic tactile sensor array.
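To give a flavour of what the tutorial covers, here is a minimal, hypothetical snnTorch sketch of a spiking classifier for event-based tactile input. The layer sizes and the placeholder taxel count are illustrative assumptions, not the tutorial's actual architecture.

```python
import torch
import torch.nn as nn
import snntorch as snn
from snntorch import surrogate

# Illustrative sizes only; the tutorial's actual architecture may differ.
num_taxels = 100   # placeholder: set this to the tactile sensor array size
num_hidden = 128
num_classes = 10   # digits 0-9
beta = 0.9         # membrane decay constant

class TactileSNN(nn.Module):
    def __init__(self):
        super().__init__()
        spike_grad = surrogate.fast_sigmoid()
        self.fc1 = nn.Linear(num_taxels, num_hidden)
        self.lif1 = snn.Leaky(beta=beta, spike_grad=spike_grad)
        self.fc2 = nn.Linear(num_hidden, num_classes)
        self.lif2 = snn.Leaky(beta=beta, spike_grad=spike_grad)

    def forward(self, x):
        # x: [time, batch, num_taxels] spike tensor
        mem1 = self.lif1.init_leaky()
        mem2 = self.lif2.init_leaky()
        spk2_rec = []
        for t in range(x.size(0)):
            cur1 = self.fc1(x[t])
            spk1, mem1 = self.lif1(cur1, mem1)
            cur2 = self.fc2(spk1)
            spk2, mem2 = self.lif2(cur2, mem2)
            spk2_rec.append(spk2)
        return torch.stack(spk2_rec)  # output spikes over time

net = TactileSNN()
dummy = (torch.rand(50, 1, num_taxels) < 0.1).float()  # 50 time steps of random spikes
out_spikes = net(dummy)
prediction = out_spikes.sum(dim=0).argmax(dim=-1)       # rate-coded readout
```

The readout here simply counts output spikes per class over the recording window; the tutorial walks through the full training loop on the real dataset.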

For more information about the dataset, see the preprint by Hian Hian See et al. here.

 

Prof. Jason Eshraghian and Dr. Fabrizio Ottati Present Tutorial at ISFPGA (Monterey, CA)

Fabrizio Ottati and I will be running a tutorial tomorrow (Sunday, 3 March) at the International Symposium on Field-Programmable Gate Arrays (ISFPGA) in Monterey, CA titled: “Who needs neuromorphic hardware? Deploying SNNs to FPGAs via HLS”.

snn-to-fpga

We’ll go through both software and hardware: training SNNs using quantization-aware techniques across both weights and neuron state, and then showing how to go from an snnTorch model straight onto AMD/Xilinx FPGAs for low-power, flexible deployment.
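As a rough illustration of the software side, the sketch below combines Brevitas quantized layers with snnTorch neurons for weight quantization-aware training. It is a minimal example with assumed sizes and 4-bit weights, not the tutorial's actual code; see the GitHub repo below for the real materials.

```python
import torch
import torch.nn as nn
import snntorch as snn
from snntorch import surrogate, utils
from brevitas.nn import QuantLinear  # quantization-aware linear layer

beta = 0.9
spike_grad = surrogate.fast_sigmoid()

# 4-bit weight quantization-aware SNN block (sizes are placeholders).
net = nn.Sequential(
    QuantLinear(784, 128, bias=True, weight_bit_width=4),
    snn.Leaky(beta=beta, spike_grad=spike_grad, init_hidden=True),
    QuantLinear(128, 10, bias=True, weight_bit_width=4),
    snn.Leaky(beta=beta, spike_grad=spike_grad, init_hidden=True, output=True),
)

x = (torch.rand(25, 1, 784) < 0.2).float()  # 25 time steps of random input spikes
utils.reset(net)                             # clear hidden neuron states
spk_rec = []
for t in range(x.size(0)):
    spk, mem = net(x[t])   # final layer returns (spike, membrane) with output=True
    spk_rec.append(spk)
spikes_out = torch.stack(spk_rec)
```

Weight quantization is only half of the story; quantizing the stateful membrane variables for FPGA deployment is covered in the tutorial itself.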

GitHub repo: https://github.com/open-neuromorphic/fpga-snntorch

Tutorial summary: https://www.isfpga.org/workshops-tutorials/#t2

New Preprint: “Addressing cognitive bias in medical language models” led by Ph.D. Candidate Samuel Schmidgall

Preprint link here.

Abstract: The integration of large language models (LLMs) into the medical field has gained significant attention due to their promising accuracy in simulated clinical decision-making settings. However, clinical decision-making is more complex than simulations because physicians’ decisions are shaped by many factors, including the presence of cognitive bias. The degree to which LLMs are susceptible to the same cognitive biases that affect human clinicians remains unexplored. Our hypothesis posits that when LLMs are confronted with clinical questions containing cognitive biases, they will yield significantly less accurate responses compared to the same questions presented without such biases. In this study, we developed BiasMedQA, a novel benchmark for evaluating cognitive biases in LLMs applied to medical tasks. Using BiasMedQA we evaluated six LLMs, namely GPT-4, Mixtral-8x7B, GPT-3.5, PaLM-2, Llama 2 70B-chat, and the medically specialized PMC Llama 13B. We tested these models on 1,273 questions from the US Medical Licensing Exam (USMLE) Steps 1, 2, and 3, modified to replicate common clinically relevant cognitive biases. Our analysis revealed varying effects of these biases on the LLMs, with GPT-4 standing out for its resilience to bias, in contrast to Llama 2 70B-chat and PMC Llama 13B, which were disproportionately affected. Our findings highlight the critical need for bias mitigation in the development of medical LLMs, pointing towards safer and more reliable applications in healthcare.

New Paper: “Surgical Gym: A high-performance GPU-based platform for reinforcement learning with surgical robots” led by PhD Candidate Samuel Schmidgall accepted at the 2024 IEEE Intl. Conf. on Robotics and Automation (ICRA 2024)

Preprint link here.

Abstract: Recent advances in robot-assisted surgery have resulted in progressively more precise, efficient, and minimally invasive procedures, sparking a new era of robotic surgical intervention. This enables doctors, in collaborative interaction with robots, to perform traditional or minimally invasive surgeries with improved outcomes through smaller incisions. Recent efforts are working toward making robotic surgery more autonomous, which has the potential to reduce variability of surgical outcomes and reduce complication rates. Deep reinforcement learning methodologies offer scalable solutions for surgical automation, but their effectiveness relies on extensive data acquisition due to the absence of prior knowledge in successfully accomplishing tasks. Due to the intensive nature of simulated data collection, previous works have focused on making existing algorithms more efficient. In this work, we focus on making the simulator more efficient, making training data much more accessible than previously possible. We introduce Surgical Gym, an open-source, high-performance platform for surgical robot learning where both the physics simulation and reinforcement learning occur directly on the GPU. We demonstrate between 100-5000x faster training times compared with previous surgical learning platforms. The code is available at: https://github.com/SamuelSchmidgall/SurgicalGym.

OpenNeuromorphic Talk: “Neuromorphic Intermediate Representation”

In this workshop, we will show you how to move models from your favourite framework directly to neuromorphic hardware with 1-2 lines of code. We will present the technology behind it, the Neuromorphic Intermediate Representation (NIR), and demonstrate how we can use it to run a live spiking convnet on the Speck chip.
Presented by Jens Pedersen, Bernhard Vogginger, Felix Bauer, and Jason Eshraghian. See the recording here.
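For a sense of what the “1-2 lines of code” workflow looks like, here is a hedged sketch. It assumes snnTorch exposes an NIR export helper and that the `nir` Python package provides read/write functions; the exact import paths and signatures may differ between releases, so treat this as an outline rather than the workshop's verbatim code.

```python
import torch
import torch.nn as nn
import snntorch as snn
import nir  # the Neuromorphic Intermediate Representation package

# A small snnTorch network (sizes are placeholders).
net = nn.Sequential(
    nn.Linear(784, 128),
    snn.Leaky(beta=0.9, init_hidden=True),
    nn.Linear(128, 10),
    snn.Leaky(beta=0.9, init_hidden=True, output=True),
)

# Export to an NIR graph, then serialize it to disk.
# NOTE: the helper name below is an assumption; check the snnTorch/NIR docs
# for the exact export API in your installed version.
from snntorch import export_to_nir
sample_input = torch.zeros(1, 784)
nir_graph = export_to_nir(net, sample_input)
nir.write("snn_model.nir", nir_graph)

# A hardware vendor's toolchain (e.g. for the Speck chip) can then load the
# same graph and map it onto the device.
loaded_graph = nir.read("snn_model.nir")
```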

New Paper: “Exploiting deep learning accelerators for neuromorphic workloads” led by Ph.D. Candidate Vincent Sun in collaboration with Graphcore published in Neuromorphic Computing and Engineering

See the full paper here.

Abstract

Spiking neural networks (SNNs) have achieved orders of magnitude improvement in terms of energy consumption and latency when performing inference with deep learning workloads. Error backpropagation is presently regarded as the most effective method for training SNNs, but in a twist of irony, when training on modern graphics processing units (GPUs) this becomes more expensive than non-spiking networks. The emergence of Graphcore’s Intelligence Processing Units (IPUs) balances the parallelized nature of deep learning workloads with the sequential, reusable, and sparsified nature of operations prevalent when training SNNs. IPUs adopt multi-instruction multi-data (MIMD) parallelism by running individual processing threads on smaller data blocks, which is a natural fit for the sequential, non-vectorized steps required to solve spiking neuron dynamical state equations. We present an IPU-optimized release of our custom SNN Python package, snnTorch, which exploits fine-grained parallelism by utilizing low-level, pre-compiled custom operations to accelerate irregular and sparse data access patterns that are characteristic of training SNN workloads. We provide a rigorous performance assessment across a suite of commonly used spiking neuron models, and propose methods to further reduce training run-time via half-precision training. By amortizing the cost of sequential processing into vectorizable population codes, we ultimately demonstrate the potential for integrating domain-specific accelerators with the next generation of neural networks.
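The “sequential, non-vectorized” point above comes from the recurrence in the neuron state itself: each membrane value depends on the previous time step, so the time loop cannot simply be flattened into a single matrix multiply. A minimal leaky integrate-and-fire update (illustrative constants, plain PyTorch rather than the paper's IPU custom ops) makes this explicit.

```python
import torch

beta = 0.9        # membrane decay factor (illustrative)
threshold = 1.0   # firing threshold
T, batch, n = 100, 32, 512

inputs = torch.rand(T, batch, n) * 0.2   # synthetic input currents
mem = torch.zeros(batch, n)              # membrane potential state
spikes = []

for t in range(T):
    # The recurrence: mem at step t depends on mem at step t-1,
    # so the steps must run in order.
    mem = beta * mem + inputs[t]
    spk = (mem >= threshold).float()
    mem = mem - spk * threshold          # reset by subtraction after a spike
    spikes.append(spk)

spikes = torch.stack(spikes)             # [T, batch, n] binary spike tensor
```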

New snnTorch Tutorial: Exoplanet Hunter by Undergraduate Students Ruhai Lin, Aled dela Cruz, and Karina Aguilar

See the tutorial here.

The transit method is a widely used and successful technique for detecting exoplanets. When an exoplanet transits its host star, it causes a temporary reduction in the star’s light flux (brightness). Compared to other techniques, the transit method has discovered the largest number of planets.

Astronomers use telescopes equipped with photometers or spectrophotometers to continuously monitor the brightness of a star over time. Repeated observations of multiple transits allow astronomers to gather more detailed information about the exoplanet, such as its atmosphere and the presence of moons.

Space telescopes like NASA’s Kepler and TESS (Transiting Exoplanet Survey Satellite) have been instrumental in discovering thousands of exoplanets using the transit method. Without the Earth’s atmosphere in the way, there is less interference and more precise measurements are possible. The transit method continues to be a key tool in advancing our understanding of exoplanetary systems. For more information about the transit method, you can visit the NASA Exoplanet Exploration Page.
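To connect this to SNNs: a light curve is a one-dimensional brightness time series, so one natural way to feed it to a spiking network is delta modulation, emitting a spike whenever the flux changes by more than a threshold. The sketch below uses snnTorch's spikegen.delta on a synthetic light curve with an artificial transit dip; the signal and threshold are illustrative choices, not the tutorial's actual preprocessing.

```python
import torch
from snntorch import spikegen

# Synthetic normalized light curve: constant flux with a shallow transit dip.
T = 500
flux = torch.ones(T)
flux[200:240] -= 0.02                      # 2% dip while the planet transits
flux += 0.002 * torch.randn(T)             # photometric noise

# Delta modulation: spike when the flux changes by more than the threshold
# between consecutive samples; off_spike=True also emits negative spikes
# for downward changes (the start of the transit).
spike_train = spikegen.delta(flux, threshold=0.01, padding=True, off_spike=True)

print(spike_train.shape)                            # same length as the light curve
print("spike times:", torch.nonzero(spike_train).squeeze())
```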

New Paper: “To spike or not to spike: A digital hardware perspective on deep learning acceleration” led by Dr. Fabrizio Ottati in IEEE JETCAS

Find the paper on IEEE Xplore here.

Abstract:

As deep learning models scale, they become increasingly competitive across domains spanning from computer vision to natural language processing; however, this happens at the expense of efficiency, since they require increasingly more memory and computing power. The power efficiency of the biological brain outperforms any large-scale deep learning (DL) model; thus, neuromorphic computing tries to mimic the brain operations, such as spike-based information processing, to improve the efficiency of DL models. Despite the benefits of the brain, such as efficient information transmission, dense neuronal interconnects, and the co-location of computation and memory, the available biological substrate has severely constrained the evolution of biological brains. Electronic hardware does not have the same constraints; therefore, while modeling spiking neural networks (SNNs) might uncover one piece of the puzzle, the design of efficient hardware backends for SNNs needs further investigation, potentially taking inspiration from the available work done on the artificial neural networks (ANNs) side. As such, when is it wise to look at the brain while designing new hardware, and when should it be ignored? To answer this question, we quantitatively compare the digital hardware acceleration techniques and platforms of ANNs and SNNs. As a result, we provide the following insights: (i) ANNs currently process static data more efficiently, (ii) applications targeting data produced by neuromorphic sensors, such as event-based cameras and silicon cochleas, need more investigation since the behavior of these sensors might naturally fit the SNN paradigm, and (iii) hybrid approaches combining SNNs and ANNs might lead to the best solutions and should be investigated further at the hardware level, accounting for both efficiency and loss optimization.

New Paper: “Capturing the Pulse: A State-of-the-Art Review on Camera-Based Jugular Vein Assessment” led by Ph.D. Candidate Coen Arrow in Biomedical Optics Express

See the full paper here.

Abstract

Heart failure is associated with a rehospitalisation rate of up to 50% within six months. Elevated central venous pressure may serve as an early warning sign. While invasive procedures are used to measure central venous pressure for guiding treatment in hospital, this becomes impractical upon discharge. A non-invasive estimation technique exists, where the clinician visually inspects the pulsation of the jugular veins in the neck, but it is less reliable due to human limitations. Video and signal processing technologies may offer a high-fidelity alternative. This state-of-the-art review analyses existing literature on camera-based methods for jugular vein assessment. We summarize key design considerations and suggest avenues for future research. Our review highlights the neck as a rich imaging target beyond the jugular veins, capturing comprehensive cardiac signals, and outlines factors affecting signal quality and measurement accuracy. Addressing an often quoted limitation in the field, we also propose minimum reporting standards for future studies.

Invited Talk: Kraw Lecture Series “Bridging the Gap Between Artificial Intelligence and Natural Intelligence” by Prof. Jason Eshraghian

See the recording here.

The Kraw Lecture Series in Silicon Valley is made possible by a generous gift from UC Santa Cruz alumnus George Kraw (Cowell ‘71, history and Russian literature) and Raphael Shannon Kraw. The lecture series features acclaimed UC Santa Cruz scientists and technologists who are grappling with some of the biggest questions of our time.

Abstract: The brain is the perfect place to look for inspiration to develop more efficient neural networks. Indeed, the inner workings of our synapses and neurons offer a glimpse at what the future of deep learning might look like. Our brains are constantly adapting, our neurons processing all that we know, mistakes we’ve made, failed predictions—all working to anticipate what will happen next with incredible speed. Our brains are also amazingly efficient. Training large-scale neural networks can cost more than $10 million in energy expense, yet the human brain does remarkably well on a power budget of 20 watts.

We can apply the computational principles that underpin the brain, and use them to engineer more efficient systems that adapt to ever-changing environments. There is an interplay between neural-inspired algorithms, how they can be deployed on low-power microelectronics, and how the brain provides a blueprint for this process.

New Paper: “Neuromorphic cytometry: implementation on cell counting and size estimation” led by Ziyao Zhang and Omid Kavehei in Neuromorphic Computing and Engineering

See the full paper here.

Abstract

Imaging flow cytometry (FC) is a powerful analytic tool that combines the principles of conventional FC with rich spatial information, allowing more profound insight into single-cell analysis. However, offering such high-resolution, full-frame feedback can restrain processing speed and has become a significant trade-off during development. In addition, the dynamic range (DR) offered by conventional photosensors can only capture limited fluorescence signals, which compromises the detection of high-velocity fluorescent objects. Neuromorphic photo-sensing focuses on the events of interest via individual-firing pixels to reduce data redundancy and latency. With its inherent high DR, this architecture has the potential to drastically elevate the performance in throughput and sensitivity to fluorescent targets. Herein, we presented an early demonstration of neuromorphic cytometry, demonstrating the feasibility of adopting an event-based resolution in describing spatiotemporal feedback on microscale objects and for the first time, including cytometric-like functions in object counting and size estimation to measure 8 µm, 15 µm microparticles and human monocytic cell line (THP-1). Our work has achieved highly consistent outputs with a widely adopted flow cytometer (CytoFLEX) in detecting microparticles. Moreover, the capacity of an event-based photosensor in registering fluorescent signals was evaluated by recording 6 µm Fluorescein isothiocyanate-marked particles in different lighting conditions, revealing superior performance compared to a standard photosensor. Although the current platform cannot deliver multiparametric measurements on cells, future endeavours will include further functionalities and increase the measurement parameters (granularity, cell condition, fluorescence analysis) to enrich cell interpretation.

Brain-Inspired Machine Learning at UCSC: Class Tape-out Success

This quarter, I introduced Brain-Inspired Machine Learning as a course at the University of California, Santa Cruz. And while machine learning is cool and all, it’s only as good as the hardware it runs on.

31 students & first-time chip designers all took the lead on building DRC/LVS-clean neuromorphic circuits. Students came from grad & undergrad backgrounds across various corners of the university: ECE, CSE, Math, Computational Media, Bioengineering, Psychology, etc. Many had never even taken an ECE 101 class, and started learning from scratch 2 weeks ago.

Their designs are now all being manufactured together in the Sky130 Process. Each design is compiled onto the same piece of silicon with TinyTapeout, thanks to Matt Venn and Uri Shaked.

We spent Friday night grinding in my lab while blaring metalcore tunes. All students managed to clear all checks. The final designs do a heap of cool things, from accelerating sparse matrix-multiplies and event denoising to simulating reservoir networks. I naturally had to squeeze in a Hodgkin-Huxley neuron in the 6 hours before the deadline (pictured).

Not sure if it’s the cost of living, or the mountain lions on campus, but damn. UCSC students have some serious grit.

Hodgkin-Huxley Neuron Model GDS Art

Prof. Jason Eshraghian, Prof. Charlotte Frenkel, and Prof. Rajit Manohar Present Tutorial at ESSCIRC/ESSDERC (Lisbon, Portugal)

The tutorial titled “Open-Source Neuromorphic Circuit Design” ran in-person at the IEEE European Solid-State Circuits/Devices Conference, alongside co-presenters Prof. Charlotte Frenkel (TU Delft) and Prof. Rajit Manohar (Yale University). The notebooks from the live demo session have been uploaded to GitHub at this link.

Tutorial Overview: As a bio-inspired alternative to conventional machine-learning accelerators, neuromorphic circuits outline promising energy savings for extreme-edge scenarios. While still being considered as an emerging approach, neuromorphic chip design is now being included in worldwide research roadmaps: the community is growing fast and is currently catalyzed by the development of open-source design tools and platforms. In this tutorial, we will survey the diversity of the open-source neuromorphic chip design landscape, from digital and mixed-signal small-scale proofs-of-concept to large-scale platforms. We will also provide a hands-on overview of the associated design challenges and guidelines, from which we will extract upcoming trends and promising use cases.

Prof. Jason Eshraghian Presenting Invited Talk at the Memristec Summer School, Dresden, Germany

Prof. Jason Eshraghian will present “A Hands-On Approach to Open-Source Memristive Neuromorphic Systems” at the DFG priority programme Memristec Summer School in Dresden, Germany.

This interdisciplinary Summer School on memristive systems will bring students together and enable discussion with experts from the fields of fabrication, characterization, and the theory of memristors.