Binh presented the accepted paper, “Accelerating Neuromorphic Deep Brain Stimulation Optimization through Knowledge Distillation and Enforced Sparsity,” at IEEE NER 2025 in San Diego, CA.
Abstract:
Closed-loop Deep Brain Stimulation (DBS) systems hold immense promise for treating motor symptoms in Parkinson’s disease (PD) with greater adaptability and efficiency than traditional open-loop approaches. Spiking Neural Networks (SNNs) are particularly well-suited for implementing the control logic in these systems due to their inherent energy efficiency. However, training SNNs, especially using computationally intensive methods like Reinforcement Learning (RL), presents a significant bottleneck, often requiring extensive time and resources. To address this, we introduce a Knowledge Distillation (KD) framework specifically designed to train SNN-based DBS controllers. We leverage a pre-trained, high-performance Deep Spiking Q-Network (DSQN) as a ‘teacher’ to rapidly guide the training of ‘student’ SNNs. Our KD approach incorporates a tunable sparsity-enforcing mechanism, allowing us to generate student networks that exhibit varying degrees of sparse, bioinspired activity. We demonstrate that this KD framework achieves a dramatic reduction in training time compared to the initial RL process. Furthermore, we conduct a comprehensive analysis of the trade-offs between network sparsity, controller performance, and the resulting DBS parameters. Our findings support KD as a powerful and practical methodology for developing efficient, sparse, and biologically plausible SNN controllers, significantly accelerating the design and in silico validation of advanced neuromodulation systems.
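For readers curious how a distillation objective of this kind might look in practice, below is a minimal, hypothetical PyTorch sketch: a surrogate-gradient LIF student whose time-averaged Q-values are regressed onto a frozen teacher’s outputs, plus a spike-activity penalty whose weight `lam` tunes sparsity. Every detail here, including the layer sizes, the LIF dynamics, the surrogate shape, and the names `StudentSNN`, `kd_sparsity_loss`, and `lam`, is an illustrative assumption, not the paper’s implementation.

```python
import torch
import torch.nn as nn


class SpikeFn(torch.autograd.Function):
    """Heaviside spike with a fast-sigmoid surrogate gradient (illustrative choice)."""

    @staticmethod
    def forward(ctx, mem):
        ctx.save_for_backward(mem)
        return (mem > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (mem,) = ctx.saved_tensors
        # Surrogate: derivative of a fast sigmoid, so gradients flow through the spike.
        return grad_output / (1.0 + 10.0 * mem.abs()) ** 2


class StudentSNN(nn.Module):
    """Tiny two-layer LIF student; returns time-averaged Q-values and mean spike activity."""

    def __init__(self, n_obs, n_hidden, n_actions, beta=0.9, n_steps=16):
        super().__init__()
        self.fc1 = nn.Linear(n_obs, n_hidden)
        self.fc2 = nn.Linear(n_hidden, n_actions)
        self.beta, self.n_steps = beta, n_steps

    def forward(self, x):
        mem = torch.zeros(x.shape[0], self.fc1.out_features, device=x.device)
        q_acc = torch.zeros(x.shape[0], self.fc2.out_features, device=x.device)
        spikes = torch.zeros((), device=x.device)
        for _ in range(self.n_steps):            # simulate n_steps timesteps
            mem = self.beta * mem + self.fc1(x)  # leaky integration of input current
            spk = SpikeFn.apply(mem - 1.0)       # fire when membrane crosses threshold 1.0
            mem = mem - spk                      # soft reset by the threshold
            spikes = spikes + spk.sum()
            q_acc = q_acc + self.fc2(spk)        # non-spiking Q-value readout
        # Mean spikes per sample per timestep (summed over hidden units) measures activity.
        return q_acc / self.n_steps, spikes / (self.n_steps * x.shape[0])


def kd_sparsity_loss(student_q, teacher_q, activity, lam):
    """Distillation (match the frozen teacher's Q-values) plus a tunable activity penalty."""
    return nn.functional.mse_loss(student_q, teacher_q) + lam * activity


# Toy usage: random tensors stand in for DBS observations and frozen-DSQN Q-values.
obs = torch.randn(32, 8)
teacher_q = torch.randn(32, 4)
student = StudentSNN(n_obs=8, n_hidden=64, n_actions=4)
q, activity = student(obs)
loss = kd_sparsity_loss(q, teacher_q, activity, lam=1e-3)
loss.backward()
```

In this toy formulation, increasing `lam` trades fidelity to the teacher’s Q-values for sparser spiking activity, mirroring the sparsity/performance trade-off the abstract analyzes.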

