New Paper: “Hardware Implementation of Deep Network Accelerators Towards Healthcare and Biomedical Applications” Published in IEEE Transactions on Biomedical Circuits and Systems

We published our perspective on integrating deep network accelerators into clinical and outpatient workflows in IEEE Transactions on Biomedical Circuits and Systems.

Neural networks are extremely resource-intensive. Training GPT-3, for example, is estimated to have consumed on the order of a thousand megawatt-hours of electricity. For continuous, portable monitoring on battery-powered devices, that kind of energy budget simply isn’t an option.
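To put that gap in perspective, here is a rough back-of-envelope comparison. The figures are illustrative assumptions for scale (not numbers from the paper): roughly 1,300 MWh for the GPT-3 training run versus a ~1 Wh battery on a small wearable sensor.

```python
# Back-of-envelope comparison; all figures are illustrative assumptions.
GPT3_TRAINING_ENERGY_WH = 1.3e9   # ~1,300 MWh, a commonly cited estimate
WEARABLE_BATTERY_WH = 1.0         # small lithium cell on a wearable sensor

ratio = GPT3_TRAINING_ENERGY_WH / WEARABLE_BATTERY_WH
print(f"Training-scale energy is ~{ratio:.0e}x a wearable's battery budget")
# -> Training-scale energy is ~1e+09x a wearable's battery budget
```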

This paper examines the hardware constraints that currently limit the use of deep learning in healthcare, and assesses how neuromorphic computing, spiking neural networks, and in-memory computing could help alleviate them.
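As a flavour of why spiking approaches can be so energy-frugal, here is a minimal, illustrative sketch (not code from the paper) of a leaky integrate-and-fire neuron, the basic unit of a spiking neural network. Computation is event-driven: downstream work only happens when a binary spike occurs, rather than on every dense multiply-accumulate.

```python
# Minimal illustrative sketch of a leaky integrate-and-fire (LIF) neuron.
import numpy as np

def lif_neuron(input_current, beta=0.9, threshold=1.0):
    """Simulate one LIF neuron over a sequence of input currents."""
    v = 0.0                      # membrane potential
    spikes = []
    for i_t in input_current:
        v = beta * v + i_t       # leaky integration of the input
        if v >= threshold:       # fire when the threshold is crossed...
            spikes.append(1)
            v = 0.0              # ...and reset the membrane potential
        else:
            spikes.append(0)
    return spikes

# Example: a constant weak input produces only sparse output spikes.
print(lif_neuron(np.full(20, 0.3)))
```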

This work is a cross-continental collaboration between Australia, the USA, and Europe, with Mostafa Rahimi Azghadi, Corey Lammie, Melika Payvand, Elisa Donati, Bernabé Linares-Barranco, and Giacomo Indiveri.

Read more here.