Analog Neural Networks: Nature Communications Publishes ECE Researchers’ Blueprint for Precision

by A.J. Kleber

Analog computer architectures, with historical origins reaching as far back as Ancient Greece, have a critical role to play in the development and deployment of highly advanced, energy-efficient deep learning networks.

While combining analog computation with machine learning may seem challenging and counterintuitive at first, it’s a growing area of research as engineers confront the high power costs of traditional digital architectures when operations are scaled up to the complexity and density required for artificial intelligence, not to mention cutting-edge deep learning methods. An analog approach offers a lower-power alternative … if you can just get around analog hardware’s pesky tendencies toward limited numeric precision, errors, and noise.

In a new paper published in Nature Communications, first-authored by recent alum Cansu Demirkiran (ECE PhD’24) alongside advisor Professor Ajay Joshi and industry collaborators, the researchers lay out, as the title puts it, a “blueprint for precise and fault-tolerant analog neural networks.” Using an efficient mathematical approach called the residue number system (RNS), Demirkiran and her colleagues have developed a method of composing high-precision operations from multiple low-precision analog operations, eliminating the need for energy-inefficient high-precision data converters. Their work targets the efficient acceleration of deep neural networks (DNNs) without compromising accuracy. The authors also leverage mathematical redundancy to counter analog hardware’s inherent vulnerability to noise. The paper lays out a foundation for achieving very high precision in both DNN training and inference. The team asserts that their results have the potential to achieve energy efficiency “several orders of magnitude higher” than what is currently possible with more conventional analog hardware using high-precision data converters, while ensuring high-precision computing.
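The core RNS idea can be sketched in a few lines of Python. The sketch below is illustrative only: the moduli, function names, and the dot-product workload are assumptions for the example, not details taken from the paper. A value is split into small residues modulo pairwise-coprime moduli, arithmetic proceeds independently in each low-precision channel, and the full-precision result is recovered with the Chinese Remainder Theorem.

```python
from math import prod

# Hypothetical pairwise-coprime moduli; a real design would pick moduli
# to match the hardware's native bit-width and dynamic range.
MODULI = [251, 253, 255]  # dynamic range = 251 * 253 * 255

def to_rns(x, moduli=MODULI):
    """Split an integer into one low-precision residue per modulus."""
    return [x % m for m in moduli]

def from_rns(residues, moduli=MODULI):
    """Reconstruct the integer via the Chinese Remainder Theorem."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        # pow(Mi, -1, m) is the modular inverse of Mi modulo m (Python 3.8+)
        x += r * Mi * pow(Mi, -1, m)
    return x % M

def rns_dot(a_vec, b_vec, moduli=MODULI):
    """Dot product computed independently in each low-precision channel,
    then recombined into a single high-precision result."""
    residues = [
        sum((a % m) * (b % m) for a, b in zip(a_vec, b_vec)) % m
        for m in moduli
    ]
    return from_rns(residues, moduli)

a = [12, 7, 3]
b = [5, 9, 11]
print(rns_dot(a, b))  # matches the exact dot product: 12*5 + 7*9 + 3*11 = 156
```

No single channel ever handles more than an 8-bit-scale value, yet the recombined answer is exact, which is the property that lets low-precision analog units stand in for high-precision arithmetic.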

Cansu Demirkiran successfully defended her doctoral dissertation in early August of this year. She is a member of Professor Joshi’s Integrated Circuits, Architectures and Systems Group, and was selected as one of the 2024 MLCommons Rising Stars.

Professor Ajay Joshi is a member of the BU ECE faculty, a Hariri Institute Faculty Research Fellow and Affiliate, and an affiliate of the Photonics Center. His honors include two Google Faculty Research Awards (2019 & 2018), Best Paper Awards at ASIACCS (2018) and HOST (2023), and an NSF CAREER Award (2012); he is also the recipient of a 2024 BU Technology Development Ignition Award.