State-of-the-art deep learning has a carbon emission problem. Can neuromorphic engineering help?
Abstract
Deep learning has attracted considerable attention from both academia and industry, mainly due to its success on large datasets and its ability to improve performance by scaling up model size. However, the current trends in training state-of-the-art deep learning models are worrisome. Recent data show that training cutting-edge models is vastly energy inefficient and poses a threat to the democratisation of this technology: in the near future, the resources required to train such a model may be accessible only to a few large corporations. Moreover, executing trained state-of-the-art deep learning models on resource-constrained mobile devices is currently not feasible because of their large computational, memory and energy requirements. Neuromorphic engineering is a relatively recent interdisciplinary research field that attempts to simulate neurons and synapses directly in hardware, at a level that is closer to biology. The advantage of this approach is that, because neurons are simulated asynchronously, the overall energy consumption is very low: neurons that do not participate in a computation consume nearly zero energy. While a method to train neural networks directly on neuromorphic devices has yet to be discovered, it has already been demonstrated that executing trained neural networks on neuromorphic platforms yields large energy savings and lower prediction latencies.