23 November 2022

The planet is full of learning machines and, consequently, of minds. All animals equipped with a brain are, among other things, physically constrained learning machines, optimised by evolution and experience to exploit limited thermodynamic resources. What physical principles underlie this? How might they be used to make artificial learning agents?

We investigated the link between the stochastic thermodynamics of elementary learning machines and the information-theoretic notion of average learning error. We showed that optimal learners minimise dissipated power as they minimise their error rate. This holds for both classical (thermal) and quantum learning machines. As far as we are aware, there are no naturally occurring learning machines based on quantum effects—surely a contingent feature of our particular place in the cosmos.

The biggest challenge we had to overcome was to reformulate the relations of stochastic thermodynamics, the classical Crooks and Jarzynski equalities, so that they extend to the fully quantum regime where the temperature is very low. With these quantum generalisations in hand, we defined a class of quantum learning machines driven by quantum noise, such as quantum tunnelling, rather than by the thermal noise typical of electrochemical systems.
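For context (these are the standard classical forms, not the quantum generalisations derived in the paper), the Jarzynski equality and the Crooks fluctuation theorem can be written with $\beta = 1/k_B T$ the inverse temperature, $W$ the work performed during a driven process, and $\Delta F$ the equilibrium free-energy difference:

```latex
% Jarzynski equality: an average over nonequilibrium work
% realisations yields the equilibrium free-energy difference.
\left\langle e^{-\beta W} \right\rangle = e^{-\beta \Delta F}

% Crooks fluctuation theorem: the ratio of forward and
% time-reversed work distributions is fixed by W - \Delta F.
\frac{P_F(W)}{P_R(-W)} = e^{\beta (W - \Delta F)}
```

The Jarzynski equality follows from the Crooks relation by integrating over $W$, which is why the two are usually generalised together.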

The discovery of the link between learning optimisation and optimal power consumption motivated a further question: if thermodynamic principles drive the emergence of learning machines, then why are artificial machine learning algorithms so energy inefficient?

Our work suggests that this inefficiency is a result of the history of machine learning and the wide availability of inexpensive CMOS processors. Deep learning has its roots in attempts to numerically simulate a 1960s-level understanding of mammalian nervous systems. It has come of age recently as a result of the rapid growth of training data and of fast chips on which to run simple models of neural networks.

By shifting our focus from algorithms to physical learning machines, we should be able to achieve substantial improvements in energy dissipation.

Quantum switches dissipate less power than their classical counterparts. We devised quantum learning machines based on networks of quantum switches that promise very energy-efficient quantum learning agents, fundamentally different from unitary, gate-based coherent quantum computing.

Spiking neural nets have greater learning capability than their non-spiking counterparts. We proposed quantum spiking neural nets based on quantum nanomechanical resonators.

A surprising consequence of these models is that all spiking learning models may be reducible to networks of coupled clocks.

Our future work will develop practical spiking neural net devices for optical implementation.

Our predictions may be implemented in several EQUS technologies, including superconducting circuits, optomechanics, ion traps and single photonics.

Read the full paper here: doi.org/10.1080/00107514.2022.2135672


This story was first published in the 2022 EQUS annual report.
