This MIT study highlights future developments in advanced computer hardware: based on recent discoveries, these designs will be faster and more energy-efficient.

New hardware offers faster computation for artificial intelligence, with much less energy | MIT News | Massachusetts Institute of Technology

Engineers working on “analog deep learning” have found a way to propel protons through solids at unprecedented speeds.

Programmable resistors are the key building blocks in analog deep learning, just like transistors are the core elements for digital processors. By repeating arrays of programmable resistors in complex layers, researchers can create a network of analog artificial “neurons” and “synapses” that execute computations just like a digital neural network. This network can then be trained to achieve complex AI tasks like image recognition and natural language processing.
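To make the analogy concrete, here is a minimal NumPy sketch of how a crossbar of programmable resistors can carry out one layer of a neural network. This is an illustrative idealization, not the MIT team's device: the conductance values, sizes, and the `tanh` "neuron" are all assumptions for the example.

```python
import numpy as np

# Idealized analog crossbar: each programmable resistor's conductance
# G[i, j] acts as a synaptic weight. Applying input voltages to the
# columns produces, by Ohm's law, currents G[i, j] * v[j]; Kirchhoff's
# current law sums them along each row. The result is a matrix-vector
# product -- the core computation of one neural-network layer, done
# in a single parallel analog step rather than many digital ones.
rng = np.random.default_rng(0)

n_inputs, n_outputs = 4, 3
conductances = rng.uniform(0.0, 1.0, size=(n_outputs, n_inputs))  # programmed weights (siemens)
voltages = rng.uniform(-1.0, 1.0, size=n_inputs)                  # input activations (volts)

# Summed output currents: one multiply-accumulate per resistor, all at once
currents = conductances @ voltages

# A nonlinearity plays the role of the artificial "neuron"
activations = np.tanh(currents)
print(activations.shape)  # prints (3,)
```

Training such a network means reprogramming the conductances, which is why the speed at which each resistor can be modulated (the subject of this article) matters so much.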

A multidisciplinary team of MIT researchers set out to push the speed limits of a type of human-made analog synapse they had previously developed. By using a practical inorganic material in the fabrication process, they built devices that run 1 million times faster than their previous versions, which is also about 1 million times faster than the synapses in the human brain.