The NeuRRAM neuromorphic chip was developed by an international team of researchers co-led by UC San Diego engineers. Photo: David Baillot/UC San Diego Jacobs School of Engineering

An international team of researchers has designed and built a chip that runs computations directly in memory and can run a wide variety of AI applications — at a fraction of the energy consumed by computing platforms for general-purpose AI computing.

The NeuRRAM neuromorphic chip brings AI a step closer to running on a broad range of edge devices, disconnected from the cloud, where they can perform sophisticated cognitive tasks anywhere and anytime without relying on a network connection to a centralized server. Applications abound in every corner of the world and every facet of our lives, ranging from smart watches to VR headsets, smart earbuds, smart sensors in factories and rovers for space exploration.

The NeuRRAM chip is not only twice as energy efficient as state-of-the-art "compute-in-memory" chips, an innovative class of hybrid chips that run computations in memory; it also delivers results that are just as accurate as conventional digital chips. Conventional AI platforms are a lot bulkier and typically are constrained to using large data servers operating in the cloud.

In addition, the NeuRRAM chip is highly versatile and supports many different neural network models and architectures. As a result, the chip can be used for many different applications, including image recognition and reconstruction as well as voice recognition.

"The conventional wisdom is that the higher efficiency of compute-in-memory is at the cost of versatility, but our NeuRRAM chip obtains efficiency while not sacrificing versatility," said Weier Wan, the paper's first corresponding author and a recent Ph.D. graduate of Stanford University who worked on the chip while at UC San Diego, where he was co-advised by Gert Cauwenberghs in the Department of Bioengineering.

The research team, co-led by bioengineers at the University of California San Diego, presents their results in the Aug. 17 issue of Nature.

Currently, AI computing is both power hungry and computationally expensive. Most AI applications on edge devices involve moving data from the devices to the cloud, where the AI processes and analyzes it. Then the results are moved back to the device.

By reducing the power consumption needed for AI inference at the edge, this NeuRRAM chip could lead to more robust, smarter and more accessible edge devices and smarter manufacturing. That's because most edge devices are battery-powered and as a result have only a limited amount of power that can be dedicated to computing. It could also lead to better data privacy, as the transfer of data from devices to the cloud comes with increased security risks.

On AI chips, moving data from memory to computing units is one major bottleneck.

"It's the equivalent of doing an eight-hour commute for a two-hour work day," Wan said.

To solve this data transfer issue, researchers used what is known as resistive random-access memory (RRAM), a type of non-volatile memory that allows for computation directly within memory rather than in separate computing units. RRAM and other emerging memory technologies used as synapse arrays for neuromorphic computing were pioneered in the lab of Philip Wong, Wan's advisor at Stanford and a main contributor to this work.
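The compute-in-memory idea behind RRAM crossbars can be illustrated with a small numerical sketch. This is a simplified model added for illustration, not the NeuRRAM design itself: weights are stored as device conductances, input voltages are applied to the rows, and by Ohm's and Kirchhoff's laws the currents summed on each column wire form a matrix-vector product in place, with no weight data shuttled to a separate processor.

```python
import numpy as np

# Illustrative model only: a 4x3 RRAM crossbar storing weights as conductances
# (in siemens). Real devices have limited precision, noise, and non-linearity.
rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(4, 3))   # device conductances, one per crosspoint
v = np.array([0.2, 0.0, 0.1, 0.3])         # input voltages applied to the 4 rows

# Each device passes current I_ij = G_ij * v_i; each column wire sums the
# currents of its devices (Kirchhoff's current law), so the vector of column
# currents equals the matrix-vector product G^T v -- computed where the
# weights are stored, in a single analog step.
column_currents = G.T @ v

# A conventional digital chip would instead fetch every weight from memory and
# accumulate the same dot products in a separate compute unit -- the data
# movement the article's "eight-hour commute" analogy refers to.
print(column_currents)
```

The sketch makes the bottleneck concrete: the analog array performs all multiply-accumulate operations in parallel at the memory cells, whereas the digital baseline pays a memory-transfer cost per weight.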