New Chip Reduces Neural Networks' Power Consumption by 95 Percent


Most recent advances in artificial-intelligence systems, such as speech- and face-recognition programs, have come courtesy of neural networks: densely interconnected meshes of simple information processors that learn to perform tasks by analyzing huge sets of training data.

But neural nets are large, and their computations are energy intensive, so they are not very practical for handheld devices. Most smartphone apps that rely on neural nets simply upload data to internet servers, which process it and send the results back to the phone.


Now, MIT researchers have developed a special-purpose chip that increases the speed of neural-network computations by three to seven times over its predecessors, while reducing power consumption by 94 to 95 percent. That could make it practical to run neural networks locally on smartphones, or even to embed them in household appliances.

"A common model for processing is that there is memory on one side of the chip, and there is a processor on the other side of the chip, and you move the data back and forth between them when performing these calculations," said Avishek Biswas, MIT graduate student in electrical engineering and science. of computers, which was leading the development of the new chip.

"Since these machine-learning algorithms need so many computations, this transferring back and forth of data is the dominant portion of the energy consumption. But the computation these algorithms do can be simplified to one specific operation, called the dot product. Our approach was, can we implement this dot-product functionality inside the memory, so that you don't need to transfer this data back and forth?"
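To see why the dot product matters, here is a minimal sketch (not the authors' code; the sizes and values are purely illustrative): each output of a fully connected neural-network layer is just a dot product between the input activations and one neuron's weight vector, and it is exactly this operation that the chip carries out inside the memory array instead of shuttling data to a separate processor.

```python
import numpy as np

# Illustrative sketch: a fully connected layer reduces to dot products.
rng = np.random.default_rng(0)
x = rng.standard_normal(256)          # input activations
W = rng.standard_normal((64, 256))    # 64 neurons, one weight row each

# One dot product per output neuron; moving W and x between memory and
# processor for every such product is the traffic in-memory computing avoids.
y = W @ x                             # same as [np.dot(w, x) for w in W]
print(y.shape)                        # (64,)
```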

Biswas and his thesis advisor, Anantha Chandrakasan, dean of MIT's School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science, describe the new chip in a paper that Biswas presented this week at the International Solid-State Circuits Conference.

All or nothing


One key to the system is that all the weights are either 1 or -1. That means they can be implemented within the memory itself as simple switches that either close a circuit or leave it open. Recent theoretical work suggests that neural nets trained with only two weight values should lose little accuracy, somewhere between 1 and 2 percent.
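A rough sketch of the idea, assuming simple sign-based binarization (the chip's actual training and circuit details are not described here): real-valued weights are replaced by +1 or -1, so each dot product becomes nothing but additions and subtractions of the inputs, with no multiplications at all.

```python
import numpy as np

# Hedged sketch of binary weights: binarize by sign, so every weight is
# exactly +1 or -1 and can be realized as a switch that closes or opens
# a circuit in the memory array.
rng = np.random.default_rng(1)
x = rng.standard_normal(256)
W = rng.standard_normal((64, 256))

W_bin = np.where(W >= 0, 1.0, -1.0)

# With +/-1 weights the dot product just adds inputs where the weight is
# +1 and subtracts them where it is -1.
y_bin = W_bin @ x
```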

Biswas and Chandrakasan's research bears that prediction out. In experiments, they ran the full implementation of a neural network on a conventional computer and the binary-weight equivalent on their chip. Their chip's results were generally within 2 to 3 percent of the conventional network's.

"This is a promising real-world demonstration of SRAM-based in-memory analog computing for deep-learning applications," says Dario Gil, vice president of artificial intelligence at IBM. "The results show impressive specifications for the energy-efficient implementation of convolution operations with memory arrays. It certainly will open the possibility to employ more complex convolutional neural networks for image and video classification in the internet of things (IoT) in the future."

Thank you for reading ...

Regards,
@Winy
