Researchers combine light sensor and neural network


Shared From Dlike

Researchers at the Technical University of Vienna implemented both a photosensor and a neural network in one electrical circuit, greatly accelerating image processing.

Machine vision today typically pairs a light sensor, usually a CCD circuit, with a computer running machine-intelligence software to which the sensor's signal is sent for processing. This approach works well enough in today's larger systems, such as prototype self-driving cars that process over a hundred frames per second. But extending machine vision to smaller devices requires a significant reduction in both weight and power consumption, and robots operating around humans will also need to process signals faster.

The new chip, which combines image capture with coarse preprocessing, is conceptually simple: the light-sensing photodiodes double as the nodes of a neural network. After training, the system recognized the letters "n", "v" and "z" before the signal ever left the chip, at a theoretical rate of up to twenty million frames per second. Moreover, the sensor itself required no external power.

However, it should be noted that this is still a very primitive device: it contains only 27 sensors and can process 3 × 3 pixel images. Practical applications remain a long way off.
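The numbers above fit together neatly: 27 sensors dividing into 9 pixels and 3 output classes is exactly the weight count of a single-layer classifier mapping a 3 × 3 image to three letters. As a software analogy (on the chip, the "weights" are the tunable responsivities of the photodiodes themselves, and the letter patterns below are purely illustrative, not the stimuli actually used in the experiment), the task can be sketched like this:

```python
import numpy as np

# Hypothetical 3x3 binary stimuli for "n", "v", "z" (illustrative shapes only).
letters = {
    "n": np.array([[1, 1, 1], [1, 0, 1], [1, 0, 1]]),
    "v": np.array([[1, 0, 1], [1, 0, 1], [0, 1, 0]]),
    "z": np.array([[1, 1, 1], [0, 1, 0], [1, 1, 1]]),
}

X = np.stack([img.ravel() for img in letters.values()]).astype(float)  # (3, 9)

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(3, 9))  # 3 classes x 9 pixels = 27 weights

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Plain gradient descent on cross-entropy; in the real device, training
# adjusts each photodiode's responsivity rather than a weight in memory.
for _ in range(500):
    probs = softmax(X @ W.T)                     # (3 samples, 3 classes)
    grad = (probs - np.eye(3)).T @ X / len(X)    # dL/dW for one-hot targets
    W -= 1.0 * grad

pred = (X @ W.T).argmax(axis=1)
print(pred)  # class indices for "n", "v", "z"
```

Three distinct binary vectors in nine dimensions are trivially linearly separable, so even this minimal network classifies them perfectly; the novelty of the chip is not the network, but that the computation happens in the sensor itself.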


