Face recognition technology is undoubtedly a useful and even necessary development, but for all its merits it can also do harm, infringing on personal freedom and making it possible to track people. To counter this, an algorithm was created that disrupts recognition systems and keeps them from intruding on privacy.
The new system is built on deep learning, which means it can continue to improve over time. The algorithm itself disrupts recognition systems dynamically. To develop the countermeasure, the researchers pitted two AIs against each other: one worked to identify faces, while the other tried to defeat it. The result of this "war" is a special filter (so far only for photos) that alters certain pixels in an image so subtly that the human eye cannot see the difference, yet the face recognition system fails. A rough sketch of this adversarial setup is shown below.
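The following is a minimal, illustrative sketch of the two-network game described above, not the authors' actual implementation. It assumes PyTorch and uses toy stand-in networks and dummy data: a "recognizer" learns to detect faces, while a "disruptor" learns a small, bounded pixel perturbation that makes the recognizer miss them.

```python
# Illustrative sketch only (assumed PyTorch, toy models, dummy data) --
# not the published system. It shows the adversarial idea: two networks
# trained against each other, with the perturbation kept nearly invisible.
import torch
import torch.nn as nn

class Recognizer(nn.Module):
    """Toy face-recognition network: predicts whether a face is present."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1),
        )
    def forward(self, x):
        return self.net(x)

class Disruptor(nn.Module):
    """Produces a small per-pixel perturbation that the eye cannot see."""
    def __init__(self, epsilon=0.03):
        super().__init__()
        self.epsilon = epsilon  # bound on the per-pixel change
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        # Perturbation limited to [-epsilon, epsilon] per pixel.
        return torch.clamp(x + self.epsilon * self.net(x), 0.0, 1.0)

recognizer, disruptor = Recognizer(), Disruptor()
opt_r = torch.optim.Adam(recognizer.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(disruptor.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

# Dummy batch of "face" images in place of a real dataset.
images = torch.rand(8, 3, 64, 64)
face_present = torch.ones(8, 1)

for step in range(100):
    # 1) The recognizer tries to keep detecting faces, even in filtered images.
    perturbed = disruptor(images).detach()
    loss_r = bce(recognizer(images), face_present) + \
             bce(recognizer(perturbed), face_present)
    opt_r.zero_grad(); loss_r.backward(); opt_r.step()

    # 2) The disruptor tries to make the recognizer miss the face.
    perturbed = disruptor(images)
    loss_d = bce(recognizer(perturbed), torch.zeros_like(face_present))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
```

In this toy game the epsilon bound is what keeps the filtered photo visually indistinguishable from the original while still pushing the recognizer toward failure.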
Professor Aarabi tested the invention on photographs of 600 people of different ethnicities and genders. The system was able to reduce the share of recognized faces from 100% to 0.5%. And that is not all: the study revealed a pleasant bonus. The system not only prevents the algorithm from recognizing faces, it also keeps it from reading skin color, ethnicity, sex, facial expression, and so on, turning the photo into "just a picture" for the recognition algorithm.