Researchers at the Indian Institute of Science (IISc) have warned that the machine-learning and artificial-intelligence algorithms used in sophisticated applications such as autonomous vehicles are not foolproof and can easily be tampered with by introducing errors into their inputs.
Machine-learning and AI software is trained on an initial set of data, such as images of cats, and learns to identify feline images because it has been fed such data. A common example is Google search, which returns better results as more people search for the same information.
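The train-then-predict pattern described above can be sketched with a toy example. This is purely illustrative and not the IISc system: a nearest-centroid "classifier" over made-up two-number feature vectors (hypothetical features standing in for image data), written in plain Python.

```python
# Toy illustration of learning from labelled data: the model averages the
# feature vectors seen for each label, then predicts by nearest centroid.
# Features here are invented (e.g. [ear_pointiness, whisker_length]).

def train(examples):
    """Compute one centroid (mean feature vector) per label."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Return the label whose centroid is closest (squared distance)."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(centroids, key=lambda label: dist(centroids[label]))

training_data = [
    ([0.9, 0.8], "cat"), ([0.8, 0.9], "cat"),
    ([0.2, 0.1], "dog"), ([0.1, 0.2], "dog"),
]
model = train(training_data)
print(predict(model, [0.85, 0.75]))  # -> cat
```

The point of the sketch is the dependency the article highlights: the model's answers are entirely determined by the data it was fed, so corrupted or misleading inputs directly corrupt its outputs.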
AI applications are becoming mainstream in healthcare, payment processing, drones deployed to monitor crowds, and face recognition in offices and airports.
“If your input data is not clean and unambiguous, the AI engine can throw up surprising results, and that can be dangerous. In autonomous driving, the AI engine should be properly trained on all road signs. If an input sign is different, it can change the course of the vehicle, which can lead to disaster,” R Venkatesh Babu, Associate Professor at IISc’s Department of Computational and Data Sciences, told ET. He added that such systems also require adequate cybersecurity measures to stop hackers from infiltrating them and altering their inputs. In a paper published in the prestigious IEEE Transactions on Pattern Analysis and Machine Intelligence, Babu and his students Konda Reddy Mopuri and Aditya Ganesan showed how errors can be introduced into machine-learning algorithms so that, for example, an image of an African chameleon is misclassified as a missile, or a banana as a custard apple.
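The kind of input tampering described above can be sketched in miniature. This is a hedged illustration, not the paper's actual method: a small, deliberately chosen perturbation against a hand-picked linear classifier, nudging each feature against the sign of its weight (the fastest way to lower a linear score) so the prediction flips even though the input changes only slightly.

```python
# Adversarial-style perturbation against a toy linear classifier.
# All numbers are invented for illustration.

def score(weights, bias, x):
    """Linear decision score: positive -> class A, negative -> class B."""
    return sum(w * xi for w, xi in zip(weights, x)) + bias

# A hand-picked linear model and an input it classifies as class A.
w, b = [1.0, -0.5], 0.0
x = [0.6, 0.4]

# Move each feature by eps against the sign of its weight; for a linear
# model this is the steepest direction for pushing the score down.
eps = 0.5
x_adv = [xi - eps * (1 if wi > 0 else -1) for xi, wi in zip(x, w)]

print(score(w, b, x))      # positive: original prediction is class A
print(score(w, b, x_adv))  # negative: the small nudge flips the prediction
```

Real attacks on deep networks use the same idea at scale: gradients of the model tell the attacker which tiny pixel changes most efficiently push an image across a decision boundary.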
The team has shared its algorithm on an open-source platform so that others can work on it and improve the software.
Read the full post on technewspaper.in:
https://technewspaper.in/indian-institute-of-science-researchers-may-have-warning-for-google-facebook-and-other-tech-companies-using-ai/