I think you are conflating AI with anything related to IT, electronics, or software.
The point is that AI developments would allow the creation of lethal autonomous weapons (killer robots, basically). It sounds crazy, but have a look at the following:
- Google has quietly been providing AI technology for a drone strike targeting project
- Thousands of leading AI researchers have signed a pledge against killer robots

If these concerns from the community are not addressed and translated into legislation, you can be assured that the military will find less scrupulous engineers and go ahead. There is growing pressure to legislate, in particular at the UN Convention on Conventional Weapons, for example through the Campaign to Stop Killer Robots.
I'm well aware of weapons of war becoming autonomous, as well as the Google employee protest in response. Thanks to the artificial narrow intelligence in the software running on our hardware, we optimize our computing experience. AI is also being used for art, gaming, architecture, automotive advancements, and science.
Ok, I thought that you were minimising the worrisome applications of AI in the military. Yes, AI has a lot of great and positive outcomes in other domains.
In general, technology itself is amoral; it is society that needs to put legal bounds on its usage. Ideally these limits would emerge from a public democratic debate, which is rarely the case due to moneyed interests.