Apocalyptic fears about intelligent robots are masking real problems we already face, as we allow modern computing systems to manage ever more of our lives.
If some artificial intelligence watchers are right, we are in a race toward what is known as the "technological singularity": a hypothetical point at which artificial intelligence surpasses human intelligence and goes on improving itself beyond our comprehension. But if that happens, an unlikely scenario of course, what will become of us?
Over the past few years, a number of prominent figures, from Stephen Hawking to Elon Musk and Bill Gates, have warned that we should worry more about the potentially dangerous consequences of artificial intelligence.
Some have already put their money into projects they believe matter in this regard. Musk, along with other billionaires, backs OpenAI, a non-profit organization devoted to developing artificial intelligence that serves and benefits humanity as a whole.
But to many people these fears seem exaggerated. "Worrying about killer robots is like worrying about overpopulation on Mars," says Andrew Ng of Stanford University, who is also chief scientist at the Chinese internet giant Baidu.
That does not mean, however, that our growing reliance on artificial intelligence carries no real risk. In fact, the risks are already here.
As smart systems become more involved in decision-making in many areas, from health care to finance to criminal justice, the likelihood grows of situations in which these systems are not properly controlled and scrutinized.
In addition, the large-scale use of artificial intelligence can have far-reaching consequences, such as changing our relationship with doctors if they come to rely entirely on automated diagnosis, or changing the way our neighborhoods are policed.
What exactly does artificial intelligence mean? The term can simply be taken to describe systems that can accomplish things which normally require human intelligence, such as understanding natural human language, recognizing faces in images, driving a car, or recommending books we might like based on similar books we have enjoyed before.
How do artificial intelligence systems help us?
The dominant approach to artificial intelligence today is to "teach" these systems: they are trained on large amounts of data to learn patterns and respond appropriately, whether that means recognizing a face in a crowded image or mastering a complex game such as Go.
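The "learning from data" idea described above can be sketched in miniature. The example below is purely illustrative (the data points, labels, and the nearest-centroid rule are all invented for demonstration): the program is never told what a "cat" or "dog" point looks like; it infers a decision rule from labelled examples.

```python
# Minimal illustration of learning from data: a nearest-centroid
# classifier that learns to separate two labelled groups of points.
# All data here is invented purely for demonstration.

def train(examples):
    """Compute the mean (centroid) of the features for each label."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Assign a new point the label of its nearest centroid."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Toy training data: (feature vector, label).
training = [([1.0, 1.2], "cat"), ([0.8, 1.0], "cat"),
            ([4.0, 4.5], "dog"), ([4.2, 3.9], "dog")]

model = train(training)
print(predict(model, [1.1, 0.9]))  # near the "cat" cluster -> "cat"
print(predict(model, [4.1, 4.2]))  # near the "dog" cluster -> "dog"
```

Real systems use far richer models and vastly more data, but the shape is the same: a training phase that extracts a pattern, then a prediction phase applied to inputs the system has never seen.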
DeepMind, a company owned by Google, is collaborating with the UK's National Health Service on a number of projects, including training systems to help diagnose cancer and eye disease by examining patient scans.
Others are using machine learning to detect the early symptoms of conditions such as heart disease and Alzheimer's. Artificial intelligence is also used to analyze large amounts of data in the search for potential new drugs, work that would take humans vastly longer.
It seems certain that machine learning will soon become indispensable in health care. Artificial intelligence can also help us manage highly complex systems, such as global shipping networks. For example, the system at the heart of the container terminal at Port Botany in Sydney manages the movement of thousands of containers entering and leaving the port.
The system also controls a fleet of driverless container carriers operating in a completely human-free zone. Similarly, in the mining sector, optimization engines are used to plan and coordinate the movement of resources such as iron ore. The ore is hauled by autonomous, driverless trucks and loaded onto trains bound for the port.
Artificial intelligence systems are at work in many other areas, from finance to transportation and communications: monitoring the stock market for suspicious trading, or helping to manage air traffic and traffic on the ground.
They also help keep spam and malicious mail out of your inbox. And all of this is just the beginning. As these techniques evolve, the number of programs and applications that use them will only grow.
Where does the problem lie?
Rather than worrying only about artificial intelligence taking control of our lives in the future, we should recognize that the real risk lies in placing too much trust in the smart systems we build today.
Machine learning works by training a program to detect a particular pattern within a huge collection of data.
Once the machine has learned the task, it is put to work analyzing new data it has never seen before. And when the computer gives us an answer, we typically cannot tell how it reached that particular conclusion.
This creates some obvious problems. A system is only as good as the data it learns from. One real-world case offers an important lesson.
A hospital used an automated system to identify which pneumonia patients were in serious, potentially fatal condition, so that they could be admitted urgently.
But the system produced a counterintuitive result: it classified patients who also had asthma as less likely to die, and therefore as not needing urgent hospital treatment.
The reason was that, in normal practice, pneumonia patients with a known history of asthma are sent straight to intensive care, where they receive treatment that dramatically reduces their risk of death. Seeing only the outcomes, the machine concluded that having asthma alongside pneumonia meant a lower risk of dying.
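The asthma case can be reproduced in miniature. In the hedged sketch below, the patient records are entirely invented: a naive model estimates death risk from raw historical frequencies, and because the asthma patients in the data were routinely treated aggressively and therefore survived, it learns the dangerously wrong rule that asthma lowers risk.

```python
# Invented historical records: (has_asthma, died). Asthma patients were
# sent straight to intensive care, so few of them died -- but the data
# records only the outcome, not the aggressive treatment behind it.
records = [
    (False, True), (False, True), (False, False), (False, False),
    (False, False), (True, False), (True, False), (True, False),
]

def death_rate(records, has_asthma):
    """Observed death frequency for one patient group."""
    outcomes = [died for asthma, died in records if asthma == has_asthma]
    return sum(outcomes) / len(outcomes)

rate_asthma = death_rate(records, True)      # 0 of 3 died -> 0.0
rate_no_asthma = death_rate(records, False)  # 2 of 5 died -> 0.4

# The naive model would deprioritize asthma patients -- exactly the
# flaw found in the real pneumonia system.
print(rate_asthma < rate_no_asthma)  # True
```

The statistics faithfully reflect the data; the data just does not capture the hidden variable (intensive treatment) that produced the good outcomes. That is why auditing what a model has actually learned matters more than its accuracy on historical records.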
Artificial intelligence systems are now used to evaluate many aspects of our lives, from your credit rating to your suitability for a job, and even the likelihood that a convicted criminal will reoffend. The danger is that these systems are sometimes wrong without anyone realizing it, which makes matters worse.
And because much of the data we feed them is incomplete, we should not expect perfect answers or conclusions all the time.
Recognizing this fact is the first step in dealing with such threats: decision-making that relies on artificial intelligence systems must be subjected to more rigorous checks and scrutiny.
Since we build AI systems to be another version of ourselves, they are likely to end up just like us: clever, but with some flaws as well.
Source:
http://bbctrendingnews.blogspot.in/2017/01/the-risk-of-complete-dependence-on.html
http://www.bbc.com/future/story/20161110-the-real-risks-of-artificial-intelligence
One of the biggest problems could be recognizing AI in the first place. If a synthetic entity achieved sentience, whether independently or by accident, what would oblige it to make its presence known?
I found an article that discusses this in detail. Check it out:
https://steemit.com/science/@thecazador/how-do-we-measure-consciousness