Are Self Driving Cars Road Ready or at a Roadblock?

in technology •  6 years ago  (edited)


Skeptics say that full autonomy could be further away than the industry admits, and fully autonomous cars may be far more than a few months away. Tesla was supposed to have a fully autonomous car by 2018. Google had similar expectations for 2019, the same year that self driving taxis are slated to hit the road in Singapore (talk about risky). GM also plans a fully autonomous car in 2019 with no steering wheel at all, preventing drivers from intervening (another risky situation).

Cars are already being tested in Arizona. In testing there have been a few crashes, some of them deadly, but with the rapid advancement of technology there are hopes of having a safe self driving car on the market by next year. None of this is a guarantee, however, and more testing is being done.

The dream of a fully autonomous car may not be as close as we think. There are growing concerns among artificial intelligence (AI) experts, who indicate that it may be years or even decades before self driving systems are reliable and safe. Experts like Gary Marcus are preparing for a painful outcome, referred to as an "AI winter." Such a delay could have huge consequences for companies that want to invest in self driving technology.

Self Driving Cars Are a Tricky Experiment and No One Knows the Outcome

A lot of car companies have been optimistic about autonomy. Over the years, AI has accomplished the unthinkable in the tech industry through deep learning, a method that uses machine learning algorithms to extract structured information from large data sets. It powers Google search, Facebook news feeds, speech-to-text systems, and Go playing programs. Deep learning is also used to detect earthquakes, predict heart disease, and flag suspicious activity on camera feeds, along with many other innovations. With all that being said, deep learning requires a lot of training data to operate properly.
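To make the "requires a lot of training" point concrete, here is a toy, invented sketch (not from any real self driving system): a single linear neuron fitted by gradient descent, which only separates its two classes after many passes over labeled examples.

```python
# Toy sketch of the deep learning "training" step: a single linear
# neuron, score = w * x + b, fitted by gradient descent. All data,
# learning rates, and labels are invented for illustration.

w, b = 0.0, 0.0
# Eleven labeled examples: inputs above 0.5 belong to class "1".
data = [(x / 10.0, 1.0 if x > 5 else 0.0) for x in range(11)]

for epoch in range(2000):          # many passes over the data
    for x, y in data:
        pred = w * x + b
        err = pred - y
        w -= 0.1 * err * x          # gradient step for the weight
        b -= 0.1 * err              # gradient step for the bias

# After enough training, the neuron scores "1" examples high
# and "0" examples low.
print(w * 0.9 + b)   # a "1" example: score should be high
print(w * 0.2 + b)   # a "0" example: score should be low
```

Even this one-parameter toy needs thousands of updates to settle; real deep networks have millions of parameters and need correspondingly more data.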

Researchers think they can improve generalization with the right algorithms, but recent research has shown that deep learning is worse at generalizing than assumed. One study showed that conventional deep learning systems have a difficult time generalizing across different frames of a video, labeling a bear as a baboon or a weasel depending on minor shifts in the background. Small changes to a picture can completely change the system's judgment, something several researchers exploit in adversarial data sets.
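To see how a "small change" can flip a confident prediction, here is a deliberately simplified, hypothetical sketch in the spirit of the fast gradient sign method: for a linear model the input gradient is just the weight vector, so nudging each feature slightly in the right direction collapses the model's confidence. All weights, inputs, and labels are invented.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# A fixed "trained" linear classifier: P("bear") = sigmoid(w . x + b).
# Weights and inputs are made up; real attacks target deep networks
# in the same spirit.
w = [2.0, -3.0, 1.0]
b = 0.1

def predict(x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

x = [0.5, -0.2, 0.3]       # original input, confidently labeled "bear"
eps = 0.4                  # small per-feature perturbation budget

# For a linear model, the gradient of the score w.r.t. x is w itself,
# so stepping each feature against sign(w) lowers the score fastest.
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, w)]

print(predict(x))      # confident "bear" before the perturbation
print(predict(x_adv))  # nearly the same input is no longer "bear"
```

The perturbation is tiny per feature, yet the prediction crosses the decision boundary, which is exactly the fragility the video-frame study exposed.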

The same underlying AI technology has been used to build chat bots, and many companies have lost faith in their chat bot projects; very few remain in active development. Many chat bots are simply not up to standard: beyond collecting data, a chat bot should be able to respond to what someone is saying, drawing on broader conversational skills to produce a unique response to each person. Deep learning unfortunately cannot produce that kind of chat bot. This leaves a scary question: will self driving cars keep improving like other AI successes, or will they run into the same generalization problem that chat bots have? How unpredictable will the driving be? (It seems extremely unpredictable at this point.)

Despite the unpredictability of this technology, it is still very early, perhaps too early, to know for sure. Self driving cars are a scientific experiment, and the answer is unknown. Gary Marcus worries that driving in accident prone scenarios may be more complicated than the industry will admit. With surprising new situations happening all the time, that would not bode well for deep learning (and no one can afford any more accidents either).

Safety is Key

In a fatal 2016 crash, a Model S drove at full speed into the rear of a tractor-trailer; the sensors were confused by the height of the trailer and the bright reflection of the sun. That is a cause for concern that can't be ignored. In March, a self driving Uber vehicle struck and killed a woman pushing a bicycle as she crossed outside a crosswalk. According to the National Transportation Safety Board (NTSB) report, Uber's software misidentified the woman as an unknown object, then a vehicle, then a bicycle, but never as a human. In another crash, in California, a Model X steered toward a barrier and sped up moments before the collision; the reason is still unknown. Every accident that has occurred appears to stem from some kind of unexpected circumstance.

This is Not an Isolated Problem

With all the unpredictable circumstances that have occurred, companies are looking at alternatives. Many have moved to an older AI technique called "rule based AI," which lets engineers hard-code specific behaviors or logic into a self-directed system. This would help companies avoid some of deep learning's limitations, though it is hard to predict how successful engineers will be at preventing certain errors. A study by the RAND Corporation estimates that self driving cars would need to drive at least 275 million miles without a fatality to prove they are as safe as human drivers (and humans are not the safest drivers themselves). Improving the accident rate may be harder than it looks and will take a lot more testing. These techniques are being applied in hopes that they will work in the near future, but there is no proof that the technology will reach the precision necessary to succeed.
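"Rule based AI" in this sense is just explicit, engineer-written logic. Here is a hypothetical sketch; the function name, thresholds, and actions are all invented for illustration:

```python
# Hypothetical "rule based AI" for driving decisions: behavior comes
# from explicit hand-written rules rather than learned weights.
# Every name and threshold here is made up.

def choose_action(obstacle_distance_m, speed_kmh, light):
    """Pick a driving action from engineer-written rules, checked in order."""
    if light == "red":
        return "brake"
    if obstacle_distance_m < 10:
        return "emergency_brake"
    if obstacle_distance_m < 30 and speed_kmh > 50:
        return "slow_down"
    return "maintain_speed"

print(choose_action(5, 40, "green"))    # emergency_brake
print(choose_action(100, 60, "red"))    # brake
print(choose_action(25, 60, "green"))   # slow_down
print(choose_action(100, 60, "green"))  # maintain_speed
```

The appeal is predictability: every behavior is traceable to a specific rule. The drawback, as noted above, is that engineers must anticipate every circumstance in advance, which is exactly what the unexpected crash scenarios call into question.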

 


Deep learning is only applicable to systems that almost never change. In the paper "Deep Learning: A Critical Appraisal", Marcus describes how deep learning was able to learn how to play Breakout to an acceptable degree. But when a small wall was added to the middle of the screen, the deep learning AI completely broke down. Now imagine this same type of AI being employed on real life roads, in which the environment changes often and unexpectedly.

Deep learning AI is flat out unacceptable for driving. "Unsafe at any speed" well describes it.

Hey sumireko,

It is true. It appears that self driving cars are nowhere near ready to be used any time soon. At this time it would be a great danger to trust such technology. AI still has a long way to go and has many glitches. I agree that using AI would be unacceptable for driving.
