
Tuesday, February 4, 2020

How Do Self-Driving Cars Work and What Problems Remain?

Are you ready for your car to become a self-driving chauffeur? 

Progress in the field of self-driving cars has been enormous over the last decade. Waymo and Uber, both top contenders in the race for an autonomous driving future, weren’t even incorporated before 2009. 

Between 2015 and 2019, Tesla’s autopilot logged more than 1 billion miles of total use. And between January 2019 and January 2020, that total was expected to more than double, surpassing 2.3 billion miles of use.

Even with all this progress, accidents and deaths involving self-driving cars still pose a very real threat. In this article, we cover the ins and outs of the autonomous vehicle industry, the technology driving the progress, and the problems that threaten public safety as the technology is rolled out (all puns intended).

What is a self-driving car? 

A self-driving car, also known as an autonomous vehicle, is a connected car that relies on a combination of hardware, software, and machine learning to navigate weather, obstacles, and road conditions using real-time sensory data.

People commonly associate self-driving cars with artificial intelligence, but many cars today have achieved multiple levels of autonomy without artificial intelligence. Features such as brake assist, lane assist, and adaptive cruise control, for example, can be considered autonomous driving to some degree. 

Self-driving cars do not need cutting-edge artificial intelligence to move the industry forward, though the level of autonomy a car reaches depends on the sophistication of the deep learning models used to control it. In theory, there are 5 levels of autonomy that define a self-driving car.

The 5 levels of autonomous vehicles

These 5 levels of autonomous vehicles were outlined by SAE International in 2014 to give the industry a common point of reference. Each level is defined by how much of the driving is automated and how much human involvement is required.

Level 0

Okay, so there are technically 6 levels of self-driving cars, starting with absolutely no automation. 

In this level, humans control every aspect of driving: acceleration, gear shifting, steering, braking, and navigation. An example of a Level 0 vehicle is the Ford Model T, because it has no features that reduce the car’s reliance on humans, not even cruise control or automatic windows.

In Level 0, the human is responsible for executing maneuvers, monitoring the environment, and providing fallback performance in the event of a problem with the car (a flat tire, loss of brakes, etc.). There is no aspect of automation at this level.

Level 1

The first step towards self-driving cars is basic driver assistance. Your car may actually fall within this spectrum of self-driving if it has lane assist, brake assist, or cruise control. 

A feature as small as side-mirror indicator lights that alert the driver when a car is in the next lane can be considered Level 1 driver assistance. Other common driver-assistance features include a steering wheel that vibrates during an unsignaled lane departure and automatic parallel parking.

In Level 1, there are some aspects of automation in the execution of driving functions such as steering, accelerating, and decelerating.

Level 2

The next rung on the self-driving ladder is Level 2 autonomy, which is actually a big step up from Level 1.

In Level 2, the automated system takes control of the functional aspects of driving, such as steering, acceleration, and deceleration. The human driver, however, is still responsible for monitoring the driving environment.

Examples of cars currently at Level 2 autonomy include Tesla vehicles with autopilot enabled and Nissan vehicles with ProPilot Assist.

Level 3

Level 3 autonomy is when self-driving cars cross the chasm into conditionally monitoring the driving environment. The conditional caveat is that a human driver is still the fallback whenever the system cannot handle the dynamic driving task.

If a car with Level 3 autonomy cannot adequately navigate an obstacle in the road or dangerous weather conditions, it will require the human driver to intervene. 

Uber’s self-driving car is an example of Level 3 because while the car controls most of the navigation, the human is still needed for edge-case scenarios the system has not been trained on. 

Level 4

This is currently the highest level attained by the autonomous vehicle industry. Level 4 is defined as high automation: the self-driving system is responsible for all execution, monitoring, and fallback, but is not 100% effective in all driving modes.

This means that the car will not understand how to perform in extremely rare scenarios that the models have not been trained to recognize. 

Waymo, the autonomous vehicle company spun out of Google, is currently at Level 4 autonomy. Its cars are testing self-driving ridesharing in the Phoenix metro area without human drivers. But there are still rare cases where the self-driving car ends up in a situation that extends beyond the model’s understanding and ability to avoid an accident.

Level 5

Level 5 is the goal of self-driving characterized by full automation. 

Full automation means a human never has to intervene and the car can adequately handle every road (or off-road), weather, obstacle, or other condition it faces. A Level 5 world would work best as a network of exclusively Level 5 autonomous vehicles; once human error enters the mix, the system is vulnerable to failure.

Since training machine learning models is essential to handling Level 5 driverless scenarios, some believe whoever has the most data will have the most autonomy. George Hotz, the founder of self-driving startup Comma.ai, believes Tesla will be the first to reach Level 5 autonomy based entirely on the amount of data it collects.

Technology inside self-driving vehicles

While the body of a self-driving car isn’t a reinvention, companies creating self-driving technologies have had to reinvent the way the car interfaces with the world around it. A combination of hardware, software, and machine learning is needed to deliver the capabilities and redundancy of a self-driving car at Level 3 and above.

[Image: animated depiction and descriptions of self-driving car hardware]

Radar 

Radar, or Radio Detection and Ranging, is what self-driving cars use to supplement higher resolution sensors when visibility is low, such as in a storm or at night. 

Radar works by continuously emitting radio waves that reflect back to the source, providing information on the distance, direction, and speed of objects. Although Radar is accurate in all visibility conditions and is relatively inexpensive, it provides only coarse detail about the objects it detects.
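
To make that concrete, here is a minimal Python sketch of the time-of-flight and Doppler arithmetic behind radar ranging. The 77 GHz carrier is the standard automotive radar band, but the echo delay and frequency shift are made-up illustrative measurements, not readings from any particular vehicle.

```python
# Illustrative radar arithmetic: range from echo delay, speed from Doppler shift.

C = 3.0e8  # speed of light, m/s

def range_from_delay(echo_delay_s: float) -> float:
    """Distance to a target from the round-trip time of a radio pulse."""
    return C * echo_delay_s / 2  # halve it: the wave travels out and back

def speed_from_doppler(freq_shift_hz: float, carrier_hz: float) -> float:
    """Radial speed of a target from the Doppler shift of the reflection."""
    return C * freq_shift_hz / (2 * carrier_hz)

# A 77 GHz automotive radar hears an echo 0.4 microseconds later, shifted
# by 5.1 kHz: the target is ~60 m away and closing at roughly 10 m/s.
print(range_from_delay(0.4e-6))          # 60.0
print(speed_from_doppler(5.1e3, 77e9))   # ~9.9
```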

LiDAR

LiDAR, or Light Detection and Ranging, is what self-driving cars use to model their surroundings and provide highly accurate geographical data in a 3D map. 

Compared to Radar, LiDAR has much higher resolution. This is because LiDAR sensors emit lasers, instead of radio waves, to detect, track, and map the car’s surroundings, with the laser pulses traveling, quite literally, at the speed of light.

Unfortunately, laser beams do not perform as accurately in weather conditions such as snow, fog, smoke or smog. 

But even a small object like a child’s ball rolling into the street can be recognized by LiDAR sensors. LiDAR not only tracks the ball’s position but also its speed and direction, which allows the car to yield or stop if the object presents a danger to passengers or pedestrians.
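
As a rough sketch of how those 3D maps come together, the snippet below converts a single LiDAR return (a distance plus the beam’s horizontal and vertical angles) into a 3D point. The coordinate convention and values are illustrative; a real sensor produces hundreds of thousands of such points per second.

```python
import math

# Convert one LiDAR return (range plus beam angles) into a 3D point.
# A spinning LiDAR reports each laser return in polar form: distance,
# horizontal (azimuth) angle, and vertical (elevation) angle.

def lidar_return_to_xyz(distance_m, azimuth_deg, elevation_deg):
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = distance_m * math.cos(el) * math.cos(az)  # forward
    y = distance_m * math.cos(el) * math.sin(az)  # left
    z = distance_m * math.sin(el)                 # up
    return (x, y, z)

# Accumulating thousands of these points per rotation yields the 3D
# point cloud the car uses to model its surroundings.
print(lidar_return_to_xyz(12.0, 30.0, -2.0))
```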

Cameras and computer vision

Cameras used in self-driving cars have the highest resolution of any sensor. The data processed by cameras and computer vision software can help identify edge-case scenarios and detailed information about the car’s surroundings.

All Tesla vehicles with autopilot capabilities, for example, have 8 external-facing cameras, which help the system understand the world around the car and train Tesla’s models for future scenarios.
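
For a feel of how camera pixels become driving information, here is a hedged sketch of a classic lane-marking pipeline using OpenCV’s edge detector and Hough transform. Production systems rely on trained neural networks rather than this hand-built approach, and the synthetic frame below just stands in for a real dashcam image.

```python
import cv2
import numpy as np

# Classic (pre-deep-learning) lane detection: find strong edges, then
# fit straight line segments to them with a Hough transform.

# Synthetic stand-in for a dashcam frame: dark road, one white lane line.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
cv2.line(frame, (100, 470), (320, 240), (255, 255, 255), 5)

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)  # keep pixels with strong intensity gradients
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                        minLineLength=40, maxLineGap=10)

if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        print(f"lane segment from ({x1}, {y1}) to ({x2}, {y2})")
```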

Unfortunately, cameras don’t work as well when visibility is low, such as in a storm, fog, or even dense smog. Thankfully, self-driving cars are built with redundant systems to fall back on when one or more systems aren’t functioning properly.

Complementary sensors 

Self-driving cars today also have hardware to enable GPS tracking, ultrasonic sensors for object detection, and an IMU (inertial measurement unit) that measures the car’s acceleration and rotation, from which its velocity can be estimated.
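
As a toy illustration of what the IMU contributes, the sketch below integrates acceleration samples into a velocity estimate, the raw step that real systems refine by fusing GPS and other sensors (typically with a Kalman filter). The sample rate and readings are made up.

```python
# Dead-reckoning sketch: integrate IMU acceleration readings over time
# to update a velocity estimate between GPS fixes.

DT = 0.01  # 100 Hz IMU sample interval, in seconds (assumed)

def integrate_velocity(v0, accel_samples):
    v = v0
    for a in accel_samples:
        v += a * DT  # v(t + dt) = v(t) + a * dt
    return v

# One second of gentle braking at -2 m/s^2, starting from 15 m/s (~54 km/h).
print(integrate_velocity(15.0, [-2.0] * 100))  # 13.0 m/s
```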

An often overlooked but important sensor for self-driving cars is a microphone to process audio information. This becomes vitally important when detecting the need to yield to an emergency vehicle or detecting a nearby accident that could be hazardous to the car. 
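
One illustrative way such audio detection could work is to measure how much of a microphone frame’s energy falls in the band where sirens sweep. The band limits and the overall approach here are assumptions for the sketch, not a production detector.

```python
import numpy as np

# Toy siren cue: fraction of a frame's energy inside an assumed
# siren frequency band. High values suggest a siren-like sound.

SAMPLE_RATE = 16_000      # samples per second
BAND = (500.0, 1800.0)    # Hz; rough siren sweep range (assumed)

def band_energy_ratio(audio):
    spectrum = np.abs(np.fft.rfft(audio)) ** 2
    freqs = np.fft.rfftfreq(len(audio), d=1 / SAMPLE_RATE)
    in_band = (freqs >= BAND[0]) & (freqs <= BAND[1])
    return spectrum[in_band].sum() / spectrum.sum()

# One second of a 1 kHz tone (siren-like) vs. broadband noise.
t = np.linspace(0, 1, SAMPLE_RATE, endpoint=False)
print(band_energy_ratio(np.sin(2 * np.pi * 1000 * t)))   # ~1.0
print(band_energy_ratio(np.random.randn(SAMPLE_RATE)))   # ~0.16
```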

Computation

In order for self-driving software to interface with the hardware components in real time, processing all sensor data efficiently, it needs a computer with enough processing power to handle that volume of data.

The chips in your standard computer or smartphone are central processing units (CPUs), but when you consider how much computational power a self-driving car needs, a CPU does not have anywhere near the throughput to handle the number of operations, measured in GOPS, or giga (billion) operations per second.

Graphics processing units (GPUs) have become the de facto chip for many self-driving car companies. But even GPUs are not the ideal solution when you consider how much data autonomous vehicles need to process.

Neural network accelerators (NNAs), introduced in Tesla’s FSD chip in 2019, have far superior computing power for processing real-time data from the various cameras and sensors in the car.

According to Tesla, here is how these chips compare in frames processed per second when running a neural network that requires roughly 35 billion operations (35 GOPS) per frame:

  • CPU: 1.5
  • GPU: 17
  • NNA: 2100

As you can see, Tesla’s NNAs are a breakthrough technology in self-driving car computation. 
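
A quick back-of-envelope check shows where those numbers come from: frames per second is just chip throughput divided by work per frame. In the sketch below, the NNA figure roughly matches the throughput Tesla has cited for its FSD chip, while the CPU and GPU figures are illustrative values chosen to reproduce the comparison above.

```python
# Frames per second = chip throughput (ops/s) / operations per frame.

OPS_PER_FRAME = 35e9  # ~35 billion operations to run the network once

throughput_ops_per_s = {
    "CPU": 0.0525e12,  # illustrative
    "GPU": 0.6e12,     # illustrative
    "NNA": 73.5e12,    # roughly matches Tesla's stated FSD chip throughput
}

for chip, ops in throughput_ops_per_s.items():
    print(f"{chip}: {ops / OPS_PER_FRAME:.1f} frames per second")
# CPU: 1.5, GPU: 17.1, NNA: 2100.0
```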

Software technology of self-driving cars

When self-driving cars reach Level 5 autonomy, they will almost certainly use a combination of three distinct components: hardware, data and neural network algorithms. 

We’ve already touched on the hardware component, which is currently the most mature of the three. The algorithm and data components have a long way to go before we reach Level 5 autonomy.

Neural network algorithms 

A neural network is a sophisticated algorithm, built from layers of weighted connections (matrices), designed to recognize patterns without being explicitly programmed to do so. Neural networks are trained on labeled data to become adept at analyzing dynamic situations and acting on their decisions.

Some of the algorithms built on neural networks and used in self-driving cars are illustrated below:

[Image: depictions and descriptions of the software driving autonomous vehicles]

Neural networks must be trained with data about the task they are expected to perform. When Google trains image recognition neural networks, for example, it must feed the model millions upon millions of labeled images.
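
At toy scale, that training process looks like the minimal PyTorch sketch below: a tiny classifier fit to synthetic labeled data, standing in for the millions of labeled images a real perception network would see.

```python
import torch
from torch import nn

# Minimal supervised training loop on synthetic labeled data.

torch.manual_seed(0)
X = torch.randn(512, 16)          # 512 fake "sensor feature" vectors
y = (X.sum(dim=1) > 0).long()     # synthetic labels: class 0 or 1

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for _ in range(50):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)   # compare predictions against labels
    loss.backward()               # compute gradients
    optimizer.step()              # nudge weights to reduce the loss

accuracy = (model(X).argmax(dim=1) == y).float().mean()
print(f"training accuracy: {accuracy:.2f}")
```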

Data

Data is one of the most important components for fully autonomous vehicles (Level 5) to become a reality. 

Large amounts of data are the raw materials for deep learning models to become finished products, in this case, fully autonomous vehicles. 

Tesla currently has the largest source of data, with more than 400,000 vehicles on the road transmitting data from their sensor suites. By January 2019, Tesla had 1 billion miles of autopilot usage data. Compare this to Waymo, which had only passed 10 million autonomous miles by October 2018.

According to RAND, for an autonomous vehicle to demonstrate a higher level of reliability than humans, the autonomous technology would need to be 100% in control for 275 million failure-free miles before it could be proven safer than human drivers at a 95% confidence level.
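
That figure can be reproduced with a short calculation. If the true fatality rate were as high as the human benchmark RAND uses (about 1.09 fatalities per 100 million miles), the probability of seeing zero fatalities in n miles is roughly e^(-p·n), and we need n large enough to push that probability below 5%.

```python
import math

# Miles of failure-free driving needed to show, with 95% confidence,
# that an AV's fatality rate beats the human benchmark.

human_rate = 1.09 / 100e6   # fatalities per mile (~1.09 per 100M miles)
confidence = 0.95

# Solve e^(-rate * n) <= 1 - confidence for n.
miles_needed = math.log(1 / (1 - confidence)) / human_rate
print(f"{miles_needed / 1e6:.0f} million failure-free miles")  # ~275
```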

Points of failure for self-driving vehicles 

In engineering, a single point of failure is one that will cause the entire system to stop working if it fails. A key tenet of engineering is redundancy: a secondary system that acts as a failsafe in case the first stops working. This is why airplanes have more than one engine; if one fails, the plane can still fly.

Since self-driving cars use cameras, Radar, LiDAR, and other sensors to understand their surroundings, the likelihood of a single point of failure leaving the car inoperable is extremely low.

When Tesla designed its FSD (full self-driving) chip, it put in two independent and identical computers, not only for redundancy in case one fails, but so the two can cross-check each other’s decisions.
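
In spirit, that cross-check can be sketched as a comparator that only executes commands the two independently computed plans agree on. The function names, plan fields, and tolerance below are illustrative, not Tesla’s actual design.

```python
# Dual-computer cross-check sketch: accept a command only when two
# independently computed driving plans agree within a tolerance.

TOLERANCE = 0.05  # illustrative agreement threshold

def plans_agree(plan_a, plan_b):
    return all(abs(plan_a[k] - plan_b[k]) <= TOLERANCE for k in plan_a)

plan_a = {"steering_angle": 0.10, "throttle": 0.30, "brake": 0.00}
plan_b = {"steering_angle": 0.11, "throttle": 0.30, "brake": 0.00}

if plans_agree(plan_a, plan_b):
    print("plans match: execute")   # normal operation
else:
    print("mismatch: fail safe")    # e.g., alert the driver, slow the car
```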

But even with all this redundancy, the main point of failure for self-driving cars is in the software. 

Deep learning models are trained using real-world driving and simulations, but even after billions of miles of experience, there are still rare edge cases these learning models won’t understand how to handle. 

These edge cases are a major point of failure for self-driving cars since deep learning models do not equate to intelligence. Some of the looming problems threatening the future of self-driving cars are:

  1. Predicting agent behavior: It’s currently difficult to entirely understand the semantics of a scene, the behavior of other agents on the road and appearance cues such as blinkers and brake lights. Not to mention, predicting human error such as when a person signals a left turn but actually turns right.
  2. Understanding perception complexity: Self-driving vehicles fail when objects are blocked from view such as during snowstorms, objects viewed in a reflection, fast moving objects around a blind spot and other long-tail scenarios.  
  3. Cybersecurity threats: Software is written by humans, and humans write code with vulnerabilities. Although very few people understand neural networks well enough to exploit these vulnerabilities, it can and will be done.
  4. Continuous development and deployment: One problem facing self-driving vehicles is the process of re-validating changes to the software. If and when the code base changes, does this require testing for another 275 million miles to validate performance?

[Image: animated depictions of the problems yet to be solved for self-driving cars]

Real-world examples of self-driving system failure

On March 18, 2018, Uber’s self-driving car killed a pedestrian who was crossing the street illegally. Uber’s Level 3 autonomy likely failed in the machine learning model’s ability to make a decision based on the sensory detection of a pedestrian. 

Not to mention the failure on the part of the fallback system in the event of an imminent accident: the human. The Uber safety driver behind the wheel failed to take action to prevent the accident.

Only 5 days later, on March 23, 2018, a Tesla operating with Level 2 autonomy hit a median barrier head-on, killing the driver.

Tesla confirmed autopilot mode was engaged and that the system failed because the lane divider lines were not clearly defined. 

The future of self-driving cars

Despite the problems outlined above, self-driving car companies are moving forward and improving every day.

Considering an estimated 93% of car accidents are caused by human error, the opportunity for self-driving cars to remove a major threat in the daily lives of billions of humans is too great to pass up. There will be many debates over the efficacy of self-driving cars as well as regulatory hurdles before we see Level 5 autonomy deployed globally.

[Image: animated infographic about self-driving cars]


The post How Do Self-Driving Cars Work and What Problems Remain? appeared first on The Simple Dollar.



