Self-Driving Cars

A self-driving car uses a combination of sensors, cameras, radar and artificial intelligence (AI) to travel between destinations without a human operator. To qualify as fully autonomous, a vehicle must be able to navigate without human intervention to a predetermined destination over roads that have not been adapted for its use.

Companies developing or testing autonomous cars include Audi, BMW, Ford, Google, General Motors, Tesla, Volkswagen and Volvo. Google's testing has involved self-driving cars, including modified Toyota Prius hybrids and an Audi TT, navigating more than 140,000 miles of streets and highways.

How self-driving cars work

AI technologies power self-driving car systems. Developers of self-driving cars use vast amounts of data from image recognition systems, along with machine learning and neural networks, to build systems that can drive autonomously.

The neural networks identify patterns in the data fed to the machine learning algorithms. That data includes images from cameras on self-driving cars, from which the neural networks learn to identify traffic lights, trees, curbs, pedestrians, street signs and other parts of any given driving environment.
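As a rough illustration of that training idea, the sketch below builds a tiny image classifier in PyTorch. The framework choice, the class list and the model size are assumptions for illustration only; production perception stacks are far larger and are trained on purpose-built driving datasets.

    # Minimal sketch of a camera-frame classifier (illustrative, untrained).
    import torch
    import torch.nn as nn

    # Hypothetical label set, echoing the object types mentioned above.
    CLASSES = ["traffic_light", "tree", "curb", "pedestrian", "street_sign", "vehicle"]

    class SceneClassifier(nn.Module):
        """Tiny convolutional network mapping a camera frame to class scores."""
        def __init__(self, num_classes: int):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(32, num_classes)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.head(self.features(x).flatten(1))

    model = SceneClassifier(num_classes=len(CLASSES))
    frame = torch.rand(1, 3, 224, 224)           # stand-in for one camera frame
    scores = model(frame)
    # The model is untrained here, so the prediction is arbitrary; in practice
    # it would be trained on labeled images gathered from test vehicles.
    print(CLASSES[scores.argmax(dim=1).item()])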

Google’s self-driving car project, called Waymo, uses a mix of sensors, Lidar (light detection and ranging, a laser-based technology similar in principle to radar) and cameras, and combines all of the data those systems generate to identify everything around the vehicle and predict what those objects might do next. This happens in fractions of a second. These systems mature with experience: the more the system drives, the more data it can incorporate into its deep learning algorithms, enabling it to make more nuanced driving choices.
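In highly simplified form, that fusion-and-prediction step can be pictured as merging a camera label, a Lidar position and a radar velocity into one tracked object and extrapolating its motion. The Python sketch below is illustrative only; real systems use probabilistic trackers such as Kalman filters over many more attributes, and none of these names reflect Waymo's actual software.

    # Toy sensor fusion and constant-velocity prediction (illustrative only).
    from dataclasses import dataclass

    @dataclass
    class TrackedObject:
        obj_id: str
        x: float     # position along the road, meters
        y: float     # lateral position, meters
        vx: float    # estimated velocity, m/s
        vy: float

    def fuse(camera_label: str, lidar_pos: tuple, radar_vel: tuple, obj_id: str) -> TrackedObject:
        """Combine a camera classification, a Lidar position fix and a radar
        velocity estimate into a single tracked object."""
        x, y = lidar_pos
        vx, vy = radar_vel
        return TrackedObject(obj_id=f"{camera_label}:{obj_id}", x=x, y=y, vx=vx, vy=vy)

    def predict(obj: TrackedObject, dt: float) -> tuple:
        """Constant-velocity guess at where the object will be after dt seconds."""
        return (obj.x + obj.vx * dt, obj.y + obj.vy * dt)

    pedestrian = fuse("pedestrian", lidar_pos=(12.0, -2.5), radar_vel=(0.0, 1.4), obj_id="17")
    print(predict(pedestrian, dt=0.5))   # expected position half a second from now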

The following outlines how Google Waymo vehicles work (a simplified control-loop sketch follows the list):

  • The driver or passenger sets a destination. The car’s software calculates a route.
  • A rotating, roof-mounted Lidar sensor monitors a 60-meter range around the car and creates a dynamic three-dimensional (3D) map of the car’s current environment.
  • A sensor on the left rear wheel monitors sideways movement to detect the car’s position relative to the 3D map.
  • Radar systems in the front and rear bumpers calculate distances to obstacles.
  • AI software in the car is connected to all the sensors and collects input from Google Street View and video cameras inside the car.
  • The AI simulates human perceptual and decision-making processes using deep learning and controls actions in driver control systems, such as steering and brakes.
  • The car’s software consults Google Maps for advance notice of things like landmarks, traffic signs and lights.
  • An override function is available to enable a human to take control of the vehicle.
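Tying those steps together, the sketch below shows a bare-bones sense-plan-act loop with an override check. Every class, function and value here is a hypothetical placeholder chosen to mirror the list above, not Waymo's actual software.

    # Simplified sense-plan-act control loop (hypothetical placeholders throughout).
    import time

    def plan_route(destination: str) -> list:
        # Placeholder for route calculation against a road map (first step above).
        return [destination]

    class Vehicle:
        def __init__(self):
            self.human_override = False   # set True when a human takes control

        def read_sensors(self) -> dict:
            # Placeholder for Lidar 3D map, wheel-position, radar and camera readings.
            return {"lidar_map": {}, "radar_obstacles": [], "camera_frames": []}

        def perceive(self, sensor_data: dict) -> list:
            # Placeholder for the deep learning models that identify surrounding
            # objects and predict what they will do next.
            return []

        def plan(self, objects: list, route: list) -> dict:
            # Placeholder for choosing speed and steering along the planned route.
            return {"steering_angle": 0.0, "target_speed": 10.0}

        def actuate(self, command: dict) -> None:
            # Placeholder for sending commands to the steering and brake systems.
            pass

    def drive(vehicle: Vehicle, route: list, cycle_seconds: float = 0.1) -> None:
        """Run the control loop until the human override is engaged."""
        while not vehicle.human_override:
            objects = vehicle.perceive(vehicle.read_sensors())
            vehicle.actuate(vehicle.plan(objects, route))
            time.sleep(cycle_seconds)   # each cycle completes in a fraction of a second

In a real vehicle each of these placeholders is a substantial subsystem running on dedicated hardware, but the overall shape of the loop, sensing, perceiving, planning and actuating many times per second, is what the list above describes.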