How driverless cars make decisions
Changing lanes is a simple process: check that it’s safe, then mirror, signal, manoeuvre. At least that’s the case for human drivers, who largely rely on their senses and muscle memory. To perform the same task, a driverless car relies on programming, some degree of artificial intelligence and a myriad of sensor systems (such as split-view cameras) making multiple observations of its surroundings. But what exactly do driverless cars have to consider when deciding where to go and how to get there safely?
Sharing the road
Most autonomous vehicles utilise a combination of cameras, radar and light detection and ranging (LIDAR) devices. Together these allow the vehicle to recognise other vehicles, as well as cyclists and pedestrians.
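The idea of combining sensors can be sketched in code. This is a minimal, hypothetical illustration, not any manufacturer's real perception stack: detections from different sensors are grouped by proximity, so an object reported by several sensors at once can be trusted more than one seen by a single sensor.

```python
# Hypothetical sketch: fuse (sensor, x, y) detections by simple distance
# clustering. Positions within max_gap metres are treated as one object.

def fuse_detections(detections, max_gap=1.0):
    """Group detections whose positions lie within max_gap metres of each other."""
    fused = []
    for sensor, x, y in detections:
        for group in fused:
            if abs(group["x"] - x) <= max_gap and abs(group["y"] - y) <= max_gap:
                group["sensors"].add(sensor)  # another sensor agrees
                break
        else:
            fused.append({"x": x, "y": y, "sensors": {sensor}})
    return fused

readings = [
    ("camera", 12.1, 3.0),   # cyclist seen by the camera
    ("lidar",  12.3, 3.2),   # the same cyclist seen by LIDAR
    ("radar",  40.0, -1.5),  # a car ahead seen only by radar
]
for obj in fuse_detections(readings):
    print(obj["x"], obj["y"], sorted(obj["sensors"]))
```

Here the cyclist is confirmed by two independent sensors, while the radar-only contact would warrant more caution.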
Large vehicles can obstruct sensors and prevent autonomous vehicles from knowing how to react. However, future driverless cars will be able to communicate and share sensory information, providing extra ‘eyes’ for the car.
Vans often straddle both the road and pavement when making deliveries. By showing driverless cars many examples of such events, the AI can learn to recognise when a vehicle is temporarily parked.
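Learning from labelled examples can be illustrated with a toy classifier. This is a deliberately simplified sketch with hypothetical features (speed and hazard lights), not a production perception model: a new observation is labelled the same way as the most similar training example.

```python
# Toy 1-nearest-neighbour classifier: label a sample like its closest
# labelled example. Features (speed in m/s, hazard lights on) are invented
# purely for illustration.

def classify(sample, examples):
    """Return the label of the training example nearest to the sample."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(examples, key=lambda ex: dist(ex[0], sample))[1]

training = [
    ((0.0, 1), "temporarily parked"),  # stopped with hazards on
    ((0.0, 0), "waiting in traffic"),  # stopped, no hazards
    ((8.0, 0), "moving"),              # travelling at speed
]
print(classify((0.2, 1), training))  # prints "temporarily parked"
```

Real systems train on vast numbers of such examples, but the principle is the same: the more representative the examples, the better the recognition.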
Hazard, indicator and traffic lights are universal instructions for drivers. Autonomous vehicles can react to them in the same way as humans thanks to a pre-programmed set of rules.
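A pre-programmed rule set of this kind can be as simple as a lookup table. This is a simplified sketch with invented signal names; real vehicles use far richer state machines, but the mapping from observed signal to action is the core idea.

```python
# Simplified sketch: map observed light signals to driving actions.
# Signal names and actions are hypothetical illustrations.

SIGNAL_RULES = {
    "traffic_red":    "stop at the line",
    "traffic_amber":  "prepare to stop",
    "traffic_green":  "proceed if clear",
    "hazard_lights":  "treat vehicle as stationary and plan to pass",
    "left_indicator": "expect the vehicle to move left",
}

def react(signal):
    # Fall back to a cautious default for anything unrecognised.
    return SIGNAL_RULES.get(signal, "slow down and reassess")

print(react("traffic_red"))     # prints "stop at the line"
print(react("unknown_signal"))  # prints "slow down and reassess"
```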
City-specific information can aid a driverless car’s decisions. For example, the vehicle will be more confident in overtaking a stationary vehicle on streets that receive lots of deliveries.
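One way to picture this is as blending live sensor evidence with a city-specific prior. The weights, street categories and threshold below are invented for illustration, not taken from any real system: on a street known for frequent deliveries, the same sensor reading clears the overtaking threshold sooner.

```python
# Hypothetical sketch: combine a live sensor score with a street-level
# delivery prior. All numbers are illustrative assumptions.

DELIVERY_PRIOR = {"high_delivery_street": 0.9, "ordinary_street": 0.5}

def overtake_confidence(sensor_score, street_type):
    """Blend sensor evidence (0-1) with the street's delivery prior (0-1)."""
    prior = DELIVERY_PRIOR.get(street_type, 0.5)
    return 0.6 * sensor_score + 0.4 * prior

# The same sensor reading yields more confidence on a delivery-heavy street.
print(overtake_confidence(0.7, "high_delivery_street"))  # prints 0.78
print(overtake_confidence(0.7, "ordinary_street"))       # prints 0.62
```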
This article was originally published in How It Works issue 115, written by James Horton