How do driverless cars navigate a city? They use a whole mix of software and sensors to find their way: lidar, radar, cameras, detailed maps, and self-learning algorithms.
Driverless cars are becoming more common these days, but most still have to have a driver paying attention at the wheel in case something goes wrong. There are 5 levels of automation for cars, 6 if you count no automation as a level. Level 1 has some driver assistance, such as keeping within lanes. Level 2 has partial automation, such as steering on a highway while the driver supervises. Level 3 has conditional automation, which means the driver has to take control when the car asks them to. Level 4 has high automation, which means the car can drive on its own in most situations, but if it runs into a difficulty and there is no driver to assist, it will simply stop. Level 5 is full automation in all situations. This doesn’t exist yet.
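For quick reference, here is the whole ladder in one small Python sketch. The level numbers follow the scale described above; the one-line summaries are my own paraphrases, not official wording.

```python
# A quick lookup for the automation levels described above.
# The summaries are informal paraphrases, not official definitions.
AUTOMATION_LEVELS = {
    0: "No automation - the driver does everything",
    1: "Driver assistance - e.g. keeping within lanes",
    2: "Partial automation - e.g. steering on a highway, driver supervises",
    3: "Conditional automation - the driver must take over when the car asks",
    4: "High automation - copes with most situations, stops if it cannot",
    5: "Full automation - no driver needed in any situation (doesn't exist yet)",
}

def describe_level(level):
    """Return a one-line summary of an automation level."""
    return AUTOMATION_LEVELS.get(level, "Unknown level")

print(describe_level(4))  # High automation - copes with most situations, stops if it cannot
```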
At some point, cars will become completely autonomous. Currently, Google has test cars that have driven over 1 million km without having any accidents. Actually, that is not completely true. The cars have been involved in plenty of accidents, but almost all of them were caused by other drivers driving into the driverless cars. There was one incident in 2016 when a Google car hit the side of a bus, but Google updated the software and there haven’t been any accidents like it since then. The Google test car would count as a level 4 car. Anyway, we are not going to look at how effective driverless cars are or will be. I want to learn how they do what they do. Let’s take the Google driverless car as our example because it has the kind of software and hardware that would be rolled out onto regular cars.
The first thing the cars need is excellent map software and GPS. With the Google car, this comes from Google Maps. Since Google Maps was introduced in 2005, the technology has improved enormously. Today, it is incredibly detailed and has street information for most cities in most countries around the world. Coupled with that, GPS can pinpoint the car’s location to within about 4 m, depending on the surroundings; tall buildings can block the signal. However, Google Maps alone isn’t enough because there are many things in a city that don’t show up on the maps. Driverless cars have to cope with people, other cars, roadworks, animals, cyclists, emergency vehicles, random stuff in the street, weather, and a host of other unexpected things. How do they do this?
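One common trick for coping with that few-metres GPS error is map matching: the car knows it has to be on a road, so it snaps its slightly noisy GPS fix to the nearest point on the mapped road. Here is a rough Python sketch of the idea with made-up coordinates; real cars fuse far more signals (wheel odometry, lidar localisation, and so on).

```python
import math

# Hypothetical map data: a few points along a mapped road, as (latitude, longitude).
ROAD_POINTS = [
    (51.5007, -0.1246),
    (51.5010, -0.1240),
    (51.5013, -0.1234),
]

def distance_m(p1, p2):
    """Rough distance in metres between two (lat, lon) points (haversine formula)."""
    earth_radius = 6_371_000  # metres
    lat1, lon1, lat2, lon2 = map(math.radians, (*p1, *p2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * earth_radius * math.asin(math.sqrt(a))

def snap_to_road(gps_fix):
    """Snap a noisy GPS fix to the nearest mapped road point."""
    return min(ROAD_POINTS, key=lambda point: distance_m(gps_fix, point))

noisy_fix = (51.5011, -0.1238)   # where GPS thinks the car is
print(snap_to_road(noisy_fix))   # the nearest point on the mapped road
```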
The second thing a driverless car needs is lots of cameras. Driverless cars have cameras covering every available viewpoint. These cameras are connected to the car’s CPU, but they might also be connected to a service center where somebody can offer assistance, and they will probably record everything they see. The cameras are high definition, but they are not much use in the dark or in bad weather, so driverless cars need other systems to cope in those conditions.
Driverless cars also use lidar and radar to be able to “see” when conditions don’t allow their cameras to work properly. Lidar and radar are similar technologies, but lidar uses light and radar uses radio waves. Lidar shoots out a laser pulse and detects how long the light takes to come back. Knowing that time, the onboard CPU can work out the distance to whatever objects around the car are reflecting the light. Radar works the same way, but it measures the time it takes for a radio wave to come back. Both are useful, but lidar is far more precise and can build high-definition 3D images of the surroundings, in the dark as well as in daylight, although heavy rain, fog, and snow can scatter the laser, which is where radar copes better. The trouble with lidar is also that it is very expensive, so a lot of cars use radar as well.
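The distance calculation behind both sensors is the same, and it is surprisingly simple, because laser light and radio waves both travel at the speed of light. A minimal sketch:

```python
SPEED_OF_LIGHT = 299_792_458  # metres per second, for both laser light and radio waves

def distance_from_echo(round_trip_seconds):
    """Distance to an object from the time a lidar/radar pulse takes to bounce back.
    The pulse travels out and back, so the one-way distance is half the round trip."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2

# A pulse that returns after 200 nanoseconds hit something about 30 m away.
print(f"{distance_from_echo(200e-9):.1f} m")  # 30.0 m
```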
The biggest part of the driverless car system, though, is the onboard computer. The car can have all the sensors in the world, but if the onboard computer can’t make sense of what they detect, it is all worthless. The onboard computer has to not only see everything but also work out what it is seeing, and then it has to work out how to react to it. With a lot of things, this is not that complicated, but people and bicycles are proving a difficult problem. Both are unpredictable, but bicycles are also difficult to see: they present an unusual profile, and cars have trouble working out which way they are moving or predicting where they will go. However, Google says that its car spots bikes correctly almost 99% of the time.
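To give a feel for that “see it, classify it, react to it” pipeline, here is a toy Python sketch. The labels, confidence threshold, and rules are invented for illustration; a real planner weighs speed, predicted trajectories, and much more.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # what the perception system thinks the object is
    confidence: float  # how sure it is, from 0 to 1
    distance_m: float  # how far away the object is

def plan_reaction(d):
    """A toy version of the 'work out how to react' step."""
    if d.confidence < 0.5:
        return "slow down - not sure what that is"
    if d.label in ("pedestrian", "cyclist") and d.distance_m < 20:
        return "brake and give way"
    if d.label == "vehicle" and d.distance_m < 10:
        return "keep distance"
    return "continue"

print(plan_reaction(Detection("cyclist", 0.93, 12.0)))  # brake and give way
```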
One of the key systems in self-driving cars is machine learning. A machine learning system might not be able to tell what a bicycle is or which way it is going at first, but it learns with each interaction. Another way of helping it to learn is to use humans as teachers: a human drives and shows the AI system what to do in certain situations. With each experience, the number of situations that the car finds unpredictable shrinks.
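Here is a deliberately simple sketch of that teacher idea: every time the human handles a situation the car doesn’t recognise, the situation and the action are stored together, so the pool of unknown situations shrinks. Real systems learn from raw sensor data rather than neat text labels.

```python
# What the car has learned so far: situation -> action.
learned_actions = {}

def car_decision(situation):
    """Return a learned action, or None if the situation is still unknown."""
    return learned_actions.get(situation)

def human_teaches(situation, action):
    """The human driver demonstrates what to do; the car remembers it."""
    learned_actions[situation] = action

print(car_decision("cyclist signalling a left turn"))   # None - unknown, the human must drive
human_teaches("cyclist signalling a left turn", "slow down and hold back")
print(car_decision("cyclist signalling a left turn"))   # slow down and hold back
```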
The last system is one that has not really been tested yet. Once there are enough self-driving cars on the streets, they would probably all be networked. Once that happens, we will have swarm data, because every car will know what all of the other cars are seeing, and that shared view should make each car a better driver.
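A toy sketch of what that could look like: each car publishes what it sees to a shared pool, and other cars check the pool for hazards along their planned route. The data format and the road names here are invented purely for illustration.

```python
# Each car publishes what it sees to a shared pool, and any other car can
# check the pool for hazards on the roads it is about to use.
shared_observations = []

def report(car_id, location, hazard):
    """A car tells the swarm about something it has seen."""
    shared_observations.append({"car": car_id, "location": location, "hazard": hazard})

def hazards_on_route(route):
    """Another car asks the swarm what has been spotted along its planned route."""
    return [obs for obs in shared_observations if obs["location"] in route]

report("car-17", "High Street", "roadworks blocking the left lane")
print(hazards_on_route(["High Street", "Station Road"]))
```

And this is what I learned today.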
Sources
https://cariad.technology/de/en/news/stories/self-driving-cars-navigation-swarm-data.html
https://www.bbc.com/future/article/20211126-how-driverless-cars-will-change-our-world
https://www.nytimes.com/interactive/2024/09/03/technology/zoox-self-driving-cars-remote-control.html
https://en.wikipedia.org/wiki/Self-driving_car