This machine learning project could help jumpstart self-driving cars again

An individual standing in front of a digital map

Liu's team has dramatically cut down the time needed for autonomous vehicles to become seasoned drivers in complex road situations by using machine learning and real-world data.

Image: Brenda Ahearn, U of Michigan

Based on all the hype around autonomous vehicles (AVs) in recent years, you would imagine that our roads would be crisscrossed by swarms of driverless cars by now, ferrying their owners to their next destination while those owners try to discern real news from fake on their phones in the back seat.

However, the convoys of autonomous trucks and fleets of autonomous taxis and passenger cars promised by the breathless hype of a few years ago have not really materialized.

Also: An autonomous car that wakes up and greets you could be in your future

The Insurance Institute for Highway Safety expects a paltry 3.5 million self-driving vehicles to ply U.S. roads by 2025. That number inches up to 4.5 million by 2030, and even then, experts don't expect these vehicles to be whizzing about under their own steam; they will instead rely on their human counterparts to make crucial decisions.

So, what's holding autonomous vehicles back?

It's not easy being us

Believe it or not, for all the millions of miles these self-driving cars have driven for testing purposes, and all the sensors and hyper-detailed urban mapping they have been equipped with, autonomous cars just don't seem to be able to do the one essential thing that they should — predict the glorious unpredictability of other erratic humans.

“The safety performance of these automated driving vehicles, even with a state-of-the-art, automated driving system, is not on par with human drivers right now,” Henry Liu, professor of civil engineering at the University of Michigan, told ZDNET.

Also: This new technology could blow away GPT-4 and everything like it

Liu is also director of M-City — a 32-acre mock city on campus that tests autonomous vehicles — and head of the Center for Connected and Automated Transportation, funded by the US Department of Transportation. 

The main problem dogging AVs, as described by Liu, is a “curse of rarity” — the fact that encountering accidents on your daily drive happens very infrequently. It takes hundreds of millions, maybe billions, of miles of driving by autonomous vehicles to encounter a few accidents and learn from them.
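To put that "curse of rarity" in rough numerical terms, here is a back-of-envelope sketch in Python. Both figures below (one reportable crash per 500,000 miles, and a wish list of 1,000 crash examples) are illustrative assumptions, not numbers from Liu's study:

```python
# Back-of-envelope illustration of the "curse of rarity".
# Both numbers below are hypothetical placeholders, not measured figures.

CRASH_RATE_PER_MILE = 1 / 500_000   # assume one reportable crash every 500,000 miles (illustrative)
TARGET_CRASH_EXAMPLES = 1_000       # how many crash examples a training set might want (illustrative)

expected_miles = TARGET_CRASH_EXAMPLES / CRASH_RATE_PER_MILE
print(f"Expected driving needed: {expected_miles:,.0f} miles")
# -> Expected driving needed: 500,000,000 miles
```

Even with generous assumptions, collecting a meaningful number of crash examples by simply driving around pushes the mileage into the hundreds of millions, which is exactly the bottleneck Liu describes.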

Consider, for instance, autonomous vehicle firm Waymo trumpeting the fact that it reached one million miles of public autonomous driving with no human monitor in the vehicle and experienced only 18 minor and two major “contact events”.

Also: GM unveiled a semi-autonomous driving system with a $300K price tag

How would this car respond to someone who decides to cross the road on a whim, like a kid who is running late on the morning trek to school?

This kind of incident actually happened in Tempe, Arizona in 2018 when an Uber test vehicle failed to identify a person crossing the road with her bicycle outside of a crossing, did not take evasive action even when it could have, and struck and killed her.

Today, self-driving cars routinely have problems identifying objects on the road, from paper bags to a huddle of pigeons. On occasion, the results are fatal.

Self-driver's Ed

A road with a tunnel and an industrial facility in the background

The highway portion of M-City's augmented reality testing environment for autonomous vehicles.

Image: Brenda Ahearn, U of Michigan

Humans have to negotiate random, complex, unpredictable events, big and small, on the road. While some end in collisions or worse, for the most part we are able to adjust instantaneously and steer toward a safe outcome.

Unfortunately, algorithms that haven't been fed exactly that kind of incident to learn from are not so flexible.

So, how do you ensure that your self-driving ride has been trained on the potentially life-saving experience of sharing the road with Mad Max, as he tries to run over the crew from Fast and the Furious, versus one that primarily stuck to driving Miss Daisy around strip malls for millions of miles?

Also: ChatGPT's intelligence is zero, but it's a revolution in usefulness, says AI expert

Liu and his team started collecting real-world data, such as speed and direction, from a few hundred privacy-preserving sensors installed at smart intersections in Ann Arbor and Detroit, which offer a treasure trove of traffic data, including accidents.

Additionally, up to 160 volunteer passenger cars were outfitted with similar sensors for the study.

One particular two-lane roundabout turned out to be a fountain of accident gold, thanks to roundabouts being largely unfamiliar to American drivers. Liu knew the area well, having repeatedly taken his hapless son there to get him seasoned for his driver's exam.

The U of Michigan study then stripped non-safety-critical information from the driving data — in other words, they took out all the boring miles of safe driving in between accidents, but kept the stuff that ended up in fender-benders. This data was then fed into the neural network used to train the autonomous vehicle.
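As a rough illustration of that filtering step, here is a minimal Python sketch. The record fields, thresholds, and helper names (DrivingSegment, is_safety_critical, filter_log) are hypothetical; the Michigan team's actual pipeline and criteria are not described in detail here:

```python
from __future__ import annotations
from dataclasses import dataclass

# Hypothetical record format; field names are assumptions for illustration only.
@dataclass
class DrivingSegment:
    timestamp: float            # seconds since the start of the log
    speed_mps: float            # ego-vehicle speed
    min_gap_m: float            # distance to the nearest other road user
    time_to_collision_s: float  # estimated time-to-collision with that road user
    crash: bool                 # whether a contact event was recorded

def is_safety_critical(seg: DrivingSegment,
                       ttc_threshold_s: float = 2.0,
                       gap_threshold_m: float = 5.0) -> bool:
    """Keep crashes, near-misses, and very close interactions; drop routine cruising."""
    return (seg.crash
            or seg.time_to_collision_s < ttc_threshold_s
            or seg.min_gap_m < gap_threshold_m)

def filter_log(segments: list[DrivingSegment]) -> list[DrivingSegment]:
    """Strip the uneventful miles and keep only the safety-critical slices for training."""
    return [seg for seg in segments if is_safety_critical(seg)]
```

The point is simply that the training set ends up dominated by the rare, dangerous interactions rather than the overwhelming majority of uneventful driving.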

Also: What is deep learning? Everything you need to know

The team then headed to M-City, a kind of Truman Show for cars — a pioneering, fake urban environment with stop lights, robot pedestrians, and other vehicles.

“We've created a mixed reality testing environment,” said Liu. “The AV test vehicles we're using are real, but the background vehicles are virtual, which allows us to train them to create challenging scenarios that only happen rarely on the road.”

In this space, test vehicles encounter many more dangerous situations with much greater frequency — albeit virtually, which dramatically compresses the period of time needed to learn the ways of the road.
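One way to picture how that compression works is to oversample rare scenarios when generating the virtual background traffic. The sketch below is a toy Python illustration under that assumption; the scenario names and probabilities are invented and are not drawn from M-City:

```python
import random

# Invented scenario catalogue; real M-City scenarios and their natural frequencies are not public here.
SCENARIOS = {
    "routine_cruising":      {"natural_prob": 0.980, "training_prob": 0.20},
    "cut_in_at_close_range": {"natural_prob": 0.015, "training_prob": 0.40},
    "pedestrian_mid_block":  {"natural_prob": 0.004, "training_prob": 0.25},
    "wrong_way_roundabout":  {"natural_prob": 0.001, "training_prob": 0.15},
}

def sample_scenario(mode: str = "training") -> str:
    """Pick a background-traffic scenario, heavily oversampling rare events in training mode."""
    key = "training_prob" if mode == "training" else "natural_prob"
    names = list(SCENARIOS)
    weights = [SCENARIOS[name][key] for name in names]
    return random.choices(names, weights=weights, k=1)[0]

# In the virtual test loop, rare-but-dangerous scenarios come up far more often than on real roads.
print(sample_scenario("training"))
```

Because the dangerous cases dominate each simulated session instead of appearing once in a blue moon, the test vehicle accumulates hard experience in a tiny fraction of the mileage.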

Also: AI bots have aced medical school exams, but should they become your doctor?

Liu estimates the time needed to train a Waymo-type car could now be just a few thousand miles containing a variety of crashes, instead of the sleep-inducing tens of millions of miles of uneventful tarmac.

And even then, experts say that computers in AVs will not have the kind of quick, intuitive thinking that humans do when encountering complex, unpredictable situations.

You can't get a better industry barometer on this line of thinking than Ford and Volkswagen's decision to write off billions of dollars and pull out of Argo, the company they hoped would navigate them to their self-driving dreams.

Bad news for the self-driving industry, perhaps, but great news for humanity trying to keep one step ahead of the rise of the machines.
