12 Comments
Reeeetired:

Waymo needs to add "Don't block mail boxes" to its list. Some years ago there was a pool cleaner who always parked in front of the mailboxes for 4 people across the street. One of those 4, exactly which one was never discovered, finally had enough skipped mail days and did some naughty things to the offending pickup truck with the chemicals in the back. Problem solved.

Johnny Oh:

It's almost like the computer brain can't "think" like a human or something. If the thing encounters something that isn't in its spreadsheet/database, or that's buried so far down in the "unlikely to happen" pile, all it can do is sit there until directed, which makes it just as dangerous as a person who can't make up their mind and vapor-locks in traffic. The people who would use these things are generally the same people who realize they can't trust themselves to make decisions on the road (especially surface streets), and the same sort of people who program them. Is this a solution? I don't think so.
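
To picture the failure mode, here's a toy sketch in Python; the obstacle list and the 0.9 threshold are invented for illustration, not anything a real vendor ships:

    # Toy sketch of the "vapor-lock" failure mode. KNOWN_OBSTACLES and
    # the 0.9 threshold are invented for illustration.
    KNOWN_OBSTACLES = {"pedestrian", "cyclist", "vehicle", "cone"}

    def decide(obstacle: str, confidence: float) -> str:
        """Hypothetical planner rule: act only on what it recognizes."""
        if obstacle in KNOWN_OBSTACLES and confidence >= 0.9:
            return "yield_and_proceed"
        # Unknown object, or one buried too deep in the "unlikely" pile:
        # stop and wait for a human to direct it.
        return "stop_and_request_remote_assistance"

    print(decide("pedestrian", 0.97))        # yield_and_proceed
    print(decide("mattress_in_lane", 0.55))  # stop_and_request_remote_assistance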

Tom from WNY:

It was waiting for Marty McFly to pilot it home!

Raconteur:

Self-driving vehicles are like any other complicated mechanism: they must go through many iterations to work out the bugs. Until they do, it's a crapshoot as to what will screw up first. Each iteration also comes with its own inherent bugs, so fixing one thing buggers something else.

They will never be 100% reliable. 95%? Maybe. 90% is doable, but think about the reliability of the human driver. At least with the machine, it's measurable and predictable. With humans?? It's predictable: they'll screw it up in a heartbeat.
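
On the "measurable" point: with a fixed machine you can at least bound the failure rate from test data. A back-of-the-envelope sketch using the statistical "rule of three" (the trial count below is hypothetical):

    # "Rule of three": after n independent failure-free trials, a 95%
    # upper confidence bound on the failure probability is about 3/n.
    # The trial count is hypothetical, for illustration only.
    def failure_rate_upper_bound(n_trials: int) -> float:
        return 3.0 / n_trials

    n = 100_000  # hypothetical failure-free trips
    print(f"95% upper bound on per-trip failure rate: {failure_rate_upper_bound(n):.1e}")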

Steve S6:

AI software by nature is not predictable.

Paul Koning:

That's exactly right.

In other words: a computer is a machine, and its behavior is fixed by the combination of its hardware design and the software in it. If you know those two well enough, in principle you can figure out how the machine will behave. You'll be able to know whether that behavior is correct -- meaning that it meets the specifications.

But there is a problem in practice. Most software is large enough, and almost always constructed with insufficient care, that you don't actually know what it will do. And most specifications are far worse, so even if you know what the machine does, you don't know whether it is correct.
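
To make "meets the specifications" concrete: here is a toy Python sketch of a spec small enough to actually check, a sorting contract stated as two testable properties. Purely illustrative, nothing to do with vehicle code.

    import random
    from collections import Counter

    def meets_spec(inp: list[int], out: list[int]) -> bool:
        # The spec, stated as two checkable properties:
        # 1) the output is a permutation of the input;
        # 2) the output is in non-decreasing order.
        same_elements = Counter(inp) == Counter(out)
        non_decreasing = all(a <= b for a, b in zip(out, out[1:]))
        return same_elements and non_decreasing

    for _ in range(1000):
        data = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
        assert meets_spec(data, sorted(data))
    print("sorted() met its spec on 1000 random inputs")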

Now consider AI ("learning" systems, such as "large language models"). There, the "software" is not just the program; in fact, the program is only a tiny part. The real "software" is all the data constructed by the program as a result of consuming all the training data it was fed. How does that affect the behavior of the machine? No one knows. I would argue no one can know; it's explicitly out of scope for anyone to know. Furthermore, these systems, like self-driving cars, don't have specifications. "Drive safely" is not a specification; it's an expression of a wish, or a dream. "Don't block mailboxes" is a little closer to a specification, but of course it's only about 0.0001% of what it takes to drive a car properly.
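
A toy sketch of that point (the weights below are random stand-ins, not trained ones): the "program" is a few lines of matrix math that never change, while everything the system actually does lives in W1 and W2, which training would produce and no one would ever read.

    import numpy as np

    rng = np.random.default_rng(0)
    # The learned parameters. In a real model: billions of them,
    # produced by training, read by no one. Here: random stand-ins.
    W1 = rng.standard_normal((4, 8))
    W2 = rng.standard_normal((8, 2))

    def model(x: np.ndarray) -> np.ndarray:
        # The entire "program": two matrix multiplies and a
        # nonlinearity. The behavior is not here; it is in W1 and W2.
        return np.maximum(x @ W1, 0.0) @ W2

    print(model(np.array([1.0, 0.0, -1.0, 0.5])))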

So AI systems behave in unknowable ways that have no discernible relationship to the product goals, and the product goals are not testable specifications either. That means there is no reason such a system can ever be a valid option in safety-critical settings. Not now, not next century.

By the way, such things as autopilots in airplanes are not AI, and must not be replaced by AI-type systems. An airplane autopilot is a very simple servomechanism, with a precisely defined specification, a small and precisely known set of inputs, and just a couple of outputs. It is feasible to build software and hardware that reliably and accurately performs that task. But, Elon Musk notwithstanding, the job of driving a car is not "autopilot"; it is many orders of magnitude harder.
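
A sketch of how simple such a servo loop is: one tracked quantity, one bounded output, every line inspectable. The gains and the toy aircraft response below are invented for illustration, not taken from any real autopilot:

    # Toy altitude-hold servo loop. Gains (KP, KD) and the aircraft
    # response are invented for illustration, not from a real autopilot.
    KP, KD = 0.02, 0.5
    TARGET_ALT = 10_000.0  # feet

    alt, climb_rate = 9_500.0, 0.0
    for _ in range(200):                    # 200 steps of 0.1 s = 20 s
        error = TARGET_ALT - alt            # the one tracked quantity
        # The one output: a bounded elevator command in [-1, 1].
        command = max(-1.0, min(1.0, KP * error - KD * climb_rate))
        climb_rate += 2.0 * command         # toy aircraft response
        alt += climb_rate * 0.1
    print(f"altitude after 20 s: {alt:.0f} ft (closing on {TARGET_ALT:.0f})")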

It's amazing how well the Tesla self-driving machinery works; I got to experience it on NH back roads as well as highways when I had a loaner for a few days. But it insists on having an alert driver involved. As an assistant it is very nice. I would not trust it without my hands near the steering wheel, and no one else should either.

Raconteur:

Does your and Paul's definition of predictable differ from mine? If the software is not predictable, how can it be used to control anything??

If I enter the command to go from A to B, and it can't be predicted that it will, in fact, go from A to B, then what purpose does it serve other than to be chaotic?

Steve S6:

When I query an AI and get a specific response, then initiate another instance and repeat the exact query (but with the toggle to show source links) and get a substantially different response, I say it's not predictable.
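
That test is easy to script. A sketch, where query_model is a hypothetical stand-in (here simulated with a random choice) for whatever chat endpoint you'd actually call:

    import random

    def query_model(prompt: str) -> str:
        # Hypothetical stand-in for a fresh chat session; swap in a real
        # client. random.choice just simulates sampling nondeterminism.
        return random.choice(["answer A", "answer B"])

    def distinct_answers(prompt: str, runs: int = 5) -> set[str]:
        # Same exact prompt, a fresh "session" each time.
        return {query_model(prompt) for _ in range(runs)}

    results = distinct_answers("the exact same query, repeated")
    verdict = "predictable" if len(results) == 1 else "not predictable"
    print(f"{len(results)} distinct answers across 5 runs: {verdict}")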

Dale Flowers:

AI-driven vehicles may be the wave of the future. Not like we have much of a choice. As long as Joe Citizen has recourse in the legal system to hold the code writers and vendors accountable, as they do for errant human drivers, I can live with it. But all vehicles of that type should be painted Day-Glo International Orange.

Paul Koning:

I remember the Dutch Highway Patrol police vehicles when I was growing up: Porsche 911s, painted white with dayglo orange stripes. You couldn't miss them and you sure as heck could not get away from them.

Steve S6:

1. Never do v1.0 software.

2. All updates have v1.0 software in them.

Alfred:

You do know they've been in San Francisco, the second-densest city in the US, for five years now and have covered over 67 million miles. The data is in; there is zero doubt about the safety.