Maybe we can make a 15-min "driving test" and have the self-driving car drive the test circuit to prove that it knows how to do it all, just like we do for humans.
We've had those for a while, in the form of test environments. It turns out it's possible to ace those environments and still be a bad self-driving algorithm. With humans, we test the mechanical skills because we assume they have judgement. With self-driving, we need to test the judgement, because the mechanical skills are relatively easy to program.
Trouble is, a human can generally recognize an edge case (flat tire, recently repaved road with no markings, first responders blocking lanes) and take appropriate action. They also communicate with people in the environment around them to get necessary information.
Self-driving cars are limited to whatever sensors were designed in.
So, do you have the car refuse to operate if the TPMS is inoperative or reading low? How do you deal with people who just put the TPMS sensors in a pipe and pressurize it, like mechanics did to my MIL's car to get the idiot light to stay off?
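For what it's worth, that spoofing trick isn't even hard to catch in principle: a deflated tire has a smaller rolling radius and spins faster at the same road speed, which is the idea behind indirect TPMS. A minimal sketch of that cross-check (thresholds, names, and numbers are made up for illustration, not any vendor's actual logic):

```python
# Hypothetical sketch: cross-check direct TPMS pressure readings against
# wheel-speed ratios. A sensor sitting in a pressurized pipe reports "fine"
# while the actually-flat wheel spins measurably faster than the others.

from statistics import mean

LOW_PRESSURE_KPA = 180        # assumed warning threshold
SPEED_RATIO_TOLERANCE = 0.02  # ~2% faster rotation suggests a low tire

def tire_fault_suspected(pressures_kpa: dict[str, float | None],
                         wheel_speeds_rpm: dict[str, float]) -> set[str]:
    """Return the set of wheel positions that look unsafe to trust."""
    suspects = set()
    avg_rpm = mean(wheel_speeds_rpm.values())
    for wheel, rpm in wheel_speeds_rpm.items():
        pressure = pressures_kpa.get(wheel)
        if pressure is None or pressure < LOW_PRESSURE_KPA:
            # Sensor missing/inoperative, or genuinely low.
            suspects.add(wheel)
        elif rpm > avg_rpm * (1 + SPEED_RATIO_TOLERANCE):
            # "Healthy" pressure but the wheel spins fast:
            # the reading may be spoofed or stale.
            suspects.add(wheel)
    return suspects

# Example: right-rear sensor lives in a pressurized pipe, real tire is flat.
print(tire_fault_suspected(
    {"FL": 230, "FR": 228, "RL": 231, "RR": 229},
    {"FL": 750, "FR": 752, "RL": 749, "RR": 771},
))  # -> {'RR'}
```

The check itself is trivial; the design question is still what the car does with it, which is exactly the judgement problem above.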
Do you infer shoulder sizes and where the markings should be, or do you cut out when you hit a newly paved section?
If there's a bridge plate in the road to cover large potholes or other roadwork, how much do you slow down, and can you tell it apart from general debris?
Can they tell if a cable in the road is high voltage, or just coax? Will they respond appropriately to a road closed off with tape?
Self-driving cars don't work at scale because they have no interface for taking direction from construction workers, first responders, or anyone else around them. And any programmatic interface you bolt on will either be useless (i.e., sufficiently privileged that most construction/utility people won't have access to it) or a massive security vulnerability (carjacked by anyone in a hi-vis vest).
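To make the bind concrete, here's a toy sketch of accepting a signed roadside directive. None of this is a real V2X protocol; the names and keys are invented. The verification code is the easy part. The hard part is the key table: issue keys narrowly and the crew that actually needs to redirect traffic doesn't have one, issue them broadly and a leaked key steers every car on the road.

```python
# Illustrative only: accept a "lane closed ahead" directive if and only if it
# verifies against a key we already trust. The TRUSTED_KEYS table is where the
# whole problem lives, not the crypto.

import hmac, hashlib, json

TRUSTED_KEYS = {
    # Who issues these? To which contractors? How are they revoked at 2 a.m.?
    "dot-district-7": b"placeholder-shared-secret",
}

def accept_directive(message: bytes, key_id: str, signature: str) -> dict | None:
    """Return the parsed directive if it verifies, otherwise ignore it."""
    key = TRUSTED_KEYS.get(key_id)
    if key is None:
        return None  # unknown issuer: ignored (so is the legitimate road crew)
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return None  # bad signature: ignored
    return json.loads(message)
```

Swapping in asymmetric signatures and certificate chains doesn't change the shape of the problem; it just moves it to who runs the CA and how quickly credentials can be handed out when a water main breaks in the middle of the night.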
Self-driving works just well enough to be a serious hazard, not just to the operator but to everyone around it. Especially without proper V2X communications, it should not be allowed to operate on any public roadway without explicitly licensed and monitored test drivers and strict incident-reporting requirements. Those incident reports can help other self-driving programs improve, designing to address each edge case once it's raised, instead of reproducing the deadly crashes Tesla has had dozens of times, or Uber, or others.
I'd like to note that there are no fatalities associated with MB's or Toyota's self-driving development, and I've seen those systems firsthand, in operation. Meanwhile, Tesla et al. just keep YOLOing and deploying death traps to production.