Think about all of those moment-to-moment rapid decisions that you make while driving a car.
Go ahead, do a slow-motion post-driving analysis in your mind. Think about a recent trip to the grocery store or perhaps a driving trek to a local mall.
Whether you realize it or not, there were hundreds upon hundreds, likely thousands, of minuscule driving decisions that you made, all part of a larger web of driving decisions in the course of a driving journey. And, notably, they all ultimately encompassed some variant of life-or-death considerations.
How so?
Let’s see.
Imagine that you are driving along amid city streets on an otherwise ordinary day.
If you decide to take that upcoming right turn just a bit too fast, there is a heightened risk that you could inadvertently go awry. You might veer into a pedestrian who is standing at the curb's edge. You might swing wide and brush against another car in a nearby lane.
Bam, you smack into someone.
In case you are doubtful that these are decidedly life-or-death-related decisions, I have some somber stats to share with you.
There are about 6.7 million car crashes each year in the United States alone. Those car crashes produce approximately 2.5 million injuries and over 40,000 human fatalities. Few of us are contemplating those ominous stats when we get behind the wheel of a car. Nonetheless, they are quite illuminating numbers and worthy of giving due pause whenever undertaking a driving trip. For more coverage and my in-depth analysis of those driving usage stats, see the link here.
Now that you are thinking about all of those itsy-bitsy decisions and how vital they are, let's zoom out to a more macroscopic perspective. You can envision this like one of those zoom-in to zoom-out visualizations of the planet Earth as seen from the vantage of an orbiting space station. I'm sure that you've seen those before. A person is standing on a city street, the camera pulls back to reveal them standing amidst a city block, and then the camera zooms out further to reveal the entirety of the city. This zooming out continues. You next see an entire geographical area and then an entire continent.
Okay, so we’ll do that same kind of panoramic envisioning, but in a different context.
You are one person that is driving on a city street. We zoom out and there are lots of other cars also driving on that same city street. We continue to zoom out and can see lots and lots of cars throughout the entire city, all being driven by a human at the wheel. Keep zooming out and you’ll see cars being driven throughout an entire geographical region and then throughout the entire United States.
There are around 250 million registered automobiles in the United States. Of course, at any singular point in time, not all of them are necessarily underway. We do keep our cars parked and ostensibly stationary for about 95% or more of their available usage time. In any case, when we do use our cars, the aggregate number of miles traveled annually is estimated at 3.2 trillion miles (in the United States alone). A typical everyday driver probably drives around 12,000 miles per year, which varies depending upon where you live and what type of work you do.
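As a quick sanity check, those cited figures hang together reasonably well. Here is a back-of-the-envelope sketch using only the rounded numbers quoted above (no new data is introduced):

```python
# Back-of-the-envelope cross-check using the rounded figures cited above.
registered_vehicles = 250_000_000          # roughly 250 million registered automobiles
annual_vehicle_miles = 3_200_000_000_000   # roughly 3.2 trillion miles traveled per year

miles_per_vehicle = annual_vehicle_miles / registered_vehicles
print(f"Implied annual miles per registered vehicle: {miles_per_vehicle:,.0f}")
# Prints about 12,800, which is in the same ballpark as the roughly 12,000 miles
# per year attributed to a typical everyday driver.
```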
What does this driving-related zoom-in to zoom-out visioning get us?
The choices that we make as drivers in the microscopic acts of driving inevitably form a virtual kind of collective conglomeration. By adding up all those seemingly minuscule acts, we can begin to detect patterns of widespread behaviors. Those patterns can reveal whether our day-to-day, moment-to-moment driving decisions are, overall, increasing or reducing our risks while driving a car, contingent on what choices are being made.
Let's use an example to go from the considered microscopic to the macroscopic. Doing so will provide handy fodder to turn this discussion from somewhat theory-based to one that is eminently practical and provides pragmatic insights for us all.
You are once again at the wheel of a car and driving along in a city that has the usual hustle and bustle taking place. There are other cars around you. Pedestrians are on the sidewalks, and some are jaywalking across the busy streets. Bike riders are amidst all this chaos. The usual zoo of wild and crazy traffic endangerment exists.
Just another day in paradise, as they say.
Being late for work, you are driving with a bit of a rush in mind. This does not imply that you are driving recklessly. It is just that you are driving with a certain amount of verve and zest, hoping to shave some time off your daily commute to the office. Perhaps this can be characterized as decidedly not driving in a lackadaisical manner. You are driving with a fervently determined goal of trying to beat the clock as best you can.
We can now lay out a specific driving scenario.
At one point during your driving trek, you are in a major artery of the city that has two lanes going in each direction, making that a total of four lanes (two lanes going southbound, plus two other lanes going northbound). You are in the northbound lanes.
For the moment, you are in the rightmost lane of those two northbound lanes.
Traffic is running along smoothly, though busily. Speeds are around 45 miles per hour. In addition to the lanes of car traffic, there is a bike lane on your side of the street. The bike lane is to your right. Bike riders are actively using that bike lane. The time of day is around rush hour, and thus many cars and trucks are flowing along in traffic. Plus, there are lots of bike riders occupying the bike lane.
Hopefully, you’ve got a pretty good picture in your mind of the driving setting.
Suddenly, you realize that a large truck, one of those massively sized moving vans, has incrementally come up beside your car, doing so to your left (and ergo in the northbound left lane). The truck is blocking your view of the southbound lanes, but that's okay since you don't need to see the traffic that is heading in the opposite direction of your travel.
In a sense, you are now trapped in your right lane. To your left is this big lurking truck. There is traffic directly ahead of you in your right lane. There is traffic behind you in your right lane. You have your own little pocket, as it were, sitting tightly squeezed between all those other interlopers.
And there is the bike lane to be considered too.
Please keep in mind that you are flowing along at around 45 miles per hour. Your vehicle is in motion. All the other cars and the truck next to you are also in motion. Turns out that there are bike riders in the bike lane, and they too are in motion.
Here’s what comes next.
The moving van is veering somewhat in its lane. The driver of the truck is having a hard time keeping the lengthy and slightly erratic vehicle entirely and solely in its own lane. It is a rather unwieldy truck and the roadway is not the best. The lanes are slightly askew and the roadway surface is rather rough.
Meanwhile, some of the bike riders decide they are being held back by some slower bike riders ahead of them.
As such, several of these urgently pressing bikers are opting to pass the slower ones. The act of passing them, though, is going to bunch them all up for a few split seconds, essentially occupying the entire width of the bike lane. Indeed, the odds are that some of the bike riders are going to slop over into the regular lane of traffic as they make their dicey and rapid movement around the slowpoke bike riders.
Are you ready to make a snap driving decision?
The everyday squeeze play has become the ultra-squeeze play.
You can try to stay squarely in the middle of your lane.
In that case, you are hoping that there is sufficient clearance for the swaying truck (coming somewhat toward you, from your left) and enough clearance for those bikes that are bulging out of the bike lane and into your lane of traffic (coming somewhat toward you, from your right).
Or you might decide to ride toward the edge of your lane.
Which edge?
Do you choose to align with the right edge or the left edge of your lane?
Choosing the right edge would keep you farther away from the truck, at least as much as is feasible at this juncture. That truck is an imposing figure and rather foreboding. It is extremely bulky and heavy, such that a glancing blow from the truck into your underway car is bound to be grievously problematic, possibly even fatal. You might be wise to give the most clearance that you can to the wayward truck.
Thus, you aim your car toward the rightmost edge of your lane. This makes perfectly good sense.
Whoa, don’t forget about those bike riders!
The bike riders are extremely vulnerable.
The odds are that if you brushed against a bike rider, it would be really bad times for that person. In fact, by striking one of them, there is a notable chance that several of the bike riders would go down to the ground all at the same time, akin to a bowling ball striking bowling pins. Those battered bike riders would undoubtedly get hurt, and there is a chance of fatalities depending upon how things go.
You are now in the proverbial situation of being between a rock and a hard place.
One decision entails trying to stay clear of the menacing truck. This is probably safer for you, given that a collision with that truck is likely to be lethal to you. But you also need to weigh staying clear of those bike riders. On a somewhat selfish basis, there is little doubt that colliding with the bike riders would probably not be especially injurious to you. Sadly, it would most likely be severely injurious if not fatal to them.
Choices, choices, choices.
Some might assert that it is up to fate to decide. Just stay in the middle of your lane. Whatever happens is whatever is going to happen. Don't be sweating it. You are rightfully able to be in your lane, and by staying in the middle you will always have a clear conscience about what you did.
Maybe so, maybe not.
Suppose you do remain in the middle. It could be that the truck marginally veers into your lane and rams into your car. Had you been just a tad toward the right edge of your lane, the truck and your car would have never touched each other. Because you opted to doggedly be in the middle of your lane, the collision takes place. Bad news for you.
The point is that none of the options are risk-free, nor are any of them outcome-free.
Whichever choice you make, it could turn out to be the "wrong" choice in terms of having an adverse result. Driving is a game of playing the odds, though by referring to the matter as a "game" we need to realize that this is a life-or-death gambit. It is not a playful game. It is a serious contest encompassing probabilities and chances that can turn an innocent moment into a heart-wrenching and irreversibly unfavorable one.
Which choice did you make?
I don't want you to be thinking that there is a wrong choice or a right choice per se. That's not what the scenario is meant to convey. The emphasis of this setting is that you are being called upon throughout a driving journey to make essential driving decisions. Those driving decisions are vital. You often make them in a split second. They come and go, like a river that flows endlessly.
Most of the time, those driving decisions are not especially notable. In this case, perhaps you stayed in the middle and everything turned out okay, or maybe you went to the right edge or the left edge, and everything turned out okay. I would dare say that you would not likely remember the next day that you had that precipitous decision to make.
If you tried to remember all the hundreds or thousands of driving choices in each driving trek, you’d probably go bonkers. It just isn’t a reasonable thing to do. Sure, some of the more monumental ones will probably stick with you. For example, this particular scenario could stay in your mindset for a long time, especially if it was a real squeaker and the situation flared to a point of nearly having gotten struck or nearly having struck a bike rider.
Assuming that no one got hurt and there was no collision of any kind, the chances are that this instance would eventually recede in your memory banks. You might recall it from time to time, particularly if telling tall tales about some of your harrowing driving experiences.
Let's shift gears a little bit and zoom out, akin to the zoom-out notion postulated earlier.
You aren’t the only one to have ever been in a situation like this. Being pinned between the wayward truck and those boisterous bike riders is undoubtedly something that happens with some frequency. It happens each day. It happens in your city, and it happens in many other locales.
Consider that daily, zillions of drivers are making the same decision that you were just confronted with. This happens throughout the day. It happens over a period of a year, over and over again. We could end up with zillions upon zillions of those specific decisions being made.
Here is the kicker.
If those drivers all made their decisions independently and with no shared bias, we might expect roughly an equal chance of staying in the middle versus going toward the right edge or the left edge. We could look at a statistical distribution and see that in the aggregate there was an equal chance of which way drivers were opting to go.
On the other hand, drivers might have a propensity or specific tendency that would become apparent by examining the aggregated instances. Suppose that the numbers showed that, by and large, the drivers went to the right edge. Overall they seemed to be choosing to get away from the truck, though this was simultaneously increasing the risks of hitting the bike riders.
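To make that aggregation idea concrete, here is a minimal sketch of how such a propensity could surface in the data, assuming (purely hypothetically) that we could log every driver's choice in this squeeze-play scenario. The proportions below are invented solely for illustration.

```python
import random
from collections import Counter

# Hypothetical illustration: simulate many independent drivers facing the same
# truck-versus-bike-lane squeeze play. The weights are invented, not measured,
# and merely show how a shared propensity would surface in aggregated counts.
random.seed(42)
choices = ["stay_middle", "hug_right_edge", "hug_left_edge"]
propensity = [0.25, 0.55, 0.20]  # made-up bias toward steering away from the truck

decisions = random.choices(choices, weights=propensity, k=1_000_000)
tally = Counter(decisions)

for choice in choices:
    print(f"{choice}: {tally[choice] / len(decisions):.1%}")
# With no shared bias we would expect roughly equal thirds; a pronounced skew
# toward hug_right_edge would reveal a population-wide tendency to shy away from
# the truck, which in turn shifts marginal risk toward the bike lane.
```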
In any real-world sense of things, we do not have any practical means to ferret out this kind of decision-making in the aggregate.
We aren’t able to collect the zillions of daily driving decisions being made by each driver, and we cannot then add those up into a convenient database that would show us all of the zillions upon zillions of those itty-bitty driving decisions made by the over 225 million licensed drivers in the United States.
That’s a darned shame.
Wait a second, wave a magic wand, and pretend that we could collect that humongous dataset.
This would enable all sorts of insightful analyses of driving and driver behaviors. We might discover that human drivers are more prone to making one type of decision over another and that this had heightened their risk of incurring a car crash. We could then try to educate drivers accordingly, aiming to change behavior toward being less risk-prone. We might redesign our cars, or at least the driving controls, accordingly. We might alter the roadways and our infrastructure, doing so to minimize the bad choices and maximize the good choices.
That kind of data collection and analysis could make a big difference in reducing the number of annual injuries and fatalities associated with driving a car. Sorry to say though that we don’t have a magic wand, but we do have something nearly “magical” that is gradually arising, namely the advent of AI-based true self-driving cars.
Let’s talk about self-driving cars.
The future of cars consists of AI-based true self-driving cars. There isn't a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system. There isn't a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle. For my extensive and ongoing coverage of Autonomous Vehicles (AVs) and especially self-driving cars, see the link here.
Here’s an intriguing question that is worth pondering: How might driving decisions at the micro-level and also at the macroscopic aggregated level be pertinent to the advent of AI-based true self-driving cars?
We’ll consider quite mindfully this hearty question. First, I’d like to further clarify what is meant when I refer to true self-driving cars.
Understanding The Levels Of Self-Driving Cars
As a clarification, true self-driving cars are ones in which the AI drives the car entirely on its own and there isn't any human assistance during the driving task.
These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).
There is not yet a true self-driving car at Level 5, and we don't yet know whether this will be possible to achieve, nor how long it will take to get there.
Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).
Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).
For semi-autonomous cars, the public needs to be forewarned about a disturbing aspect that's been arising lately, namely that despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.
You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3 vehicle.
Self-Driving Cars And The Statistical Trolley Dilemma
For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.
All occupants will be passengers.
The AI is doing the driving.
One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.
Why is this added emphasis about the AI not being sentient?
Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.
With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.
Let's dive into the myriad aspects that come into play on this topic.
Driving a car is tantamount to making ongoing life-or-death decisions, as noted earlier in this discussion.
The elaborated example of being squeezed between a large truck to your left and those bike riders to your right was a simple and yet commonly encountered instance of life-or-death driving choices. The lives involved or on the line can include the driver of the car, the passengers that might be in that car, the drivers of nearby cars, the passengers inside those nearby cars, pedestrians nearby, bike riders nearby, and so on.
Some would arguably contend that life-or-death driving decisions are extremely rare.
Their viewpoint is that maybe once in a lifetime as a driver you might come upon a situation that incorporates life-or-death choices. Otherwise, for some 99.999% of your driving experiences, you won’t presumably have any such grave matters to consider.
This is an extremely offbeat way to characterize driving a multi-ton vehicle that is able to move at tremendous speeds and convey enormous physical forces. I submit that we indeed are all faced most of the time with life-or-death driving choices. It is perhaps 99.999% of the time that we, fortunately, make the correct or sufficiently apt decisions and avert getting into dire circumstances.
Despite having managed to steer away from the grim reaper much of the time, the struggle of confronting imminent life-or-death while driving, or shall we say at least the possibility of injury-or-noninjury, is a constant one and not an oddball rarity.
There is a famous or some would say infamous mind-bending exercise known as the Trolley Problem that has garnered a great deal of debate and angst in the self-driving car industry and pertains to the weighty decisions involved in driving a car. See my extensive discussion about the Trolley Problem at this link here.
Many pundits and vendors in the self-driving car niche are quick to claim that the Trolley Problem is irrelevant to the advent of self-driving cars. They decry the Trolley Problem as purely theoretical, impractical, and a nonsensical distraction from the realities of driverless cars.
Those making such an argument are flat-out wrong.
They either misunderstand how to apply the Trolley Problem to the matter of self-driving cars or they are wishful that it should not be considered applicable. Their wishful thinking at this time allows them to disregard or downplay the issues raised. This in turn can provide a basis for not seeking to encompass Trolley Problem-related solutions into their AI driving systems.
I’ve predicted that those that take a head-in-the-sand approach to this topic will find themselves and their companies on the legal hook down the road. Eventually, there are going to be gargantuan lawsuits against many of the automakers and self-driving tech firms on their paucity of Trolley Problem considerations. Only then will they apparently take seriously the Trolley Problem, though it will assuredly be the case that they will still fight against it tooth-and-nail, hoping to avoid mega-sized legal losses.
As a quick primer about the Trolley Problem, it is a relatively straightforward thought experiment. Imagine that you are standing at a train track and have access to a control that will shunt a trolley onto one of two forking tracks. On one of the forked tracks is a person that is tied down to the rails and cannot getaway. On the other forked track, there are three people tied down to the rails and unable to escape (the number of people tied down varies by how the setup is envisioned, sometimes five people are mentioned rather than three, etc.).
Which direction do you decide to send the oncoming trolley?
It is a devilish problem. You are either going to choose to kill one person or three people. The inclination by some is that they won’t move the switch at all, thereby avoiding having to make a choice. That’s not really a means to avoid the issue since the switch is already preset to go onto one track or the other. Your attempt to avoid being involved will nonetheless still produce death.
I won’t get into all of the details herein, so see my discussion at this link here. The main point is that you are at times faced with very difficult life-or-death situations, and you need to make the most horrible of choices.
In a similar way, the example of being squeezed between the large truck and the bike riders was a Trolley Problem-related consideration.
You had to choose which option was more or less unfavorable. The only notable difference per se from the Trolley Problem was that death in this real-world setting was not an absolute certainty. There was a probability of death, and also a probability associated with injuries. Some would contend that this is why the Trolley Problem is irrelevant, due to the thought experiment entailing only sure death and not a probability of death. That is a rather feeble argument, and there are many variations of the Trolley Problem, including the incorporation of probabilities rather than absolute certainties.
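To see the distinction in simple terms, here is a minimal sketch contrasting the classic sure-death framing with a probability-based variant. Every number in it is a placeholder assumption for illustration, not a measured crash statistic.

```python
# Contrast between the classic trolley setup and a probability-based variant.
# All numbers below are placeholder assumptions, purely for illustration.

# Classic framing: the outcome of each choice is certain.
classic_outcomes = {"divert": 1, "do_not_divert": 3}  # people killed with certainty
print("Classic framing (certain outcomes):", classic_outcomes)

# Probabilistic framing: each choice carries a chance of a fatality, not a certainty.
probabilistic_outcomes = {
    "hug_right_edge": {"p_fatality": 0.002, "whose_life": "a bike rider"},
    "hug_left_edge":  {"p_fatality": 0.001, "whose_life": "the car occupant"},
}

for option, details in probabilistic_outcomes.items():
    expected = details["p_fatality"]  # expected fatalities, with one life at risk
    print(f"{option}: expected fatalities = {expected} ({details['whose_life']})")
# Neither choice guarantees a death, yet the decision still allocates the risk of
# death between different people, which carries the same moral structure as the
# classic all-or-nothing trolley setup.
```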
Now that I’ve got you up-to-speed about the Trolley Problem, we can return to the aspects of the microscopic elements of driving decisions and the aggregated macroscopic perspective.
As notably articulated in a research paper "The Trolley, The Bull Bar, And Why Engineers Should Care About The Ethics Of Autonomous Cars" (authored by Jean-Francois Bonnefon, Azim Shariff, and Iyad Rahwan, published in the Proceedings of the IEEE), there is a Statistical Trolley Dilemma to be considered, one that ought not to go unheeded: "Alas, ignoring the challenges of autonomous vehicles as explicit ethical agents will only postpone the problem. Even if every action of an autonomous car is oriented toward minimizing the absolute risk of a crash, each action will also shift relative risk from one road user to another. The cars may not be making decisions between outright sacrificing the lives of some to preserve those of others, but they will be making decisions about who is put at marginally more risk of being sacrificed."
Upon examining the use case of a driver caught between a large truck and a bike rider, along with implied reference to all sorts of akin driving situations, they saliently point out that: “These are not the dramatic, life and death decisions featured in trolley dilemmas. But once they are aggregated over millions of cars driving billions of miles, these small statistical decisions add up to life and death consequences—and prompt the same questions as the trolley dilemma did.”
In a manner of speaking, I would assert that the Trolley Problem does apply to the microscopic, day-to-day, moment-to-moment life-or-death driving decisions that we make continually while at the steering wheel, and that furthermore we need to also recognize the Statistical Trolley Dilemma on a macroscopic scale.
This latter aspect is assuredly the accumulation of zillions upon zillions of those day-to-day Trolley Problem instances that add up over time and constitute large-scale aggregated patterns of driving behaviors.
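A quick illustrative calculation, using an invented risk figure alongside the mileage total cited earlier (not measured crash data), shows how even a minuscule shift in risk becomes consequential at that aggregated scale.

```python
# Illustrative only: an invented per-mile risk shift scaled up over the national
# annual mileage cited earlier in this discussion.
annual_miles = 3_200_000_000_000   # ~3.2 trillion vehicle miles per year
risk_shift_per_mile = 1e-9         # hypothetical: one-in-a-billion added incident risk
                                   # per mile shifted from one class of road user to another

expected_added_incidents = annual_miles * risk_shift_per_mile
print(f"Statistically expected additional incidents per year: {expected_added_incidents:,.0f}")
# Prints 3,200. A shift far too small to notice in any single driving moment still
# adds up to thousands of statistically expected outcomes once aggregated.
```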
Conclusion
Trying to get human drivers to be contemplative about the Trolley Problem and ergo adjust their driving behavior is a nearly futile dreamy notion. Humans tend to be resistant to change and have a hard time adopting new driving practices.
For semi-autonomous vehicles, such as those at Level 2 and Level 3, there is an opportunity to have the assisted driving features attempt to aid a human driver by incorporating Trolley Problem-related solving capabilities. The automation providing driving assistance could seek to alert human drivers, possibly even taking over the driving controls. I've covered this extensively, though, as a dicey proposition that can lead to a tug-of-war between a human driver and the assisting automation. It is going to be ugly and assuredly a can of worms.
AI-based true self-driving cars are a different matter since there is no human driver at the wheel. This means that we can expect that the AI driving system will be considering Trolley Problem issues. When a car crash or collision occurs involving a self-driving car, we ought to be able to do a full analysis of what the AI driving system was doing and what decisions it made during the incident at hand.
On a large-scale basis, we could accumulate these AI driving system aspects into a centralized database that could be used to study what seems to be working well and what seems to not be going quite so well. Some have suggested that we might need to establish an ethics-oriented oversight board that would examine the programming of AI driving systems and the degree to which the Trolley Problem is being addressed (see my column coverage).
Here’s a final remark to give you something significant to ponder.
If we could essentially force all self-driving cars to abide by some set of driving rules, such that in the case of being squeezed between, say, a large truck and a bunch of bike riders, the AI driving system would do as it has been collectively established to do, what would we want that rule to be?
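As one hypothetical form that such a collectively established rule could take (a sketch under assumptions of my own making, not a depiction of what any automaker actually does), the AI driving system might be required to pick the maneuver that minimizes expected harm according to weights set by an agreed-upon public policy rather than by any individual programmer.

```python
from typing import Dict

# Hypothetical sketch of a collectively established rule: pick the maneuver that
# minimizes expected harm, where the harm weights come from an agreed-upon public
# policy rather than an individual programmer's ad hoc choice. The numbers below
# are placeholders, not anything any automaker has actually adopted.
AGREED_POLICY: Dict[str, float] = {"occupant": 10.0, "bike_rider": 8.0}

def expected_harm(collision_probs: Dict[str, float], policy: Dict[str, float]) -> float:
    """Expected harm of a maneuver, weighted by the collectively set policy."""
    return sum(prob * policy[user] for user, prob in collision_probs.items())

def choose_maneuver(candidates: Dict[str, Dict[str, float]], policy: Dict[str, float]) -> str:
    """Apply the agreed rule: choose the maneuver that is least harmful in expectation."""
    return min(candidates, key=lambda name: expected_harm(candidates[name], policy))

# Placeholder probabilities for the truck-versus-bike squeeze play described earlier.
candidates = {
    "stay_middle":    {"occupant": 0.02,  "bike_rider": 0.02},
    "hug_right_edge": {"occupant": 0.005, "bike_rider": 0.05},
    "hug_left_edge":  {"occupant": 0.06,  "bike_rider": 0.005},
}
print(choose_maneuver(candidates, AGREED_POLICY))  # prints the rule's selection
```

Whether those weights should exist at all, and who gets to set them, is precisely the societal question at hand.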
That's the kind of decision that some people would skittishly say is above their pay grade. Maybe so, but it certainly ought not to be left to chance by some semi-randomly choosing AI algorithm, nor be left in the hands of some overworked AI programmer who, while in the throes of coding up the AI, made a prior life-or-death decision about how the AI driving system is going to react.
Those are decidedly scary defaults. I doubt that any of us would want our lives teetering in the balance based on such loosely or poorly determined and altogether careless and mindless proclivities.
I wouldn’t and neither should you.