OPINION

Death in the denominator: AI for AV

Olga Uskova offers her unique perspective on the much-discussed topic of Artificial Intelligence for autonomous vehicles. It can’t simply be a question of trust, can it?

“To the Moika Embankment, please.”


I am sitting in a taxi in St Petersburg, Russia, looking at the random driver I have been assigned.


He is a man of about 60 who stares constantly at his satnav and only occasionally glances at the road. It's night. It's winter. It's a busy highway from the airport.


What is in his head? How safe am I?


I am holding the door handle so that, if anything untoward happens, I can jump out of the car.

There is a great deal of talk and discussion at the moment about Artificial Intelligence (AI) for autonomous vehicles. Many people wonder: how can we entrust our lives to a system built on deep neural networks when we still don't fully understand how they function and what is happening inside them?


Yet at the same time we constantly entrust our lives to unknown public transport drivers, with no way of imagining what is going on in their heads or how adequate (or inadequate) they are. Let's try to figure some of this out.

Quite some time ago, people arrived at five commandments for solving complex scientific problems:


  1. Perfect Accuracy - We always strive for the best possible answer.
  2. Comprehensive Completeness - We want to know everything about the task, all the possible data.
  3. Predictable Repeatability - We want to get the same result every time we run an experiment under the same conditions.
  4. Exceptional Speed - We expect to get the results in minimal time.
  5. Transparency - We want to know how we got the result.


But for today's complex systems, such as AI for autonomous vehicles, this approach has turned out to be poorly applicable. We want to create an artificial brain that resembles our own human brain.


Humans themselves are not optimal, repeatable or comprehensive. People live and work by the Best-Effort Principle: all that people demand from other people is their best effort. And this is the best that any methodology can do on tasks of this class; it makes no sense to search for the perfect solution in an infinite space of data.


In biological systems, neural reactions are inherently unpredictable at the logical level, because they depend on a huge variety of complex electrochemical processes and the release of signaling substances. We can never be 100 percent sure that the same stimulus will always produce exactly the same impulse, and in practice we don't manage it. Outbursts of anger, passion, despair or unexplained joy… it's impossible to optimize it all.


Since most of our activity is based on this 'Best-Effort Principle' and there are no predetermined correct answers, the human organism is naturally resilient to minor internal errors and failures. A perfectly balanced organism, with no tolerance for error, would not have survived for long.


So we at Cognitive Technologies are offering a new approach: ARTIFICIAL INTELLIGENCE AS A CT-COMPROMISE.

The key idea of this approach is 'Sufficiency'. There is no need to pursue an illusory ideal if you have achieved sufficient functionality.

We are introducing a new coordinate system:


1. Instead of 'Perfect Accuracy' - Permissible Sufficiency. The point is that in an infinite space of solutions we should look not for the optimal solution but for a permissible one, exactly as happens in real life.

2. Instead of 'Comprehensive Completeness' - Available Data Completeness.

3. Instead of 'Predictable Repeatability' - Acceptable Variability, with the possibility of small deviations.

4. Instead of 'Exceptional Speed' for the project as a whole - maximum speed within the particular task.

5. Instead of 'Transparency of the Process' - trust in the system's work and its results, just as with human drivers.


However, the most important thing in this whole system is the single common denominator: THE INADMISSIBILITY OF DEATH.
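
To make the compromise concrete, here is a minimal sketch in Python of how such a planner might choose between candidate manoeuvres. Everything in it is hypothetical and illustrative - the names, scores and thresholds are my assumptions for this article, not Cognitive Technologies' actual pipeline - but it shows the ordering of the ideas: the inadmissibility of death is a hard filter applied first, and Permissible Sufficiency means the search stops at the first good-enough option instead of hunting for the optimum.

```python
# A minimal, illustrative sketch of a CT-Compromise-style chooser.
# All names, scores and thresholds below are hypothetical assumptions.

SUFFICIENT_COMFORT = 0.7   # 'Permissible Sufficiency': good enough, not optimal
CANDIDATE_BUDGET = 50      # maximum speed within the task: a bounded search

def fatality_risk(manoeuvre):
    """Stand-in for a prediction stack's estimate of worst-case risk."""
    return manoeuvre["fatality_risk"]

def comfort(manoeuvre):
    """Stand-in for a soft quality score (progress, smoothness, comfort)."""
    return manoeuvre["comfort"]

def choose_manoeuvre(candidates):
    """Return the first permissible manoeuvre, not the globally optimal one."""
    best_safe = None
    for m in candidates[:CANDIDATE_BUDGET]:
        # The common denominator: death is never a trade-off. Any manoeuvre
        # with non-zero predicted fatality risk is rejected outright,
        # whatever its other merits.
        if fatality_risk(m) > 0.0:
            continue
        # Permissible Sufficiency: stop at the first good-enough option.
        if comfort(m) >= SUFFICIENT_COMFORT:
            return m
        # Acceptable Variability: remember the best safe fallback so far.
        if best_safe is None or comfort(m) > comfort(best_safe):
            best_safe = m
    return best_safe

# Usage with synthetic candidates:
candidates = [
    {"name": "overtake", "fatality_risk": 0.001, "comfort": 0.95},
    {"name": "follow",   "fatality_risk": 0.0,   "comfort": 0.75},
    {"name": "brake",    "fatality_risk": 0.0,   "comfort": 0.40},
]
print(choose_manoeuvre(candidates)["name"])  # prints 'follow', not 'overtake'
```

Note that safety here is a constraint, not a weight in a cost function: the risky 'overtake' is never even compared with the others, and the merely sufficient 'follow' wins without the search being exhausted.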


That thought, the inadmissibility of death, is always there in our subconscious while we sit in someone else's car. The instinct for self-preservation is inherent in all of us by nature.


But in the case of Artificial Intelligence the situation is more complicated, because the AI must not only protect itself but, first and foremost, save its passenger - save a human life. And here the moral territory begins. Which human should it save? Only its passenger, or also the pedestrians in the road ahead? And which option should it choose when the two come into conflict? That is the topic of my next article.

FYI

Olga Uskova is President of Cognitive Technologies

uskova.olga.a@gmail.com


The author would like to thank Monica Anderson for inspiration: the idea of the system came up while listening to some of her lectures.

