While the phrase “artificial intelligence” (AI) is undoubtedly misused on a frequent basis, it can also be a genuinely tricky subject for non-experts.
People often mistakenly conflate contemporary AI with the version they’re most familiar with from science-fiction films. There is a lot of buzz around the phrases AI, machine learning, deep learning, intelligent personal assistant and natural language processing. In the press and on social media these computer science terms are often used interchangeably, and there is much confusion about what they mean and how they can support different vertical industries, such as the automotive industry and, by extension, autonomous driving.
Surprising figures come from MMC Ventures, an investment firm in London, which looked at 2,830 European companies claiming to make use of AI and found that 40% of them were not using machine learning (a field of AI) at all, but rather basic software automation or conventional statistical databases.
Let’s start with the basics. The European Commission's Communication on AI proposes the following definition: “Artificial intelligence (AI) refers to systems that display intelligent behaviour by analysing their environment and taking actions – with some degree of autonomy – to achieve specific goals.”
In simpler terms, AI refers to machines that can learn, reason and act for themselves with some degree of autonomy. AI can be divided into Artificial Narrow Intelligence (ANI), also known as “weak” AI; Artificial General Intelligence, or “strong” AI; and Artificial Superintelligence. Weak AI is the only form of AI that exists in our world today, designed to focus on a single, narrow task such as autonomous driving. It contrasts with strong AI, in which a system would be capable of any cognitive function a human mind can perform, and with Artificial Superintelligence, which describes intelligence that surpasses human ability.
Another sub-type of weak AI is called super-weak AI. An autonomous car using super-weak AI, for example, can rely on data from other cars and nearby infrastructure without performing all the calculations itself. Its function is limited to evaluating critical situations, while the rest of its processing power is dedicated to verifying the data it receives from the network or from nodes, such as blockchain functions.
Machine learning comes in several flavours, and Figure 1 (below) is useful in depicting its different meanings. The most widespread approaches are supervised learning, unsupervised learning and reinforcement learning. To understand which machine learning algorithms will be required for autonomous driving, we first need to be able to collect data to train and use (training data).
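To make the supervised flavour concrete, here is a minimal sketch, assuming nothing more than toy data: a 1-nearest-neighbour classifier, where the labels attached to the training examples are the “supervision”. The feature names and labels are purely illustrative, not drawn from any real driving system.

```python
# Minimal illustration of supervised learning: a 1-nearest-neighbour
# classifier trained on toy labelled data (the labels are the supervision).

def nearest_neighbour_predict(training_data, point):
    """Return the label of the training example closest to `point`."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, label = min(training_data, key=lambda ex: sq_dist(ex[0], point))
    return label

# Hypothetical training set: (speed km/h, distance to obstacle m) -> action
training_data = [
    ((30, 50), "no_brake"),
    ((90, 10), "brake"),
    ((60, 15), "brake"),
    ((40, 80), "no_brake"),
]

prediction = nearest_neighbour_predict(training_data, (85, 12))
```

A new observation of 85 km/h at 12 m sits closest to a “brake” example, so the model predicts “brake” without ever being given an explicit rule.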
As regards the collected data, it is often useful to distinguish between structured and unstructured data. Structured data is organised according to pre-defined models (such as in a relational database), while unstructured data has no known organisation (such as an image or a piece of text). This is especially relevant for autonomous driving, where unstructured data comes from ADAS sensors and, in the future, from the whole Internet of Things environment.
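The distinction can be sketched in a few lines; the records and byte values below are invented for illustration. Structured data can be queried by field straight away, whereas unstructured data first needs a model to extract meaning from it.

```python
# Structured data: rows conforming to a pre-defined schema, queryable by field.
structured = [
    {"vehicle_id": 1, "speed_kmh": 52.0, "lane": 2},
    {"vehicle_id": 2, "speed_kmh": 47.5, "lane": 1},
]
fast_vehicles = [r["vehicle_id"] for r in structured if r["speed_kmh"] > 50]

# Unstructured data: no pre-defined model. Raw camera bytes or free text
# mean nothing until an algorithm (e.g. image classification, NLP) is applied.
camera_frame = bytes([16, 128, 127, 3])   # hypothetical raw pixel bytes
voice_command = "take me to the nearest charging station"
```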
Camera, radar and lidar sensors provide rich data about the car’s environment, such as shape, speed and distance, with enough overlap to ensure reliability and redundancy, and support its localisation. An autonomous vehicle must be able to make sense of this constant flow of information, a process called sensor fusion. Image classification plays a very important role in localisation and actuation, while the biggest challenge for any algorithm is to develop an image-based model for prediction and feature selection.
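A minimal sketch of one sensor-fusion step, under simplifying assumptions: two sensors report the same object’s distance with different noise levels, and the estimates are combined by inverse-variance weighting (a stripped-down version of the update inside a Kalman filter, which real fusion stacks typically build on). The numbers are invented.

```python
# Combine two noisy estimates of the same quantity; the more certain
# (lower-variance) sensor gets the larger weight.

def fuse(est_a, var_a, est_b, var_b):
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)          # fused estimate is more certain
    return fused, fused_var

# Hypothetical readings: radar says 20.0 m (variance 4.0),
# lidar says 19.0 m (variance 1.0).
distance, variance = fuse(20.0, 4.0, 19.0, 1.0)
```

The fused distance (19.2 m) lies closer to the more trustworthy lidar reading, and its variance (0.8) is lower than either sensor’s alone, which is the redundancy benefit the article describes.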
The types of regression algorithms that can be used for self-driving cars include Bayesian regression, neural network regression and decision forest regression, among others. In addition, cars predict the behaviour of every object (vehicle or human) in their surroundings: how it will move, in which direction, at what speed, and what trajectory it will follow. Here, decision matrix algorithms are mainly used for decision making. The sensor inputs are fed into a high-performance, centralised AI computer, which combines the relevant portions of the data for the car to make driving decisions.
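The decision-matrix idea can be sketched as follows, under purely hypothetical scores and weights: each candidate manoeuvre is rated against weighted criteria, and the option with the highest weighted sum is chosen. Real systems derive these scores from the recognition and prediction algorithms above; the names and numbers here are illustrative only.

```python
# Toy decision matrix: score each manoeuvre against weighted criteria
# and pick the highest weighted sum.

weights = {"safety": 0.6, "comfort": 0.1, "progress": 0.3}

options = {
    "brake":       {"safety": 0.9, "comfort": 0.3, "progress": 0.1},
    "keep_lane":   {"safety": 0.4, "comfort": 0.9, "progress": 0.8},
    "change_lane": {"safety": 0.6, "comfort": 0.5, "progress": 0.7},
}

def best_action(options, weights):
    def score(criteria):
        return sum(weights[k] * criteria[k] for k in weights)
    return max(options, key=lambda name: score(options[name]))
```

With these invented weights, changing lane scores 0.62 against 0.60 for braking and 0.57 for keeping the lane, so it is selected.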
Whether a car needs to brake or take a left turn depends on the level of confidence these algorithms have in the recognition, classification and prediction of the next movement of objects. Planning is also important: the car plans the route to follow, or in other words generates its trajectory, through reinforcement learning. Finally, control engineers use the trajectory generated in the previous step to adjust the steering, acceleration and braking accordingly.
Deep learning is a subset of machine learning that has become increasingly powerful in recent years, with notable achievements such as DeepMind’s AlphaGo, and it has been successfully deployed in commercial products such as Mobileye’s path-planning system. Autonomous driving will benefit from two deep learning techniques in particular: supervised learning (pre-labelled data sets for finding correlations) and reinforcement learning (using rewards and punishments). These train the AI to process live sensor information together with historical data, making informed decisions based on past learning and on hypotheses, tested in simulation, about how drivers will react to potential actions.
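The rewards-and-punishments idea behind reinforcement learning can be shown with a deliberately tiny example, far simpler than anything used in a vehicle: an agent on positions 0 to 4 receives a reward only at position 4, and tabular Q-learning lets it discover that moving right is always the better action. The environment and all constants are invented for illustration.

```python
import random

# Toy Q-learning: states 0..4 on a line, actions -1 (left) and +1 (right),
# reward 1.0 only for reaching state 4. The agent learns from reward alone.
random.seed(0)
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2
q = {(s, a): 0.0 for s in range(5) for a in (-1, 1)}

for _ in range(500):                      # training episodes
    s = random.randrange(4)               # start at a random non-goal state
    while s != 4:
        if random.random() < EPS:
            a = random.choice((-1, 1))    # explore
        else:
            a = max((-1, 1), key=lambda act: q[(s, act)])   # exploit
        s2 = min(4, max(0, s + a))
        r = 1.0 if s2 == 4 else 0.0       # reward only at the goal
        # Standard Q-learning update rule
        q[(s, a)] += ALPHA * (r + GAMMA * max(q[(s2, -1)], q[(s2, 1)]) - q[(s, a)])
        s = s2

policy = [max((-1, 1), key=lambda act: q[(s, act)]) for s in range(4)]
```

After training, the learned policy prefers the +1 (right) action in every state, even though the agent was never told the rule, only rewarded for the outcome, which is the essence of learning by rewards and punishments.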
Natural-language processing (NLP) is an area of AI concerned with the interactions between computers and human (natural) languages such as English, Spanish and Greek, and it is surprisingly hard to get computers to understand and generate these unrestricted natural languages. Amazon’s Alexa, Apple’s Siri and Google’s Assistant, among others, have introduced natural language-based voice control (intelligent personal assistants) to consumers. OEMs are now actively incorporating intelligent personal assistants into vehicles, with enhanced sound-recognition processing and better natural language processing accuracy. Furthermore, many assistants allow the OEM or automotive supplier to customise the user experience and add supported features even after the launch of the vehicle (aftermarket). Other interesting applications use data on users’ habits to deliver personalised services (in the future, MaaS), largely related to adjusting entertainment preferences, directing drivers to services such as restaurants, or making in-car payments (robot-taxis).
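To hint at why this is hard, here is the simplest imaginable sketch of one NLP step in a voice assistant: mapping a free-text utterance to a structured intent by keyword matching. The intents and keyword lists are invented; production assistants use statistical language models precisely because rule lists like this break down on unrestricted language.

```python
# Naive intent recognition by keyword matching (illustrative only).
INTENT_KEYWORDS = {
    "navigation": {"navigate", "directions", "route"},
    "media": {"play", "music", "song"},
    "payment": {"pay", "payment", "order"},
}

def parse_intent(utterance):
    """Return the first intent whose keywords overlap the utterance."""
    words = set(utterance.lower().replace(",", " ").split())
    for intent, keywords in INTENT_KEYWORDS.items():
        if words & keywords:
            return intent
    return "unknown"
```

A phrase such as “take the scenic way home” matches nothing here, showing why real systems must generalise beyond fixed keywords.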
What is also important to understand is that there are no boundaries on the forms AI can or will take now or in the future in order to interact with us.
In my next article in the series, I will explore how to implement an autonomous driving technology roadmap: which stakeholders need to be involved, what skills they need to have, and the benefits and costs to both business and society.
Lina Konstantinopoulou is smart mobility advisor at CLEPA, the European Association of Automobile Suppliers