AI in “smart” vehicles
Véronique Cherfaoui is a tenured university professor at UTC-Heudiasyc, a joint CNRS/UTC research unit (UMR). She also heads the “Interacting Robotic Systems” team, one of the laboratory’s three teams, and is Director of SIVALab, a joint laboratory between UTC, CNRS and Renault.
“The research project we’re developing in this joint laboratory concerns the localization and perception integrity of intelligent vehicles. Researchers, engineers and manufacturers prefer this characterization to ‘autonomous’, as full autonomy is not on the immediate horizon. However, by aiming for autonomy, we can develop intermediate systems such as driving aids while ensuring vehicle safety. Intelligent vehicle research is part of the field of robotics, and robotics needs AI to increase its decision-making autonomy,” she says.
The autonomous or intelligent vehicle is equipped with highly complex systems. “These are systems that must enable it to navigate at high speed and operate in extremely diverse, complex environments, where the different dynamics of all road users must be taken into account: trucks, other vehicles, pedestrians, cyclists, scooters and so on,” she stresses.
So, with full autonomy not on the horizon for the time being, efforts are focused on developing driving-partner vehicles designed to increase people’s mobility and safety.
The role of AI in the complex systems implemented in intelligent vehicles? “AI is involved at various levels in autonomous vehicles. In robotics, we talk about the ‘perception-decision-action’ cycle. In the first stage, we try to perceive our environment and understand it. Based on this, the second step is to decide on the manoeuvre to be carried out, then finally to implement this decision through actions on the engine, the wheels, the steering wheel and so on. AI can be applied at all levels. In recent years, learning techniques based on deep neural networks (deep learning) have made enormous progress in environmental perception. These systems can detect vehicles, pedestrians, road markings, signs and many other elements of a road scene with very high success rates.
However, they are ‘black boxes’, and it is difficult to predict and explain the cases where they fail. There is AI at the manoeuvre-decision level too, with techniques that can be based on trajectory planning, prediction and risk calculation. On the other hand, there is currently little AI in the action phase, since command-and-control algorithms based on vehicle dynamic models are already efficient,” explains Véronique Cherfaoui. Other avenues are nevertheless being explored. “Research is being carried out on what is known as ‘end-to-end’ driving. This involves developing deep neural networks that take sensor data (cameras, lidars, etc.) as input and whose output is a direct action on the steering wheel, gas pedal or brake. However, this approach has its limits, because we can’t imagine every possible situation and we can’t explain what led to a particular decision. Car manufacturers are not yet ready to embark on this type of system, because their liability is at stake and operational reliability can’t be guaranteed,” she believes.
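As a rough illustration of the ‘perception-decision-action’ cycle described above, here is a minimal sketch in Python. All of the names, thresholds and detection rules are hypothetical and purely illustrative; they are not taken from Heudiasyc, SIVALab or Renault software.

```python
# Hypothetical sketch of the perception-decision-action cycle.
# None of these components come from actual vehicle software.
from dataclasses import dataclass

@dataclass
class Detection:
    kind: str          # e.g. "pedestrian", "vehicle", "lane_marking"
    distance_m: float  # distance ahead of the ego vehicle, in metres
    confidence: float  # detector confidence in [0, 1]

def perceive(camera_frame, lidar_scan) -> list[Detection]:
    """Stage 1 (perception): turn raw sensor data into detected objects.
    In a real system this is where deep neural networks are used;
    here we simply return placeholder detections."""
    return [
        Detection("vehicle", distance_m=42.0, confidence=0.93),
        Detection("pedestrian", distance_m=12.0, confidence=0.81),
    ]

def decide(detections: list[Detection]) -> str:
    """Stage 2 (decision): choose a manoeuvre from the perceived scene.
    Real systems rely on trajectory planning, prediction and risk
    estimation; this toy rule just brakes for nearby pedestrians."""
    for d in detections:
        if d.kind == "pedestrian" and d.distance_m < 20.0:
            return "brake"
    return "keep_lane"

def act(manoeuvre: str) -> None:
    """Stage 3 (action): translate the manoeuvre into commands for the
    engine, brakes and steering, typically with classical control
    algorithms based on vehicle dynamic models rather than AI."""
    print(f"Sending actuator commands for manoeuvre: {manoeuvre}")

# One iteration of the cycle (run continuously in a real vehicle).
act(decide(perceive(camera_frame=None, lidar_scan=None)))
```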
In any case, this is not the path chosen by Véronique Cherfaoui, who insists that she is first and foremost a roboticist. “I don’t develop generative AI tools or neural networks, for example. My role is to adapt AI tools to my questions as a roboticist. In the decision-making phase, for example, I can use deep neural networks, but also reasoning that takes uncertainties into account,” she explains. From perception to action, uncertainties are many and varied, yet neural networks have a hard time modelling them. “The project we’re running with Renault is based on taking uncertainties into account, from the sensor right through to decision-making. If our information is too uncertain, we communicate this to the system, which can then hand control back to the driver because it is unable to decide,” she explains.
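The idea of propagating uncertainty from the sensors to the decision, and handing control back when the system is too unsure, can be sketched as follows. The confidence scores, the fusion rule and the threshold are invented for illustration only; they do not describe the method used in the Renault/SIVALab project.

```python
# Hypothetical sketch: uncertainty-aware decision with driver handover.
# Scores, fusion rule and threshold are illustrative only.

CONFIDENCE_THRESHOLD = 0.7  # below this, the system declines to decide

def fuse_confidence(detection_confidences: list[float]) -> float:
    """Combine per-detection confidences into a scene-level score.
    Taking the minimum means the decision is only as trustworthy
    as its least certain input."""
    return min(detection_confidences, default=0.0)

def decide_or_handover(detection_confidences: list[float]) -> str:
    """Decide autonomously only if the fused confidence is acceptable;
    otherwise signal the driver to take over."""
    if fuse_confidence(detection_confidences) < CONFIDENCE_THRESHOLD:
        return "request_driver_takeover"
    return "autonomous_manoeuvre"

# A poorly detected object (confidence 0.4) drags the scene score below
# the threshold, so control is handed back to the driver.
print(decide_or_handover([0.95, 0.4, 0.88]))  # request_driver_takeover
print(decide_or_handover([0.95, 0.9, 0.88]))  # autonomous_manoeuvre
```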
These issues are at the heart of the CAP TWINNING collaborative project, part of the PostGenAI@Paris AI cluster supported by Sorbonne University. “The idea is for the vehicle to be a driving partner, with driving shared between the human and the vehicle. We assume that the vehicle knows how to manage a given situation, but when it senses it is in difficulty, it must tell the driver, who can take over at any time. In this way, the driver is always in the loop. In this project, the idea is to implement interfaces and AI that enable the driver to understand what the car is doing and, conversely, the car to understand what the driver is doing. When the system makes a decision, it has to be able to explain it to the partner driver. The aim is to give the vehicle maximum autonomy, an autonomy designed to facilitate the mobility of certain people and improve road safety,” concludes Véronique Cherfaoui.
MSD