Guaranteed AI and approximate AI

Sébastien Destercke is a CNRS research scientist and head of the Knowledge, Uncertainty, Data (CID) team at Heudiasyc, a joint UTC/CNRS research laboratory. He also holds the Industrial Chair in Artificial Intelligence (AI) launched at the beginning of 2022.

In addition to UTC, the chair involves the Sorbonne Center for Artificial Intelligence (SCAI), since renamed the Sorbonne Cluster for Artificial Intelligence, the CNRS, and Sopra Steria, founding sponsor of the UTC Foundation for Innovation. At UTC, the chair mobilizes two laboratories working on AI itself, Heudiasyc's CID team and LMAC (Applied Mathematics), whose work lies partly at the heart of AI, as well as three others specialized in fields where AI applications now play a growing role: UTC-Roberval, UTC-BMBI (Biomechanics and Bioengineering Laboratory) and UTC-Avenues. Heudiasyc is also involved in AI applications.

There isn't just one AI but many AI models, each tailored to specific applications. "For all these models, there are two main underlying trends: guaranteed reasoning and approximate reasoning. Statistical methods, the most widely used at present, belong to approximate reasoning: when these models make predictions, they can only do so with statistical guarantees. Conversely, the calculation methods built into processors, a number of software verifiers, and even the assignment algorithm behind Parcoursup, the French university admission platform, belong to guaranteed reasoning. In short, we need to be sure that a given input will always correspond to the same output," he explains.

Take predictive AI systems. "The question is whether their predictive capacities match what is observed in the real world, for example in the field of medical diagnosis. In that case, it is essential to attach a fair estimate of certainty to the prediction, so that the information provided is as useful as possible," he adds.
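To see concretely what a "fair estimate of certainty" means, a simple reliability check compares a model's claimed confidence with its observed accuracy. The Python sketch below is purely illustrative (it is not from Heudiasyc, and it uses synthetic data built to be perfectly calibrated); the point is the binning-and-comparison procedure itself.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical model outputs: a confidence for each prediction, and
# whether that prediction turned out to be correct (1) or wrong (0).
# The synthetic data is constructed to be perfectly calibrated.
conf = rng.uniform(0.5, 1.0, 10_000)
correct = (rng.uniform(size=10_000) < conf).astype(int)

# Bin predictions by claimed confidence, then compare the average
# claimed confidence in each bin with the accuracy observed there.
bins = np.linspace(0.5, 1.0, 6)
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (conf >= lo) & (conf < hi)
    print(f"confidence {lo:.2f}-{hi:.2f}: "
          f"claimed {conf[mask].mean():.2f}, observed {correct[mask].mean():.2f}")
```

For a well-calibrated system the two columns stay close in every bin; a large gap between them is exactly the kind of unfair certainty estimate the quote warns against.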

So what about generative AI, the latest AI technology? "This technology needs huge quantities of data, which it structures in neural networks. Generative AI builds plausible outputs from the data it has seen. If I take ChatGPT, for example, and give it a text to illustrate, it will create a new image from images and texts close to the one to be illustrated. This means that these methods have no requirement of veracity. If I ask it for a piece of information often found in its data, such as the year of Charlemagne's birth, it will answer around the year 800. On the other hand, if I ask it to translate a text from a rare language of which it has seen few examples, it will generate a translation, but I doubt it will be accurate. A further disadvantage of this AI is that its operation is not intelligible; it is often referred to as a 'black box'. Coupled with the fact that it does not seek to predict reality, this makes it dangerous to use in certain fields, such as medicine or transport, unless it is paired with other systems that make up for its shortcomings. You have to think of it as an assistant, not as a decision-maker," he stresses.

Finally, whether generative or predictive, another current problem with AI is that it can generate or reproduce damaging biases. This is what happened, for example, with Amazon's HR (human resources) recruitment tool, which tended to favour male applicants: because mostly men had applied in the past, men were over-represented in the database.

And the tools developed at Heudiasyc? "Personally, my work focuses on models that can approach reality: models that are robust and reliable, and that provide sufficient statistical guarantees. These two criteria are part of the broader issue of trusted AI. The first is decisive in many industrial applications, particularly for AI systems: their robustness is measured by their ability to adapt to changing deployment conditions and environments without losing quality. The second is to quantify the uncertainty associated with the system's predictions; the aim is for the system to quantify its own confidence. If the system announces a prediction with 90% confidence, I'd like it to be correct 90% of the time. If we take the autonomous, i.e. driverless, car as an example, it is obvious that trusted AI is essential," asserts Sébastien Destercke.
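One standard technique that delivers the kind of statistical guarantee described here is conformal prediction, a widely used method in the uncertainty-quantification literature. The article does not name the team's specific tools, so the following is an illustrative sketch rather than Heudiasyc code: split conformal prediction turns any point predictor into prediction intervals with a guaranteed coverage rate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = 2x + noise.
x = rng.uniform(0, 10, 500)
y = 2 * x + rng.normal(0, 1, 500)

# Split: fit a point predictor on one half, calibrate on the other.
x_fit, y_fit, x_cal, y_cal = x[:250], y[:250], x[250:], y[250:]
a, b = np.polyfit(x_fit, y_fit, 1)   # any point predictor would do

def predict(z):
    return a * z + b

# Nonconformity scores: absolute errors on the calibration half.
scores = np.abs(y_cal - predict(x_cal))

# Finite-sample-corrected quantile for a 90% target coverage.
alpha = 0.10
n = len(scores)
q = np.quantile(scores, min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0))

# Interval guaranteed to contain the true y at least 90% of the time
# (on average, assuming future data is exchangeable with the past).
x_new = 5.0
print(f"90% interval at x=5: [{predict(x_new) - q:.2f}, {predict(x_new) + q:.2f}]")
```

The guarantee is of exactly the kind quoted above: among many such intervals, at least 90% contain the true value, matching the announced confidence. If deployment conditions drift, which is the robustness concern, the exchangeability assumption breaks and the guarantee must be re-established.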

What are the challenges for the next generation of AI? "One of them will undoubtedly be to combine guaranteed AI with the extrapolation and interpretation capabilities of generative AI, in particular large language models (LLMs)," he concludes.

MSD

Le magazine

November 2024 - No. 64

Artificial intelligence: an essential tool
