AI reasoning and fuzzy mathematics

Professor Zyed Zalila is a specialist in fuzzy mathematics and artificial intelligence at UTC, attached to the Technology, Society and Humanities (TSH) department. He is also the founding director of Intellitech, a UTC spin-off created in 1998 that focuses on R&D in fuzzy mathematics and AI reasoning.

What’s special about Intellitech? All its employees are UTC graduates. Zyed Zalila has a long history with fuzzy mathematics: he proposed a specific theory during his PhD and began teaching it at UTC in 1993.

In concrete terms? “It’s a totally rigorous mathematical approach whose object is the study of fuzziness, i.e., the imprecise, the uncertain, the subjective. If I say there are about 50 people in front of the building, that is an imprecision. I can, of course, count them, but that takes time and energy, whereas an estimate can be enough to make a quick decision. Uncertainty, on the other hand, is called ‘epistemic’ when it is due to ignorance; it should not be confused with stochastic uncertainty, which is linked to chance. Finally, if I say ‘it’s warm in this room’, I’m being subjective. A driver, for instance, is simply processing fuzzy information about the environment detected by their senses and using fuzzy rules to adapt their driving actions,” he explains.
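The “it’s warm” example can be made concrete with a membership function, the basic object of fuzzy mathematics: instead of a true/false answer, it returns a degree of truth between 0 and 1. A minimal sketch, in which the breakpoints (18 °C and 24 °C) are illustrative assumptions, not values from the article:

```python
def warm_membership(temp_c: float) -> float:
    """Degree to which a temperature counts as 'warm' (0 = not at all, 1 = fully).

    The breakpoints 18 and 24 degrees C are hypothetical: subjectivity means
    each person would place them differently.
    """
    if temp_c <= 18.0:
        return 0.0
    if temp_c >= 24.0:
        return 1.0
    return (temp_c - 18.0) / (24.0 - 18.0)  # linear ramp between the breakpoints

print(warm_membership(18.0))  # 0.0 -- not warm at all
print(warm_membership(21.0))  # 0.5 -- partially warm
print(warm_membership(24.0))  # 1.0 -- fully warm
```

The same shape models the imprecision of “about 50 people”: a degree near 1 for counts close to 50, falling off gradually on either side.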

While all classical mathematics rests on Aristotle’s axiom that there are only two states of truth, the true and the false, fuzzy mathematics is based on an infinite number of states in between. “They are therefore continuous logics. Human beings don’t reason in binary terms, black or white; they also understand all the shades of grey. So we need to adapt the mathematical tool to the way humans perceive their world. That’s why continuous, non-linear fuzzy mathematics is so effective at modelling human reasoning and perception,” he explains.
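The “shades of grey” can be computed with Zadeh’s classical fuzzy connectives, where AND is the minimum, OR the maximum, and NOT the complement; binary logic is recovered as the special case where degrees are restricted to 0 and 1. A short sketch with illustrative truth degrees:

```python
# Zadeh's classical connectives over truth degrees in [0, 1].
def f_and(a: float, b: float) -> float:
    return min(a, b)

def f_or(a: float, b: float) -> float:
    return max(a, b)

def f_not(a: float) -> float:
    return 1.0 - a

a = 0.7  # "the room is warm" holds to degree 0.7 (illustrative value)
b = 0.4  # "the room is bright" holds to degree 0.4 (illustrative value)

print(f_and(a, b))           # 0.4
print(f_or(a, b))            # 0.7
print(round(f_not(a), 2))    # 0.3
# Note: f_or(a, f_not(a)) is 0.7, not 1 -- Aristotle's excluded middle
# no longer holds once truth is continuous.
```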

Since 1956, two schools of AI have been at loggerheads: connectionist AI and cognitive AI, i.e., AI based on knowledge. The former set itself the goal of modelling perception. “When you look at a person, it is the occipital lobe, connected to the retinas by the optic nerve, that enables you to reconstruct their face, but without reasoning. Based on the observation that the senses are an unconscious automatism, the followers of this school decided to build a neural network to mimic the human brain. In fact, the first applications concerned vision: I transform the image into pixels and, by combining them, I say that one image represents a cat, another a baby; however, the resulting hyperconnected graph is opaque. As this technology is automated, it proves efficient, even if not very frugal, since it requires a huge amount of data to converge. That’s why we call it a robust but not intelligible AI,” says Prof. Zalila. The second school aims to model reasoning.
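The building block of such a network is a single artificial neuron: a weighted sum of inputs squashed through an activation function. A toy sketch (the weights here are illustrative, not trained):

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs passed through a sigmoid,
    producing an output in (0, 1)."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Three pixel intensities combined with hypothetical, untrained weights.
activation = neuron([0.2, 0.9, 0.5], [1.5, -0.8, 0.3], 0.1)
print(activation)
```

Stacking thousands of such units and learning the weights from data yields the hyperconnected graph the article describes: effective at perception tasks, but with no human-readable rules inside.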

“When we reason, we have to understand everything, and we can only do so on the basis of knowledge, which is transparent. Here again, the proponents of cognitive AI invoke Aristotle and his syllogism: all men are mortal, Socrates is a man, therefore I deduce that Socrates is mortal. They thus developed expert systems in which, in a given situation, one rule is triggered, then another, and so on until the final decision is reached. This technology combines IF-THEN rules, previously produced by experts, with binary logic. Unlike connectionist AI, this technology is intelligible, because the rules are comprehensible, but it is not robust: if the process being modelled is complex, no human brain can consciously and simultaneously analyse more than about ten variables to produce the rules,” he explains.
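The rule-chaining described here can be sketched as a minimal forward-chaining engine over binary IF-THEN rules, with the article’s Socrates syllogism as the rule base:

```python
def forward_chain(facts, rules):
    """Fire every rule whose premises all hold, adding its conclusion,
    until no new fact can be derived (classical expert-system deduction)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# Rule base: IF "Socrates is a man" THEN "Socrates is mortal"
# (encoding "all men are mortal" for this one individual).
rules = [({"Socrates is a man"}, "Socrates is mortal")]
facts = {"Socrates is a man"}

print(forward_chain(facts, rules))
# The derived set includes "Socrates is mortal": every step is a readable
# rule (intelligible), but an expert had to write each rule by hand.
```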

Both approaches have their limitations. Limits which, according to Zyed Zalila, were overcome as early as 2003, thanks to the development with Intellitech of the general reasoning AI XTRACTIS. XTRACTIS automates the three modes of human reasoning and extends them to continuous logics: induction discovers robust and intelligible predictive models from observations, which is the experimental scientific method; deduction and abduction exploit the induced models to make predictions or to find optimal solutions to a multi-objective query. “To make an epigenetic diagnosis of cancers, taking into account the influence of environmental factors on the genome, I need to process no fewer than 27,000 interacting variables. No human brain is capable of discovering the predictive model required. XTRACTIS, on the other hand, can inductively reveal the predictors and rules of the hidden phenomenon, so as to make a robust and intelligible diagnosis. XTRACTIS becomes my ex-brain: it helps me solve complex problems that my brain knows how to pose but cannot solve. This operational and sovereignly powerful AI enables us to produce scientific knowledge, but it also meets the requirements of the AI Act: the intelligibility of the decision-making system becomes essential when the application is critical and high-risk,” concludes Zyed Zalila.
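XTRACTIS itself is proprietary, but why a fuzzy rule base stays intelligible can be illustrated with a toy deduction step: readable IF-THEN rules whose premises hold to a degree, combined by a weighted average (a common defuzzification scheme). Everything below — the two rules, the “biomarker” variable, the risk values — is a hypothetical illustration, not the article’s actual model:

```python
def low(x: float) -> float:
    """Membership of x in 'low': ramps from 1 down to 0 over [0, 1]."""
    return max(0.0, min(1.0, 1.0 - x))

def high(x: float) -> float:
    """Membership of x in 'high': ramps from 0 up to 1 over [0, 1]."""
    return max(0.0, min(1.0, x))

def predict_risk(biomarker: float) -> float:
    # Rule 1: IF biomarker is low  THEN risk = 0.1  (illustrative rule)
    # Rule 2: IF biomarker is high THEN risk = 0.9  (illustrative rule)
    w1, w2 = low(biomarker), high(biomarker)
    return (w1 * 0.1 + w2 * 0.9) / (w1 + w2)  # weighted average of conclusions

print(predict_risk(0.0))  # 0.1 -- only rule 1 fires
print(predict_risk(0.5))  # 0.5 -- both rules fire equally
print(predict_risk(1.0))  # 0.9 -- only rule 2 fires
```

Each rule can be read and checked by a domain expert, yet the output varies continuously with the input: the combination the article presents as robust *and* intelligible.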

MSD

Le magazine

November 2024 - No. 64

Artificial intelligence: an indispensable tool
