AI reasoning and fuzzy mathematics
Professor Zyed Zalila is a specialist in fuzzy mathematics and artificial intelligence at UTC, attached to the Technology, Society and Humanities (TSH) department. He is also the founding director of Intellitech, a UTC spin-off created in 1998 that focuses on R&D in fuzzy mathematics and AI reasoning.
What’s special about Intellitech? All its employees are UTC graduates. Zyed Zalila has a long history with fuzzy mathematics, since he proposed a specific theory during his PhD and began teaching it at UTC in 1993.
In concrete terms? “It’s a totally rigorous mathematical approach whose object is the study of fuzziness, i.e., the imprecise, the uncertain, the subjective. If I say there are about 50 people in front of the building, that is an imprecision. I can, of course, count them, but that takes time and energy; an estimate may be enough to make a quick decision. Uncertainty, on the other hand, is called “epistemic” because it’s due to ignorance; it should not be confused with stochastic uncertainty, which is linked to chance. Finally, if I say “It’s warm in this room”, I’m being subjective. In fact, the driver of a vehicle simply processes fuzzy information about the environment detected by their senses and uses fuzzy rules to adapt their driving actions”, he explains.
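As a toy illustration of that last point (the thresholds below are invented for the example, not taken from Prof. Zalila’s theory), a subjective statement such as “it’s warm” can be modelled by a membership function that maps a measured temperature to a degree of truth between 0 and 1 rather than to a plain true or false:

```python
def warm(temperature_c: float) -> float:
    """Degree to which a room counts as "warm" (toy membership function).

    Below 18 °C the statement "it's warm" is taken as fully false (0.0),
    above 26 °C as fully true (1.0); in between, the degree rises linearly.
    These thresholds are illustrative assumptions only.
    """
    if temperature_c <= 18.0:
        return 0.0
    if temperature_c >= 26.0:
        return 1.0
    return (temperature_c - 18.0) / (26.0 - 18.0)

for t in (16, 20, 23, 27):
    print(f"{t} °C -> warm to degree {warm(t):.2f}")
```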
While all classical mathematics is based on Aristotle’s axiom that there are only two states of truth, the true and the false, fuzzy mathematics is based on an infinite number of truth states in between. “These are therefore continuous logics. Human beings don’t reason in binary terms, in black or white; they understand all the shades of grey in between. So we need to adapt the mathematical tool to the way humans perceive their world. That’s why continuous, non-linear fuzzy mathematics is so effective at modelling human reasoning and perception,” he explains.
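One minimal sketch of what a continuous logic looks like in practice: truth values live in the interval [0, 1] and statements are combined with graded connectives. The min/max/complement operators used here are the classical Zadeh connectives, a common textbook choice; they are not claimed to be the specific operators of Prof. Zalila’s own theory.

```python
# Zadeh-style fuzzy connectives: truth values lie in [0, 1] instead of {0, 1},
# so statements can be partially true and are combined gradually.
def fuzzy_and(a: float, b: float) -> float:
    return min(a, b)

def fuzzy_or(a: float, b: float) -> float:
    return max(a, b)

def fuzzy_not(a: float) -> float:
    return 1.0 - a

warm_degree = 0.6      # "the room is warm" is true to degree 0.6
crowded_degree = 0.3   # "the room is crowded" is true to degree 0.3

print(fuzzy_and(warm_degree, crowded_degree))  # 0.3 -> "warm AND crowded"
print(fuzzy_or(warm_degree, crowded_degree))   # 0.6 -> "warm OR crowded"
print(fuzzy_not(warm_degree))                  # 0.4 -> "NOT warm"
```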
Since 1956, two schools of AI have been at loggerheads: connectionist AI and cognitive AI, i.e., AI based on knowledge. The former has set itself the goal of modelling perception. “When you look at a person, it’s the occipital lobe connected to the retinas by the optic nerve that enables you to reconstruct their face, but without reasoning. Based on the observation that the senses are an unconscious automatism, the followers of this school decided to build a neural network to mimic the human brain. In fact, the first applications concerned vision. I transform the image into pixels and, by combining them, I say that one represents a cat, another a baby; however, the resulting hyperconnected graph is opaque. As this technology is automated, it proves to be efficient, even if not very frugal since it requires a huge amount of data to converge. That’s why we call it robust but not intelligible AI”, says Prof. Zalila. The second school aims to model reasoning.
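A deliberately tiny sketch of the connectionist idea described above (random weights rather than trained ones, and invented class labels): pixel intensities are pushed through layers of weighted sums and non-linearities to produce a prediction, yet the weights themselves carry no human-readable rule, which is why the result can be robust once trained on enough data but is not intelligible.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image": a 4x4 grid of pixel intensities flattened into 16 inputs.
pixels = rng.random(16)

# Two layers of connection weights. In a real network these are learned
# from large labelled datasets; here they are random, which is enough to
# show that the model is just an opaque graph of numeric connections.
w1 = rng.normal(size=(16, 8))
w2 = rng.normal(size=(8, 2))

hidden = np.maximum(0.0, pixels @ w1)          # hidden layer with ReLU
scores = hidden @ w2                           # scores for the two classes
probs = np.exp(scores) / np.exp(scores).sum()  # softmax into probabilities

print(dict(zip(["cat", "baby"], probs.round(3))))
# The output is usable, but nothing in w1 or w2 reads as an explicit rule.
```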
“When we reason, we have to understand everything and we can only do so on the basis of knowledge, which is transparent. Here again, the proponents of cognitive AI invoke Aristotle and his syllogism: all men are mortal, Socrates is a man, I deduce that Socrates is mortal. They have thus developed expert systems where, in a given situation, one rule is triggered, then another and so on until the final decision is reached. This technology combines IF-THEN rules, previously produced by experts, with binary logic. Unlike connectionist AI, this technology is intelligible, because the rules are comprehensible, but not robust. Indeed, if the process modelled is complex, no human brain will be able to consciously and simultaneously analyse more than ten variables to produce the rules”, he explains.
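A minimal sketch of the expert-system idea described here: hand-written IF-THEN rules fired one after another under binary logic until a final decision is reached. The rules, facts and thresholds are invented for illustration and are not taken from any real expert system.

```python
# Toy rule base from a hypothetical expert: each rule is a binary condition
# on the current facts plus a conclusion to add when the condition holds.
rules = [
    (lambda f: f["fever"] and f["cough"],
     {"diagnosis": "flu suspected"}),
    (lambda f: f["diagnosis"] == "flu suspected" and f["age"] > 65,
     {"action": "refer to a doctor"}),
    (lambda f: f["diagnosis"] == "flu suspected" and f["age"] <= 65,
     {"action": "rest at home"}),
]

facts = {"fever": True, "cough": True, "age": 72,
         "diagnosis": None, "action": None}

# Forward chaining: keep firing rules whose condition is true until the
# facts stop changing, i.e. until the final decision is reached.
changed = True
while changed:
    changed = False
    for condition, conclusion in rules:
        if condition(facts) and any(facts[k] != v for k, v in conclusion.items()):
            facts.update(conclusion)
            changed = True

print(facts["diagnosis"], "->", facts["action"])  # flu suspected -> refer to a doctor
```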
Both approaches have their limitations. Limits which, according to Zyed Zalila, were overcome as early as 2003 thanks to the development, with Intellitech, of the general reasoning AI XTRACTIS. XTRACTIS automates the three modes of human reasoning and extends them to continuous logics: induction, to discover robust and intelligible predictive models from observations (this is the experimental scientific method); and deduction and abduction, which exploit the induced models in order to make predictions or to find optimal solutions to a multi-objective query. “To make an epigenetic diagnosis of cancers, taking into account the influence of environmental factors on the genome, I need to process no fewer than 27 000 interacting variables. No human brain is capable of discovering the predictive model required. XTRACTIS, on the other hand, can inductively reveal the predictors and rules of the hidden phenomenon, so as to make a robust and intelligible diagnosis. XTRACTIS becomes my exo-brain: it helps me solve complex problems that my brain knows how to pose but cannot solve. This operational and sovereignly powerful AI enables us to produce scientific knowledge, but it also meets the requirements of the AI Act: the intelligibility of the decision-making system becomes essential when the application is critical and high-risk”, concludes Zyed Zalila.
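XTRACTIS itself is proprietary, so the following is only a generic, toy-scale illustration, with invented data and a single variable, of the three reasoning modes named above: inducing a simple fuzzy rule from observations, deducing a prediction from it, and abducing an input that would satisfy a target output.

```python
# Generic illustration of induction / deduction / abduction on one toy
# variable. Nothing here reflects the actual XTRACTIS algorithms.

# Invented observations: (temperature in °C, observed degree of discomfort).
observations = [(15.0, 0.0), (18.0, 0.0), (22.0, 0.5), (26.0, 1.0), (30.0, 1.0)]

# Induction: estimate from the data a fuzzy ramp "temperature is high"
# between the largest fully-false point and the smallest fully-true point.
low = max(t for t, d in observations if d == 0.0)    # 18.0
high = min(t for t, d in observations if d == 1.0)   # 26.0

def high_temp(t: float) -> float:
    """Induced fuzzy predicate: degree to which t counts as 'high'."""
    return min(1.0, max(0.0, (t - low) / (high - low)))

# Deduction: apply the induced rule
# "IF temperature is high THEN discomfort is high" to a new case.
print("deduced discomfort at 24 °C:", high_temp(24.0))          # 0.75

# Abduction: search for an input that best explains a target output.
target = 0.5
candidate = min((t / 10 for t in range(150, 351)),
                key=lambda t: abs(high_temp(t) - target))
print("abduced temperature for discomfort 0.5:", round(candidate, 1))  # ~22.0
```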
MSD