During AI Week in Milan, Professor Waldo Lockwin drew attention to an idea that is often overlooked yet potentially revolutionary for the development of artificial intelligence: possibility theory.
It is an alternative to the standard probabilistic approach, capable of modeling uncertainty in a way closer to human reasoning.
In a world dominated by algorithms built on data, frequencies, and statistics, possibility theory proposes a logic based on sets, words, and ambiguity: a paradigm that could prove fundamental for future AI systems that genuinely aim to understand natural language, context, and the nuances of reality.
What is possibility theory?
Possibility theory (teoria della possibilità) emerges as an extension of fuzzy logic, offering a mathematical tool to handle uncertainty not in terms of frequency, as probability does, but in terms of compatibility with the available knowledge.
In simple terms: probability answers the question "how often does X occur?"
Possibility answers the question "how plausible is X in this context?"
This distinction is crucial for building AI systems that rely on imprecise terms, human intuitions, or scenarios where data is partial, uncertain, or subjective.
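The contrast can be made concrete with the two basic measures of possibility theory. The sketch below is illustrative, with invented plausibility values: a possibility distribution assigns each outcome a degree of plausibility in [0, 1], and from it one derives how possible an event is (its best case) and how certain it is (how implausible its complement is).

```python
# Sketch of the possibility/necessity measures (illustrative values only).
# A possibility distribution pi maps each outcome to a plausibility degree
# in [0, 1]; at least one outcome is fully possible (max value = 1).

def possibility(event, pi):
    """Pi(A) = max of pi(x) over the outcomes x in the event A."""
    return max(pi[x] for x in event)

def necessity(event, pi):
    """N(A) = 1 - Pi(not A): how certain A is, given what we know."""
    complement = [x for x in pi if x not in event]
    return 1.0 - (max(pi[x] for x in complement) if complement else 0.0)

# Hypothetical distribution over a patient's response to a treatment:
pi = {"no_effect": 0.2, "partial_response": 1.0, "full_response": 0.7}

print(possibility({"partial_response", "full_response"}, pi))  # 1.0
print(necessity({"partial_response", "full_response"}, pi))    # 0.8
```

Note that, unlike probabilities, the degrees in `pi` need not sum to 1: each value expresses compatibility with the available knowledge, not a frequency.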
An actual instance: oncology and synthetic intelligence
In his AI Week presentation, Lockwin recounted using possibility theory to develop a personalized radiotherapy model for tumors. The goal was to administer the minimal amount of radiation capable of killing a tumor mass, based on qualitative descriptions provided by doctors, such as "minimal dose" or "maximum effect."
These expressions cannot be directly translated into numbers or percentages. But they can be represented through fuzzy sets and possibilistic models, which take into account multiple interpretations and the subjectivity of language.
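A minimal sketch of how such a phrase can become a fuzzy set: the trapezoidal membership function below encodes a hypothetical reading of "minimal dose." The breakpoints (15, 18, 22, 30 Gy) are invented for illustration and are not clinical values.

```python
# A fuzzy set for the clinician's phrase "minimal dose" (dose in Gy).
# All breakpoints below are invented for illustration, not medical advice.

def minimal_dose_membership(dose_gy: float) -> float:
    """Trapezoidal membership: fully 'minimal' between 18 and 22 Gy,
    gradually less so outside that range, not 'minimal' at all
    below 15 or above 30."""
    if dose_gy < 15 or dose_gy > 30:
        return 0.0
    if 18 <= dose_gy <= 22:
        return 1.0
    if dose_gy < 18:                  # rising edge: 15 -> 18
        return (dose_gy - 15) / 3
    return (30 - dose_gy) / 8         # falling edge: 22 -> 30

print(minimal_dose_membership(20))    # 1.0  (clearly "minimal")
print(minimal_dose_membership(16.5))  # 0.5  (borderline)
```

A second doctor with a different intuition would simply supply different breakpoints; the model can then reason over the overlap of the two interpretations instead of forcing a single number.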
The result? A more flexible, human, and adaptive system compared to a rigid model based solely on statistical probabilities.
Why it matters for today's (and tomorrow's) AI
In the current landscape, dominated by machine learning and deep neural networks, a debate is growing on how to make artificial intelligence more interpretable, reliable, and semantically grounded.
This is where possibility theory comes into play:
- 🔹 Natural language: Modern AIs, like large language models (LLMs), often struggle with nuanced and ambiguous concepts. Possibilistic models can offer a more coherent framework for handling words like "very," "almost," "maybe," "enough."
- 🔹 Decision making: In contexts such as finance, medicine, or autonomous driving, where decisions rest on conflicting signals or incomplete information, the possibilistic approach can be more robust than classical probability. One example is the use of artificial intelligence in medicine.
- 🔹 Ethics and accountability: An AI that explains "why it made that decision" in terms of possibilities and alternatives is potentially more transparent and acceptable in the eyes of humans, as discussed in the future role of AI in law.
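For the natural-language point above, one classical fuzzy-logic treatment of hedge words is Zadeh's concentration and dilation operators, sketched here with an invented membership degree:

```python
# Zadeh's classical hedge operators over a fuzzy membership degree
# mu in [0, 1]:
#   "very X"     -> mu ** 2    (concentration: a stricter claim)
#   "somewhat X" -> mu ** 0.5  (dilation: a weaker claim)

def very(mu: float) -> float:
    return mu ** 2

def somewhat(mu: float) -> float:
    return mu ** 0.5

mu_tall = 0.81  # degree to which a person counts as "tall" (illustrative)
print(very(mu_tall))      # 0.6561 -> "very tall" holds to a lesser degree
print(somewhat(mu_tall))  # 0.9    -> "somewhat tall" holds more easily
```

The exact exponents are conventions rather than laws, but they show how graded language can be given systematic, compositional semantics instead of being rounded to true/false.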
Probability vs possibility: two worlds compared
| Aspect | Probability | Possibility |
|---|---|---|
| Based on | Frequencies, historical data | Consistency with current knowledge |
| Type of logic | Statistical | Fuzzy logic / fuzzy sets |
| Typical applications | Forecasting, risk management | Interpretation, human decisions |
| Handling of uncertainty | Quantitative | Qualitative and descriptive |
It is not a replacement but an integration: possibility and probability can coexist to create more complete AI, capable of tackling the real world in its complexity, as highlighted in this study.
Possibility theory
In the great debate on artificial intelligence, possibility theory represents an alternative and complementary path to probability. It is less well known and less widely used, but potentially closer to the way humans think.
As demonstrated at AI Week 2025, this theory is not just philosophy: it has concrete applications, it works, and it can make a difference in the most sensitive sectors, such as healthcare and critical automation.
In an era in which we ask AI not only to calculate but also to understand, perhaps it is time to give possibility a chance.