Vision

“In the future, workers in all the quantitative sciences will be obliged, as a matter of practical necessity, to use probability theory in the manner expounded here.”

E.T. Jaynes, Probability Theory: The Logic of Science

The 3rd wave of AI

Expert Systems

The first wave of AI, starting in the 1960s, was carried by expert systems, which made it possible to automatically apply reasoning rules to domain-knowledge databases. This approach relies heavily on domain experts handcrafting bodies of knowledge and rules. It benefits from mathematically grounded reasoning, but offers very limited learning or generalization capabilities.

Deep learning

Around the 2000s, the second wave of AI began to expand, leading to the current success and achievements of Deep Neural Networks. The paradigm shift consisted in no longer hand-programming expert rules, but instead leveraging machine learning algorithms to train increasingly larger models, learning as much as possible from ever-growing datasets. This approach relies heavily on huge volumes of relevant data being available; it has significant learning and generalization capabilities, but offers very limited reasoning and explainability. It is also subject to implicit biases present in the training datasets, which can lead to unexpected, inappropriate or even unethical behaviors once the trained system is put in production.

Probabilistic AI

Today, a third wave of AI is emerging, which will leverage models able to learn from few examples, to integrate expert knowledge, to generalize from what was learned, to reason, and to explain the decisions they make. We believe that probabilistic programming has an important role to play in the development of such models, and that the dedicated hardware accelerators developed at HawAI.tech will be a key enabler of this third wave of AI.

Addressing today's AI challenges

In order for AI to improve our lives and help us in our work, it must be efficient and, above all, it must be trustworthy.
This is why wider adoption requires addressing key challenges.

Explainability

The challenge

Automated decision-making may have significant business and social impact, for example in mobility, health and finance. Deep-learning systems are becoming more complex, making it daunting to analyze and understand how they reach conclusions. This poses a fundamental challenge for using machine learning systems in high-stakes settings.

Probabilistic AI solution

Bayesian theory inherently enables white-box AI models without additional layers. The model variables and their dependency relationships have an actual meaning for the algorithm designer in their field of expertise, allowing them to fully understand the mechanics of the algorithm, especially when the results differ from their expectations.
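
As a minimal, illustrative sketch (made-up probabilities, not one of HawAI.tech's actual models), the short Python program below encodes a two-variable Bayesian model of a machine fault and a vibration alarm. Every quantity involved in the inference, prior, likelihood and evidence, is a named value that the domain expert can read and question, which is what makes the model white-box.

# Minimal sketch (illustrative numbers only): a two-variable Bayesian model
# in plain Python, where every probability has a meaning the domain expert
# can inspect and challenge.

# Prior belief that a machine part is faulty (e.g., from maintenance logs).
P_fault = {True: 0.02, False: 0.98}

# Probability of a vibration alarm given the fault state (hypothetical figures).
P_alarm_given_fault = {True: {True: 0.90, False: 0.10},
                       False: {True: 0.05, False: 0.95}}

def posterior_fault(alarm_observed: bool) -> float:
    """P(fault | alarm) obtained by a direct application of Bayes' rule."""
    joint = {f: P_fault[f] * P_alarm_given_fault[f][alarm_observed]
             for f in (True, False)}
    evidence = sum(joint.values())   # P(alarm): how plausible the observation is
    return joint[True] / evidence    # P(fault | alarm)

# Every intermediate quantity (prior, likelihood, evidence) is available for
# inspection, including when the result differs from the expert's expectation.
print(f"P(fault | alarm)    = {posterior_fault(True):.3f}")
print(f"P(fault | no alarm) = {posterior_fault(False):.3f}")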

Uncertainty handling

The challenge

Closed problems, such as a game of chess, are easy for machines as they require only pure logic. Open problems, such as perceiving a real chess board, require handling incompleteness and uncertainty in the data, for example obstructions and light sources. Even with huge amounts of data or simulations, the robustness of trained algorithms is hardly quantifiable.

Probabilistic AI solution

In Bayesian models, uncertainty, incompleteness and noise are taken into account starting from the perception layers of the AI system, and they natively propagate through all the following layers. For instance, an imperfect sensor or actuator may be modeled with adequate probability distributions, acquired either by proper characterization, by physical modelling, or both.
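
As an illustration of such a sensor model (a hypothetical sketch with assumed noise figures, not a HawAI.tech component), the Python snippet below fuses a few noisy distance readings on a discretized grid; the posterior keeps both the estimate and its uncertainty explicit so that the following layers can use them.

# Hypothetical sketch: an imperfect distance sensor modeled by a Gaussian
# noise distribution (sigma assumed to come from a characterization campaign).
import numpy as np

SENSOR_SIGMA = 0.15                          # metres, assumed characterization result

def update(belief, grid, measurement):
    """One Bayesian fusion step over a discretized grid of true distances."""
    likelihood = np.exp(-0.5 * ((grid - measurement) / SENSOR_SIGMA) ** 2)
    belief = belief * likelihood             # combine prior belief and new evidence
    return belief / belief.sum()             # renormalize to a probability distribution

grid = np.linspace(0.0, 5.0, 501)            # candidate true distances (m)
belief = np.ones_like(grid) / grid.size      # flat prior: nothing known yet

for reading in (2.10, 1.95, 2.05):           # three noisy sensor readings
    belief = update(belief, grid, reading)

mean = float(np.sum(grid * belief))
std = float(np.sqrt(np.sum((grid - mean) ** 2 * belief)))
print(f"distance ≈ {mean:.2f} m ± {std:.2f} m")   # the uncertainty stays explicit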

Frugality

The challenge

Reliance on big data tends to increase the capability gap between large and small entities, incentivizes the collection of large amounts of personal data, and hinders progress in areas with dirty data or few data points. Even with alternative techniques such as reinforcement learning, training a single model may have a significant electricity cost and an emissions level comparable to that of a car.

Probabilistic AI solution

By incorporating “prior” information explicitly in the model before refining it further with the available data, Bayesian methods are better suited to situations with limited data. Reducing the data needed to train the system helps address larger use cases, reduces training time, and yields a less energy-hungry, more sustainable AI.
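
To give a flavor of this (a hypothetical sketch with illustrative numbers, using a standard Beta-Binomial conjugate update rather than a HawAI.tech model), the snippet below combines an explicit prior on a defect rate with only eight inspection results and already yields a usable estimate.

# Hypothetical sketch: estimating a defect rate from very few inspections by
# combining an explicit prior with the data (Beta-Binomial conjugate update).

# Prior belief, e.g. from the expert: defects are rare, around 2% (Beta(2, 98)).
alpha, beta = 2.0, 98.0

observations = [0, 0, 1, 0, 0, 0, 0, 0]   # 8 inspections, 1 defect found

# Conjugate update: each defect increments alpha, each clean part increments beta.
defects = sum(observations)
alpha += defects
beta += len(observations) - defects

posterior_mean = alpha / (alpha + beta)
print(f"estimated defect rate: {posterior_mean:.3f} after only {len(observations)} samples")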