
Fabio Cuzzolin Deciphered Epistemic Artificial Intelligence

Fabio Cuzzolin, Professor of Artificial Intelligence and Director of the Visual AI Lab at Oxford Brookes University, attended the Worldwide AI Webinar to talk about the new concept of Epistemic Artificial Intelligence, the core of the E-pi project carried out by three European universities.

Read on for the highlights of his keynote. Check out his whole keynote on our website and YouTube channel.


What is Epistemic AI?

Epistemic AI starts from a paradoxical principle stating that AI should first and foremost learn from the data it cannot see.

In traditional machine learning, we learn from the limited available evidence in the form of training data, which a model describes with some limited generalization power. Eventually, current models end up adapting nicely to data that is not too different from the training data, but not to all the data relevant to the problem one wants to solve.

In Epistemic AI, we start from a stance of complete ignorance. Then, the limited available training data is used only to temper that state of epistemic ignorance. Epistemic AI aims to be a new learning paradigm which seeks sets of models compatible with the training data, so epistemic learning can be seen as a form of set-wise learning.
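To make the set-wise idea concrete, here is a minimal sketch (our own illustration, not the E-pi algorithm), assuming NumPy and scikit-learn: a bootstrap ensemble stands in for a set of models compatible with the data, and the prediction is an interval rather than a single number.

```python
# A minimal, hand-rolled illustration (not the E-pi method): a bootstrap
# ensemble approximates a set of models compatible with the training data,
# and the set-wise prediction is an interval of class probabilities.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)

# Each bootstrap resample yields one model that the evidence cannot rule out.
models = []
for _ in range(50):
    idx = rng.integers(0, len(X), size=len(X))
    models.append(LogisticRegression().fit(X[idx], y[idx]))

# Set-wise output: the interval spanned by the model set, not a single number.
x_new = np.array([[0.1, -0.3]])
probs = np.array([m.predict_proba(x_new)[0, 1] for m in models])
print(f"P(y=1) lies in [{probs.min():.2f}, {probs.max():.2f}]")
```

The wider the interval, the less the available evidence pins down a single model; this is exactly the ignorance Epistemic AI wants to avoid forgetting.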


Uncertainty theory

Uncertainty theory is an array of formalisms devised to encode what Fabio called “second-order” or epistemic uncertainty about a phenomenon. He explained that this uncertainty concerns the very probabilistic process that generates the data itself; in other words, uncertainty theory deals with model uncertainty.

Modelling this uncertainty requires measures more complex than standard probability measures, which often amount to convex sets of probability distributions. He also listed some examples of uncertainty measures that correspond to convex sets of probabilities, such as lower/upper probabilities, probability boxes, random sets, or imprecise probabilities.
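As a toy illustration (our example, not from the talk), a convex set of probability distributions can be represented by its extreme points, and the lower/upper probability of an event obtained by minimizing and maximizing over them:

```python
# Toy credal set over outcomes {a, b, c}, given by its extreme points.
import numpy as np

extreme_points = np.array([
    [0.6, 0.3, 0.1],   # each row is a probability distribution
    [0.4, 0.4, 0.2],
    [0.5, 0.2, 0.3],
])
event = np.array([True, False, True])  # the event {a, c}

# Lower/upper probability: minimise/maximise P(event) over the convex set
# (attained at the extreme points, since P(event) is linear in p).
p_event = extreme_points[:, event].sum(axis=1)
print(f"P(event) in [{p_event.min():.2f}, {p_event.max():.2f}]")  # [0.60, 0.80]
```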

Epistemic AI takes its name from the fact that the goal of not forgetting one's ignorance is achieved by modelling epistemic uncertainty using the mathematics of uncertainty theory. - Fabio Cuzzolin


The E-pi Research Program

Since ML is an application of optimization theory, rewriting its foundations means formulating a theory of optimization under epistemic uncertainty, according to Professor Cuzzolin.

He shared that in this research program, they also revisited the way uncertainty is treated in AI and injected four advances into the main subfields of ML:

  • Unsupervised learning
  • Statistical learning theory
  • Supervised learning
  • Reinforcement learning

The research program started from an uncertain world: missing data, unpredictable human behavior, scarce data, limited or biased training sets, and so on. These all induce two paths of study: mathematical work on theories of epistemic uncertainty, and more focused work on the formulation of optimization frameworks under epistemic uncertainty.

Fabio additionally listed the seven main objectives of the E-pi research program:

  1. to create a framework for optimization under epistemic uncertainty, outputting sets of models and leveraging techniques from second-order uncertainty theory;
  2. to reformulate unsupervised learning in the epistemic optimization framework;
  3. to sow the seeds of a new epistemic learning theory, generalizing the classical statistical learning theory of Vapnik and others, as a robust foundation for supervised learning;
  4. to bring about a robust epistemic theory of supervised learning;
  5. to develop an epistemic reinforcement learning framework by formulating in new terms the problem of sequential decision-making under epistemic uncertainty;
  6. to validate the Epistemic AI paradigm on autonomous driving;
  7. to foster an ecosystem of academic, research, industry, and societal partners throughout Europe.

Applications

Supervised learning

The first application project that the research team started working on concerns deep networks, in particular convolutional networks. In this study, they worked to devise epistemic deep networks whose outputs are scores for sets of outcomes.

The process of training such a network

To train such a model, there are two tasks to work on. The first one is to provide a generalization of classical cross-entropy or focal losses, for instance.

The second one is to study the extension of dropout techniques for producing an ensemble of networks. Here they would be careful in selecting the classes of models they want to work on. Should they start with CNNs first?

The answer is yes. The team was working to devise epistemic losses which, when optimized, allow them to train a random-set CNN.

Besides, as efficiency is key, they could not use one output neuron for each possible set of classes. Therefore, the team did some efficient sampling of class sets beforehand, through unsupervised data discovery.
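What might such an epistemic loss over sets of classes look like? Here is a hypothetical sketch assuming PyTorch; the family `FOCAL_SETS` and the function `random_set_loss` are our own stand-ins, not the team's actual design. When all the sets are singletons, this loss reduces to standard cross-entropy, which is the kind of generalization mentioned above.

```python
# Hypothetical sketch (our own, assuming PyTorch): a classifier head that
# outputs a belief mass over a pre-sampled family of class *sets* rather than
# singleton classes. The loss rewards a prediction whenever the true class is
# *contained* in a supported set (its plausibility).
import torch
import torch.nn as nn

NUM_CLASSES = 10
# Pre-sampled family of focal sets (all singletons plus a few pairs), standing
# in for the efficient set-sampling step mentioned in the talk.
FOCAL_SETS = [{c} for c in range(NUM_CLASSES)] + [{0, 1}, {2, 3}, {8, 9}]

head = nn.Linear(128, len(FOCAL_SETS))  # one mass per focal set

def random_set_loss(features, labels):
    masses = torch.softmax(head(features), dim=-1)  # masses sum to 1
    # Plausibility of the true class: total mass of the sets containing it.
    contains = torch.tensor(
        [[label.item() in s for s in FOCAL_SETS] for label in labels],
        dtype=torch.float32,
    )
    plausibility = (masses * contains).sum(dim=-1)
    # With singleton sets only, this is exactly the cross-entropy loss.
    return -torch.log(plausibility + 1e-8).mean()

loss = random_set_loss(torch.randn(4, 128), torch.tensor([0, 3, 8, 5]))
```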

Unsupervised learning

The second project they are working on is generative adversarial networks (GANs). They want to generalize sampling in the generation process using the imprecise Monte Carlo technique with importance sampling. They also expect to replace conventional expectations with expectations of random sets in the formulation of adversarial loss. Overall, the project aims at developing an epistemic adversarial approach to unsupervised model adaptation.

The project came up against two challenges: efficient sampling in imprecise probability is sparsely studied, and there is no accepted definition of the expectation of a random set.
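For the second challenge, one candidate definition (an assumption on our part, precisely because no definition is universally accepted) derives lower and upper expectations from the mass assignment of a random set, in the Choquet-integral form used for belief functions:

```python
# One candidate definition (our assumption): lower/upper expectations of f
# under a random set with masses m(A), where each focal set A contributes
# its worst/best value of f weighted by its mass (Choquet-integral form).
def lower_upper_expectation(masses, focal_sets, f):
    lower = sum(m * min(f(x) for x in A) for m, A in zip(masses, focal_sets))
    upper = sum(m * max(f(x) for x in A) for m, A in zip(masses, focal_sets))
    return lower, upper

# Toy example on outcomes {0, 1, 2} with payoff f(x) = x.
masses = [0.5, 0.3, 0.2]
focal_sets = [{0}, {1, 2}, {0, 1, 2}]
print(lower_upper_expectation(masses, focal_sets, lambda x: x))  # ~(0.3, 1.0)
```

Instead of one number, the expectation becomes an interval whose width reflects how coarse the focal sets are.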

Reinforcement learning

In this project, the team wants to formulate a theory of sequential decision-making in an epistemic setting and to generalize a paramount classical result of reinforcement learning: Bellman's equation.

However, they also faced some challenges in training the model, such as the sparse literature on graphical models with imprecise probability, the variety of alternative uncertainty measures, and the issue of efficiency.
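To give a flavour of what generalizing Bellman's equation involves, here is an illustrative sketch (not the E-pi formulation) of a pessimistic one-step backup when the transition probabilities are only known to lie in intervals:

```python
# Illustrative sketch (not the E-pi formulation): a pessimistic Bellman
# backup, Q(s,a) = R(s,a) + gamma * min_{p in credal set} E_p[V], where the
# next-state distribution is only known to lie in the box [p_low, p_high].
import numpy as np

def worst_case_expectation(V, p_low, p_high):
    """Minimise p @ V over p_low <= p <= p_high with sum(p) = 1 (greedy)."""
    p = p_low.copy()
    remaining = 1.0 - p.sum()
    for j in np.argsort(V):          # pour spare mass onto low-value states
        add = min(p_high[j] - p[j], remaining)
        p[j] += add
        remaining -= add
    return p @ V

V = np.array([0.0, 1.0, 2.0])        # current value estimates per next state
p_low = np.array([0.1, 0.2, 0.3])    # interval transition model
p_high = np.array([0.5, 0.6, 0.7])
q = 1.0 + 0.9 * worst_case_expectation(V, p_low, p_high)  # R = 1, gamma = 0.9
```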

The result

Fabio was proud to share the result of the team's past two years of effort: the ROad event Awareness Dataset (ROAD) for Autonomous Driving.

According to him, ROAD is designed to test an autonomous vehicle's ability to detect road events, defined as triplets composed of an active agent, the action(s) it performs, and the corresponding scene locations. ROAD comprises videos originally from the Oxford RobotCar Dataset, annotated with bounding boxes showing the location in the image plane of each road event.
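In code, such a road event might be represented along these lines (field names are our own illustration, not the official ROAD annotation schema):

```python
# Our own illustrative encoding (not the official ROAD schema) of a road
# event as an (agent, actions, locations) triplet. Requires Python 3.9+.
from dataclasses import dataclass

@dataclass
class RoadEvent:
    agent: str                       # e.g. "pedestrian", "cyclist", "car"
    actions: list[str]               # e.g. ["moving", "crossing the road"]
    boxes: list[tuple[float, float, float, float]]  # per-frame image-plane boxes

event = RoadEvent("pedestrian", ["crossing the road"], [(0.2, 0.4, 0.3, 0.9)])
```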

They benchmarked various detection tasks, proposing as a baseline a new incremental algorithm for online road event awareness termed 3D-RetinaNet. They also reported the performance on the ROAD tasks of SlowFast and YOLOv5 detectors, as well as that of the winners of the ICCV 2021 ROAD challenge, which highlights the challenges faced by situation awareness in autonomous driving.

ROAD is designed to allow scholars to investigate exciting tasks such as complex (road) activity detection, future event anticipation, and continual learning.

Watch his whole speech on our website and YouTube channel.
