Spring of Artificial Intelligence Seminar Series | A taste of AI at FBK

A series of seminars dedicated to AI

Sala “Luigi Stringa”, FBK Povo

Via Sommarive 18, Povo

SEMINARS

  • 16/5 15:00 – PAOLO TRAVERSO, Fondazione Bruno Kessler
    Learning Discrete Planning Domains from Continuous Observations
  • 23/5 15:00 – CARLO STRAPPARAVA, Fondazione Bruno Kessler
    Natural Language Processing for Creative Language
  • 30/5 15:00 – MASSIMILIANO MANCINI, La Sapienza – Fondazione Bruno Kessler
    Learning to Adapt: a Deeper Look at Domain Adaptation for Visual Recognition
  • 4/6 11:00 – HECTOR GEFFNER, ICREA & Universitat Pompeu Fabra
    Between Model-free and Model-based AI: Learning Representations
  • 13/6 15:00 – OSWALD LANZ, Fondazione Bruno Kessler
    Learning to Recognize Actions in Videos
  • 19/6 15:00 – ALBERTO GRIGGIO, Fondazione Bruno Kessler
    Incremental Linearization for Satisfiability and Verification Modulo Nonlinear Arithmetic and Transcendental Functions
  • 20/6 14:30 – MICHELA MILANO, University of Bologna
    Empirical Model Learning: merging knowledge-based and data-driven decision models through machine learning

WHERE

Sala Stringa @Fondazione Bruno Kessler, Via Sommarive 18, Povo (Trento).
Except for the seminar by Paolo Traverso (16/5, 15:00), held in ‘Sala Consiglio’, and the seminar by Michela Milano (20/6, 14:30), held in ‘Room 211’, both at Fondazione Bruno Kessler.

SPEAKERS

Paolo Traverso
Learning Discrete Planning Domains from Continuous Observations

We propose a framework for learning discrete deterministic planning domains. In this framework, an agent learns the domain by observing the action effects through continuous features that describe the state of the environment after the execution of each action. In addition, the agent learns its perception function, i.e., a probabilistic mapping between state variables and sensor data, represented as a vector of continuous random variables called perception variables. We define an algorithm that updates the planning domain and the perception function by (i) introducing new states, either by extending the possible values of state variables or by weakening their constraints; (ii) adapting the perception function to fit the observed data; and (iii) adapting the transition function on the basis of the executed actions and the effects observed via the perception function. The framework is able to deal with exogenous events that happen in the environment.
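To give a flavor of steps (i) and (iii), here is a deliberately simplified Python sketch; this is our own illustration with a hypothetical toy domain, not the algorithm presented in the talk, and it omits the perception function entirely:

```python
# Toy illustration: extending the set of known states (step i) and adapting a
# deterministic transition function from observed action effects (step iii).
# All names and the door example are hypothetical.

class DomainLearner:
    def __init__(self):
        self.states = set()        # known discrete states
        self.transitions = {}      # (state, action) -> next state

    def observe(self, state, action, next_state):
        # (i) introduce new states when previously unseen values appear
        self.states.update({state, next_state})
        # (iii) adapt the (deterministic) transition function to the effect
        self.transitions[(state, action)] = next_state

learner = DomainLearner()
learner.observe("door_closed", "open", "door_open")
learner.observe("door_open", "close", "door_closed")
print(len(learner.states))                           # 2
print(learner.transitions[("door_closed", "open")])  # door_open
```

In the actual framework, the discrete states are not observed directly but inferred through the learned perception function over continuous sensor data.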

Carlo Strapparava
Natural Language Processing for Creative Language

Dealing with creative language, and in particular with effective, persuasive and even witty language, has often been considered outside the scope of computational linguistics and, in general, a challenge for AI systems. Nonetheless, it is possible to exploit current NLP techniques to address some foundational aspects of these linguistic phenomena. We briefly review some computational experiments with typical creative genres, such as creating catchy and witty news titles or producing lyric parodies.

Massimiliano Mancini
Learning to Adapt: a Deeper Look at Domain Adaptation for Visual Recognition

Deep networks have significantly improved the state of the art for several tasks in computer vision. Unfortunately, the impressive performance gains have come at the price of massive amounts of labelled data. As the cost of collecting and annotating data is often prohibitive, given a target task where few or no training samples are available, it would be desirable to build effective learners that can leverage information from labelled data of a different but related source domain. However, a major obstacle in adapting models to the target task is the shift in data distributions across domains. This problem, typically denoted as domain shift, has motivated research into Domain Adaptation (DA). Traditional DA algorithms assume the presence of a single source and a single target domain. However, in real-world applications, different situations may arise. For instance, in some cases the source domain could be a mixture of diverse datasets, while in other settings target samples may not be given at the training stage or may arise from temporal data streams. Alternatively, in some applications knowledge about different domains may only be provided in the form of side information (e.g. metadata) and should be effectively exploited to guide the adaptation process. In this talk, I will provide an overview of the problem of DA, focusing on visual recognition tasks, and describe our recent works on adaptation in dynamic, non-standard settings.

Hector Geffner
Between Model-free and Model-based AI: Learning Representations

During the 60s and 70s, AI researchers explored intuitions about intelligence by writing programs by hand. In more recent decades, research has increasingly shifted to the development of learners capable of inferring behaviour and functions from experience and data, and solvers capable of tackling well-defined but intractable models like SAT, classical planning, Bayesian networks, and POMDPs. The learning approach has achieved considerable success but results in black boxes that do not have the flexibility, transparency, and generality of their model-based counterparts. Model-based approaches, on the other hand, require suitable representations and scalable algorithms. Model-free learners and model-based solvers have parallels with Systems 1 and 2 in current theories of the human mind (D. Kahneman, Thinking, Fast and Slow): the first, a fast, opaque, and inflexible intuitive mind; the second, a slow, transparent, and flexible analytical mind. A key difference, however, is that Systems 1 and 2 are tightly integrated, while learners and solvers are not. This two-way integration between learners and solvers is indeed one of the key open challenges in AI, and it involves, among other things, learning meaningful representations and models from data. In the talk, I review these ideas and present our recent work aimed at this challenge.

Oswald Lanz
Learning to Recognize Actions in Videos

In 2015, the first artificial system was reported to beat human performance on ImageNet visual recognition, with a Top-5 error rate below 5%. This has not happened with video yet: for example, the best-ranked entry on the EPIC-Kitchens Action Recognition leaderboard achieves a 45.95% Top-5 recognition accuracy. This gap can be attributed to the increased difficulty of learning the more complex spatiotemporal patterns in videos from weak supervision with limited data. In this talk, I will focus on deep architectures for video representation learning in this context. I will present the key ideas behind LSTA [cvpr19] and HF-Nets [arXiv:1905.12462], which realize spatiotemporal aggregation of video frame sequences from complementary perspectives. LSTA extends LSTM with built-in attention and a novel output gating; it learns to smoothly track discriminative features for late aggregation of frame-level features, providing a +22% accuracy gain over an LSTM baseline on the GTEA-61 dataset. In contrast, HF-Nets perform deep hierarchical aggregation to develop spatiotemporal features early, thereby boosting the recognition accuracy of the popular TSN from 17% to 41% on the 20BN-something dataset while adding almost no overhead. We participated with variants of these models in the CVPR 2019 EPIC-Kitchens Challenge, and I will conclude the talk with an overview of our submission.

Alberto Griggio
Incremental Linearization for Satisfiability and Verification Modulo Nonlinear Arithmetic and Transcendental Functions

Satisfiability Modulo Theories (SMT) is the problem of deciding the satisfiability of a first-order formula with respect to some theory or combination of theories; Verification Modulo Theories (VMT) is the problem of analyzing reachability for transition systems represented in terms of SMT formulae. In this talk, we tackle the problems of SMT and VMT over the theory of nonlinear arithmetic over the reals (NRA) and of NRA augmented with transcendental (exponential and trigonometric) functions (NTA). We propose a new abstraction-refinement approach for SMT and VMT on NRA or NTA, called Incremental Linearization. The idea is to abstract nonlinear multiplication and transcendental functions as uninterpreted functions, in an abstract space limited to linear arithmetic on the rationals with uninterpreted functions. The uninterpreted functions are incrementally axiomatized by means of upper- and lower-bounding piecewise-linear constraints. In the case of transcendental functions, particular care is required to ensure the soundness of the abstraction. The method has been implemented in the MathSAT SMT solver and in the nuXmv model checker. An extensive experimental evaluation on a wide set of benchmarks from verification and mathematics demonstrates the generality and effectiveness of our approach.
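As a flavor of the refinement step, the Python sketch below illustrates one well-known family of piecewise-linear lemmas for multiplication (a tangent-plane lemma, exact at a chosen point), using the identity x·y − (b·x + a·y − a·b) = (x − a)(y − b). This is our own toy illustration of the general idea, not the implementation in MathSAT, and the spurious-model example is hypothetical:

```python
# One refinement step: the abstract solver treats m = mul(x, y) as an
# uninterpreted symbol; if its model violates m = x*y, a tangent-plane
# lemma exact at the model point (a, b) is added to rule it out.

def tangent_plane_lemma(a, b):
    """Piecewise-linear lemma for m = x*y, exact at the point (a, b)."""
    def lemma(x, y, m):
        # on the lines x = a and y = b, multiplication is linear and exact
        if x == a:
            return m == a * y
        if y == b:
            return m == b * x
        # tangent-plane inequalities, valid for real multiplication since
        # x*y - (b*x + a*y - a*b) = (x - a)*(y - b)
        if (x - a) * (y - b) > 0:
            return m >= b * x + a * y - a * b
        return m <= b * x + a * y - a * b
    return lemma

# spurious abstract model: x = 2, y = 3, m = 10 (but 2*3 = 6)
x0, y0, m0 = 2.0, 3.0, 10.0
lemma = tangent_plane_lemma(x0, y0)
print(lemma(x0, y0, m0))        # False: the new lemma excludes the model
print(lemma(x0, y0, x0 * y0))   # True: the true product satisfies it
```

In the actual procedure such lemmas are asserted to the linear solver, which then searches for a new candidate model, and the loop repeats until the result is conclusive.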

Michela Milano
Empirical Model Learning: merging knowledge-based and data-driven decision models through machine learning

Designing good models is one of the main challenges in building realistic and useful decision-support and optimization systems. Traditionally, combinatorial models are crafted in interaction with domain experts, with limited accuracy guarantees. Nowadays, we have access to data sets of unprecedented scale and accuracy about the systems we are deciding on. In this talk, we propose a methodology called Empirical Model Learning (EML) that uses machine learning to extract data-driven decision-model components and integrates them into an expert-designed decision model. We outline the main domains where EML could be useful, and we show how to ground it on a problem of thermal-aware workload allocation and scheduling on a multi-core platform. In addition, we discuss how to use EML with different optimization and machine learning techniques, and we provide some hints about recent work on EML for hierarchical optimization and on-line/off-line optimization.
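As a minimal illustration of the idea (our own sketch, not the formulation from the talk), a component learned from data appears as a constraint inside an otherwise classical combinatorial model. Here a hand-coded linear function stands in for a thermal model that would in practice be trained on measurements; all names and numbers are hypothetical:

```python
# EML-flavored toy: a data-driven component (the "learned" thermal model)
# constrains an expert-designed combinatorial search over job allocations.

def learned_temperature(load):
    """Stand-in for a model trained on data: predicted core temperature."""
    return 40.0 + 0.5 * load

def best_allocation(jobs, max_temp):
    # brute-force subset search: maximize allocated work subject to the
    # data-driven thermal constraint
    best = None
    for mask in range(2 ** len(jobs)):
        load = sum(j for i, j in enumerate(jobs) if mask >> i & 1)
        if learned_temperature(load) <= max_temp and (best is None or load > best):
            best = load
    return best

print(best_allocation([10, 20, 30], max_temp=65.0))  # 50
```

In a realistic EML setting, the learned component (e.g., a neural network or decision tree) is encoded directly into the constraints of a CP or MIP model rather than evaluated inside an enumeration loop.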


Contacts

Privacy Notice
Pursuant to Art. 13 of EU Regulation No. 2016/679 (General Data Protection Regulation) and as detailed in the Privacy Policy for FBK events' participants, we inform you that the event will be recorded and published on FBK's institutional channels. If you do not wish to be filmed or recorded, you can disable your webcam and/or mute your microphone during virtual events, or inform the FBK staff organizing the public event beforehand.