
MaLGa Seminar Series

We are involved in the organization of the MaLGa Seminar Series, in particular the seminars on Statistical Learning and Optimization. The MaLGa seminars are divided into four main threads: Statistical Learning and Optimization, Analysis and Learning, Machine Learning and Vision, and Machine Learning for Data Science.

An up-to-date list of ongoing seminars is available on the MaLGa webpage.

Seminars will be streamed on our YouTube channel.

Beyond Action Recognition: Detailed Video Modeling

Speaker: Gül Varol
Speaker Affiliation: École des Ponts ParisTech

Date: 2021-12-03
Time:
Location:

Abstract
In this talk, I will present some of our recent works on a variety of tasks in computer vision, focusing in particular on detailed video modeling. Action recognition has been a standard problem in the research community working on videos. However, there is more to learn in videos than a closed set of pre-defined semantic action categories. This talk will cover three different directions towards a more detailed understanding of dynamic visual content. (i) First, we will look at our end-to-end text-to-video retrieval approach, which learns to map videos and textual descriptions into a joint space, and see the advantages of joint image and video training using transformers. (ii) Then, we will explore the more fine-grained problem of localising text in sign language videos, using weakly-aligned subtitles in sign language interpretation data, again in conjunction with transformers. (iii) Finally, we will go beyond semantics and look at 3D reconstruction from video data for recovering detailed hand-object interactions; here we will discuss the limitations of learning-based methods due to the lack of data, and opt for an optimization-based approach.

References: Bain et al., “Frozen in Time: A Joint Video and Image Encoder for End-to-End Retrieval”, ICCV 2021; Varol et al., “Read and Attend: Temporal Localisation in Sign Language Videos”, CVPR 2021; Bull et al., “Aligning Subtitles in Sign Language Videos”, ICCV 2021; Hasson et al., “Towards unconstrained joint hand-object reconstruction from RGB videos”, 3DV 2021.

Bio
Gül Varol is a research faculty member in the IMAGINE team at École des Ponts ParisTech. Previously, she was a postdoctoral researcher at the University of Oxford (VGG). She obtained her PhD from the WILLOW team of Inria Paris and École Normale Supérieure (ENS). Her thesis received the ELLIS PhD Award. During her PhD, she spent time at MPI, Adobe, and Google. Her research focuses on human understanding in videos, specifically action recognition, body shape and motion analysis, and sign languages.

Sparsity and convergence analysis of generalized conditional gradient methods

Speaker: Marcello Carioni
Speaker Affiliation: University of Cambridge

Date: 2021-11-29
Time: 3:00 pm
Location: Room 706, via Dodecaneso 35

Abstract
In this talk we introduce suitable generalized conditional gradient algorithms for solving variational inverse problems whose objective is the sum of a smooth fidelity term and a convex, coercive regularizer. We exploit the sparse structure of the variational problem by designing the iterates as suitable linear combinations of extremal points of the ball of the regularizer, and we prove sublinear convergence of the algorithm. Then, under further structural assumptions, we show how to improve the rate of convergence by lifting the problem to the space of measures supported on the above-mentioned extremal points and using known convergence results for generalized conditional gradient methods for total variation minimization. Finally, we apply our algorithm to solve dynamic inverse problems regularized with optimal transport energies.
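
For orientation, the classical conditional gradient (Frank–Wolfe) iteration that these methods build on can be sketched as follows; the generalized variants discussed in the talk replace the constraint set by the ball of the regularizer and select its extremal points. This is a schematic for reference, not the speaker's exact algorithm.

```latex
% Classical conditional gradient step for min_{u in C} f(u), f smooth, C convex.
% The generalized methods use C = ball of the regularizer and pick extremal points.
\begin{align*}
  v_k &\in \arg\min_{v \in C} \, \langle \nabla f(u_k),\, v \rangle
      && \text{(linear minimization oracle: an extremal point of } C\text{)}\\
  u_{k+1} &= (1-\gamma_k)\, u_k + \gamma_k\, v_k,
      \qquad \gamma_k = \tfrac{2}{k+2}
      && \text{(sublinear } O(1/k)\text{ convergence for smooth } f\text{)}
\end{align*}
```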

Bio
I am a Newton International Fellow at the University of Cambridge, working in the CIA (Cambridge Image Analysis) group. Previously, I was a postdoctoral researcher at Karl-Franzens University in Graz and at the University of Würzburg. I obtained my PhD in 2017 from the Max Planck Institute for Mathematics in the Sciences in Leipzig.

Three common reinforcement learning tricks: when and why do they work

Speaker: Niao He
Speaker Affiliation: ETH Zurich

Date: 2021-11-24
Time: 2:30 pm
Location: Room 509, via Dodecaneso 35, Genova, IT

Abstract
Reinforcement learning has recently achieved remarkable breakthroughs, outperforming humans in many challenging tasks. Behind the scenes lies the integration of various algorithmic techniques: neural function approximation, double learning, entropy regularization, etc. This talk will unveil some of the mysteries behind these techniques from a theoretical perspective, by understanding the asymptotic and finite-time behavior of the algorithm dynamics.
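
As background for one of the techniques mentioned, the entropy-regularized objective is commonly written as follows; this is a standard generic formulation, not necessarily the exact variant analyzed in the talk.

```latex
% Entropy-regularized (soft) RL objective with temperature \tau > 0:
\[
  \max_{\pi}\; \mathbb{E}_{\pi}\!\left[\,\sum_{t \ge 0} \gamma^{t}
     \Bigl( r(s_t, a_t) + \tau\, \mathcal{H}\bigl(\pi(\cdot \mid s_t)\bigr) \Bigr)\right],
  \qquad
  \mathcal{H}(p) = -\sum_{a} p(a)\log p(a).
\]
```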

Bio
Niao He is currently an Assistant Professor in the Department of Computer Science at ETH Zurich, where she leads the Optimization and Decision Intelligence (ODI) Group. She is also an ELLIS Scholar and a core faculty member of the ETH AI Center, the ETH-Max Planck Center for Learning Systems, and ETH Foundations of Data Science. Previously, she was an assistant professor at the University of Illinois at Urbana-Champaign from 2016 to 2020. Before that, she received her Ph.D. in Operations Research from the Georgia Institute of Technology in 2015. Her research interests are in optimization, machine learning, and reinforcement learning.

COVID-19: modelli e indicatori per provvedimenti di sanità pubblica

Speaker: Stefano Merler
Speaker Affiliation: Fondazione Bruno Kessler

Date: 2021-11-15
Time: 3:00 pm
Location: Remote

Abstract
The seminar will give an informal overview of the activities carried out by the Health Emergencies Center of Fondazione Bruno Kessler in Trento as part of the response to the COVID-19 epidemic, ranging from real-time data analysis to the construction of synthetic indicators, and from statistics to mathematical modelling.

Bio
Stefano Merler is a mathematical epidemiologist and the director of the Health Emergencies Center of Fondazione Bruno Kessler in Trento. He studies the transmission patterns of infectious diseases, applying statistical and mathematical modelling techniques to understand the natural history of pathogens and the clinical course of infections, and to assess the potential impact of different mitigation or containment strategies. He is the author of about 140 scientific papers.

Deep Learning in Computational Imaging

Speaker: Felix Lucka
Speaker Affiliation: Centrum Wiskunde & Informatica (Amsterdam)

Date: 2021-11-08
Time: 3:00 pm
Location: Remote

Abstract
Due to its remarkable success on a variety of complex image processing problems, Deep Learning is nowadays also more commonly used in the domain of computational image reconstruction and inverse problems. In this talk, we will highlight some of the challenges and potential solutions of integrating Deep Learning into computational imaging workflows found in scientific, clinical or industrial applications, using imaging modalities such as X-ray CT, Magnetic Resonance Imaging, Photoacoustic Tomography and Ultrasound.

Bio
After obtaining a first degree in mathematics and physics in 2011, Felix Lucka did a PhD in applied mathematics at WWU Münster (Germany), which included a research visit at UCLA, followed by a postdoc at UCL. Since 2017, he has been a tenure-track researcher in the Computational Imaging group at the Centrum Wiskunde & Informatica (CWI, Amsterdam). His main interests are mathematical challenges arising from biomedical imaging applications that have a classical inverse problem described by partial differential equations at their core.

Reinforcement learning for animal behavior

Speaker: Massimo Vergassola
Speaker Affiliation: CNRS

Date: 2021-10-25
Time: 3:00 pm
Location: Room 706, via Dodecaneso 35, Genova, IT and remote

Bio
Massimo Vergassola is a theoretical physicist working at the interface of statistical mechanics and biology. After his studies in Rome and Nice, he was a postdoc at Princeton University and a chargé de recherche at CNRS in Nice. He worked on fundamental and computational aspects of turbulence and turbulent transport. He then started his venture in biological physics, forming and directing the unit of biological physics at Institut Pasteur from 2004 to 2013. From 2013 to 2019 he was a professor at UCSD, where he contributed seminal work on the use of reinforcement learning for physical systems. He is now Directeur de Recherche at the École Normale Supérieure de Paris and works on a broad array of topics, from turbulent navigation to the fundamental limits of sensing in biological systems, chemotaxis, synchronisation in the early stages of embryonic development, and the physical and computational principles of olfactory sensing. He is the recipient of a CNRS Bronze Medal, the Thérèse Lebrasseur biomedical prize from the Fondation de France, the EADS Grand Prix from the Académie des Sciences, and a targeted grant from the Simons Foundation; he is a fellow of the APS and was elected to the Chair Line of the Division of Biological Physics of the APS. He is the director of the ENS-PSL Initiative on Quantitative Biology.

Non-Stationary Delayed Bandits with Intermediate Observations

Speaker: Claire Vernade
Speaker Affiliation: DeepMind (UK)

Date: 2021-10-18
Time: 3:00 pm
Location: Room 706 DIMA - VII floor, via Dodecaneso 35, Genova, IT and remote

Abstract
We consider the problem of learning with delayed bandit feedback, meaning by trial and error, in changing environments. This problem is ubiquitous in many online recommender systems that aim at showing content, which is ultimately evaluated by long-term metrics like a purchase, or a watching time. Mitigating the effects of delays in stationary environments is well-understood, but the problem becomes much more challenging when the environment changes. In fact, if the timescale of the change is comparable to the delay, it is impossible to learn about the environment, since the available observations are already obsolete. However, the arising issues can be addressed if relevant intermediate signals are available without delay, such that given those signals, the long-term behavior of the system is stationary. To model this situation, we introduce the problem of stochastic, non-stationary and delayed bandits with intermediate observations. We develop a computationally efficient algorithm based on UCRL, and prove sublinear regret guarantees for its performance.

Bio
Claire is a Research Scientist at DeepMind in London, UK. She received her PhD from Telecom ParisTech in October 2017, under the guidance of Prof. Olivier Cappé. From January 2018 to October 2018, she worked part-time as an Applied Scientist at Amazon in Berlin, while doing a post-doc with Alexandra Carpentier at the University of Magdeburg in Germany. Her research is on sequential decision making. It mostly spans bandit problems, but Claire's interests also extend to Reinforcement Learning and Learning Theory. While keeping in mind concrete problems -- often inspired by interactions with product teams -- she focuses on theoretical approaches, aiming for provably optimal algorithms. She recently received an Outstanding Paper Award at ICLR for joint work on a game-theoretic approach to PCA.

Three Mathematical Tales of Machine Learning

Speaker: Massimo Fornasier
Speaker Affiliation: Technical University of Munich

Date: 2021-10-04
Time: 3:30 pm
Location: Room 706 DIMA - VII floor, via Dodecaneso 35, Genova, IT and remote

Abstract
I will tell three mathematical tales of machine learning related to my most recent work: 1. identification of deep neural networks, 2. global optimization over manifolds, 3. mean-field optimal control of NeurODEs. Tale 1 is about the proof that, despite the NP-hardness of the problem, generic neural networks can be identified up to natural symmetries from a finite number of input-output samples scaling with the complexity of the network. A numerical validation of the result is presented. A crucial subproblem of the identification pipeline is the solution of a nonconvex optimization over the sphere. Tale 2 is in fact about solving global optimization problems over spheres by means of a multi-agent dynamics, which combines a consensus mechanism with random exploration. The proof of convergence to global solutions is based on showing that the large-particle limit of the SDE system is distributed as the solution of a deterministic PDE, whose large-time asymptotics converge to a global minimizer. I present numerical results on robust linear regression for computing eigenfaces. In Tale 3 I introduce NeurODEs, neural networks that can be approximated by ODEs. I show that their training can be formulated as a mean-field optimal control problem, and I present the derivation of a mean-field Pontryagin maximum principle characterizing optimal parameters/controls, together with its well-posedness. Again, a numerical experiment on a simple 2D classification problem validates the theoretical results.
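
For Tale 2, multi-agent dynamics combining consensus and random exploration are commonly written in the following consensus-based optimization form; this is a standard schematic assumed for illustration, not necessarily the speaker's exact system.

```latex
% Schematic consensus-based optimization (CBO) dynamics for minimizing E on R^d
% (an assumed standard form of the "consensus + random exploration" dynamics):
% N agents X^1_t, ..., X^N_t.
\begin{align*}
  v_\alpha(t) &= \frac{\sum_{i=1}^{N} X^i_t\, e^{-\alpha E(X^i_t)}}
                     {\sum_{i=1}^{N} e^{-\alpha E(X^i_t)}}
  && \text{(weighted consensus point)} \\
  \mathrm{d}X^i_t &= -\lambda\, \bigl(X^i_t - v_\alpha(t)\bigr)\,\mathrm{d}t
      \;+\; \sigma\, \bigl\lvert X^i_t - v_\alpha(t) \bigr\rvert\, \mathrm{d}W^i_t
  && \text{(consensus drift + random exploration)}
\end{align*}
```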

Bio
Massimo Fornasier received his doctoral degree in computational mathematics in 2003 from the University of Padua, Italy. After working from 2003 to 2006 as a postdoctoral research fellow at the University of Vienna and the University of Rome La Sapienza, he joined the Johann Radon Institute for Computational and Applied Mathematics (RICAM) of the Austrian Academy of Sciences, where he served as a senior research scientist until March 2011. He was an associate researcher from 2006 to 2007 for the Program in Applied and Computational Mathematics of Princeton University, USA. In 2011 Fornasier was appointed Chair of Applied Numerical Analysis at TUM. In 2021 he was awarded an ERC Starting Grant. The research of Massimo Fornasier embraces a broad spectrum of problems in mathematical modeling, analysis and numerical analysis. He is particularly interested in the concept of compression as it appears in different forms in data analysis, image and signal processing, and in the adaptive numerical solution of partial differential equations or high-dimensional optimization problems.

Computer Vision to Digitalise the Retail: Pipelines, Infrastructure & Open Problems

Speaker: Federico Roncallo
Speaker Affiliation: Trax Retail

Date: 2021-06-22
Time: 3:00 pm
Location: Online streaming on YouTube

Abstract
In the seminar we will discuss how we can stack together different state-of-the-art models to produce store insights. We will get a flavour of what it means to maintain an MLOps pipeline, and I will introduce some of the research challenges we are trying to solve. Spoiler alert: the presentation will not go deep into specific algorithms; it is intended to showcase a different computer vision use case.

Bio
After obtaining my master's degree in Computer Science at the University of Genoa in March 2018, I worked for a year as a Research Engineer in a joint collaboration between the Université de Paris and the aerospace company Safran. The main focus of my research was anomaly detection in the time series domain. Once my contract ended, I decided to leave academia and the time series world to join Qopius, a start-up operating in the retail world, as a Computer Vision Researcher. In February 2020 Qopius was acquired by Trax, which is considered a unicorn company in retail digitalisation, making me an employee of the latter. At the beginning of 2021 I took the lead of Trax's AI Paris team.

Computational Cellular Engineering

Speaker: Simone Bianco
Speaker Affiliation: IBM

Date: 2021-06-01
Time: 3:00 pm
Location: Online streaming on YouTube

Abstract
Cellular engineering is a new scientific discipline focused on designing the structure of cells so that they perform specific functions. The use of computational methods for synthetic biology and cellular engineering is accelerating research and development and is becoming essential for the progress of these disciplines. In this talk I will present recent advances in computational cellular engineering, with some examples and a vision for the role of artificial intelligence in the future of cellular engineering.

Bio
Simone Bianco is research manager of the department of Functional Genomics and Cellular Engineering at the IBM Almaden Research Center. He got his BS and MS in Physics at the University of Pisa, Italy, and his PhD in Physics from the University of North Texas. His main research interests are in theoretical evolutionary biology, especially the evolution of RNA viruses, and cellular engineering. Dr Bianco is IBM PI and site director of the Center for Cellular Construction, an NSF Science and Technology Center in partnership with UC San Francisco, UC Berkeley, Stanford University, SF State University and the SF Exploratorium. The center aims at transforming cell biology into an engineering discipline. He is a TED speaker with over 1M views, leader of one of IBM's 2018 5-in-5 projects, the five technologies that, according to IBM, will change the world in the next five years, and an honorary visiting lecturer for the Society for Industrial and Applied Mathematics, in recognition of his standing in the field of dynamical systems and his commitment to education.

Proximal and Invertible Neural Networks

Speaker: Gabriele Steidl
Speaker Affiliation: TU Berlin

Date: 2021-05-25
Time: 3:00 pm
Location: Online streaming on YouTube

Abstract
Proximal neural networks (PNNs) have the advantage of a controlled Lipschitz constant, which makes them interesting for many applications. We give an introduction to proximal neural networks, in particular their convolutional variant. Further, we are interested in invertible neural networks (INNs; see also normalizing flows), which can be used similarly to GANs to sample from a high-dimensional unknown distribution by using a simpler one. We demonstrate an application of INNs in grazing incidence X-ray fluorescence, a non-destructive technique for analyzing the geometry and compositional parameters of nanostructures appearing e.g. in computer chips. We propose to reconstruct the posterior parameter distribution, given a noisy measurement generated by the forward model, by an appropriately learned invertible neural network. This network resembles the transport map from a reference distribution to the posterior. We demonstrate by numerical comparisons that our method can compete with established Markov Chain Monte Carlo approaches, while being more efficient and flexible in applications.
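
As background for the invertible-network part, the change-of-variables identity that INNs and normalizing flows rely on can be stated as follows; this is standard material given here for reference, not taken from the talk itself.

```latex
% Change of variables for an invertible map T with tractable Jacobian:
% if z ~ p_Z (a simple reference distribution) and x is modelled via T,
% then the model density of x is
\[
  p_X(x) \;=\; p_Z\bigl(T(x)\bigr)\,\bigl\lvert \det J_T(x) \bigr\rvert ,
\]
% so a network that is invertible with a computable Jacobian determinant can
% transport the reference distribution onto a complex target, e.g. a posterior.
```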

Bio
Gabriele Steidl received her PhD and Habilitation in Mathematics from the University of Rostock (Germany) in 1988 and 1991, respectively. From 1992 to 1993 she worked as a consultant at the Verband Deutscher Rentenversicherungsträger in Frankfurt am Main. From 1993 to 1996, she held a position as Assistant Professor at the Department of Mathematics at TU Darmstadt. From 1996 to 2010, she was Professor at the Department of Mathematics and Computer Science at the University of Mannheim. From 2011 to 2020, she was Professor at the Department of Mathematics at TU Kaiserslautern and a consultant of the Fraunhofer Institute for Industrial Mathematics. Since 2020, she has been Professor at the Department of Mathematics at TU Berlin. She worked as a postdoc at the University of Debrecen (Hungary), the Banach Center Warsaw and the University of Zürich, and was a Visiting Professor at ENS Paris/Cachan, the Université Paris-Est Marne-la-Vallée and the Sorbonne. Since 2020 she has been a member of the DFG Fachkollegium Mathematik and the Program Director of SIAG-IS (SIAM).

Critical Points, Multiple Testing and Point Source Detection for Cosmological Data

Speaker: Domenico Marinucci
Speaker Affiliation: University of Rome Tor Vergata

Date: 2021-05-11
Time: 4:30 pm
Location: Online streaming on YouTube

Abstract
Over the last two decades, Cosmology has experienced a sort of revolution, where a flood of data of unprecedented accuracy has become available by means of a number of different ground-based and satellite experiments. The analysis of these maps entails a number of extremely interesting mathematical and statistical questions, mostly related to the geometry of spherical random fields. In this talk, we shall be concerned in particular with issues related to the detection of point sources (galaxies) in Cosmic Microwave Background (CMB) data; we shall discuss the connections with spherical wavelets, the distribution of critical points for spherical random fields, and multiple testing procedures. If time permits, we will also discuss some ongoing developments concerning point source detection in the more challenging framework of so-called polarization data.

Bio
Domenico Marinucci is a full Professor of Probability and Mathematical Statistics at the Department of Mathematics of the University of Rome Tor Vergata, which he directed for 8 years. He is a former ERC grant holder, a member of the Planck and Euclid missions of the European Space Agency, and editor-in-chief of the Electronic Journal of Statistics. He is also an invited speaker at the 2021 European Congress of Mathematics. His research interests are mainly in the geometry of spherical random fields, with applications to Cosmology.

On the use of 3D Gray Code Kernels for motion-related tasks in videos

Speaker: Elena Nicora
Speaker Affiliation: DIBRIS, University of Genoa

Date: 2021-04-27
Time: 3:00 pm
Location: Online streaming on YouTube

Abstract
In order to solve many problems in Computer Vision, current state-of-the-art approaches leverage deep architectures, whose impressive results come at the price of heavy requirements in terms of data and computational resources. Traditional approaches may come in handy when pursuing efficiency and portability, at the expense of a less precise result. In this seminar I will present the Gray-Code Kernels (GCKs), a family of filter kernels that can be employed as a highly efficient filtering scheme, used in the literature almost exclusively for fast 2D pattern matching. GCKs were originally designed so that successive convolutions of an image with a set of such filters require only two operations per pixel, thus cutting down the execution time with respect to classical convolutions. In this talk I will discuss how 3D GCKs may be exploited to efficiently gather meaningful spatio-temporal cues from videos, useful in motion-based saliency detection problems. I will also discuss the potential of the method for application to motion segmentation and classification problems.
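
To give a concrete flavour, the kernels are separable patterns with ±1 entries, so a spatio-temporal projection of a video can be computed with three 1D convolutions. The sketch below is a minimal, naive illustration using Walsh-Hadamard-type ±1 sequences; the names `bank_1d` and `project_3d` are illustrative, and it does not implement the efficient two-operations-per-pixel GCK recurrence discussed in the talk.

```python
import numpy as np
from scipy.ndimage import convolve1d

# A toy video volume: (time, height, width).
video = np.random.rand(16, 64, 64).astype(np.float32)

# Four length-4 sequences with +/-1 entries (increasing number of sign changes).
bank_1d = np.array([
    [ 1,  1,  1,  1],
    [ 1,  1, -1, -1],
    [ 1, -1, -1,  1],
    [ 1, -1,  1, -1],
], dtype=np.float32)

def project_3d(volume, kt, ky, kx):
    """Convolve separably along time, height and width with 1D +/-1 kernels."""
    out = convolve1d(volume, kt, axis=0, mode="reflect")
    out = convolve1d(out, ky, axis=1, mode="reflect")
    out = convolve1d(out, kx, axis=2, mode="reflect")
    return out

# A few spatio-temporal projections; temporally varying kernels respond to motion.
responses = [project_3d(video, bank_1d[i], bank_1d[j], bank_1d[k])
             for i, j, k in [(0, 0, 0), (1, 0, 0), (1, 1, 1)]]
print([r.shape for r in responses])
```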

Bio
Elena is a third year PhD student working in the Machine Learning & Vision unit at MaLGa, under the supervision of Nicoletta Noceti. Her research interests include DL and traditional methods for various motion understanding related tasks, such as motion detection, motion-based segmentation and human action classification.

Meaningful data and semantic interoperability: utopia or a possible reality (in the Italian Public Sector)?

Speaker: Giorgia Lodi
Speaker Affiliation: STLab - Istituto di Scienze della Cognizione, CNR

Date: 2021-04-16
Time: 3:00 pm
Location: Online streaming on YouTube

Abstract
After many years, we are still struggling to make data sources from different (public) actors truly interoperable, also from a semantic perspective. In this talk, I will present existing tools and artefacts that can be used for a paradigm shift, where data can be interlinked and automatic inference of new knowledge can be enabled. We will discuss how this can be leveraged in the public sector and in the development of new (machine learning) applications.

Bio
Giorgia Lodi received the Ph.D. degree in Computer Science from the University of Bologna (Italy) in 2006. She is currently a permanent technologist at the Institute of Cognitive Sciences and Technologies (ISTC) of the National Research Council of Italy (CNR) – Semantic Technology Laboratory (STLab). In this context, she is the privacy reference point at ISTC, she is a member of European working groups on data semantics, and she coordinates projects on (open) data management and semantic interoperability. In the past, she carried out consulting activities for “Agenzia per l’Italia Digitale” (AgID), where she worked in such areas as open government data, Linked Open Data, Semantic Web, Semantic Interoperability and Big Data.

Parameter-free Stochastic Optimization of Variationally Coherent Functions

Speaker: Francesco Orabona
Speaker Affiliation: Boston University

Date: 2021-04-13
Time: 3:00 pm
Location: Online streaming on YouTube

Abstract
We consider the problem of finding the minimizer of a differentiable function F: R^d -> R using access only to noisy gradients of the function. This is a fundamental problem in stochastic optimization and machine learning. Indeed, a plethora of algorithms have been proposed to solve this problem. However, the choice of the algorithm and its parameters crucially depends on the (unknown) characteristics of the function F. On the other hand, for convex Lipschitz functions it is possible to design parameter-free optimization algorithms that guarantee optimal performance without any hyperparameter to tune. Unfortunately, they do not seem to work on non-convex functions. In an effort to go beyond convex functions, we focus on variationally coherent functions: they are defined by the property that, at any point x, the vector pointing towards the optimal solution x^* and the negative gradient form an acute angle. This class contains convex, quasi-convex, tau-star-convex, and pseudo-convex functions. We propose a new algorithm based on the Follow The Regularized Leader framework with the added twist of using rescaled gradients and time-varying linearithmic regularizers. We can prove almost sure convergence to the global minimizer x^* of variationally coherent functions. Additionally, the very same algorithm with the same hyperparameters, after T iterations guarantees on convex functions that the expected suboptimality gap is bounded by O(||x^*-x_0|| T^{-1/2+eps}) for any eps>0, up to polylog terms. This is the first algorithm to achieve both these properties at the same time. Also, the rate for convex functions essentially matches the performance of parameter-free algorithms.
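
For reference, the variational coherence property described above can be written as follows; this is a standard formulation consistent with the abstract, stated here as background.

```latex
% Variational coherence: at every non-optimal point the negative gradient and
% the direction towards a minimizer x^* form an acute angle, i.e.
\[
  \langle \nabla F(x),\, x - x^{*} \rangle \;>\; 0
  \qquad \text{for all } x \notin \arg\min F .
\]
% Convex, quasi-convex, tau-star-convex and pseudo-convex functions all satisfy
% this condition.
```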

Bio
Francesco Orabona is an Assistant Professor at Boston University. He received his PhD in Electronic Engineering from the University of Genoa, Italy, in 2007. As a result of his activities, Dr. Orabona has published more than 70 papers in scientific journals and conferences on the topics of online learning, optimization, and statistical learning theory. His latest work is on "parameter-free" machine learning and optimization algorithms, that is, algorithms that achieve the best performance without the need to tune parameters such as regularization, learning rate, momentum, etc.

Towards imitation learning in robotics

Speaker: Luca Garello
Speaker Affiliation: DIBRIS, Università di Genova and Istituto Italiano di Tecnologia

Date: 2021-03-30
Time: 3:00 pm
Location: Online streaming on YouTube

Abstract
Imitation learning plays a key role in our development from the early years of our life: by observing expert demonstrators, we are able to learn new skills. For this reason, the idea of having robots learn new tasks from demonstrations is the subject of a growing body of research. In our work we focus on the ability to remap observed actions into one's own perspective, and we propose a generative model able to shift the viewpoint from third person to first person. This perspective translation is performed using only RGB images. Moreover, our model generates an embedded representation of the action that can be used for action understanding. These embeddings are learnt autonomously, following a time-consistent pattern, without human supervision. In the last part of the seminar we will show how our model can be successfully deployed on a real robot to perform an imitation task.

Bio
Luca is a second-year PhD student at MaLGa and collaborates with the Robotics, Brain and Cognitive Sciences laboratory at the Italian Institute of Technology (IIT). His research interests revolve around machine learning and computer vision applied to robotics, with a focus on new algorithms that enhance the quality of human-robot interaction.

Regularity properties of Entropic Optimal Transport in applications to machine learning

Speaker: Giulia Luise
Speaker Affiliation: Imperial College

Date: 2021-03-16
Time: 3:00 pm
Location: Online streaming on YouTube

Abstract
Entropic regularization has proved to be a powerful tool for defining approximations of optimal transport distances with improved computational and statistical properties. In this talk we will focus on a further advantage of such entropic regularization, namely smoothness. We discuss its regularity properties and their role in some machine learning problems where regularized optimal transport is used as a discrepancy metric in supervised and unsupervised frameworks.
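
For reference, the entropic regularization of optimal transport referred to in the abstract is commonly written as follows; this is a standard background formulation, not a result from the talk.

```latex
% Entropy-regularized optimal transport between measures \mu, \nu with cost C:
\[
  \mathrm{OT}_\varepsilon(\mu,\nu)
  \;=\; \min_{\pi \in \Pi(\mu,\nu)} \;\langle C, \pi\rangle
        \;+\; \varepsilon\, \mathrm{KL}\!\left(\pi \,\Vert\, \mu \otimes \nu\right),
\]
% where \Pi(\mu,\nu) is the set of couplings with marginals \mu and \nu.
% For \varepsilon > 0 the problem is strictly convex, can be solved with
% Sinkhorn iterations, and the resulting value is smooth in its inputs.
```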

Bio
Giulia Luise has recently obtained her PhD in Machine Learning at UCL, London, under the supervision of Massimiliano Pontil and Carlo Ciliberto. Her main research interest focuses on the interplay of optimal transport and machine learning. She is now a Research Associate at Imperial College, where she started working on reinforcement learning.

Foundations of deep convolutional models through kernel methods

Speaker: Alberto Bietti
Speaker Affiliation: New York University

Date: 2021-02-16
Time: 3:00 pm
Location: Online streaming on YouTube

Abstract
Deep learning has been most widely successful in tasks where the data presents a rich structure, such as images, audio, or text. The choice of network architecture is believed to play a key role in exploiting this structure, for instance through convolutions and pooling on natural signals, yet a precise study of these properties and how they affect learning guarantees is still missing. Another challenge for the theoretical understanding of deep learning models is that they are often over-parameterized and known to be powerful function approximators, while being seemingly easy to optimize using gradient methods. We study deep models through the lens of kernel methods, which naturally define functional spaces for learning in a non-parametric manner, and naturally appear when considering the optimization of infinitely-wide networks in certain regimes. This allows us to study invariance and stability properties of various convolutional architectures by studying the geometry of the kernel mapping, as well as approximation properties of learning in different regimes.

Bio
Alberto is a Faculty Fellow/Postdoc at the NYU Center for Data Science in New York. He completed his PhD in 2019 from Inria and Université Grenoble-Alpes under the supervision of Julien Mairal, and later spent part of 2020 as a postdoc at Inria Paris hosted by Francis Bach. His research interests revolve around machine learning, optimization and statistics, with a focus on developing the theoretical foundations of deep learning.

Towards Causal Representation Learning

Speaker: Francesco Locatello
Speaker Affiliation: Amazon

Date: 2021-02-02
Time: 3:00 pm
Location: Online streaming on YouTube

Abstract
The two fields of machine learning and graphical causality arose and developed separately. However, there is now strong cross-pollination and increasing interest in both fields to benefit from the advances of the other. In this talk, I will discuss work from the later part of my PhD, highlighting some points of contact between causality and machine learning and proposing key research questions at the intersection of both. As most work in causality starts from the premise that the causal variables are observed, a central problem for AI and causality is causal representation learning: the discovery of high-level causal variables from low-level observations.

Bio
Francesco Locatello recently joined Amazon as a Senior Applied Scientist. He defended his PhD at ETH Zurich, where he was a Doctoral Fellow at the Max Planck ETH Center for Learning Systems and ELLIS, supervised by Gunnar Rätsch (ETH Zurich) and Bernhard Schölkopf (Max Planck Institute for Intelligent Systems). He held a Google PhD Fellowship in Machine Learning and received the best paper award at the International Conference on Machine Learning (ICML) 2019.

Data Driven Regularization

Speaker: Andrea Aspri
Speaker Affiliation: The Johann Radon Institute for Computational and Applied Mathematics

Date: 2020-12-15
Time: 3:30 pm
Location: Online streaming on YouTube

Abstract
In this talk I will speak about some recent results on the study of linear inverse problems under the premise that the forward operator is not at hand but given indirectly through some input-output training pairs. We show that regularisation by projection and variational regularisation can be formulated by using the training data only and without making use of the forward operator. I will provide some information regarding convergence and stability of the regularised solutions. Moreover, we show, analytically and numerically, that regularisation by projection is indeed capable of learning linear operators, such as the Radon transform. This is a joint work with Yury Korolev (University of Cambridge) and Otmar Scherzer (University of Vienna and RICAM).
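
As a toy illustration of the "training data only" idea, the sketch below reconstructs a signal from a noisy measurement using only input-output pairs, without ever calling the forward operator at reconstruction time. It is a minimal numpy sketch under the assumption of a linear forward map and synthetic data; the variable names and the truncation choice are illustrative, not the authors' exact scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 50, 30, 15                     # signal dim, data dim, number of training pairs
A = rng.standard_normal((m, n))          # forward operator, used ONLY to simulate data
X = rng.standard_normal((n, k))          # training inputs x_1, ..., x_k (columns)
Y = A @ X                                # observed training outputs y_i = A x_i

x_true = X @ rng.standard_normal(k)      # ground truth lying in span of the training inputs
y_delta = A @ x_true + 0.01 * rng.standard_normal(m)   # noisy measurement

# Project the data onto span{y_i} by a (truncated) least-squares fit of the
# coefficients, then map back through the training inputs x_i.
c, *_ = np.linalg.lstsq(Y, y_delta, rcond=1e-6)
x_rec = X @ c

print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```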

Bio
Andrea Aspri is currently a research scientist at RICAM (Linz) in the “Inverse Problems and Mathematical Imaging” group, led by Prof. Otmar Scherzer. He has just obtained a post-doc position at the University of Pavia, starting November 1st, 2020, under the supervision of Prof. Elisabetta Rocca and financed by the project “Department of Excellence”. His research topics mainly focus on uniqueness and stability issues for inverse problems, data-driven regularization algorithms, and shape optimization problems.

Date | Speaker | Title | Location
Dec 3, 2021 | Gül Varol | Beyond Action Recognition: Detailed Video Modeling | Remote, @UniGE
Nov 24, 2021 | Marcello Carioni | Sparsity and convergence analysis of generalized conditional gradient methods | Remote, @UniGE
Nov 24, 2021 | Niao He | Three common reinforcement learning tricks: when and why do they work | Remote, @UniGE
Nov 15, 2021 | Stefano Merler | COVID-19: modelli e indicatori per provvedimenti di sanità pubblica | Remote
Nov 8, 2021 | Felix Lucka | Deep Learning in Computational Imaging | Remote
Oct 25, 2021 | Massimo Vergassola | Reinforcement Learning for Animal Behavior | Remote, @UniGE
Oct 18, 2021 | Claire Vernade | Non-Stationary Delayed Bandits with Intermediate Observations | Remote, @UniGE
Oct 4, 2021 | Massimo Fornasier | Three Mathematical Tales of Machine Learning | Remote, @UniGE
Jun 22, 2021 | Federico Roncallo | Computer Vision to Digitalise the Retail: Pipelines, Infrastructure & Open Problems | Remote
Jun 1, 2021 | Simone Bianco | Computational Cellular Engineering | Remote
May 25, 2021 | Gabriele Steidl | Proximal and Invertible Neural Networks | Remote
May 11, 2021 | Domenico Marinucci | Critical Points, Multiple Testing and Point Source Detection for Cosmological Data | Remote
Apr 27, 2021 | Elena Nicora | On the use of 3D Gray Code Kernels for motion-related tasks in videos | Remote
Apr 16, 2021 | Giorgia Lodi | Meaningful data and semantic interoperability: utopia or a possible reality (in the Italian Public Sector)? | Remote
Apr 13, 2021 | Francesco Orabona | Parameter-free Stochastic Optimization of Variationally Coherent Functions | Remote
Mar 30, 2021 | Luca Garello | Towards imitation learning in robotics | Remote
Mar 16, 2021 | Giulia Luise | Regularity properties of Entropic Optimal Transport in applications to machine learning | Remote
Feb 16, 2021 | Alberto Bietti | Foundations of deep convolutional models through kernel methods | Remote
Feb 2, 2021 | Francesco Locatello | Towards Causal Representation Learning | Remote
Dec 15, 2020 | Andrea Aspri | Data Driven Regularization | Remote
