RegML 2020
Regularization Methods for Machine Learning
Genova, June 29 - July 3

RegML 2020 was held from June 29 to July 3 as a live stream event

Certificates of attendance and exam instructions have been sent!

Course at a Glance


Understanding how intelligence works and how it can be emulated by machines is an age-old dream and arguably one of the biggest challenges in modern science. Learning, with its principles and computational implementations, is at the very core of this endeavor. Recently, for the first time, we have been able to develop artificial intelligence systems that solve complex tasks considered out of reach for decades: cameras recognize faces, smartphones understand voice commands, cars detect pedestrians, and ATMs automatically read checks. At the root of most of these success stories are machine learning algorithms, that is, software that is trained rather than programmed to solve a task.

Among the variety of approaches to modern computational learning, we focus on regularization techniques, which are key to high-dimensional learning. Regularization methods make it possible to treat a huge class of diverse approaches in a unified way, while providing tools to design new ones. Starting from classical notions of smoothness, shrinkage, and margin, the course covers state-of-the-art techniques based on geometry (aka manifold learning) and sparsity, together with a variety of algorithms for supervised learning, feature selection, structured prediction, multitask learning, and model selection. Practical applications to high-dimensional problems, in particular in computational vision, will be discussed. The classes focus on algorithmic and methodological aspects, while giving an idea of the underlying theoretical underpinnings. Practical laboratory sessions offer the opportunity for hands-on experience.

RegML is a 20-hour advanced machine learning course including theory classes and practical laboratory sessions. The course covers foundations as well as recent advances in machine learning, with emphasis on high-dimensional data and a core set of techniques, namely regularization methods. In many respects the course is a compressed version of the 9.520 course at MIT.
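As a taste of the course's central technique, here is a minimal NumPy sketch of Tikhonov regularization (ridge regression) on a toy linear problem. The function name and the toy data are illustrative assumptions, not course material:

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Tikhonov-regularized least squares: w = (X^T X + lam * n * I)^{-1} X^T y."""
    n, d = X.shape
    A = X.T @ X + lam * n * np.eye(d)  # regularizer shifts the spectrum away from 0
    return np.linalg.solve(A, X.T @ y)

# Toy problem: noisy observations of a linear model
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=50)

w_hat = ridge_fit(X, y, lam=0.01)
```

The regularization parameter `lam` trades data fit against the norm of the solution; choosing it by cross-validation is the model-selection problem treated in Lab 1.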

Related courses:

The course, started in 2008, has seen increasing national and international attendance over the years, with a peak of over 90 participants in 2014.

Important dates:

  • application deadline: May 1
  • notification of acceptance: May 15
  • registration fee deadline: May 29

Registration fee:

  • students and postdocs: waived
  • professors: waived
  • professionals: EUR 150
  • UNIGE students and IIT affiliates: no fee

Once accepted, each candidate has to follow the instructions in the acceptance email and proceed with the payment. The registration fee is non-refundable.


Lorenzo Rosasco

University of Genova
(also Istituto Italiano di Tecnologia and Massachusetts Institute of Technology)

lorenzo [dot] rosasco [at] unige [dot] it


Emanuele Rodolà

Sapienza University of Rome

Geometric Deep Learning

The past decade of computer vision research has witnessed the re-emergence of "deep learning", and in particular convolutional neural network (CNN) techniques, which make it possible to learn powerful image feature representations from large collections of examples. CNNs have achieved breakthrough performance in a wide range of applications such as image classification, segmentation, detection, and annotation. Nevertheless, when attempting to apply the CNN paradigm to 3D shapes, point clouds, and graphs (feature-based description, similarity, correspondence, retrieval, etc.), one has to face fundamental differences between images and geometric objects. Shape analysis, graph analysis, and geometry processing pose new challenges that are non-existent in image analysis, and deep learning methods have only recently started penetrating these communities. The purpose of this tutorial is to overview the foundations and the current state of the art of learning techniques for non-Euclidean data. Special focus will be put on deep learning techniques (CNNs) applied to Euclidean and non-Euclidean manifolds for tasks of shape classification, retrieval, and correspondence. The tutorial will present the problems of 3D computer vision and geometric data processing in a new light, emphasizing the analogies and differences with the classical 2D setting, and showing how to adapt popular learning schemes to deal with non-Euclidean structures. The tutorial assumes no particular background beyond basic working knowledge that is a common denominator for students and practitioners in machine learning and graphics.
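To give one concrete instance of adapting a convolutional scheme to non-Euclidean data, here is a minimal sketch of a GCN-style graph-convolution layer (the symmetrically normalized propagation rule popularized by Kipf and Welling). This is an assumed illustrative example, not necessarily the formulation the tutorial covers:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: ReLU(D^{-1/2} (A + I) D^{-1/2} H W)."""
    n = A.shape[0]
    A_hat = A + np.eye(n)                   # add self-loops so each node keeps its own features
    d = A_hat.sum(axis=1)                   # degrees of the augmented graph
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # symmetric normalization
    return np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

# 4-node path graph, 2 input features per node, 3 output features
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = np.random.default_rng(0).normal(size=(4, 2))
W = np.random.default_rng(1).normal(size=(2, 3))
H_next = gcn_layer(A, H, W)
```

Each output row mixes a node's features with those of its neighbors, which is the graph analogue of a local convolutional filter on an image grid.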

Krikamol Muandet

Max Planck Institute for Intelligent Systems

Recent Advances in Hilbert Space Representation of Probability Distributions

A Hilbert space embedding of probability distributions has recently emerged as a powerful tool for machine learning and statistical inference. In this tutorial, I will introduce the concept of Hilbert space embedding of distributions and its recent applications in machine learning, statistical inference, causal inference, and econometrics. The first part of the tutorial will focus on understanding how one can generalize feature maps of data points to probability distributions and how this new representation of distributions allows us to build powerful algorithms such as maximum mean discrepancy (MMD), the Hilbert-Schmidt Independence Criterion (HSIC), and the support measure machine (SMM). In the second part, I will explain how we can generalize this idea to represent conditional distributions. The embedding of conditional distributions extends the capability of Hilbert space embedding to model more complex dependence in various applications such as dynamical systems, Markov decision processes, reinforcement learning, latent variable models, the kernel Bayes’ rule, and causal inference. At the end of the tutorial, I will discuss recent advances in this research area as well as highlight potential future research directions.
Review: arXiv:1605.09522.
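To make the embedding idea concrete, here is a minimal NumPy sketch of the (biased) MMD estimator between two samples under a Gaussian RBF kernel. The helper names and the kernel bandwidth are illustrative assumptions:

```python
import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    """Gaussian RBF kernel matrix: k(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma ** 2))

def mmd2(X, Y, sigma=1.0):
    """Biased estimate of squared MMD: the RKHS distance between sample mean embeddings."""
    Kxx = rbf_kernel(X, X, sigma)
    Kyy = rbf_kernel(Y, Y, sigma)
    Kxy = rbf_kernel(X, Y, sigma)
    return Kxx.mean() + Kyy.mean() - 2 * Kxy.mean()

rng = np.random.default_rng(0)
same = mmd2(rng.normal(size=(200, 1)), rng.normal(size=(200, 1)))          # same distribution
diff = mmd2(rng.normal(size=(200, 1)), rng.normal(loc=2.0, size=(200, 1))) # shifted mean
```

For samples from the same distribution the estimate is close to zero, while a mean shift drives it well away from zero; this is the basis of MMD-based two-sample tests.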


Mon 29th   9:30-11:00    GoToWebinar   Class 1      Introduction to Statistical Machine Learning      lect_1
           11:30-13:00   GoToWebinar   Class 2      Tikhonov Regularization and Kernels               lect_2
           14:00-16:00   GoToWebinar   Lab 1        Binary classification and model selection         Matlab | Python
Tue 30th   9:30-11:00    GoToWebinar   Class 3      Early Stopping and Spectral Regularization        lect_3
           11:30-13:00   GoToWebinar   Class 4      Regularization for Multi-task Learning            lect_4
           14:00-16:00   GoToWebinar   Lab 2        Spectral filters and multi-class classification   Matlab
Wed 1st    9:30-11:00    GoToWebinar   Tutorial 1   Emanuele Rodolà - Geometric Deep Learning         talk_1
           11:30-13:00   GoToWebinar   Tutorial 2   Krikamol Muandet - Hilbert Space Representation of Probability Distributions   talk_2
Thu 2nd    9:30-11:00    GoToWebinar   Class 5      Sparsity Based Regularization                     lect_5
           11:30-13:00   GoToWebinar   Class 6      Structured Sparsity                               lect_6
           14:00-16:00   GoToWebinar   Lab 3        Sparsity-based learning                           Matlab | Python
Fri 3rd    9:30-11:00    GoToWebinar   Class 7      Data Representation: Dictionary Learning          lect_7
           11:30-13:00   GoToWebinar   Class 8      Data Representation: Deep Learning                lect_8


Stefano Vigogna

University of Genova
Laboratory for Computational and Statistical Learning

vigogna [at] dibris [dot] unige [dot] it

Modiana Pasquinelli

University of Genova
LCSL Administrative Office

modiana [at] unige [dot] it