Understanding how intelligence works and how it can be emulated in machines is an age-old dream and arguably one of the biggest challenges in modern science. Learning, with its principles and computational implementations, is at the very core of this endeavor. Recently, for the first time, we have been able to develop artificial intelligence systems that solve complex tasks considered out of reach for decades: modern cameras recognize faces, smartphones respond to voice commands, cars see and detect pedestrians, and ATMs automatically read checks. In most cases, at the root of these success stories are machine learning algorithms, that is, software that is trained rather than programmed to solve a task.

Among the variety of approaches to modern computational learning, we focus on regularization techniques, which are key to high-dimensional learning. Regularization methods allow a huge class of diverse approaches to be treated in a unified way, while providing tools to design new ones. Starting from classical notions of smoothness, shrinkage and margin, the course will cover state-of-the-art techniques based on the concepts of geometry (a.k.a. manifold learning) and sparsity, along with a variety of algorithms for supervised learning, feature selection, structured prediction, multitask learning and model selection. Practical applications to high-dimensional problems, in particular in computational vision, will be discussed. The classes will focus on algorithmic and methodological aspects, while giving an idea of the underlying theoretical underpinnings. Practical laboratory sessions will provide the opportunity for hands-on experience.
RegML is a 22-hour advanced machine learning course including theory classes and practical laboratory sessions. The course covers foundations as well as recent advances in machine learning, with emphasis on high-dimensional data and a core set of techniques, namely regularization methods. In many respects, the course is a compressed version of the 9.520 course at MIT.
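To give a flavor of the regularization methods at the heart of the course, the sketch below fits Tikhonov-regularized least squares (ridge regression) in closed form. The code and data are purely illustrative and not part of the course materials; `ridge_fit` is a hypothetical helper, and the scaling of the penalty by the sample size is one common convention among several.

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Tikhonov (ridge) regression: solve (X^T X + lam * n * I) w = X^T y.

    The penalty lam * n * ||w||^2 shrinks the solution toward zero;
    larger lam means stronger regularization.
    """
    n, d = X.shape
    A = X.T @ X + lam * n * np.eye(d)
    return np.linalg.solve(A, X.T @ y)

# Illustrative synthetic data: noisy linear model.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
w_true = rng.standard_normal(10)
y = X @ w_true + 0.1 * rng.standard_normal(50)

w_small = ridge_fit(X, y, 1e-6)   # nearly unregularized fit
w_large = ridge_fit(X, y, 10.0)   # heavily shrunk estimate
```

Varying `lam` traces the bias-variance trade-off that model selection (covered in Laboratory 1) aims to navigate.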
The course, started in 2008, has seen increasing national and international attendance over the years, with a peak of over 90 participants in 2014.
NOTE: starting this year the course has a small registration fee: 50 EUR (100 EUR for industrial attendees; no fee for students from UNIGE); read this letter for details. Once accepted, each candidate must follow the instructions in the acceptance email to proceed with the payment.
Notification of acceptance: May 13.
The school will be from 18th to 22nd June 2018.
Classes will take place at the Department of Informatics, Bioengineering, Robotics and Systems Engineering (DIBRIS) of the University of Genova, Via Dodecaneso 35, 16146 Genova. See here for directions and travel information.
Here you can find a list of hotels near the department (~20' walk) or in the city centre (~20' by bus).
Simula Research Laboratory
timo (at) simula (dot) no
Optimization Perspectives on Learning to Control
Given the dramatic successes in machine learning over the past half decade, there has been a resurgence of interest in applying learning techniques to continuous control problems in robotics, self-driving cars, and unmanned aerial vehicles. Though such applications appear to be straightforward generalizations of reinforcement learning, it remains unclear which machine learning tools are best equipped to handle decision making, planning, and actuation in highly uncertain dynamic environments.
This tutorial will survey the foundations required to build machine learning systems that reliably act upon the physical world. The primary technical focus will be on numerical optimization tools at the interface of statistical learning and dynamical systems. We will investigate how to learn models of dynamical systems, how to use data to achieve objectives in a timely fashion, how to balance model specification and system controllability, and how to safely acquire new information to improve performance. We will close by listing several exciting open problems that must be solved before we can build robust, reliable learning systems that interact with an uncertain environment.
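As a toy illustration of one of the questions above, learning a model of a dynamical system from data, a linear model x_{t+1} ≈ A x_t + B u_t can be identified by ordinary least squares over observed transitions. The system, noise level, and variable names below are illustrative assumptions, not material from the tutorial.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative ground-truth linear system (unknown to the learner).
A_true = np.array([[0.9, 0.1],
                   [0.0, 0.8]])
B_true = np.array([[0.0],
                   [1.0]])

# Simulate a trajectory driven by random inputs, with small process noise.
T = 200
x = np.zeros((T + 1, 2))
u = rng.standard_normal((T, 1))
for t in range(T):
    x[t + 1] = A_true @ x[t] + B_true @ u[t] + 0.01 * rng.standard_normal(2)

# Stack regressors z_t = [x_t; u_t] and solve least squares for [A B].
Z = np.hstack([x[:-1], u])                       # shape (T, 3)
Theta, *_ = np.linalg.lstsq(Z, x[1:], rcond=None)
A_hat, B_hat = Theta.T[:, :2], Theta.T[:, 2:]    # recovered system matrices
```

With enough excitation in the inputs, the least-squares estimates concentrate around the true matrices; quantifying how much data is needed, and acting safely while collecting it, is exactly the kind of question the tutorial addresses.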
| # | Day | Time | Topic | Material |
|---|-----|------|-------|----------|
| 1 | Mon 6/18 | 9:30 - 11:00 | Introduction to Statistical Machine Learning | Lect 1 |
| 2 | Mon 6/18 | 11:30 - 13:00 | Tikhonov Regularization and Kernels | Lect 2 |
| 3 | Mon 6/18 | 14:00 - 16:00 | Laboratory 1: Binary classification and model selection | Lab 1 |
| 4 | Tue 6/19 | 9:30 - 11:00 | Early Stopping and Spectral Regularization | Lect 3 |
| 5 | Tue 6/19 | 11:30 - 13:00 | Regularization for Multi-task Learning | Lect 4 |
| 6 | Tue 6/19 | 14:00 - 16:00 | Laboratory 2: Spectral filters and multi-class classification | Lab 2 |
| - | Wed 6/20 | 9:30 - 13:00 | Benjamin Recht - Optimization Perspectives on Learning to Control | |
| 7 | Thu 6/21 | 9:30 - 11:00 | Sparsity Based Regularization | Lect 5 |
| 8 | Thu 6/21 | 11:30 - 13:00 | Structured Sparsity | Lect 6 |
| 9 | Thu 6/21 | 14:00 - 16:00 | Laboratory 3: Sparsity-based learning | Lab 3 |
| 10 | Fri 6/22 | 9:30 - 11:00 | Data Representation: Dictionary Learning | Lect 7 |
| 11 | Fri 6/22 | 11:30 - 13:00 | Data Representation: Deep Learning | Lect 8 |