Advanced Machine Learning

Course#: CSCI-GA.3033-007

Instructor: Mehryar Mohri

Grader/TA: Vitaly Kuznetsov.

Mailing List

Course Description

This course introduces and discusses advanced topics in machine learning. The objective is both to present some key topics not covered by basic graduate ML classes such as Foundations of Machine Learning, and to bring up advanced learning problems that can serve as an initiation to research or to the development of new techniques relevant to applications.

- Advanced standard scenario:
  - Learning kernels.
  - Deep ensemble methods.
  - Structured prediction.

- On-line learning scenario:
  - On-line learning basics.
  - Learning and games.
  - Learning with large expert spaces.
  - Online convex optimization.
  - Bandit problems.
  - Sequential portfolio selection.

- Large-scale learning:
  - Dimensionality reduction.
  - Low-rank approximation.
  - Large-scale optimization.
  - Distributed learning.
  - Clustering.
  - Spectral learning.
  - Massive multi-class classification.

- Other non-standard learning scenarios:
  - Domain adaptation and sample bias correction.
  - Transduction and semi-supervised learning.
  - Active learning.
  - Time series prediction.
  - Privacy-aware learning.

Students who can are also strongly encouraged to attend the Machine Learning Seminar.

Location and Time

Warren Weaver Hall Room 101,

251 Mercer Street.

Tuesdays 5:10 PM - 7:00 PM.

Prerequisite

Students are expected to be familiar with basic machine learning concepts and must have attended a graduate ML class such as Foundations of Machine Learning or equivalent, at Courant or elsewhere.

Projects and Assignments

There will be two homework assignments and a topic presentation with a written report. The final grade is a combination of the assignment grades and the topic presentation grade. As with all Math and CS courses, the standard high level of integrity is expected from all students.

Lectures

- Lecture 01: Learning kernels.
- Lecture 02: Deep ensemble methods.
- Lecture 03: Structured prediction.
- Lecture 04: Online learning basics.
- Lecture 05: Learning and games.
- Lecture 06: Learning with large expert spaces.
- Lecture 07: Online convex optimization.
- Lecture 08: Bandit problems.
- Lecture 09: Bandit convex optimization.
- Lecture 17: Domain adaptation.
- Lecture 18: Transduction.
- Lecture 19: Active learning.
- Lecture 20: Time series prediction.

Technical Papers

An extensive list of recommended papers for further reading is provided in the lecture slides.

Homework

- Homework 1 [solution].
- Homework 2 [solution].
- Topic presentations:
  - Structured Prediction, Ningshan Zhang.

    References: C. Cortes, V. Kuznetsov, and M. Mohri. Ensemble methods for structured prediction. In ICML, 2014.

    H. Kadri, M. Ghavamzadeh, and P. Preux. A Generalized Kernel Approach to Structured Output Learning. In ICML, 2013.
  - On Complexity as Bounded Rationality, David Kasofsky.

    Reference: C. Papadimitriou and M. Yannakakis. On Complexity as Bounded Rationality. In STOC, pages 726-733, 1994.
  - Online Learning for Global Cost Functions, Jonay Trenous.

    Reference: E. Even-Dar, R. Kleinberg, S. Mannor, and Y. Mansour. Online Learning for Global Cost Functions. In COLT, 2009.
  - Bandit Problems, Tolga Yenisey.

    Reference: V. Dani and T. P. Hayes. How to Beat the Adaptive Multi-Armed Bandit, 2006.
  - Convergence of Eigenspaces in Kernel Principal Component Analysis, Shixin Wang.

    Reference: L. Zwald and G. Blanchard. On the Convergence of Eigenspaces in Kernel Principal Component Analysis. In NIPS, 2005.

Previous years