Optimization for machine learning
M2 IASD/MASH, Université Paris Dauphine-PSL, 2024-2025
Aims of the course
This course studies the optimization paradigm and the optimization algorithms used in machine learning and data science. We will be interested in both the theoretical guarantees of these algorithms and their practical use.
Main link
Google doc for the course
Course material
Session 1 (Introduction 1/2)
PDF
Session 2 (Introduction 2/2)
PDF
Session 3 (Basics of gradient descent)
PDF
Session 4 (Note on a stepsize choice for gradient descent)
PDF
Sessions 5+7 (Lecture notes)
PDF
Session 6 (Automatic differentiation)
PDF Tutorial
Session 9 (Subgradient methods)
PDF
Session 10 (Stochastic gradient 1/2)
PDF
Session 11 (Stochastic gradient 2/2)
PDF
Session 13 (Regularization and prox)
PDF
Session 14 (Sparse regularization)
Notebook
Material for lab sessions
Lab 1/4: Basics of gradient descent
[Original] [Solutions]
Lab 2/4: Advanced aspects of gradient descent
[Original] [Solutions]
Lab 3/4: Stochastic gradient methods
[Original] [Solutions]
Materials on this page are available under the Creative Commons CC BY-NC 4.0 license.
The French version of this page can be found here.