Latest updates


2024.11.18: Solution for the third notebook and notes for session 13.
2024.11.14: Notebook for lab 3.
2024.11.08: Resources for session 11.
2024.11.04: Resources for session 10.
2024.10.17: Resources for session 9 and solution for second notebook.
2024.10.09: Resources for sessions 5-7 and second notebook.
2024.09.28: Note from the first notebook online.
2024.09.27: First notebook online.
2024.09.19: Resources for session 2.
2024.09.16: Resources for session 1.
2024.09.15: Course webpage online.

Instructors

Gabriel Peyré
gabriel.peyre@ens.fr

Clément Royer
clement.royer@lamsade.dauphine.fr

Irène Waldspurger
waldspurger@ceremade.dauphine.fr

Back to the teaching page

Optimization for machine learning

M2 IASD/MASH, Université Paris Dauphine-PSL, 2024-2025


Aims of the course

     Study the optimization paradigm, as well as the optimization algorithms used in learning and data science. We will be interested in both the theoretical guarantees of these algorithms and their practical use.

     Main link: Google doc for the course

Course material

     Session 1 (Introduction 1/2) PDF
     Session 2 (Introduction 2/2) PDF
     Session 3 (Basics of gradient descent) PDF
     Session 4 (Note on a stepsize choice for gradient descent) PDF
     Sessions 5+7 (Lecture notes) PDF
     Session 6 (Automatic differentiation) PDF Tutorial
     Session 9 (Subgradient methods) PDF
     Session 10 (Stochastic gradient 1/2) PDF
     Session 11 (Stochastic gradient 2/2) PDF
     Session 13 (Regularization and prox) PDF
     Session 14 (Sparse regularization) [Notebook]
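
     As a quick illustration of the material in sessions 13 and 14, the proximal operator of the l1 norm is soft-thresholding, the basic building block of sparse regularization methods. The sketch below is illustrative only and is not taken from the course PDFs or notebooks:

         import numpy as np

         # Soft-thresholding: the proximal operator of lam * ||.||_1.
         # It shrinks every entry toward zero and zeroes out entries
         # with magnitude at most lam, which is what induces sparsity.
         def soft_threshold(x, lam):
             return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

         print(soft_threshold(np.array([-2.0, -0.3, 0.0, 0.5, 3.0]), 1.0))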

Material for lab sessions

     Lab 1/4: Basics of gradient descent [Original] [Solutions] (a minimal sketch follows this list)
     Lab 2/4: Advanced aspects of gradient descent [Original] [Solutions]
     Lab 3/4: Stochastic gradient methods [Original] [Solutions]
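
     As a preview of the gradient descent labs, here is a minimal sketch of gradient descent with a fixed step size 1/L on a least-squares problem. It is illustrative only (not taken from the lab notebooks), and all names in it are made up:

         import numpy as np

         # Gradient descent on f(x) = 0.5 * ||A x - b||^2.
         def gradient_descent(A, b, n_iters=300):
             # The gradient x -> A^T (A x - b) is L-Lipschitz with
             # L = ||A||_2^2 (largest eigenvalue of A^T A).
             L = np.linalg.norm(A, 2) ** 2
             x = np.zeros(A.shape[1])
             for _ in range(n_iters):
                 grad = A.T @ (A @ x - b)
                 x -= grad / L  # fixed step size 1/L
             return x

         rng = np.random.default_rng(0)
         A = rng.standard_normal((50, 10))
         b = rng.standard_normal(50)
         x_gd = gradient_descent(A, b)
         x_star = np.linalg.lstsq(A, b, rcond=None)[0]
         print(np.linalg.norm(x_gd - x_star))  # should be close to zero

     With the step size 1/L, the iterates converge to the least-squares solution; choosing this step size is one of the topics covered in the course notes (session 4).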


Materials on this page are available under the Creative Commons CC BY-NC 4.0 license.
The French version of this page can be found here.