Fall 2022 - CMPT 409 D200

Special Topics in Theoretical Computing Science (3)

Optimization for Machine Learning

Class Number: 6121

Delivery Method: In Person

Overview

  • Course Times + Location:

    Sep 7 – Dec 6, 2022: Mon, 2:30–3:20 p.m.
    Burnaby

    Sep 7 – Dec 6, 2022: Thu, 2:30–4:20 p.m.
    Burnaby

  • Prerequisites:

    CMPT 307 with a minimum grade of C-.

Description

CALENDAR DESCRIPTION:

Current topics in theoretical computing science depending on faculty and student interest.

COURSE DETAILS:

This course (Optimization for Machine Learning) introduces the foundational concepts of convex and non-convex optimization with applications to machine learning. It will give students experience in (1) proving theoretical guarantees for optimization algorithms, (2) analyzing machine learning (ML) problems from an optimization perspective, and (3) developing and analyzing new optimization methods for ML applications.
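
As a concrete, illustrative example (a standard result from the literature, not specific to this offering) of the kind of guarantee students will prove: for a convex function f whose gradient is L-Lipschitz, gradient descent with step size 1/L,

    x_{k+1} = x_k - (1/L) * grad f(x_k),

satisfies f(x_k) - f(x*) <= (L / (2k)) * ||x_0 - x*||^2 for any minimizer x*, i.e. the suboptimality decreases at a rate of O(1/k).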

Topics

  • Basics: Subdifferentials, Optimality conditions, Conjugates, Lipschitz continuity, Convexity
  • Machine Learning Basics: Linear/Logistic regression, Kernel methods, Deep learning
  • (Non)-Convex minimization 1: (Projected/Proximal) Gradient Descent, Nesterov/Polyak momentum
  • (Non)-Convex minimization 2: Mirror Descent, Newton/Quasi-Newton/Gauss-Newton methods
  • (Non)-Convex minimization 3: Stochastic gradient descent (SGD), Variance reduction techniques (gradient descent and SGD are illustrated in the first sketch after this list)
  • (Non)-Convex minimization 4: Adaptivity for SGD, Coordinate Descent
  • Applications to training ML models (logistic regression, kernel machines, neural networks)
  • Online optimization 1: Regret minimization, Online to Batch, Follow the (regularized) leader
  • Online optimization 2: Optimistic Gradient Descent, Adaptive gradient methods (AdaGrad, Adam)
  • Applications to Imitation learning, Reinforcement learning
  • Min-Max optimization 1: Primal-dual methods, (Stochastic) Gradient Descent-Ascent, Proximal point (gradient descent-ascent and extragradient are illustrated in the second sketch after this list)
  • Min-Max optimization 2: (Stochastic) Extragradient, Acceleration, Variance reduction
  • Applications to GANs, Robust optimization, Multi-agent RL
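
Below is a minimal Python sketch (illustrative only, not course material) of two of the minimization methods listed above: full-batch gradient descent and stochastic gradient descent, applied to l2-regularized logistic regression. All function names, step sizes, and the toy data are assumptions made for this example.

    import numpy as np

    def logistic_grad(w, X, y, lam=0.1):
        # Gradient of (1/n) * sum_i log(1 + exp(-y_i * <x_i, w>)) + (lam/2) * ||w||^2,
        # with features X of shape (n, d) and labels y in {-1, +1}.
        margins = y * (X @ w)
        coeffs = -y / (1.0 + np.exp(margins))
        return X.T @ coeffs / len(y) + lam * w

    def gradient_descent(w, X, y, step=0.5, iters=200):
        # Full-batch gradient descent with a constant step size.
        for _ in range(iters):
            w = w - step * logistic_grad(w, X, y)
        return w

    def sgd(w, X, y, step=0.5, iters=2000, seed=0):
        # SGD: one uniformly sampled example per step, with a decaying
        # step size (a standard choice for convex problems).
        rng = np.random.default_rng(seed)
        n = len(y)
        for t in range(iters):
            i = rng.integers(n)
            g = logistic_grad(w, X[i:i + 1], y[i:i + 1])
            w = w - (step / np.sqrt(t + 1)) * g
        return w

    # Toy usage on linearly separable 2-d data.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(100, 2))
    y = np.sign(X @ np.array([1.0, -2.0]))
    print(gradient_descent(np.zeros(2), X, y))
    print(sgd(np.zeros(2), X, y))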
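
A second sketch (again illustrative, with assumed step sizes) contrasts two of the min-max methods listed above on the classic bilinear toy problem of minimizing over x and maximizing over y the function f(x, y) = x*y, whose saddle point is the origin. On this problem, simultaneous gradient descent-ascent spirals away from the solution, while the extragradient method converges to it.

    def bilinear_grad(x, y):
        # For f(x, y) = x * y: df/dx = y and df/dy = x.
        return y, x

    def gda(x, y, step=0.1, iters=100):
        # Simultaneous gradient descent-ascent: descend in x, ascend in y.
        for _ in range(iters):
            gx, gy = bilinear_grad(x, y)
            x, y = x - step * gx, y + step * gy
        return x, y

    def extragradient(x, y, step=0.1, iters=100):
        # Extragradient: take a look-ahead (extrapolation) step, then update
        # using the gradient evaluated at the look-ahead point.
        for _ in range(iters):
            gx, gy = bilinear_grad(x, y)
            xh, yh = x - step * gx, y + step * gy
            gx, gy = bilinear_grad(xh, yh)
            x, y = x - step * gx, y + step * gy
        return x, y

    print(gda(1.0, 1.0))            # moves away from (0, 0): GDA diverges here
    print(extragradient(1.0, 1.0))  # approaches the saddle point (0, 0)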

Grading

NOTES:

There will be a couple of assignments; the major evaluation components are a paper presentation and a final project. Details will be discussed in the first week of classes.

Materials

MATERIALS + SUPPLIES:

Reference Books

  • Convex Optimization, Boyd and Vandenberghe, 2004, 9780521833783
  • Numerical Optimization, Nocedal and Wright, 2006, 9780387303031
  • First-order Methods in Optimization, Beck, 2017, 9781611974980
  • Convex Optimization: Algorithms and Complexity, Bubeck, 2015, 9781601988607
  • Lectures on Convex Optimization, Nesterov, 2018, 9783319915777

REQUIRED READING NOTES:

Your personalized Course Material list, including digital and physical textbooks, is available through the SFU Bookstore website by entering your Computing ID at: shop.sfu.ca/course-materials/my-personalized-course-materials.

Registrar Notes:

ACADEMIC INTEGRITY: YOUR WORK, YOUR SUCCESS

SFU’s Academic Integrity website http://www.sfu.ca/students/academicintegrity.html is filled with information on what is meant by academic dishonesty, where you can find resources to help with your studies, and the consequences of cheating. Check out the site for more information and videos that help explain the issues in plain English.

Each student is responsible for his or her conduct as it affects the university community. Academic dishonesty, in whatever form, is ultimately destructive of the values of the university. Furthermore, it is unfair and discouraging to the majority of students who pursue their studies honestly. Scholarly integrity is required of all members of the university. http://www.sfu.ca/policies/gazette/student/s10-01.html