Ke Li
I am an Assistant Professor at Simon Fraser University in beautiful Vancouver, Canada. I was formerly a Member of the Institute for Advanced Study (IAS) in Princeton, and received my Ph.D. from UC Berkeley, where I was advised by Jitendra Malik, and my bachelor's in computer science from the University of Toronto. My research interests are in machine learning, computer vision and algorithms. I can be reached by e-mail at keli [at] sfu [dot] ca. While at the IAS, I organized the IAS Seminar Series on Theoretical Machine Learning with Sanjeev Arora - check out past seminars here and on Twitter.
Google Scholar | Twitter
Prospective MSc/PhD Students: I will be taking on a few new students this year. If you are interested in working with me, please fill out this form. Due to the volume of emails I receive, I am unfortunately unable to respond to every email; however, I review submissions through the form regularly and will reach out to selected students.
Prospective SFU Undergraduate/MPCS Students: If you are interested in working on a research or capstone project on AI or related areas, please fill out this form.
For a quick introduction to my research, see the following talk videos:
IAS Workshop on Theory of Deep Learning (Video) (Slides): this is on generative modelling and nearest neighbour search and is aimed at machine learning researchers
CMU ML/Duolingo Seminar (Video) (Slides): this is an extended version of the above (with more details on nearest neighbour search) and is aimed at machine learning graduate students
CIFAR Deep Learning and Reinforcement Learning Summer School (Video) (Slides): this is on generative modelling and is aimed at a broader audience in the style of a tutorial
IAS Special Year Seminar (Video): this is on meta-learning and is aimed at machine learning researchers
Research Directions
I am interested in tackling fundamental problems that cannot be solved using a straightforward application of conventional techniques. Below are the major areas I have contributed to:
- Generative Modelling (Slides 1) (Slides 2): Most generative models are latent variable models, including variational autoencoders (VAEs), generative adversarial nets (GANs) and diffusion probabilistic models. The gold standard for training generative models is maximum likelihood estimation (MLE); however, MLE is not feasible for modern, highly expressive generative models because the marginal log-likelihood is intractable. As a result, the evidence lower bound (ELBO), a lower bound on the marginal log-likelihood, is often maximized instead. The ELBO is a good approximation to the marginal log-likelihood only if the variational distribution is close to the true posterior, so an expressive variational distribution is required. Diffusion probabilistic models increase the expressivity of the variational distribution by taking it to be the result of applying many small transformations to an analytical distribution, but do so at the expense of sampling time. We are developing an alternative approach known as Implicit Maximum Likelihood Estimation (IMLE), which maximizes a different lower bound on the marginal log-likelihood without needing to choose a variational distribution; moreover, the quality of the approximation improves with the expressivity of the generative model. This makes it possible to sidestep the long sampling time of diffusion models while still maintaining a good approximation to MLE. A minimal sketch of the core objective follows the related papers below.
Related papers: Adaptive IMLE | Implicit Maximum Likelihood Estimation | Conditional IMLE | On the Implicit Assumptions of GANs
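For concreteness, here is a minimal sketch of one IMLE training step in PyTorch. This is an illustration under assumptions rather than the method from the papers above: the generator G, the latent dimensionality, the sample count and the plain Euclidean distance are all placeholders, and the published methods refine this basic recipe considerably.

```python
import torch

def imle_step(G, x, optimizer, n_samples=32, z_dim=64):
    """One gradient step on a bare-bones IMLE-style objective: each data
    point finds its nearest generated sample and pulls it closer."""
    z = torch.randn(n_samples, z_dim)   # i.i.d. latent codes
    samples = G(z)                      # (n_samples, data_dim)
    # Squared Euclidean distances between data points and samples.
    d = torch.cdist(x, samples) ** 2    # (batch_size, n_samples)
    # Each data point is matched to its nearest sample; note there is no
    # discriminator (unlike GANs) and no variational posterior (unlike VAEs).
    loss = d.min(dim=1).values.mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The direction of the matching is the important design choice: because every data point is covered by some generated sample, no mode of the data distribution can be dropped, which is how IMLE avoids the mode collapse commonly seen in GANs.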
- Learning to Optimize (Slides): While machine learning has been applied to a wide range of domains, one domain has conspicuously been left untouched: the design of the tools that power machine learning itself. In this line of work, we ask the following question: is it possible to automate the design of algorithms used in machine learning? We introduced the first framework for learning a general-purpose iterative optimization algorithm automatically. The key idea is to treat the design of an optimization algorithm as a reinforcement learning/optimal control problem, and to view a particular update formula (and therefore a particular optimization algorithm) as a particular policy. Finding the optimal policy then corresponds to finding the best optimization algorithm. We parameterize the update formula using a neural net and train it with reinforcement learning to avoid the problem of compounding errors. This work has inspired a variety of subsequent work on meta-learning. A sketch of the policy view appears after the related papers below.
Related papers: Learning to Optimize | Learning to Optimize Neural Nets
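The sketch below illustrates the update-formula-as-policy idea under stated assumptions: a small network (whose architecture, gradient-history features and hyperparameters are all illustrative) maps recent gradients to a per-parameter update, and a rollout applies it to an objective. The reinforcement learning training loop itself (the papers use guided policy search) is omitted.

```python
import torch
import torch.nn as nn

class LearnedOptimizer(nn.Module):
    """A policy: recent gradients in, a parameter update out."""
    def __init__(self, history_len=3, hidden=32):
        super().__init__()
        self.history_len = history_len
        self.net = nn.Sequential(
            nn.Linear(history_len, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, grad_history):               # (n_params, history_len)
        return self.net(grad_history).squeeze(-1)  # one update per parameter

def rollout(policy, objective, x0, steps=20):
    """Apply the learned update rule to an objective; the losses along
    the trajectory are what an RL reward would be computed from."""
    x = x0.clone().requires_grad_(True)
    history = torch.zeros(x.numel(), policy.history_len)
    losses = []
    for _ in range(steps):
        loss = objective(x)
        (grad,) = torch.autograd.grad(loss, x)
        history = torch.cat([history[:, 1:], grad.reshape(-1, 1)], dim=1)
        with torch.no_grad():
            x = x + policy(history).reshape(x.shape)  # the policy's "action"
        x.requires_grad_(True)
        losses.append(loss.item())
    return losses

# Example: run the (untrained) learned optimizer on a toy quadratic.
losses = rollout(LearnedOptimizer(), lambda x: (x ** 2).sum(), torch.randn(5))
```

Training such a policy by backpropagating through unrolled rollouts suffers from compounding errors over long horizons, which is why reinforcement learning is used instead.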
- Fast Nearest Neighbour Search (Slides): The method of k-nearest neighbours is widely used in machine learning, statistics, bioinformatics and database systems. Attempts at devising fast algorithms, however, have come up against a recurring obstacle: the curse of dimensionality. Almost all exact algorithms developed over the past 40 years exhibit a time complexity that is exponential in either the ambient or the intrinsic dimensionality, and this persistent failure to overcome the curse of dimensionality led to conjectures that doing so is impossible. We showed that, surprisingly, it is in fact possible: we developed an exact randomized algorithm whose query time complexity is linear in the ambient dimensionality and sublinear in the intrinsic dimensionality. The key insight is to avoid the popular strategy of space partitioning, which we argue gives rise to the curse of dimensionality. We demonstrated a speedup of 1-2 orders of magnitude over locality-sensitive hashing (LSH). A simplified sketch of the indexing idea appears after the related papers below.
Related papers: Fast k-Nearest Neighbour Search via Dynamic Continuous Indexing | Fast k-Nearest Neighbour Search via Prioritized DCI
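The sketch below conveys the indexing idea without space partitioning, under heavy simplification: points are projected onto random directions, each 1-D projection is kept sorted, and a query collects the points whose projections land near its own. All names and parameters here are made up, and this fixed-window variant is only approximate; the actual DCI and Prioritized DCI algorithms traverse candidates one at a time in order of projected distance and come with exact guarantees.

```python
import numpy as np

class SimpleDCI:
    """A didactic, approximate mini-variant of Dynamic Continuous Indexing."""
    def __init__(self, data, n_directions=10, seed=0):
        rng = np.random.default_rng(seed)
        self.data = data                                   # (n_points, dim)
        dirs = rng.standard_normal((n_directions, data.shape[1]))
        self.dirs = dirs / np.linalg.norm(dirs, axis=1, keepdims=True)
        self.proj = data @ self.dirs.T                     # 1-D projections
        self.order = np.argsort(self.proj, axis=0)         # sorted per direction

    def query(self, q, k=1, window=50):
        qp = self.dirs @ q                                 # project the query
        votes = {}
        # Check points whose projection is near the query's in each direction;
        # points close in high dimensions stay close in every 1-D projection.
        for j in range(self.dirs.shape[0]):
            col = self.order[:, j]
            pos = np.searchsorted(self.proj[col, j], qp[j])
            for i in col[max(0, pos - window):pos + window]:
                votes[i] = votes.get(i, 0) + 1
        # Keep the points seen in the most directions, then verify exactly.
        cand = sorted(votes, key=votes.get, reverse=True)[:10 * k]
        dists = np.linalg.norm(self.data[cand] - q, axis=1)
        return [cand[i] for i in np.argsort(dists)[:k]]
```

Note the absence of any tree or hash buckets: candidates are ranked by how consistently they appear near the query across directions, which is the property the real algorithm exploits to escape exponential dependence on dimensionality.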
Students
Selected Papers
Generative Modelling
- DiffFacto: Controllable Part-Based 3D Point Cloud Generation with Cross Diffusion (Project Page) (Code) (Video)
Kiyohiro Nakayama, Mikaela Angelina Uy, Jiahui Huang, Shi-Min Hu, Ke Li, Leonidas J Guibas
IEEE/CVF International Conference on Computer Vision (ICCV), 2023
- Adaptive IMLE for Few-shot Pretraining-free Generative Modelling (Project Page) (Code) (Video)
Mehran Aghabozorgi, Shichong Peng, Ke Li
International Conference on Machine Learning (ICML), 2023
- CHIMLE: Conditional Hierarchical IMLE for Multimodal Conditional Image Synthesis (Project Page) (Code) (Video)
Shichong Peng, Alireza Moazeni, Ke Li
Advances in Neural Information Processing Systems (NeurIPS), 2022
- Micro and Macro Level Graph Modeling for Graph Variational Auto-Encoders (Code) (Slides)
Kiarash Zahirnia, Oliver Schulte, Parmis Naddaf, Ke Li
Advances in Neural Information Processing Systems (NeurIPS), 2022
- Multimodal Shape Completion via Implicit Maximum Likelihood Estimation (Code)
Himanshu Arora, Saurabh Mishra, Shichong Peng, Ke Li, Ali Mahdavi-Amiri
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2022
- Variational Model Inversion Attacks (Code)
Kuan-Chieh Wang, Yan Fu, Ke Li, Ashish Khisti, Richard Zemel, Alireza Makhzani
Advances in Neural Information Processing Systems (NeurIPS), 2021
- Gotta Go Fast When Generating Data with Score-Based Models (Code) (Blog Post)
Alexia Jolicoeur-Martineau, Ke Li*, Rémi Piché-Taillefer*, Tal Kachman*, Ioannis Mitliagkas
arXiv:2105.14080, 2021
- Generating Unobserved Alternatives (Project Page) (Code)
Shichong Peng, Ke Li
arXiv:2011.01926, 2020
- Inclusive GAN: Improving Data and Minority Coverage in Generative Models (Code)
Ning Yu, Ke Li, Peng Zhou, Jitendra Malik, Larry Davis, Mario Fritz
European Conference on Computer Vision (ECCV), 2020
- Multimodal Image Synthesis with Conditional Implicit Maximum Likelihood Estimation
Ke Li*, Shichong Peng*, Tianhao Zhang*, Jitendra Malik
International Journal of Computer Vision (IJCV), 2020
- Diverse Image Synthesis from Semantic Layouts via Conditional IMLE (Project Page) (Code) (Talk)
Ke Li*, Tianhao Zhang*, Jitendra Malik
IEEE/CVF International Conference on Computer Vision (ICCV), 2019
- Non-Adversarial Image Synthesis with Generative Latent Nearest Neighbors (Code) (Talk)
Yedid Hoshen, Ke Li, Jitendra Malik
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019
- On the Implicit Assumptions of GANs (Poster)
Ke Li, Jitendra Malik
NeurIPS Workshop on Critiquing and Correcting Trends in Machine Learning, 2018
- Super-Resolution via Conditional Implicit Maximum Likelihood Estimation (Project Page) (Talk)
Ke Li*, Shichong Peng*, Jitendra Malik
arXiv:1810.01406, 2018
- Implicit Maximum Likelihood Estimation (Project Page) (Reviews) (Slides) (Poster) (Code) (Talk)
Ke Li, Jitendra Malik
arXiv:1809.09087, 2018
Neural Rendering
- PAPR in Motion: Seamless Point-level 3D Scene Interpolation (Project Page) (Code) (Video)
Shichong Peng, Yanshu Zhang, Ke Li
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (Highlight), 2024
- ProvNeRF: Modeling per Point Provenance in NeRFs as a Stochastic Process (Project Page)
Kiyohiro Nakayama, Mikaela Angelina Uy, Yang You, Ke Li, Leonidas Guibas
arXiv:2401.08140, 2024
- PAPR: Proximity Attention Point Rendering (Project Page) (Code) (Video)
Yanshu Zhang*, Shichong Peng*, Alireza Moazeni, Ke Li
Advances in Neural Information Processing Systems (NeurIPS) (Spotlight), 2023
- NeRF Revisited: Fixing Quadrature Instability in Volume Rendering (Project Page) (Code) (Video)
Mikaela Angelina Uy, Kiyohiro Nakayama, Guandao Yang, Rahul Krishna Thomas, Leonidas Guibas, Ke Li
Advances in Neural Information Processing Systems (NeurIPS), 2023
- SCADE: NeRFs from Space Carving With Ambiguity-Aware Depth Estimates (Project Page) (Code) (Video) (Slides) (Poster)
Mikaela Angelina Uy, Ricardo Martin-Brualla, Leonidas Guibas, Ke Li
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023
- Deep Medial Fields (Code)
Daniel Rebain, Ke Li, Vincent Sitzmann, Soroosh Yazdani, Kwang Moo Yi, Andrea Tagliasacchi
arXiv:2106.03804, 2021
- DeRF: Decomposed Radiance Fields
Daniel Rebain, Wei Jiang, Soroosh Yazdani, Ke Li, Kwang Moo Yi, Andrea Tagliasacchi
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021
Learning to Optimize
Fast Nearest Neighbour Search
Instance Segmentation
Other Topics
- Better Knowledge Retention through Metric Learning
Ke Li*, Shichong Peng*, Kailas Vodrahalli*, Jitendra Malik
arXiv:2011.13149, 2020
- Approximate Feature Collisions in Neural Nets
Ke Li*, Tianhao Zhang*, Jitendra Malik
Advances in Neural Information Processing Systems (NeurIPS), 2019
- Trajectory Normalized Gradients for Distributed Optimization
Jianqiao Wangni, Ke Li, Jianbo Shi, Jitendra Malik
arXiv:1901.08227, 2019
- Are All Training Examples Created Equal? An Empirical Study
Kailas Vodrahalli, Ke Li, Jitendra Malik
arXiv:1811.12569, 2018
- Efficient Feature Learning using Perturb-and-MAP
Ke Li, Kevin Swersky, Richard Zemel
NIPS Workshop on Perturbations, Optimization and Statistics, 2013
Teaching
CMPT 726: Machine Learning (Spring 2023)
CMPT 983 G200: Generative Models (Fall 2022)
CMPT 726: Machine Learning (Spring 2022)
CMPT 983 G200: Generative Models (Fall 2021)
CMPT 726: Machine Learning (Spring 2021)
CS 189: Introduction to Machine Learning (Summer 2018)
Talks
Regression Done Right
- University of British Columbia — Mar 2022
Overcoming Mode Collapse and the Curse of Dimensionality (Extended Version)
- University of Illinois at Urbana-Champaign — Feb 2020
- University of Washington — Jan 2020
- University of Texas at Austin — Jan 2020
- Vector Institute for Artificial Intelligence — Dec 2019
- Stanford University — Dec 2019
- Institute for Advanced Study (IAS) — Oct 2019
- Google NYC — Oct 2019
- Massachusetts Institute of Technology — Oct 2019
- Cornell Tech — Oct 2019
- Carnegie Mellon University — Oct 2019
- Simons Institute for the Theory of Computing — Jun 2019
- Google Seattle — Jun 2019
- DeepMind — Jun 2019
- University of California, Berkeley — May 2019
No More Mode Collapse
- Nvidia — Dec 2019
- Google Mountain View — Dec 2019
- BAIR/BDD Computer Vision Workshop — Sep 2019
- Adobe — Aug 2019
- Nielsen — Jul 2019
Implicit Maximum Likelihood Estimation
- BAIR/FAIR Workshop — Aug 2019
- University of California, Berkeley — Aug 2018
Tutorial on Implicit Generative Models
- BAIR Seminar — Aug 2019
- CIFAR Deep Learning and Reinforcement Learning Summer School (DLRLSS) — Jul 2019
Fast k-Nearest Neighbour Search via Dynamic Continuous Indexing
- Google NYC — Jan 2020
- Simons Institute for the Theory of Computing — Nov 2018
- NIPS 2017 Workshop on Nearest Neighbours for Modern Applications with Massive Data — Dec 2017
Meta-Learning: Why It's Hard and What We Can Do
- Institute for Advanced Study (IAS) — Apr 2020
Learning to Optimize
- BAIR Fall Workshop — Oct 2017
- University of Toronto — Jun 2017
- University of California, Berkeley — Feb 2017
Meta-Learning
- Intuition Machines Seminar — Apr 2017
Professional Service
Seminars:
Workshops:
Journals:
- Reviewer for IEEE Transactions on Information Theory
- Reviewer for IEEE Transactions on Signal Processing
- Reviewer for IEEE Transactions on Neural Networks and Learning Systems
- Reviewer for Information Sciences
Conferences:
- Program co-chair for CRV
- Meta-reviewer for AAAI and ICCV
- Reviewer for NeurIPS, ICML, ICLR, CVPR, ICCV, ECCV, AAAI, IJCAI and DeepMath