Google Scholar | Twitter

For a quick introduction to my research, see the following talk videos:

**Generative Modelling** (Slides 1) (Slides 2)**:** Most generative models are latent variable models, including variational autoencoders (VAEs), generative adversarial nets (GANs) and diffusion probabilistic models. The gold standard for training generative models is maximum likelihood estimation (MLE); however, MLE is not feasible for modern, highly expressive generative models because the marginal log-likelihood is intractable. As a result, the evidence lower bound (ELBO), a lower bound on the marginal log-likelihood, is often maximized instead. The ELBO is only a good approximation to the marginal log-likelihood if the variational distribution is close to the true posterior, so an expressive variational distribution is required. Diffusion probabilistic models increase the expressivity of the variational distribution by taking it to be the result of applying many small transformations to an analytical distribution, but do so at the expense of sampling time. We are developing an alternative approach, known as Implicit Maximum Likelihood Estimation (IMLE), that maximizes a different lower bound on the marginal log-likelihood without needing to choose a variational distribution; its approximation quality improves with the expressivity of the generative model. This makes it possible to sidestep the long sampling time of diffusion models while still maintaining a good approximation to MLE.

Related papers: Adaptive IMLE | Implicit Maximum Likelihood Estimation | Conditional IMLE | On the Implicit Assumptions of GANs
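The core IMLE idea can be sketched in a few lines: draw many samples from the generator, match each data point to its nearest sample, and pull only the matched samples toward the data. The sketch below is purely illustrative (toy 2-D data, a linear generator with hand-derived gradient, and made-up sizes), not the papers' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (all names/sizes are illustrative): data in R^2 and a linear
# "generator" G(z) = z @ W mapping latent codes z in R^3 to samples in R^2.
X = rng.normal(size=(50, 2))        # data points
W = 0.1 * rng.normal(size=(3, 2))   # generator weights, to be learned

lr, losses = 0.05, []
for _ in range(200):
    Z = rng.normal(size=(200, 3))   # draw fresh latent codes
    S = Z @ W                       # generated samples
    # IMLE matching: for each data point, find its nearest generated sample
    d2 = ((X[:, None, :] - S[None, :, :]) ** 2).sum(-1)   # (50, 200)
    nn = d2.argmin(axis=1)
    losses.append(d2.min(axis=1).mean())
    # Gradient of the mean matched squared distance w.r.t. W (derived by
    # hand for the linear generator); pulls matched samples toward the data
    W -= lr * Z[nn].T @ (S[nn] - X) / len(X)
```

Because every data point is matched to some sample, no data point can be ignored, which is how IMLE avoids the mode-dropping behaviour that adversarial objectives permit.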

**Learning to Optimize** (Slides)**:** While machine learning has been applied to a wide range of domains, one domain that has conspicuously been left untouched is the design of the tools that power machine learning itself. In this line of work, we ask the following question: is it possible to automate the design of algorithms used in machine learning? We introduced the first framework for automatically learning a general-purpose iterative optimization algorithm. The key idea is to treat the design of an optimization algorithm as a reinforcement learning/optimal control problem and to view a particular update formula (and therefore a particular optimization algorithm) as a particular policy. Finding the optimal policy then corresponds to finding the best optimization algorithm. We parameterize the update formula using a neural net and train it with reinforcement learning to avoid the problem of compounding errors. This has inspired a variety of subsequent work on meta-learning.

Related papers: Learning to Optimize | Learning to Optimize Neural Nets
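The "update formula as policy" view can be illustrated with a deliberately tiny stand-in: a two-coefficient update rule over gradient and momentum features, meta-trained here by random search on sampled quadratics (the actual work trains a neural-net policy with reinforcement learning; everything below is an illustrative toy):

```python
import numpy as np

rng = np.random.default_rng(0)

def run_policy(policy, steps=30):
    """Run an optimizer whose update formula is given by `policy` on a
    random 5-D quadratic f(x) = 0.5 * x^T A x; return the final loss."""
    A = np.diag(rng.uniform(0.5, 2.0, size=5))
    x = rng.normal(size=5)
    m = np.zeros(5)
    for _ in range(steps):
        g = A @ x                              # gradient at x
        m = 0.9 * m + g                        # momentum feature
        x = x + policy[0] * g + policy[1] * m  # the "learned" update formula
    return 0.5 * x @ A @ x

# Stand-in for meta-training: random search over the policy's two
# coefficients, scoring each candidate by its average final loss on a
# handful of sampled objectives.
best_policy, best_score = None, np.inf
for _ in range(200):
    cand = rng.uniform(-0.5, 0.1, size=2)
    score = np.mean([run_policy(cand) for _ in range(5)])
    if score < best_score:
        best_policy, best_score = cand, score

# A do-nothing policy leaves each objective at its initial value
baseline = np.mean([run_policy(np.zeros(2)) for _ in range(10)])
```

The found `best_policy` drives the sampled objectives far below the do-nothing baseline; finding the optimal policy over a richer policy class is exactly the search for a better optimization algorithm.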

**Fast Nearest Neighbour Search** (Slides)**:** The method of *k*-nearest neighbours is widely used in machine learning, statistics, bioinformatics and database systems. Attempts at devising fast algorithms, however, have come up against a recurring obstacle: the curse of dimensionality. Almost all exact algorithms developed over the past 40 years exhibit a time complexity that is exponential in the ambient or intrinsic dimensionality, and this persistent failure to overcome the curse of dimensionality led to conjectures that doing so is impossible. We showed that, surprisingly, it is in fact possible: we developed an exact randomized algorithm whose query time complexity is linear in the ambient dimensionality and sublinear in the intrinsic dimensionality. The key insight is to avoid the popular strategy of space partitioning, which we argue gives rise to the curse of dimensionality. We demonstrated a speedup of 1–2 orders of magnitude over locality-sensitive hashing (LSH).

Related papers: Fast *k*-Nearest Neighbour Search via Dynamic Continuous Indexing | Fast *k*-Nearest Neighbour Search via Prioritized DCI
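The key idea of avoiding space partitioning can be sketched as follows: index each point by its projections onto a few random directions, and answer a query by taking, as candidates, the points whose 1-D projections lie closest to the query's projections, computing true distances only for those. This is a simplified illustration (the sizes and the brute-force candidate scan are made up; the actual algorithm keeps projections in sorted lists and retrieves candidates with a priority queue, with exact-recovery guarantees):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: n points in d dimensions, indexed by their projections
# onto m random unit directions (no tree, grid, or hash buckets).
n, d, m = 2000, 64, 10
X = rng.normal(size=(n, d))
U = rng.normal(size=(m, d))
U /= np.linalg.norm(U, axis=1, keepdims=True)
P = X @ U.T                       # (n, m): projection of every point

def query(q, n_cand=50):
    """Return the index of the closest point to q within the candidate
    set: the points whose 1-D projections are nearest to q's projection
    in each direction. Exact as long as the true nearest neighbour
    survives into the candidate set."""
    qp = q @ U.T
    cand = set()
    for j in range(m):
        closest = np.argpartition(np.abs(P[:, j] - qp[j]), n_cand)[:n_cand]
        cand.update(closest.tolist())
    cand = np.fromiter(cand, dtype=int)
    # True distances are computed only for the candidate set
    return cand[np.linalg.norm(X[cand] - q, axis=1).argmin()]

# Query with a slightly perturbed copy of point 123
found = query(X[123] + 0.01 * rng.normal(size=d))
```

A true near neighbour projects close to the query along every direction, so it is very unlikely to be far down all m sorted projection lists at once; that is what keeps the candidate set small without any exponential dependence on dimensionality.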

- Mehran Aghabozorgi
- Tristan Engst
- Alireza Moazeni
- Shichong Peng
- Chirag Vashist
- Yanshu Zhang

- DiffFacto: Controllable Part-Based 3D Point Cloud Generation with Cross Diffusion (Project Page) (Code) (Video)

Kiyohiro Nakayama, Mikaela Angelina Uy, Jiahui Huang, Shi-Min Hu, **Ke Li**, Leonidas J Guibas

*IEEE/CVF International Conference on Computer Vision (ICCV)*, 2023

- Adaptive IMLE for Few-shot Pretraining-free Generative Modelling (Project Page) (Code) (Video)

Mehran Aghabozorgi, Shichong Peng, **Ke Li**

*International Conference on Machine Learning (ICML)*, 2023

- CHIMLE: Conditional Hierarchical IMLE for Multimodal Conditional Image Synthesis (Project Page) (Code) (Video)

Shichong Peng, Alireza Moazeni, **Ke Li**

*Advances in Neural Information Processing Systems (NeurIPS)*, 2022

- Micro and Macro Level Graph Modeling for Graph Variational Auto-Encoders (Code) (Slides)

Kiarash Zahirnia, Oliver Schulte, Parmis Naddaf, **Ke Li**

*Advances in Neural Information Processing Systems (NeurIPS)*, 2022

- Multimodal Shape Completion via Implicit Maximum Likelihood Estimation (Code)

Himanshu Arora, Saurabh Mishra, Shichong Peng, **Ke Li**, Ali Mahdavi-Amiri

*IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops*, 2022

- Variational Model Inversion Attacks (Code)

Kuan-Chieh Wang, Yan Fu, **Ke Li**, Ashish Khisti, Richard Zemel, Alireza Makhzani

*Advances in Neural Information Processing Systems (NeurIPS)*, 2021

- Gotta Go Fast When Generating Data with Score-Based Models (Code) (Blog Post)

Alexia Jolicoeur-Martineau, **Ke Li***, Rémi Piché-Taillefer*, Tal Kachman*, Ioannis Mitliagkas

*arXiv:2105.14080*, 2021

- Generating Unobserved Alternatives (Project Page) (Code)

Shichong Peng, **Ke Li**

*arXiv:2011.01926*, 2020

- Inclusive GAN: Improving Data and Minority Coverage in Generative Models (Code)

Ning Yu, **Ke Li**, Peng Zhou, Jitendra Malik, Larry Davis, Mario Fritz

*European Conference on Computer Vision (ECCV)*, 2020

- Multimodal Image Synthesis with Conditional Implicit Maximum Likelihood Estimation

**Ke Li***, Shichong Peng*, Tianhao Zhang*, Jitendra Malik

*International Journal of Computer Vision (IJCV)*, 2020

- Diverse Image Synthesis from Semantic Layouts via Conditional IMLE (Project Page) (Code) (Talk)

**Ke Li***, Tianhao Zhang*, Jitendra Malik

*IEEE/CVF International Conference on Computer Vision (ICCV)*, 2019

- Non-Adversarial Image Synthesis with Generative Latent Nearest Neighbors (Code) (Talk)

Yedid Hoshen, **Ke Li**, Jitendra Malik

*IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, 2019

- On the Implicit Assumptions of GANs (Poster)

**Ke Li**, Jitendra Malik

*NeurIPS Workshop on Critiquing and Correcting Trends in Machine Learning*, 2018

- Super-Resolution via Conditional Implicit Maximum Likelihood Estimation (Project Page) (Talk)

**Ke Li***, Shichong Peng*, Jitendra Malik

*arXiv:1810.01406*, 2018

- Implicit Maximum Likelihood Estimation (Project Page) (Reviews) (Slides) (Poster) (Code) (Talk)

**Ke Li**, Jitendra Malik

*arXiv:1809.09087*, 2018

- PAPR in Motion: Seamless Point-level 3D Scene Interpolation (Project Page) (Code) (Video)

Shichong Peng, Yanshu Zhang, **Ke Li**

*IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)* **(Highlight)**, 2024

- ProvNeRF: Modeling per Point Provenance in NeRFs as a Stochastic Process (Project Page)

Kiyohiro Nakayama, Mikaela Angelina Uy, Yang You, **Ke Li**, Leonidas Guibas

*arXiv:2401.08140*, 2024

- PAPR: Proximity Attention Point Rendering (Project Page) (Code) (Video)

Yanshu Zhang*, Shichong Peng*, Alireza Moazeni, **Ke Li**

*Advances in Neural Information Processing Systems (NeurIPS)* **(Spotlight)**, 2023

- NeRF Revisited: Fixing Quadrature Instability in Volume Rendering (Project Page) (Code) (Video)

Mikaela Angelina Uy, Kiyohiro Nakayama, Guandao Yang, Rahul Krishna Thomas, Leonidas Guibas, **Ke Li**

*Advances in Neural Information Processing Systems (NeurIPS)*, 2023

- SCADE: NeRFs from Space Carving With Ambiguity-Aware Depth Estimates (Project Page) (Code) (Video) (Slides) (Poster)

Mikaela Angelina Uy, Ricardo Martin-Brualla, Leonidas Guibas, **Ke Li**

*IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, 2023

- Deep Medial Fields (Code)

Daniel Rebain, **Ke Li**, Vincent Sitzmann, Soroosh Yazdani, Kwang Moo Yi, Andrea Tagliasacchi

*arXiv:2106.03804*, 2021

- DeRF: Decomposed Radiance Fields

Daniel Rebain, Wei Jiang, Soroosh Yazdani, **Ke Li**, Kwang Moo Yi, Andrea Tagliasacchi

*IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, 2021

- Learning to Optimize Neural Nets (Slides) (Blog Post) (Talk)

**Ke Li**, Jitendra Malik

*arXiv:1703.00441*, 2017

- Learning to Optimize (ICLR Version) (Slides) (Poster) (Code) (Blog Post) (Talk)

**Ke Li**, Jitendra Malik

*arXiv:1606.01885*, 2016 and *International Conference on Learning Representations (ICLR)*, 2017

- IceFormer: Accelerated Inference with Long-Sequence Transformers on CPUs (Project Page) (Code) (Video)

Yuzhen Mao, Martin Ester, **Ke Li**

*International Conference on Learning Representations (ICLR)*, 2024

- Fast *k*-Nearest Neighbour Search via Prioritized DCI (Talk) (Slides) (Project Page) (CPU Code) (CUDA Code) (Poster)

**Ke Li**, Jitendra Malik

*International Conference on Machine Learning (ICML)*, 2017

- Fast *k*-Nearest Neighbour Search via Dynamic Continuous Indexing (Slides) (Project Page) (Code)

**Ke Li**, Jitendra Malik

*International Conference on Machine Learning (ICML)*, 2016

- Amodal Instance Segmentation

**Ke Li**, Jitendra Malik

*European Conference on Computer Vision (ECCV)*, 2016

- Iterative Instance Segmentation

**Ke Li**, Bharath Hariharan, Jitendra Malik

*IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, 2016

- Better Knowledge Retention through Metric Learning

**Ke Li***, Shichong Peng*, Kailas Vodrahalli*, Jitendra Malik

*arXiv:2011.13149*, 2020

- Approximate Feature Collisions in Neural Nets

**Ke Li***, Tianhao Zhang*, Jitendra Malik

*Advances in Neural Information Processing Systems (NeurIPS)*, 2019

- Trajectory Normalized Gradients for Distributed Optimization

Jianqiao Wangni, **Ke Li**, Jianbo Shi, Jitendra Malik

*arXiv:1901.08227*, 2019

- Are All Training Examples Created Equal? An Empirical Study

Kailas Vodrahalli, **Ke Li**, Jitendra Malik

*arXiv:1811.12569*, 2018

- Efficient Feature Learning using Perturb-and-MAP

**Ke Li**, Kevin Swersky, Richard Zemel

*NIPS Workshop on Perturbations, Optimization and Statistics*, 2013

- University of British Columbia — Mar 2022

- University of Illinois at Urbana-Champaign — Feb 2020
- University of Washington — Jan 2020
- University of Texas at Austin — Jan 2020
- Vector Institute for Artificial Intelligence — Dec 2019
- Stanford University — Dec 2019
- Institute for Advanced Study (IAS) — Oct 2019
- Google NYC — Oct 2019
- Massachusetts Institute of Technology — Oct 2019
- Cornell Tech — Oct 2019
- Carnegie Mellon University — Oct 2019
- Simons Institute for the Theory of Computing — Jun 2019
- Google Seattle — Jun 2019
- DeepMind — Jun 2019
- University of California, Berkeley — May 2019

- Nvidia — Dec 2019
- Google Mountain View — Dec 2019
- BAIR/BDD Computer Vision Workshop — Sep 2019
- Adobe — Aug 2019
- Nielsen — Jul 2019

- BAIR/FAIR Workshop — Aug 2019
- University of California, Berkeley — Aug 2018

- BAIR Seminar — Aug 2019
- CIFAR Deep Learning and Reinforcement Learning Summer School (DLRLSS) — Jul 2019

- Google NYC — Jan 2020
- Simons Institute for the Theory of Computing — Nov 2018
- NIPS 2017 Workshop on Nearest Neighbours for Modern Applications with Massive Data — Dec 2017

- Institute for Advanced Study (IAS) — Apr 2020

- BAIR Fall Workshop — Oct 2017
- University of Toronto — Jun 2017
- University of California, Berkeley — Feb 2017

- Intuition Machines Seminar — Apr 2017

- Co-organizer of IAS Seminar Series on Theoretical Machine Learning

- Lead organizer of BIRS Workshop on 3D Generative Models

- Reviewer for IEEE Transactions on Information Theory
- Reviewer for IEEE Transactions on Signal Processing
- Reviewer for IEEE Transactions on Neural Networks and Learning Systems
- Reviewer for Information Sciences

- Program co-chair for CRV
- Meta-reviewer for AAAI and ICCV
- Reviewer for NeurIPS, ICML, ICLR, CVPR, ICCV, ECCV, AAAI, IJCAI and DeepMath