talks

These are my talks, at varying levels of formality, with abstracts (click to expand) and links to video where they exist.

This list is mainly for personal reference, but to those who want to watch a talk of mine and don’t know which: more recent ones should be better, assuming that my ability to explain things increases monotonically.

Last updated July 29, 2023.

teaching

2023-07-24 to 2023-07-28. Quantum and quantum-inspired linear algebra. PCMI 2023 Graduate Summer School.

An exciting recent line of work unifies many interesting quantum algorithms under a powerful linear algebraic framework known as “quantum singular value transformation” (QSVT). We’ll introduce this framework, build some tools in polynomial approximation that are helpful for applying it, and investigate what kinds of results it can achieve. Our goal will be to understand how and when to use QSVT in quantum algorithm design, and ultimately, whether it can reveal new quantum speedups for interesting problems in data analysis, machine learning, or quantum simulation.

talks

2023-05-30. An improved classical singular value transformation for quantum machine learning. Google Quantum CS Theory seminar.

The field of quantum machine learning (QML) has produced many proposals for attaining large quantum speedups for computationally intensive tasks in machine learning and data analysis. However, such speedups can only manifest if classical algorithms for these tasks perform significantly slower than quantum ones. We study the quantum-classical gap of several fundamental problems in QML through the quantum singular value transformation (QSVT) framework. QSVT, introduced by Gilyén, Su, Low, and Wiebe [GSLW, STOC’19], unifies all major types of quantum speedup; in particular, QSVT on low-rank classical data encompasses a wide variety of QML proposals. We challenge these proposals by providing a classical algorithm that matches the performance of QSVT in this regime, up to a small polynomial overhead. We show that, given a bounded matrix A, a vector b, a bounded degree-d polynomial p, and linear-time pre-processing, we can output a description of a vector v such that ‖v − p(A)b‖ ≤ ε‖b‖ in Õ(d^9 ‖A‖_F^4 / ε^2) time. This improves upon the best known classical algorithm [CGLLTW, STOC’20] and narrows the gap with QSVT, which, after linear-time pre-processing to load input into a quantum-accessible memory, can output a measurement from the state |p(A)b〉 in Õ(d ‖A‖_F) time. Our key insight is to combine the Clenshaw recurrence, an iterative method for computing matrix polynomials, with sketching techniques to simulate QSVT classically. Based on joint work with Ainesh Bakshi.
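As a toy illustration (my addition for this page, not from the talk): the Clenshaw recurrence evaluates a Chebyshev-basis polynomial of a matrix applied to a vector using only d matrix-vector products, which is the iterative core the abstract refers to.

```python
import numpy as np

def clenshaw_matvec(coeffs, A, b):
    """Evaluate p(A) @ b via the Clenshaw recurrence, where
    p = sum_k coeffs[k] * T_k and T_k is the degree-k Chebyshev
    polynomial of the first kind. Uses only matrix-vector
    products with A (one per degree)."""
    d = len(coeffs) - 1
    u_next = np.zeros_like(b, dtype=float)   # plays the role of u_{k+2}
    u_curr = np.zeros_like(b, dtype=float)   # plays the role of u_{k+1}
    for k in range(d, 0, -1):
        # Standard Clenshaw step: u_k = 2 A u_{k+1} - u_{k+2} + c_k b
        u_new = 2 * (A @ u_curr) - u_next + coeffs[k] * b
        u_next, u_curr = u_curr, u_new
    # Final combination: p(A) b = A u_1 - u_2 + c_0 b
    return (A @ u_curr) - u_next + coeffs[0] * b
```

For example, with coefficients [0, 0, 1] (i.e., p = T_2, so p(A) = 2A² − I) this reproduces 2A²b − b exactly.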

2023-05-10. Query-optimal estimation of unitary channels in diamond distance. Cornell’s Junior Theorists’ Workshop.

We will present an algorithm to learn an unknown unitary channel acting on a d-dimensional qudit to diamond-norm error ε, using O(d²/ε) applications of the unknown channel and only one qudit. This algorithm uses the optimal number of qudits and queries, even if one has access to the inverse or controlled versions of the unknown unitary. This improves over prior work, which achieves entanglement infidelity δ using O(d²/√δ) applications in parallel, thereby requiring Ω(d²) qudits. Based on joint work with Jeongwan Haah, Robin Kothari, and Ryan O’Donnell.

2023-04-27. Query-optimal estimation of unitary channels in diamond distance. MIT Algorithms and Complexity Seminar.

We will present an algorithm to learn an unknown unitary channel acting on a d-dimensional qudit to diamond-norm error ε, using O(d²/ε) applications of the unknown channel and only one qudit. This algorithm uses the optimal number of qudits and queries, even if one has access to the inverse or controlled versions of the unknown unitary. This improves over prior work, which achieves entanglement infidelity δ using O(d²/√δ) applications in parallel, thereby requiring Ω(d²) qudits. Based on joint work with Jeongwan Haah, Robin Kothari, and Ryan O’Donnell.

2023-03-31. Dequantization (and Other Work). Cold Quanta.
2023-01-13. Two results on estimation of unitary channels. MIT QI seminar.

Since quantum mechanics models the evolution of a closed system as a unitary operator, a natural question is how to interact with such a dynamics to learn its corresponding operator. I will present two algorithms that learn a unitary channel given the ability to apply it. One learns the coefficients of a local Hamiltonian from its real-time evolution, and one learns a general unitary channel to diamond norm distance with the optimal number of applications. Both of these results give polynomial improvements over prior work, and follow from simple conceptual insights about the structure of the hypothesis space. Based on joint work with Jeongwan Haah, Robin Kothari, and Ryan O’Donnell.

2023-01-05. Query-optimal estimation of unitary channels in diamond distance. Quarterly Theory Workshop: 2022 Junior Theorists Workshop.

We will present a quantum algorithm to learn an unknown unitary channel acting on a d-dimensional qudit to diamond-norm error ε, using O(d²/ε) applications of the unknown channel and only one qudit. This improves over prior work by polynomial factors. We also show that Ω(d²/ε) applications are necessary, even if one has access to the inverse or controlled versions of the unknown unitary. Thus, our algorithm uses both the optimal number of qudits and the optimal number of applications. Based on joint work with Jeongwan Haah, Robin Kothari, and Ryan O’Donnell.

2022-12-12. Query-optimal estimation of unitary channels in diamond distance. Sandia QPL Seminar.

We will present an algorithm to learn an unknown unitary channel acting on a d-dimensional qudit to diamond-norm error ε, using O(d²/ε) applications of the unknown channel and only one qudit. This algorithm uses the optimal number of qudits, and the optimal number of queries up to a sub-logarithmic factor, even if one has access to the inverse or controlled versions of the unknown unitary. This improves over prior work, which achieves entanglement infidelity δ using O(d²/√δ) applications in parallel, thereby requiring Ω(d²) qudits. Based on joint work with Jeongwan Haah, Robin Kothari, and Ryan O’Donnell.

2022-12-02. Query-optimal estimation of unitary channels in diamond distance. CMU theory seminar.
2022-10-20. Query-optimal estimation of unitary channels in diamond distance. SQuInT 2022.

We will present an algorithm to learn an unknown unitary channel acting on a d-dimensional qudit to diamond-norm error ε, using O(d²/ε) applications of the unknown channel and only one qudit. This algorithm uses the optimal number of qudits, and the optimal number of queries up to a sub-logarithmic factor, even if one has access to the inverse or controlled versions of the unknown unitary. This improves over prior work, which achieves entanglement infidelity δ using O(d²/√δ) applications in parallel, thereby requiring Ω(d²) qudits. Based on joint work with Jeongwan Haah, Robin Kothari, and Ryan O’Donnell.

2022-10-18. Query-optimal estimation of unitary channels in diamond distance. Quantum Innovators in Computer Science and Mathematics (University of Waterloo).

We will present an algorithm to learn an unknown unitary channel acting on a d-dimensional qudit to diamond-norm error ε, using O(d²/ε) applications of the unknown channel and only one qudit. This algorithm uses the optimal number of qudits, and the optimal number of queries up to a sub-logarithmic factor, even if one has access to the inverse or controlled versions of the unknown unitary. This improves over prior work, which achieves entanglement infidelity δ using O(d²/√δ) applications in parallel, thereby requiring Ω(d²) qudits. Based on joint work with Jeongwan Haah, Robin Kothari, and Ryan O’Donnell.

2022-10-05. Query-optimal estimation of unitary channels in diamond distance. Perimeter Institute QI seminar.
2022-07-15. Equioscillation (theme & variations). UW theory lunch.

I talk about the idea of equioscillation, which comes up a lot in approximation theory. I’ll introduce Chebyshev polynomials, a very useful family of polynomials (and the polynomials that equioscillate the most). I’ll also prove some nice results that use equioscillation, as many as time permits.
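As a concrete numerical illustration (added here, not part of the talk): since T_d(cos θ) = cos(dθ), the degree-d Chebyshev polynomial attains its extreme values ±1 on [−1, 1], with alternating signs, at the d+1 points cos(kπ/d). That alternation is exactly equioscillation.

```python
import numpy as np

d = 5
# Extrema of T_d on [-1, 1]: x_k = cos(k*pi/d), k = 0, ..., d.
x = np.cos(np.arange(d + 1) * np.pi / d)
T_d = np.polynomial.chebyshev.Chebyshev.basis(d)
vals = T_d(x)
# T_d(cos(k*pi/d)) = cos(k*pi) = (-1)^k: the values alternate 1, -1, 1, ...
print(np.round(vals, 12))
```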

2022-03-30. Optimal learning of quantum Hamiltonians from high-temperature Gibbs states. MIT A+C seminar.

Hamiltonian learning is the problem of learning a geometrically local \(N\)-qubit Hamiltonian \(H\) to precision \(\varepsilon\), supposing we are given copies of its Gibbs state \(\rho = \exp(-\beta H)/\operatorname{Tr}(\exp(-\beta H))\) at a known inverse temperature \(\beta\). This is the quantum generalization of the problem of learning the parameters of a Markov random field. I will present an algorithm that solves this problem with optimal sample complexity (number of copies of \(\rho\) needed) and optimal time complexity in the high-temperature (low \(\beta\)) regime. This improves on the previous best algorithm given by Anshu et al. (FOCS 2020), for the high-temperature regime. This work was done jointly with Jeongwan Haah and Robin Kothari.

2022-03-07. Optimal learning of quantum Hamiltonians from high-temperature Gibbs states. QIP 2022.
2022-01-25. On quantum linear algebra for machine learning. IPAM Quantum Numerical Linear Algebra workshop.

We will discuss quantum singular value transformation (QSVT), a simple unifying framework for quantum linear algebra algorithms developed by Gilyén, Su, Low, and Wiebe. QSVT is often applied to try to achieve quantum speedups for machine learning problems. We will see the typical structure of such an application, the barriers to achieving super-polynomial quantum speedup, and the state of the literature that’s attempting to bypass these barriers. Along the way, we’ll also see an interesting connection between quantum linear algebra and classical sampling and sketching algorithms (explored in the form of “quantum-inspired” classical algorithms).

2021-12-03. “Sufficient statistics considered harmful”. UW theory lunch talk.

This talk was a brief exposition of Andrea Montanari’s paper “Computational implications of reducing data to sufficient statistics.”

2021-11-23. Optimal learning of quantum Hamiltonians from high-temperature Gibbs states. University of Washington theory seminar.

Hamiltonian learning is the problem of learning a geometrically local \(N\)-qubit Hamiltonian \(H\) to precision \(\varepsilon\), supposing we are given copies of its Gibbs state \(\rho = \exp(-\beta H)/\operatorname{Tr}(\exp(-\beta H))\) at a known inverse temperature \(\beta\). I will present an algorithm that solves this problem with optimal sample complexity (number of copies of \(\rho\) needed) and optimal time complexity in the high-temperature (low \(\beta\)) regime. This improves on the previous best algorithm given by Anshu et al. (FOCS 2020), for the high-temperature regime. This work was done jointly with Jeongwan Haah and Robin Kothari.

2021-10-27. On quantum linear algebra for machine learning. Perimeter Institute colloquium.

We will discuss quantum singular value transformation (QSVT), a simple unifying framework for quantum linear algebra algorithms developed by Gilyén, Su, Low, and Wiebe. QSVT is often applied to try to achieve quantum speedups for machine learning problems. We will see the typical structure of such an application, the barriers to achieving super-polynomial quantum speedup, and the state of the literature that’s attempting to bypass these barriers. Along the way, we’ll also see an interesting connection between quantum linear algebra and classical sampling and sketching algorithms (explored in the form of “quantum-inspired” classical algorithms).

2021-07-12. Optimal learning of quantum Hamiltonians from high-temperature Gibbs states. Simons Institute, Quantum Wave in Computing Reunion.

Hamiltonian learning is the problem of learning a geometrically local \(N\)-qubit Hamiltonian \(H\) to precision \(\varepsilon\), supposing we are given copies of its Gibbs state \(\rho = \exp(-\beta H)/\operatorname{Tr}(\exp(-\beta H))\) at a known inverse temperature \(\beta\). I will present an algorithm that solves this problem with optimal sample complexity (number of copies of \(\rho\) needed) and optimal time complexity in the high-temperature (low \(\beta\)) regime. This improves on the previous best algorithm given by Anshu et al. (FOCS 2020), for the high-temperature regime. This work was done jointly with Jeongwan Haah and Robin Kothari.

2021-04-06. On quantum linear algebra for machine learning. University of Illinois, IQIST seminar series.

We will discuss quantum singular value transformation (QSVT), a simple unifying framework for quantum linear algebra algorithms developed by Gilyén et al. QSVT is often applied to try to achieve quantum speedups for machine learning problems. We will see the typical structure of such an application, the barriers to achieving super-polynomial quantum speedup, and the state of the literature that’s attempting to bypass these barriers. Along the way, we’ll also see an interesting connection between quantum linear algebra and classical sampling and sketching algorithms (explored in the form of “quantum-inspired” classical algorithms).

2021-03-30. On quantum linear algebra for machine learning. Simons Institute, quantum colloquium.

We will discuss quantum singular value transformation (QSVT), a simple unifying framework for quantum linear algebra algorithms developed by Gilyén et al. QSVT is often applied to try to achieve quantum speedups for machine learning problems. We will see the typical structure of such an application, the barriers to achieving super-polynomial quantum speedup, and the state of the literature that’s attempting to bypass these barriers. Along the way, we’ll also see an interesting connection between quantum linear algebra and classical sampling and sketching algorithms (explored in the form of “quantum-inspired” classical algorithms).

2020-11-13. On Lipschitz continuity of matrix functions. UW theory lunch talk.

This was a casual 30-minute talk around a cute result I’ve tweeted about before: For \(A, B \in \mathbb{R}^{n\times n}\), \(\| |A| - |B| \| = O(\log(n)\|A - B\|)\), where \(|\cdot|\) denotes taking the absolute value of the eigenvalues of the matrix and \(\|\cdot\|\) denotes spectral norm. The log factor is tight!
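A quick numerical sanity check of the inequality (my addition; restricted to symmetric matrices, where \(|\cdot|\) is well defined by taking absolute values of the eigenvalues):

```python
import numpy as np

def mat_abs(A):
    """|A| for symmetric A: absolute value applied to the eigenvalues."""
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.abs(w)) @ V.T

rng = np.random.default_rng(0)
n = 50
M = rng.standard_normal((n, n))
A = (M + M.T) / 2                       # random symmetric matrix
E = rng.standard_normal((n, n))
B = A + 1e-3 * (E + E.T) / 2            # small symmetric perturbation

lhs = np.linalg.norm(mat_abs(A) - mat_abs(B), 2)     # ||  |A| - |B|  ||
rhs = np.log(n) * np.linalg.norm(A - B, 2)           # log(n) * || A - B ||
print(lhs <= rhs)   # expect True for typical random instances
```

Of course, a random instance only illustrates the bound; the log factor is needed for worst-case (adversarial) pairs.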

2020-11-05. An improved quantum-inspired algorithm for linear regression. UW Allen School theory group colloquium.

Can quantum computers be used to perform machine learning faster? This still-open research question is crucial to understanding the future applications of quantum computing. I’ll briefly introduce one type of algorithm—quantum linear algebra—that is a leading candidate for quantum speedups in machine learning, and discuss the issues such algorithms have. In this talk, I will focus on the task of solving low-rank systems of linear equations, which can be done by a quantum linear algebra algorithm. For this setting, I, in joint work with András Gilyén and Zhao Song, show that a classical algorithm can perform just as well as this quantum algorithm, with a runtime “only” an 8th power greater. This improves on prior work, which gave a classical algorithm with runtime a 28th power greater. Our classical algorithm is, in fact, just stochastic gradient descent, giving some hope that in practice this 1-to-8 gap between quantum and classical is smaller than it appears in theory. (No prior knowledge of quantum computing will be assumed for this talk.)
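To give a flavor of the kind of row-sampled stochastic iteration involved (a toy sketch I've added here, not the algorithm from the paper): randomized Kaczmarz is stochastic gradient descent on least squares where each step samples a row with probability proportional to its squared norm, the same length-squared distribution that appears throughout this line of work.

```python
import numpy as np

def kaczmarz(A, b, iters=3000, seed=0):
    """Randomized Kaczmarz for a consistent system Ax = b.
    Each step samples row i with probability ||A_i||^2 / ||A||_F^2
    and projects the iterate onto the hyperplane A_i . x = b_i."""
    rng = np.random.default_rng(seed)
    row_norms = np.sum(A**2, axis=1)
    probs = row_norms / row_norms.sum()
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        i = rng.choice(len(b), p=probs)
        # SGD-style update: move along A_i to satisfy equation i exactly.
        x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x
```

On a consistent overdetermined system, the iterate converges to the solution at a rate governed by the smallest singular value of A.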

2020-09-28. On quantum linear algebra vs classical linear algebra. UT quantum group meeting.
2020-06-23. Sampling-based sublinear low-rank matrix arithmetic framework for dequantizing quantum machine learning. STOC 2020.

5-minute live version.

2020-02-25. A classical algorithm framework for dequantizing quantum machine learning. Simons Institute, Quantum Algorithms workshop.

We discuss a classical analogue to the singular value transformation framework for quantum linear algebra algorithms developed by Gilyén et al. The proofs underlying this classical framework have natural connections to well-known ideas in randomized numerical linear algebra. Namely, our proofs are based on one key observation: approximating matrix products is easy in the quantum-inspired input model. This primitive’s simplicity makes finding classical analogues to quantum machine learning algorithms straightforward and intuitive. We present sketches of the key proofs of our framework, as well as of its applications to dequantizing low-rank quantum machine learning.

Joint work with Nai-Hui Chia, András Gilyén, Tongyang Li, Han-Hsuan Lin, and Chunhao Wang.
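The matrix-product primitive mentioned above can be sketched in a few lines (a toy version I've added here): sampling rows with probability proportional to their squared norms gives an unbiased, low-variance estimator of a matrix product.

```python
import numpy as np

def approx_matmul(A, B, s, seed=0):
    """Estimate A.T @ B by length-squared sampling s rows.
    Row i is chosen with probability p_i = ||A_i||^2 / ||A||_F^2;
    each sample outer(A_i, B_i) / p_i is an unbiased estimator of
    A.T @ B, and averaging s of them shrinks the variance."""
    rng = np.random.default_rng(seed)
    row_norms = np.sum(A**2, axis=1)
    probs = row_norms / row_norms.sum()
    idx = rng.choice(A.shape[0], size=s, p=probs)
    return sum(np.outer(A[i], B[i]) / probs[i] for i in idx) / s
```

The standard analysis shows the expected Frobenius error decays like ‖A‖_F ‖B‖_F / √s, independent of the matrix dimensions.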

2020-01-28. Quantum-inspired classical linear algebra. Simons Institute, The Quantum Wave in Computing Boot Camp.

We discuss a recent line of work on classical linear algebra algorithms inspired by quantum machine learning. Such algorithms show that their corresponding quantum algorithms do not admit exponential speedups. Further, they show an interesting connection between QML and the classical sketching and sampling literature, which helps clarify where exactly quantum algorithms gain advantage over classical ones. We discuss some lemmas central to this line of work, algorithms based on these lemmas, and some open questions in this space.

2020-01-06. Quantum-inspired algorithms for recommendation systems, principal component analysis, and supervised clustering. QIP plenary talk.

Slides. (possibly dead link)

2019-07-30. Explaining some recent quantum-inspired algorithms. Santa Fe Institute “New Algorithms for Optimization, Sampling, Learning, and Quantum Simulations” workshop.
2019-06-03. Building a classical framework to analyze quantum machine learning speedups. TQC plenary talk.

We cover some recent work on “dequantizing” QML algorithms and the set of classical length-squared sampling techniques used to achieve these results. First, we motivate the correspondence between QML with state preparation assumptions and classical ML with sampling assumptions. In the latter setting, we describe an algorithm toolkit which bears resemblance to quantum operations on classical distributions. Second, we describe the capabilities and limitations of these techniques in comparison with quantum computing, by considering their application to low-rank matrix inversion (1811.04909 and 1811.04852), supervised clustering (1811.00414), and recommendation systems (1807.04271). This theory of quantum-like classical sampling algorithms formalizes some intuition informing our evaluation of polylogarithmic-time QML algorithms as “good” or “bad”. By comparing QML algorithms to classical ML algorithms strengthened with sampling assumptions, we can distinguish the speedups artificially created from quantum state preparation assumptions from the speedups that truly exploit the quantumness of the algorithm.

2019-05-16. Quantum-inspired classical linear algebra. CIFAR QIS Meeting.
2019-05-15. Quantum-inspired classical linear algebra algorithms: Why and how? TCS+.

Over the past ten years, the field of quantum machine learning (QML) has produced many polylogarithmic-time procedures for linear algebra routines, assuming certain “state preparation” assumptions. Though such algorithms are formally incomparable with classical computing, a recent line of work uses an analogous classical model of computation as an effective point of comparison to reveal speedups (or lack thereof) gained by QML. The resulting “dequantized” algorithms assume sampling access to input to speed up runtimes to polylogarithmic in input size.

In this talk, we will discuss the motivation behind this model and its relation to the existing randomized linear algebra literature. Then, we will delve into an example quantum-inspired algorithm: Gilyén, Lloyd, and Tang’s algorithm for low-rank matrix inversion. This dequantizes a variant of Harrow, Hassidim, and Lloyd’s matrix inversion algorithm, a seminal work in QML. Finally, we will consider the implications of this work on exponential speedups in QML. No background in quantum computing is assumed for this talk.

2019-04-18. Quantum 101: Grover’s Algorithm. UW theory lunch talk.
2018-12-11. A Quantum-inspired Recommendation Systems Algorithm. (R,B) MSR AI Seminar.

Over the past ten years, the field of quantum machine learning (QML) has produced many polylogarithmic-time procedures for linear algebra routines, assuming certain “state preparation” assumptions. One of the most prominent examples of these is Kerenidis and Prakash’s quantum recommendation system, which takes user-product preference data in an efficient data structure and outputs recommendations for users in time polylogarithmic in input size, exponentially faster than all previous classical recommendation systems. We discuss a “de-quantization” of their algorithm: a classical algorithm that performs just as well as the quantum algorithm, up to polynomial slowdown. Thus, we exponentially improve the runtime over previous algorithms with a purely classical algorithm. We describe the algorithm’s toolkit of classical length-squared sampling techniques, motivated by a new quantum-inspired paradigm. Finally, we consider the implications of our work on exponential speedups in QML. No background in quantum computing is assumed for this talk.
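The basic primitive in this toolkit is simple enough to state in code (a sketch I've added here, not from the talk): length-squared sampling draws an entry (i, j) of a matrix with probability proportional to its squared magnitude, mimicking a measurement of a quantum state whose amplitudes are the matrix entries.

```python
import numpy as np

def sample_entry(A, rng):
    """Length-squared sampling of an entry of A: pick row i with
    probability ||A_i||^2 / ||A||_F^2, then column j with probability
    A_ij^2 / ||A_i||^2, so that (i, j) is drawn with overall
    probability A_ij^2 / ||A||_F^2."""
    row_norms = np.sum(A**2, axis=1)
    i = rng.choice(A.shape[0], p=row_norms / row_norms.sum())
    j = rng.choice(A.shape[1], p=A[i]**2 / row_norms[i])
    return i, j
```

In the actual data-structure setting, both choices are made via binary trees of partial sums, so each sample costs time logarithmic in the matrix dimensions.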

2018-11-29. A classical toolkit for quantum machine learning algorithms. MSR QuArC.

We cover some recent work on “dequantizing” QML algorithms, and the set of classical length-squared sampling techniques used to achieve these results. First, we describe the toolkit, which bears resemblance to performing quantum operations on classical distributions. Second, we describe the capabilities and limitations of these techniques in comparison with quantum computing, by considering their application to low-rank matrix inversion (1811.04909 and 1811.04852), supervised clustering (1811.00414), and recommendation systems (1807.04271). This theory of quantum-like classical sampling algorithms formalizes some intuition informing our evaluation of polylogarithmic-time QML algorithms as “good” or “bad”. By comparing QML algorithms to classical ML algorithms strengthened with sampling assumptions, we can distinguish the speedups created from quantum state preparation assumptions from the speedups which rely in part on the quantumness of the algorithm.

2018-10-09. A quantum-inspired classical algorithm for recommendation systems. UW Theory Seminar.
2018-06-18/19. Two two-hour informal talks on my de-quantized algorithm for recommendation systems, given during the Simons Institute summer cluster on quantum computation.