talks

These are my talks, at varying levels of formality, with abstracts (click to expand) and links to video where they exist.

This list is mainly for personal reference, but to those who want to watch a talk of mine and don’t know which: more recent ones should be better, hopefully.

Last updated January 1, 2025.

teaching

2023-07-24 to 2023-07-28. Quantum and quantum-inspired linear algebra. PCMI 2023 Graduate Summer School.

An exciting recent line of work unifies many interesting quantum algorithms under a powerful linear algebraic framework known as “quantum singular value transformation” (QSVT). We’ll introduce this framework, build some tools in polynomial approximation that are helpful for applying it, and investigate what kinds of results it can achieve. Our goal will be to understand how and when to use QSVT in quantum algorithm design, and ultimately, whether it can reveal new quantum speedups for interesting problems in data analysis, machine learning, or quantum simulation.

talks

2024-12-17. Structure learning of Hamiltonians from real-time evolution. Workshop on correlations of quantum many-body systems.

We consider learning a Hamiltonian H from its dynamics, exp(-iHt). We present a general approach to this task, based on term cancellation rather than the dynamical decoupling of the prior work of Huang, Tong, Fang, and Su. With this approach, we develop an algorithm which achieves nearly all of the properties one might desire of a Hamiltonian learning algorithm, including having Heisenberg-limited scaling and being able to solve “structure learning”, i.e. learning the interaction terms as well as their strengths.

Joint work with Ainesh Bakshi, Allen Liu, and Ankur Moitra.

2024-11-21. My perspective on Gibbs sampling. Simons Institute Quantum Industry Day.

I will survey the current literature on algorithms for Gibbs sampling, with an eye towards quantum advantage. I’ll synthesize this into a view on the field as it stands, and an outlook for where it could go in the future.

2024-10-27. How to learn a Hamiltonian. FOCS 2024 workshop, Recent Advances in Quantum Learning.

When can we efficiently learn the parameters of an unknown quantum system? This question, which we formalize as Hamiltonian learning, captures the fundamental task of the researcher who wants to understand the behavior of a complicated quantum device. I will survey what is known about this problem, including techniques, results, and open directions, with an eye towards accessibility. In particular, I’ll discuss the many insights that classical TCS brings to the table, along with the new and exciting challenges that come with learning in the quantum realm.

2024-10-23. High-temperature Gibbs states are unentangled and efficiently preparable. Stanford Q-Farm Thermalization and Quantum Information Workshop.

We show that thermal states of local Hamiltonians are separable above a constant temperature. Specifically, for a local Hamiltonian H on a graph with degree d, its Gibbs state at inverse temperature beta is a classical distribution over product states for all beta < 1/(cd), where c is a constant. This proof of “sudden death of thermal entanglement” resolves the fundamental question of whether many-body systems can exhibit entanglement at high temperature.

Joint work with Ainesh Bakshi, Allen Liu, and Ankur Moitra.

2024-10-10. Learning the closest product state. UC Berkeley quantum pod talk.

We study the problem of finding a product state with optimal fidelity to an unknown n-qubit quantum state ρ, given copies of ρ. This is a basic instance of a fundamental question in quantum learning: is it possible to efficiently learn a simple approximation to an arbitrary state?

We give an algorithm which finds a product state with fidelity ε-close to optimal, using N = n^poly(1/ε) copies of ρ and poly(N) classical overhead. This talk will describe some of the key ideas of this algorithm through motivating examples.

Joint work with Ainesh Bakshi, John Bostanci, William Kretschmer, Zeph Landau, Jerry Li, Allen Liu, and Ryan O’Donnell.

2024-08-20. Quantum algorithms. IAS Workshop on Quantum Information and Physics.
2024-02-28. Constructing polynomial systems for Hamiltonian learning. Simons Institute one-day open problem workshop on Hamiltonian learning.
2023-12-05. Learning quantum Hamiltonians at any temperature in polynomial time. UC Berkeley quantum seminar.

A quantum system of n interacting particles at thermal equilibrium can be described using a polynomial number of parameters, which correspond to the strengths of the forces between particles. A natural open question has been whether these parameters can be estimated efficiently by running experiments on the system. We resolve this question by giving the first polynomial-time algorithm for this task. This improves over prior work, which uses polynomially many samples from the system, but requires exponential time. In this talk, I will introduce the problem and describe some of the key ideas behind our algorithm. Joint work with Ainesh Bakshi, Allen Liu, and Ankur Moitra.

2023-11-17. Learning quantum Hamiltonians at any temperature in polynomial time. MIT theory reading group.

A quantum system of n interacting particles at thermal equilibrium can be described using a polynomial number of parameters, which correspond to the strengths of the forces between particles. A natural open question has been whether these parameters can be estimated efficiently by running experiments on the system. We resolve this question by giving the first polynomial-time algorithm for this task. This improves over prior work, which uses polynomially many samples from the system, but requires exponential time. In this talk, I will introduce the problem and describe some of the key ideas behind our algorithm. Joint work with Ainesh Bakshi, Allen Liu, and Ankur Moitra.

2023-11-16. Learning quantum Hamiltonians at any temperature in polynomial time. Harvard Special Quantum Information seminar.

A quantum system of n interacting particles at thermal equilibrium can be described using a polynomial number of parameters, which correspond to the strengths of the forces between particles. A natural open question has been whether these parameters can be estimated efficiently by running experiments on the system. We resolve this question by giving the first polynomial-time algorithm for this task. This improves over prior work, which uses polynomially many samples from the system, but requires exponential time. In this talk, I will introduce the problem and describe some of the key ideas behind our algorithm. Joint work with Ainesh Bakshi, Allen Liu, and Ankur Moitra.

2023-10-11. An improved classical singular value transformation for quantum machine learning. Berkeley theory lunch.

The field of quantum machine learning (QML) produces many proposals for attaining quantum speedups for tasks in machine learning and data analysis. However, such speedups can only manifest if classical algorithms for these tasks perform significantly slower than quantum ones. We study the quantum-classical gap of several fundamental problems in QML through the quantum singular value transformation (QSVT) framework. QSVT, introduced by Gilyén, Su, Low, and Wiebe, unifies all major types of quantum speedup; and in particular, QSVT on low-rank classical data encompasses a wide variety of QML proposals. We challenge these proposals by providing a classical algorithm that matches the performance of QSVT in this regime, up to a small polynomial overhead.

2023-05-30. An improved classical singular value transformation for quantum machine learning. Google Quantum CS Theory seminar.

The field of quantum machine learning (QML) has produced many proposals for attaining large quantum speedups for computationally intensive tasks in machine learning and data analysis. However, such speedups can only manifest if classical algorithms for these tasks perform significantly slower than quantum ones. We study the quantum-classical gap of several fundamental problems in QML through the quantum singular value transformation (QSVT) framework. QSVT, introduced by Gilyén, Su, Low, and Wiebe [GSLW, STOC’19], unifies all major types of quantum speedup; in particular, QSVT on low-rank classical data encompasses a wide variety of QML proposals. We challenge these proposals by providing a classical algorithm that matches the performance of QSVT in this regime, up to a small polynomial overhead. We show that, given a bounded matrix A, a vector b, a bounded degree-d polynomial p, and linear-time pre-processing, we can output a description of a vector v such that ‖v − p(A)b‖ ≤ ε‖b‖ in Õ(d^9 ‖A‖_F^4 / ε^2) time. This improves upon the best known classical algorithm [CGLLTW, STOC’20] and narrows the gap with QSVT, which, after linear-time pre-processing to load input into a quantum-accessible memory, can output a measurement from the state |p(A)b〉 in Õ(d ‖A‖_F) time. Our key insight is to combine the Clenshaw recurrence, an iterative method for computing matrix polynomials, with sketching techniques to simulate QSVT classically. Based on joint work with Ainesh Bakshi.
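The Clenshaw recurrence at the heart of this result is short enough to state in full. Below is a minimal sketch of my own (plain NumPy, no sketching): it evaluates p(A)b for a polynomial given in the Chebyshev basis using only matrix-vector products with A, which is the iteration the classical algorithm then simulates with sampling.

```python
import numpy as np

def clenshaw_matvec(cheb_coeffs, A, b):
    """Evaluate p(A) @ b where p(x) = sum_k c_k T_k(x) is given
    in the Chebyshev basis, via the Clenshaw recurrence.
    Only matrix-vector products with A are used."""
    u_next = np.zeros_like(b)  # plays the role of u_{k+2}
    u_curr = np.zeros_like(b)  # plays the role of u_{k+1}
    for c in reversed(cheb_coeffs[1:]):
        # u_k = 2 A u_{k+1} - u_{k+2} + c_k b
        u_next, u_curr = u_curr, 2 * (A @ u_curr) - u_next + c * b
    return cheb_coeffs[0] * b + A @ u_curr - u_next
```

In the classical simulation, the exact products A @ u get replaced by sketched approximations; part of the appeal of Clenshaw over the naive three-term recurrence is its robustness to such perturbations.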

2023-05-10. Query-optimal estimation of unitary channels in diamond distance. Cornell’s Junior Theorists’ Workshop.

We will present an algorithm to learn an unknown unitary channel acting on a d-dimensional qudit to diamond-norm error ε, using O(d²/ε) applications of the unknown channel and only one qudit. This algorithm uses the optimal number of qudits and number of queries, even if one has access to the inverse or controlled versions of the unknown unitary. This improves over prior work, which achieves entanglement infidelity δ using O(d²/√δ) applications in parallel, thereby requiring Ω(d²) qudits. Based on joint work with Jeongwan Haah, Robin Kothari, and Ryan O’Donnell.

2023-04-27. Query-optimal estimation of unitary channels in diamond distance. MIT Algorithms and Complexity Seminar.

We will present an algorithm to learn an unknown unitary channel acting on a d-dimensional qudit to diamond-norm error ε, using O(d²/ε) applications of the unknown channel and only one qudit. This algorithm uses the optimal number of qudits and number of queries, even if one has access to the inverse or controlled versions of the unknown unitary. This improves over prior work, which achieves entanglement infidelity δ using O(d²/√δ) applications in parallel, thereby requiring Ω(d²) qudits. Based on joint work with Jeongwan Haah, Robin Kothari, and Ryan O’Donnell.

2023-03-31. Dequantization (and Other Work). Cold Quanta.
2023-01-13. Two results on estimation of unitary channels. MIT QI seminar.

Since quantum mechanics models the evolution of a closed system as a unitary operator, a natural question is how to interact with such a dynamics to learn its corresponding operator. I will present two algorithms that learn a unitary channel given the ability to apply it. One learns the coefficients of a local Hamiltonian from its real-time evolution, and one learns a general unitary channel to diamond norm distance with the optimal number of applications. Both of these results give polynomial improvements over prior work, and follow from simple conceptual insights about the structure of the hypothesis space. Based on joint work with Jeongwan Haah, Robin Kothari, and Ryan O’Donnell.

2023-01-05. Query-optimal estimation of unitary channels in diamond distance. 2022 Chicago Junior Theorists Workshop.

We will present a quantum algorithm to learn an unknown unitary channel acting on a d-dimensional qudit to diamond-norm error ε, using O(d²/ε) applications of the unknown channel and only one qudit. This improves over prior work by polynomial factors. We also show that Ω(d²/ε) applications are necessary, even if one has access to the inverse or controlled versions of the unknown unitary. Thus, our algorithm uses both the optimal number of qudits and the optimal number of applications. Based on joint work with Jeongwan Haah, Robin Kothari, and Ryan O’Donnell.

2022-12-12. Query-optimal estimation of unitary channels in diamond distance. Sandia QPL Seminar.

We will present an algorithm to learn an unknown unitary channel acting on a d-dimensional qudit to diamond-norm error ε, using O(d²/ε) applications of the unknown channel and only one qudit. This algorithm uses the optimal number of qudits and number of queries up to a sub-logarithmic factor, even if one has access to the inverse or controlled versions of the unknown unitary. This improves over prior work, which achieves entanglement infidelity δ using O(d²/√δ) applications in parallel, thereby requiring Ω(d²) qudits. Based on joint work with Jeongwan Haah, Robin Kothari, and Ryan O’Donnell.

2022-12-02. Query-optimal estimation of unitary channels in diamond distance. CMU theory seminar.
2022-10-20. Query-optimal estimation of unitary channels in diamond distance. SQuInT 2022.

We will present an algorithm to learn an unknown unitary channel acting on a d-dimensional qudit to diamond-norm error ε, using O(d²/ε) applications of the unknown channel and only one qudit. This algorithm uses the optimal number of qudits and number of queries up to a sub-logarithmic factor, even if one has access to the inverse or controlled versions of the unknown unitary. This improves over prior work, which achieves entanglement infidelity δ using O(d²/√δ) applications in parallel, thereby requiring Ω(d²) qudits. Based on joint work with Jeongwan Haah, Robin Kothari, and Ryan O’Donnell.

2022-10-18. Query-optimal estimation of unitary channels in diamond distance. Quantum Innovators in Computer Science and Mathematics (University of Waterloo).

We will present an algorithm to learn an unknown unitary channel acting on a d-dimensional qudit to diamond-norm error ε, using O(d²/ε) applications of the unknown channel and only one qudit. This algorithm uses the optimal number of qudits and number of queries up to a sub-logarithmic factor, even if one has access to the inverse or controlled versions of the unknown unitary. This improves over prior work, which achieves entanglement infidelity δ using O(d²/√δ) applications in parallel, thereby requiring Ω(d²) qudits. Based on joint work with Jeongwan Haah, Robin Kothari, and Ryan O’Donnell.

2022-10-05. Query-optimal estimation of unitary channels in diamond distance. Perimeter Institute QI seminar.
2022-07-15. Equioscillation (theme & variations). UW theory lunch.

I talk about the idea of equioscillation, which comes up a lot in approximation theory. I’ll introduce Chebyshev polynomials, a very useful family of polynomials (and the polynomials that equioscillate the most). I’ll also prove some nice results that use equioscillation, as many as time permits.
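As a concrete illustration of the “equioscillate the most” claim (my example, not from the talk): the degree-n Chebyshev polynomial T_n(x) = cos(n arccos x) attains its extreme values ±1 at n+1 points of [-1, 1], with alternating signs.

```python
import numpy as np

n = 5
# The extrema of T_n on [-1, 1] sit at x_k = cos(k*pi/n), k = 0, ..., n.
x = np.cos(np.arange(n + 1) * np.pi / n)
# Since T_n(cos t) = cos(n t), we get T_n(x_k) = cos(k*pi) = (-1)^k.
values = np.cos(n * np.arccos(x))
# values alternates +1, -1, +1, ...: n + 1 equioscillation points.
```

This alternation is precisely the certificate of optimality in Chebyshev’s theorem: 2^(1-n) T_n is the monic degree-n polynomial deviating least from zero on [-1, 1].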

2022-03-30. Optimal learning of quantum Hamiltonians from high-temperature Gibbs states. MIT A+C seminar.

Hamiltonian learning is the problem of learning a geometrically local \(N\)-qubit Hamiltonian \(H\) to precision \(\varepsilon\), supposing we are given copies of its Gibbs state \(\rho = \exp(-\beta H)/\operatorname{Tr}(\exp(-\beta H))\) at a known inverse temperature \(\beta\). This is the quantum generalization of the problem of learning the parameters of a Markov random field. I will present an algorithm that solves this problem with optimal sample complexity (number of copies of \(\rho\) needed) and optimal time complexity in the high-temperature (low \(\beta\)) regime. This improves on the previous best algorithm given by Anshu et al. (FOCS 2020), for the high-temperature regime. This work was done jointly with Jeongwan Haah and Robin Kothari.

2022-03-07. Optimal learning of quantum Hamiltonians from high-temperature Gibbs states. QIP 2022.
2022-01-25. On quantum linear algebra for machine learning. IPAM Quantum Numerical Linear Algebra workshop.

We will discuss quantum singular value transformation (QSVT), a simple unifying framework for quantum linear algebra algorithms developed by Gilyén, Su, Low, and Wiebe. QSVT is often applied to try to achieve quantum speedups for machine learning problems. We will see the typical structure of such an application, the barriers to achieving super-polynomial quantum speedup, and the state of the literature that’s attempting to bypass these barriers. Along the way, we’ll also see an interesting connection between quantum linear algebra and classical sampling and sketching algorithms (explored in the form of “quantum-inspired” classical algorithms).

2021-12-03. “Sufficient statistics considered harmful”. UW theory lunch talk.

This talk was a brief exposition of Andrea Montanari’s paper “Computational implications of reducing data to sufficient statistics”.

2021-11-23. Optimal learning of quantum Hamiltonians from high-temperature Gibbs states. University of Washington theory seminar.

Hamiltonian learning is the problem of learning a geometrically local \(N\)-qubit Hamiltonian \(H\) to precision \(\varepsilon\), supposing we are given copies of its Gibbs state \(\rho = \exp(-\beta H)/\operatorname{Tr}(\exp(-\beta H))\) at a known inverse temperature \(\beta\). I will present an algorithm that solves this problem with optimal sample complexity (number of copies of \(\rho\) needed) and optimal time complexity in the high-temperature (low \(\beta\)) regime. This improves on the previous best algorithm given by Anshu et al. (FOCS 2020), for the high-temperature regime. This work was done jointly with Jeongwan Haah and Robin Kothari.

2021-10-27. On quantum linear algebra for machine learning. Perimeter Institute colloquium.

We will discuss quantum singular value transformation (QSVT), a simple unifying framework for quantum linear algebra algorithms developed by Gilyén, Su, Low, and Wiebe. QSVT is often applied to try to achieve quantum speedups for machine learning problems. We will see the typical structure of such an application, the barriers to achieving super-polynomial quantum speedup, and the state of the literature that’s attempting to bypass these barriers. Along the way, we’ll also see an interesting connection between quantum linear algebra and classical sampling and sketching algorithms (explored in the form of “quantum-inspired” classical algorithms).

2021-07-12. Optimal learning of quantum Hamiltonians from high-temperature Gibbs states. Simons Institute, Quantum Wave in Computing Reunion.

Hamiltonian learning is the problem of learning a geometrically local \(N\)-qubit Hamiltonian \(H\) to precision \(\varepsilon\), supposing we are given copies of its Gibbs state \(\rho = \exp(-\beta H)/\operatorname{Tr}(\exp(-\beta H))\) at a known inverse temperature \(\beta\). I will present an algorithm that solves this problem with optimal sample complexity (number of copies of \(\rho\) needed) and optimal time complexity in the high-temperature (low \(\beta\)) regime. This improves on the previous best algorithm given by Anshu et al. (FOCS 2020), for the high-temperature regime. This work was done jointly with Jeongwan Haah and Robin Kothari.

2021-04-06. On quantum linear algebra for machine learning. University of Illinois, IQIST seminar series.

We will discuss quantum singular value transformation (QSVT), a simple unifying framework for quantum linear algebra algorithms developed by Gilyén et al. QSVT is often applied to try to achieve quantum speedups for machine learning problems. We will see the typical structure of such an application, the barriers to achieving super-polynomial quantum speedup, and the state of the literature that’s attempting to bypass these barriers. Along the way, we’ll also see an interesting connection between quantum linear algebra and classical sampling and sketching algorithms (explored in the form of “quantum-inspired” classical algorithms).

2021-03-30. On quantum linear algebra for machine learning. Simons Institute, quantum colloquium.

We will discuss quantum singular value transformation (QSVT), a simple unifying framework for quantum linear algebra algorithms developed by Gilyén et al. QSVT is often applied to try to achieve quantum speedups for machine learning problems. We will see the typical structure of such an application, the barriers to achieving super-polynomial quantum speedup, and the state of the literature that’s attempting to bypass these barriers. Along the way, we’ll also see an interesting connection between quantum linear algebra and classical sampling and sketching algorithms (explored in the form of “quantum-inspired” classical algorithms).

2020-11-13. On Lipschitz continuity of matrix functions. UW theory lunch talk.

This was a casual 30-minute talk around a cute result I’ve tweeted about before: For \(A, B \in \mathbb{R}^{n\times n}\), \(\| |A| - |B| \| = O(\log(n)\|A - B\|)\), where \(|\cdot|\) denotes taking the absolute value of the eigenvalues of the matrix and \(\|\cdot\|\) denotes spectral norm. The log factor is tight!
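For intuition, here is a quick numerical check of mine (random symmetric matrices only; the log factor is a worst-case phenomenon that random inputs will not exhibit):

```python
import numpy as np

def matrix_abs(M):
    # |M| for symmetric M: keep the eigenvectors, take the
    # absolute values of the eigenvalues.
    w, V = np.linalg.eigh(M)
    return (V * np.abs(w)) @ V.T

rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
G = rng.standard_normal((n, n))
B = A + 0.1 * (G + G.T) / 2          # a small symmetric perturbation

lhs = np.linalg.norm(matrix_abs(A) - matrix_abs(B), 2)  # || |A| - |B| ||
rhs = np.linalg.norm(A - B, 2)                          # || A - B ||
# The theorem says lhs <= O(log n) * rhs; for a random perturbation
# the observed ratio lhs / rhs is typically close to 1.
```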

2020-11-05. An improved quantum-inspired algorithm for linear regression. UW Allen School theory group colloquium.

Can quantum computers be used to perform machine learning faster? This still-open research question is crucial to understanding the future applications of quantum computing. I’ll briefly introduce one type of algorithm—quantum linear algebra—that is a leading candidate for quantum speedups in machine learning, and discuss the issues such algorithms have. In this talk, I will focus on the task of solving low-rank systems of linear equations, which can be done by a quantum linear algebra algorithm. For this setting, I, in joint work with András Gilyén and Zhao Song, show that a classical algorithm can perform just as well as this quantum algorithm, with a runtime “only” an 8th power greater. This improves on prior work, which gave a classical algorithm with runtime a 28th power greater. Our classical algorithm is, in fact, just stochastic gradient descent, giving some hope that this 1-to-8 gap between quantum and classical is smaller than it appears in theory. (No prior knowledge of quantum computing will be assumed for this talk.)
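To give a flavor of the “just stochastic gradient descent” remark, here is a toy sketch of mine on a dense system (not the actual quantum-inspired algorithm, which additionally needs low-rank structure and sampling access to the input): SGD on the least-squares objective, with rows sampled proportionally to their squared norms, is the randomized Kaczmarz method.

```python
import numpy as np

def kaczmarz(A, b, iters, rng):
    # SGD on ||Ax - b||^2: sample row i with probability
    # ||A_i||^2 / ||A||_F^2, then take the exact minimizing step
    # along that row (the randomized Kaczmarz update).
    m, n = A.shape
    row_sq = np.einsum('ij,ij->i', A, A)
    p = row_sq / row_sq.sum()
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=p)
        x += (b[i] - A[i] @ x) / row_sq[i] * A[i]
    return x
```

On a consistent system this converges linearly in expectation, at a rate governed by ‖A‖_F² / σ_min(A)², the same kind of quantity that shows up in both the quantum and quantum-inspired runtimes.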

2020-09-28. On quantum linear algebra vs classical linear algebra. UT quantum group meeting.
2020-06-23. Sampling-based sublinear low-rank matrix arithmetic framework for dequantizing quantum machine learning. STOC 2020.

5-minute live version.

2020-02-25. A classical algorithm framework for dequantizing quantum machine learning. Simons Institute, Quantum Algorithms workshop.

We discuss a classical analogue to the singular value transformation framework for quantum linear algebra algorithms developed by Gilyén et al. The proofs underlying this classical framework have natural connections to well-known ideas in randomized numerical linear algebra. Namely, our proofs are based on one key observation: approximating matrix products is easy in the quantum-inspired input model. This primitive’s simplicity makes finding classical analogues to quantum machine learning algorithms straightforward and intuitive. We present sketches of the key proofs of our framework, as well as of its applications to dequantizing low-rank quantum machine learning.

Joint work with Nai-Hui Chia, András Gilyén, Tongyang Li, Han-Hsuan Lin, and Chunhao Wang.
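The “approximating matrix products is easy” observation has a very short classical form. Here is my illustration of the standard length-squared (importance) sampling estimator for a matrix product, the kind of primitive the framework is built from:

```python
import numpy as np

def approx_matmul(A, B, k, rng):
    """Unbiased estimate of A.T @ B from k sampled rows.
    Rows are drawn with probability p_i = ||A_i||^2 / ||A||_F^2
    and importance-weighted by 1 / (k p_i); the expected squared
    Frobenius error is at most ||A||_F^2 ||B||_F^2 / k."""
    row_sq = np.einsum('ij,ij->i', A, A)
    p = row_sq / row_sq.sum()
    idx = rng.choice(A.shape[0], size=k, p=p)
    return (A[idx].T / (k * p[idx])) @ B[idx]
```

In the quantum-inspired input model, these length-squared row samples are exactly what the data structure provides, so the estimator’s cost is independent of the number of rows.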

2020-01-28. Quantum-inspired classical linear algebra. Simons Institute, The Quantum Wave in Computing Boot Camp.

We discuss a recent line of work on classical linear algebra algorithms inspired by quantum machine learning. Such algorithms show that their corresponding quantum algorithms do not admit exponential speedups. Further, they show an interesting connection between QML and the classical sketching and sampling literature, which helps clarify where exactly quantum algorithms gain advantage over classical ones. We discuss some lemmas central to this line of work, algorithms based on these lemmas, and some open questions in this space.

2020-01-06. Quantum-inspired algorithms for recommendation systems, principal component analysis, and supervised clustering. QIP plenary talk.

Slides. (possibly dead link)

2019-07-30. Explaining some recent quantum-inspired algorithms. Santa Fe Institute “New Algorithms for Optimization, Sampling, Learning, and Quantum Simulations” workshop.
2019-06-03. Building a classical framework to analyze quantum machine learning speedups. TQC plenary talk.

We cover some recent work on “dequantizing” QML algorithms and the set of classical length-squared sampling techniques used to achieve these results. First, we motivate the correspondence between QML with state preparation assumptions and classical ML with sampling assumptions. In the latter setting, we describe an algorithm toolkit which bears resemblance to quantum operations on classical distributions. Second, we describe the capabilities and limitations of these techniques in comparison with quantum computing, by considering their application to low-rank matrix inversion (1811.04909 and 1811.04852), supervised clustering (1811.00414), and recommendation systems (1807.04271). This theory of quantum-like classical sampling algorithms formalizes some intuition informing our evaluation of polylogarithmic-time QML algorithms as “good” or “bad”. By comparing QML algorithms to classical ML algorithms strengthened with sampling assumptions, we can distinguish the speedups artificially created from quantum state preparation assumptions from the speedups that truly exploit the quantumness of the algorithm.

2019-05-16. Quantum-inspired classical linear algebra. CIFAR QIS Meeting.
2019-05-15. Quantum-inspired classical linear algebra algorithms: Why and how? TCS+.

Over the past ten years, the field of quantum machine learning (QML) has produced many polylogarithmic-time procedures for linear algebra routines, assuming certain “state preparation” assumptions. Though such algorithms are formally incomparable with classical computing, a recent line of work uses an analogous classical model of computation as an effective point of comparison to reveal speedups (or lack thereof) gained by QML. The resulting “dequantized” algorithms assume sampling access to input to speed up runtimes to polylogarithmic in input size.

In this talk, we will discuss the motivation behind this model and its relation to existing randomized linear algebra literature. Then, we will delve into an example quantum-inspired algorithm: Gilyén, Lloyd, and Tang’s algorithm for low-rank matrix inversion. This dequantizes a variant of Harrow, Hassidim, and Lloyd’s matrix inversion algorithm, a seminal work in QML. Finally, we will consider the implications of this work on exponential speedups in QML. No background of quantum computing is assumed for this talk.

2019-04-18. Quantum 101: Grover’s Algorithm. UW theory lunch talk.
2018-12-11. A Quantum-inspired Recommendation Systems Algorithm. (R,B) MSR AI Seminar.

Over the past ten years, the field of quantum machine learning (QML) has produced many polylogarithmic-time procedures for linear algebra routines, assuming certain “state preparation” assumptions. One of the most prominent examples of these is Kerenidis and Prakash’s quantum recommendation system, which takes user-product preference data in an efficient data structure and outputs recommendations for users in time polylogarithmic in input size, exponentially faster than all previous classical recommendation systems. We discuss a “de-quantization” of their algorithm: a classical algorithm that performs just as well as the quantum algorithm, up to polynomial slowdown. Thus, we exponentially improve the runtime over previous algorithms with a purely classical algorithm. We describe the algorithm’s toolkit of classical length-squared sampling techniques, motivated by a new quantum-inspired paradigm. Finally, we consider the implications of our work on exponential speedups in QML. No background of quantum computing is assumed for this talk.

2018-11-29. A classical toolkit for quantum machine learning algorithms. MSR QuArC.

We cover some recent work on “dequantizing” QML algorithms, and the set of classical length-squared sampling techniques used to achieve these results. First, we describe the toolkit, which bears resemblance to performing quantum operations on classical distributions. Second, we describe the capabilities and limitations of these techniques in comparison with quantum computing, by considering their application to low-rank matrix inversion (1811.04909 and 1811.04852), supervised clustering (1811.00414), and recommendation systems (1807.04271). This theory of quantum-like classical sampling algorithms formalizes some intuition informing our evaluation of polylogarithmic-time QML algorithms as “good” or “bad”. By comparing QML algorithms to classical ML algorithms strengthened with sampling assumptions, we can distinguish the speedups created from quantum state preparation assumptions from the speedups which rely in part on the quantumness of the algorithm.

2018-10-09. A quantum-inspired classical algorithm for recommendation systems. UW Theory Seminar.
2018-06-18/19. Two two-hour informal talks on my de-quantized algorithm for recommendation systems, given during the Simons Institute summer cluster on quantum computation.