Lijie Chen
Research Interests: Theoretical Computer Science
I have a broad interest in theoretical computer science. In particular, I am interested in classical and quantum computational complexity theory and their connections to other fields of computer science and quantum physics.
I am joining Berkeley EECS as an Assistant Professor in Fall 2025. [Read about how to apply.]
I am a Miller Postdoctoral Fellow at UC Berkeley, hosted by Avishay Tal and Umesh V. Vazirani. I got my Ph.D. from MIT, and I was very fortunate to be advised by Ryan Williams. Prior to that, I received my bachelor's degree from Yao Class at Tsinghua University.
At Tsinghua University, I was advised by Prof. Jian Li, working on multi-armed bandits. In the spring of 2016, I visited MIT, where I worked under the supervision of Prof. Scott Aaronson on quantum complexity. [CV]
Twitter
Note: Click on [summary] or [highlight] to view summaries or highlights of the papers/projects!
Video/Slides/Summary of Recent Work
Workshops
Derandomization and its connections throughout complexity theory, a three-part series presented together with Roei Tell at IAS. [First talk by Roei] [Second talk by me] [Notes on the second talk] [Third talk by Roei]
[summary]
The series is intended to survey the fast-paced recent developments in the study of derandomization. We will present:
1. A revised version of the classical hardness vs randomness framework, converting new types of uniform lower bounds into non-black-box derandomization algorithms.
2. Unconditional derandomization of an important class of Merlin-Arthur protocols, and stronger circuit lower bounds from derandomization.
3. Optimal derandomization algorithms that incur essentially no runtime overhead (a.k.a. "free lunch derandomization").
Manuscripts
Holographic pseudoentanglement and the complexity of the AdS/CFT dictionary [arxiv]
Chris Akers, Adam Bouland, Lijie Chen, Tamara Kohler, Tony Metger, Umesh Vazirani
Selected Publications
Derandomization
Polynomial-Time Pseudodeterministic Construction of Primes [eccc] [my slides] [Quanta Magazine]
Lijie Chen, Zhenjian Lu, Igor C. Oliveira, Hanlin Ren, Rahul Santhanam
Foundations of Computer Science (FOCS 2023)
Hardness vs Randomness, Revised: Uniform, Non-Black-Box, and Instance-Wise [eccc] [slides] [Roei's talk at TCS plus] [Roei's slides]
[short summary]
Textbook hardness-to-randomness converts circuit lower bounds into PRGs. But is this black-box approach really necessary for derandomization? In this new work we revamp the classical hardness-to-randomness framework, showing how to convert new types of uniform lower bounds into non-black-box derandomization, deducing conclusions such as promiseBPP = promiseP without PRGs. Moreover, we show that the same types of lower bounds are in fact necessary for any type of derandomization! This reveals a tight connection between any derandomization of promiseBPP (i.e., not necessarily a black-box one) and the foregoing new types of uniform lower bounds.
Our framework also allows a flexible trade-off between hardness and randomness. In an extreme setting, we show that plausible uniform lower bounds imply that "randomness is indistinguishable from useless". That is, every randomized algorithm can be derandomized with an arbitrarily small polynomial overhead, such that no polynomial-time algorithm can find a mistake with non-negligible probability.
Lijie Chen, Roei Tell. Foundations of Computer Science (FOCS 2021). Invited to the SICOMP Special Issue for FOCS 2021
Simple and fast derandomization from very hard functions: Eliminating randomness at almost no cost [eccc] [Oded's choice] [slides by me] [slides by Roei]
[short highlight]
Derandomization with linear overhead: Under plausible assumptions, we show that any \(T\)-time randomized computation can be derandomized in \(T \cdot n\)-time.
Conditional Optimality: Assuming NSETH, the \(n\) overhead above is optimal.
[longer summary]
Derandomization with a near-linear overhead: Assuming that (1) one-way functions exist and (2) generic \(2^{kn}\)-time computation cannot be sped up to \(2^{(k-\varepsilon) n}\) time with \(2^{(1-\varepsilon)n}\) bits of advice, we show that any \(T(n)\)-time randomized computation can be derandomized in roughly \(T(n) \cdot n^{1+\varepsilon}\) time (a concrete instantiation follows below).
Optimality assuming NSETH: Assuming the Nondeterministic Strong Exponential Time Hypothesis (NSETH), the \(n\) overhead above is optimal (up to \(n^{o(1)}\) factors) for every reasonable time bound \(T(n)\). In fact, we only need a weaker assumption stating that \(\#\mathsf{SAT}\) requires \(2^{(1-o(1)) \cdot n}\) non-deterministic time (a.k.a. \(\#\mathsf{NSETH}\)).
Average-case derandomization with essentially no overhead: Under similar assumptions, we show that \(T(n)\)-time randomized computation can be derandomized in \(T(n) \cdot n^{o(1)}\) time with respect to every \(T(n)\)-time samplable distribution \(S\) (meaning that for \(x \sim S\), the derandomization fails with probability at most \(n^{-\omega(1)}\)).
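For a concrete instantiation of the first item above (my own back-of-the-envelope example, within the bound stated there): under those assumptions, a randomized algorithm running in time \(T(n) = n^2\) can be simulated deterministically in time roughly \(n^{3+\varepsilon}\).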
Lijie Chen, Roei Tell. Symposium on the Theory of Computing (STOC 2021)
Circuit Lower Bounds from Algorithms
Almost Everywhere Circuit Lower Bounds from Non-Trivial Derandomization [eccc] [video by Xin Lyu in FOCS 2020] [slides by Ryan]
[short highlight]
Among many other things, we show that there is a function \(f \in \mathsf{E}^{\mathsf{NP}}\) such that \(f_n\) (\(f\) restricted to \(n\)-bit inputs) cannot be \((1/2 + 2^{-n^{\varepsilon}})\)-approximated by \(2^{n^{\varepsilon}}\)-size \(\mathsf{ACC}^0\) circuits, for all sufficiently large input lengths \(n\).
Our lower bounds come from a generic framework showing that non-trivial derandomization of a circuit class \(\mathcal{C}\) implies \(\mathsf{E}^{\mathsf{NP}}\) is almost-everywhere hard for \(\mathcal{C}\).
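In other words (just unpacking the notation above), for all sufficiently large \(n\) and every \(\mathsf{ACC}^0\) circuit \(C\) of size at most \(2^{n^{\varepsilon}}\),
\[ \Pr_{x \sim \{0,1\}^n}\left[\, C(x) = f_n(x) \,\right] \;<\; \frac{1}{2} + 2^{-n^{\varepsilon}}. \]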
Lijie Chen, Xin Lyu, Ryan Williams. Foundations of Computer Science (FOCS 2020)
Efficient Construction of Rigid Matrices Using an NP Oracle [pdf] [slides] [Oded's choice]
[short highlight]
Explicit construction of rigid matrices is a long-standing open problem. We give a somewhat explicit (\(\mathsf{P}^{\mathsf{NP}}\)) construction of a family \(\{H_n\}_{n \in \mathbb{N}}\) of \(n \times n\) \(\mathbb{F}_2\)-matrices such that for infinitely many \(n\), \(H_n\) is \(\Omega(n^2)\)-far in Hamming distance from any \(\mathbb{F}_2\)-matrix of rank at most \(2^{(\log n)^{1/4-\varepsilon}}\).
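In standard rigidity notation (this is only a restatement of the bound above), writing \(\mathcal{R}_{H_n}(r)\) for the minimum number of entries of \(H_n\) that must be changed to bring its \(\mathbb{F}_2\)-rank down to at most \(r\), the construction satisfies, for infinitely many \(n\),
\[ \mathcal{R}_{H_n}\!\left(2^{(\log n)^{1/4-\varepsilon}}\right) \;=\; \min_{\substack{A \in \mathbb{F}_2^{n \times n} \\ \mathrm{rank}_{\mathbb{F}_2}(A) \,\le\, 2^{(\log n)^{1/4-\varepsilon}}}} \big|\{(i,j) : (H_n)_{i,j} \ne A_{i,j}\}\big| \;\ge\; \Omega(n^2). \]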
Josh Alman, Lijie Chen. Foundations of Computer Science (FOCS 2019). Machtey Award for Best Student Paper. Invited to the SICOMP Special Issue for FOCS 2019
[SIAM Journal on Computing]
Hardness Magnification (Strong Lower Bounds from Much Weaker Lower Bounds)
Beyond Natural Proofs: Hardness Magnification and Locality [eccc] [arxiv] [notes by Igor]
[short highlight]
The natural proofs barrier does not seem to apply here, since Hardness Magnification (HM) theorems only work for special functions. A natural question stemming from the hardness magnification phenomenon is therefore whether there is another inherent barrier that prevents us from using current techniques to prove the lower bounds required by HM theorems.
We formulate a concrete barrier called the Locality Barrier. Roughly speaking, the locality barrier says that if a lower bound method is robust enough to handle small-fan-in oracles, then it cannot be used to prove the lower bounds required by HM theorems. Unfortunately, most lower bound techniques we are aware of (random restrictions, the approximation method, communication-complexity-based lower bounds, etc.) are subject to this barrier.
Lijie Chen, Shuichi Hirahara, Igor Oliveira, Jan Pich, Ninad Rajgopal, Rahul Santhanam. Innovations in Theoretical Computer Science (ITCS 2020)
[Journal of the ACM]
Bootstrapping Results for Threshold Circuits “Just Beyond” Known Lower Bounds [eccc] [Oded's choice]
[short highlight]
For a natural \(\mathsf{NC}^1\)-complete problem \(P\), we show (among many other things) that if one can prove \(P\) requires depth-\(d\) \(\mathsf{TC}^{0}\) circuits of size (in terms of wires) \(n^{1+\exp(-o(d))}\), then \(\mathsf{NC}^1 \ne \mathsf{TC}^0\). Previous work implies that \(P\) has no depth-\(d\) \(\mathsf{TC}^{0}\) circuits of size \(n^{1+c^{-d}}\), for some constant \(c > 1\).
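For a sense of scale (an illustrative instantiation of the statement above, not a quote from the paper): a wire lower bound of the form \(n^{1+2^{-\sqrt{d}}}\) already qualifies as \(n^{1+\exp(-o(d))}\) and would therefore separate \(\mathsf{NC}^1\) from \(\mathsf{TC}^0\), while the known bounds are only of the form \(n^{1+c^{-d}} = n^{1+\exp(-\Theta(d))}\).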
Lijie Chen, Roei Tell. Symposium on the Theory of Computing (STOC 2019). Danny Lewin Best Student Paper Award
Other Topics
(Streaming Lower Bounds) Almost Optimal Super-Constant-Pass Streaming Lower Bounds for Reachability [eccc]
Lijie Chen, Gillat Kol, Dmitry Paramonov, Raghuvansh Saxena, Zhao Song, Huacheng Yu. Symposium on the Theory of Computing (STOC 2021). Invited to the SICOMP Special Issue for STOC 2021
(Fine-grained Complexity) On The Hardness of Approximate and Exact (Bichromatic) Maximum Inner Product [eccc] [arxiv] [slides] [journal version]
[short highlight]
Under \(\mathsf{SETH}\), we give a characterization of when approximate Boolean Max-IP is hard (with respect to approximation ratios and vector dimensions). We also show quadratic-time hardness for \(\mathbb{Z}\)-Max-IP with \(2^{\log^* n}\) dimensions. (Recall that \(\log^* n\) is the number of times one must take a logarithm to reduce \(n\) to at most \(1\); it is an extremely slow-growing function, as the small snippet below illustrates.)
One notable corollary is that finding the farthest pair among \(n\) points in \(2^{\log^* n}\)-dimensional Euclidean space requires essentially \(n^2\) time under \(\mathsf{SETH}\).
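To make the "extremely slow-growing" claim concrete, here is a tiny illustrative Python snippet (my own, not from the paper) computing \(\log^* n\):

import math

# Illustrative helper: how many times must we apply log2 before n drops to at most 1?
def log_star(n):
    count = 0
    while n > 1:
        n = math.log2(n)
        count += 1
    return count

print(log_star(2))           # 1
print(log_star(16))          # 3
print(log_star(2 ** 16))     # 4
print(log_star(2 ** 65536))  # 5 -- even for this astronomically large n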
Lijie Chen. Computational Complexity Conference (CCC 2018). Invited to the ToC Special Issue for CCC 2018
(Differential Privacy) On Distributed Differential Privacy and Counting Distinct Elements [arxiv] [long slides] [short slides]
[summary]
Problem: We study the CountDistinct problem, in which \(n\) users each hold an element in a distributed setting, and the goal is to (approximately) count the total number of distinct elements.
Non-interactive Local Model: We show that (1) no \((\ln n - 7 \ln\ln n, n^{-\omega(1)})\)-DP local protocol can solve CountDistinct with error \(n/(\ln n)^{\omega(1)}\), and (2) there is an \((\ln n)\)-DP local protocol solving CountDistinct with error \(\tilde{O}(\sqrt{n})\).
Shuffle Model: We show that (1) no \((O(1), 2^{-\log^9 n})\)-DP single-message shuffle protocol can solve CountDistinct with error \(n/(\ln n)^{\omega(1)}\), and (2) there is an \((O(1), 2^{-\log^9 n})\)-DP shuffle protocol solving CountDistinct with error \(\tilde{O}(\sqrt{n})\), in which each user sends at most one message in expectation.
Two-Party Model: We also establish an \(\tilde{\Omega}(n)\) vs. \(O(1)\) separation between the error achievable in two-party DP and the global sensitivity, answering an open question of McGregor et al. (2011).
Dominated Protocols: We also introduce a relaxation of local-DP protocols called dominated protocols, and show that multi-message shuffle-DP protocols are dominated. By proving lower bounds against dominated protocols, we obtain lower bounds for selection and learning parity against multi-message shuffle-DP protocols.
Moment Matching and Poissonization: Inspired by the Poissonization and moment-matching techniques from property testing, our lower bounds are proved by (1) constructing two hard distributions over datasets, (2) expressing the histogram obtained by applying any \((\ln n - 7 \ln\ln n, n^{-\omega(1)})\)-DP protocol to a distribution over datasets as a sum of many independent mixtures of multi-dimensional Poisson distributions, and (3) using the moment-matching technique to bound the statistical distance between two mixtures of multi-dimensional Poisson distributions. (A toy illustration of step (3) is sketched below.)
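The following is a toy numerical illustration of step (3), entirely my own sketch in Python (it is not the paper's construction, and it uses one-dimensional Poisson mixtures for simplicity): mixtures of Poisson distributions whose mixing distributions agree on more low-order moments are statistically closer.

from math import exp

def poisson_pmfs(lam, kmax):
    # Poisson(lam) probabilities for k = 0..kmax, computed iteratively to avoid overflow.
    p, out = exp(-lam), []
    for k in range(kmax + 1):
        out.append(p)
        p *= lam / (k + 1)
    return out

def mixture_pmfs(mixing, kmax):
    # mixing is a list of (weight, rate) pairs defining a mixture of Poisson distributions.
    tables = [(w, poisson_pmfs(lam, kmax)) for w, lam in mixing]
    return [sum(w * t[k] for w, t in tables) for k in range(kmax + 1)]

def tv_distance(mix1, mix2, kmax=200):
    # Total variation distance, truncated at kmax (the tail mass is negligible for rates ~20).
    a, b = mixture_pmfs(mix1, kmax), mixture_pmfs(mix2, kmax)
    return 0.5 * sum(abs(x - y) for x, y in zip(a, b))

target            = [(0.5, 18.0), (0.5, 22.0)]   # mixing distribution: mean 20, variance 4
match_mean_only   = [(1.0, 20.0)]                # matches only the mean of the rates
match_two_moments = [(0.2, 16.0), (0.8, 21.0)]   # matches both mean and variance of the rates

print(tv_distance(target, match_mean_only))      # larger (one matched moment)
print(tv_distance(target, match_two_moments))    # smaller (two matched moments)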
Lijie Chen, Badih Ghazi, Ravi Kumar, Pasin Manurangsi. Innovations in Theoretical Computer Science (ITCS 2021)