Theory Seminar @ Computing Science, SFU

Theory Seminar series organized by the theory group of the School of Computing Science at SFU. We gratefully acknowledge the Pacific Institute for the Mathematical Sciences (PIMS) for its generous funding.

If you are interested in giving a talk, please contact Sajin Koroth at skoroth (at) sfu (dot) ca.

Update regarding the COVID-19 situation

To help curb the spread of COVID-19 by exercising "social distancing", we are cancelling our regular theory seminars, effective immediately, until further notice.

Details

Venue & Time

The usual venue is the CS Seminar Room 9204 in TASC-1. The standard day/time of the seminar for the Fall'19–Spring'20 semesters is Mondays from 11:30am to 12:30pm.

Mailing list

Our announcements and updates are sent out on the theory seminar mailing list at cs-theorysem@sfu.ca. If you would like to subscribe, please click the following link and send us an email: subscribe, or email Sajin Koroth at skoroth (at) sfu (dot) ca.

Calendar

Upcoming Talk

Future Talks

March 16th : Intersections of Half-spaces: Generalization and Approximate Counting - Zhenjian Lu (SFU)

Abstract

A half-space (over the Boolean hypercube) is defined as the set of points that lie on one side of a hyper-plane. An intersection of \(s\) half-spaces is obtained by taking the intersection of the \(s\) sets of points, each of which corresponds to a half-space. Intersections of half-spaces are of fundamental interest in theoretical computer science, especially in computational complexity, optimization, and high-dimensional geometry. In this talk, we consider the natural generalization of intersections of half-spaces: intersections of hyper-surfaces, where we replace hyper-planes in the definition with hyper-surfaces. We will then talk about the problems of proving lower bounds and approximate counting for intersections of half-spaces and hyper-surfaces.
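
For concreteness, a standard way to write these definitions (notation ours): a half-space and an intersection of \(s\) half-spaces over \(\{0,1\}^n\) take the form
\[
H_{w,\theta} = \{x \in \{0,1\}^n : \langle w, x\rangle \geq \theta\}, \qquad f(x) = \bigwedge_{i=1}^{s} \mathbb{1}\left[\langle w_i, x\rangle \geq \theta_i\right],
\]
and replacing each linear form \(\langle w_i, x\rangle\) with a low-degree polynomial \(p_i(x)\) yields an intersection of hyper-surfaces.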

March 23rd (Tentative) : TBD - Paul Beame (UW)

April (Tentative) : TBD - Marco Leandro Carmosino (PIMS Postdoctoral Fellow, SFU)

Past Talks

March 9th : Packing Directed Circuits Quarter-Integrally - Tomáš Masařík (Postdoc at SFU Math)

Abstract

The celebrated Erdős–Pósa theorem states that every undirected graph that does not admit a family of k vertex-disjoint cycles contains a feedback vertex set (a set of vertices hitting all cycles in the graph) of size O(k log k). Long known as Younger's conjecture, the analogous statement for directed graphs was proven in 1996 by Reed, Robertson, Seymour, and Thomas. However, in their proof, the dependency of the size of the feedback vertex set on the size of the vertex-disjoint cycle packing is not elementary.

We show that if we compare the size of a minimum feedback vertex set in a directed graph with the quarter-integral cycle packing number, we obtain a polynomial bound. More precisely, we show that if a directed graph G has no family of k cycles such that every vertex of G is in at most four of the cycles, then there exists a feedback vertex set in G of size O(k^4). Along the way, we prove a more general result about quarter-integral packing of subgraphs of high directed treewidth: for every pair of positive integers a and b, if a directed graph G has directed treewidth Ω(a^6 b^8 log^2(ab)), then one can find in G a family of a subgraphs, each of directed treewidth at least b, such that every vertex of G is in at most four subgraphs.

This is joint work with Irene Muzi, Marcin Pilipczuk, Paweł Rzążewski, and Manuel Sorge.

March 2nd : Racetrack and Vector TSP - Jason Schoeters (LaBRI, University of Bordeaux)

Abstract

The talk starts with a short survey of a mathematical game known as Racetrack, in which a vehicle moves in discrete rounds and can slightly modify its movement vector every round. The goal is to minimize the number of rounds needed to reach some destination. The survey includes complexity results, as well as algorithms and insights on the problem and some of its variations.
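
As a rough illustration, here is a minimal sketch of the movement rule in one common formulation of Racetrack (the talk's exact variant may differ, and the function name is ours):

```python
# Racetrack movement rule (a common formulation; the talk's variant may
# differ): each round the vehicle may change each velocity coordinate by
# at most 1, then moves by the resulting velocity vector.
from itertools import product

def successors(pos, vel):
    """Yield all (position, velocity) states reachable in one round."""
    (x, y), (vx, vy) = pos, vel
    for dx, dy in product((-1, 0, 1), repeat=2):   # bounded acceleration
        nvx, nvy = vx + dx, vy + dy                # new velocity
        yield ((x + nvx, y + nvy), (nvx, nvy))     # move by new velocity
```

Minimizing the number of rounds to reach a destination is then a shortest-path question over these (position, velocity) states.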

I will then introduce the Vector Traveling Salesman Problem (Vector TSP), in which one applies the Racetrack constraints to the visiting entity to effectively model speed, acceleration, and inertia in the well-known TSP. The difficulty lies in controlling these forces so as to visit the given set of locations in a minimum number of rounds. We show this problem is NP-complete and explore a number of insights and algorithms related to it. In particular, the Racetrack constraints are powerful enough to possibly result in a different optimal visit order than in the corresponding Euclidean TSP.

February 24th : Cancelled due to the speaker being unwell.

February 10th : An Improved Approximation Algorithm for TSP in the Half Integral Case - Nathan Klein (UW)

Abstract

The traveling salesperson problem (TSP) is one of the most celebrated and well-studied NP-complete problems we know. A classic result of Christofides from the 1970s gives a fast algorithm that returns a solution at most 3/2 times the optimum. Since then, however, no better approximation algorithm has been found, and improving upon it has become a famous open problem in theoretical computer science. In this talk, I will give an overview of research towards this goal and will present the first sub-3/2 approximation algorithm for an interesting special case, the so-called "half integral" TSP instances. Despite their simple structure, Schalekamp, Williamson and van Zuylen conjecture that these are "the hardest" instances of TSP to approximate. This presentation is of joint work with Anna Karlin and Shayan Oveis Gharan.

December 2nd : Logic of Information Flows: Expressing Reachability and Cardinality Properties - Eugenia Ternovska (SFU)

Abstract

A challenge in descriptive complexity is to identify logics with low complexity that simultaneously express both fundamental reachability properties and counting properties on unordered structures. In this talk, we give a family of logics that allow fine control of the complexity of order-independent computations. The approach is based on adding the ability to reason about information propagations in first-order logic with fixed points (FO(FP)). Two parameters may be used to control expressiveness and complexity: the set of definable connectives, and the expressiveness of atomic units. We identify a fragment with polytime data complexity and show that it can express both reachability and counting properties on unordered structures. The fragment is less expressive than FO(FP), both in its FO part and its recursive construct, but is more expressive in its ability to constrain information flows by means of (definable) modalities, similar to Dynamic logic.

We conjecture a connection of our logic to the MMSNP complexity class [Feder,Vardi:1992], which stands for Monotone Monadic Strict NP. This class has been conjectured to be a maximal fragment of NP where NP-intermediate languages are excluded. Our conjecture, if true, implies a dichotomy between PTIME and NP-complete problems definable in the logic.

November 25th : Query-to-communication lifting using low discrepancy gadget - Sajin Koroth (SFU)

Abstract

Lifting theorems are theorems that relate the query complexity of a function \(f : \{0, 1\}^n \to \{0, 1\}\) to the communication complexity of the composed function \(f \circ g^n\), for some "gadget" \(g : \{0, 1\}^b \times \{0, 1\}^b \to \{0, 1\}\). Such theorems allow transferring lower bounds from query complexity to communication complexity, and have seen numerous applications in recent years. In addition, such theorems can be viewed as a strong generalization of a direct-sum theorem for the gadget \(g\). We prove a new lifting theorem that works for all gadgets \(g\) that have logarithmic length and exponentially small discrepancy, for both deterministic and randomized communication complexity. Thus, we considerably increase the range of gadgets for which such lifting theorems hold. Our result has two main motivations: First, allowing a larger variety of gadgets may support more applications. In particular, our work is the first to prove a randomized lifting theorem for logarithmic-size gadgets, thus improving some applications of the theorem. Second, our result can be seen as a strong generalization of a direct-sum theorem for functions with low discrepancy.
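
Spelled out in standard notation, the composed function hands each coordinate of \(f\) the output of one gadget copy, with Alice holding the \(x\)-blocks and Bob the \(y\)-blocks:
\[
(f \circ g^n)(x_1,\ldots,x_n,\,y_1,\ldots,y_n) \;=\; f\bigl(g(x_1,y_1),\ldots,g(x_n,y_n)\bigr), \qquad x_i, y_i \in \{0,1\}^b.
\]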

Joint work with Arkadev Chattopadhyay, Yuval Filmus, Or Meir, Toniann Pitassi

November 4th : What CSP can do for GI - Andrei Bulatov (SFU)

Abstract

The Constraint Satisfaction Problem (CSP) and the Graph Isomorphism problem (GI) are very well-known combinatorial problems that have seen substantial progress in recent years. We survey some results about GI and CSP, and then show a connection between them that allows GI to be solved using CSP techniques. In particular, we improve upon a result known since the 70s.

October 21st : Lower Bounds on Balancing Sets and Depth-2 Threshold Circuits - Sivaramakrishnan Natarajan Ramamoorthy (UW)

Abstract

There are various notions of balancing set families that appear in combinatorics and computer science. Roughly speaking, a family of proper non-empty subsets \(S_1,\ldots,S_k \subset [n]\) is balancing if for every set \(X\) of size \(n/2\), the size of the intersection of \(S_i\) with \(X\) is approximately \(|S_i|/2\) for some \(i \in \{1,2,\ldots,k\}\). We extend and simplify the framework developed by Hegedűs for proving lower bounds on the size of balancing sets. We prove that if \(n = 2p\) for a prime \(p\), then \(k \geq p\). We then show a connection between balancing set families and depth-2 threshold circuits computing the majority and weighted threshold functions on \(n\) bits. This connection helps resolve a question raised by Kulikov and Podolskii on the fan-in of depth-2 majority circuits computing the majority function on \(n\) bits. We show that any depth-2 threshold circuit that computes the majority on \(n\) bits has at least one gate with fan-in at least \(n/2 - o(n)\). We also prove a sharp lower bound on the fan-in of depth-2 threshold circuits computing a specific weighted threshold function.

This is joint work with Pavel Hrubeš, Anup Rao and Amir Yehudayoff.

October 7th : The proof complexity of reasoning over richer domains - Antonina Kolokolova (SFU and Memorial University of Newfoundland)

Abstract

How does the representation of a problem affect the complexity of solving it? What is the interplay between the richness of the domain in which a statement is expressed and the complexity of proving that statement? Not only is this a fundamental question in logic, it also has a plethora of practical applications: given a class of problems, practitioners want to know how to encode them and which solver to use to solve them most efficiently.

Here we present a proof complexity approach to this question, in particular using the tools of proof complexity to study the satisfiability modulo theories (SMT) framework. In SMT, propositional satisfiability solvers operate over Boolean combinations of atoms from an underlying theory (such as the theory of linear arithmetic or of uninterpreted functions with equality), using a dedicated theory solver to analyse the viability of proposed satisfying assignments from the theory's perspective and to derive new facts by theory reasoning. A natural question is just how much it helps to augment propositional, resolution-based reasoning with the power of the theory.

We start by presenting a class of proof systems for resolution over a theory, and show that the correspondence between SAT solvers based on conflict-driven clause learning with restarts and general resolution can be extended to SMT. We then show that even a theory of uninterpreted functions, decidable in near-linear time, helps enormously: resolution over that theory can simulate the much more powerful Frege (natural deduction) system (and, with an additional rule, even the extended Frege system).

Based on joint work with Vijay Ganesh, Robert Robere and Arnold Beckmann.

October 1st : Stabbing Planes - Noah Fleming (U of Toronto)

Abstract

In this talk I will introduce a new semi-algebraic proof system, called Stabbing Planes, that is in the style of DPLL-based modern SAT solvers. Stabbing Planes proofs extend DPLL branching from single variables to branching on arbitrary linear inequalities and their "integer negations". This directly generalizes the reasoning underlying standard SAT solvers, which we hope will facilitate the extension of current heuristics to Stabbing Planes-based solvers. Somewhat surprisingly, we are able to show that Stabbing Planes can efficiently simulate Cutting Planes, and is strictly stronger under a reasonable conjecture. In fact, it is possible to characterize the exact restriction of Stabbing Planes proofs that corresponds to Cutting Planes. I will discuss these results and more, as well as several interesting open problems which I believe are within reach.

September 30th : Faster k-SAT algorithms using biased-PPSZ - Uri Zwick (TAU)

Talk slides

Abstract

The PPSZ algorithm, due to Paturi, Pudlák, Saks, and Zane, is currently the fastest known algorithm for the k-SAT problem, for every k > 3. For 3-SAT, a tiny improvement over PPSZ was obtained by Hertli. We introduce a biased version of the PPSZ algorithm with which we obtain an improvement over PPSZ for every k >= 3. For k = 3 we also improve on Hertli's result and get a much more noticeable improvement over PPSZ, though still a relatively small one. In particular, for Unique 3-SAT, we improve the current bound from \(1.308^n\) to \(1.307^n\).
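
To fix ideas, here is a heavily simplified, schematic rendering of a PPSZ-style round (ours, not from the talk); the `implied` oracle stands in for PPSZ's bounded-width implication check, and `bias` is the knob the biased variant tunes:

```python
import random

def ppsz_style_round(clauses, n, implied, bias=0.5):
    """One round of a PPSZ-style assignment (simplified sketch, not the
    talk's algorithm).

    implied(clauses, assignment, v) should return the forced value of
    variable v under the partial assignment, or None (in actual PPSZ
    this is a bounded-width resolution / D-implication check).
    bias = 0.5 corresponds to the unbiased coin of plain PPSZ.
    """
    assignment = {}
    for v in random.sample(range(n), n):             # random variable order
        forced = implied(clauses, assignment, v)
        if forced is not None:
            assignment[v] = forced                   # forced: no guess needed
        else:
            assignment[v] = random.random() < bias   # (biased) coin flip
    return assignment
```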

Joint work with Thomas Dueholm Hansen, Haim Kaplan and Or Zamir.

September 23rd : A Boosting Tutorial via Game Theory - Marco Leandro Carmosino (PIMS Postdoctoral Fellow, SFU)

Abstract

This talk will prove the correctness of simple boosting algorithms using basic game theory.

Boosting is a technique for using weak learning algorithms, whose accuracy barely beats a coin toss, to construct strong learning algorithms that achieve 99% accuracy. It is an ensemble method that creates a "committee" of weak hypotheses whose combination is extremely accurate. It has applications across learning, complexity theory, and mathematics.
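
As a concrete (if standard) reference point, here is an AdaBoost-style boosting loop; this is our illustrative sketch, with names of our choosing, and not necessarily the algorithm analyzed in the talk:

```python
import math

def boost(examples, labels, weak_learner, rounds):
    """Toy AdaBoost-style loop (illustrative sketch). Labels are +/-1.

    weak_learner(examples, labels, weights) must return a hypothesis
    h: example -> +/-1 whose weighted error beats 1/2.
    """
    m = len(examples)
    w = [1.0 / m] * m                            # distribution over examples
    committee = []
    for _ in range(rounds):
        h = weak_learner(examples, labels, w)
        err = sum(wi for wi, x, y in zip(w, examples, labels) if h(x) != y)
        err = min(max(err, 1e-12), 1 - 1e-12)    # clamp to avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)  # hypothesis weight
        committee.append((alpha, h))
        # upweight misclassified examples, downweight correct ones
        w = [wi * math.exp(-alpha * y * h(x))
             for wi, x, y in zip(w, examples, labels)]
        total = sum(w)
        w = [wi / total for wi in w]
    # final strong hypothesis: weighted-majority vote of the committee
    return lambda x: 1 if sum(a * h(x) for a, h in committee) >= 0 else -1
```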

The analysis via game theory will provide both a strong intuition for how boosting algorithms work, and deep connections to online learning and convex optimization.

September 16th : Learning models : connections between boosting, hard-core distributions, dense models, GAN, and regularity - Russell Impagliazzo (UCSD)

Abstract

A theme that cuts across many domains of computer science and mathematics is to find simple representations of complex mathematical objects such as graphs, functions, or distributions on data. These representations need to capture how the object interacts with a class of tests, and to approximately determine the outcome of these tests.

For example, a scientist is trying to find a mathematical model explaining observed data about some phenomenon, such as kinds of butterflies in a forest. A minimal criterion for success is that the model should accurately predict the results of future observations. When is this possible? This general situation arises in many contexts in computer science and mathematics. In machine learning, the object might be a distribution on data points, high dimensional real vectors, and the tests might be half-spaces. The goal would be to learn a simple representation of the data that determines the probability of any half-space or possibly intersections of half spaces. In computational complexity, the object might be a Boolean function or distribution on strings, and the tests are functions of low circuit complexity. In graph theory, the object is a large graph, and the tests are the cuts; the representation should determine approximately the size of any cut. In additive combinatorics, the object might be a function or distribution over a group, and the tests might be correlations with linear functions or polynomials.

This talk illustrates the connections between these various contexts. In particular, we'll show how to convert boosting algorithms into unsupervised learning algorithms similar to GANs. These algorithms produce "dense models" from samples of a distribution: high-entropy distributions that are indistinguishable from the samples by a class of tests. To illustrate the versatility of the techniques, we'll show direct applications to Fourier analysis of functions, leakage-resistant cryptography, and graph theory.

We will then discuss how to implement the algorithms in question, and in particular, bounds on the sample complexity required to ensure generalization.

July 4th : The challenge of broadcasting and a new graph sampling lemma - Valerie King (UVic)

Abstract

The problem of broadcasting in a distributed network is to transmit a single message from one node to every other node. We assume each node knows its neighbors, and communication is by message-passing. For many years, the simple flooding algorithm was considered optimal, but it requires a transmission across every edge. Since 2015, we have made substantial progress on this problem. Monte Carlo algorithms for synchronous and asynchronous networks can build a spanning tree (or minimum spanning tree) with a number of messages sublinear in the number of edges, for dense graphs. But Monte Carlo algorithms have a probability of error. Is a deterministic algorithm possible? In this talk I'll focus on recent work about a simplified version of the problem. It has led to a surprising graph sampling lemma with potentially interesting applications.

This work will appear in FOCS 2019. This is joint work with Jacob Holm, Mikkel Thorup, Or Zamir, and Uri Zwick.

June 20th : Toward a dichotomy for approximation of \(H\)-coloring - Akbar Rafiey (SFU)

Abstract

Given two (di)graphs \(G\), \(H\) and a cost function \(c:V(G)\times V(H) \to \mathbb{Q}_{\geq 0}\cup\{+\infty\}\), in the minimum cost homomorphism problem, MinHOM(\(H\)), we are interested in finding a homomorphism \(f:V(G)\to V(H)\) (a.k.a. an \(H\)-coloring) that minimizes \(\sum\limits_{v\in V(G)}c(v,f(v))\). The complexity of exact minimization of this problem is well understood due to Hell and Rafiey (2012), and the class of digraphs \(H\) for which MinHOM(\(H\)) is polynomial-time solvable is a small subset of all digraphs.

In this work, we consider the approximation of MinHOM within a constant factor. In terms of digraphs, MinHOM(\(H\)) is not approximable if \(H\) contains a digraph asteroidal triple (DAT). We take a major step toward a dichotomy classification of the approximable cases. We give a dichotomy classification for approximating MinHOM(\(H\)) when \(H\) is a graph (i.e., a symmetric digraph). For digraphs, we provide constant-factor approximation algorithms for two important classes: bi-arc digraphs (digraphs with a conservative semi-lattice polymorphism, or min-ordering), and \(k\)-arc digraphs (digraphs with an extended min-ordering). Specifically, we show that:

a) Dichotomy for graphs: MinHOM(\(H\)) has a \(2|V(H)|\)-approximation algorithm if the graph \(H\) admits a conservative majority polymorphism (i.e., \(H\) is a bi-arc graph); otherwise, it is inapproximable;

b) MinHOM(\(H\)) has a \(|V(H)|^2\)-approximation algorithm if \(H\) is a bi-arc digraph;

c) MinHOM(\(H\)) has a \(|V(H)|^2\)-approximation algorithm if \(H\) is a \(k\)-arc digraph.

In conclusion, we show the importance of these results and provide insights toward a dichotomy classification of the approximable cases. Our constant factors depend on the size of \(H\); however, our implementations achieve much better approximation ratios in practice. It remains open to classify the digraphs \(H\) for which MinHOM(\(H\)) admits a constant-factor approximation algorithm independent of \(|V(H)|\).

Joint work with Arash Rafiey and Tiago Santos.

June 6th : Revenue optimization for correlated bidders via sampling - Hu Fu (UBC)

Abstract

Correlation among buyers' valuations enables a revenue-maximizing seller to fully extract the social surplus, with no money left to the buyers. This was shown in a classic work by Crémer and McLean. The model has been criticized for allowing arbitrary dependence of the mechanism on the prior: any uncertainty about the prior disrupts the mechanism. We examine this criticism from a learning point of view. We allow uncertainty about the prior but grant the seller sample access to the true prior, and study the number of samples that suffice for surplus extraction. We give precise bounds on the number of samples needed, which show that surplus extraction needs much less information than learning the prior itself. In a sense, this is because the buyers "collaborate" in the learning, driven by their incentives. Our upper bound on the number of samples is obtained by an algebraic argument.

May 23rd : Counting (with) homomorphisms - Radu Curticapean (U Saarland)

Abstract

We survey results on counting graph homomorphisms, a simple yet comprehensive framework that unifies various graph-theoretic counting problems, sometimes in unexpected ways.

In the central problem for this framework, there are some objects on the left and some objects on the right. The objects on the left stand in relationships, as specified by a graph L. Likewise, the objects on the right share relationships specified by a graph R. A homomorphism f from L to R is a function that maps the left objects to the right objects such that present relationships are preserved: If uv is an edge in L, then its image f(u)f(v) is required to be an edge in R.
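
To make the definition concrete, here is a brute-force counter (our illustration; exponential in |V(L)|, so toy instances only), used below to count the proper 3-colorings mentioned in the next paragraph:

```python
from itertools import product

def count_homs(L, R):
    """Count homomorphisms from graph L to graph R by brute force.

    Graphs are given as (vertices, edges); R's edges are frozensets
    (undirected). Runs in |V(R)|^|V(L)| time -- toy examples only.
    """
    VL, EL = L
    VR, ER = R
    count = 0
    for image in product(VR, repeat=len(VL)):
        f = dict(zip(VL, image))                     # candidate map L -> R
        if all(frozenset((f[u], f[v])) in ER for (u, v) in EL):
            count += 1                               # all edges preserved
    return count

# Proper 3-colorings of a path on 3 vertices = homomorphisms into a
# triangle (a loop-free R forbids mapping adjacent vertices together).
path = ([0, 1, 2], [(0, 1), (1, 2)])
triangle = ([0, 1, 2], {frozenset((a, b)) for a in range(3)
                        for b in range(3) if a != b})
print(count_homs(path, triangle))  # 12 proper 3-colorings
```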

Given L and R as input, counting homomorphisms from L to R is #P-complete, and thus at least NP-hard: The problem can express counting proper 3-colorings of graphs (by taking L as input and fixing R to a triangle) or counting cliques of a desired size (by fixing L to a complete graph and taking R as the input, assuming R has no self-loops). We study how the complexity of this problem varies under more general restrictions on L and R:

While the settings of counting homomorphisms from fixed L or to fixed R may seem very different, we observe that they are both governed by the same connections between complexity-theoretic and algebraic properties of graph homomorphisms.

Based on joint works with Hubie Chen, Holger Dell, and Daniel Marx.

May 16th : Near-optimal sample complexity bounds for learning mixtures of Gaussians - Chris Liaw (UBC)

Talk slides

Abstract

Estimating distributions from observed data is a fundamental task in statistics that has been studied for over a century. We consider such a problem where the distribution is a mixture of \(k\) Gaussians in \(\mathbb{R}^d\). The objective is density estimation: given i.i.d. samples from the (unknown) distribution, produce a distribution whose total variation distance from the unknown distribution is at most \(\epsilon\). We prove that \(\Theta(kd^2/\epsilon^2)\) samples are necessary and sufficient for this task, suppressing logarithmic factors. This improves both the known upper bound and lower bound for this problem.

This is joint work with Hassan Ashtiani, Shai Ben-David, Nick Harvey, Abbas Mehrabian, and Yaniv Plan.

April 25th : Anti-Concentration in most directions - Anup Rao (UW)

Abstract

I will discuss a recent proof giving anti-concentration bounds for the inner product of two independent random vectors. For example, we show that if \(A, B\) are subsets of the cube \(\{\pm 1\}^n\) with \(|A| \cdot |B| \geq 2^{1.01n}\), and \(X \in A\) and \(Y \in B\) are sampled independently and uniformly, then the inner product \(\langle X,Y \rangle\) takes on any fixed value with probability at most \(O(1/\sqrt{n})\). Extending Halász's work, we prove stronger bounds when the choices for \(X\) are unstructured. I will discuss some applications of these results to communication complexity.

Link to the paper

Joint work with Amir Yehudayoff.

April 18th : Circuit Lower Bounds for MCSP from Local Pseudorandom Generators - Zhenjian Lu (SFU)

Abstract

The Minimum Circuit Size Problem (MCSP) asks whether a given truth table of a Boolean function \(f\) can be computed by a Boolean circuit of size at most \(\theta\), for a given parameter \(\theta\). Understanding the exact complexity of MCSP is an important open problem in computational complexity theory. In this talk, I will discuss lower bounds for computing MCSP in some weak/restricted computational models. In particular, I will present some new and improved lower bounds for MCSP in several restricted circuit models, including Boolean formulas, constant-depth circuits, and branching programs. These lower bounds for MCSP essentially match the best-known lower bounds for any explicit function/problem in these models.

Joint work with Mahdi Cheraghchi (Imperial College London), Valentine Kabanets (Simon Fraser University), and Dimitrios Myrisiotis (Imperial College London).

April 11th : A PTAS for approximating the genus of dense graphs - Bojan Mohar (SFU)

Abstract

Approximating the genus of graphs is one of the challenging open problems in approximation algorithms. Following our FOCS 2018 work, the speaker will show that for dense graphs the genus can be approximated with a PTAS of quadratic time complexity.

This is joint work with Yifan Jing.

April 4th : Circuit lower bounds for the Min Circuit Size Problem - Valentine Kabanets (SFU)

Abstract

The Minimum Circuit Size Problem (MCSP) is the problem of deciding the minimum circuit complexity of a boolean function given by its truth table. More precisely, given a \(2^n\)-bit string \(T\) (representing all the values of some boolean function \(f\) on \(n\) variables) and a parameter \(s > 0\), one needs to decide if there is a circuit of size at most \(s\) that computes \(f\). While MCSP is easily seen to be in NP (guess a circuit of size \(s\) and then check whether it computes \(f\)), it is still unknown whether MCSP is NP-complete. In fact, Leonid Levin (co-discoverer of NP-completeness) delayed his NP-completeness paper in the 1970s because he tried, and failed, to prove that MCSP is NP-complete. The question of whether MCSP is NP-complete remains open today.
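
Written as a language (in our notation), the problem from the previous paragraph is
\[
\mathrm{MCSP} = \left\{ (T, s) \;:\; T \in \{0,1\}^{2^n} \text{ is the truth table of a function computable by a circuit of size at most } s \right\}.
\]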

MCSP is a meta-computational problem that has applications to a number of areas of theoretical computer science: circuit complexity, cryptography, computational learning, proof complexity, and more. Since around 2015, there has been an increased interest in MCSP, with many research papers coming out in the past couple of years.

In this talk, I will present some very recent results on the circuit complexity of MCSP in several restricted circuit models, which essentially match the state-of-the-art circuit lower bounds known today for explicit boolean functions. In particular, I will show that MCSP requires exponential-size circuits of constant depth with AND, OR, NOT, and PARITY gates (of unbounded fan-in), i.e., in the circuit class AC^0[2]. I will also talk about almost-cubic lower bounds against boolean formulas with AND, OR, and NOT gates (de Morgan formulas).

No prior background in circuit complexity will be assumed for this talk. It should be understandable to first-year graduate students interested in theoretical computer science.

(Based on the joint works with A. Golovnev, R. Ilango, R. Impagliazzo, A. Kolokolova and A. Tal, as well as M. Cheraghchi, Z. Lu, and D. Myrisiotis.)

March 28th : Another problem with variants on either side of the P vs. NP divide - Leonid Chindelevitch (SFU)

Talk slides

Abstract

In our computer science classes we learn about several problems with variants crossing the boundary between P and NP: edge cover vs. vertex cover, 2-SAT vs. 3-SAT, LP vs. ILP, min-cut vs. max-cut. In this talk we focus on yet another such problem, motivated by genomics: finding the rank median of three permutation matrices. On the positive side, we show that if the median is allowed to be rational, then we can find it in \(O(n^\omega)\) time, where \(\omega\) is the matrix multiplication exponent. On the negative side, we show that if the median must be a permutation, the problem is not only NP-hard but also APX-hard. We conclude with some open questions and directions for future work.

March 14th : Probabilistic Checking against Non-Signaling Strategies from Linearity Testing - Igor Shinkar (SFU)

Abstract

Non-signaling strategies are collections of distributions with certain non-local correlations that have been studied recently in the context of delegation of computation. In this talk I will discuss a recent result showing that the classical PCP construction of [ALMSS'92] is sound against non-signaling strategies. As part of the proof, we study the problem of linearity testing [BLR'93] in the non-signaling setting, and prove that any non-signaling strategy that passes the linearity test with high probability must be close to a "quasi-distribution" over linear functions. Quasi-distributions generalize the notion of probability distributions over functions by allowing negative probabilities, while at the same time requiring that "local views" follow standard distributions (with non-negative probabilities).
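
For reference, here is the classical BLR test in its standard form (our sketch, with names of our choosing; the talk analyzes its soundness against non-signaling strategies rather than ordinary functions):

```python
import random

def blr_linearity_test(f, n, trials=100):
    """Classical BLR linearity test over GF(2)^n (standard version).

    f maps n-bit tuples to 0/1. Rejects as soon as a sampled triple
    violates f(x) + f(y) = f(x XOR y) (mod 2); a function passing with
    high probability is close to linear.
    """
    for _ in range(trials):
        x = tuple(random.randint(0, 1) for _ in range(n))
        y = tuple(random.randint(0, 1) for _ in range(n))
        x_plus_y = tuple(a ^ b for a, b in zip(x, y))   # pointwise XOR
        if (f(x) + f(y)) % 2 != f(x_plus_y) % 2:
            return False   # caught a violation: f is far from linear
    return True
```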

Joint work with Alessandro Chiesa and Peter Manohar (UC Berkeley).