Date and Time | Speaker and Title | Abstract | Reference |
---|---|---|---|
27 November 2024 14:00 (Helsinki time) Location: AS3, Maarintie 8 |
Melissa Antonelli: Towards New Characterizations for Small Circuit Classes via Discrete Ordinary Differential Equations |
Implicit computational complexity is an active area of theoretical computer science, which aims to provide machine-independent characterizations for relevant complexity classes. One of the seminal works in this field appeared in 1965, when Cobham introduced a function algebra closed under bounded recursion on notation (BRN) to capture $\mathbf{FP}$. Later on, several complexity classes have been characterized using limited recursion schemas. In this context, an original approach was recently introduced by Bournez and Durand, showing that ordinary differential equations (ODEs) offer a natural tool for algorithmic design and providing a characterization of $\mathbf{FP}$ by a new ODE-schema.
The overall goal of our project is precisely to generalize this approach to parallel computation: starting with original ODE-characterizations for the small circuit classes $\mathbf{FAC^0}$ and $\mathbf{FTC^0}$, we aim to uniformly capture the entire hierarchies $\mathbf{FAC^k}$ and $\mathbf{FNC^k}$. In this talk, I will introduce the global aim and methodology at the basis of our ongoing research and present the results we have obtained so far.
Joint work with Arnaud Durand and Juha Kontinen. |
|
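For readers unfamiliar with Cobham's function algebra mentioned in the abstract above, here is one common way to state the bounded recursion on notation (BRN) scheme; formulations vary between presentations, so treat this as an illustrative sketch rather than the exact definition used in the talk.

```latex
% Bounded recursion on notation (one common formulation):
% f is defined from g, h_0, h_1 and a bounding function k by
\begin{align*}
  f(0, \vec{y})    &= g(\vec{y}), \\
  f(2x, \vec{y})   &= h_0\bigl(x, \vec{y}, f(x, \vec{y})\bigr) \quad (x \neq 0), \\
  f(2x+1, \vec{y}) &= h_1\bigl(x, \vec{y}, f(x, \vec{y})\bigr),
\end{align*}
% subject to the size bound
\[
  |f(x, \vec{y})| \le |k(x, \vec{y})| \quad \text{for all } x, \vec{y}.
\]
% Cobham's theorem: closing a small set of base functions under composition
% and BRN yields exactly FP, the polynomial-time computable functions.
```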
4 December 2024 14:00 (Helsinki time) Location: AS3, Maarintie 8 |
Pedro Montealegre: Meta-Theorems for Distributed Certification: From Density to Sparsity, From FO to MSO |
A proof-labeling scheme (PLS) for a boolean predicate Π on labeled graphs is a mechanism for certifying the legality of global network states with respect to Π in a distributed manner. In a PLS, a certificate is assigned to each processing node in the network, and nodes are responsible for verifying that the collection of certificates forms a global proof of correctness. This is achieved through a single exchange of certificates between neighboring nodes. The primary measure of complexity in a PLS is the size of the certificates. Numerous PLSs have been designed for specific predicates, such as cycle-freeness, minimum-weight spanning trees, and planarity.
In this talk, we explore algorithmic meta-theorems for distributed certification, which aim to efficiently certify all predicates expressible in specific logics, provided the input graph belongs to a specific graph class. We demonstrate how celebrated meta-theorems from centralized computing can be adapted to the distributed setting with two key examples. First, we present a distributed certification analog of Courcelle's theorem: every graph predicate expressible in Monadic Second-Order (MSO) logic can be efficiently certified on graphs of bounded treewidth. Second, we establish that every graph predicate expressible in First-Order (FO) logic can be efficiently certified on graphs of bounded expansion, offering a distributed certification counterpart to the celebrated results of Nešetřil and Ossona de Mendez. |
|
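As a concrete textbook example of a proof-labeling scheme (not taken from the talk above): to certify that a network is bipartite, each node's certificate is one bit naming its side of a bipartition, and a node accepts iff all its neighbours carry the opposite bit. A minimal sketch of the verification round, with hypothetical data-structure names:

```python
# Toy proof-labeling scheme for bipartiteness (textbook example; `graph` and
# `cert` are illustration names, not from the talk).
# graph: node -> list of neighbours; cert: node -> 0/1 side of the bipartition.
def pls_verify_bipartite(graph, cert):
    # After one exchange of certificates with its neighbours, each node runs
    # the same local check; the global proof is accepted iff every node accepts.
    return all(cert[u] != cert[v] for v in graph for u in graph[v])

cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
assert pls_verify_bipartite(cycle4, {0: 0, 1: 1, 2: 0, 3: 1})       # legal proof
assert not pls_verify_bipartite(cycle4, {0: 0, 1: 1, 2: 1, 3: 1})   # broken proof
```

The certificate here is a single bit per node; the complexity measure discussed in the talk is how large such certificates must be for richer predicates.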
11 December 2024 14:00 (Helsinki time) Location: AS3, Maarintie 8 |
Martín Rios Wilson: TBA |
TBA | |
Winter break | The seminar will resume on Wednesday 22 January 2025.
Date and Time | Speaker and Title | Abstract | Reference |
---|---|---|---|
20 November 2024 14:00 (Helsinki time) Location: AS3, Maarintie 8 |
Tuomas Hakoniemi: Many flocks of birds - lower bounds for the pigeonhole principle in algebraic proof systems |
The pigeonhole principle is a basic combinatorial principle stating that there is no injective map from a finite set to a proper subset of itself. In proof complexity it is also one of the most studied principles, with many upper and lower bounds known in different proof systems. In this talk I'll discuss the proof complexity of the pigeonhole principle in strong algebraic proof systems that allow succinct representations of the polynomials involved in the proofs. In particular, we will see that any short algebraic proof of the pigeonhole principle must contain a polynomial that is very hard to represent as a read-once oblivious algebraic branching program. I will present the argument and discuss the context of the result.
This talk is based on ongoing work with Nutan Limaye and Iddo Tzameret. |
|
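For context, one common algebraic encoding of the pigeonhole principle $\mathrm{PHP}^{n+1}_{n}$ ($n+1$ pigeons, $n$ holes) that algebraic refutation systems work with is the following; the talk's exact encoding may differ, so this is only an illustration.

```latex
% Variables x_{i,j}: pigeon i (of n+1) sits in hole j (of n).
% The following system of polynomial equations has no common solution:
\[
  \sum_{j=1}^{n} x_{i,j} - 1 = 0 \quad \text{for every pigeon } i
  \qquad \text{(every pigeon is placed)}
\]
\[
  x_{i,j}\, x_{i',j} = 0 \quad \text{for all } i \neq i' \text{ and every hole } j
  \qquad \text{(no two pigeons share a hole)}
\]
\[
  x_{i,j}^2 - x_{i,j} = 0 \quad \text{for all } i, j
  \qquad \text{(Boolean axioms)}
\]
% An algebraic refutation derives the constant 1 from these axioms; the lower
% bound concerns the size of such derivations when the polynomials are given
% in succinct (circuit-like) representations.
```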
13 November 2024 14:00 (Helsinki time) Location: AS3, Maarintie 8 |
Juha Harviainen: Optimal Decision Tree Pruning Revisited: Algorithms and Complexity |
We present a comprehensive parameterized complexity analysis of decision tree pruning operations, extending recent research on the complexity of learning small decision trees. We focus on fundamental pruning operations used in heuristic approaches and provide a complete classification of their parameterized complexity. Our analysis identifies parameters that lead to fixed-parameter tractability or hardness, establishing a precise borderline between these complexity classes. We also study the tractability for combinations of parameters. This detailed complexity landscape offers new insights into the computational challenges of decision tree simplification, a crucial aspect of developing interpretable and efficient machine learning models.
Joint work with Frank Sommer, Manuel Sorge, and Stefan Szeider. |
|
6 November 2024 14:00 (Helsinki time) Location: AS3, Maarintie 8 |
Ivy K. Y. Woo: Evasive LWE — An Unusual Member in the Expanding Zoo of Lattice Assumptions |
The Learning with Errors (LWE) problem w.r.t. a matrix B asks to distinguish c = sB+e mod q from uniformly random, where s is a uniform secret and e some short error. In Eurocrypt'22, Wee proposed the evasive LWE assumption, which postulates that "For any matrix P, if LWE w.r.t. the joint matrix (B,P) is hard, then LWE w.r.t. B is also hard even when given short preimages U satisfying BU=P mod q". A handful of evasive LWE variants have emerged since then, which have been shown to imply various advanced cryptographic primitives, ranging from attribute-based encryption for unbounded-depth circuits, witness encryption, to obfuscation for null-circuits.
In this talk we overview the evasive LWE assumption, including why it appears useful in cryptographic proofs of advanced primitives, the different types of its variants, and recently discovered counterexamples against some of the stronger ones. We end the talk by restating the assumption in its simplest form as a lattice problem, and ask the question of whether it may be proven true, conditionally or unconditionally. This talk is a follow-up episode to the talk "On the Expanding Zoo of Lattice Assumptions" by Russell Lai during the 2024 Helsinki Algorithms & Theory Days. Part of this talk is based on joint work with Chris Brzuska and Akin Ünal. |
|
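As a quick illustration of the plain LWE samples referred to above (toy, wildly insecure parameters, and not the evasive variant; purely for intuition):

```python
import numpy as np

# Toy decisional LWE: distinguish c = s*B + e (mod q) from uniform.
rng = np.random.default_rng(0)
n, m, q = 8, 16, 97                      # secret length, samples, modulus (toy)

B = rng.integers(0, q, size=(n, m))      # public matrix
s = rng.integers(0, q, size=n)           # uniform secret (row vector)
e = rng.integers(-2, 3, size=m)          # short error

c_lwe = (s @ B + e) % q                  # LWE sample w.r.t. B
c_uniform = rng.integers(0, q, size=m)   # what it is supposed to resemble

# The LWE assumption says the two are computationally indistinguishable;
# evasive LWE additionally hands out short preimages U with B @ U = P (mod q).
print(c_lwe)
print(c_uniform)
```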
30 October 2024 14:00 (Helsinki time) Location: AS3, Maarintie 8 |
Andreas Niskanen: Declarative Approaches to Outcome Determination in Judgment Aggregation |
Judgment aggregation offers a general formal framework for modeling various settings involving information aggregation by social choice mechanisms. For many judgment aggregation rules, computing collective judgments is notoriously computationally hard. The central outcome determination problem, in particular, is often complete for higher levels of the polynomial hierarchy. In this work we develop practical exact algorithms for outcome determination under a range of the most central judgment aggregation rules by harnessing the declarative approach, in particular, Boolean satisfiability (SAT) based solvers. For the Kemeny, Slater, MaxHamming, Young, and Dodgson rules, we detail direct approaches based on maximum satisfiability (MaxSAT) solving. For the Reversal scoring, Condorcet, Ranked agenda, and LexiMax rules, we develop iterative algorithms, including algorithms based on the counterexample-guided abstraction refinement (CEGAR) paradigm, making use of recent advances in incremental MaxSAT solving and preferential SAT-based reasoning. We provide an open-source implementation of the algorithms, and empirically evaluate them using real-world preference data. We compare the performance of our implementation to a recent approach which makes use of declarative solver technology for answer set programming (ASP). The results demonstrate that our approach scales significantly beyond the reach of the ASP-based algorithms.
Joint work with Ari Conati and Matti Järvisalo. |
|
23 October 2024 14:00 (Helsinki time) Location: AS6, Maarintie 8 |
Seth Gilbert: To Catch a (Distributed) Thief |
Over the last several years we have seen a boom in the development of new Byzantine agreement protocols, in large part driven by the excitement over blockchains and cryptocurrencies. Unfortunately, Byzantine agreement protocols have some inherent limitations: (a) they are very expensive, particularly in terms of bandwidth; (b) correctness tends to depend on strong assumptions about network behavior; and (c) it is impossible to ensure correct operation when more than 1/3 of the processing power in the system is controlled by a single malicious party. These problems are fundamentally intertwined with the problem of Byzantine fault tolerance.
What if, instead of preventing bad behavior by a malicious attacker, we guarantee “accountability,” i.e., we can provide irrefutable evidence of the bad behavior and the identity of the perpetrator of those illegal actions? Much in the way we prevent crime in the real world, we can prevent bad behavior in a distributed system: either the protocol succeeds, or alternatively we record sufficient information to catch the criminal and take remedial actions. Accountability has been increasingly discussed as a desirable property in blockchains like Ethereum, which “slashes” the stake of cheating users. In this talk, I will give an overview of accountability in distributed systems, with a focus on decision problems like consensus. We begin with Polygraph, a (provably) accountable consensus protocol, and then describe the ABC transformation for making every consensus protocol accountable. Finally, we talk about the underlying theory of accountability and the necessary and sufficient conditions for achieving accountability. This talk covers joint work with Pierre Civit, Vincent Gramoli, Rachid Guerraoui, Jovan Komatovic, Zarko Milosevic, and Adi Serendinschi. |
|
16 October 2024 14:00 (Helsinki time) Location: AS3, Maarintie 8 |
Michał Osadnik: RoK, Paper, SISsors – Toolkit for Lattice-based Succinct Arguments |
Lattice-based succinct arguments make it possible to prove bounded-norm satisfiability of relations, such as $f(\mathbf{s}) = \mathbf{t} \bmod q$ and $\|\mathbf{s}\|\leq \beta$, over specific cyclotomic rings $\mathcal{O}_{\mathcal{K}}$, with proof size polylogarithmic in the witness size. However, state-of-the-art protocols require either 1) a super-polynomial size modulus $q$ due to a soundness gap in the security argument, or 2) a verifier which runs in time linear in the witness size. Furthermore, construction techniques often rely on specific choices of field which are not mutually compatible.
We construct modular, verifier-succinct reductions of knowledge for the bounded-norm satisfiability of structured-linear relations, without any soundness gap, under the vanishing SIS assumption, while maintaining polynomial (and concretely small) modulus. Moreover, our construction works over any field $\mathcal{K}$ which admits polynomial-size subtractive sets. Joint work with Michael Klooß, Russell W. F. Lai, and Ngoc Khanh Nguyen. |
|
9 October 2024 14:00 (Helsinki time) Location: AS3, Maarintie 8 |
Geert van Wordragen: Fine-Grained Complexity of Earth Mover’s Distance under Translation |
The Earth Mover's Distance is a popular similarity measure in several branches of computer science. It measures the minimum total edge length of a perfect matching between two point sets.
The Earth Mover's Distance under Translation (EMDuT) is a translation-invariant version thereof. It minimizes the Earth Mover's Distance over all translations of one point set. For EMDuT in $ℝ^1$, we present an $Õ(n^2)$-time algorithm. We also show that this algorithm is nearly optimal by presenting a matching conditional lower bound based on the Orthogonal Vectors Hypothesis. For EMDuT in $ℝ^d$, we present an $Õ(n^{2d+2})$-time algorithm for the $L_1$ and $L_\infty$ metric. We show that this dependence on $d$ is asymptotically tight, as an $n^{o(d)}$-time algorithm for $L_1$ or $L_\infty$ would contradict the Exponential Time Hypothesis (ETH). Prior to our work, only approximation algorithms were known for these problems. Joint work with Karl Bringmann, Frank Staals and Karol Węgrzycki. |
arXiv |
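As a small illustration of the underlying distance (this is not the translation-invariant problem or the algorithms from the paper): for two equal-size point sets on a line, an optimal perfect matching pairs the points in sorted order, so the plain Earth Mover's Distance can be computed as follows.

```python
from itertools import permutations

def emd_1d(a, b):
    """Earth Mover's Distance of two equal-size point sets on the real line:
    matching the points in sorted order is an optimal perfect matching."""
    assert len(a) == len(b)
    return sum(abs(x - y) for x, y in zip(sorted(a), sorted(b)))

def emd_brute_force(a, b):
    """Sanity check against all perfect matchings (tiny inputs only)."""
    return min(sum(abs(x - y) for x, y in zip(a, perm))
               for perm in permutations(b))

A, B = [0.0, 2.0, 5.0], [1.0, 1.5, 7.0]
assert emd_1d(A, B) == emd_brute_force(A, B) == 3.5
```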
2 October 2024 14:00 (Helsinki time) Location: AS3, Maarintie 8 |
Sebastian Brandt: Proving Impossibility Results in Distributed Algorithms and Beyond |
Over the last 35 years, the LOCAL model of distributed computation has seen countless improvements on the state-of-the-art complexity upper bounds for many different problems. In contrast, for a long time the task of proving complexity lower bounds was far less well understood, resulting in roughly a handful of nontrivial lower bounds until 2015. In a curious turn of events, two new lower bound techniques---round elimination and Marks' technique---appeared in 2016, in versions powerful enough to prove the exact same lower bounds for the same two problems: Delta-coloring and sinkless orientation. However, as Marks' technique was developed in an area of mathematics seemingly unrelated to distributed algorithms---descriptive combinatorics---its applicability to distributed algorithms was not noticed until several years later.
While the two techniques appear quite different at first glance, a closer inspection reveals that there are also striking similarities, suggesting that a simultaneous study of the techniques might reveal closer connections and ways to use insights from either technique to improve the other, in both distributed algorithms and descriptive combinatorics. Moreover, there are many open questions about either technique individually, making them primary targets of research on impossibility results. In this talk, we will take a tour through the workings of both techniques in their current generalized forms (from the perspective of distributed algorithms), with an emphasis on Marks' technique as it might be much less known to researchers in algorithms. |
Slides |
25 September 2024 14:00 (Helsinki time) Location: AS3, Maarintie 8 |
Shizhou Yang: Polynomial-time approximation schemes for induced subgraph problems on fractionally tree-independence-number-fragile graphs |
We investigate a relaxation of the notion of fractional treewidth-fragility, namely fractional tree-independence-number-fragility. In particular, we obtain polynomial-time approximation schemes for meta-problems such as finding a maximum-weight sparse induced subgraph satisfying a given CMSO2 formula on fractionally tree-independence-number-fragile graph classes. Our approach unifies and extends several known polynomial-time approximation schemes on seemingly unrelated graph classes, such as classes of intersection graphs of fat objects in a fixed dimension or proper minor-closed classes. We also study the related notion of layered tree-independence number, a relaxation of layered treewidth, and its applications to exact subexponential-time algorithms. Joint work with Esther Galby and Andrea Munaro. |
|
18 September 2024 14:00 (Helsinki time) Location: AS3, Maarintie 8 |
Jara Uitto: Parallel Algorithms for Local Problems in Sparse Graphs |
Locally checkable problems, such as vertex coloring and maximal independent set (MIS), have recently received a lot of attention in the study of (massively) parallel graph algorithms. In the sublinear regime of the Massively Parallel Computation (MPC) model, a standard technique to approach these problems is to sparsify the input graph and to efficiently simulate a distributed message-passing algorithm in the sparse graph. The focus in the algorithm design is typically on the sparsification step: Often, sophisticated steps are taken to turn the input graph into a sparse representation. Once the graph is sparsified, one applies a naïve algorithm to handle the sparse graph. Especially for MIS, the current techniques for sparsification seem to have hit a wall and we do not know how to push forward.
In this talk, we focus on the second step of the sparsification framework. We discuss algorithms for graphs that are already sparse where the sparsification step becomes irrelevant and nevertheless, many problems remain hard. We pay particular attention to the total space used in the algorithms and give algorithmic techniques tailored for sparse graphs. |
|
11 September 2024 14:00 (Helsinki time) Location: AS3, Maarintie 8 |
Xianbin Zhu: Dynamic Matching with Batches of Updates in Clique Networks |
We consider dynamic maximal matching and dynamic (approximate) maximum matching in the $k$-clique message-passing model under batches of updates (edge insertions/deletions). In this model, there are $k$ players, each of which can communicate with every other player with bandwidth $O(k\beta \log n)$ bits. The input graph is vertex-partitioned among the $k$ players. When a player holds some vertex $v$, it also knows its neighborhood $N(v)$. In this talk, we show lower and upper bounds for the above two problems. Specifically, for dynamic maximal matching and incremental (approximate) maximum matching with batches of updates, efficient algorithms exist. For dynamic (approximate) maximum matching, the problem is much more difficult. Finally, we propose some open questions. | |
29–30 August 2024 | Helsinki Algorithms and Theory Days | ||
20 August 2024 14:15 (Helsinki time) Location: T4, Konemiehentie 2 |
Neea Rusch: Implicit Computational Complexity: From Theory to Practice |
Implicit Computational Complexity (ICC) complements traditional complexity theory through machine-independent characterizations of complexity classes. These characterizations aim at defining syntactic restrictions, embedded in a programming language, to guarantee runtime behavior, typically complexity. Thus, ICC temporally shifts complexity to an a priori design consideration. Over the past thirty years, various ICC systems have been developed; however, they have largely remained at a theoretical level. This is discouraging in two ways. First, it prevents validating the techniques in real-world scenarios and assessing their full potential. Second, it restricts the spread of ICC techniques to wider research communities that could benefit from them. This presentation discusses a valiant effort to overcome the above challenge. It presents the applications of ICC we have discovered so far, in static analysis and in guaranteeing extended semantic properties in compiler transformations and software security. It also highlights the obstacles in bridging fruitful collaborations between separate research communities and reflects on strategies of success. | |
29 May 2024 14:00 (Helsinki time) Location: T5, Konemiehentie 2 |
Teemu Hankala: Complexity of Neural Network Training and ETR: Extensions with Effectively Continuous Functions |
The training problem of neural networks (NNs) is known to be $\exists\mathbb{R}$-complete with respect to ReLU and linear activation functions. We show that the training problem for NNs equipped with arbitrary activation functions is polynomial-time bireducible to the existential theory of the reals extended with the corresponding activation functions. For effectively continuous activation functions (e.g., the sigmoid function), we obtain an inclusion into low levels of the arithmetical hierarchy. Consequently, the sigmoid activation function leads to the existential theory of the reals with the exponential function, and hence the decidability of training NNs using the sigmoid activation function is equivalent to the decidability of the existential theory of the reals with the exponential function, a long-standing open problem. In contrast, we obtain that the training problem is undecidable if sinusoidal activation functions are considered. | |
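As a schematic illustration (my own simplification, not the exact reduction from the talk) of how a training instance becomes an existential sentence over the reals: for a single ReLU neuron with weights $w \in \mathbb{R}^d$, bias $b$, data points $(x_i, y_i)$, and error budget $\delta$, the training decision problem asks whether

```latex
\[
  \exists\, w_1, \dots, w_d, b \in \mathbb{R}:\quad
  \sum_{i} \bigl(\operatorname{ReLU}(w \cdot x_i + b) - y_i\bigr)^2 \le \delta,
\]
% where ReLU(z) = max(0, z) is first-order definable over the reals:
\[
  u = \operatorname{ReLU}(z) \;\Longleftrightarrow\;
  (z \le 0 \wedge u = 0) \vee (z \ge 0 \wedge u = z).
\]
% Replacing ReLU by the sigmoid 1/(1 + e^{-z}) instead forces the sentence into
% the existential theory of the reals extended with the exponential function.
```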
22 May 2024 14:00 (Helsinki time) Location: T5, Konemiehentie 2 |
Rishikesh Gajjala: A combinatorial approach to weighted model counting |
Weighted model counting consists of computing the weighted sum of all satisfying assignments of a propositional formula. It is a very well-studied area with applications in probabilistic reasoning, network reliability estimation, statistical physics, program synthesis, and system verification. One of the state-of-the-art approaches to solving weighted model counting is to reduce it to unweighted model counting. The following question arises from such reductions: “Can one construct a monotone DNF with few terms which has exactly $n$ solutions?” This problem is equivalent to the natural combinatorial question of finding the smallest antichain that generates an ideal (also known as a downset, a monotone decreasing family, or an abstract simplicial complex) of size $n$.
In this work, we prove that we can always construct such monotone DNFs using at most $O(\sqrt{\log{n}}\log\log{n})$ terms. We also prove that there exists infinitely many $n\in \mathbb{N}$ for which to construct any monotone DNF, we require at least $\Omega(\log\log{n})$ terms.
Joint work with L. Sunil Chandran and Kuldeep S. Meel. |
|
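To make the combinatorial question above concrete, here is a tiny hand-made example (mine, not from the paper): over three variables, the monotone DNF $x_1 \vee (x_2 \wedge x_3)$ has exactly 5 satisfying assignments, as a brute-force count confirms.

```python
from itertools import product

def count_monotone_dnf_solutions(terms, num_vars):
    """Count assignments satisfying a monotone DNF, given as a list of terms,
    each term being a set of (positive) variable indices."""
    return sum(
        any(all(assignment[v] for v in term) for term in terms)
        for assignment in product([0, 1], repeat=num_vars)
    )

# x1 OR (x2 AND x3): two terms, exactly 5 of the 8 assignments satisfy it.
print(count_monotone_dnf_solutions([{0}, {1, 2}], 3))   # -> 5
```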
15 May 2024 14:00 (Helsinki time) Location: T5, Konemiehentie 2 |
Petteri Kaski: A universal sequence of tensors for the asymptotic rank conjecture |
The exponent $\sigma(T)$ of a tensor $T\in\mathbb{F}^d\otimes\mathbb{F}^d\otimes\mathbb{F}^d$ over a field $\mathbb{F}$ captures the base of the exponential growth rate of the tensor rank of $T$ under Kronecker powers. Tensor exponents are fundamental from the standpoint of algorithms and computational complexity theory; for example, the exponent $\omega$ of matrix multiplication can be characterized as $\omega=2\sigma(\mathrm{MM}_2)$, where $\mathrm{MM}_2\in\mathbb{F}^4\otimes\mathbb{F}^4\otimes\mathbb{F}^4$ is the tensor that represents $2\times 2$ matrix multiplication.
Our main result is an explicit construction of a sequence $\mathcal{U}_d$ of zero-one-valued tensors that is universal for the worst-case tensor exponent; more precisely, we show that $\sigma(\mathcal{U}_d)=\sigma(d)$ where $\sigma(d)=\sup_{T\in\mathbb{F}^d\otimes\mathbb{F}^d\otimes\mathbb{F}^d}\sigma(T)$. We also supply an explicit universal sequence $\mathcal{U}_\Delta$ localised to capture the worst-case exponent $\sigma(\Delta)$ of tensors with support contained in $\Delta\subseteq [d]\times[d]\times [d]$; by combining such sequences, we obtain a universal sequence $\mathcal{T}_d$ such that $\sigma(\mathcal{T}_d)=1$ holds if and only if Strassen's asymptotic rank conjecture [Progr. Math. 120 (1994)] holds for $d$. Finally, we show that the limit $\lim_{d\rightarrow\infty}\sigma(d)$ exists and can be captured as $\lim_{d\rightarrow\infty} \sigma(D_d)$ for an explicit sequence $(D_d)_{d=1}^\infty$ of tensors obtained by diagonalisation of the sequences $\mathcal{U}_d$. As our second result we relate the absence of polynomials of fixed degree vanishing on tensors of low rank, or more generally asymptotic rank, with upper bounds on the exponent $\sigma(d)$. Using this technique, one may bound asymptotic rank for all tensors of a given format, knowing enough specific tensors of low asymptotic rank. Joint work with Mateusz Michałek. |
arXiv |
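For orientation, one standard way to make the quantities in the abstract precise (the paper's normalisation may differ in details) is the following.

```latex
% The matrix multiplication tensor for n x n matrices:
\[
  \mathrm{MM}_n \;=\; \sum_{i,j,k \in [n]} e_{(i,j)} \otimes e_{(j,k)} \otimes e_{(k,i)}
  \;\in\; \mathbb{F}^{n^2} \otimes \mathbb{F}^{n^2} \otimes \mathbb{F}^{n^2}.
\]
% The exponent of a tensor T in F^d (x) F^d (x) F^d, via its asymptotic rank
% (R denotes tensor rank, \boxtimes the Kronecker power):
\[
  \sigma(T) \;=\; \log_d \lim_{N \to \infty} R\bigl(T^{\boxtimes N}\bigr)^{1/N}.
\]
% With this normalisation, sigma(MM_2) = omega/2, i.e. omega = 2*sigma(MM_2).
```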
8 May 2024 14:00 (Helsinki time) Location: T5, Konemiehentie 2 |
Okko Makkonen: Algebraic methods in homomorphic secret sharing |
Secret sharing is a way of distributing a secret value to a group of users such that the secret is not revealed to any sufficiently small subset of users. The privacy guarantee in applications such as private information retrieval and secure distributed computation is based on linear secret sharing schemes that allow for computation on the data. Homomorphic secret sharing is a method of doing secret sharing that allows for computations on the secret value to be performed locally by the users. In this talk, we look at some algebraic methods of constructing homomorphic secret sharing schemes with applications in private information retrieval. In particular, we aim to reduce the field size requirements of such schemes by utilizing codes over algebraic curves.
This is joint work with David Karpuk and Camilla Hollanti. |
|
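As background, a textbook example of the homomorphic property (Shamir's scheme over a prime field, not the algebraic-curve construction from the talk): because the scheme is linear, users can add their shares locally and obtain a valid sharing of the sum of the secrets.

```python
import random

P = 2_000_003                 # toy prime defining the field

def share(secret, threshold, n_users):
    """Shamir sharing: the secret is the constant term of a random polynomial
    of degree threshold-1; user x receives the evaluation at point x."""
    coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
    return [sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
            for x in range(1, n_users + 1)]

def reconstruct(points):
    """Lagrange interpolation at 0 from (x, share) pairs."""
    total = 0
    for x_i, y_i in points:
        num = den = 1
        for x_j, _ in points:
            if x_j != x_i:
                num = num * (-x_j) % P
                den = den * (x_i - x_j) % P
        total = (total + y_i * num * pow(den, -1, P)) % P
    return total

a_shares = share(1234, threshold=3, n_users=5)
b_shares = share(42, threshold=3, n_users=5)
# Additive homomorphism: local addition of shares, no interaction needed.
sum_shares = [(sa + sb) % P for sa, sb in zip(a_shares, b_shares)]
assert reconstruct(list(enumerate(sum_shares, start=1))[:3]) == 1234 + 42
```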
2 May 2024 14:00 (Helsinki time) Location: TU4, Maarintie 8 |
Zhuan Khye Koh: A strongly polynomial algorithm for the minimum cost generalized flow problem |
We give a strongly polynomial algorithm for minimum cost generalized flow, and as a consequence, for all linear programs with at most two variables per inequality. Previously, strongly polynomial algorithms were only known for the primal and dual feasibility problems. Our approach is to show that the path-following interior point method of Allamigeon et al. ’22 terminates in a strongly polynomial number of iterations for minimum cost generalized flow. We achieve this by bounding the ‘straight line complexity’ of the central path, which is the minimum number of pieces required by a piecewise affine curve to multiplicatively approximate the central path.
Based on joint work with Daniel Dadush, Bento Natura, Neil Olver and László Végh. |
|
24 April 2024 14:00 (Helsinki time) Location: T5, Konemiehentie 2 |
Nicola Cotumaccio: Sorting Automata and Regular Languages |
We present a new paradigm in graph compression and formal language theory. We show that the ideas behind some of the most important data structures for compressing and indexing strings --- such as the suffix array, the Burrows-Wheeler Transform and the FM-index --- are much more general and provide a new approach to studying automata and regular languages, which retrospectively explains the impact of these data structures. We classify all automata and all regular languages by their propensity to be sorted. Our classification represents a useful parameterization simultaneously for diverse automata-related measures: (i) the encoding bit-complexity of automata/labeled graphs, (ii) the complexity of operations on regular languages (e.g. membership) and on labeled graphs (e.g. pattern matching), (iii) the complexity of NFA determinization by the powerset-construction algorithm. To the best of our knowledge, ours is the only parameterization of automata/labeled graphs capturing simultaneously all these aspects. We show that our parameterization has deep and unexpected consequences both in data compression (encoding, pattern matching) and in automata theory (nondeterminism, entanglement, minimization). | |
17 April 2024 14:00 (Helsinki time) Location: T5, Konemiehentie 2 |
Sándor Kisfaludi-Bak: Separator Theorem and Algorithms for Planar Hyperbolic Graphs |
The hyperbolicity of a graph, informally, measures how close a graph is (metrically) to a tree. Hence, it is intuitively similar to treewidth, but the measures are formally incomparable. The main topic of the talk is a novel balanced separator theorem for planar $\delta$-hyperbolic graphs that is substantially stronger than the classic planar separator theorem. For any fixed $\delta \ge 0$, we can find a small balanced separator that induces either a single geodesic (shortest) path or a single geodesic cycle in the graph, which guarantees that each separated part plus the separator induces a (planar) $\delta$-hyperbolic graph.
As an application of our separator theorem, we will see that both Independent Set and TSP have near-linear time FPTASes in planar $\delta$-hyperbolic graphs for any constant $\delta$, running in $\tilde{O}(n) \cdot 2^{O(\delta^2)} / \epsilon^{O(\delta)}$ time. For Independent Set this running time is essentially tight under the Exponential Time Hypothesis (ETH). |
arXiv |
10 April 2024 14:00 (Helsinki time) Location: T5, Konemiehentie 2 |
Jara Uitto: Deterministic (1 + epsilon)-Approximation Algorithm for Maximum Matching with polynomial (in epsilon) Passes in the Semi-Streaming Model |
We consider the classic maximum cardinality matching problem in the semi-streaming model. In the streaming model, the input graph G = (V, E) is considered to be significantly larger than the available random access memory. The input is presented to the algorithm as a stream of edges and, at any given time, the algorithm is only able to store O(|V| log |V|) bits of information. A semi-streaming algorithm is allowed to make several passes over the edge-stream but, ideally, should commit to a solution after a constant number of passes.
In his seminal work in 2005, McGregor presented a randomized (1 + epsilon)-approximation algorithm for matching that requires an exponential (in epsilon) number of passes. After McGregor's result, Eggert et al. presented a deterministic algorithm with a polynomial number of passes for bipartite graphs. Later, Tirodkar gave a deterministic algorithm with an exponential number of passes for general graphs. However, McGregor's algorithm remained the fastest for general graphs. In our work, we discovered an algorithm that achieves a polynomial number of passes for general graphs, yielding an exponential improvement over the previous work. Satisfyingly, our algorithm is also deterministic. |
|
3 April 2024 14:00 (Helsinki time) Location: T5, Konemiehentie 2 |
Juha Harviainen: Bayesian Network Learning with Narrow Precedence Constraint Graph |
Bayesian networks are probabilistic graphical models whose structure is represented as a directed acyclic graph (DAG). Finding the optimal network structure is a computationally hard task, but it can be made easier with constraints obtained from expert knowledge. These constraints can come in the form of precedence constraints which define a partial order that the structure must obey. When the constraints are compiled into a DAG, the complexity of learning with precedence constraints is connected to the number of ideals of the constraint graph. Taking the path cover number of the constraint graph as a parameter, we extend earlier results to the problems of sampling and weighted counting of network structures. We also consider the problems with a stronger type of precedence constraints, positive ancestral constraints, which state that a node must be an ancestor of another. With these constraints, we give efficient algorithms for the problems under the additional assumption that the constraint graph has only a small number of incomparable edges.
Joint work with Pekka Parviainen. |
|
27 March 2024 14:00 (Helsinki time) Location: T5, Konemiehentie 2 |
Sorrachai Yingchareonthawornchai: Faster Deterministic Vertex Connectivity Algorithms |
An $n$-vertex $m$-edge graph is $k$-vertex connected if it cannot be disconnected by deleting fewer than $k$ vertices. After more than half a century of intensive research, the result by [Li et al. STOC'21] finally gave a randomized algorithm for checking $k$-connectivity in near-optimal $\widehat{O}(m)$ time, where $\widehat{O}(\cdot)$ hides an $n^{o(1)}$ factor. Deterministic algorithms, unfortunately, have remained much slower even if we assume a linear-time max-flow algorithm: they either require at least $\Omega(mn)$ time [Even'75; Henzinger, Rao, and Gabow, FOCS'96; Gabow, FOCS'00] or assume that $k=o(\sqrt{\log n})$ [Saranurak and Yingchareonthawornchai, FOCS'22].
In this talk, I will describe a deterministic algorithm for checking $k$-vertex connectivity in time proportional to making $\min\{k^2, n\}$ max-flow calls, and, hence, in $\widehat{O}(m\min\{k^{2},n\})$ time using the deterministic max-flow algorithm by [Brand et al. FOCS'23]. Our algorithm gives the first almost-linear-time bound for all $k$ where $\sqrt{\log n}\le k\le n^{o(1)}$ and subsumes up to a sub-polynomial factor the long-standing state-of-the-art algorithm by [Even'75] which requires $O(n+k^{2})$ max-flow calls. For large $k$, the algorithm runs in $\widehat O(mn)$ time, which improves over the state-of-the-art deterministic $\widehat{O}(mn^{1.5})$-time algorithm [Gabow, FOCS'00]. Our key technique is based on Ramanujan expanders and derandomization of the kernelization technique of [Li et al. STOC'21] for which their kernel construction was randomized. Joint work with Yonggang Jiang, Chaitanya Nalam, and Thatchaphol Saranurak. |
|
6 March 2024 14:00 (Helsinki time) Location: T5, Konemiehentie 2 |
Lasse Leskelä: Community recovery from temporal and higher-order network interactions |
Community recovery is the task of learning a latent community structure from interactions in a population of N nodes. Efficient algorithms for sparse binary pairwise interaction data are well known, and so are their consistency properties with respect to data sampled from the stochastic block model (SBM), the canonical model for random graphs with a community structure. Instead of a binary variable indicating whether or not an interaction occurs, we often also observe a category, value, or shape of an interaction. This motivates the definition of a generalised SBM in which interactions can be of arbitrary type, including categorical, numeric, and vector-valued, and not excluding even more general objects such as Markov chains or Poisson processes. For this model, I will discuss information-theoretic bounds which characterise the existence of consistent estimators in terms of data sparsity, statistical similarity between intra- and inter-block interaction distributions, and the shape and size of the interaction space. Temporal networks with time-correlated interaction patterns of length T provide an important model instance, for which consistency can be analysed with respect to either N or T, or both, approaching infinity. Time permitting, I will also highlight recent findings and open problems related to data sets involving higher-order interactions which can be modelled using hypergraph stochastic block models. Joint work with Konstantin Avrachenkov and Maximilien Dreveton. |
|
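For readers unfamiliar with the stochastic block model mentioned above, a minimal sampler for the basic binary pairwise case (my own toy sketch; the talk concerns far more general interaction types):

```python
import numpy as np

def sample_sbm(block_sizes, p_in, p_out, seed=0):
    """Sample a symmetric binary SBM: two nodes are adjacent with probability
    p_in if they lie in the same block and p_out otherwise."""
    rng = np.random.default_rng(seed)
    labels = np.repeat(np.arange(len(block_sizes)), block_sizes)
    n = len(labels)
    probs = np.where(labels[:, None] == labels[None, :], p_in, p_out)
    upper = np.triu(rng.random((n, n)) < probs, k=1)   # sample each pair once
    return labels, (upper | upper.T).astype(int)       # symmetric adjacency

labels, adj = sample_sbm([50, 50], p_in=0.3, p_out=0.05)
# Community recovery: estimate `labels` (up to relabelling) given only `adj`.
```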
7 February 2024 14:00 (Helsinki time) Location: AS6, Maarintie 8 |
Shreyas Pai: Time and Space Optimal Massively Parallel Algorithm for the 2-Ruling Set Problem |
In this talk, we present a constant-round algorithm for the 2-ruling set problem in the MPC model with linear space-per-machine and optimal total space. Our results improve on the O(log log log n)-round algorithm by [HPS, DISC'14] and the O(log log Δ)-round algorithm by [GGKMR, PODC'18]. Our techniques can be applied to the Congested Clique model to obtain a constant round algorithm, and to the semi-streaming model to obtain a constant pass algorithm.
The main technical contribution is a novel sampling procedure that returns a small subgraph such that almost all nodes in the input graph are adjacent to the sampled subgraph. An MIS on the sampled subgraph provides a 2-ruling set for a large fraction of the input graph. As a technical challenge, we must handle the remaining part of the graph, which might still be relatively large. We overcome this challenge by showing useful structural properties of the remaining graph and show that running our process twice yields a 2-ruling set of the original input graph with high probability. The talk is based on a paper that appeared in DISC 2023 and is joint work with Mélanie Cambus, Fabian Kuhn, and Jara Uitto. |
arXiv |
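For reference, the object being computed (the definition as commonly used in this line of work; stated here as background, not as part of the paper): a 2-ruling set is an independent set S such that every vertex is within distance 2 of some vertex of S. A small checker:

```python
def is_2_ruling_set(graph, S):
    """graph: dict node -> set of neighbours; S: candidate ruling set.
    Checks (i) S is independent and (ii) every node is within distance 2 of S."""
    S = set(S)
    independent = all(graph[v].isdisjoint(S) for v in S)
    covered = all(
        v in S or S & graph[v] or any(S & graph[u] for u in graph[v])
        for v in graph
    )
    return independent and covered

path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
assert is_2_ruling_set(path, {0, 3})    # every node is within distance 2
assert not is_2_ruling_set(path, {0})   # nodes 3 and 4 are too far away
```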
24 January 2024 14:00 (Helsinki time) Location: TU6, Maarintie 8 |
Geert van Wordragen: A Quadtree, a Steiner Spanner, and Approximate Nearest Neighbours in Hyperbolic Space |
We propose a data structure in d-dimensional hyperbolic space that can be considered a natural counterpart to quadtrees in Euclidean spaces.
Using these quadtrees we build geometric spanners. Near-linear size (1+ϵ)-spanners do not exist in hyperbolic spaces, but we are able to create a Steiner spanner that achieves a spanning ratio of 1+ϵ with O(n) edges for constant d and ϵ, using a simple construction that can be maintained dynamically. As a corollary we also get a (2+ϵ)-spanner (in the classical sense) of the same size, where the spanning ratio 2+ϵ is almost optimal among spanners of subquadratic size. Finally, we show that our Steiner spanner directly provides a solution to the approximate nearest neighbour problem: given a point set P in d-dimensional hyperbolic space we build the data structure in O(n log n) time, using O(n) space. Then for any query point q we can find a point p∈P that is at most 1+ϵ times farther from q than its nearest neighbour in P in O(log n) time. Moreover, the data structure is dynamic and can handle point insertions and deletions with update time O(log n). This is joint work with Sándor Kisfaludi-Bak. |
arXiv |
17 January 2024 14:00 (Helsinki time) Location: AS6, Maarintie 8 |
Augusto Modanese: Testing Spreading Behavior in Networks with Arbitrary Topologies |
Inspired by the works of Goldreich and Ron (J. ACM, 2017) and Nakar and Ron (ICALP, 2021), we initiate the study of property testing in dynamic environments with arbitrary topologies. Our focus is on the simplest non-trivial rule that can be tested, which corresponds to the 1-BP rule of bootstrap percolation and models a simple spreading behavior: Every "infected" node stays infected forever, and each "healthy" node becomes infected if and only if it has at least one infected neighbor. We show various results for both the case where we test a single time step of evolution and where the evolution spans several time steps. In the first, we show that the worst-case query complexity is O(Δ/ε) or Õ(√n/ε) (whichever is smaller), where Δ and n are the maximum degree of a node and the number of vertices, respectively, in the underlying graph, and we also show lower bounds for both one- and two-sided error testers that match our upper bounds up to Δ = o(√n) and Δ = O(n^{1/3}), respectively. In the second setting of testing the environment over T time steps, we show upper bounds of O(Δ^{T-1}/ε^T) and Õ(|E|/ε^T), where E is the set of edges of the underlying graph. All of our algorithms are one-sided error, and all of them are also time-conforming and non-adaptive, with the single exception of the more complex Õ(√n/ε)-query tester for the case T = 2. This is joint work with Yuichi Yoshida (NII). |
arXiv |
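The 1-BP spreading rule described above is simple enough to state as code; a minimal sketch of one synchronous step on an arbitrary topology (illustration only):

```python
def one_bp_step(graph, infected):
    """One step of the 1-BP rule: infected nodes stay infected, and a healthy
    node becomes infected iff it has at least one infected neighbour."""
    return infected | {v for v in graph
                       if any(u in infected for u in graph[v])}

triangle_with_tail = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(one_bp_step(triangle_with_tail, {0}))   # -> {0, 1, 2}
# The testing problem asks, roughly: given query access to two configurations,
# decide whether the second is (close to) one 1-BP step of the first.
```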
10 January 2024 14:00 (Helsinki time) Location: TU3, Maarintie 8 |
Tuomas Hakoniemi: Proof complexity meets algebraic circuit complexity - lower bounds for constant-depth refutations |
The Ideal Proof System (IPS), introduced by Grochow and Pitassi, is a strong algebraic proof system that can be used to refute the solvability of systems of polynomial equations. IPS refutations are algebraic circuits, and lower bounds for IPS have an intimate connection with lower bounds in algebraic circuit complexity. In this talk we discuss lower bounds for constant-depth IPS refutations. Our work builds on the breakthrough lower bounds for constant-depth algebraic circuits by Limaye, Srinivasan and Tavenas, and the recent follow-up work by Amireddy, Garg, Kayal, Saha and Thankey that forgoes the hardness escalation step of Limaye, Srinivasan and Tavenas. This talk is based on joint works with Nashlen Govindasamy, Nutan Limaye and Iddo Tzameret. |
|
13 December 2023 16:15 (Helsinki time) Location: AS3, Maarintie 8 |
Massimo Equi: From Bit-Parallelism to Quantum String Matching for Labelled Graphs |
Many problems that can be solved in quadratic time have bit-parallel speed-ups with factor w, where w is the computer word size. A classic example is computing the edit distance of two strings of length n, which can be solved in O(n²/w) time. In a reasonable classical model of computation, one can assume w = Θ(log n), and obtaining significantly better speed-ups is unlikely in the light of conditional lower bounds obtained for such problems. In this paper, we study the connection of bit-parallelism to quantum computation, aiming to see if a bit-parallel algorithm could be converted to a quantum algorithm with better than logarithmic speed-up. We focus on string matching in labeled graphs, the problem of finding an exact occurrence of a string as the label of a path in a graph. This problem admits a quadratic conditional lower bound under a very restricted class of graphs (Equi et al. ICALP 2019), stating that no algorithm in the classical model of computation can solve the problem in time O(|P||E|^(1-ε)) or O(|P|^(1-ε)|E|). We show that a simple bit-parallel algorithm on such restricted family of graphs (level DAGs) can indeed be converted into a realistic quantum algorithm that attains subquadratic time complexity O(|E|√|P|). | |
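For intuition about the bit-parallelism referred to above, here is the classic Shift-And algorithm for exact matching in plain strings; it is a standard textbook technique included only to show how w pattern positions are advanced with O(1) word operations per text character, and it is not the labelled-graph algorithm from the paper.

```python
def shift_and(text, pattern):
    """Bit-parallel Shift-And: bit j of `state` is 1 iff the last j+1 characters
    read so far match the first j+1 characters of the pattern."""
    m = len(pattern)
    mask = {}                                   # per-character bit masks
    for j, c in enumerate(pattern):
        mask[c] = mask.get(c, 0) | (1 << j)
    state, accept, hits = 0, 1 << (m - 1), []
    for i, c in enumerate(text):
        # One word-level update advances all partial matches simultaneously.
        state = ((state << 1) | 1) & mask.get(c, 0)
        if state & accept:
            hits.append(i - m + 1)              # occurrence ending at position i
    return hits

print(shift_and("abracadabra", "abra"))         # -> [0, 7]
```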
29 November 2023 16:15 (Helsinki time) Location: AS3, Maarintie 8 |
Petteri Kaski: The Asymptotic Rank Conjecture and the Set Cover Conjecture are not Both True |
Strassen's asymptotic rank conjecture [Progr. Math. 120 (1994)] claims a strong submultiplicative upper bound on the rank of a three-tensor obtained as an iterated Kronecker product of a constant-size base tensor. The conjecture, if true, most notably would put square matrix multiplication in quadratic time. We note here that some more-or-less unexpected algorithmic results in the area of exponential-time algorithms would also follow. Specifically, we study the so-called set cover conjecture, which states that for any ε>0 there exists a positive integer constant k such that no algorithm solves the k-Set Cover problem in worst-case time O((2-ε)^n |F| poly(n)). The k-Set Cover problem asks, given as input an n-element universe U, a family F of size-at-most-k subsets of U, and a positive integer t, whether there is a subfamily of at most t sets in F whose union is U. The conjecture was formulated by Cygan, Fomin, Kowalik, Lokshtanov, Marx, Pilipczuk, Pilipczuk, and Saurabh in the monograph Parameterized Algorithms [Springer, 2015], but was implicit as a hypothesis already in Cygan, Dell, Lokshtanov, Marx, Nederlof, Okamoto, Paturi, Saurabh, and Wahlström [CCC 2012, TALG 2016], there conjectured to follow from the Strong Exponential Time Hypothesis. We prove that if the asymptotic rank conjecture is true, then the set cover conjecture is false. Using a reduction by Krauthgamer and Trabelsi [STACS 2019], in this scenario we would also get an O((2-δ)^n)-time randomized algorithm for some constant δ>0 for another well-studied problem for which no such algorithm is known, namely that of deciding whether a given $n$-vertex directed graph has a Hamiltonian cycle.
This is joint work with Andreas Björklund (ITU Copenhagen). |
arXiv |
15 November 2023 16:15 (Helsinki time) Location: AS3, Maarintie 8 |
Francesco d’Amore: The Strong Lottery Ticket Hypothesis and the Random Subset Sum Problem |
The Strong Lottery Ticket Hypothesis (SLTH) posits that randomly-initialized neural networks contain subnetworks (strong lottery tickets) that achieve competitive accuracy when compared to sufficiently small target networks, even those that have been trained. Empirical evidence for this phenomenon was first observed by Ramanujan et al. in 2020, spurring a line of theoretical research: Malach et al. (2020), Pensia et al. (2020), da Cunha et al. (2022), and Burkholz (2022) have analytically proved formulations of the SLTH in various neural network classes and under different hypotheses.
In this presentation, we provide an overview of the state-of-the-art theoretical research on the SLTH and its connection with the Random Subset Sum (RSS) problem in theoretical computer science. While previous works on the SLTH ensure that the strong lottery ticket can be obtained via unstructured pruning, we demonstrate how recent advances in the multidimensional generalization of the RSS problem can be leveraged to obtain forms of structured pruning. Additionally, we highlight how refining the RSS results would yield tighter formulations of the SLTH. This presentation is based on a joint work with Arthur da Cunha and Emanuele Natale that will be presented at NeurIPS 2023. |
Paper |
26 April 2023 14:15 (Helsinki time) Online |
Goran Zuzic: Universal optimality in distributed computing and its connections to diverse areas of theoretical computer science |
The modern computation and information processing systems shaping our world have become massively distributed, and a fundamental understanding of distributed algorithmics has never been more important. At the same time, despite 40 years of intense study, we often do not have an adequate understanding of the fundamental barriers that rule out the existence of ultra-fast distributed algorithms. This is true even for the most well-studied problems in computer science---including the shortest path, minimum spanning tree, minimum cut, and many other well-known tasks.
In this talk, I will present a high-level overview of a sequence of papers that give a near-complete answer to the above question. Its culmination is the following pie-in-the-sky result called universal optimality: for all of the tasks mentioned, there exists a single distributed algorithm that, when run on any communication network G, is provably competitive with the fastest algorithm on G. The pursuit of universal optimality has led to the development of many new connections between distributed computing and other seemingly unrelated areas of theoretical computer science. Curiously, these connections have been mutually beneficial and have already led to many breakthroughs not only in distributed computing, but also in the theory of metric embedding, information theory, and oblivious packet routing. I will briefly explore these connections. Location: Virtual (Zoom). Join us in A140 (T4) to watch it together. |
arXiv |
19 April 2023 14:15 (Helsinki time) Location: A140 (T4) |
Rustam Latypov: Optimal Deterministic Massively Parallel Connectivity on Forests |
We show fast deterministic algorithms for fundamental problems on forests in the challenging low-space regime of the well-known Massive Parallel Computation (MPC) model. A recent breakthrough result by Coy and Czumaj [STOC'22] shows that, in this setting, it is possible to deterministically identify connected components on graphs in O(log D + log log n) rounds, where D is the diameter of the graph and n the number of nodes. The authors left open a major question: is it possible to get rid of the additive log log n factor and deterministically identify connected components in a runtime that is completely independent of n?
We answer the above question in the affirmative in the case of forests. We give an algorithm that identifies connected components in O(log D) deterministic rounds. The total memory required is O(n + m) words, where m is the number of edges in the input graph, which is optimal as it is only enough to store the input graph. We complement our upper bound results by showing that Ω(log D) time is necessary even for component-unstable algorithms, conditioned on the widely believed 1 vs. 2 cycles conjecture. Our techniques also yield a deterministic forest-rooting algorithm with the same runtime and memory bounds. Location: A140 (T4) |
|
12 April 2023 14:15 (Helsinki time) Location: A140 (T4) |
Parinya Chalermsook: Algorithms, Extremal Combinatorics, and Rectangles |
In this talk, I will give an overview of the interplay between extremal combinatorics and algorithms in the context of computing maximum independent set in a graph. At some point, we will shift the attention to specific things like rectangle graphs.
Location: A140 (T4) |
|
5 April 2023 14:15 (Helsinki time) Online |
Ming Ding: A Hardness Result for 1-Laplacians and Fast 1-Laplacian Solvers |
1-Laplacians or higher-dimensional Laplacians generalize graph Laplacians to higher-dimensional simplicial complexes and are crucial in computational topology and topological data analysis. It is known that nearly-linear time solvers exist for graph Laplacians. However, nearly-linear time solvers for 1-Laplacians are only known for restricted classes of complexes.
In this talk, I will present 1-Laplacians from two angles. On the lower-bound side, a hardness result for 1-Laplacians shows that linear equations in 1-Laplacians are as hard to solve as general linear equations. More precisely, for any constant c ≥ 1, if we can solve linear equations in 1-Laplacians up to high accuracy in time Õ((# of nonzero coefficients)^c), then we can solve general linear equations with polynomially bounded integer coefficients and condition numbers up to high accuracy in time Õ((# of nonzero coefficients)^c). On the upper-bound side, we generalize existing collapsing-based 1-Laplacian solvers to less restricted classes of complexes. Specifically, we can approximately solve 1-Laplacian systems of a well-shaped simplicial complex with $n$ simplexes up to high precision in time $\tilde{O}(n^{3/2})$. Our solvers are inspired by the Incomplete Nested Dissection designed by Kyng et al. [STOC’2018] for stiffness matrices of well-shaped trusses. Based on joint work with Peng Zhang, Rasmus Kyng, and Maximilian Probst Gutenberg. Location: Virtual (Zoom). Join us at A140 (T4) to watch together |
arXiv |
29 March 2023 14:15 (Helsinki time) Location: A140 (T4) |
Mélanie Cambus: A Parallel Algorithm for (3 + ε)-Approximate Correlation Clustering |
Grouping together similar elements in datasets is a common task in data mining and machine learning. In this talk, we present parallel algorithms for correlation clustering, where each pair of items is labeled either similar or dissimilar. The task is to partition the elements and the objective is to minimize disagreements, that is, the number of dissimilar elements grouped together and similar elements grouped separately.
Our main contribution is a parallel algorithm that achieves a (3 + ε)-approximation to the minimum number of disagreements. Our algorithm builds on the analysis of the PIVOT algorithm by Ailon, Charikar, and Newman that obtains a 3-approximation in the centralized setting. Our design allows us to sparsify the input graph by ignoring a large portion of the nodes and edges without a large extra cost as compared to the analysis of PIVOT. This sparsification makes our technique applicable on several models of massive graph processing, such as Massively Parallel Computing (MPC) and graph streaming, where sparse graphs can typically be handled much more efficiently.
We focus on the linear-memory MPC model, where our approach yields an O(1)-time algorithm whose runtime is independent of ε; the parameter ε only appears in the memory demand.
Location: A140 (T4) Zoom |
arXiv |
22 March 2023 14:15 (Helsinki time) Location: A140 (T4) |
Russell W. F. Lai: Efficient Laconic Cryptography from Learning With Errors |
Laconic cryptography is an emerging paradigm that enables cryptographic primitives with sublinear communication complexity in just two messages. In particular, a two-message protocol between Alice and Bob is called laconic if its communication and computation complexity are essentially independent of the size of Alice's input. This can be thought of as a dual notion of fully-homomorphic encryption, as it enables "Bob-optimized" protocols. This paradigm has led to tremendous progress in recent years. However, all existing constructions of laconic primitives are considered only of theoretical interest: They all rely on non-black-box cryptographic techniques, which are highly impractical.
This work shows that non-black-box techniques are not necessary for basic laconic cryptography primitives. We propose a completely algebraic construction of laconic encryption, a notion that we introduce in this work, which serves as the cornerstone of our framework. We prove that the scheme is secure under the standard Learning With Errors assumption (with polynomial modulus-to-noise ratio). We provide proof-of-concept implementations for the first time for laconic primitives, demonstrating the construction is indeed practical: For a database size of 2^50, encryption and decryption are in the order of single-digit milliseconds.
Laconic encryption can be used as a black box to construct other laconic primitives. Specifically, we show how to construct:
- Laconic oblivious transfer
- Registration-based encryption scheme
- Laconic private-set intersection protocol
All of the above have essentially optimal parameters and similar practical efficiency. Furthermore, our laconic encryption can be preprocessed such that the online encryption step is entirely combinatorial and therefore much more efficient. Using similar techniques, we also obtain identity-based encryption with an unbounded identity space and a tight security proof (in the standard model).
Location: A140 (T4) |
|
15 March 2023 14:15 (Helsinki time) Location: A140 (T4) |
Augusto Modanese: Embedding arbitrary Boolean circuits into fungal automata |
Fungal automata are a variation of the two-dimensional sandpile automaton of Bak, Tang and Wiesenfeld (Phys. Rev. Lett., 1987). In each step, toppling cells emit grains only to *some* of their neighbors, chosen according to a specific update sequence. We show how to embed any Boolean circuit into the initial configuration of a fungal automaton with update sequence $HV$. In particular we give a constructor that, given the description $B$ of a circuit, computes the states of all cells in the finite support of the embedding configuration in $O(\log |B|)$ space. As a consequence, the prediction problem for fungal automata with update sequence $HV$ is $P$-complete. This solves an open problem of Goles et al. (Phys. Lett. A, 2020). This is joint work with Thomas Worsch. Location: A140 (T4) |
arXiv |
8 March 2023 14:15 (Helsinki time) Location: A140 (T4) |
Shreyas Pai: Message Complexity of Distributed Algorithms |
In this talk we will look at the communication cost (or message complexity) of fundamental problems in the distributed CONGEST model. We will address the following question: can we solve problems using sublinear, i.e., $o(m)$, communication, and if so, under what conditions?
In a classical result, Awerbuch, Goldreich, Peleg, and Vainish [JACM 1990] showed that fundamental global problems such as broadcast and spanning tree construction require at least $\Omega(m)$ messages in the KT-1 CONGEST model (i.e., the CONGEST model in which nodes have initial knowledge of their neighbors' IDs) when algorithms are restricted to be comparison-based (i.e., algorithms in which node IDs can only be compared). Twenty-five years after this result, King, Kutten, and Thorup [PODC 2015] showed that one can solve the above problems using $\tilde{O}(n)$ messages ($n$ is the number of nodes in the graph) in $\tilde{O}(n)$ rounds in the KT-1 CONGEST model if non-comparison-based algorithms are permitted. An important implication of this result is that one can use the synchronous nature of the KT-1 CONGEST model, using silence to convey information, and solve any graph problem using non-comparison-based algorithms with $\tilde{O}(n)$ messages, but this takes an exponential number of rounds. In contrast, much less is known about the message complexity of local symmetry breaking problems such as coloring and MIS. We will look at the following results in this talk: Lower bound: In the KT-1 CONGEST model, any comparison-based algorithm, even a randomized Monte-Carlo algorithm with constant success probability, requires $\Omega(n^2)$ messages in the worst case to solve $(\Delta+1)$-coloring, regardless of the number of rounds. Upper bound: In the KT-1 CONGEST model, we present a randomized non-comparison-based $(\Delta+1)$-coloring algorithm that uses $\tilde{O}(n^{1.5})$ messages, while running in $\tilde{O}(D+\sqrt{n})$ rounds, where $D$ is the graph diameter. If time permits, we can also look at some more recent work on message complexity of optimization problems. Based on joint work with Fabien Dufoulon, Gopal Pandurangan, Sriram V. Pemmaraju, and Peter Robinson. Location: A140 (T4) |
|
1 March 2023 14:15 (Helsinki time) Location: A140 (T4) |
Andreas Grigorjew: Width Helps and Hinders Splitting Flows |
Minimum flow decomposition (MFD) is the NP-hard problem of finding a smallest decomposition of a network flow $X$ on directed graph $G$ into weighted source-to-sink paths whose superposition equals $X$. We focus on a common formulation of the problem where the path weights must be non-negative integers and also on a new variant where these weights can be negative.
We show that, for acyclic graphs, considering the \emph{width} of the graph (the minimum number of $s$-$t$ paths needed to cover all of its edges) yields advances in our understanding of its approximability.
For the non-negative version, we show that a popular heuristic is a $O( \log |X|)$-approximation ($|X|$ being the total flow of $X$) on graphs satisfying two properties related to the width (satisfied by e.g., series-parallel graphs),
and strengthen its worst-case approximation ratio from $\Omega(\sqrt{m})$ to $\Omega(m / \log m)$ for sparse graphs, where $m$ is the number of edges in the graph.
For the negative version, we give a $(\lceil \log \Vert X \Vert \rceil +1)$-approximation ($\Vert X \Vert$
being the maximum absolute value of $X$ on any edge) using a power-of-two approach, combined with parity fixing arguments and a decomposition of unitary flows ($\Vert X \Vert \leq 1$) into
at most width-many paths.
We also disprove a conjecture about the linear independence of minimum (non-negative) flow decompositions posed by Kloster et al.
[ALENEX 2018], but show that its useful implication (polynomial-time assignments of weights to a given
set of paths to decompose a flow) holds for the negative version.
Joint work with Manuel Cáceres, Massimo Cairo, Shahbaz Khan, Brendan Mumey, Romeo Rizzi, Alexandru I. Tomescu, and Lucia Williams. (A toy greedy decomposition sketch follows this entry.) Location: A140 (T4) Zoom link |
Arxiv |
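To make the flow decomposition task concrete, here is a toy greedy decomposition in the same spirit as the heuristics discussed above (not necessarily the exact heuristic analyzed in the talk); the graph, flow values, and function name are made up for the example.

```python
from collections import defaultdict

def peel_flow_into_paths(flow, source, sink):
    """Decompose a non-negative integer s-t flow on a DAG into weighted paths.

    flow: dict mapping edge (u, v) -> non-negative integer flow value,
    assumed to satisfy flow conservation at every vertex except source/sink.
    Returns a list of (weight, path) pairs whose superposition equals the flow.
    """
    flow = dict(flow)
    out_edges = defaultdict(list)
    for (u, v) in flow:
        out_edges[u].append(v)

    paths = []
    while any(f > 0 for f in flow.values()):
        # Walk from source to sink, always following an edge that still carries
        # the most remaining flow; conservation guarantees we reach the sink.
        path, u = [source], source
        while u != sink:
            v = max((w for w in out_edges[u] if flow[(u, w)] > 0),
                    key=lambda w: flow[(u, w)])
            path.append(v)
            u = v
        weight = min(flow[(path[i], path[i + 1])] for i in range(len(path) - 1))
        for i in range(len(path) - 1):
            flow[(path[i], path[i + 1])] -= weight
        paths.append((weight, path))
    return paths

# Example: a flow of value 5 split over two s-t paths.
example = {("s", "a"): 3, ("s", "b"): 2, ("a", "t"): 3, ("b", "t"): 2}
print(peel_flow_into_paths(example, "s", "t"))
```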
22nd of February 2023 14:15 (Helsinki time) Location: A133 (T5) |
Philine Schiewe: Algorithms and Hardness for Non-Pool-Based Line Planning |
Line planning, i.e., choosing paths which are operated by one vehicle end-to-end, is an important aspect of public transport planning and can be considered a discrete optimization problem. The goal is to determine a multiset of paths such that each edge of a given graph is covered according to predefined bounds and the costs are minimized. While there exist heuristic procedures for generating lines from scratch, most theoretical observations consider the problem of choosing lines from a predefined line pool, i.e., restricting the solution space by defining a superset of paths to choose from. In this talk, we consider the complexity of the line planning problem when all simple paths can be used as lines. Depending on the cost structure, we show that the problem can be NP-hard even for paths and stars, and that no polynomial-time approximation with sublinear approximation ratio is possible. Additionally, we identify polynomially solvable cases and present a pseudo-polynomial solution approach for trees.
Joint work with Irene Heinrich (TU Darmstadt) and Constantin Seebach (RPTU Kaiserslautern-Landau). Location: A133 (T5) (Note: a different room from the usual one) |
Paper |
15th of February 2023 14:15 (Helsinki time) Online |
Karol Węgrzycki: Parameterized Approximation for Maximum Weight Independent Set of Rectangles and Segments |
In the Maximum Weight Independent Set of Rectangles problem (MWISR) we are given a weighted set of $n$ axis-parallel rectangles in the plane. The task is to find a subset of pairwise non-overlapping rectangles with the maximum possible total weight. This problem is NP-hard, and the best-known polynomial-time approximation algorithm, due to Chalermsook and Walczak (SODA 2021), achieves approximation factor $\mathcal{O}(\log\log(n))$. While constant-factor approximation algorithms are known in the unweighted setting, due to Mitchell (FOCS 2021) and to Gálvez et al. (SODA 2022), it remains open to extend these techniques to the weighted setting.
In this paper, we consider MWISR through the lens of parameterized approximation. Grandoni et al. (ESA 2019) gave a $(1-\varepsilon)$-approximation algorithm with running time $k^{\mathcal{O}(k/\varepsilon^8)} n^{\mathcal{O}(1/\varepsilon^8)}$, where $k$ is the number of rectangles in an optimum solution. Unfortunately, their algorithm works only in the unweighted setting, and they left it as an open problem to give a parameterized approximation scheme in the weighted setting. Our contribution is a partial answer to the open question of Grandoni et al. (ESA 2019). We give a parameterized approximation algorithm for MWISR that, given a parameter $k$, finds a set of non-overlapping rectangles of weight at least $(1-\varepsilon) \omega(\mathrm{opt}_k)$ in $2^{\mathcal{O}(k \log(k/\varepsilon))} n^{\mathcal{O}(1/\varepsilon)}$ time, where $\omega(\mathrm{opt}_k)$ is the maximum weight of a solution of cardinality at most $k$. Note that, consequently, our algorithm may return a solution consisting of more than $k$ rectangles. To complement this apparent weakness, we also propose a parameterized approximation scheme that outputs a solution with cardinality at most $k$ and total weight at least $(1-\varepsilon)\omega(\mathrm{opt}_k)$ for the special case of axis-parallel segments. Joint work with Jana Cslovjecsek and Michał Pilipczuk. Location: Virtual (Zoom). Join us in A140 (T4) to watch it together. Zoom link |
Arxiv |
8th of February 2023 14:15 (Helsinki time) Location: A140 (T4) |
Nidia Obscura Acosta: Improved Pattern-Avoidance Bounds for Greedy BSTs via Matrix Decomposition |
Greedy BST (or simply Greedy) is an online self-adjusting binary search tree defined in the geometric view (Lucas, 1988; Munro, 2000; Demaine, Harmon, Iacono, Kane, Patrascu, SODA 2009). Along with Splay trees (Sleator, Tarjan 1985), Greedy is considered the most promising candidate for being dynamically optimal, i.e., starting with any initial tree, their access costs on any sequence are conjectured to be within an $O(1)$ factor of the offline optimum. However, in the past four decades, the question has remained elusive even for highly restricted inputs.
In this paper, we prove new bounds on the cost of Greedy in the ``pattern avoidance'' regime. Our new results include: (1) The (preorder) traversal conjecture for Greedy holds up to a factor of $O(2^{\alpha(n)})$, improving upon the bound of $2^{\alpha(n)^{O(1)}}$ in (Chalermsook et al., FOCS 2015). This is the best known bound obtained by any online BST. (2) We settle the postorder traversal conjecture for Greedy. (3) The deque conjecture for Greedy holds up to a factor of $O(\alpha(n))$, improving upon the bound $O(2^{\alpha(n)})$ in (Chalermsook et al., WADS 2015). (4) The split conjecture holds for Greedy up to a factor of $O(2^{\alpha(n)})$. Key to all these results is to partition (based on the input structures) the execution log of Greedy into several simpler-to-analyze subsets for which classical forbidden submatrix bounds can be leveraged. This is joint work with Parinya Chalermsook, Manoj Gupta, Wanchote Jiamjitrak, Akash Pareek, and Sorrachai Yingchareonthawornchai. Location: A140 (T4) |
Arxiv |
1st of February 2023 14:15 (Helsinki time) Location: A140 (T4) |
Oumarou Oumarou: Accelerating Quantum Computations of Chemistry Through Regularized Compressed Double Factorization |
We propose the regularized compressed double factorization (RC-DF) method to classically compute compressed representations of molecular Hamiltonians that enable efficient simulation with noisy intermediate-scale quantum (NISQ) and error-corrected quantum algorithms. We find that already for small systems with 12 to 20 qubits, the resulting NISQ measurement scheme reduces the number of measurement bases by roughly a factor of three and the shot count to reach chemical accuracy by a factor of three to six compared to truncated double factorization (DF), and we see order-of-magnitude improvements over Pauli grouping schemes. We demonstrate the scalability of our approach by performing RC-DF on the Cpd I species of cytochrome P450 with 58 orbitals and find that using the resulting compressed Hamiltonian cuts the run time of qubitization and truncated-DF-based error-corrected algorithms almost in half and even outperforms the lambda parameters achievable with tensor hypercontraction (THC), while at the same time reducing the CCSD(T) energy error heuristic by an order of magnitude.
Joint work with Maximilian Scheurer, Robert M. Parrish, Edward G. Hohenstein and Christian Gogolin. Location: A140 (T4) |
Arxiv |
18th of January 2023 14:15 (Helsinki time) Location: A140 (T4) |
Manuel Cáceres: Minimum Path Cover in Parameterized Linear Time |
A minimum path cover (MPC) of a directed acyclic graph (DAG) $G = (V,E)$ is a minimum-size set of paths that together cover all the vertices of the DAG. Computing an MPC is a classical polynomial-time problem, dating back to Dilworth's and Fulkerson's results in the 1950s (a toy illustration of the classical matching-based approach follows this entry). Since the size $k$ of an MPC (also known as the \emph{width}) can be small in practical applications, research has also studied algorithms whose running time is parameterized by $k$.
We obtain a new MPC parameterized algorithm for DAGs running in time $O(k^2|V| + |E|)$. Our algorithm is the first solving the problem in parameterized linear time. Additionally, we obtain an edge sparsification algorithm preserving the width of a DAG but reducing $|E|$ to less than $2|V|$. This algorithm runs in time $O(k^2|V|)$ and requires an MPC of a DAG as input, thus its total running time is the same as the running time of our MPC algorithm. Location: A140 (T4) You can also join via Zoom, but there are no guarantees regarding the audio/video quality. Zoom link |
Arxiv |
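For background, the classical (non-parameterized) route to an MPC goes through Fulkerson's reduction to bipartite matching on the transitive closure; the sketch below illustrates that textbook reduction on a toy DAG and is not the parameterized algorithm from the talk. All names are illustrative.

```python
def minimum_path_cover_size(n, edges):
    """Size of a minimum path cover of a DAG on vertices 0..n-1.

    Classical reduction: build the reachability (transitive closure) relation,
    then MPC size = n - (maximum matching in the bipartite graph that has an
    edge u -> v whenever v is reachable from u by a non-empty path).
    """
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)

    # Reachability by DFS from every vertex (fine for small examples).
    reach = [set() for _ in range(n)]
    for s in range(n):
        stack = list(adj[s])
        while stack:
            v = stack.pop()
            if v not in reach[s]:
                reach[s].add(v)
                stack.extend(adj[v])

    # Kuhn's augmenting-path algorithm for maximum bipartite matching.
    match_to = [-1] * n  # match_to[v] = left vertex matched to the right copy of v

    def try_augment(u, seen):
        for v in reach[u]:
            if v not in seen:
                seen.add(v)
                if match_to[v] == -1 or try_augment(match_to[v], seen):
                    match_to[v] = u
                    return True
        return False

    matching = sum(try_augment(u, set()) for u in range(n))
    return n - matching

# A diamond DAG 0->1, 0->2, 1->3, 2->3 can be covered by 2 paths.
print(minimum_path_cover_size(4, [(0, 1), (0, 2), (1, 3), (2, 3)]))  # 2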
25th of January 2023 |
Break | SODA 2023 | |
11th of January 2023 14:15 (Helsinki time) |
Michael Klooß: Short relaxed range proofs |
We give a brief overview of the relaxed range proofs by Couteau et al.
[CKLR21](https://eprint.iacr.org/2021/540) and
[CGKR22](https://eprint.iacr.org/2022/1153), which are based on square
decomposition. Our focus is on the (batch) proof of shortness and its
relaxed soundness notion ("short rationals") in the latter. The abstract
of [CGKR22] is cited below.
We provide optimized range proofs, called Sharp, in discrete logarithm and hidden order groups, based on square decomposition. In the former setting, we build on the paradigm of Couteau et al. (Eurocrypt ’21) and optimize their range proof (from now on, CKLR) in several ways: (1) We introduce batching via vector commitments and an adapted Σ-protocol. (2) We introduce a new group switching strategy to reduce communication. (3) As repetitions are necessary to instantiate CKLR in standard groups, we provide a novel batch shortness test that allows for cheaper repetitions. The analysis of our test is nontrivial and forms a core technical contribution of our work. For example, for λ = 128 bit security and B = 64 bit ranges for N = 1 (resp. N = 8) proof(s), we reduce the proof size by 34% (resp. 75%) in arbitrary groups, and by 66% (resp. 88%) in groups of order 256-bit, compared to CKLR. As Sharp and CKLR proofs satisfy a “relaxed” notion of security, we show how to enhance their security with one additional hidden order group element. In RSA groups, this reduces the size of state of the art range proofs (Couteau et al., Eurocrypt ’17) by 77% (λ = 128, B = 64, N = 1). Finally, we implement our most optimized range proof. Compared to the state of the art Bulletproofs (Bünz et al., S&P 2018), our benchmarks show a very significant runtime improvement. Eventually, we sketch some applications of our new range proofs. Location: A140 (T4) You can also join via Zoom, but there are no guarantees regarding the audio/video quality. Zoom link |
Paper |
14th of December 14:15 (Helsinki time) |
Tuukka Korhonen: An Improved Parameterized Algorithm for Treewidth |
Treewidth is a graph parameter that, informally, characterizes how tree-like a graph is. Treewidth plays an important role in many areas of computer science and graph theory, in particular generalizing results on trees to graphs of small treewidth.
We give a $2^{O(k^2)} n^{O(1)}$ time algorithm for determining if the treewidth of a given $n$-vertex graph is at most $k$ and outputting the corresponding tree decomposition. This resolves the long-standing open problem of whether there is a $2^{o(k^3)} n^{O(1)}$ time algorithm for treewidth. In particular, this is the first improvement on the dependency on $k$ in fixed-parameter algorithms for treewidth since the $2^{O(k^3)} n^{O(1)}$ time algorithm given in 1991 by Bodlaender and Kloks, and independently, by Lagergren and Arnborg. We also give a $k^{O(k/\varepsilon)} n^{O(1)}$ time ($1+\varepsilon$)-approximation algorithm for treewidth. Prior to our work, no approximation algorithms with approximation ratio less than 2 other than the exact algorithms were known.
This is joint work with Daniel Lokshtanov. Location: A133 (T5) You can also join via Zoom, but there are no guarantees regarding the audio/video quality. Zoom link |
ArXiv |
7th of December 14:15 (Helsinki time) |
Vikas Garg: Provably powerful temporal graph networks |
Temporal graph networks (TGNs) have gained prominence as models for embedding dynamic interactions, but little is known about their theoretical underpinnings. I’ll give an overview of our recent work on the representational power and limits of the two main categories of TGNs: those that aggregate temporal walks (WA-TGNs), and those that augment local message passing with recurrent memory modules (MP-TGNs). Specifically, novel constructions reveal the inadequacy of MP-TGNs and WA-TGNs, proving that neither category subsumes the other. The most powerful MP-TGNs must use injective memory modules and message-passing updates, in which case they become as powerful as a natural temporal extension of 1-WL. Interestingly, it turns out that sufficiently deep MP-TGNs cannot benefit from memory. Our investigations lead to PINT, which is provably more expressive than both MP-TGNs and WA-TGNs.
Based on joint work with Amauri Souza, Diego Mesquita, and Samuel Kaski. Location: A133 (T5) You can also join via Zoom, but there are no guarantees regarding the audio/video quality. Zoom link |
Paper |
30th of November 14:15 (Helsinki time) |
Madhav Krishnan Vijayan: Compilation of algorithm-specific graph states for quantum circuits |
We present a quantum circuit compiler that prepares an algorithm-specific graph state from quantum circuits described in high-level languages, such as Cirq and Q#. The computation can then be implemented using a series of non-Pauli measurements on this graph state. By compiling the graph state directly instead of starting with a standard lattice cluster state and preparing it over the course of the computation, we are able to better understand the resource costs involved and eliminate wasteful Pauli measurements on the actual quantum device. Access to this algorithm-specific graph state also allows for optimisation over locally equivalent graph states to implement the same quantum circuit. The compiler presented here finds ready application in measurement-based quantum computing, NISQ devices and logical-level compilation for fault-tolerant implementations.
Location: Virtual (Zoom). Join us in A133 (T5) to view it together. Zoom link |
ArXiv |
23rd of November 14:15 (Helsinki time) |
Hung Le: Low Treewidth Embeddings of Planar Graphs |
Traditional metric embedding focuses on finding an embedding with small multiplicative distortion. However, the search for this kind of embedding for planar graphs has not been very fruitful. Indeed, most known results for multiplicative embeddings of planar graphs are negative. In this talk, I will describe an embedding with additive distortion into low treewidth graphs, a new type of embedding for planar graphs. I will also discuss several algorithmic applications of this embedding and conclude the talk with several open problems.
Location: Virtual (Zoom). Join us in A133 (T5) to view it together. Zoom link |
ArXiv |
16th of November 14:15 (Helsinki time) |
Akira Takahashi: Two-round Multi-party Signing from Lattices |
Multi-party signatures comprise a class of protocols that allow a group of signers to jointly produce a single signature on the same message. In recent years, a number of such protocols following the Fiat-Shamir paradigm have been proposed in the discrete-log setting, e.g., MuSig2, DWMS, and FROST.
In this talk, I will first go over existing techniques realizing round-efficient Fiat-Shamir multi-party signing. I will then discuss new approaches to constructing lattice-based instantiations, by highlighting the main technical challenges as well as new proof techniques enabled by lattice trapdoors. Our protocols are provably secure in the (classical) random oracle model under the standard lattice-based hardness assumptions, such as SIS and LWE, while retaining only two rounds of interaction during the signing protocol as in the state-of-the-art Schnorr-based constructions. This talk is based on joint work with Cecilia Boschini, Ivan Damgård, Claudio Orlandi, and Mehdi Tibouchi. Location: A133 (T5) You can also join via Zoom, but there are no guarantees regarding the audio/video quality. Zoom link |
Paper 1, Paper 2 |
9th of November 14:15 (Helsinki time) |
Tillmann Miltzow: Training Neural Networks is $\exists\mathbb{R}$-complete |
Given a neural network, training data, and a threshold, it was known that it is NP-hard to find weights for the neural network such that the total error is below the threshold. We determine the algorithmic complexity of this fundamental problem precisely, by showing that it is $\exists\mathbb{R}$-complete. This means that the problem is equivalent, up to polynomial-time reductions, to deciding whether a system of polynomial equations and inequalities with integer coefficients and real unknowns has a solution. If, as widely expected, $\exists\mathbb{R}$ is strictly larger than NP, our work implies that the problem of training neural networks is not even in NP.
Neural networks are usually trained using some variation of backpropagation. The result of this paper offers an explanation of why techniques commonly used to solve big instances of NP-complete problems seem not to be of use for this task. Examples of such techniques are SAT solvers, IP solvers, local search, and dynamic programming, to name a few general ones.
Location: Virtual (Zoom). Join us in A133 (T5) to view it together. Zoom link |
ArXiv |
2nd of November 14:15 (Helsinki time) |
Mikkel Abrahamsen: Online Sorting and Translational Packing of Convex Polygons |
We investigate various online packing problems in which convex polygons arrive one by one and have to be placed irrevocably into a container before the next piece is revealed; the pieces must not be rotated, but only translated. The aim is to minimize the used space depending on the specific problem at hand, e.g., the strip length in strip packing, the number of bins in bin packing, etc. We draw interesting connections to the following online sorting problem OnlineSorting$[\gamma,n]$: We receive a stream of real numbers $s_1,\dots,s_n$, $s_i \in [0,1]$, one by one. Each real must be placed in an array $A$ with $\gamma n$ initially empty cells without knowing the subsequent reals. The goal is to minimize the sum of differences of consecutive reals in $A$. The offline optimum is to place the reals in sorted order, so the cost is at most 1. We show that for any $\Delta$-competitive online algorithm for OnlineSorting$[\gamma,n]$, it holds that $\gamma\Delta \in \Omega(\log n/\log\log n)$. We use this lower bound to prove the non-existence of competitive algorithms for various online translational packing problems of convex polygons, among them strip packing, bin packing and perimeter packing. This also implies that there exists no online algorithm that can pack all streams of pieces of diameter and total area at most $\delta$ into the unit square. These results are in contrast to the case when the pieces are restricted to rectangles, for which competitive algorithms are known. Likewise, the offline versions of packing convex polygons have constant-factor approximation algorithms. On the positive side, we present an algorithm with competitive ratio $O(n^{0.59})$ for online translational strip packing of convex polygons. In the case of OnlineSorting$[C,n]$ for any constant $C>1$, we present a $2^{O(\sqrt{\log n \log\log n})}$-competitive algorithm. Location: Virtual (Zoom). Join us in A133 (T5) to view it together. Zoom link |
ArXiv |
26th of October 14:15 (Helsinki time) |
Kirthivaasan Puniamurthy: Pebbling for proving adaptive security: Yao's garbling scheme |
Secure multi-party computation (MPC) allows several parties to compute a function $f$ on their secret data $x_1,...,x_n$ without any party learning anything except for the output $y = f(x_1,...,x_n)$. Yao's garbling scheme is one of the key techniques in MPC. Informally, garbling enables one to transform a circuit/input pair $(C, x)$ into a "garbled" circuit/input pair $\tilde{C}, \tilde{x}$ while preserving the output, i.e. $C(x) = \tilde{C}(\tilde{x})$.
Roughly speaking, a garbling scheme is secure if there exists an efficient simulator which is given only $y = C(x)$ (and the topology of $C$) and can generate a simulated pair that is indistinguishable from the real garbled pair $\tilde{C}, \tilde{x}$. We refer to this notion of security as selective simulation security. It is selective because $(C, x)$ are garbled simultaneously.
One may instead consider an adaptive setting, where an adversary first obtains the garbled circuit, and only then selects an input to be garbled. Since the input may now depend on the garbled circuit, a proof for selective simulation does not carry over to the adaptive setting. To fix this, we require new proof techniques (and/or new constructions) for proving adaptive security. In this talk we look at a proof technique that involves guessing, which can then be formulated within the language of so called pebbling, a combinatorial game played on graphs.
Location: A133 (T5) You can also join via Zoom, but there are no guarantees regarding the audio/video quality. Zoom link |
|
19th of October 14:15 (Helsinki time) |
Pekka Orponen: Algorithmic Design of 3D Wireframe RNA Polyhedra |
We address the problem of de novo design and synthesis of nucleic acid nanostructures, a challenge that has been considered in the area of DNA nanotechnology since the 1980s and more recently in the area of RNA nanotechnology. Toward this goal, we introduce a general algorithmic design process and software pipeline for rendering 3D wireframe polyhedral nanostructures in single-stranded RNA. To initiate the pipeline, the user creates a model of the desired polyhedron using standard 3D graphic design software. As its output, the pipeline produces an RNA nucleotide sequence whose corresponding RNA primary structure can be transcribed from a DNA template and folded in the laboratory. As case examples, we have designed and characterised experimentally three 3D RNA nanostructures: a tetrahedron, a triangular bipyramid, and a triangular prism.
Joint work with Antti Elonen, Ashwin Natarajan, Ibuki Kawamata, Lukas Oesinghaus, Abdulmelik Mohammed, Jani Seitsonen, Yuki Suzuki, Friedrich Simmel and Anton Kuzyk. Location: A133 (T5) You can also join via Zoom, but there are no guarantees regarding the audio/video quality. Zoom link |
Paper |
12th of October 14:15 (Helsinki time) |
Valerio Cini: Lattice-Based SNARKs: Publicly Verifiable, Preprocessing, and Recursively Composable |
In this talk, we present the first publicly verifiable lattice-based
succinct non-interactive argument of knowledge (SNARK). The construction
stems from a general technical toolkit that we develop to translate
pairing-based schemes to lattice-based ones, and that allows us to
construct the primitive which is at the heart of our SNARK construction:
a new lattice-based vector commitment (VC) scheme supporting openings to
constant-degree multivariate polynomial maps. The security is based on a
new family of lattice-based computational assumptions, which we call
$k$-Ring-Inhomogeneous Short Integer Solution (or $k$-R-ISIS for short),
which naturally generalizes the standard Short Integer Solution (SIS)
assumption.
Location: A133 (T5) You can also join via Zoom, but there are no guarantees regarding the audio/video quality. Zoom link |
Paper |
5th of October 14:15 (Helsinki time) |
Sorrachai Yingchareonthawornchai: Deterministic Small Vertex Connectivity in Almost Linear Time |
In the vertex connectivity problem, given an undirected $n$-vertex
$m$-edge graph, we need to compute the minimum number of vertices
whose removal disconnects the graph. This problem is one
of the most well-studied graph problems. Since 2019, a new line of
work [Nanongkai et al. STOC'19;SODA'20;STOC'21] has used
randomized techniques to break the quadratic-time barrier and, very
recently, culminated in an almost-linear time algorithm via the recently
announced maxflow algorithm by Chen et al. In contrast, all known deterministic algorithms are much slower. The fastest algorithm [Gabow
FOCS'00] takes $O(m(n+\min\{c^{5/2},cn^{3/4}\}))$ time where $c$
is the vertex connectivity. It remains open whether there exists a subquadratic-time deterministic algorithm for any constant
$c>3$. In this talk, we present the first deterministic almost-linear time vertex connectivity algorithm for all constants $c$. Our running time is $m^{1+o(1)}2^{O(c^{2})}$, which is almost-linear for all $c=o(\sqrt{\log n})$. This is the first deterministic algorithm that breaks the $O(n^{2})$-time bound on sparse graphs where $m=O(n)$, a bound that has stood for more than 50 years [Kleitman'69]. Towards our result, we give a new reduction framework to vertex expanders which in turn exploits our new almost-linear time construction of a mimicking network for vertex connectivity. The previous construction by Kratsch and Wahlström [FOCS'12] requires large polynomial time and is randomized. An interesting aspect that allows our overall algorithm to be efficient is to ``lift'' several graph problems to hypergraphs and work directly on hypergraphs. This is joint work with Thatchaphol Saranurak. Location: A133 (T5) You can also join via Zoom, but there are no guarantees regarding the audio/video quality. Zoom link |
|
28th of September 14:15 (Helsinki time) |
Juha Harviainen: Trustworthy Monte Carlo |
Monte Carlo integration is a key technique for designing randomized approximation schemes for counting problems, with applications, e.g., in machine learning and statistical physics. The technique typically enables massively parallel computation, however, with the risk that some of the delegated computations contain spontaneous or adversarial errors. We present an orchestration of the computations such that the outcome is accompanied by a proof of correctness. Specifically, we adopt an algebraic proof system developed in computational complexity theory, in which the proof is represented by a polynomial; evaluating the polynomial at a random point amounts to a verification of the proof with probabilistic guarantees. We give examples of known Monte Carlo estimators that admit verifiable extensions with moderate computational overhead: for the permanent of zero--one matrices, for the model count of disjunctive normal form formulas, and for the gradient of logistic regression models. We also discuss the prospects and challenges of engineering efficient verifiable approximation schemes more generally. (A toy, unverified version of the DNF model-counting estimator is sketched after this entry.)
Location: A133 (T5) You can also join via Zoom, but there are no guarantees regarding the audio/video quality. Zoom link |
|
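As one concrete example of the kind of Monte Carlo estimator mentioned above, here is a minimal Karp-Luby-style estimator for the model count of a DNF formula, without the verification layer discussed in the talk; the clause encoding and all names are illustrative.

```python
import random

def karp_luby_dnf_count(clauses, n_vars, samples=100_000, rng=random):
    """Estimate the number of satisfying assignments of a DNF formula.

    clauses: list of clauses, each a dict {variable index: required bool value}
             (a clause is a conjunction of literals).
    Karp-Luby estimator: sample a clause with probability proportional to the
    number of assignments satisfying it, complete it to a full assignment
    uniformly at random, and count the sample as a success iff the chosen
    clause is the first clause satisfied by that assignment.
    """
    # Number of assignments satisfying each individual clause.
    weights = [2 ** (n_vars - len(c)) for c in clauses]
    total = sum(weights)

    successes = 0
    for _ in range(samples):
        i = rng.choices(range(len(clauses)), weights=weights)[0]
        assignment = {v: rng.random() < 0.5 for v in range(n_vars)}
        assignment.update(clauses[i])  # force clause i to be satisfied
        first = next(j for j, c in enumerate(clauses)
                     if all(assignment[v] == b for v, b in c.items()))
        successes += (first == i)
    return total * successes / samples

# (x0 AND x1) OR (NOT x0): 1 + 2 = 3 satisfying assignments out of 4.
clauses = [{0: True, 1: True}, {0: False}]
print(karp_luby_dnf_count(clauses, n_vars=2))
```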
21st of September 14:15 (Helsinki time) |
Francesco d'Amore: On the multidimensional random subset sum problem |
In the Random Subset Sum Problem, given $n$ i.i.d. random variables $X_1,\dots,X_n$, we wish to approximate any point $z\in [-1,1]$ as the sum of a suitable subset $X_{i_1}(z),\dots,X_{i_s}(z)$ of them, up to error $\varepsilon$. Despite its simple statement, this problem is of fundamental interest to both theoretical computer science and statistical mechanics. More recently, it gained renewed attention for its implications in the theory of Artificial Neural Networks. An obvious multidimensional generalisation of the problem is to consider $n$ i.i.d. $d$-dimensional random vectors, with the objective of approximating every point $z\in [-1,1]^d$. Rather surprisingly, after Lueker's 1998 proof that, in the one-dimensional setting, $n=O(\log 1/\varepsilon)$ samples guarantee the approximation property with high probability, little progress has been made on achieving the above generalisation. In this work, we prove that, in $d$ dimensions, $n=O(d^3 \log 1/\varepsilon \cdot (\log 1/\varepsilon+\log d))$ samples suffice for the approximation property to hold with high probability. (A small brute-force simulation of the one-dimensional statement follows this entry.) Location: A133 (T5) You can also join via Zoom, but there are no guarantees regarding the audio/video quality. Zoom link |
Paper |
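A quick, brute-force illustration of the one-dimensional statement (in the spirit of Lueker's bound), with arbitrarily chosen parameters; it simply checks empirically that a logarithmic number of uniform samples typically suffices to approximate every target on a grid in $[-1,1]$.

```python
import bisect
import itertools
import random

def worst_subset_sum_error(samples, targets):
    """For each target z, find the best subset-sum approximation by brute force
    (exponential in len(samples), so only for tiny illustrative instances) and
    return the largest error over all targets."""
    sums = sorted(sum(c) for r in range(len(samples) + 1)
                  for c in itertools.combinations(samples, r))
    worst = 0.0
    for z in targets:
        i = bisect.bisect_left(sums, z)
        best = min(abs(sums[j] - z) for j in (i - 1, i) if 0 <= j < len(sums))
        worst = max(worst, best)
    return worst

random.seed(0)
eps = 0.05
n = 16                                    # roughly C * log(1/eps) for a small C
xs = [random.uniform(-1, 1) for _ in range(n)]
grid = [k * eps / 2 for k in range(int(-2 / eps), int(2 / eps) + 1)]
grid = [z for z in grid if -1 <= z <= 1]
print("worst error over a grid in [-1,1]:", worst_subset_sum_error(xs, grid))
```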
24th of August 14:15 (Helsinki time) |
Mikaël Rabie: Distributed Recoloring |
In graph theory, a recoloring problem is as follows: given two colorings of a graph, can we go from the first one to the second by changing the color of one node at a time, while keeping the coloring proper at every step?
In 2018, we introduced the Distributed Recoloring question: what happens if we allow an independent set of nodes to be recolored at each step? Can we find a recoloring schedule in an efficient way in the distributed LOCAL model? This question appears to be hard in the general setting. However, different results were found in some specific cases. In this presentation, I will show results from different papers on that topic: recoloring on trees, interval graphs and chordal graphs, with the use of some extra colors. I will also show how to produce a constant-length schedule in bounded-degree graphs with $\Delta+1$ colors, without using any extra color, when the colorings satisfy some nice properties. This last result allows improving the recoloring schedule in the centralized setting (from quadratic to linear). (A small checker for the validity of a recoloring schedule follows this entry.) Those works are the result of collaborations with Marthe Bonamy, Nicolas Bousquet, Laurent Feuilloley, Marc Heinrich, Paul Ouvrard, Jara Uitto and Jukka Suomela. |
ArXiv |
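To make the distributed recoloring model concrete, here is a small, illustrative checker that verifies a proposed recoloring schedule: every step must recolor an independent set of nodes, and every intermediate coloring must stay proper. The graph and schedule are toy examples.

```python
def is_proper(adj, coloring):
    """A coloring is proper if no edge is monochromatic."""
    return all(coloring[u] != coloring[v] for u in adj for v in adj[u])

def check_recoloring_schedule(adj, start, target, steps):
    """Each step is a dict {node: new color}; the recolored nodes of a step
    must form an independent set and every intermediate coloring stays proper."""
    coloring = dict(start)
    if not is_proper(adj, coloring):
        return False
    for step in steps:
        nodes = set(step)
        if any(v in nodes for u in nodes for v in adj[u]):   # not independent
            return False
        coloring.update(step)
        if not is_proper(adj, coloring):
            return False
    return coloring == target

# A path a-b-c, recolored from (1,2,1) to (2,1,2) with 3 colors in three steps.
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
start = {"a": 1, "b": 2, "c": 1}
target = {"a": 2, "b": 1, "c": 2}
steps = [{"a": 3, "c": 3}, {"b": 1}, {"a": 2, "c": 2}]
print(check_recoloring_schedule(adj, start, target, steps))  # True
```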
10th of August 14:15 (Helsinki time) |
Tuukka Korhonen: Fixed-Parameter Tractability of Maximum Colored Path and Beyond |
We introduce a general method for obtaining fixed-parameter algorithms for problems about finding paths in undirected graphs, where the length of the path could be unbounded in the parameter. The first application of our method is as follows. We give a randomized algorithm, that given a colored $n$-vertex undirected graph, vertices $s$ and $t$, and an integer $k$, finds an $(s,t)$-path containing at least $k$ different colors in time $2^k n^{O(1)}$. This is the first FPT algorithm for this problem, and it generalizes the algorithm of Björklund, Husfeldt, and Taslaman [SODA 2012] on finding a path through $k$ specified vertices. It also implies the first $2^k n^{O(1)}$ time algorithm for finding an $(s,t)$-path of length at least $k$. Our method yields FPT algorithms for even more general problems. For example, we consider the problem where the input consists of an $n$-vertex undirected graph $G$, a matroid $M$ whose elements correspond to the vertices of $G$ and which is represented over a finite field of order $q$, a positive integer weight function on the vertices of $G$, two sets of vertices $S,T \subseteq V(G)$, and integers $p,k,w$, and the task is to find $p$ vertex-disjoint paths from $S$ to $T$ so that the union of the vertices of these paths contains an independent set of $M$ of cardinality $k$ and weight $w$, while minimizing the sum of the lengths of the paths. We give a $2^{p+O(k^2 \log (q+k))} n^{O(1)} w$ time randomized algorithm for this problem. | ArXiv |
15th of June 14:15 (Helsinki time) |
Yan Xia: Inferring Multilayer Diffusion Networks in Online Social Media |
Information on social media spreads through an underlying diffusion network that connects people of common interests and opinions. This diffusion network often comprises multiple layers, each capturing the spreading dynamics of a certain type of information characterized by, for example, topic, language, or attitude. Researchers have previously proposed a method for inferring these underlying multilayer diffusion networks from observed spreading patterns, but little is known about how well it performs across the range of realistic spreading data. We introduce an implementation of the multilayer diffusion network inference method that outperforms previous implementations in accuracy, and conduct an extensive series of synthetic data experiments to systematically analyze the performance of the inference method, under varied network structure (e.g. density, number of layers) and information diffusion settings (e.g. cascade size, layer mixing) that are designed to mimic real-world spreading on social media. Our results show extreme performance variation of the inference method: notably, it fails to decompose the diffusion network correctly when most cascades in the data reach a limited audience, or when the ground-truth diffusion network is sparse. Based on the analysis of the current method, we pose challenges toward further understanding and improving multilayer diffusion network inference from both theoretical and applied perspectives. | Paper |
1st of June 14:15 (Helsinki time) |
Sándor Kisfaludi-Bak: Approximations for the Euclidean traveling salesman |
In the Euclidean traveling salesman problem (Euclidean TSP) we are given a set of $n$ points in $d$-dimensional Euclidean space, and the goal is to find the shortest round trip containing all of these points. In this talk we will get an overview of the algorithmic techniques for Euclidean TSP, starting with Arora's approximation scheme. We will then briefly discuss geometric (light) spanners that brought the first round of improvements to the problem, and in the end we will look at the newest approximation scheme that achieves an optimal trade-off between running time and accuracy under reasonable complexity-theoretic assumptions. | ArXiv |
25th of May 14:15 (Helsinki time) |
Darya Melnyk: Online Algorithms with Lookaround |
In this talk, I will introduce a new model of computing that was presented in “Online Algorithms with Lookaround”, called online-LOCAL. In this model, the adversary reveals the nodes of the input graph one by one, in the same way as in classical online algorithms, but for each new node the algorithm can also inspect its radius-$T$ neighborhood before choosing the output. Instead of looking ahead in time, we have the power of looking around in space. This model gives a unifying view of locality in four settings: the LOCAL model of distributed computing, its sequential counterpart SLOCAL, dynamic algorithms, and online algorithms. We show that for LCL problems in paths, cycles, and rooted trees, all four models are roughly equivalent: the locality of any LCL problem falls in the same broad class ($O(\log^* n)$, $\Theta(\log n)$, or $n^{\Theta(1)}$) in all four models. In particular, prior work on the LOCAL model directly generalizes to all four models. Second, we show that this equivalence does not hold in two-dimensional grids. We show that the locality of the 3-coloring problem is $O(\log n)$ in the online-LOCAL model, while it is known to be $\Omega(\sqrt{n})$ in the LOCAL model. |
ArXiv |
11th of May 14:15 (Helsinki time) |
Andrei Voicu Tomut: Enhanced Quantum Autoencoders for Anomaly Detection |
We present an extensive study of Quantum Auto Encoders (QAE) from the theoretical and application perspectives. This is a novel approach to building a quantum autoencoder that makes use of quantum entanglement as a resource to add an extra source of correlation between the compression and decompression process. We develop various conceptually different ideas: we let the encoder and decoder share Bell pairs, we entangle encoder and decoder qubits directly, and we test what happens if we allow training both the encoder and the decoder, contrary to the standard approach. So far, we have encountered an architecture that would provide a statistically significant advantage. We investigate how QAE can be used in the task of anomaly detection for two datasets: breast cancer and credit card transactions. We train QAE architectures solely on the non-anomalous data. Then, given an anomalous datapoint coming from a distribution different from the learnt one, the QAE provides a non-faithful reconstruction, hence indicating the appearance of an anomaly. We judge the reconstruction quality using either the fidelity test (suitable for simulations) or the SWAP test (suitable for both simulations and QPU hardware). We extend previous work on anomaly detection using QAEs by employing for this task, for the first time, the enhanced autoencoder and the patch autoencoder. Results show in some cases up to 91% accuracy in identifying breast cancer and 88% in fraudulent transaction identification. The present method won at QHack 2022 [2] in the following categories: Hybrid Algorithms Challenge, Amazon Braket Challenge (Best Open Hackathon experiment on Braket simulators), Quantum Finance Challenge (third place). [1] https://github.com/XanaduAI/QHack/issues/129 [2] https://medium.com/xanaduai/qhack-2022-cb5ad92573e2 |
|
27th of April 14:15 (Helsinki time) |
Stéphane Deny: Addressing the Topological Defects of Disentanglement via Distributed Operators |
A core challenge in Machine Learning is to learn to disentangle natural factors of variation in data (e.g. object shape vs. pose). A popular approach to disentanglement consists in learning to map each of these factors to distinct subspaces of a model's latent representation. However, this approach has shown limited empirical success to date. Here, we show that, for a broad family of transformations acting on images--encompassing simple affine transformations such as rotations and translations--this approach to disentanglement introduces topological defects (i.e. discontinuities in the encoder). Motivated by classical results from group representation theory, we study an alternative, more flexible approach to disentanglement which relies on distributed latent operators, potentially acting on the entire latent space. We theoretically and empirically demonstrate the effectiveness of this approach to disentangle affine transformations. Our work lays a theoretical foundation for the recent success of a new generation of models using distributed operators for disentanglement. Joint work with Diane Bouchacourt and Mark Ibrahim. | |
20th of April 14:15 (Helsinki time) |
Janne Korhonen: Distributed matrix multiplication - an overview |
In this review talk, we give an overview of distributed matrix multiplication algorithms in fully connected communication models, from 1960s classics to more contemporary work on sparse matrix multiplication. We will then briefly touch on some remaining challenges in the field, and discuss our recent work on uniformly sparse matrix multiplication. | |
6th of April 14:15 (Helsinki time) |
Mika Göös: Collapses, separations, and characterisations |
We discuss a range of results for total NP search problem classes (TFNP) that includes new collapses of classes, black-box separations, and characterisations using propositional proof systems. We will focus on a surprising collapse result, EoPL = PLS ∩ PPAD, and see its full proof. Joint work with Alexandros Hollender, Siddhartha Jain, Gilbert Maystre, William Pires, Robert Robere, and Ran Tao. | |
30th of March 14:15 (Helsinki time) |
Petteri Kaski: The Shortest Even Cycle Problem is Tractable |
Given a directed graph, we show how to efficiently find a shortest (directed, simple) cycle on an even number of vertices. As far as we know, no polynomial-time algorithm was previously known for this problem. In fact, finding any even cycle in a directed graph in polynomial time was open for more than two decades until Robertson, Seymour, and Thomas (Ann. of Math. (2) 1999) and, independently, McCuaig (Electron. J. Combin. 2004; announced jointly at STOC 1997) gave an efficiently testable structural characterisation of even-cycle-free directed graphs. Methodologically, our algorithm relies on algebraic fingerprinting and randomized polynomial identity testing over a finite field, and uses a generating polynomial implicit in Vazirani and Yannakakis (Discrete Appl. Math. 1989) that enumerates weighted cycle covers as a difference of a permanent and a determinant polynomial. The need to work with the permanent is where our main technical contribution occurs. We design a family of finite commutative rings of characteristic 4 that simultaneously (i) give a nondegenerate representation for the generating polynomial identity via the permanent and the determinant, (ii) support efficient permanent computations, and (iii) enable emulation of finite-field arithmetic in characteristic 2. Here our work is foreshadowed by that of Björklund and Husfeldt (SIAM J. Comput. 2019), who used a considerably less efficient ring design to obtain a polynomial-time algorithm for the shortest two disjoint paths problem. Building on work of Gilbert and Tarjan (Numer. Math. 1978) as well as Alon and Yuster (J. ACM 2013), we also show how ideas from the nested dissection technique for solving linear equation systems lead to faster algorithm designs when we have control over the separator structure of the input graph; for example, this happens when the input has bounded genus. Joint work with Andreas Björklund (Lund, Sweden) and Thore Husfeldt (Lund University and Basic Algorithms Research Copenhagen, ITU Copenhagen). |
ArXiv |
23rd of March 14:15 (Helsinki time) |
Fabricio Oliveira: Decision programming: multi-stage optimization under uncertainty |
Multi-stage decision problems under uncertainty can be represented as influence diagrams that are converted into decision trees. These trees can then be solved using dynamic programming, if the optimal strategy within a given branch does not depend on the decisions in other non-overlapping branches. To address these shortfalls, we propose the Decision Programming approach, which can efficiently address this ‘no forgetting’ assumption and retain outcome distribution information. In this approach, we convert problems into equivalent mixed-integer linear programs that can be efficiently solved, including in the presence of multiple objectives, endogenous uncertainty, and other dependency conditions. | ArXiv Paper Code Repository |
16th of March 14:15 (Helsinki time) |
Silvio Lattanzi: Efficient Online and Parallel Correlation Clustering |
Correlation clustering is a central topic in unsupervised learning, with many applications in ML and data mining. In correlation clustering, one receives as input a signed graph and the goal is to partition it to minimize the number of disagreements. In this talk, we analyze the correlation clustering problem in the online and the massively parallel computation (MPC) settings. In the online setting, we show that the celebrated Pivot algorithm performs well when given access to a small number of random samples from the input. For the MPC setting, we propose a new algorithm that uses machines with memory sublinear in the number of nodes in the graph and returns a constant approximation while running only for a constant number of rounds. To the best of our knowledge, our algorithm is the first that can provably approximate a clustering problem on graphs using only a constant number of MPC rounds in the sublinear memory regime. We complement our analysis with an experimental analysis of our techniques. (The basic offline Pivot subroutine is sketched after this entry.) | ArXiv |
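For reference, the offline Pivot algorithm mentioned above is very short; the following sketch (with illustrative data) is the classical 3-approximation of Ailon, Charikar, and Newman, not the sample-based online variant from the talk.

```python
import random

def pivot_correlation_clustering(nodes, positive_edges, rng=random):
    """Classic Pivot algorithm for correlation clustering on complete signed graphs.

    positive_edges: set of frozensets {u, v} labeled '+'; all other pairs
    are implicitly labeled '-'. Returns a list of clusters (sets of nodes).
    """
    remaining = set(nodes)
    clusters = []
    while remaining:
        pivot = rng.choice(sorted(remaining))        # pick a random pivot
        cluster = {pivot} | {v for v in remaining
                             if frozenset((pivot, v)) in positive_edges}
        clusters.append(cluster)
        remaining -= cluster
    return clusters

nodes = ["a", "b", "c", "d"]
positive = {frozenset(p) for p in [("a", "b"), ("b", "c"), ("a", "c")]}
print(pivot_correlation_clustering(nodes, positive))  # e.g. [{'a','b','c'}, {'d'}]
```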
Two Talks 2nd of March 14:15 (Helsinki time) |
Evan E. Dobbs: Constructing Quantum Circuits by Tiling in 3D and LDPC Decoding for Quantum Error Correction using Stochastic Circuit Elements |
We take advantage of the regular layout common in quantum circuits to express them as 3D objects composed of tiles, where each tile is a 3D nearest-neighbor implementation of the Toffoli gate. Using this scheme, we implement the quantum multiplier proposed by Munoz-Coreas and Thapliyal and demonstrate improvements in the number and depth of SWAP gates necessary for nearest-neighbor implementations of this multiplier when compared with automatic routing methods. This scheme is applied prior to compilation and can be used as an intermediate step for both NISQ and error corrected circuits which use the surface code. We also discuss ongoing work on the implementation of a belief propagation decoder for LDPC codes with stochastic circuit elements, using the surface code as an example.
|
|
George Watkins: Lattice Surgery Quantum Error Correction Compiler |
We developed a state-of-the-art, high-performance compiler for surface-code quantum error correction. Our compiler translates any quantum circuit to a fault-tolerant, error-corrected one that follows Lattice Surgery (LS) instructions. We offer our compiler publicly through a web interface, a web service, and as an open-source Python package. In this talk, we will demonstrate the operation of our compiler and the web visualizer for the conversion of a sample quantum circuit. We will also discuss our results on compiling thousands of qubits, and on verifying the correctness of the compiler output. Generally, verifying the correctness is an intractable problem. An example of this will be showcased during our talk. With respect to future work, we will present our preliminary results on implementing massively parallel compilation of very large-scale circuits.
|
||
23rd of February 14:15 (Helsinki time) |
Thaha Mohammed: Sparse Matrix-Vector Multiplication (SpMV) on Graphics Processing Units |
Sparse linear algebra is vital to many fields of science and engineering. The matrix sparsity structures vary widely based on the application domain, and this poses major challenges in obtaining consistently high performance from sparse iterative solvers on GPUs. These challenges include coalesced memory access and load balancing. Because the sparsity structures differ so much, there is no single storage or algorithmic technique that is suitable for all sparse matrices. In this talk we survey basic techniques and recent developments in SpMV computations on GPUs. (A plain CSR kernel, the sequential baseline that GPU variants build on, is sketched after this entry.) | Paper |
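As background for the storage-format discussion, here is a plain CSR (compressed sparse row) matrix-vector product; this is the sequential skeleton that GPU kernels parallelize over rows or nonzeros, and the data in the example is made up.

```python
def csr_spmv(row_ptr, col_idx, values, x):
    """y = A @ x for a sparse matrix A stored in CSR format.

    row_ptr[i]..row_ptr[i+1] delimits the nonzeros of row i, whose column
    indices and values are stored in col_idx and values. On a GPU one
    typically assigns one thread (or warp) per row, which is where load
    balancing and memory coalescing issues arise for irregular patterns.
    """
    n_rows = len(row_ptr) - 1
    y = [0.0] * n_rows
    for i in range(n_rows):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

# A = [[2, 0, 1],
#      [0, 3, 0],
#      [4, 0, 5]]
row_ptr, col_idx = [0, 2, 3, 5], [0, 2, 1, 0, 2]
values, x = [2.0, 1.0, 3.0, 4.0, 5.0], [1.0, 1.0, 1.0]
print(csr_spmv(row_ptr, col_idx, values, x))  # [3.0, 3.0, 9.0]
```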
2nd of February 14:15 (Helsinki time) |
Michal Dory: Fault-Tolerant Labeling and Compact Routing Schemes |
Assume that you have a huge graph, where edges and vertices may fail, and you want to answer questions about this graph quickly, without inspecting the whole graph. A fault-tolerant labeling scheme allows us to do just that. It is a distributed data structure, in which each vertex and edge is assigned a short label, such that given the labels of a pair of vertices $s,t$, and a set of failures $F$, you can answer questions about $s$ and $t$ in $G \setminus F$. For example, determine if $s$ and $t$ are connected in $G \setminus F$, just by looking at their labels. I will discuss a recent work where we show the first fault-tolerant connectivity labeling schemes for general graphs. I will also show applications for fault-tolerant routing. Here the goal is to provide each vertex a small routing table, such that given these tables we can efficiently route messages in the network, even in the presence of failures. Based on a joint work with Merav Parter. |
ArXiv |
19th of January 14:15 (Helsinki time) |
Michael Kapralov: Streaming (Lower Bounds) |
Given a large dataset as a stream of updates, how can we approximate its properties using a very small memory footprint? And what are the limitations imposed by the memory constraints? In this talk we will survey basic techniques and recent developments in graph streaming, with a focus on lower bounds. | |
1st of December Non-standard time 17:15 (Helsinki time) |
Nathan Klein: A (Slightly) Improved Approximation Algorithm for Metric TSP |
I will describe work in which we obtain a randomized $3/2-\epsilon$ approximation algorithm for metric TSP, for some $\epsilon>10^{-36}$. This slightly improves over the classical 3/2 approximation algorithm due to Christofides [1976] and Serdyukov [1978]. Following the approach of Oveis Gharan, Saberi, and Singh [2010], we analyze an algorithm which is a small modification of Christofides-Serdyukov: we obtain a TSP tour by adding a matching to a random spanning tree instead of one of minimum cost. The talk will focus on giving an overview of the key ideas involved to analyze the approximation factor of this algorithm, such as properties of random spanning trees and the structure of near minimum cuts. I will also discuss a recent extension in which we show a $3/2-\epsilon$ bound on the integrality gap of the subtour elimination polytope. This is joint work with Anna Karlin and Shayan Oveis Gharan. |
ArXiv |
24th of November 14:15 (Helsinki time) |
Roie Levin: Random Order Set Cover is as Easy as Offline |
We give a polynomial time algorithm for Online Set Cover with a competitive ratio of $O(\log mn)$ when the elements are revealed in random order, essentially matching the best possible offline bound of $O(\log n)$ and circumventing the $O(\log m \log n)$ lower bound known in adversarial order. We also extend the result to solving pure covering IPs when constraints arrive in random order. The algorithm is a multiplicative-weights-based round-and-solve approach we call Learn-Or-Cover. We maintain a coarse fractional solution that is neither feasible nor monotone increasing, but can nevertheless be rounded online to achieve the claimed guarantee (in the random order model). This gives a new offline algorithm for Set Cover that performs a single pass through the elements, which may be of independent interest. This is joint work with Anupam Gupta and Gregory Kehne. |
|
17th of November 14:15 (Helsinki time) |
Kamyar Khodamoradi: Local Search for (Stable) $k$-Means |
In the classic (discrete) $k$-Median/Means problem, we are given a set of $n$ points in a metric space alongside the parameter $k$, and our task is to choose $k$ of these points as centres such that the cost of connecting all the points to them is minimized. For $k$-Median, the cost of connecting a point to a set of centres is just the distance of the point to the closest centre in the set, and for $k$-Means, this cost is the square of the distance. In this talk, I will focus on a ``promise'' variation of $k$-Median/Means where (1) the metric is a Euclidean space of constant dimension, and (2) the instances of the problem are guaranteed to satisfy some stability constraint. Stable optimization is a relatively young sub-field of optimization and approximation theory (introduced by Bilu and Linial in 2010), and can be seen as a ``beyond the worst-case'' type of analysis. In the context of $k$-Means, an important notion of stability is that of perturbation resilience. An instance is called $\alpha$-perturbation resilient if there is a promise of a unique optimum solution which remains optimum even if the distances are (potentially non-uniformly) stretched by a factor of at most $\alpha$. Stable clustering instances have been studied to explain why heuristics such as Lloyd’s algorithm perform well in practice. In this talk, I will show how $(1 + \varepsilon)$-stable instances of $k$-Means in bounded-dimension Euclidean metrics can be solved in polynomial time, using a simple local search algorithm. (A minimal single-swap local search sketch follows this entry.) To give some background about the work, I will first roughly sketch how local search was applied to the (not necessarily stable) Euclidean instances of the problem (Friggstad et al., FOCS 2016) to obtain a $(1 + \varepsilon)$-approximation. |
|
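A minimal sketch of single-swap local search for discrete $k$-Means in the plane, as background for the talk; the data and the simple improvement rule are illustrative, and the analyses mentioned above rely on more careful multi-swap variants and termination conditions.

```python
import itertools

def kmeans_cost(points, centres):
    """Discrete k-Means cost: squared distance of each point to its nearest centre."""
    return sum(min((px - cx) ** 2 + (py - cy) ** 2 for (cx, cy) in centres)
               for (px, py) in points)

def single_swap_local_search(points, k):
    """Start with an arbitrary set of k input points as centres and repeatedly
    apply an improving swap of one centre for one non-centre point."""
    centres = list(points[:k])
    improved = True
    while improved:
        improved = False
        best = kmeans_cost(points, centres)
        for i, p in itertools.product(range(k), points):
            if p in centres:
                continue
            candidate = centres[:i] + [p] + centres[i + 1:]
            cost = kmeans_cost(points, candidate)
            if cost < best:
                centres, best, improved = candidate, cost, True
    return centres

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
print(single_swap_local_search(pts, k=2))  # one centre per well-separated cluster
```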
10th of November 14:15 (Helsinki time) |
Oren Louidor: A Scaling limit for the Cover Time of the Binary Tree |
We consider a continuous time random walk on the rooted binary tree of depth $n$ with all transition rates equal to one and study its cover time, namely the time until all vertices of the tree have been visited. We prove that, normalized by $2^{n+1} n$ and then centered by $(\log 2) n - \log n$, the cover time admits a weak limit as the depth of the tree tends to infinity. The limiting distribution is identified as that of a Gumbel random variable with rate one, shifted randomly by the logarithm of the sum of the limits of the derivative martingales associated with two negatively correlated discrete Gaussian free fields on the infinite version of the tree. The existence of the limit and its overall form were conjectured in the literature. Our approach is quite different from those taken in earlier works on this subject and relies in great part on a comparison with the extremal landscape of the discrete Gaussian free field on the tree. Joint work with Aser Cortines and Santiago Saglietti. |
ArXiv |
27th of October 14:15 (Helsinki time) |
Rustam Latypov: Memory Efficient Massively Parallel Algorithms for LCL Problems on Trees |
We pioneer the low-space Massively Parallel Computation (MPC) complexity landscape for a family of fundamental graph problems on trees. We give a general method that solves most locally checkable labeling (LCL) problems exponentially faster in the low-space MPC model than in the LOCAL message passing model. In particular, we show that all solvable LCL problems on trees can be solved in $O(\log n)$ time (high-complexity regime) and that all LCL problems on trees with deterministic complexity $n^{o(1)}$ in the LOCAL model can be solved in $O(\log \log n)$ time (mid-complexity regime). We emphasize that we solve LCL problems on constant degree trees, our algorithms are deterministic, and they work in the low-space MPC model, where local memory is $O(n^\delta)$ for $\delta \in (0,1)$ and global memory is $O(m)$. For the high-complexity regime, there are two key ingredients. One is a novel $O(\log n)$ time tree rooting algorithm, which may be of independent interest. The other ingredient is a novel pointer-chain technique and analysis that allows us to solve any solvable LCL problem on trees in $O(\log n)$ time. For the mid-complexity regime, we adapt the approach by Chang and Pettie [FOCS'17], who gave a canonical LOCAL algorithm for solving LCL problems on trees. For the special case of 3-coloring trees, which is a natural LCL problem with LOCAL time complexity $n^{o(1)}$, we show that our analysis is tight, as it matches the conditional $\Omega(\log \log n)$ lower bound for component-stable algorithms. |
|
13th of October 14:15 (Helsinki time) |
Sándor Kisfaludi-Bak: The traveling salesman and the Steiner tree with flat neighborhoods |
In the classic Euclidean traveling salesman problem, we are given $n$ points in the Euclidean plane, and the goal is to find the shortest round trip that visits all the points. The Steiner tree asks for the shortest tree that can connect these points. An interesting variant of these problems is when instead of visiting points, we are asked to visit $n$ "neighborhoods". A neighborhood could be any sort of geometric object; the goal is then to visit at least one point in each object. For example, we could be given a set of affine subspaces (often called flats) in $d$-dimensional Euclidean space, and the goal is to find the shortest round trip or tree that intersects each subspace. We will see how the 2-dimensional problems with line neighborhoods can be tackled, and see how the optimum tour for hyperplane neighborhoods ($(d-1)$-dimensional flats) can be efficiently approximated in higher dimensions. It turns out that these neighborhood problems have a different computational complexity than the classic problems with points: they require a novel approach for the hyperplane case, while the case of lower-dimensional flats remains largely mysterious. |
6th of October Non-standard time 17:15 (Helsinki time) |
Yakov Nekrich: Dynamic Point Location in Optimal Time. |
In the dynamic point location problem we maintain a planar subdivision under insertions and deletions of segments so that for any query point $q$ the polygon containing $q$ can be found efficiently. In the dynamic vertical ray shooting problem we maintain a set of non-intersecting segments on a plane, so that for an arbitrary query point $q$ the highest segment immediately below $q$ can be found efficiently, where the segment immediately below $q$ is the first segment hit by a downward vertical ray from $q$. Vertical ray shooting and point location are closely connected problems: a data structure for vertical ray shooting can be transformed into a data structure for planar point location. In this talk we present a fully-dynamic data structure that supports point location queries in a connected planar subdivision with $n$ edges. Our data structure uses $O(n)$ space, answers queries in $O(\log n)$ time, and supports updates in $O(\log n)$ time. Our solution is based on a data structure for vertical ray shooting queries that supports queries and updates in $O(\log n)$ time. | Paper |
29th of September 14:15 (Helsinki time) |
Noam Touitou: Flow Time Scheduling with Uncertain Processing Time |
We consider the problem of online scheduling on a single machine in order to minimize weighted flow time. The existing algorithms for this problem (STOC '01, SODA '03, FOCS '18) all require exact knowledge of the processing time of each job. This assumption is crucial, as even a slight perturbation of the processing time would lead to a polynomial competitive ratio. However, this assumption very rarely holds in real-life scenarios. In this paper, we present the first algorithm for weighted flow time which does not require exact knowledge of the processing times of jobs. Specifically, we introduce the Scheduling with Predicted Processing Time (SPPT) problem, where the algorithm is given a prediction for the processing time of each job, instead of its real processing time. For the case of a constant-factor distortion between the predictions and the real processing times, our algorithms match all the best known competitiveness bounds for weighted flow time -- namely $O(\log P)$, $O(\log D)$ and $O(\log W)$, where $P, D, W$ are the maximum ratios of processing times, densities, and weights, respectively. For larger errors, the competitiveness of our algorithms degrades gracefully. |
ArXiv |
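To make the objective concrete, here is a minimal offline simulation of the classic highest-density-first rule on jobs with integer release and processing times. It assumes exact processing times and is not the prediction-based algorithm of the paper; it only illustrates what weighted flow time measures.

```python
# Weighted flow time of the preemptive highest-density-first (HDF) schedule.
# Job j is a triple (r, p, w): release time, processing time, weight; its flow
# time is C_j - r_j. The paper's algorithms would be fed predicted p's instead.

def weighted_flow_time_hdf(jobs):
    remaining = {j: p for j, (r, p, w) in enumerate(jobs)}
    completion, t = {}, 0
    while remaining:
        pending = [j for j in remaining if jobs[j][0] <= t]
        if not pending:
            t = min(jobs[j][0] for j in remaining)  # idle until the next release
            continue
        j = max(pending, key=lambda j: jobs[j][2] / jobs[j][1])  # highest density w/p
        remaining[j] -= 1
        t += 1
        if remaining[j] == 0:
            completion[j] = t
            del remaining[j]
    return sum(w * (completion[j] - r) for j, (r, p, w) in enumerate(jobs))

print(weighted_flow_time_hdf([(0, 3, 1), (1, 1, 5), (2, 2, 2)]))  # -> 15
```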
22nd of September 14:15 (Helsinki time) |
Chetan Gupta: Derandomizing Isolation Lemma for Special Classes of Graphs |
The isolation lemma states that if we randomly assign polynomially large weights to the elements of a set $S$, then for any family $F$ of subsets of $S$, the minimum weight subset in $F$ will be unique with high probability. The isolation lemma has found numerous applications in complexity theory since its introduction. In this talk, I will talk about its application to two fundamental problems in complexity theory, perfect matching and directed graph reachability. In the perfect matching problem, we want to find out whether a given graph has a perfect matching or not. The perfect matching problem is known to have a polynomial-time sequential algorithm. In complexity theory, most often, for every problem that has a polynomial-time sequential algorithm, people try to find an efficient parallel algorithm (NC algorithm) for it in order to make progress towards the P vs NC question. But the question of whether there exists an efficient parallel algorithm for perfect matching has remained elusive so far. Efforts to find an efficient parallel algorithm for perfect matching spurred powerful tools like the isolation lemma. It was shown that if we can derandomize the isolation lemma for perfect matchings, that is, if we can efficiently construct polynomial-size weights for the edges of a graph such that the minimum weight perfect matching in the graph is unique, then the perfect matching problem can be solved in SPL, which is a small subclass of NC. In the graph reachability problem, given a directed graph and two vertices, we are asked whether there exists a path from one vertex to the other. Reachability is a complete problem for the complexity class NL, the class of problems that can be solved by nondeterministic Turing machines in logspace. The question of whether reachability can be solved by an unambiguous logspace Turing machine (UL) is still open. It was shown that if we can derandomize the isolation lemma for paths, then reachability can be solved by a UL machine. This talk will explain how we can derandomize the isolation lemma for $O(\log n)$ genus graphs and single-crossing minor free graphs to obtain an SPL bound for bipartite perfect matching and a UL bound for reachability in these two classes. |
Slides ArXiv MFCS |
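A small experiment illustrating the statement of the isolation lemma itself; the talk is about removing the randomness, i.e., constructing such weights deterministically. The family of all k-element subsets is used here purely because it is easy to enumerate.

```python
import itertools, random

# Isolation lemma: with weights drawn uniformly from {1, ..., N}, the
# minimum-weight member of ANY family F of subsets of S is unique with
# probability at least 1 - |S|/N, regardless of how large F is.
# Here S = {0, ..., m-1} and F = all k-element subsets of S.

def min_weight_is_unique(m, k, N, rng):
    w = [rng.randint(1, N) for _ in range(m)]
    totals = sorted(sum(w[i] for i in sub)
                    for sub in itertools.combinations(range(m), k))
    return totals[0] < totals[1]

rng = random.Random(0)
m, k, N, trials = 12, 4, 24, 2000
hits = sum(min_weight_is_unique(m, k, N, rng) for _ in range(trials))
print(f"unique minimum in {hits / trials:.1%} of trials (guarantee: >= {1 - m / N:.0%})")
```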
15th of September 14:15 (Helsinki time) |
Laurent Feuilloley: The Secretary Problem with Independent Sampling |
The secretary problem is a classic online decision problem. In this problem, an adversary first chooses some $n$ numbers, then these numbers are shuffled at random and presented to the player one by one. For each number, the player has two options: discard the number and continue, or keep the number and stop the game. The player wins if she keeps the highest number of the whole set. It is clearly not possible to win all the time: when one decides to stop, there might be a higher number in the rest of the sequence, and when one discards a number, it might actually be the highest of the sequence. But surprisingly, one can win with probability $1/e$. An issue with the secretary problem is that it assumes that the player has absolutely no information about the numbers, which reduces its applicability. A recent research direction is to understand what happens when one knows a distribution, has access to samples, etc. We study a simple such setting for which we prove tight results. Interestingly, some of the lower bound techniques we use are inspired by ideas from distributed graph algorithms. |
ArXiv |
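For background, here is a simulation of the classical wait-then-pick strategy that achieves the $1/e$ bound; the sampling-based variants studied in the talk are not shown.

```python
import math, random

# Classical secretary strategy: observe (and reject) the first n/e numbers,
# then accept the first number that beats everything seen so far.

def classical_secretary(values, rng):
    order = values[:]
    rng.shuffle(order)                     # adversarial values, random order
    cutoff = int(len(order) / math.e)
    best_seen = max(order[:cutoff], default=float("-inf"))
    for v in order[cutoff:]:
        if v > best_seen:                  # stop the game here
            return v == max(values)        # did we pick the overall maximum?
    return False                           # never stopped, hence lost

rng = random.Random(0)
n, trials = 100, 20000
wins = sum(classical_secretary(list(range(n)), rng) for _ in range(trials))
print(f"win rate {wins / trials:.3f}  vs  1/e = {1 / math.e:.3f}")
```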
8th of September Non-standard time 17:00 (Helsinki time) |
Jan van den Brand: Unifying Matrix Data Structures - Simplifying and Speeding up Iterative Algorithms |
Many algorithms use data structures that maintain properties of matrices undergoing some changes. The applications are wide-ranging and include for example matchings, shortest paths, linear programming, semi-definite programming, convex hull and volume computation. Given the wide range of applications, the exact property these data structures must maintain varies from one application to another, forcing algorithm designers to invent them from scratch or modify existing ones. Thus it is not surprising that these data structures and their proofs are usually tailor-made for their specific application and that maintaining more complicated properties results in more complicated proofs. In this talk, I present a unifying framework that captures a wide range of these data structures. The simplicity of this framework allows us to give short proofs for many existing data structures regardless of how complicated the property to be maintained is. We also show how the framework can be used to speed up existing iterative algorithms, such as the simplex algorithm. | ArXiv
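One concrete instance of the kind of invariant such data structures maintain is keeping a matrix inverse current under rank-one updates via the Sherman-Morrison identity. The sketch below is only an illustrative primitive, not the unifying framework of the talk.

```python
import numpy as np

# Rank-one update of an inverse:
#   (A + u v^T)^{-1} = A^{-1} - (A^{-1} u)(v^T A^{-1}) / (1 + v^T A^{-1} u),
# refreshing A^{-1} in O(n^2) instead of O(n^3) for recomputation from scratch.

def sherman_morrison_update(A_inv, u, v):
    Au = A_inv @ u
    vA = v @ A_inv
    return A_inv - np.outer(Au, vA) / (1.0 + v @ Au)

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned example
A_inv = np.linalg.inv(A)
u, v = rng.standard_normal(n), rng.standard_normal(n)

updated = sherman_morrison_update(A_inv, u, v)
print(np.allclose(updated, np.linalg.inv(A + np.outer(u, v))))  # True
```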
21st of July 14:15 (Helsinki time) |
Iftach Haitner: A Tight Lower Bound on Adaptively Secure Full-Information Coin Flip |
In a distributed coin-flipping protocol [Blum, ACM Transactions on Computer Systems '83], the parties try to output a common (close to) uniform bit, even when some adversarially chosen parties try to bias the common output. In an adaptively secure full-information coin flip [Ben-Or and Linial, FOCS '85], the parties communicate over a broadcast channel and a computationally unbounded adversary can choose which parties to corrupt during the protocol execution. Ben-Or and Linial proved that the n-party majority protocol is resilient to $O(\sqrt{n})$ corruptions (ignoring poly-logarithmic factors), and conjectured this is a tight upper bound for any n-party protocol (of any round complexity). Their conjecture was proved to be correct for a limited variant of single-turn (each party sends a single message), single-bit (a message is one bit) protocols [Lichtenstein, Linial, and Saks, Combinatorica '89], for symmetric protocols [Goldwasser, Kalai, and Park, ICALP '15], and recently for (arbitrary message length) single-turn protocols [Kalai, Komargodski, and Raz, DISC '18]. Yet, the question for many-turn (even single-bit) protocols was left completely open. In this work we close the above gap, proving that no n-party protocol (of any round complexity) is resilient to $\omega(\sqrt{n})$ (adaptive) corruptions. That is, up to polylog factors, the single-bit, single-message majority protocol is the optimal protocol against adaptive corruptions. Joint work with Yonatan Karidi-Heller. |
ArXiv |
30th of June 14:15 (Helsinki time) |
Bernhard Haeupler: The Quest for Universally-Optimal Distributed Algorithms |
Many distributed optimization algorithms achieve an existentially-optimal round complexity of $\tilde{O}(\sqrt{n} + D)$, i.e., there exists some pathological worst-case topology on which no algorithm can be faster. However, most networks of interest allow for exponentially faster algorithms. This motivates two questions: (1) What network topology parameters determine the complexity of distributed optimization? (2) Are there universally-optimal algorithms that are as fast as possible on every single topology? This talk provides an overview over the freshly-completed 6-year program that affirmatively resolves these 25-year-old open problems for a wide class of global network optimization problems including MST, $(1+\epsilon)$-min cut, various approximate shortest path problems, sub-graph connectivity, etc. In particular, we provide several equivalent graph parameters that are tight universal lower bounds for the above problems, fully characterizing their inherent complexity. We also give the first universally-optimal algorithms approximately achieving this complexity on every topology. The quest for universally-optimal distributed algorithms required novel techniques that also answer fundamental (open) questions in seemingly unrelated fields, such as network information theory, approximation algorithms, (oblivious) packet routing, (algorithmic & topological) graph theory, and metric embeddings. Generally, the problems addressed in these fields explicitly or implicitly ask to jointly optimize $\ell_\infty$ & $\ell_1$ parameters such as congestion & dilation, communication rate & delay, capacities & diameters of subnetworks, or the makespan of packet routings. In particular, results obtained on the way include the following firsts: (Congestion+Dilation)-Competitive Oblivious Routing, Network Coding Gaps for Completion-Times, Hop-Constrained Expanders & Expander Decompositions, Bi-Criteria (Online / Demand-Robust) Approximation Algorithms for many Diameter-Constrained Network Design Problems (e.g., (Group) Steiner Tree/Forest), Makespan-Competitive (Compact and Distributed) Routing Tables, and (Probabilistic) Tree Embeddings for Hop-Constrained Distances. (joint work with Mohsen Ghaffari, Goran Zuzic, Ellis D. Hershkowitz, David Wajc, Jason Li, Harald Raecke, Taisuke Izumi) |
ArXiv |
16th of June 14:15 (Helsinki time) |
Yael Hitron: Distributed CONGEST Algorithms Against Adversarial Edges |
In this talk, I will present broadcast algorithms in the adversarial CONGEST model. In this model, there is an adaptive byzantine adversary that controls some edges (or vertices) $F$ in the input communication graph. The adversary sees the entire communication graph, as well as the random coins of the nodes, while maliciously manipulating the messages sent through the $F$ edges (unknown to the nodes). The communication occurs as in the standard CONGEST model. Our two key results for $n$-node graphs of diameter $D$ are as follows: 1. For $|F|=1$, there is a deterministic broadcast algorithm that runs in $\widetilde{O}(D^2)$ rounds, provided that the graph is $3$ edge-connected (which is necessary). This round complexity beats the natural barrier of $O(D^3)$ rounds, the existential lower bound on the maximal length of $3$ edge-disjoint paths between a given pair of nodes in $G$. This algorithm can be extended to an $\widetilde{O}(D^{O(t)})$-round algorithm against $|F|=t$ adversarial edges in $(2t+1)$ edge-connected graphs. 2. For expander graphs with edge-connectivity $t=\Omega(\log n)$, we provide a considerably improved broadcast algorithm with $O(t' \log^2 n)$ rounds, provided that the adversary controls at most $t'=O(\sqrt{t/\log n})$ edges. I will also present some concrete open problems. This is joint work with Merav Parter. |
|
9th of June 14:15 (Helsinki time) |
Krzysztof Nowicki: A Deterministic Algorithm for the MST Problem in Constant Rounds of Congested Clique |
In this talk I'll present a simple deterministic algorithm for the MST problem that needs only $O(1)$ rounds of Congested Clique. The algorithm can also be applied in the MPC model, as long as the local memory of a single machine is $O(n)$. The key technical contribution is an algorithm for the maximal (in the sense of inclusion) spanning forest that does not rely on any randomized techniques (e.g., graph sketches, 1-out sampling, uniform sampling) used to obtain all previous $o(\log \log n)$ round algorithms. | ArXiv
19th of May 14:15 (Helsinki time) |
Anton Bernshteyn: Distributed algorithms and continuous combinatorics |
The topic of this talk is a recently discovered connection between two seemingly disparate fields: distributed computing and descriptive combinatorics. Distributed computing is the area of computer science concerned with problems that can be solved efficiently by a (finite) decentralized network of processors. Descriptive combinatorics, on the other hand, studies combinatorial problems on infinite graphs under additional topological or measure-theoretic regularity constraints and is largely motivated by questions in logic and ergodic theory. It turns out that these two areas are closely related to each other, and there are formal ways of translating results from one context to the other one. In particular --- and this is the main focus of this talk --- a local coloring problem can be solved by an efficient deterministic distributed algorithm if and only if it admits a continuous solution. | ArXiv |
12th of May 14:15 (Helsinki time) |
Joel Rybicki: Fast Population Protocols on Graphs |
In the stochastic population protocol model, a collection of indistinguishable, resource-limited nodes reside on a graph and collectively solve tasks via pairwise interactions. In each interaction, two randomly chosen neighbours first read each other's states, and then update their local states. A recent line of research has established trade-offs between time and space complexity for various fundamental tasks, such as majority and leader election, when the underlying interaction graph is a clique. In this talk, we consider the more general setting of arbitrary interaction graphs and introduce a technique for simulating protocols designed for fully-connected networks in any connected regular graph. The technique allows us to automatically transfer synchronous algorithms designed for the clique into the asynchronous setting on regular graphs. As an application of this result, we obtain fast and space-efficient population protocols for leader election and exact majority on graphs with good expansion properties. The talk is based on joint work with Dan Alistarh and Rati Gelashvili. |
ArXiv |
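To illustrate the interaction model, here is a toy simulation of a textbook four-state exact-majority protocol under the random pairwise scheduler on a clique, the classical setting whose protocols the talk's simulation technique transfers to regular graphs. It is not the construction from the paper.

```python
import random

# Four-state exact majority: 'A'/'B' are strong opinions, 'a'/'b' weak ones.
# Strong opposites cancel to weak; a surviving strong opinion recruits weak agents.
RULES = {('A', 'B'): ('a', 'b'), ('B', 'A'): ('b', 'a'),
         ('A', 'b'): ('A', 'a'), ('b', 'A'): ('a', 'A'),
         ('B', 'a'): ('B', 'b'), ('a', 'B'): ('b', 'B')}

def majority_on_clique(n_a, n_b, rng, max_steps=10**6):
    state = ['A'] * n_a + ['B'] * n_b
    for step in range(1, max_steps + 1):
        u, v = rng.sample(range(len(state)), 2)        # a random interacting pair
        state[u], state[v] = RULES.get((state[u], state[v]), (state[u], state[v]))
        opinions = {s.upper() for s in state}
        if len(opinions) == 1:                          # consensus reached
            return opinions.pop(), step
    return None, max_steps

print(majority_on_clique(12, 8, random.Random(1)))     # ('A', <interaction count>)
```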
5th of May 14:15 (Helsinki time) |
Tuomas Hakoniemi: Feasible interpolation for algebraic proof systems |
Feasible interpolation is a framework introduced by Krajíček in the 90s for extracting computational content from proofs in formal proof systems. Although originally conceived as a means to prove lower bounds on the size of proofs from lower bounds on different computational models, the framework also finds non-trivial applications in the opposite direction. In this talk we consider two algebraic proof systems, Polynomial Calculus and Sums-of-Squares, and give a form of feasible interpolation for these systems. These systems can be used to show the infeasibility of a set of polynomial constraints, and with a suitable encoding of clauses into polynomial constraints, the systems can be used as refutation systems for CNFs. Our results do not yield any new unconditional lower bounds for the proof systems. However, as a consequence of feasible interpolation for Sums-of-Squares, we obtain a novel polynomial time algorithm for separating $(k-1)$-colorable graphs from those containing a $k$-clique. | Paper
28th of April 14:15 (Helsinki time) |
Christoph Grunau: A gap in the Randomized Local Computation Complexity Landscape |
The Local Computation Algorithm (LCA) model is a popular model in the field of sublinear-time algorithms that measures the complexity of an algorithm by the number of probes the algorithm makes in the neighborhood of one node to determine that node’s output. In this talk, I show that every randomized LCA algorithm for a locally checkable problem with a probe complexity of $o(\sqrt{\log n})$ can be turned into a deterministic LCA algorithm with a probe complexity of $O(\log^* n)$. I also show how additional ideas lead to an $\Omega(\log n)$ lower bound for the Lovasz Local Lemma (LLL) on constant degree graphs. This talk is based on joint work with Sebastian Brandt and Vasek Rozhon. | ArXiv
21st of April 14:15 (Helsinki time) |
Sorrachai Yingchareonthawornchai: Vertex Connectivity in Poly-logarithmic Max-flows |
The vertex connectivity of an $m$-edge $n$-vertex undirected graph is the smallest number of vertices whose removal disconnects the graph, or leaves only a singleton vertex. In this paper, we give a reduction from the vertex connectivity problem to a set of maxflow instances. Using this reduction, we can solve vertex connectivity in $\widetilde{O}(m^{\alpha})$ time for any $\alpha \geq 1$, if there is an $m^{\alpha}$-time maxflow algorithm. Using the current best maxflow algorithm that runs in $m^{4/3+o(1)}$ time (Kathuria, Liu and Sidford, FOCS 2020), this yields an $m^{4/3+o(1)}$-time vertex connectivity algorithm. This is the first improvement in the running time of the vertex connectivity problem in over 20 years, the previous best being an $\widetilde{O}(mn)$-time algorithm due to Henzinger, Rao, and Gabow (FOCS 1996). Indeed, no algorithm with an $o(mn)$ running time was known before our work, even if we assume an $\widetilde{O}(m)$-time maxflow algorithm. |
ArXiv |
14th of April 14:15 (Helsinki time) |
Yuval Gil: Twenty-Two New Approximate Proof Labeling Schemes |
Introduced by Korman, Kutten, and Peleg (Distributed Computing 2005), a proof labeling scheme
(PLS) is a system dedicated to verifying that a given configuration graph satisfies a certain property.
It is composed of a centralized prover, whose role is to generate a proof for yes-instances in the
form of an assignment of labels to the nodes, and a distributed verifier, whose role is to verify the
validity of the proof by local means and accept it if and only if the property is satisfied. To overcome
lower bounds on the label size of PLSs for certain graph properties, Censor-Hillel, Paz, and Perry
(SIROCCO 2017) introduced the notion of an approximate proof labeling scheme (APLS) that allows
the verifier to accept also some no-instances as long as they are not "too far" from satisfying the
property. The goal of the current paper is to advance our understanding of the power and limitations of APLSs. To this end, we formulate the notion of APLSs in terms of distributed graph optimization problems (OptDGPs) and develop two generic methods for the design of APLSs. These methods are then applied to various classic OptDGPs, obtaining twenty-two new APLSs. An appealing characteristic of our APLSs is that they are all sequentially efficient in the sense that both the prover and the verifier are required to run in (sequential) polynomial time. On the negative side, we establish “combinatorial” lower bounds on the label size for some of the aforementioned OptDGPs that demonstrate the optimality of our corresponding APLSs. For other OptDGPs, we establish conditional lower bounds that exploit the sequential efficiency of the verifier alone (under the assumption that NP ≠ co-NP) or that of both the verifier and the prover (under the assumption that P ≠ NP, with and without the unique games conjecture). |
ArXiv |
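As background, here is a classic toy proof labeling scheme, certifying that a graph is connected (equivalently, that it contains a spanning tree) via root and distance labels. It shows the prover/verifier split in a few lines of Python; it is not one of the twenty-two schemes of the paper.

```python
from collections import deque

# Prover: label every node with (root, BFS distance to the root).
# Verifier at node v, looking only at v's and its neighbours' labels, checks that
# everyone agrees on the root, the root has distance 0, and every other node has
# a neighbour exactly one step closer. O(log n)-bit certificates suffice.

def prover(adj, root):
    dist, queue = {root: 0}, deque([root])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return {v: (root, dist[v]) for v in adj}

def verifier(adj, cert):
    for v in adj:
        r, d = cert[v]
        if any(cert[u][0] != r for u in adj[v]):
            return False
        if v == r:
            if d != 0:
                return False
        elif d <= 0 or not any(cert[u][1] == d - 1 for u in adj[v]):
            return False
    return True

adj = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3]}
print(verifier(adj, prover(adj, 1)))   # True
```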
7th of April 14:15 (Helsinki time) |
Václav Rozhoň: The ID Graph Trick and its Applications to Lower Bounds in the LOCAL and LCA Model |
We will talk about the so-called ID graph: this is a new trick useful for proving lower bounds in distributed models of computation.
We will show how it can be used to simplify the famous lower bound for the Lovász Local Lemma in the distributed LOCAL model and then we will discuss how it can be used to prove a lower bound for the same problem in the Local Computation Algorithm model. We also discuss the original usage of the trick: to translate results from the field of descriptive combinatorics to distributed algorithms. Joint work with Sebastian Brandt, Jan Grebík, Christoph Grunau |
ArXiv |
31st of March 14:15 (Helsinki time) |
Juho Hirvonen: Classifying Convergence Complexity of Nash Equilibria in Graphical Games Using Distributed Computing Theory |
Graphical games are a framework for modelling games where an underlying graph determines how the actions of players affect each other. The utility of each player depends only on its own strategy and the strategy of its neighbours. In this context many fundamental equilibrium concepts are localised: for example, the strategies of the players form a Nash equilibrium if and only if no player can unilaterally improve its current strategy with respect to its neighbours. Systems modelled as graphical games can also be naturally seen as distributed systems. Assuming that a system is converging to a Nash equilibrium, it is implicitly solving the corresponding computational task of finding a Nash equilibrium. Since Nash equilibria of graphical games are localised, finding one is a locally verifiable task. This family of problems has been studied widely in the theory of distributed graph algorithms, and it is therefore possible to use distributed complexity theory to understand and classify the equilibria of graphical games. I will present our work formalising this connection. We give examples of basic graphical games and show how theory of distributed computing can be used to understand them. |
Paper |
17th of March 14:15 (Helsinki time) |
Andras Papp: Computational Aspects of Financial Networks |
We study financial networks with debt contracts and credit default swaps between specific pairs of banks. Given such a financial system, we want to decide which of the banks are in default, and how much of their liabilities these defaulting banks can pay. There can easily be multiple different solutions to this problem, leading to a situation of default ambiguity and a range of possible solutions for a financial authority to implement. We first study these financial networks from a game-theory perspective, analyzing the incentives of banks, and some simple operations they could execute in order to influence the outcome to their advantage. We show that if multiple banks try to execute these operations simultaneously, then this can easily result in classical game theoretic situations like the prisoner's dilemma or the dollar auction game. We then analyze these financial networks in a sequential model where banks announce their default one at a time, and the system evolves in a step-by-step manner. We show that such a dynamic process can go on indefinitely, or last for an exponential number of steps before an eventual stabilization. We also study how the ordering of announcements affects the final outcome, and show that even for a single bank, finding the best time to announce a default is already an NP-hard problem. |
Paper Paper |
10th of March 14:15 (Helsinki time) |
Petteri Kaski: Error-Correcting and Verifiable Parallel Inference in Graphical Models |
We present a novel framework for parallel exact inference in graphical models. Our framework supports error-correction during inference and enables fast verification that the result of inference is correct, with probabilistic soundness. The computational complexity of inference essentially matches the cost of w-cutset conditioning, a known generalization of Pearl's classical loop-cutset conditioning for inference. Verifying the result for correctness can be done with as little as essentially the square root of the cost of inference. Our main technical contribution amounts to designing a low-degree polynomial extension of the cutset approach, and then reducing to a univariate polynomial employing techniques recently developed for noninteractive probabilistic proof systems. This is joint work with Negin Karimi (Aalto University) and Mikko Koivisto (University of Helsinki). | Paper |
24th of February 14:15 (Helsinki time) |
Michele Scquizzato: Tight Bounds for Parallel Paging and Green Paging |
In the parallel paging problem, there are $p$ processors that share a cache of size $k$. The goal is to partition the cache among the processors over time in order to minimize their average completion time. For this long-standing open problem, we give tight upper and lower bounds of $\Theta(\log p)$ on the competitive ratio with $O(1)$ resource augmentation. A key idea in both our algorithms and lower bounds is to relate the problem of parallel paging to the seemingly unrelated problem of green paging. In green paging, there is an energy-optimized processor that can temporarily turn off one or more of its cache banks (thereby reducing power consumption), so that the cache size varies between a maximum size $k$ and a minimum size $k/p$. The goal is to minimize the total energy consumed by the computation, which is proportional to the integral of the cache size over time. We show that any efficient solution to green paging can be converted into an efficient solution to parallel paging, and that any lower bound for green paging can be converted into a lower bound for parallel paging, in both cases in a black-box fashion. We then show that, with $O(1)$ resource augmentation, the optimal competitive ratio for deterministic online green paging is $\Theta(\log p)$, which, in turn, implies the same bounds for deterministic online parallel paging. |
Paper |
17th of February 14:15 (Helsinki time) |
Salwa Faour: Approximating Bipartite Minimum Vertex Cover in the CONGEST Model |
In this talk, I present how one can achieve efficient distributed algorithms for the minimum vertex cover problem in bipartite graphs in the CONGEST model for both the randomized and deterministic setting. This is based on a joint work with Fabian Kuhn (University of Freiburg). From Kőnig’s theorem, it is well known that in bipartite graphs the size of a minimum vertex cover is equal to the size of a maximum matching. First, we show that together with an existing $O(n \log n)$-round algorithm for computing a maximum matching, the constructive proof of Kőnig's theorem directly leads to a deterministic $O(n \log n)$-round CONGEST algorithm for computing a minimum vertex cover. We then show that by adapting the construction, we can also convert an approximate maximum matching into an approximate minimum vertex cover. Given a $(1-\delta)$-approximate matching for some $\delta < 1$, we show that a $(1+O(\delta))$-approximate vertex cover can be computed in time $O(D+\mathrm{poly}(\frac{\log n}{\delta}))$, where $D$ is the diameter of the graph. Finally, when combined with known graph clustering techniques, for any $\epsilon \in (0,1]$, this leads to a $\mathrm{poly}(\frac{\log n}{\epsilon})$-time deterministic and also to a slightly faster and simpler randomized $O(\frac{\log n}{\epsilon^3})$-round CONGEST algorithm for computing a $(1+\epsilon)$-approximate vertex cover in bipartite graphs. For constant $\epsilon$, the randomized time complexity matches the $\Omega(\log n)$ lower bound (Göös, Suomela 2014) for computing a $(1+\epsilon)$-approximate vertex cover in bipartite graphs even in the LOCAL model. Our results are also in contrast to the situation in general graphs, where it is known that computing an optimal vertex cover requires $\tilde{\Omega}(n^2)$ rounds in the CONGEST model and where it is not even known how to compute any $(2-\epsilon)$-approximation in time $o(n^2)$. |
Paper |
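The constructive step behind Kőnig's theorem that the talk turns into a CONGEST algorithm can be sketched centrally as follows: starting from a maximum matching, the set Z of vertices reachable from unmatched left vertices along alternating paths yields the cover (L \ Z) ∪ (R ∩ Z). This is a sequential toy version, not the distributed algorithm.

```python
# Maximum bipartite matching (Kuhn's augmenting paths) plus Koenig's construction.

def max_matching(left, adj):
    match_r, match_l = {}, {}
    def try_augment(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                if v not in match_r or try_augment(match_r[v], seen):
                    match_r[v], match_l[u] = u, v
                    return True
        return False
    for u in left:
        try_augment(u, set())
    return match_l, match_r

def koenig_cover(left, right, adj):
    match_l, match_r = max_matching(left, adj)
    z_left = {u for u in left if u not in match_l}   # unmatched left vertices
    z_right, frontier = set(), set(z_left)
    while frontier:                                   # alternating BFS
        new_right = {v for u in frontier for v in adj[u]
                     if v not in z_right and match_l.get(u) != v}  # non-matching edges
        z_right |= new_right
        frontier = {match_r[v] for v in new_right if v in match_r} - z_left
        z_left |= frontier                            # matching edges back to the left
    return (set(left) - z_left) | (set(right) & z_right)

left, right = ['a', 'b', 'c'], [1, 2]
adj = {'a': [1], 'b': [1, 2], 'c': [2]}
print(koenig_cover(left, right, adj))   # {1, 2}: a minimum vertex cover of size 2
```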
3rd of February 14:15 (Helsinki time) |
Orr Fischer: A Distributed Algorithm for Directed Minimum-Weight Spanning Tree |
In the directed minimum spanning tree problem (DMST, also called minimum weight arborescence), the network is given a root node $r$, and needs to construct a minimum-weight directed spanning tree, rooted at $r$ and oriented outwards. In this paper we present the first sub-quadratic DMST algorithms in the distributed CONGEST network model, where the messages exchanged between the network nodes are bounded in size. We consider three versions: a model where the communication links are bidirectional but can have different weights in the two directions; a model where communication is unidirectional; and the Congested Clique model, where all nodes can communicate directly with each other. Our algorithm is based on a variant of Lovasz' DMST algorithm for the PRAM model, and uses a distributed single-source shortest-path (SSSP) algorithm for directed graphs as a black box. In the bidirectional CONGEST model, our algorithm has roughly the same running time as the SSSP algorithm; using the state-of-the-art SSSP algorithm, we obtain a running time of $\widetilde{O}(n^{1/2}D^{1/4} + D)$ rounds for the bidirectional communication case. For the unidirectional communication model we give an $\widetilde{O}(n)$ algorithm, and show that it is nearly optimal. And finally, for the Congested Clique, our algorithm again matches the best known SSSP algorithm: it runs in $\widetilde{O}(n^{1/3})$ rounds. On the negative side, we adapt an observation of Chechik in the sequential setting to show that in all three models, the DMST problem is at least as hard as the $(s,t)$-shortest path problem. Thus, in terms of round complexity, distributed DMST lies between single-source shortest path and $(s,t)$-shortest path. |
Paper |
20th of January 14:15 (Helsinki time) |
Vanni Noferini: Walk this way |
In complex network analysis, centrality measures are nonnegative valued functions defined on the nodes of a graph that measure the relative importance of each node. Applications include, for instance, singling out the most influential users of a social network, or identifying potential superspreaders within the context of a disease epidemic.
Due to the existence of efficient computational algorithms based on numerical linear algebra, centrality measures based on walks are particularly popular. However, in certain applications it is desirable to refine the underlying combinatorics, by neglecting certain types of walks, in order to obtain more significant results. After a gentle introduction including some classical results in this research area, I will discuss some recently proposed definitions of centrality measures that only count walks that do not backtrack, or more generally do not include cycles of length $\leq k$. I will focus both on theoretical properties of these measures and on computational issues. The talk is based on collaborative work with F. Arrigo (Strathclyde), P. Grindrod (Oxford) and D. Higham (Edinburgh). |
|
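For context, a small NumPy sketch of the classical walk-counting (Katz) centrality that the measures in the talk refine by discarding backtracking walks or walks with short cycles; the damping parameter below is an arbitrary choice under the inverse spectral radius.

```python
import numpy as np

# Katz centrality: x = (I - alpha * A)^{-1} 1, i.e. each node scores the sum over
# all walks ending there, a walk of length L contributing alpha^L (L = 0 included).

def katz_centrality(A, alpha):
    n = A.shape[0]
    return np.linalg.solve(np.eye(n) - alpha * A, np.ones(n))

# A path on 5 nodes: the middle node should come out as the most central.
A = np.zeros((5, 5))
for i in range(4):
    A[i, i + 1] = A[i + 1, i] = 1

alpha = 0.9 / max(abs(np.linalg.eigvalsh(A)))   # safely below 1 / spectral radius
print(katz_centrality(A, alpha).round(3))
```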
13th of January 14:15 (Helsinki time) |
Jonni Virtema: Descriptive complexity of real computation and probabilistic team semantics |
In this talk I survey my recent joint work with Miika Hannula (Helsinki), Juha Kontinen (Helsinki), and Jan Van den Bussche (Hasselt) on logics and complexity in a setting that incorporates real numbers as first-class citizens. Background: Metafinite model theory (Grädel & Gurevich 1998) generalizes the approach of finite model theory by shifting to two-sorted structures that extend finite structures with another (often infinite) domain with some arithmetic (such as the reals with multiplication and addition), and weight functions bridging the two sorts. Finite structures enriched with real arithmetic are called R-structures. Blum-Shub-Smale machines (Blum, Shub & Smale 1989), BSS machines for short, are essentially random access machines with registers that can store real numbers and which can compute arithmetic operations on reals in a single time step. In addition to recognizing languages over the reals, BSS machines can also be used to recognize languages on the Boolean alphabet {0,1}; e.g., the Boolean languages recognizable by BSS machines in nondeterministic polynomial time coincide with the complexity class existsR (problems PTIME reducible to the existential theory of the reals), which lies somewhere between NP and PSPACE (defined with Turing machines). NP on BSS machines was logically captured by a variant of existential second-order logic over R-structures in (Grädel & Meer 1995). Our contribution: We study descriptive complexity of logics in the setting of probabilistic team semantics. This is a family of logics built-up from atomic expressions stating quantitative notions of dependence between random variables. E.g., probabilistic independence logic is built around an atomic statement that declares conditional probabilistic independence between tuples of random variables. Formulae, in this setting, describe properties of real-weighted distributions over first-order assignments. Hence, it turns out that the descriptive complexity of related logics lies in the realm of BSS-machines. For pinpointing the exact complexity of logics in the probabilistic team semantics setting, we introduced a novel restricted variant of BSS machines, coined "separate-branching BSS-machines" or SBSS-machines. This led to various connections between logics using probabilistic team semantics, complexity classes defined via SBSS-machines, complexity classes defined via Turing machines, and restrictions of the variant of existential second-order logic of Grädel and Meer. |
Paper ArXiv |
16th of December 14:15 (Helsinki time) |
Merav Parter: An Algorithmic Perspective for Spiking Neural Networks |
Understanding how the brain works, as a computational device, is a
central challenge of modern neuroscience and artificial intelligence.
Different research communities approach this challenge in different
ways, including studies that examine neural network structure as a
clue to computational function, functional imaging that studies neural
activation patterns, theoretical work using simplified models of
neural computation, and engineering of complex neural-inspired machine
learning architectures. The talk will be devoted to a line of works that
approaches this problem using techniques from distributed computing theory and other branches of
theoretical computer science. We will present a scientific perspective on the area,
modeling neural networks as computational platforms and studying
computability and costs within these models. We will focus on a collection of abstract problems inspired by problems that are solved in actual brains, such as problems of focus, time estimation, recognition, and memory. The goal is to design efficient algorithms (networks) for solving these problems, and analyze them in terms of static costs such as network size and dynamic costs such as the time to converge to a correct solution. The talk will also discuss the barriers of asynchronous neural computation from an algorithmic perspective, and the role of randomness in neural computation. If time allows, we will mention some interesting relations to the area of streaming algorithms. Based on several joint works with Yael Hitron, Nancy Lynch, Cameron Musco and Gur Perry. |
|
9th of December 14:15 (Helsinki time) |
William Moses Jr.: Singularly Optimal Randomized Leader Election |
This paper concerns designing distributed algorithms that are singularly optimal, i.e., algorithms that are simultaneously time and message optimal, for the fundamental leader election problem in networks. Our main result is a randomized distributed leader election algorithm for asynchronous complete networks that is essentially (up to a polylogarithmic factor) singularly optimal. Our algorithm uses $O(n)$ messages with high probability and runs in $O(\log^2 n)$ time (with high probability) to elect a unique leader. The $O(n)$ message complexity should be contrasted with the $\Omega(n \log n)$ lower bounds for the deterministic message complexity of leader election algorithms (regardless of time), proven by Korach, Moran, and Zaks (TCS, 1989) for asynchronous algorithms and by Afek and Gafni (SIAM J. Comput., 1991) for synchronous networks. Hence, our result also separates the message complexities of randomized and deterministic leader election. More importantly, our (randomized) time complexity of $O(\log^2 n)$ for obtaining the optimal $O(n)$ message complexity is significantly smaller than the long-standing $\tilde{\Theta}(n)$ time complexity obtained by Afek and Gafni and by Singh (SIAM J. Comput., 1997) for message optimal (deterministic) election in asynchronous networks. Afek and Gafni also conjectured that $\tilde{\Theta}(n)$ time would be optimal for message-optimal asynchronous algorithms. Our result shows that randomized algorithms are significantly faster. Turning to synchronous complete networks, Afek and Gafni showed an essentially singularly optimal deterministic algorithm with $O(\log n)$ time and $O(n \log n)$ messages. Ramanathan et al. (Distrib. Comput. 2007) used randomization to improve the message complexity, and showed a randomized algorithm with $O(n)$ messages but still with $O(\log n)$ time (with failure probability $O(1 / \log^{\Omega(1)} n)$). Our second result shows that synchronous complete networks admit a tightly singularly optimal randomized algorithm, with $O(1)$ time and $O(n)$ messages (both bounds are optimal). Moreover, our algorithm's time bound holds with certainty, and its message bound holds with high probability, i.e., $1-1/n^c$ for constant $c$. Our results demonstrate that leader election can be solved in a simultaneously message and time-efficient manner in asynchronous complete networks using randomization. It is open whether this is possible in asynchronous general networks. This talk is based on joint work with Shay Kutten, Gopal Pandurangan, and David Peleg which was published at DISC 2020. |
Paper |
2nd of December 14:15 (Helsinki time) |
Shreyas Pai: Sample-and-Gather: Fast Ruling Set Algorithms in the Low-Memory MPC Model |
Motivated by recent progress on symmetry breaking problems such as maximal independent set (MIS) and maximal matching in the low-memory Massively Parallel Computation (MPC) model (e.g., Behnezhad et al. PODC 2019; Ghaffari-Uitto SODA 2019), we investigate the complexity of ruling set problems in this model. The MPC model has become very popular as a model for large-scale distributed computing and it comes with the constraint that the memory-per-machine is strongly sublinear in the input size. For graph problems, extremely fast MPC algorithms have been designed assuming Õ(n) memory-per-machine, where n is the number of nodes in the graph (e.g., the O(log log n) MIS algorithm of Ghaffari et al., PODC 2018). However, it has proven much more difficult to design fast MPC algorithms for graph problems in the low-memory MPC model, where the memory-per-machine is restricted to being strongly sublinear in the number of nodes, i.e., O(n^ε) for constant 0 < ε < 1. In this paper, we present an algorithm for the 2-ruling set problem, running in Õ(log^{1/6} Δ) rounds whp, in the low-memory MPC model. Here Δ is the maximum degree of the graph. We extend this result to β-ruling sets for any integer β > 1. Specifically, we show that a β-ruling set can be computed in the low-memory MPC model with O(n^ε) memory-per-machine in Õ(β ⋅ log^{1/(2^{β+1}-2)} Δ) rounds, whp. From this it immediately follows that a β-ruling set for β = Ω(log log log Δ) can be computed in just O(β log log n) rounds whp. The above results assume a total memory of Õ(m + n^{1+ε}). We also present algorithms for β-ruling sets in the low-memory MPC model assuming that the total memory over all machines is restricted to Õ(m). For β > 1, these algorithms are all substantially faster than the Ghaffari-Uitto Õ(√log Δ)-round MIS algorithm in the low-memory MPC model. All our results follow from a Sample-and-Gather Simulation Theorem that shows how random-sampling-based Congest algorithms can be efficiently simulated in the low-memory MPC model. We expect this simulation theorem to be of independent interest beyond the ruling set algorithms derived here. Joint work with Kishore Kothapalli and Sriram Pemmaraju to appear in FSTTCS 2020. |
ArXiv |
25th of November 14:15 (Helsinki time) |
Shaofeng Jiang: Streaming Algorithms for Geometric Steiner Forest |
We consider a natural generalization of the Steiner tree problem, the Steiner forest problem, in the Euclidean plane: the input is a multiset X ⊂ R^2, partitioned into k color classes C_1, C_2, ..., C_k ⊆ X. The goal is to find a minimum-cost Euclidean graph G such that every color class C_i is connected in G. We study this Steiner forest problem in the streaming setting, where the stream consists of insertions and deletions of points to X. Each input point x ∈ X arrives with its color in [k], and as usual for dynamic geometric streams, the input points are restricted to the discrete grid {0, ..., Δ}^2. We design a single-pass streaming algorithm that uses poly(k log Δ) space and time, and estimates the cost of an optimal Steiner forest solution within ratio arbitrarily close to the famous Euclidean Steiner ratio α_2 (currently 1.1547 < α_2 < 1.214). Our approach relies on a novel combination of streaming techniques, like sampling and linear sketching, with the classical dynamic-programming framework for geometric optimization problems, which usually requires large memory and has so far not been applied in the streaming setting. Joint work with Artur Czumaj, Robert Krauthgamer and Pavel Veselý. |
ArXiv |
11th of November One hour later than usual: 15:15 (Helsinki time) |
Przemek Uznański: Cardinality estimation using Gumbel distribution |
Cardinality estimation is the task of approximating the number of distinct elements in a large dataset with possibly repeating elements. LogLog and HyperLogLog (cf. Durand and Flajolet [ESA 2003], Flajolet et al. [Discrete Math Theor. 2007]) are small space sketching schemes for cardinality estimation, which both have strong theoretical performance guarantees and are highly effective in practice. This makes them a highly popular solution with many implementations in big-data systems (e.g. Algebird, Apache DataSketches, BigQuery, Presto and Redis). However, despite having a simple and elegant formulation, the analyses of both LogLog and HyperLogLog are extremely involved -- spanning tens of pages of analytic combinatorics and complex function analysis. We propose a modification to both LogLog and HyperLogLog that replaces the discrete geometric distribution with a continuous Gumbel distribution. This leads to a very short, simple and elementary analysis of estimation guarantees, and smoother behavior of the estimator. |
ArXiv |
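The continuous idea can be sketched in a few lines: hash every element deterministically to a Gumbel variate, keep per-bucket maxima, and exponentiate the corrected average. This is only a toy illustration of why the Gumbel distribution fits the task; the exact estimator and constants of the paper may differ.

```python
import hashlib, math

# The maximum of k i.i.d. standard Gumbels is Gumbel(ln k), so per-bucket maxima
# reveal the log of the number of distinct elements routed to each bucket.
# Hashing (rather than sampling) makes repeated insertions of an element no-ops.
EULER_MASCHERONI = 0.5772156649015329

def _hash_unit(x, salt):
    h = hashlib.blake2b(f"{salt}:{x}".encode(), digest_size=8).digest()
    return (int.from_bytes(h, "big") + 0.5) / 2**64       # uniform in (0, 1)

class GumbelSketch:
    def __init__(self, m=64):
        self.maxima = [float("-inf")] * m

    def add(self, x):
        bucket = int(_hash_unit(x, "bucket") * len(self.maxima))
        gumbel = -math.log(-math.log(_hash_unit(x, "value")))
        self.maxima[bucket] = max(self.maxima[bucket], gumbel)

    def estimate(self):
        m = len(self.maxima)
        return m * math.exp(sum(self.maxima) / m - EULER_MASCHERONI)

sketch = GumbelSketch()
for i in range(100_000):
    sketch.add(i % 20_000)           # 100k insertions, 20k distinct elements
print(round(sketch.estimate()))      # a rough estimate of 20000
```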
28th of October 2020 14:15 (Helsinki time) |
Philipp Schneider: Shortest Paths in the HYBRID Network Model |
The HYBRID model, introduced in [Augustine et al., SODA '20], provides a theoretical foundation for networks that allow multiple communication modes. The model follows the principles of synchronous message passing, where nodes are allowed to use two fundamentally different communication modes. First, a local mode where nodes may exchange arbitrary information per round over edges of a local communication graph G (akin to the LOCAL model). Second, a global mode where every node may exchange log n messages of size O(log n) bits per round with arbitrary nodes in the network. The HYBRID model intends to reflect the conditions of many real hybrid networks, where high-bandwidth but inherently local communication is combined with highly flexible global communication with restricted bandwidth. We explore the power and limitations of the HYBRID model by investigating the complexity of computing shortest paths of the local communication graph G. The aim of the talk is to give an overview of the known techniques for information dissemination in the HYBRID model. Subsequently, we will look into how these techniques can be used to obtain algorithmic upper bounds for various shortest paths problems and how these compare to the known lower bounds. As a side effect we will also learn how the HYBRID model is related to other models of computation. |
ArXiv |
21st of October 2020 14:15 (Helsinki time) |
Darya Melnyk: Online Problems with Delays in the Uniform Metric Space |
We present tight bounds for the k-server problem with delays in the uniform metric space. The problem is defined on n + k nodes in the uniform metric space which can issue requests over time. These requests can be served directly or with some delay using k servers, by moving a server to the corresponding node with an open request. The task is to find an online algorithm that can serve the requests while minimizing the total moving and delay costs. We first provide a lower bound by showing that the competitive ratio of any deterministic online algorithm cannot be better than (2k + 1) in the clairvoyant setting. We will then show that conservative algorithms (without delay) can be equipped with an accumulative delay function such that all such algorithms become (2k + 1)-competitive in the non-clairvoyant setting. Together, the two bounds establish a tight result for both the clairvoyant and the non-clairvoyant settings. We further study a special case of the online Minimum-Cost Perfect Matching with Delays (MPMD) problem introduced by Emek et al. [STOC 2016]. In this problem, a metric space is given and requests arrive at different times at points in this space. The algorithm is allowed to delay the matching of these requests, with the objective to both minimize the sum of distances between matched pairs and the time that passes until a request is matched. We consider the special case of uniform metric spaces with a unit distance between any two points. Our results include a deterministic 3-competitive online algorithm for a uniform metric space consisting of four or fewer points as well as a deterministic 3.5-competitive online algorithm for uniform metric spaces with an arbitrary number of points. |
Paper |
7th of October 2020 14:15 (Helsinki time) |
Aleksander Łukasiewicz: All-Pairs LCA in DAGs: Breaking through the O(n^2.5) barrier |
Let G=(V,E) be an n-vertex directed acyclic graph (DAG). A lowest common ancestor (LCA) of two vertices u and v is a common ancestor w of u and v such that no descendant of w has the same property. In this paper, we consider the problem of computing an LCA, if any, for all pairs of vertices in a DAG. The fastest known algorithms for this problem exploit fast matrix multiplication subroutines and have running times ranging from O(n^2.687) [Bender et al. SODA'01] down to O(n^2.615) [Kowaluk and Lingas ICALP'05] and O(n^2.569) [Czumaj et al. TCS'07]. Somewhat surprisingly, all those bounds would still be Ω(n^2.5) even if matrix multiplication could be solved optimally (i.e., ω=2). This appears to be an inherent barrier for all the currently known approaches, which raises the natural question of whether one could break through the O(n^2.5) barrier for this problem. In this paper, we answer this question affirmatively: in particular, we present an O(n^2.447) (O(n^{7/3}) for ω=2) algorithm for finding an LCA for all pairs of vertices in a DAG, which represents the first improvement on the running times for this problem in the last 13 years. A key tool in our approach is a fast algorithm to partition the vertex set of the transitive closure of G into a collection of O(ℓ) chains and O(n/ℓ) antichains, for a given parameter ℓ. As usual, a chain is a path while an antichain is an independent set. We then find, for all pairs of vertices, a candidate LCA among the chain and antichain vertices, separately. The first set is obtained via a reduction to min-max matrix multiplication. The computation of the second set can be reduced to Boolean matrix multiplication similarly to previous results on this problem. We finally combine the two solutions together in a careful (non-obvious) manner. To appear in SODA 2021. |
ArXiv |
30th September 2020 14:15 (Helsinki time) |
Julian Portmann: Tight Bounds for Deterministic High-Dimensional Grid Exploration |
We study the problem of exploring an oriented grid with autonomous agents governed by finite automata. In the case of a 2-dimensional grid, the question how many agents are required to explore the grid, or equivalently, find a hidden treasure in the grid, is fully understood in both the synchronous and the semi-synchronous setting. For higher dimensions, Dobrev, Narayanan, Opatrny, and Pankratov [ICALP'19] showed very recently that, surprisingly, a (small) constant number of agents suffices to find the treasure, independent of the number of dimensions, thereby disproving a conjecture by Cohen, Emek, Louidor, and Uitto [SODA'17]. Dobrev et al. left as an open question whether their bounds on the number of agents can be improved. We answer this question in the affirmative for deterministic finite automata: we show that 3 synchronous and 4 semi-synchronous agents suffice to explore an n-dimensional grid for any constant n. The bounds are optimal and notably, the matching lower bounds already hold in the 2-dimensional case. Our techniques can also be used to make progress on other open questions asked by Dobrev et al.: we prove that 4 synchronous and 5 semi-synchronous agents suffice for polynomial-time exploration, and we show that, under a natural assumption, 3 synchronous and 4 semi-synchronous agents suffice to explore unoriented grids of arbitrary dimension (which, again, is tight). |
ArXiv |
16th September 2020 14:15 (Helsinki time) |
Janne Korhonen: Input-dynamic distributed graph algorithms for congested networks |
Consider a distributed system, where the topology of the communication network remains fixed, but local inputs given to nodes may change over time. In this work, we explore the following question: if some of the local inputs change, can an existing solution be updated efficiently, in a dynamic and distributed manner?
To address this question, we define the batch dynamic CONGEST model, where the communication network G=(V,E) remains fixed and a dynamic edge labelling defines the problem input. The task is to maintain a solution to a graph problem on the labeled graph under batch changes. We investigate, when a batch of α edge label changes arrives, (1) how much time, as a function of α, we need to update an existing solution, and (2) how much information the nodes have to keep in local memory between batches in order to update the solution quickly. We give a general picture of the complexity landscape in this model, including a general framework for lower bounds. In particular, we prove non-trivial upper bounds for two selected, contrasting problems: maintaining a minimum spanning tree and detecting cliques. |
ArXiv |
9th September 2020 14:15 (Helsinki time) |
Yannic Maus: Distributed Graph Coloring: Linial for Lists |
Linial's famous color reduction algorithm reduces a given m-coloring of a graph with maximum degree Δ to a O(Δ² log m)-coloring, in a single round in the LOCAL model. We show a similar result when nodes are restricted to choose their color from a list of allowed colors: given an m-coloring in a directed graph of maximum outdegree β, if every node has a list of size Ω(β² (log β + log log m + log log |C|)) from a color space C then they can select a color in two rounds in the LOCAL model. Moreover, the communication of a node essentially consists of sending its list to the neighbors. This is obtained as part of a framework that also contains Linial's color reduction (with an alternative proof) as a special case. Our result also leads to a defective list coloring algorithm. As a corollary, we improve the state-of-the-art truly local (deg+1)-list coloring algorithm from Barenboim et al. [PODC'18] by slightly reducing the runtime to O(sqrt(Δ log Δ) + log* n) and significantly reducing the message size (from huge to roughly Δ). Our techniques are inspired by the local conflict coloring framework of Fraigniaud et al. [FOCS'16]. | ArXiv |
26th August 2020 14:15 (Helsinki time) |
Ebrahim Ghorbani: Spectral gap of regular graphs and a conjecture by Aldous-Fill on the relaxation time of the random walk |
Aldous and Fill conjectured that the maximum relaxation time for the random walk on a connected regular graph with $n$ vertices is $(1+o(1)) \frac{3n^2}{2\pi^2}$. This conjecture can be rephrased in terms of the spectral gap as follows: the spectral gap (algebraic connectivity) of a connected $k$-regular graph on $n$ vertices is at least $(1+o(1))\frac{2k\pi^2}{3n^2}$, and the bound is attained for at least one value of $k$. Brand, Guiduli, and Imrich determined the structure of connected cubic graphs on $n$ vertices with minimum spectral gap. We investigate the structure of quartic (i.e., 4-regular) graphs with the minimum spectral gap among all connected quartic graphs. We show that they must have a path-like structure built from specific blocks. Based on these results, we prove that the Aldous-Fill conjecture holds for $k=3$ and $4$. This talk is based on joint works with M. Abdi and W. Imrich. |
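A quick numerical illustration of the quantities involved: computing the algebraic connectivity of the (only) connected 2-regular graph, the cycle, and comparing it with the bound from the conjecture. The bound is asymptotic, so the comparison at a fixed small n is only indicative.

```python
import numpy as np

# Algebraic connectivity = second-smallest eigenvalue of the Laplacian L = D - A.

def algebraic_connectivity(adj):
    laplacian = np.diag(adj.sum(axis=1)) - adj
    return np.sort(np.linalg.eigvalsh(laplacian))[1]

n, k = 30, 2
cycle = np.zeros((n, n))
for i in range(n):
    cycle[i, (i + 1) % n] = cycle[(i + 1) % n, i] = 1

bound = 2 * k * np.pi**2 / (3 * n**2)        # the conjectured asymptotic lower bound
print(algebraic_connectivity(cycle), bound)  # ~0.0437 vs ~0.0146
```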
19th August 2020 14:15 (Helsinki time) |
Klaus-Tycho Förster: On the Feasibility of Perfect Resilience with Local Fast Failover |
In order to provide high resilience and to react quickly to link failures, modern computer networks support fully decentralized flow rerouting, also known as local fast failover. In a nutshell, the task of a local fast failover algorithm is to pre-define fast failover rules for each node using locally available information only. These rules determine, for each incoming link (from which a packet may arrive) and each set of local link failures (i.e., the failed links incident to the node), on which outgoing link a packet should be forwarded. Ideally, such a local fast failover algorithm provides perfect resilience deterministically: a packet emitted from any source can reach any target, as long as the underlying network remains connected. Feigenbaum et al. showed that it is not always possible to provide perfect resilience and showed how to tolerate a single failure in any network. Interestingly, not much more is known currently about the feasibility of perfect resilience. This paper revisits perfect resilience with local fast failover, both in a model where the source can be used for forwarding decisions and in one where it cannot. We first derive several fairly general impossibility results: By establishing a connection between graph minors and resilience, we prove that it is impossible to achieve perfect resilience on any non-planar graph; furthermore, while planarity is necessary, it is also not sufficient for perfect resilience. On the positive side, we show that graph families which are closed under the subdivision of links can allow for simple and efficient failover algorithms which simply skip failed links. We demonstrate this technique by deriving perfect resilience for outerplanar graphs and related scenarios, as well as for scenarios where the source and target are topologically close after failures. |
Paper |
12th August 2020 14:15 (Helsinki time) |
Chris Brzuska: On Building Fine-Grained One-Way Functions from Strong Average-Case Hardness |
Constructing one-way functions from average-case hardness is a long-standing open problem. A positive result would exclude Pessiland (Impagliazzo '95) and establish a highly desirable win-win situation: Either (symmetric) cryptography exists unconditionally, enabling many of the important primitives which are used to secure our communications, or all NP problems can be solved efficiently on average, which would be a revolution in algorithms. Motivated by the interest of establishing such a win-win result and the lack of progress on this seemingly very hard question, we initiate the investigation of weaker yet meaningful candidate win-win results. Specifically, we study the following type of win-win results: either there are fine-grained one-way functions (FGOWF), which relax the standard notion of a one-way function by requiring only a fixed polynomial gap (as opposed to superpolynomial) between the running time of the function and the running time of an inverter, or nontrivial speedups can be obtained for all NP problems on average. We obtain three main results: - We introduce the random language model (RLM), which captures idealized average-case hard languages, analogous to how the random oracle model captures idealized one-way functions. We provide a construction of a FGOWF (with quadratic hardness gap) and prove its security in the RLM. This rules out an idealized version of Pessiland, where ideally hard languages would exist yet even weak forms of cryptography would fail. - On the negative side, we prove a strong oracle separation: we show that there is no black-box proof that either FGOWF exist, or non-trivial speedup can be obtained for all NP languages on average (i.e., there are no exponentially average-case hard NP languages). - We provide a second strong negative result for an even weaker candidate win-win result: there is no black-box proof that either FGOWF exist, or non-trivial speedups can be obtained for all NP languages on average when amortizing over many instances (i.e., there are no exponentially average-case hard NP languages whose hardness amplifies optimally through parallel repetitions). This separation forms the core technical contribution of our work. Our results lay the foundations for a program towards building fine-grained one-way functions from strong forms of average-case hardness, following the template of constructions in the random language model. We provide a preliminary investigation of this program, showing black-box barriers toward instantiating our idealized constructions from natural hardness properties. Joint work with Geoffroy Couteau. |
|
29th July 2020 14:15 (Helsinki time) |
Davin Choo: k-means++: few more steps yield constant approximation |
The k-means++ algorithm of Arthur and Vassilvitskii (SODA 2007) is a state-of-the-art algorithm for solving the k-means clustering problem and is known to give an O(log k)-approximation in expectation. Recently, Lattanzi and Sohler (ICML 2019) proposed augmenting k-means++ with O(k log log k) local search steps to yield a constant approximation (in expectation) to the k-means clustering problem. In this paper, we improve their analysis to show that, for any arbitrarily small constant $\epsilon > 0$, with only $\epsilon k$ additional local search steps, one can achieve a constant approximation guarantee (with high probability in k), resolving an open problem in their paper. | Paper
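For reference, the D^2-sampling seeding rule that the local-search augmentation builds on; the extra $\epsilon k$ local search steps analyzed in the paper are not shown.

```python
import random

# k-means++ seeding: first centre uniform at random, each further centre sampled
# with probability proportional to its squared distance to the nearest centre so far.

def squared_dist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeanspp_seeding(points, k, rng):
    centres = [rng.choice(points)]
    while len(centres) < k:
        d2 = [min(squared_dist(p, c) for c in centres) for p in points]
        centres.append(rng.choices(points, weights=d2, k=1)[0])
    return centres

rng = random.Random(0)
points = [(rng.gauss(cx, 1.0), rng.gauss(cy, 1.0))
          for cx, cy in [(0, 0), (10, 0), (0, 10)] for _ in range(50)]
print(kmeanspp_seeding(points, 3, rng))   # typically one centre near each cluster
```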
22nd July 2020 14:15 (Helsinki time) |
Yuval Efron: Beyond Alice and Bob: Improved Inapproximability for Maximum Independent Set in CONGEST |
By far the most fruitful technique for showing lower bounds for the CONGEST model is reductions to two-party communication complexity. This technique has yielded nearly tight results for various fundamental problems such as distance computations, minimum spanning tree, minimum vertex cover, and more.
In this work, we take this technique a step further, and we introduce a framework of reductions to t-party communication complexity, for every t ≥ 2. Our framework enables us to show improved hardness results for maximum independent set. Recently, Bachrach et al. [PODC 2019] used the two-party framework to show hardness of approximation for maximum independent set. They show that finding a (5/6+ϵ)-approximation requires Ω(n/log^6 n) rounds, and finding a (7/8+ϵ)-approximation requires Ω(n^2/log^7 n) rounds, in the CONGEST model, where n is the number of nodes in the network.
We improve the results of Bachrach et al. by using reductions to multi-party communication complexity. Our results: (1) Any algorithm that finds a (1/2+ϵ)-approximation for maximum independent set in the CONGEST model requires Ω(n/log^3 n) rounds. (2) Any algorithm that finds a (3/4+ϵ)-approximation for maximum independent set in the CONGEST model requires Ω(n^2/log^3 n) rounds. |
Paper |
15th July 2020 14:15 (Helsinki time) |
Yi-Jun Chang: Simple Contention Resolution via Multiplicative Weight Updates |
We consider the classic contention resolution problem, in which devices conspire to share some common resource, for which they each need temporary and exclusive access. To ground the discussion, suppose (identical) devices wake up at various times, and must send a single packet over a shared multiple-access channel. In each time step they may attempt to send their packet; they receive ternary feedback {0,1,2^+} from the channel, 0 indicating silence (no one attempted transmission), 1 indicating success (one device successfully transmitted), and 2^+ indicating noise. We prove that a simple strategy suffices to achieve a channel utilization rate of 1/e-O(epsilon), for any epsilon > 0. In each step, device i attempts to send its packet with probability p_i, then applies a rudimentary multiplicative weight-type update to p_i. p_i ← { p_i * e^{epsilon} upon hearing silence (0), p_i upon hearing success (1), p_i * e^{-epsilon/(e-2)} upon hearing noise (2^+) }. This scheme works well even if the introduction of devices/packets is adversarial, and even if the adversary can jam time slots (make noise) at will. We prove that if the adversary jams J time slots, then this scheme will achieve channel utilization 1/e-epsilon, excluding O(J) wasted slots. Results similar to these (Bender, Fineman, Gilbert, Young, SODA 2016) were already achieved, but with a lower constant efficiency (less than 0.05) and a more complex algorithm. | Paper |
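The update rule quoted in the abstract is simple enough to simulate directly. Below is a small Python sketch of one channel step under that rule; the cap at probability 1 and the choice of initial probabilities are implementation details of this sketch, not taken from the paper.

```python
import math
import random

def channel_step(probs, eps):
    """One slot of the multiplicative-weight contention resolution scheme.

    Each device i transmits with probability probs[i]; the ternary channel
    feedback (0 = silence, 1 = success, 2+ = noise) drives the update rule
    quoted in the abstract above.
    """
    senders = [i for i, p in enumerate(probs) if random.random() < p]
    if len(senders) == 0:           # silence: become more aggressive
        factor = math.exp(eps)
    elif len(senders) == 1:         # success: probabilities stay unchanged
        factor = 1.0
    else:                           # noise (collision): back off
        factor = math.exp(-eps / (math.e - 2))
    new_probs = [min(1.0, p * factor) for p in probs]   # cap is a sketch choice
    return new_probs, senders

# Example: ten devices waking up with a small initial probability.
probs = [0.05] * 10
for _ in range(100):
    probs, senders = channel_step(probs, eps=0.1)
```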
8th July 2020 14:15 (Helsinki time) |
Diksha Gupta: Sybil Defense in the Presence of Churn Using Resource Burning |
In this talk, I will begin by discussing a recent result for defending against Sybil attacks in dynamic permissionless systems, in the presence of a computationally bounded adversary - TOtal Good COMputation (ToGCom). This technique guarantees a majority of honest identities (IDs) at all times, at a computational cost to the protocol that is comparable to that of the attacker. Next, I will talk about the concept of resource burning - the verifiable consumption of resources. This resource does not necessarily need to be computational power, but can also be communication capacity, computer memory, and human effort, subject to some constraints. Using this insight, I will conclude with a generalization of our Sybil defense techniques to a variety of systems with churn, in the presence of a resource-bounded adversary. |
1st July 2020 14:15 (Helsinki time) |
Sebastian Brandt: Lower Bounds for Ruling Sets in the LOCAL Model |
Given a graph G = (V,E), an (a,b)-ruling set is a subset S of V such
that the distance between any two vertices in S is at least a, and the
distance between any vertex in V and the closest vertex in S is at most
b. Ruling sets are a generalization of maximal independent sets (which
are (2,1)-ruling sets) and constitute an important building block of
many distributed algorithms. The recent breakthrough on network
decompositions by Rozhon and Ghaffari [STOC'20] implies that, in the
distributed LOCAL model, ruling sets can be computed deterministically
in polylogarithmic time, for a wide range of parameters a, b. In this talk, we present polylogarithmic lower bounds for the deterministic computation of ruling sets, and Omega(poly(log log n)) lower bounds for the randomized computation of ruling sets, both in the LOCAL model, improving on the previously best known lower bounds of Omega(log* n) by Linial [FOCS'87] and Naor [J.Disc.Math.'91]. In the special case of maximal independent sets, such lower bounds were already known; however, our lower bounds are the first (beyond Omega(log* n)) that are also applicable on trees. (A small checker for the ruling-set definition is sketched after this entry.) |
Paper |
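As referenced above, here is a small Python checker for the (a,b)-ruling set definition; it uses plain BFS distances and is only meant to make the definition concrete. For example, a maximal independent set passes the check with a = 2, b = 1.

```python
from collections import deque

def bfs_dist(adj, src):
    # Single-source BFS distances in an unweighted graph (adj: vertex -> neighbors).
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def is_ruling_set(adj, S, a, b):
    """Check the (a,b)-ruling set conditions from the definition above."""
    S = set(S)
    dist_from_S = {v: float("inf") for v in adj}
    for s in S:
        ds = bfs_dist(adj, s)
        # Condition 1: every other vertex of S is at distance >= a from s.
        if any(t != s and ds.get(t, float("inf")) < a for t in S):
            return False
        for v, d in ds.items():
            dist_from_S[v] = min(dist_from_S[v], d)
    # Condition 2: every vertex is within distance <= b of some vertex in S.
    return all(d <= b for d in dist_from_S.values())

# Example: on the path 0-1-2, the set {0, 2} is a (2,1)-ruling set (an MIS).
adj = {0: [1], 1: [0, 2], 2: [1]}
assert is_ruling_set(adj, {0, 2}, 2, 1)
```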
17th June 2020 14:15 (Helsinki time) |
Frederik Mallmann-Trenn: Finding the best papers with noisy reviews |
Say you are tasked to find the best 150 papers among more than 550 papers.
You can ask people to review a given paper by either asking 1) Is paper A better than paper B? or 2) What is the score of paper A? The problem is that each review returns an incorrect response with a small constant probability (say, each answer is correct only with probability 2/3). How can you assign reviews so that the total number of queries and the number of review rounds are both small? (A sketch of the noisy-comparison model and the naive majority-vote baseline follows this entry.) |
Paper |
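To make the noise model concrete, here is a minimal Python sketch of a noisy pairwise comparison together with the obvious baseline of repeating the query and taking a majority vote. This is only the trivial per-comparison fix, not the talk's algorithm, whose point is to keep both the total number of queries and the number of rounds small.

```python
import random

def noisy_compare(score_a, score_b, p_correct=2/3):
    """Answer 'is paper A better than paper B?', flipping the answer
    with probability 1 - p_correct."""
    truth = score_a > score_b
    return truth if random.random() < p_correct else not truth

def boosted_compare(score_a, score_b, repetitions=15):
    """Naive baseline: repeat the noisy query and take a majority vote.
    The error probability drops exponentially in the number of repetitions
    (Chernoff bound), but this spends many queries on a single comparison."""
    votes = sum(noisy_compare(score_a, score_b) for _ in range(repetitions))
    return votes > repetitions / 2
```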
10th June 2020 14:15 (Helsinki time) |
Joachim Spoerhase: Approximation Algorithms for some Submodular and Graph-based Optimization Problems |
In this talk, I will give an overview of three recent results on approximation algorithms for discrete optimization problems.
The first result concerns optimizing a monotone submodular function with respect to an arbitrary (constant) number of packing and covering constraints. We give a randomized polynomial-time approximation algorithm that is essentially tight with respect to approximation ratio, running time, and constraint violation. The algorithm is based on rounding a multi-linear relaxation of the problem (a small sampling sketch of the multilinear extension follows this entry). We also discuss special cases where we can give *deterministic* algorithms based on a novel combinatorial approach.
The second set of results concerns optimization problems for graphs. We will briefly look at a polynomial-time approximation scheme (PTAS) for Steiner tree on map graphs, which generalizes a known PTAS for planar (edge-weighted) Steiner tree, and which is motivated by the study of planar node-weighted Steiner trees. We will also touch upon a result for the sparsest cut problem in bounded-treewidth graphs. REMARK: This talk is given as a memorial for Sumedha Uniyal who was a postdoc at Aalto for 3 years until she passed away on February 19, 2020. I will give an overview of our three joint results. Additional contributors include Jaroslaw Byrka (Wroclaw), Parinya Chalermsook (Aalto), Mateusz Lewandowski (Wroclaw), Syed Mohammad Meesum (Wroclaw), Eyal Mizrachi (Technion), Matthias Mnich (TU Hamburg), Roy Schwartz (Technion), Daniel Vaz (TU Munich)
|
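The first result above rounds a multilinear relaxation; as noted there, the sketch below shows how the multilinear extension F(x) = E[f(R(x))] of a set function f is typically estimated by sampling, where R(x) contains each element i independently with probability x_i. The coverage function used as f is only a placeholder example; the actual rounding scheme from the talk is not reproduced here.

```python
import random

def multilinear_extension(f, x, samples=1000):
    """Monte Carlo estimate of the multilinear extension F(x) = E[f(R(x))],
    where R(x) contains element i independently with probability x[i]."""
    n = len(x)
    total = 0.0
    for _ in range(samples):
        R = {i for i in range(n) if random.random() < x[i]}
        total += f(R)
    return total / samples

# Placeholder submodular function: coverage of a small set system.
sets = [{0, 1}, {1, 2}, {2, 3}]
coverage = lambda R: len(set().union(*(sets[i] for i in R))) if R else 0

# Estimate F at the fractional point (0.5, 0.5, 0.5).
estimate = multilinear_extension(coverage, [0.5, 0.5, 0.5])
```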
3rd June 2020 14:15 (Helsinki time) |
Parinya Chalermsook: Some New Algorithmic Min-Max Relations via Local Search |
The study of min-max relations has been a cornerstone of combinatorial optimization. Classic examples include LP duality relations, the Tutte-Berge formula, and the notion of perfect graphs. These relations have yielded many ground-breaking algorithmic results. Algorithmic uses of these bounds often involve two steps: (1) prove a non-constructive bound and (2) use another technique (e.g. convex programs) to make the first step algorithmic. In this talk, I will report our recent use of local search arguments to derive two new algorithmic min-max relations in one go. In these results, a bound is proved by showing that every locally optimal solution satisfies it, immediately yielding an efficient algorithm.
REMARK: This talk is given as a memorial for Sumedha Uniyal who was a postdoc at Aalto for 3 years until she passed away on February 19, 2020. She played a central role in this project. Additional contributors include Samir Khuller (Northwestern), Andreas Schmid (Aalto), and Pattara Sukprasert (Northwestern). |
|
27th May 2020 14:15 (Helsinki time) |
Krzysztof Nowicki: Faster Algorithms for Edge Connectivity via Random 2-Out Contractions |
We provide a simple new randomized contraction approach to the global minimum cut problem for simple undirected graphs. The contractions exploit 2-out edge sampling from each vertex rather than the standard uniform edge sampling. Our new approach yields better algorithms for sequential, distributed, and parallel models of computation. | Paper |
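Below is a minimal Python sketch of one round of the 2-out sampling idea: every vertex samples two of its incident edges (with repetition) and all sampled edges are contracted via union-find. How many rounds are run and how the contracted multigraph is then processed follow the paper and are not shown here.

```python
import random

class DSU:
    """Plain union-find with path halving (no union by rank; fine for a sketch)."""
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x
    def union(self, x, y):
        self.parent[self.find(x)] = self.find(y)

def two_out_contraction(n, adj):
    """One round of random 2-out contraction on a graph with vertices 0..n-1
    given as adjacency lists: each vertex samples two incident edges and all
    sampled edges are contracted; returns the component label of each vertex."""
    dsu = DSU(n)
    for v in range(n):
        if adj[v]:
            for u in random.choices(adj[v], k=2):
                dsu.union(v, u)
    return [dsu.find(v) for v in range(n)]
```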
20th May 2020 15:15 (Helsinki time, one hour later than usual!) |
Jan Studený: On the Hardness of Classifying Distributed Complexity of Graph Problems on Paths, Cycles, and Trees |
Given the description of a locally checkable graph problem Π on paths, cycles, or trees, how hard is it to determine the distributed complexity of solving Π in the usual LOCAL model of distributed computing? In this talk I will present our results answering this question for paths and cycles, and discuss the case of trees. | Paper |
13th May 2020 14:15 (Helsinki time) |
Manuela Fischer: Local Glauber Dynamics |
Sampling constitutes an important tool in a variety of areas: from machine learning and combinatorial optimization to computational physics and biology. A central class of sampling algorithms is the Markov Chain Monte Carlo method, based on the construction of a Markov chain with the desired sampling distribution as its stationary distribution. Many of the traditional Markov chains, such as the Glauber dynamics, do not scale well with increasing dimension. To address this shortcoming, we propose a simple local update rule based on the Glauber dynamics that leads to efficient parallel and distributed algorithms for sampling from Gibbs distributions. Concretely, we present a Markov chain that mixes in O(log n) rounds when Dobrushin's condition for the Gibbs distribution is satisfied. This improves over the LubyGlauber algorithm by Feng, Sun, and Yin [PODC'17], which needs O(Δ log n) rounds, and their LocalMetropolis algorithm, which converges in O(log n) rounds but requires a considerably stronger mixing condition. Here, n denotes the number of nodes in the graphical model inducing the Gibbs distribution, and Δ its maximum degree. In particular, our method can sample a uniform proper coloring with α Δ colors in O(log n) rounds for any α > 2, which almost matches the threshold of the sequential Glauber dynamics and improves on the α > 2 + sqrt(2) threshold of Feng et al. | Paper |
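For reference, here is a tiny Python sketch of the classical sequential (heat-bath) Glauber dynamics update for proper colorings, i.e., the baseline single-site chain that the local rule in the talk parallelizes; the talk's distributed update rule itself is not reproduced here.

```python
import random

def glauber_coloring_step(adj, coloring, colors):
    """One step of the classical sequential heat-bath Glauber dynamics for
    proper colorings: pick a uniformly random vertex and resample its color
    uniformly among the colors not currently used by its neighbours."""
    v = random.choice(list(adj))
    forbidden = {coloring[u] for u in adj[v]}
    allowed = [c for c in colors if c not in forbidden]
    if allowed:                      # keep the old color if no color is allowed
        coloring[v] = random.choice(allowed)
    return coloring

# Example: a triangle with 4 colors, started from an arbitrary proper coloring.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
coloring = {0: 0, 1: 1, 2: 2}
for _ in range(1000):
    coloring = glauber_coloring_step(adj, coloring, colors=range(4))
```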
6th May 2020 14:15 (Helsinki time) |
Jesper Nederlof: Bipartite TSP in O(1.9999^n) Time, Assuming Quadratic Time Matrix Multiplication |
The symmetric traveling salesman problem (TSP) is the problem of
finding the shortest Hamiltonian cycle in an edge-weighted undirected
graph. In 1962 Bellman, and independently Held and Karp, showed that
TSP instances with n cities can be solved in O(n^2*2^n) time. Since
then it has been a notorious problem to improve the runtime to
O((2-eps)^n) for some constant eps >0. In this work we establish the
following progress: If (s x s)-matrices can be multiplied in
s^{2+o(1)} time, then all instances of TSP in bipartite graphs can be
solved in O(1.9999^n) time by a randomized algorithm with constant
error probability. We also indicate how our methods may be useful to
solve TSP in non-bipartite graphs. On a high level, our approach is via a new problem called the MINHAMPAIR problem: Given two families of weighted perfect matchings, find a combination of minimum weight that forms a Hamiltonian cycle. As our main technical contribution, we give a fast algorithm for MINHAMPAIR based on a new sparse cut-based factorization of the `matchings connectivity matrix', introduced by Cygan et al. [JACM'18]. (A sketch of the classical Bellman-Held-Karp dynamic program mentioned above follows this entry.) |
|
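The O(n^2 * 2^n) baseline referred to above is the Bellman-Held-Karp dynamic program over subsets; a minimal Python version is sketched below. It assumes a complete distance matrix and computes the optimal tour length; it is unrelated to the new matching-based machinery of the talk.

```python
from itertools import combinations

def held_karp(dist):
    """Bellman-Held-Karp dynamic program for TSP in O(n^2 * 2^n) time.

    dist is an n x n matrix of pairwise distances (n >= 2); returns the
    length of the shortest Hamiltonian cycle, fixing city 0 as the start."""
    n = len(dist)
    # dp[(S, j)] = shortest path from 0 that visits exactly the vertex set S
    # (encoded as a bitmask over {1, ..., n-1}, containing j) and ends at j.
    dp = {(1 << j, j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = 0
            for j in subset:
                S |= 1 << j
            for j in subset:
                prev = S & ~(1 << j)
                dp[(S, j)] = min(dp[(prev, i)] + dist[i][j]
                                 for i in subset if i != j)
    full = (1 << n) - 2  # all vertices except 0
    return min(dp[(full, j)] + dist[j][0] for j in range(1, n))

# Example: a 4-city instance.
dist = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 8], [10, 4, 8, 0]]
assert held_karp(dist) == 2 + 6 + 8 + 10  # tour 0-1-2-3-0
```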
29th April 2020 14:15 (Helsinki time) |
Sorrachai Yingchareonthawornchai: Computing and Testing Small Connectivity in Near-Linear Time and Queries via Fast Local Cut Algorithms |
Consider the following "local" cut-detection problem in a directed graph: We are given a seed vertex x and need to remove at most k edges so that at most ν edges can be reached from x (a "local" cut) or output ⊥ to indicate that no such cut exists. If we are given query access to the input graph, then this problem can in principle be solved without reading the whole graph and with query complexity depending on k and ν. In this talk, I will present a simple randomized algorithm spending O(νk^2) time and O(kν) queries for a slight variant of the above problem, improving in particular a previous time bound of O(k^O(k) ν) by Chechik et al. [SODA '17]. I will then present two key applications of the local cut algorithm. The first application is a fast randomized algorithm for the classic k-vertex connectivity problem that takes near-linear time when k = O(polylog(n)). The second is property testing algorithms for k-edge and k-vertex connectivity with query complexities that are near-linear in k, exponentially improving the state-of-the-art. This resolves two open problems, one by Goldreich and Ron [STOC '97] and one by Orenstein and Ron [Theor. Comput. Sci. '11]. | Paper |
22nd April 2020 15:15 (Helsinki time) |
Václav Rozhoň: Polylogarithmic-Time Deterministic Network Decomposition and Distributed Derandomization |
We present a simple polylogarithmic-time deterministic distributed algorithm for network decomposition. This improves on a celebrated 2^O(sqrt(log n))-time algorithm of Panconesi and Srinivasan [STOC'92] and settles a long-standing question in distributed graph algorithms. It also leads to the first polylogarithmic-time deterministic distributed algorithms for numerous other problems, hence resolving several well-known open problems, including Linial's question about the deterministic complexity of maximal independent set [FOCS'87; SICOMP'92]. By known connections, our result leads also to substantially faster randomized distributed algorithms for a number of well-studied problems including (Delta+1)-coloring, maximal independent set, and Lovász Local Lemma, as well as massively parallel algorithms for (Delta+1)-coloring. | Paper |
15th April 2020 14:15 (Helsinki time) |
Przemek Uznański: Hardness of Exact Distance Queries in Sparse Graphs Through Hub Labeling |
A distance labeling scheme is an assignment of bit-labels to the vertices of an undirected, unweighted graph such that the distance between any pair of vertices can be decoded solely from their labels. An important class of distance labeling schemes is that of hub labelings, where a node v∈G stores its distance to the so-called hubs S(v)⊆V, chosen so that for any u,v∈V there is w∈S(u)∩S(v) belonging to some shortest uv path. Notice that for most graph classes studied, the best known distance labeling constructions use a hub labeling scheme, at least as a key building block. Our interest lies in hub labelings of sparse graphs, i.e., those with |E(G)|=O(n), for which we show a lower bound of n/2^{O(√log n)} for the average size of the hubsets. Additionally, we show a hub-labeling construction for sparse graphs of average size O(n/RS(n)^c) for some 0 < c < 1, where RS(n) is the so-called Ruzsa-Szemerédi function, linked to the structure of induced matchings in dense graphs. This implies that further improving the lower bound on hub labeling size to n/2^{(log n)^{o(1)}} would require a breakthrough in the study of lower bounds on RS(n), which have resisted substantial improvement in the last 70 years. For general distance labeling of sparse graphs, we show a lower bound of SumIndex(n)/2^{O(√log n)}, where SumIndex(n) is the communication complexity of the Sum-Index problem over Z_n. Our results suggest that the best achievable hub-label size and distance-label size in sparse graphs may be Θ(n/2^{(log n)^c}) for some 0 < c < 1. | Paper |
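The decoding step implicit in the hub-labeling definition above is a one-liner: the u-v distance equals the minimum of d(u,w) + d(w,v) over common hubs w, since the cover property guarantees that some common hub lies on a shortest u-v path. A minimal Python sketch, assuming each label is stored as a dict from hub to precomputed distance:

```python
def hub_distance(label_u, label_v):
    """Decode the exact u-v distance from hub labels.

    label_u / label_v map each hub w in S(u) / S(v) to the precomputed
    distance from u / v to w. The cover property ensures the minimum over
    common hubs equals the true distance."""
    common = set(label_u) & set(label_v)
    return min(label_u[w] + label_v[w] for w in common) if common else float("inf")

# Example: if w1 is a common hub with d(u,w1)=2 and d(v,w1)=3, the decoded distance is 5.
assert hub_distance({"w1": 2, "w2": 5}, {"w1": 3}) == 5
```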
8th April 2020 14:15 (Helsinki time) |
Alex Jung: On the Duality between Network Flows and Network Lasso |
Many applications generate data with an intrinsic network structure such as time series data, image data or social network data. The network Lasso (nLasso) has been proposed recently as a method for joint clustering and optimization of machine learning models for networked data. The nLasso extends the Lasso from sparse linear models to clustered graph signals. This paper explores the duality of nLasso and network flow optimization. We show that, in a very precise sense, nLasso is equivalent to a minimum-cost flow problem on the data network structure. Our main technical result is a concise characterization of nLasso solutions via existence of certain network flows. The main conceptual result is a useful link between nLasso methods and basic graph algorithms such as clustering or maximum flow. | Paper |
1st April 2020 14:15 (Helsinki time) |
Petteri Kaski: Probabilistic Tensors and Opportunistic Boolean Matrix Multiplication |
We introduce probabilistic extensions of classical deterministic measures of algebraic complexity of a tensor, such as the rank and the border rank. We show that these probabilistic extensions satisfy various natural and algorithmically serendipitous properties, such as submultiplicativity under taking of Kronecker products. Furthermore, the probabilistic extensions enable strictly lower rank over their deterministic counterparts for specific tensors of interest, starting from the tensor <2,2,2> that represents 2-by-2 matrix multiplication. By submultiplicativity, this leads immediately to novel randomized algorithm designs, such as algorithms for Boolean matrix multiplication as well as detecting and estimating the number of triangles and other subgraphs in graphs. | Paper |
25th March 2020 14:15 (Helsinki time) |
Jukka Suomela: Foundations of Distributed Computing in the 2020s |
In this talk I will review some major advances in the theory of distributed computing in the past decade and discuss key research challenges for the 2020s, with the main focus on distributed graph algorithms. I will present promising new approaches for doing research in the field, and I will also highlight examples of seemingly elementary questions that are still beyond the reach of our current techniques. | Slides |