PH.D. DEFENCE - PUBLIC SEMINAR

Some New Results for Distributed Algorithms in Dynamic Networks

Speaker
Mr Irvan Jahja
Advisor
Dr Yu Haifeng, Dean's Chair Associate Professor, School of Computing


19 Jun 2020 Friday, 01:00 PM to 02:30 PM

Zoom presentation

Join Zoom Meeting
https://nus-sg.zoom.us/j/97648707226?pwd=WFUybkdiOUdhbDdHNS9tVU8wTngyZz09

Meeting ID: 976 4870 7226
Password: 285061

Abstract:

Algorithms have always been one of the most important and extensively studied topics in computer science. This includes distributed algorithms, which have traditionally been studied in static networks (whose topology does not change). The increasing popularity of mobile networks, however, has spurred interest in studying distributed algorithms in dynamic networks, whose topologies can change from time to time. Our understanding of distributed algorithms in dynamic networks is still evolving. This thesis aims to further expand this understanding.

For our first contribution, we consider a number of fundamental and related distributed computing problems in dynamic networks with oblivious adversaries. We prove several Ω(d + poly(n)) lower bounds on the time complexities of these problems, where d is the dynamic diameter and n is the number of nodes in the network. The best previously known lower bounds for these problems under oblivious adversaries were the trivial Ω(d) lower bounds. We are the first to give non-trivial lower bounds for these problems, and the first to give lower bounds containing a poly(n) term. Our proofs rely on a novel reduction from a two-party communication complexity problem. What makes our proof unique is that the communication complexity problem involves a special entity we call the leaker, who participates in the problem alongside Alice and Bob. The leaker helps Alice and Bob solve the two-party problem by disclosing to them some "non-critical" part of the problem instance they are solving.
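The following toy sketch (in Python, with hypothetical names; it is not the thesis's actual reduction) illustrates the general idea of a leaker in a two-party communication problem. Here Alice and Bob decide set-disjointness, and the leaker, who sees both inputs, reveals some indices that cannot possibly witness an intersection, so the leaked information is "non-critical" to the answer.

# Toy illustration only: a "leaker" in a two-party communication problem.
# Alice and Bob hold private bit vectors and want to decide whether their
# sets intersect. The leaker sees both inputs and discloses indices where at
# least one bit is 0 -- such indices can never certify an intersection, so
# revealing them is "non-critical".

import random

def leaker(alice_bits, bob_bits, fraction=0.5):
    # Indices where at least one party holds a 0 are non-critical.
    non_critical = [i for i in range(len(alice_bits))
                    if alice_bits[i] == 0 or bob_bits[i] == 0]
    k = int(len(non_critical) * fraction)
    return set(random.sample(non_critical, k))

def disjointness_with_leak(alice_bits, bob_bits, leaked):
    # Alice only needs to send her bits outside the leaked set; Bob decides.
    remaining = [i for i in range(len(alice_bits)) if i not in leaked]
    message = {i: alice_bits[i] for i in remaining}   # Alice -> Bob
    intersects = any(message[i] == 1 and bob_bits[i] == 1 for i in remaining)
    return intersects, len(message)

if __name__ == "__main__":
    n = 20
    alice = [random.randint(0, 1) for _ in range(n)]
    bob = [random.randint(0, 1) for _ in range(n)]
    leaked = leaker(alice, bob)
    ans, cost = disjointness_with_leak(alice, bob, leaked)
    truth = any(a == 1 and b == 1 for a, b in zip(alice, bob))
    print(f"intersects={ans} (truth={truth}), bits sent={cost}, leaked={len(leaked)}")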

For our second contribution, we consider the power of randomization in general distributed algorithms in dynamic networks with adaptive adversaries. Randomization can be used by distributed algorithms to deal with i) "bad" inputs to the algorithm, and ii) "bad" changing topologies as chosen by the adaptive adversary. Since an adaptive adversary can observe the algorithm's coin flip outcomes so far before choosing the next topology, it is unclear whether randomness helps the algorithm cope better with such "bad" topology changes. To answer this question, we prove that randomness in fact offers only limited power against adaptive adversaries. We define the simple notion of a prophetic adversary, which can accurately predict all of the algorithm's coin flip outcomes (including future ones) before determining the topologies. Randomness is therefore useless against "bad" prophetic adversaries. We prove that, subject to some mild conditions, it is always possible to convert a randomized algorithm P into a new algorithm Q with comparable time complexity, even when Q runs against prophetic adversaries. Hence, the benefit of randomness in P for dealing with "bad" adaptive adversaries is limited, which answers the above question. We further show that if P does not use randomness to deal with "bad" inputs, then P can be derandomized efficiently.
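The minimal sketch below (illustrative only; the interfaces and the example adversary are hypothetical, not from the thesis) makes the distinction concrete: an adaptive adversary chooses each round's topology after seeing only the coin flips revealed so far, while a prophetic adversary receives the algorithm's entire coin-flip sequence, including future flips, before committing to any topology.

# Contrast between an adaptive adversary and a prophetic adversary.
# Hypothetical interfaces for illustration.

import random
from typing import List

def run_rounds(num_rounds: int, n: int, adversary, prophetic: bool):
    coins = [random.random() for _ in range(num_rounds)]  # algorithm's randomness
    topologies = []
    for r in range(num_rounds):
        if prophetic:
            # Prophetic: sees ALL coin flips, including future ones.
            topo = adversary(n, r, coins)
        else:
            # Adaptive: sees only the coin flips revealed up to this round.
            topo = adversary(n, r, coins[: r + 1])
        topologies.append(topo)
    return coins, topologies

def example_adversary(n: int, round_idx: int, visible_coins: List[float]):
    # Hypothetical adversary: a ring on the n nodes, rotated by a value
    # derived from whatever coin flips it is allowed to observe.
    shift = int(sum(visible_coins) * 1000) % n
    return [((i + shift) % n, (i + shift + 1) % n) for i in range(n)]

if __name__ == "__main__":
    coins, topos = run_rounds(num_rounds=3, n=5,
                              adversary=example_adversary, prophetic=True)
    print("coins:", [round(c, 2) for c in coins])
    print("round-0 topology:", topos[0])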

The contributions above are all for robust dynamic networks, whose topologies are connected and undirected in every round. For our last contribution, we consider shaky dynamic networks, whose topologies may be disconnected and/or directed in some rounds. We explore several approaches for designing algorithms for shaky dynamic networks, given algorithms that work on robust dynamic networks.
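The short helper below (a hypothetical sketch, not taken from the thesis) makes the robust-versus-shaky distinction concrete: a round is treated as "robust" if its topology is undirected (every edge appears with its reverse) and connected, and "shaky" otherwise.

# Hypothetical sketch: classify a single round's topology as robust or shaky.

from collections import defaultdict, deque

def is_robust_round(n, directed_edges):
    # Robust here means: undirected (symmetric edge set) and connected.
    edge_set = set(directed_edges)
    if any((v, u) not in edge_set for (u, v) in edge_set):
        return False  # some edge lacks its reverse, so the round is directed
    # Connectivity check: BFS from node 0 must reach all n nodes.
    adj = defaultdict(list)
    for u, v in edge_set:
        adj[u].append(v)
    seen, queue = {0}, deque([0])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return len(seen) == n

if __name__ == "__main__":
    robust = [(0, 1), (1, 0), (1, 2), (2, 1)]
    shaky = [(0, 1), (1, 2)]  # one-way edges, and not symmetric
    print(is_robust_round(3, robust))  # True
    print(is_robust_round(3, shaky))   # False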