
Multi-level Weighted Additive Spanners

Published 11 Feb 2021 in cs.DM (arXiv:2102.05831v3)

Abstract: Given a graph $G = (V,E)$, a subgraph $H$ is an \emph{additive $+\beta$ spanner} if $\mathrm{dist}_H(u,v) \le \mathrm{dist}_G(u,v) + \beta$ for all $u, v \in V$. A \emph{pairwise spanner} is a spanner for which the above inequality need only hold for specific pairs $P \subseteq V \times V$ given on input; when the pairs have the structure $P = S \times S$ for some subset $S \subseteq V$, it is called a \emph{subsetwise spanner}. Spanners in unweighted graphs have been studied extensively in the literature, but have only recently been generalized to weighted graphs. In this paper, we consider a multi-level version of the subsetwise spanner in weighted graphs, where the vertices in $S$ possess varying levels of priority or quality-of-service (QoS) requirements, and the goal is to compute a nested sequence of spanners with the minimum total number of edges. We first generalize the $+2$ subsetwise spanner of [Pettie 2008, Cygan et al., 2013] to the weighted setting. We experimentally measure the performance of this and several other algorithms for weighted additive spanners, both in terms of runtime and sparsity of the output spanner, when applied at each level of the multi-level problem. Spanner sparsity is compared to the sparsest possible spanner satisfying the given error budget, obtained using an integer programming formulation of the problem. We run our experiments on input graphs generated by several different random graph models: Erd\H{o}s--R\'{e}nyi, Watts--Strogatz, Barab\'{a}si--Albert, and random geometric models. By analyzing our experimental results, we develop a new technique of changing an initialization parameter value that provides better performance in practice.
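The defining inequality $\mathrm{dist}_H(u,v) \le \mathrm{dist}_G(u,v) + \beta$ is easy to check directly. The following is a minimal sketch (not from the paper; the function name and use of networkx are illustrative assumptions) of a verifier for the $+\beta$ additive spanner property, with an optional subset $S$ to cover the subsetwise case $P = S \times S$ described above:

```python
import networkx as nx

def is_additive_spanner(G, H, beta, S=None):
    """Check whether H is a +beta additive spanner of G.

    If S is given, the inequality dist_H(u,v) <= dist_G(u,v) + beta
    is required only for pairs in S x S (the subsetwise setting);
    otherwise it must hold for all vertex pairs.
    """
    nodes = list(S) if S is not None else list(G.nodes)
    # All-pairs shortest path lengths; edges without a "weight"
    # attribute are treated as weight 1 by networkx.
    dG = dict(nx.all_pairs_dijkstra_path_length(G, weight="weight"))
    dH = dict(nx.all_pairs_dijkstra_path_length(H, weight="weight"))
    for u in nodes:
        for v in nodes:
            if u == v:
                continue
            # A pair disconnected in H has infinite distance there.
            if v not in dH.get(u, {}):
                return False
            if dH[u][v] > dG[u][v] + beta:
                return False
    return True

# Toy example: deleting one edge of an unweighted 4-cycle leaves a
# path, which is a +2 (but not +1) additive spanner of the cycle.
G = nx.cycle_graph(4)
H = G.copy()
H.remove_edge(3, 0)
```

Here `is_additive_spanner(G, H, 2)` returns `True` while `is_additive_spanner(G, H, 1)` returns `False`, since the endpoints of the removed edge are at distance 1 in $G$ but distance 3 in $H$.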

