
Distributed Linearized Alternating Direction Method of Multipliers for Composite Convex Consensus Optimization

Published 26 Dec 2015 in math.OC (arXiv:1512.08122v4)

Abstract: Given an undirected graph $\mathcal{G}=(\mathcal{N},\mathcal{E})$ of agents $\mathcal{N}=\{1,\ldots,N\}$ connected by edges in $\mathcal{E}$, we study how to compute an optimal decision on which there is consensus among agents and that minimizes the sum of agent-specific private convex composite functions $\{\Phi_i\}_{i\in\mathcal{N}}$ while respecting privacy requirements, where $\Phi_i\triangleq \xi_i + f_i$ belongs to agent-$i$. Assuming only agents connected by an edge can communicate, we propose a distributed proximal gradient method DPGA for consensus optimization over both unweighted and weighted static (undirected) communication networks. In each iteration, agent-$i$ computes the prox map of $\xi_i$ and the gradient of $f_i$, followed by local communication with neighboring agents. We also study its stochastic gradient variant, SDPGA, which has access only to noisy estimates of $\nabla f_i$ at each agent-$i$. This computational model abstracts a number of applications in distributed sensing, machine learning and statistical inference. We show ergodic convergence in both sub-optimality error and consensus violation for DPGA and SDPGA with rates $\mathcal{O}(1/t)$ and $\mathcal{O}(1/\sqrt{t})$, respectively.
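
To make the computational model concrete, the sketch below shows a generic distributed proximal-gradient consensus iteration of the kind the abstract describes: each agent takes a local gradient step on its smooth term $f_i$, mixes its iterate with neighbors through a mixing matrix, and applies the prox map of its nonsmooth term $\xi_i$. This is a hedged illustration, not the paper's exact DPGA update; the doubly stochastic mixing matrix `W`, the step size, and the $\ell_1$-regularized least-squares instance are assumptions chosen for demonstration.

```python
import numpy as np

def soft_threshold(v, tau):
    """Prox map of tau * ||.||_1 (example choice for xi_i)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def distributed_prox_grad(A_list, b_list, W, lam=0.1, step=0.01, iters=500):
    """Each agent i privately holds (A_i, b_i) defining
    f_i(x) = 0.5 * ||A_i x - b_i||^2 and xi_i(x) = lam * ||x||_1.
    W is a doubly stochastic mixing matrix consistent with the graph:
    W[i, j] > 0 only if agents i and j share an edge (or i == j)."""
    N = len(A_list)
    n = A_list[0].shape[1]
    X = np.zeros((N, n))                        # row i = agent i's local copy of the decision
    for _ in range(iters):
        grads = np.stack([A.T @ (A @ x - b)     # local gradient of f_i, computed by agent i
                          for A, b, x in zip(A_list, b_list, X)])
        X_mix = W @ X                           # one round of communication with neighbors
        X = np.stack([soft_threshold(x - step * g, step * lam)
                      for x, g in zip(X_mix, grads)])   # local prox step on xi_i
    return X                                    # rows converge toward a common consensus point
```

For example, with `N = 4` agents on a cycle graph one could take `W` as the Metropolis weights of that cycle and pass per-agent data matrices `A_list` and vectors `b_list`; the rows of the returned `X` then approach a common minimizer of the sum of the composite objectives under suitable step-size choices.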
