Distributed Computing for Huge-Scale Linear Programming
Abstract: This study develops an algorithm for the distributed solution of huge-scale linear programming problems. It combines global consensus with a single common variable, a multiblock structure, and an augmented Lagrangian. The consensus is used to partition the equality and inequality constraints into multiple consensus blocks, and the subblocks of each consensus block partition the primal variables into $M$ disjoint subvectors. The primal variables are updated by the block-coordinate Gauss-Seidel method, the proximal point method, and ADMM, while descent models update the dual variables. Convergence of the algorithm to an optimal solution is established, and an $O(1/k)$ rate of convergence of the augmented Lagrangian sequence is obtained, under the assumption that the dual sequences are bounded. This boundedness must be ensured through an adequate choice of the control parameter values and of the initialization of the primal and dual sequences; to help resolve this issue, imposing explicit bounds on the dual variables associated with the global consensus equality constraints remains to be explored.
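The consensus structure described above can be illustrated with a minimal sketch: global-consensus ADMM (scaled form) on a tiny LP, with the constraints split into two consensus blocks that must agree with a single common variable $z$. This is an illustrative toy, not the paper's full algorithm (it omits the multiblock Gauss-Seidel and proximal updates); the problem data and block assignment are assumptions.

```python
import numpy as np

# Toy LP:  minimize c^T x  s.t.  a^T x = b,  0 <= x <= 1.
# Two consensus blocks (an illustrative split, not the paper's):
#   block 1 carries the objective plus the box (inequality) constraints,
#   block 2 carries the equality constraint;
# both must agree with the single common consensus variable z.
c = np.array([1.0, 2.0])      # objective coefficients
a = np.array([1.0, 1.0])      # equality constraint a^T x = b
b = 1.0
rho = 1.0                     # augmented-Lagrangian penalty

z = np.zeros(2)                       # single common (consensus) variable
x1 = np.zeros(2); u1 = np.zeros(2)    # block 1 primal / scaled dual
x2 = np.zeros(2); u2 = np.zeros(2)    # block 2 primal / scaled dual

for _ in range(2000):
    # Block 1: argmin c^T x + (rho/2)||x - z + u1||^2 over [0,1]^2,
    # solved in closed form as a clipped gradient step.
    x1 = np.clip(z - u1 - c / rho, 0.0, 1.0)
    # Block 2: Euclidean projection of (z - u2) onto {x : a^T x = b}.
    v = z - u2
    x2 = v - (a @ v - b) / (a @ a) * a
    # Consensus update: average of the blocks' shifted iterates.
    z = 0.5 * ((x1 + u1) + (x2 + u2))
    # Scaled dual updates (gradient ascent on the multipliers).
    u1 += x1 - z
    u2 += x2 - z

print(z)        # approaches the LP solution x* = [1, 0]
print(c @ z)    # approaches the optimal value 1
```

Note how each block only ever touches its own constraint set, so the subproblems stay cheap and could run on separate workers, with the consensus and dual updates playing the coordinating role the abstract assigns to the common variable and the descent models.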