Page 1 of results: 2366 digital items found in 0.019 seconds

Song, Changkyu; Yoon, Sejong; Pavlovic, Vladimir
Type: Scientific Journal Article
Search Relevance: 56.2%
We propose new methods to speed up convergence of the Alternating Direction Method of Multipliers (ADMM), a common optimization tool in the context of large-scale and distributed learning. The proposed method accelerates convergence by automatically selecting, at each iteration, the constraint penalty needed for parameter consensus. In addition, we propose an extension that adaptively determines the maximum number of iterations over which the penalty is updated. We show that this approach effectively leads to an adaptive, dynamic network topology underlying the distributed optimization. The utility of the new penalty update schemes is demonstrated on both synthetic and real data, including a computer vision application of distributed structure from motion.; Comment: 8-page manuscript, 2-page appendix, 5 figures
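
The abstract does not spell out the update rule, but a standard adaptive-penalty heuristic of this kind is residual balancing (Boyd et al., 2011), which adjusts the penalty whenever the primal and dual residuals drift apart. A minimal sketch, assuming that scheme rather than the paper's exact rule:

```python
def update_penalty(rho, r_norm, s_norm, mu=10.0, tau=2.0):
    """Residual-balancing ADMM penalty update (illustrative, not the
    paper's exact scheme). r_norm and s_norm are the current primal
    and dual residual norms."""
    if r_norm > mu * s_norm:
        return rho * tau   # primal residual dominates: enforce consensus harder
    if s_norm > mu * r_norm:
        return rho / tau   # dual residual dominates: relax the penalty
    return rho             # residuals balanced: keep the penalty
```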

## Distributed Optimization: Convergence Conditions from a Dynamical System Perspective

Shi, Guodong; Proutiere, Alexandre; Johansson, Karl Henrik
Type: Scientific Journal Article
Search Relevance: 56.2%
This paper explores the fundamental properties of distributed minimization of a sum of functions, where each function is known to only one node, under a pre-specified level of node knowledge and computational capacity. We define the optimization information each node receives from its objective function, the neighboring information each node receives from its neighbors, and the computational capacity each node can exploit in controlling its state. It is proven that, for fixed strongly connected graphs, there exist a neighboring-information rule and a control law that guarantee global optimal consensus if and only if the solution sets of the local objective functions admit a nonempty intersection. Then we show that for any tolerated error, we can find a control law that guarantees global optimal consensus within this error for fixed, bidirectional, connected graphs under mild conditions. For time-varying graphs, we show that optimal consensus can always be achieved as long as the graph is uniformly jointly strongly connected and the nonempty-intersection condition holds. The results illustrate that a nonempty intersection of the local optimal solution sets is a critical condition for successful distributed optimization for a large class of algorithms.
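
In symbols (notation ours), the recurring condition is that the local minimizer sets share at least one common point:

```latex
\min_{x \in \mathbb{R}^d} \sum_{i=1}^{N} f_i(x),
\qquad
X_i^\ast = \operatorname*{arg\,min}_{x} f_i(x),
\qquad
\bigcap_{i=1}^{N} X_i^\ast \neq \emptyset ,
```

under which every node can be steered to a point that simultaneously minimizes its own objective and hence the sum.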

## A Polyhedral Approximation Framework for Convex and Robust Distributed Optimization

Bürger, Mathias; Notarstefano, Giuseppe; Allgöwer, Frank
Type: Scientific Journal Article
Search Relevance: 56.33%
In this paper we consider a general problem set-up for a wide class of convex and robust distributed optimization problems in peer-to-peer networks. In this set-up, convex constraint sets are distributed to the network processors, which must compute the optimizer of a linear cost function subject to the constraints. We propose a novel fully distributed algorithm, named cutting-plane consensus, to solve the problem, based on an outer polyhedral approximation of the constraint sets. Processors running the algorithm compute and exchange linear approximations of their locally feasible sets. Independently of the number of processors in the network, each processor stores only a small number of linear constraints, making the algorithm scalable to large networks. The cutting-plane consensus algorithm is presented and analyzed for the general framework. Specifically, we prove that all processors running the algorithm agree on an optimizer of the global problem, and that the algorithm is tolerant to node and link failures as long as network connectivity is preserved. Then, the cutting-plane consensus algorithm is specialized to three different classes of distributed optimization problems, namely (i) inequality-constrained problems, (ii) robust optimization problems...
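
A schematic single-processor iteration of a cutting-plane scheme of this kind might look as follows; the names, interfaces, and LP solver call are ours, and the paper's algorithm additionally keeps only a small basis of active cuts rather than the full list:

```python
import numpy as np
from scipy.optimize import linprog

def cutting_plane_step(c, cuts, neighbor_cuts, local_oracle):
    """One illustrative iteration: merge cuts received from neighbors,
    optimize over the outer polyhedral approximation, then query the
    local constraint set for a refining cut.
    cuts: list of (a, b) pairs encoding halfspaces a @ x <= b.
    local_oracle(x): None if x is locally feasible, else a separating
    cut (a, b) with a @ x > b."""
    cuts = cuts + neighbor_cuts                       # exchange step
    A = np.vstack([a for a, _ in cuts])
    b = np.array([bi for _, bi in cuts])
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(None, None)] * len(c))
    x = res.x                                         # candidate optimizer
    new_cut = local_oracle(x)
    if new_cut is not None:
        cuts.append(new_cut)                          # refine the approximation
    return x, cuts
```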

## Newton-like method with diagonal correction for distributed optimization

Bajovic, Dragana; Jakovetic, Dusan; Krejic, Natasa; Jerinkic, Natasa Krklec
Type: Scientific Journal Article
Search Relevance: 56.22%
We consider distributed optimization problems where networked nodes cooperatively minimize the sum of their locally known convex costs. A popular class of methods to solve these problems are the distributed gradient methods, which are attractive due to their inexpensive iterations but suffer from slow convergence rates. This motivates the incorporation of second-order information in distributed methods, but the task is challenging: although the Hessians that arise in the algorithm design respect the sparsity of the network, their inverses are dense, rendering distributed implementations difficult. We overcome this challenge and propose a class of distributed Newton-like methods, which we refer to as Distributed Quasi Newton (DQN). The DQN family approximates the Hessian inverse by: 1) splitting the Hessian into its diagonal and off-diagonal parts, 2) inverting the diagonal part, and 3) approximating the inverse of the off-diagonal part through a weighted linear function. The approximation is parameterized by tuning variables that correspond to different splittings of the Hessian and different weightings of the off-diagonal Hessian part. Specific choices of the tuning variables give rise to different variants of the proposed general DQN method -- dubbed DQN-0...
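
Steps 1)-3) admit a simple generic instance: a first-order (Neumann-type) expansion around the diagonal. The sketch below uses that expansion for illustration; the paper's DQN weighting is a more general parameterized linear function, not this particular choice:

```python
import numpy as np

def splitting_inverse_approx(H, theta=1.0):
    """Approximate H^{-1} via the diagonal splitting H = D + O:
        H^{-1} ~= D^{-1} - theta * D^{-1} @ O @ D^{-1}.
    Only the diagonal is inverted; the off-diagonal part enters
    through a weighted linear term (illustrative weighting)."""
    d = np.diag(H)
    D_inv = np.diag(1.0 / d)                   # step 2: invert the diagonal part
    O = H - np.diag(d)                         # step 1: off-diagonal part
    return D_inv - theta * D_inv @ O @ D_inv   # step 3: weighted linear term
```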

## Distributed optimization over time-varying directed graphs

Nedic, Angelia; Olshevsky, Alex
Type: Scientific Journal Article
Language: Portuguese
Search Relevance: 56.11%
We consider distributed optimization by a collection of nodes, each having access to its own convex function, whose collective goal is to minimize the sum of the functions. The communications between nodes are described by a time-varying sequence of directed graphs which is uniformly strongly connected. For such communications, assuming that every node knows its out-degree, we develop a broadcast-based algorithm, termed subgradient-push, which steers every node to an optimal value under a standard assumption of subgradient boundedness. The subgradient-push can be implemented with no knowledge of either the number of agents or the graph sequence. Our analysis shows that the subgradient-push algorithm converges at a rate of $O(\ln(t)/\sqrt{t})$, where the constant depends on the initial values at the nodes, the subgradient norms, and, more interestingly, on both the consensus speed and the imbalances of influence among the nodes.
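
A minimal synchronous sketch of one push-sum subgradient round, following the broadcast structure described above; the step-size choice and interfaces are ours and need not match the paper:

```python
import numpy as np

def subgradient_push_round(x, y, out_neighbors, subgrad, t, alpha0=1.0):
    """One synchronous round of a push-sum subgradient method.
    x[i]: node i's numerator vector; y[i]: its scalar push-sum weight
    (initialize y[i] = 1). out_neighbors[i] must contain i itself, so
    each node keeps a share of its own mass; only the local out-degree
    is needed. subgrad(i, z): a subgradient of f_i at z."""
    n = len(x)
    w = [np.zeros_like(x[0]) for _ in range(n)]
    y_new = [0.0] * n
    for i in range(n):
        d = len(out_neighbors[i])           # out-degree, known locally
        for j in out_neighbors[i]:          # broadcast equal shares
            w[j] = w[j] + x[i] / d
            y_new[j] += y[i] / d
    for i in range(n):
        z = w[i] / y_new[i]                 # de-biased local estimate
        alpha = alpha0 / np.sqrt(t + 1)     # diminishing step size
        w[i] = w[i] - alpha * subgrad(i, z)
    return w, y_new
```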

## Quantization Design for Distributed Optimization

Pu, Ye; Zeilinger, Melanie N.; Jones, Colin N.
Type: Scientific Journal Article
Search Relevance: 56.26%
We consider the problem of solving a distributed optimization problem on a distributed computing platform where communication in the network is limited: each node can only communicate with its neighbours, and the channel has a limited data-rate. A common technique to address the latter limitation is to apply quantization to the exchanged information. We propose two distributed optimization algorithms with an iteratively refining quantization design, based on the inexact proximal gradient method and its accelerated variant. We show that if the parameters of the quantizers, i.e., the number of bits and the initial quantization intervals, satisfy certain conditions, then the quantization error is bounded by a linearly decreasing function and convergence of the distributed algorithms is guaranteed. Furthermore, we prove that, after imposing the quantization scheme, the distributed algorithms still exhibit a linear convergence rate, and we derive upper bounds on the number of iterations needed to achieve a given accuracy. Finally, we demonstrate the performance of the proposed algorithms and the theoretical findings for solving a distributed optimal control problem.
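
One way to realize an iteratively refining design is a uniform quantizer whose interval shrinks geometrically between iterations, so the quantization error decays linearly as long as the iterates contract at least as fast. A sketch under those assumptions (parameters ours, not the paper's exact design):

```python
import numpy as np

def refine_and_quantize(v, center, half_width, n_bits, shrink=0.9):
    """Uniformly quantize v onto 2**n_bits levels inside
    [center - half_width, center + half_width], then shrink the
    interval for the next iteration. With a geometric shrink factor,
    the quantization error decreases linearly over iterations."""
    step = 2.0 * half_width / (2 ** n_bits - 1)
    v_clip = np.clip(v, center - half_width, center + half_width)
    q = center + step * np.round((v_clip - center) / step)
    return q, q, shrink * half_width   # value sent, next center, next half-width
```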

## Asynchronous Distributed Optimization via Randomized Dual Proximal Gradient

Notarnicola, Ivano; Notarstefano, Giuseppe
Type: Scientific Journal Article
Search Relevance: 56.2%
In this paper we consider distributed optimization problems in which the cost function is separable, i.e., a sum of possibly non-smooth functions all sharing a common variable, and can be split into a strongly convex term and a convex one. The second term is typically used to encode constraints or to regularize the solution. We propose a class of distributed optimization algorithms based on proximal gradient methods applied to the dual problem. We show that, by means of a proper choice of primal variable copies, the dual problem is itself separable when written in terms of conjugate functions, and the dual variables can be stacked into non-overlapping blocks associated with the computing nodes. We first show that a weighted proximal gradient on the dual function leads to a synchronous distributed algorithm with proper local dual proximal gradient updates at each node. Then, as the paper's main contribution, we develop asynchronous versions of the algorithm in which node updates are triggered by local timers, without any global iteration counter. The algorithms are shown to be proper randomized block-coordinate proximal gradient updates on the dual function.; Comment: submitted to journal
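
The asynchronous update amounts to a randomized block-coordinate proximal gradient step on the dual: whenever a node's local timer fires, only that node's dual block moves. A minimal generic sketch of one such wake-up (interfaces ours, not the paper's exact updates):

```python
import numpy as np

def randomized_block_prox_grad(blocks, grad_block, prox_block, step, rng):
    """One asynchronous-style update: a single randomly awakened block
    (node) i performs a proximal gradient step on its own dual block
    while all other blocks stay unchanged.
    grad_block(i, blocks): partial gradient w.r.t. block i.
    prox_block(i, point, step): proximal operator of block i's term."""
    i = rng.integers(len(blocks))                       # timer fires at node i
    g = grad_block(i, blocks)
    blocks[i] = prox_block(i, blocks[i] - step * g, step)
    return blocks

# Usage: rng = np.random.default_rng(); repeat the call to simulate
# uncoordinated node activations without a global iteration counter.
```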

## Escaping Local Optima in a Class of Multi-Agent Distributed Optimization Problems: A Boosting Function Approach

Sun, Xinmiao; Cassandras, Christos G.; Gokbayrak, Kagan
Type: Scientific Journal Article
Language: Portuguese
Search Relevance: 56.09%
We address the problem of multiple local optima commonly arising in optimization problems for multi-agent systems, where objective functions are nonlinear and nonconvex. For the class of coverage control problems, we propose a systematic approach for escaping a local optimum, rather than randomly perturbing controllable variables away from it. We show that the objective function for these problems can be decomposed to facilitate the evaluation of the local partial derivative of each node in the system and to provide insights into its structure. This structure is exploited by defining "boosting functions", applied to the aforementioned local partial derivative at an equilibrium point where its value is zero, that transform it so as to induce nodes to explore poorly covered areas of the mission space until a new equilibrium point is reached. The proposed boosting process ensures that, at its conclusion, the objective function is no worse than its pre-boosting value; convergence to a global optimum, however, cannot be guaranteed. We define three families of boosting functions with different properties and provide simulation results illustrating how this approach improves the solutions obtained for this class of distributed optimization problems.
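
As a hypothetical illustration of the idea (the paper defines three specific families of boosting functions; the form below is ours), a boosting function can reweight the terms of the decomposed local derivative so that poorly covered regions dominate, making the boosted derivative nonzero at the original equilibrium:

```python
import numpy as np

def boosted_derivative(terms, coverage, gamma=2.0):
    """Illustrative boosting: the local partial derivative decomposes
    into a sum of terms, each tied to a region with a coverage score
    in (0, 1]. Inflating the terms from poorly covered regions makes
    the boosted derivative nonzero at the unboosted equilibrium, so
    the node moves toward those regions. Hypothetical form."""
    weights = np.asarray(coverage) ** (-gamma)   # poor coverage => big weight
    return sum(w * t for w, t in zip(weights, terms))
```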

## Fault-Tolerant Distributed Optimization (Part IV): Constrained Optimization with Arbitrary Directed Networks

Su, Lili; Vaidya, Nitin H.
Type: Scientific Journal Article
Search Relevance: 56.27%
We study the problem of constrained distributed optimization in multi-agent networks when some of the computing agents may be faulty. In this problem, the system goal is to have all the non-faulty agents collectively minimize a global objective given by a weighted average of local cost functions, each of which is initially known to only one non-faulty agent. In particular, we are interested in the scenario where the computing agents are connected by an arbitrary directed communication network, some agents may suffer crash faults or Byzantine faults, and the estimate of each agent is restricted to lie in a common constraint set. This problem finds applications in social computing and distributed large-scale machine learning. The fault-tolerant multi-agent optimization problem was first formulated by Su and Vaidya, and was solved for local functions defined over the whole real line and fully connected networks. In this report, we consider arbitrary directed communication networks and focus on the scenario where local estimates at the non-faulty agents are constrained, and only local communication and minimal memory carried across iterations are allowed. In particular, we generalize our previous results on fully connected networks and unconstrained optimization to arbitrary directed networks and constrained optimization. As a byproduct...

## Asynchronous Distributed Optimization using a Randomized Alternating Direction Method of Multipliers

Iutzeler, Franck; Bianchi, Pascal; Ciblat, Philippe; Hachem, Walid
Type: Scientific Journal Article
Search Relevance: 56.11%
Consider a set of networked agents endowed with private cost functions and seeking a consensus on the minimizer of the aggregate cost. A new class of random asynchronous distributed optimization methods is introduced. The methods generalize the standard Alternating Direction Method of Multipliers (ADMM) to an asynchronous setting in which isolated components of the network are activated in an uncoordinated fashion. The algorithms rely on randomized Gauss-Seidel iterations of a Douglas-Rachford operator for finding zeros of a sum of two monotone operators. Convergence to the sought minimizers is established under mild connectivity conditions. Numerical results support our claims.; Comment: 6 pages

## Quantized Consensus ADMM for Multi-Agent Distributed Optimization

Zhu, Shengyu; Hong, Mingyi; Chen, Biao
Type: Scientific Journal Article
Search Relevance: 56.11%
Multi-agent distributed optimization over a network minimizes a global objective formed by a sum of local convex functions using only local computation and communication. We develop and analyze a quantized distributed algorithm based on the alternating direction method of multipliers (ADMM) when inter-agent communications are subject to finite capacity and other practical constraints. While existing quantized ADMM approaches only work for quadratic local objectives, the proposed algorithm can deal with more general objective functions (possibly non-smooth) including the LASSO. Under certain convexity assumptions, our algorithm converges to a consensus within $\log_{1+\eta}\Omega$ iterations, where $\eta>0$ depends on the local objectives and the network topology, and $\Omega$ is a polynomial determined by the quantization resolution, the distance between initial and optimal variable values, the local objective functions and the network topology. A tight upper bound on the consensus error is also obtained which does not depend on the size of the network.; Comment: 30 pages, 4 figures; to be submitted to IEEE Trans. Signal Processing. arXiv admin note: text overlap with arXiv:1307.5561 by other authors

## Differentially Private Distributed Optimization

Huang, Zhenqi; Mitra, Sayan; Vaidya, Nitin
Type: Scientific Journal Article
Search Relevance: 56.16%
In distributed optimization and iterative consensus literature, a standard problem is for $N$ agents to minimize a function $f$ over a subset of Euclidean space, where the cost function is expressed as a sum $\sum f_i$. In this paper, we study the private distributed optimization (PDOP) problem with the additional requirement that the cost functions of the individual agents remain differentially private. The adversary attempts to infer information about the private cost functions from the messages that the agents exchange. Achieving differential privacy requires that any change of an individual's cost function only results in unsubstantial changes in the statistics of the messages. We propose a class of iterative algorithms for solving PDOP which achieve differential privacy and convergence to the optimal value. Our analysis reveals the dependence of the achieved accuracy and privacy levels on the parameters of the algorithm. We observe that to achieve $\epsilon$-differential privacy, the accuracy of the algorithm is of order $O(1/\epsilon^2)$.
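
The abstract does not specify the mechanism, but the canonical way to make exchanged messages $\epsilon$-differentially private is Laplace noise calibrated to the message's sensitivity. A generic sketch under that assumption, not the paper's exact algorithm:

```python
import numpy as np

def private_message(value, epsilon, sensitivity, rng=None):
    """Laplace mechanism: add noise with scale sensitivity/epsilon so
    that changing one agent's cost function shifts the message
    distribution by at most a factor exp(epsilon)."""
    rng = rng or np.random.default_rng()
    noise = rng.laplace(0.0, sensitivity / epsilon, size=np.shape(value))
    return np.asarray(value) + noise
```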

## Communication-Efficient Algorithms For Distributed Optimization

Mota, João F. C.
Type: Scientific Journal Article
Search Relevance: 56.37%
This thesis is concerned with the design of distributed algorithms for solving optimization problems. We consider networks where each node has exclusive access to a cost function, and design algorithms that make all nodes cooperate to find the minimum of the sum of all the cost functions. Several problems in signal processing, control, and machine learning can be posed as such optimization problems. Given that communication is often the most energy-consuming operation in networks, it is important to design communication-efficient algorithms. The main contributions of this thesis are a classification scheme for distributed optimization and a set of corresponding communication-efficient algorithms. The class of optimization problems we consider is quite general, since each function may depend on arbitrary components of the optimization variable, and not necessarily on all of them. In doing so, we go beyond the common assumption in distributed optimization and create additional structure that can be used to reduce the number of communications. This structure is captured by our classification scheme, which identifies easier instances of the problem, for example the standard distributed optimization problem, where all functions depend on all the components of the variable. In our algorithms...

## Multi-Agent Distributed Optimization via Inexact Consensus ADMM

Chang, Tsung-Hui; Hong, Mingyi; Wang, Xiangfeng
Type: Scientific Journal Article
Language: Portuguese
Search Relevance: 56.23%
Multi-agent distributed consensus optimization problems arise in many signal processing applications. Recently, the alternating direction method of multipliers (ADMM) has been used for solving this family of problems. ADMM-based distributed optimization methods are shown to converge faster than classic methods based on consensus subgradients, but can be computationally expensive, especially for problems with complicated structures or large dimensions. In this paper, we propose low-complexity algorithms that can reduce the overall computational cost of consensus ADMM by an order of magnitude for certain large-scale problems. Central to the proposed algorithms is the use of an inexact step for each ADMM update, which enables the agents to perform cheap computation at each iteration. Our convergence analyses show that the proposed methods converge under certain convexity assumptions. Numerical results show that the proposed algorithms offer considerably lower computational complexity than standard ADMM-based distributed optimization methods.; Comment: submitted to IEEE Trans. Signal Processing; revised April 2014 and August 2014
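
The core trick is to replace the exact minimization in the ADMM primal update with a single cheap first-order step on the same augmented Lagrangian. A generic sketch of one such inexact update (names and the particular step form are ours):

```python
def inexact_x_update(x, grad_f, z, lam, rho, step):
    """Inexact ADMM primal update: instead of solving
        x+ = argmin_x f(x) + lam @ (x - z) + (rho / 2) * ||x - z||^2
    exactly, take one gradient step on that objective."""
    g = grad_f(x) + lam + rho * (x - z)   # gradient of the local augmented term
    return x - step * g
```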

## Partitioning Data on Features or Samples in Communication-Efficient Distributed Optimization?

Ma, Chenxin; Takáč, Martin
Type: Scientific Journal Article
Search Relevance: 56.16%
In this paper we study the effect of the way the data is partitioned in distributed optimization. The original DiSCO algorithm [Communication-Efficient Distributed Optimization of Self-Concordant Empirical Loss, Yuchen Zhang and Lin Xiao, 2015] partitions the input data by samples. We describe how the original algorithm has to be modified to allow partitioning by features, and show its efficiency both in theory and in practice.

## Online Distributed Optimization on Dynamic Networks

Hosseini, Saghar; Chapman, Airlie; Mesbahi, Mehran
Type: Scientific Journal Article
Search Relevance: 56.2%
This paper presents a distributed optimization scheme over a network of agents in the presence of cost uncertainties and switching communication topologies. Inspired by recent advances in distributed convex optimization, we propose a distributed algorithm based on dual subgradient averaging. The objective of this algorithm is to cooperatively minimize a cost function. Furthermore, the algorithm adapts the weights on the communication links in the network to the varying reliability of neighboring agents. A convergence rate analysis as a function of the underlying network topology is then presented, followed by simulation results for representative classes of sensor networks.; Comment: submitted to IEEE Transactions on Automatic Control, 2014

## Federated Optimization: Distributed Optimization Beyond the Datacenter

Konečný, Jakub; McMahan, Brendan; Ramage, Daniel
Type: Scientific Journal Article
Search Relevance: 56.24%
We introduce a new and increasingly relevant setting for distributed optimization in machine learning, where the data defining the optimization are distributed (unevenly) over an extremely large number of nodes, but the goal remains to train a high-quality centralized model. We refer to this setting as Federated Optimization. In this setting, communication efficiency is of utmost importance. A motivating example for federated optimization arises when we keep the training data locally on users' mobile devices rather than logging it to a data center for training. Instead, the mobile devices are used as nodes performing computation on their local data in order to update a global model. We suppose that we have an extremely large number of devices in our network, each of which has only a tiny fraction of the data; in particular, we expect the number of data points available locally to be much smaller than the number of devices. Additionally, since different users generate data with different patterns, we assume that no device has a representative sample of the overall distribution. We show that existing algorithms are not suitable for this setting, and propose a new algorithm which shows encouraging experimental results. This work also sets a path for future research needed in the context of federated optimization.; Comment: NIPS workshop version

## Distributed Optimization with Arbitrary Local Solvers

Ma, Chenxin; Konečný, Jakub; Jaggi, Martin; Smith, Virginia; Jordan, Michael I.; Richtárik, Peter; Takáč, Martin
Type: Scientific Journal Article
Search Relevance: 56.25%
With the growth of data and the necessity for distributed optimization methods, solvers that work well on a single machine must be re-designed to leverage distributed computation. Recent work in this area has been limited by focusing heavily on developing highly specific methods for the distributed environment. These special-purpose methods are often unable to fully leverage the competitive performance of their well-tuned and customized single-machine counterparts. Further, they are unable to easily integrate improvements that continue to be made to single-machine methods. To this end, we present a framework for distributed optimization that both allows the flexibility of arbitrary solvers to be used on each (single) machine locally, and yet maintains competitive performance against other state-of-the-art special-purpose distributed methods. We give strong primal-dual convergence rate guarantees for our framework that hold for arbitrary local solvers. We demonstrate the impact of local solver selection both theoretically and in an extensive experimental comparison. Finally, we provide thorough implementation details for our framework, highlighting areas for practical performance gains.

## Continuous-time Proportional-Integral Distributed Optimization for Networked Systems

Droge, Greg; Kawashima, Hiroaki; Egerstedt, Magnus
Type: Scientific Journal Article
Language: Portuguese
Search Relevance: 56.21%
In this paper we explore the relationship between dual decomposition and the consensus-based method for distributed optimization. The relationship is developed by examining the similarities between the two approaches and their relationship to gradient-based constrained optimization. Formulating each algorithm in continuous time shows that both approaches use a gradient method for optimization, one with a proportional control term and the other with an integral control term driving the system to the constraint set. A significant contribution of this paper is therefore to combine these methods into a continuous-time proportional-integral distributed optimization method. Furthermore, we establish convergence using Lyapunov stability techniques and properties of the network structure of the multi-agent system.; Comment: 23 pages, submitted to the Journal of Control and Decision, under review. Takes comments from the previous review process into account: reasons for a continuous-time approach are given and minor technical details are remedied. The largest revision is reformatting for the Journal of Control and Decision.
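
Schematically, with stacked agent states $x$, an auxiliary integral state $z$, the aggregate cost $F$, and graph Laplacian $L$, combined proportional-integral dynamics of this kind take a saddle-point form along these lines (notation ours; the paper gives the precise construction):

```latex
\dot{x} = -\nabla F(x) \;-\; k_P\, L x \;-\; k_I\, L z,
\qquad
\dot{z} = k_I\, L x ,
```

where the proportional term $k_P L x$ penalizes instantaneous disagreement between neighbors, and the integral term accumulates past disagreement, forcing $Lx \to 0$ (consensus) at equilibrium while $-\nabla F(x)$ drives the consensus value toward the minimizer of the aggregate cost.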

## Parallel and distributed optimization methods for estimation and control in networks

Necoara, Ion; Nedelcu, Valentin; Dumitrache, Ioan