A novel distributed gradient algorithm for composite constrained optimization over directed network.
Further information
This study centers on a specific class of constrained convex optimization problems. The objective under consideration is built from a combination of multiple differentiable convex functions and one or more non-smooth regularization terms, particularly the ℓ1 norm, and is further subject to local linear and bound constraints. Such formulations commonly arise in practical domains including power allocation, sensor network coordination, and source localization. To address these problems efficiently and robustly, a new distributed optimization method is developed that employs a time-varying constant step-size mechanism. Distinctively, by relying solely on row-stochastic weight matrices, the proposed method handles constrained optimization over directed communication networks without requiring any node to know its out-neighbor information. Provided each local objective satisfies the convexity and Lipschitz-continuity requirements and the time-varying constant step size stays below a prescribed upper bound, the theoretical analysis establishes convergence to the optimal solution. Simulation experiments further confirm the efficiency and practical applicability of the developed method.
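The ingredients described above — row-stochastic consensus mixing, local gradient steps, soft-thresholding for the ℓ1 term, and projection onto local bound constraints — can be sketched as a minimal distributed proximal-gradient loop. This is an illustrative sketch only, not the paper's algorithm: the least-squares local objectives, the directed-ring topology, the fixed step size, and all function names are assumptions introduced for the example.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def distributed_prox_grad(A_list, b_list, W, lam, lb, ub, alpha=0.01, iters=500):
    """Sketch of a distributed proximal-gradient scheme.

    Each agent i holds a local smooth term 0.5*||A_i x - b_i||^2 (an assumed
    stand-in for a generic differentiable convex function). W is a
    row-stochastic mixing matrix, so each agent only needs weights for its
    in-neighbors. Per iteration: mix, local gradient step, soft-threshold
    for the shared lam*||x||_1 term, then project onto the box [lb, ub].
    """
    n, d = len(A_list), A_list[0].shape[1]
    X = np.zeros((n, d))                      # one local estimate per agent
    for _ in range(iters):
        X = W @ X                             # consensus with in-neighbors only
        grads = np.stack([A.T @ (A @ x - b)
                          for A, b, x in zip(A_list, b_list, X)])
        X = soft_threshold(X - alpha * grads, alpha * lam)
        X = np.clip(X, lb, ub)                # enforce local bound constraints
    return X

# Toy problem: 4 agents observing a shared sparse signal (hypothetical data).
rng = np.random.default_rng(0)
x_true = np.array([1.0, 0.0, -0.5])
A_list = [rng.standard_normal((8, 3)) for _ in range(4)]
b_list = [A @ x_true for A in A_list]
# Row-stochastic weights for a directed ring with self-loops.
W = 0.5 * (np.eye(4) + np.roll(np.eye(4), 1, axis=1))
X = distributed_prox_grad(A_list, b_list, W, lam=0.05, lb=-2.0, ub=2.0)
```

With a fixed step size this plain scheme only reaches a neighborhood of the optimum on heterogeneous objectives; the paper's time-varying step-size mechanism and convergence guarantees are precisely what such a naive loop lacks.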
(© 2026. The Author(s).)
Declarations. Competing interests: The authors declare no competing interests.