Gradient-Free Two-Point Methods for Solving Stochastic Nonsmooth Convex Optimization Problems with Small Non-Random Noises

 
PII: S000523100001245-3-1
DOI: 10.31857/S000523100001245-3
Publication type: Article
Status: Published
Authors
Affiliation: Moscow Institute of Physics and Technology (National Research University); Skolkovo Institute of Science and Technology
Address: Russian Federation, Moscow
Affiliation: Moscow Institute of Physics and Technology (National Research University); Kharkevich Institute for Information Transmission Problems, Russian Academy of Sciences; Higher School of Economics (Moscow)
Address: Russian Federation, Moscow
Affiliation: Moscow Institute of Physics and Technology (National Research University)
Address: Russian Federation, Moscow
Journal name: Avtomatika i Telemekhanika
Edition: Issue 8
Pages: 38–49
Abstract

We study nonsmooth convex stochastic optimization problems with a two-point zero-order oracle: at each iteration one can observe the values of the function's realization at two chosen points. The problem is first smoothed with the well-known double-smoothing technique of B.T. Polyak and then solved by the stochastic mirror descent method. We derive conditions on the admissible level of noise of a non-random nature in the computed values of the function's realization under which the estimate of the method's convergence rate is preserved.
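
For intuition, here is a minimal sketch of how a gradient-free two-point method of this kind can be implemented; it is a sketch under stated assumptions, not the authors' exact scheme. It uses a single smoothing radius tau with a direction drawn uniformly from the unit sphere (the paper relies on Polyak's double-smoothing construction) and the entropy prox-function on the simplex, as in reference [6]. The step size, smoothing radius, and toy objective are illustrative assumptions.

```python
import numpy as np

def two_point_gradient(f, x, tau, rng):
    """Two-point zero-order gradient estimate: both (possibly noisy)
    function values are taken along the same random direction e."""
    n = x.size
    e = rng.standard_normal(n)
    e /= np.linalg.norm(e)                    # e uniform on the unit sphere
    return n / (2.0 * tau) * (f(x + tau * e) - f(x - tau * e)) * e

def mirror_descent_simplex(f, x0, steps, tau, alpha, seed=0):
    """Stochastic mirror descent with the entropy prox-function, so each
    step is a multiplicative-weights update that stays on the simplex."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    avg = np.zeros_like(x)
    for _ in range(steps):
        g = two_point_gradient(f, x, tau, rng)
        x = x * np.exp(-alpha * g)            # entropic mirror step
        x /= x.sum()                          # renormalize onto the simplex
        avg += x
    return avg / steps                        # averaged iterate

# Toy usage: a nonsmooth convex objective observed with a small
# non-random perturbation (objective and noise model are hypothetical,
# chosen only to exercise the method).
if __name__ == "__main__":
    A = np.array([[1.0, 2.0, 0.5],
                  [0.3, 1.5, 2.5]])
    f = lambda x: np.max(A @ x) + 1e-6 * np.sin(1e3 * x[0])
    x0 = np.full(3, 1.0 / 3.0)
    print(mirror_descent_simplex(f, x0, steps=2000, tau=1e-3, alpha=0.05))
```

The sketch makes the abstract's trade-off visible: a non-random error of magnitude at most delta in each of the two function values shifts the gradient estimate by at most n*delta/tau in norm, so the admissible noise level must shrink with the smoothing radius for the convergence-rate estimate to survive.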

 

Keywords: mirror descent method, noise, stochastic optimization, gradient-free methods, double smoothing technique
Received: 30.09.2018
Publication date: 30.09.2018
Number of characters: 871
References

1. Polyak B.T. Introduction to Optimization. Moscow: Nauka, 1983. (In Russian.)

2. Granichin O.N., Polyak B.T. Randomized Algorithms for Estimation and Optimization under Almost Arbitrary Noise. Moscow: Nauka, 2003. (In Russian.)

3. Duchi J.C., Jordan M.I., Wainwright M.J., Wibisono A. Optimal Rates for Zero-Order Convex Optimization: The Power of Two Function Evaluations // IEEE Trans. Inf. Theory. 2015. V. 61. No. 5. P. 2788–2806.

4. Shamir O. An Optimal Algorithm for Bandit and Zero-Order Convex Optimization with Two-Point Feedback // e-print, 2015. URL: http://arxiv.org/pdf/1507.08752v1.pdf

5. Gasnikov A.V., Dvurechenskij P.E., Nesterov Yu.E. Stochastic Gradient Methods with Inexact Oracle // Tr. MFTI. 2016. V. 8. No. 1. P. 41–91. (In Russian.) URL: https://arxiv.org/ftp/arxiv/papers/1411/1411.4218.pdf

6. Gasnikov A.V., Lagunovskaya A.A., Usmanova I.N., et al. Gradient-Free Proximal Methods with Inexact Oracle for Convex Stochastic Nonsmooth Optimization Problems on the Simplex // Autom. Remote Control. 2016. V. 77. No. 11. P. 2018–2034.

7. Gasnikov A.V., Krymova E.A., Lagunovskaya A.A., et al. Stochastic Online Optimization. Single-Point and Multi-Point Non-Linear Multi-Armed Bandits. Convex and Strongly-Convex Case // Autom. Remote Control. 2017. V. 78. No. 2. P. 224–234.

8. Nemirovskij A.S., Yudin D.B. Problem Complexity and Method Efficiency in Optimization. Moscow: Nauka, 1979. (In Russian.)

9. Agarwal A., Dekel O., Xiao L. Optimal Algorithms for Online Convex Optimization with Multi-Point Bandit Feedback // Proc. 23rd Annual Conf. on Learning Theory. 2010. P. 28–40.

10. Bubeck S., Cesa-Bianchi N. Regret Analysis of Stochastic and Nonstochastic Multi-Armed Bandit Problems // Found. Trends Machine Learning. 2012. V. 5. No. 1. P. 1–122.

11. Nemirovski A. Lectures on Modern Convex Optimization: Analysis, Algorithms, and Engineering Applications. Philadelphia: SIAM, 2013. URL: http://www2.isye.gatech.edu/~nemirovs/Lect_ModConvOpt.pdf

12. Shapiro A., Dentcheva D., Ruszczynski A. Lectures on Stochastic Programming: Modeling and Theory. MPS-SIAM Series on Optimization, 2014.

13. Nesterov Yu. Primal-Dual Subgradient Methods for Convex Problems // Math. Program. Ser. B. 2009. V. 120. No. 1. P. 261–283.

14. Duchi J.C. Introductory Lectures on Stochastic Optimization // Park City Math. Ser. 2016. URL: http://stanford.edu/~jduchi/PCMIConvex/Duchi16.pdf

15. Juditsky A., Nesterov Yu. Deterministic and Stochastic Primal-Dual Subgradient Algorithms for Uniformly Convex Minimization // Stoch. Syst. 2014. V. 4. No. 1. P. 44–80.

16. Hazan E., Kale S. Beyond the Regret Minimization Barrier: Optimal Algorithms for Stochastic Strongly-Convex Optimization // JMLR. 2014. V. 15. P. 2489–2512.

17. Guigues V., Juditsky A., Nemirovski A. Non-Asymptotic Confidence Bounds for the Optimal Value of a Stochastic Program // e-print, 2016. URL: https://arxiv.org/pdf/1601.07592.pdf

18. Ball K. An Elementary Introduction to Modern Convex Geometry // Flavors of Geometry. Ed. by S. Levy. Cambridge Univ. Press. 1997. P. 1–58. (Math. Sci. Res. Inst. Publ., V. 31.)

19. Usmanova I.N. Gradient-Free Mirror Descent Method with a Two-Point Noisy Oracle: Bachelor's Thesis in Applied Mathematics and Physics. Dolgoprudny: MIPT, 2015. (In Russian.)

20. Evans L.C., Gariepy R.F. Measure Theory and Fine Properties of Functions. Novosibirsk: Nauchnaya Kniga (IDMI), 2002. (Russian translation.)
