Automatica

On stochastic gradient and subgradient methods with adaptive steplength sequences

Farzad Yousefian, Angelia Nedić, Uday V. Shanbhag
Department of Industrial and Enterprise Systems Engineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA

Article history:
Received 23 August 2010
Received in revised form 9 February 2011
Accepted 17 June 2011
Available online xxxx

Keywords: Stochastic optimization; Convex optimization; Stochastic approximation; Adaptive steplength; Randomized smoothing techniques

Abstract

Traditionally, stochastic approximation (SA) schemes have been popular choices for solving stochastic optimization problems. However, the performance of standard SA implementations can vary significantly based on the choice of the steplength sequence, and in general, little guidance is provided about good choices. Motivated by this gap, we present two adaptive steplength schemes for strongly convex differentiable stochastic optimization problems, equipped with convergence theory, that aim to ease some of the reliance on user-specific parameters. The first scheme, referred to as a recursive steplength stochastic approximation (RSA) scheme, optimizes the error bounds to derive a rule that expresses the steplength at a given iteration as a simple function of the steplength at the previous iteration and certain problem parameters. The second scheme, termed a cascading steplength stochastic approximation (CSA) scheme, maintains the steplength sequence as a piecewise-constant decreasing function, with the reduction in the steplength occurring when a suitable error threshold is met. Then, we allow for nondifferentiable objectives but with bounded subgradients over a certain domain. In such a regime, we propose a local smoothing technique, based on random local perturbations of the objective function, that leads to a
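The RSA idea described above can be illustrated with a minimal sketch. The specific recursion below, gamma_{k+1} = gamma_k (1 - c * gamma_k), is an assumed stand-in for the paper's rule (which expresses the next steplength as a function of the previous one and problem parameters such as the strong convexity constant); the objective, noise model, and constant `c` are all illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sgd_rsa(grad, x0, gamma0, c, noise_std=0.1, iters=2000):
    """Stochastic gradient descent with a recursively updated steplength.

    Assumed recursion (illustrative, not the paper's exact rule):
        gamma_{k+1} = gamma_k * (1 - c * gamma_k),
    which produces a decreasing, non-summable, square-summable sequence
    when 0 < gamma0 < 1/c.
    """
    x = np.asarray(x0, dtype=float)
    gamma = gamma0
    for _ in range(iters):
        # noisy gradient sample: true gradient plus zero-mean perturbation
        g = grad(x) + noise_std * rng.standard_normal(x.shape)
        x = x - gamma * g
        # steplength updated from its previous value and a problem constant
        gamma = gamma * (1.0 - c * gamma)
    return x, gamma

# Strongly convex test problem f(x) = 0.5 * ||x||^2, minimizer at the origin.
x_final, gamma_final = sgd_rsa(lambda x: x, x0=[5.0, -3.0], gamma0=0.5, c=0.5)
```

Under these assumptions the iterates contract toward the minimizer while the steplength decays roughly like 1/(c*k), which is the behavior the error-bound optimization in the RSA scheme is designed to achieve.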