Abstract:

We study almost sure convergence rates for linear algorithms

    h_{k+1} = h_k + (1/k^γ)(b_k − A_k h_k),   γ ∈ (0, 1),

where {A_k} are symmetric, positive semidefinite random matrices and {b_k} are random vectors. We show that |h_n − A^{−1}b| = o(n^{−β}) a.s. for all β ∈ [0, β₀), positive definite A, and vectors b such that

    lim_{n→∞} (1/n^{γ−β}) Σ_{k=1}^n (A_k − A) = 0  and  lim_{n→∞} (1/n^{γ−β}) Σ_{k=1}^n (b_k − b) = 0  a.s.

When γ − β ∈ (1/2, 1), these assumptions are implied by the Marcinkiewicz strong law of large numbers, which allows {A_k} and {b_k} to have heavy tails (HT), long-range dependence (LRD), or both. We focus on one-step algorithms, as opposed to Polyak–Ruppert two-step averaging, yet still handle HT and LRD, and we derive a surprising decoupling: the optimal convergence rate is affected by either the HT or the LRD, whichever is worse, but not both. Namely, |h_n − A^{−1}b| = o(n^{−β}) a.s. for all β < β₀ = γ − M, where M, called the Marcinkiewicz threshold, is defined by

    M = inf{ 1/m : lim_{n→∞} (1/n^{1/m}) Σ_{k=1}^n (A_k − A) = 0 and lim_{n→∞} (1/n^{1/m}) Σ_{k=1}^n (b_k − b) = 0 a.s. }.

Usually, M depends on the LRD and/or HT parameters. We expect M ∈ (1/2, 1]: this follows from the SLLN and CLT in the light-tailed, short-range-dependent case, but when there is LRD and/or HT, M generally cannot approach 1/2.
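As a minimal sketch, the recursion above can be simulated directly. All concrete values below (the matrix A, the vector b, the noise scale, and the step exponent γ) are illustrative assumptions, not taken from the abstract; the noise is light-tailed and i.i.d., so the averaging conditions hold by the ordinary strong law of large numbers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem (assumed, not from the abstract):
# A is symmetric positive definite, b a fixed vector; the target is A^{-1} b.
A = np.array([[2.0, 0.5], [0.5, 1.5]])
b = np.array([1.0, -1.0])
target = np.linalg.solve(A, b)

gamma = 0.75          # step exponent gamma in (0, 1)
n_steps = 50_000
h = np.zeros(2)

for k in range(1, n_steps + 1):
    # Noisy observations A_k, b_k; the symmetric perturbation is kept small
    # so A_k stays positive semidefinite with overwhelming probability.
    noise = rng.normal(scale=0.1, size=(2, 2))
    A_k = A + (noise + noise.T) / 2
    b_k = b + rng.normal(scale=0.1, size=2)
    # The linear algorithm: h_{k+1} = h_k + (1/k^gamma)(b_k - A_k h_k)
    h = h + (b_k - A_k @ h) / k**gamma

err = float(np.linalg.norm(h - target))
print(f"final error |h_n - A^-1 b| = {err:.4f}")
```

In this light-tailed, short-range-dependent setting the error should shrink roughly like o(n^{−β}) for β < γ − 1/2; heavier tails or long-range dependence in {A_k}, {b_k} would raise the Marcinkiewicz threshold M and slow the observable rate.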
Copyright © American Statistical Association.