[Submitted on 9 Nov 2013 (v1), last revised 30 Nov 2014 (this version, v7)]
Abstract: We present an algorithm for minimizing a sum of functions that combines the computational efficiency of stochastic gradient descent (SGD) with the second order curvature information leveraged by quasi-Newton methods. We unify these disparate approaches by maintaining an independent Hessian approximation for each contributing function in the sum. We maintain computational tractability and limit memory requirements even for high dimensional optimization problems by storing and manipulating these quadratic approximations in a shared, time evolving, low dimensional subspace. Each update step requires only a single contributing function or minibatch evaluation (as in SGD), and each step is scaled using an approximate inverse Hessian, so little to no adjustment of hyperparameters is required (as is typical for quasi-Newton methods). This algorithm contrasts with earlier stochastic second order techniques that treat the Hessian of each contributing function as a noisy approximation to the full Hessian, rather than as a target for direct estimation. We experimentally demonstrate improved convergence on seven diverse optimization problems. The algorithm is released as open source Python and MATLAB packages.
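The abstract describes the approach only at a high level. As a rough illustration, the sketch below implements the central idea: keep an independent quadratic (Hessian-based) model for each contributing function, refresh one model per step from a single subfunction evaluation, and move to the minimizer of the summed models. This is a toy NumPy version under assumed names (`sfo_like_minimize`, `subfunctions`), not the released package's API, and it omits the shared low dimensional subspace the paper uses to keep memory and computation tractable.

```python
import numpy as np

def sfo_like_minimize(subfunctions, x0, n_steps=100, eps=1e-8):
    """Toy sketch of the core idea: minimize sum_i f_i(x) by keeping an
    independent quadratic model of every f_i and repeatedly jumping to the
    minimum of the summed models.  `subfunctions` is a list of callables,
    each returning (value, gradient) for one contributing function.
    """
    n, d = len(subfunctions), x0.size
    x = x0.copy()

    # Per-subfunction model state: last evaluation point, last gradient,
    # and a Hessian approximation (initialized to the identity).
    xs = [x0.copy() for _ in range(n)]
    gs = [subfunctions[i](x0)[1] for i in range(n)]
    Hs = [np.eye(d) for _ in range(n)]

    for step in range(n_steps):
        i = step % n                          # one subfunction per step
        _, g_new = subfunctions[i](x)         # single minibatch evaluation

        # BFGS-style secant update of this subfunction's Hessian estimate.
        s, y = x - xs[i], g_new - gs[i]
        if s @ s > eps and y @ s > eps:
            Bs = Hs[i] @ s
            Hs[i] = Hs[i] - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)
        xs[i], gs[i] = x.copy(), g_new

        # Closed-form minimizer of the sum of quadratic models:
        #   sum_j [ g_j + H_j (x* - x_j) ] = 0
        H_total = sum(Hs) + eps * np.eye(d)
        b = sum(Hs[j] @ xs[j] - gs[j] for j in range(n))
        x = np.linalg.solve(H_total, b)

    return x
```

For high dimensional problems the full d-by-d per-subfunction Hessians above would be prohibitive; the paper's contribution is to store and update these quadratic models in a shared, time evolving, low dimensional subspace instead.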
Submission history
From: Jascha Sohl-Dickstein
[v1] Sat, 9 Nov 2013 00:54:37 UTC (960 KB)
[v2] Mon, 10 Feb 2014 02:10:32 UTC (1,988 KB)
[v3] Wed, 26 Mar 2014 06:49:16 UTC (2,073 KB)
[v4] Sun, 27 Apr 2014 02:38:28 UTC (2,101 KB)
[v5] Tue, 13 May 2014 23:51:41 UTC (3,213 KB)
[v6] Thu, 14 Aug 2014 02:27:38 UTC (6,430 KB)
[v7] Sun, 30 Nov 2014 01:35:55 UTC (1,598 KB)