# Primal Method for ERM with Flexible Mini-batching Schemes and Non-convex Losses

*The authors acknowledge support from the EPSRC Grant EP/K02325X/1, Accelerated Coordinate Descent Methods for Big Data Optimization.*

###### Abstract

In this work we develop a new algorithm for regularized empirical risk minimization. Our method extends recent techniques of Shalev-Shwartz [18], which enable a dual-free analysis of SDCA, to arbitrary mini-batching schemes. Moreover, our method is able to better utilize the information in the data defining the ERM problem. For convex loss functions, our complexity results match those of QUARTZ, which is a primal-dual method also allowing for arbitrary mini-batching schemes. The advantage of a dual-free analysis comes from the fact that it guarantees convergence even for non-convex loss functions, as long as the average loss is convex. We illustrate through experiments the utility of being able to design arbitrary mini-batching schemes.

## 1 Introduction

Empirical risk minimization (ERM) is a very successful and immensely popular paradigm in machine learning, used to train a variety of prediction and classification models. Given examples $a_1, \dots, a_n \in \mathbb{R}^d$, loss functions $\phi_1, \dots, \phi_n : \mathbb{R} \to \mathbb{R}$ and a regularization parameter $\lambda > 0$, the L2-regularized ERM problem is an optimization problem of the form

$$\min_{w \in \mathbb{R}^d} \left[ P(w) := \frac{1}{n}\sum_{i=1}^n \phi_i(a_i^\top w) + \frac{\lambda}{2}\|w\|^2 \right]. \tag{1}$$

Throughout the paper we shall assume that for each $i$, the loss function $\phi_i$ is $L_i$-smooth with $L_i > 0$. That is, for all $x \in \mathbb{R}$ and all $y \in \mathbb{R}$, we have

$$|\phi_i'(x) - \phi_i'(y)| \leq L_i |x - y|. \tag{2}$$

Further, let $\bar{L}_1, \dots, \bar{L}_n$ be constants for which the inequality

(3) |

holds for all $i$ and all $x, y \in \mathbb{R}$, and let $\bar{L} := \max_i \bar{L}_i$. Note that we can always bound $\bar{L}_i \leq L_i$. However, $\bar{L}_i$ can be better (smaller) than $L_i$.
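For concreteness, the logistic loss $\phi(x) = \log(1 + e^{-x})$ used in our experiments satisfies (2) with $L = 1/4$, since $\sup_x |\phi''(x)| = 1/4$. The following sketch (an illustration of ours, not part of the method) checks the Lipschitz bound on the derivative numerically:

```python
import numpy as np

# Logistic loss phi(x) = log(1 + exp(-x)); its derivative is
# phi'(x) = -1 / (1 + exp(x)), and phi is L-smooth with L = 1/4.
def phi_prime(x):
    return -1.0 / (1.0 + np.exp(x))

L = 0.25
xs = np.linspace(-10, 10, 401)
# Check |phi'(x) - phi'(y)| <= L |x - y| on a grid of pairs (x, y).
X, Y = np.meshgrid(xs, xs)
lhs = np.abs(phi_prime(X) - phi_prime(Y))
rhs = L * np.abs(X - Y)
assert np.all(lhs <= rhs + 1e-12)
print("logistic loss satisfies (2) with L = 1/4 on the grid")
```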

### 1.1 Background

In the last few years, a lot of research effort was put into designing new efficient algorithms for solving this problem (and some of its modifications). The frenzy of activity was motivated by the realization that SGD [1], not so long ago considered the state-of-the-art method for ERM, was far from being optimal, and that new ideas can lead to algorithms which are far superior to SGD in both theory and practice. The methods that belong to this category include SAG [2], SDCA [3], SVRG [4], S2GD [5], mS2GD [6], SAGA [7], S2CD [8], QUARTZ [9], ASDCA [10], prox-SDCA [11], IPROX-SDCA [12], A-PROX-SDCA [13], AdaSDCA [14], SDNA [15]. Methods analyzed for arbitrary mini-batching schemes include NSync [16], ALPHA [17] and QUARTZ [9].

In order to find an $\epsilon$-solution in expectation, state-of-the-art (non-accelerated) methods for solving (1) only need

$$O\left((n + \kappa)\log(1/\epsilon)\right)$$

steps, where each step involves the computation of the gradient $\phi_i'(a_i^\top w)$ for some randomly selected example $i$. The quantity $\kappa = L/\lambda$ is the condition number. Typically one has $L = \max_i L_i \|a_i\|^2$ for methods picking $i$ uniformly at random, and $L = \frac{1}{n}\sum_{i=1}^n L_i \|a_i\|^2$ for methods picking $i$ using a carefully designed data-dependent importance sampling. Computation of such a gradient typically involves work which is equivalent to reading the example $a_i$, that is, $O(\mathrm{nnz}(a_i))$ arithmetic operations.
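The gap between the two condition numbers can be illustrated numerically. The sketch below uses hypothetical heavy-tailed constants standing in for $L_i\|a_i\|^2$ (the data and the identification of the two rates with the max and the mean are illustrative choices of ours, matching the standard uniform-vs-importance comparison):

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam = 1000, 1.0 / 1000
# Hypothetical per-example constants L_i * ||a_i||^2, heavy-tailed so
# that importance sampling pays off.
Li = rng.pareto(2.0, size=n) + 1.0

kappa_uniform = Li.max() / lam       # methods sampling i uniformly
kappa_importance = Li.mean() / lam   # data-dependent importance sampling

print(n + kappa_uniform, n + kappa_importance)
assert kappa_importance <= kappa_uniform
```

Since the mean never exceeds the maximum, importance sampling can only improve the bound; the improvement is large precisely when the constants $L_i\|a_i\|^2$ are highly non-uniform.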

### 1.2 Contributions

In this work we develop a new algorithm for the L2-regularized ERM problem (1). Our method extends a technique recently introduced by Shalev-Shwartz [18], which enables a dual-free analysis of SDCA, to arbitrary mini-batching schemes. That is, our method works at each iteration with a random subset of examples, chosen in an i.i.d. fashion from an arbitrary distribution. Such flexible schemes are useful for various reasons, including i) the development of distributed or robust variants of the method, ii) design of importance sampling for improving the complexity rate, iii) design of a sampling which is aimed at obtaining efficiencies elsewhere, such as utilizing NUMA (non-uniform memory access) architectures, and iv) streamlining and speeding up the processing of each mini-batch by means of assigning to each processor approximately even workload so as to reduce idle time (we do experiments with the latter setup).

In comparison with [18], our method is able to better utilize the information in the data examples $a_i$, leading to a better data-dependent bound. For convex loss functions, our complexity results match those of QUARTZ [9] in terms of the rate (the logarithmic factors differ). QUARTZ is a primal-dual method also allowing for arbitrary mini-batching schemes. However, while [9] only characterizes the decay of the expected risk, we also give bounds for the sequence of iterates. In particular, we show that for convex loss functions, our method enjoys the rate (Theorem 2)

where $p_i$ is the probability that coordinate $i$ is updated in an iteration, $v_1, \dots, v_n$ are certain “stepsize” parameters of the method associated with the sampling and the data (see (6)), and $C$ is a constant depending on the starting point. For instance, in the special case of picking a single example at a time uniformly at random, we have $p_i = 1/n$ and $v_i = \|a_i\|^2$, whereby we obtain one of the rates mentioned above. The other rate can be recovered using importance sampling.

The advantage of a dual-free analysis comes from the fact that it guarantees convergence even for non-convex loss functions, as long as the average loss is convex. This is a step toward understanding non-convex models. In particular, we show that for non-convex loss functions, our method enjoys the rate (Theorem 1)

where $C$ is a constant depending on the starting point.

Finally, we illustrate through experiments with “chunking”—a simple load balancing technique—the utility of being able to design arbitrary mini-batching schemes.

## 2 Algorithm

We shall now describe the method (Algorithm 1).

The method encodes a family of algorithms, depending on the choice of the sampling $\hat{S}$, which encodes a particular mini-batching scheme. Formally, a sampling is a set-valued random variable whose values are subsets of $[n] := \{1, \dots, n\}$, i.e., subsets of the examples. In this paper, we use the terms “mini-batching scheme” and “sampling” interchangeably. A sampling is uniquely defined by the collection of probabilities $\mathbb{P}(\hat{S} = S)$ assigned to every subset $S$ of $[n]$.
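As a toy illustration (ours, not from the analysis), a sampling over $n = 3$ examples can be specified by listing a probability for each subset; marginal probabilities such as $\mathbb{P}(1 \in \hat{S})$ then follow by summation over the subsets containing that index:

```python
import numpy as np

rng = np.random.default_rng(5)
# A sampling over n = 3 examples, defined by a probability per subset.
subsets = [frozenset(), {0}, {1}, {2}, {0, 1}, {1, 2}]
probs = [0.0, 0.2, 0.2, 0.2, 0.2, 0.2]

def sample():
    j = rng.choice(len(subsets), p=probs)
    return set(subsets[j])

# Empirically check the marginal P(1 in S) = 0.2 + 0.2 + 0.2 = 0.6.
draws = [sample() for _ in range(1000)]
p1 = sum(1 in S for S in draws) / len(draws)
print(round(p1, 2))
```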

The method maintains “dual” variables $\alpha_1^t, \dots, \alpha_n^t$ and a vector $w^t \in \mathbb{R}^d$. At the beginning of step $t$, we have $\alpha_i^t$ for all $i$ and $w^t$ computed and stored in memory. We then pick a random subset $S_t$ of the examples, according to the mini-batching scheme, and update the variables $\alpha_i$ for $i \in S_t$, based on the computation of the gradients $\phi_i'(a_i^\top w^t)$ for $i \in S_t$. This is followed by an update of the vector $w$, which is performed so as to maintain the relation

$$w^t = \frac{1}{\lambda n}\sum_{i=1}^n \alpha_i^t a_i. \tag{4}$$

This relation is maintained for the following reason. If $w^*$ is the optimal solution to (1), then $\nabla P(w^*) = 0$, i.e.,

$$\frac{1}{n}\sum_{i=1}^n \phi_i'(a_i^\top w^*)\, a_i + \lambda w^* = 0, \tag{5}$$

and hence $w^* = \frac{1}{\lambda n}\sum_{i=1}^n \alpha_i^* a_i$, where $\alpha_i^* := -\phi_i'(a_i^\top w^*)$. So, if we believe that the variables $\alpha_i^t$ converge to $\alpha_i^*$, it indeed does make sense to maintain (4). Why should we believe this? This is where the specific update of the “dual variables” $\alpha_i$ comes from: $\alpha_i^{t+1}$ is set to a convex combination of its previous value $\alpha_i^t$ and our best estimate so far of $\alpha_i^*$, namely $-\phi_i'(a_i^\top w^t)$. Indeed, for $i \in S_t$ the update can be written as

$$\alpha_i^{t+1} = (1 - \theta_i)\,\alpha_i^t + \theta_i \left(-\phi_i'(a_i^\top w^t)\right),$$

where $\theta_i \in (0,1)$ are stepsize parameters. Why does this make sense? Because we believe that $w^t$ converges to $w^*$. Admittedly, this reasoning is somewhat “circular”. However, a better word to describe this reasoning would be: “iterative”.
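To make the discussion concrete, here is a minimal sketch of a dual-free SDCA-style loop in the special case of serial uniform sampling and logistic loss. The constant step size `theta`, the synthetic data, and the iteration count are illustrative choices of ours, not the tuned parameters from the analysis; the point is only the $\alpha$-update and the cheap incremental maintenance of relation (4):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, lam = 50, 10, 0.1
A = rng.standard_normal((n, d))
y = rng.choice([-1.0, 1.0], size=n)
A *= y[:, None]                     # fold labels into the examples

def phi_prime(x):                   # logistic loss derivative
    return -1.0 / (1.0 + np.exp(x))

def P(w):                           # primal objective (1)
    return np.mean(np.log1p(np.exp(-A @ w))) + 0.5 * lam * w @ w

alpha = np.zeros(n)
w = A.T @ alpha / (lam * n)         # relation (4), trivially true at start
theta = 0.1                         # illustrative constant step size

for t in range(20000):
    i = rng.integers(n)             # serial uniform sampling
    new_ai = -phi_prime(A[i] @ w)   # best current estimate of alpha_i^*
    delta = theta * (new_ai - alpha[i])
    alpha[i] += delta               # convex combination update
    w += delta * A[i] / (lam * n)   # keep (4) in sync in O(nnz(a_i)) work

print(P(w))
```

Note that $w$ is never recomputed from scratch: each update touches only the sampled example, which is what makes the per-step cost proportional to $\mathrm{nnz}(a_i)$.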

## 3 Main Results

Let $p_i := \mathbb{P}(i \in \hat{S})$ for $i \in [n]$. We assume the knowledge of parameters $v_1, \dots, v_n > 0$ for which

$$\mathbb{E}\left[\Big\|\sum_{i \in \hat{S}} h_i a_i\Big\|^2\right] \leq \sum_{i=1}^n p_i v_i h_i^2 \quad \text{for all } h \in \mathbb{R}^n. \tag{6}$$

Tight and easily computable formulas for such parameters can be found in [19]. For instance, whenever $\hat{S}$ is a serial sampling (i.e., $\mathbb{P}(|\hat{S}| = 1) = 1$), inequality (6) holds with $v_i = \|a_i\|^2$.
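Assuming (6) has the standard ESO form $\mathbb{E}\|\sum_{i \in \hat{S}} h_i a_i\|^2 \leq \sum_i p_i v_i h_i^2$ from [19], the serial case can be verified directly, since the expectation is then a sum over singleton subsets. The sketch below checks that $v_i = \|a_i\|^2$ in fact gives equality for an arbitrary serial sampling:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 8, 5
A = rng.standard_normal((n, d))
h = rng.standard_normal(n)
p = rng.dirichlet(np.ones(n))     # arbitrary serial sampling: P(S={i}) = p_i

# For a serial sampling, the expectation enumerates singleton subsets:
lhs = sum(p[i] * np.linalg.norm(h[i] * A[i]) ** 2 for i in range(n))
v = np.sum(A * A, axis=1)         # v_i = ||a_i||^2
rhs = np.sum(p * v * h * h)
assert np.isclose(lhs, rhs)       # (6) holds with equality in this case
print("serial ESO check passed")
```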

To simplify the exposition, we will write

(7) |

### 3.1 Non-convex loss functions

Our result will be expressed in terms of the decay of the potential

### 3.2 Convex loss functions

Our result will be expressed in terms of the decay of the potential

###### Theorem 2.

This rate precisely matches that of the QUARTZ algorithm [9]. QUARTZ is the only other method for ERM which has been analyzed for an arbitrary mini-batching scheme. Our algorithm is dual-free and, as we have seen above, allows for an analysis covering the case of non-convex loss functions.

## 4 Chunking

In this section we illustrate one use of the ability of our method to work with an arbitrary mini-batching scheme. Further examples include the ability to design distributed variants of the method [20], or the use of importance/adaptive sampling to lower the number of iterations [21, 12, 9, 14].

One marked disadvantage of standard mini-batching (“choose a subset of examples, uniformly at random”) in the context of parallel processing on multicore processors is that in a synchronous implementation there is a loss of efficiency, since the time needed to compute $\phi_i'(a_i^\top w)$ may differ substantially across the examples $i$. This is caused by the data examples having varying degrees of sparsity. We hence introduce a new sampling which mitigates this issue.

#### Chunks:

Choose sets $C_1, \dots, C_k$ such that $\bigcup_{j=1}^k C_j = [n]$ and $C_j \cap C_l = \emptyset$ for $j \neq l$, and such that the workload $\sum_{i \in C_j} \mathrm{nnz}(a_i)$ is similar for every $j$. Instead of sampling individual coordinates we propose a new sampling, which at each iteration samples $\tau$ sets out of $C_1, \dots, C_k$ and uses the union of their coordinates as the sampled set. We assign each core one of the sampled sets for parallel computation. The advantage of this sampling lies in the fact that the load of computing $\phi_i'(a_i^\top w)$ for all $i \in C_j$ is similar across the sets $C_j$. Hence, using this sampling we minimize the waiting time of the processors.
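A sketch of the resulting sampling (the sizes, the number of chunks $k$ and the mini-batch size $\tau$ are illustrative choices of ours): partition the examples into chunks once, then at each iteration draw $\tau$ chunks uniformly without replacement and hand one chunk to each core:

```python
import numpy as np

rng = np.random.default_rng(3)
n, k, tau = 20, 5, 2              # n examples, k chunks, tau chunks/iteration
chunks = np.array_split(np.arange(n), k)

def sample():
    # Pick tau distinct chunks uniformly at random; the sampled set is
    # the union of their coordinates, one chunk per core.
    picked = rng.choice(k, size=tau, replace=False)
    return np.concatenate([chunks[j] for j in picked])

S = sample()
print(sorted(S.tolist()))
```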

#### How to choose the sets $C_j$:

We introduce the following algorithm:

The algorithm returns a partition of $[n]$ into $C_1, \dots, C_k$ in the sense that the first $|C_1|$ coordinates belong to $C_1$, the next $|C_2|$ coordinates belong to $C_2$, and so on. The main advantage of this approach is that the preprocessing takes just a single pass through the data. In Figures 0(a) through 0(f) we show the impact of Algorithm 2 on the distribution of the waiting time of a single core, which we measure by the difference between the largest and the smallest per-chunk workload, for the initial and the preprocessed dataset respectively. We observe that the waiting time is smaller after the preprocessing.
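Algorithm 2 itself is not reproduced above; the following one-pass greedy sketch is our own illustration of the idea just described: cut $[n]$ into $k$ consecutive chunks whose total nonzero counts are roughly equal (the function name and the threshold rule are hypothetical choices, not the paper's pseudocode):

```python
import numpy as np

def chunk_by_nnz(nnz_counts, k):
    """Cut [n] into k consecutive chunks with roughly equal total
    nonzero count, in a single pass over the data."""
    target = nnz_counts.sum() / k
    chunks, start, acc = [], 0, 0
    for i, c in enumerate(nnz_counts):
        acc += c
        # close the current chunk once it reaches its share of nonzeros
        if acc >= target and len(chunks) < k - 1:
            chunks.append(np.arange(start, i + 1))
            start, acc = i + 1, 0
    chunks.append(np.arange(start, len(nnz_counts)))
    return chunks

nnz = np.array([1, 9, 2, 8, 3, 7, 4, 6, 5, 5])
parts = chunk_by_nnz(nnz, 5)
loads = [int(nnz[p].sum()) for p in parts]
print(loads)  # every chunk carries 10 nonzeros here
```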

## 5 Experiments

In all our experiments we used logistic regression. We normalized the datasets, and fixed the regularization parameter $\lambda$ throughout. The datasets used for the experiments are summarized in Table 1.

| Dataset | #samples | #features | sparsity |
|---|---|---|---|
| w8a | 49,749 | 300 | 3.8% |
| dorothea | 800 | 100,000 | 0.9% |
| protein | 17,766 | 358 | 29% |
| rcv1 | 20,242 | 47,237 | 0.2% |
| cov1 | 581,012 | 54 | 22% |

Experiment 1. In Figure 1(a) we compare the performance of Algorithm 1 with uniform serial sampling against state-of-the-art algorithms such as SGD [1], SAG [2] and S2GD [5], in terms of the number of epochs. The real running times of the algorithms were 0.46s for S2GD, 0.79s for SAG, 0.47s for SDCA and 0.58s for SGD. In Figure 1(b) we show the convergence rate for different regularization parameters $\lambda$. In Figure 1(c) we show convergence rates for different serial samplings: uniform, importance [12], and also 4 different randomly generated serial samplings, generated in a controlled manner. All of these samplings exhibit linear convergence, as predicted by the theory.

Experiment 2: New sampling vs. old sampling. In Figures 2(a) through 2(l) we compare the performance of a standard parallel sampling against sampling of the blocks output by Algorithm 2. In each iteration we measure the time for the standard and the new sampling, respectively, by counting only the computations done by the core which finishes last in that iteration, and we take the number of multiplications with nonzero entries of the data matrix as a proxy for time.
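The waiting-time proxy just described can be illustrated on a tiny synthetic example (the nonzero counts and the round-robin balancing below are toy stand-ins of ours, not Algorithm 2): the per-iteration cost is the maximum, over cores, of the nonzeros that core must process.

```python
import numpy as np

# Four dense examples followed by twelve sparse ones; four cores.
nnz = np.array([10, 10, 10, 10] + [1] * 12)
cores = 4

# Naive contiguous split: one core gets all the dense examples.
blocks = nnz.reshape(cores, -1)
t_naive = int(max(b.sum() for b in blocks))        # 10+10+10+10 = 40

# Balanced split: deal examples round-robin in decreasing-nnz order.
order = np.argsort(-nnz, kind="stable")
t_balanced = int(max(nnz[order[c::cores]].sum()    # 10+1+1+1 = 13
                     for c in range(cores)))

print(t_naive, t_balanced)  # prints "40 13"
```

The slowest core, and hence the synchronous iteration, is roughly three times cheaper under the balanced split, even though both splits process the same total work.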

## 6 Proofs

As a first approximation, our proof is an extension of the proof of Shalev-Shwartz [18] to accommodate an arbitrary sampling [16, 17, 9, 15]. For all and we let and . We will use the following lemma.

###### Lemma 3 (Evolution of the potentials).

For a fixed iteration $t$ and all $i$ we have:

(12) | ||||

(13) |

Proof. For $i \in S_t$, using the definition (7), we have

and for $i \notin S_t$ we have $\alpha_i^{t+1} = \alpha_i^t$. Taking the expectation over $S_t$, we get the result.

For the second potential we get

Taking the expectation over $S_t$, using inequality (6), and noting that

(14) |

we get

### 6.1 Proof of Theorem 1 (nonconvex case)

Combining (12) and (13), we obtain

Using (3) we have

By strong convexity of $P$,

and which together yields

Therefore,

It follows that the potential contracts at each step, and repeating this recursively we end up with the claimed rate. This concludes the proof of the first part of Theorem 1. The second part of the proof follows by observing that $P$ is smooth, which gives the second claim.

### 6.2 Convex case

For the next theorem we need an additional lemma:

###### Lemma 4.

Assume that the functions $\phi_i$ are smooth and convex. Then, for every $w$,

(15) |

Proof. Let Clearly, is also -smooth. By convexity of we have for all . It follows that satisfies Using the definition of , we obtain

(16) |

Summing these inequalities over $i$, weighted by $1/n$, and using (5), we get

### 6.3 Proof of Theorem 2

Using the convexity of we have and using Lemma 4, we have

This gives the desired recursion, which concludes the first part of Theorem 2. The second part follows by observing that $P$ is smooth, which gives the second claim.

## References

- [1] Herbert Robbins and Sutton Monro. A stochastic approximation method. Ann. Math. Statist., 22(3):400–407, 1951.
- [2] Mark Schmidt, Nicolas Le Roux, and Francis Bach. Minimizing finite sums with the stochastic average gradient. arXiv:1309.2388, 2013.
- [3] Shai Shalev-Shwartz and Tong Zhang. Stochastic dual coordinate ascent methods for regularized loss. Journal of Machine Learning Research, 14(1):567–599, 2013.
- [4] Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In NIPS, 2013.
- [5] Jakub Konečný and Peter Richtárik. S2GD: Semi-stochastic gradient descent methods. arXiv:1312.1666, 2014.
- [6] Jakub Konečný, Jie Lu, Peter Richtárik, and Martin Takáč. mS2GD: Mini-batch semi-stochastic gradient descent in the proximal setting. arXiv:1410.4744, 2014.
- [7] Aaron Defazio, Francis Bach, and Simon Lacoste-Julien. SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. Advances in Neural Information Processing Systems 27 (NIPS 2014), 2014.
- [8] Jakub Konečný, Zheng Qu, and Peter Richtárik. Semi-stochastic coordinate descent. arXiv:1412.6293, 2014.
- [9] Zheng Qu, Peter Richtárik, and Tong Zhang. Randomized dual coordinate ascent with arbitrary sampling. arXiv:1411.5873, 2014.
- [10] Shai Shalev-Shwartz and Tong Zhang. Accelerated mini-batch stochastic dual coordinate ascent. In Advances in Neural Information Processing Systems 26, pages 378–385. 2013.
- [11] Shai Shalev-Shwartz and Tong Zhang. Proximal stochastic dual coordinate ascent. arXiv:1211.2717, 2012.
- [12] Peilin Zhao and Tong Zhang. Stochastic optimization with importance sampling. ICML, 2015.
- [13] Shai Shalev-Shwartz and Tong Zhang. Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization. to appear in Mathematical Programming, 2015.
- [14] Dominik Csiba, Zheng Qu, and Peter Richtárik. Stochastic dual coordinate ascent with adaptive probabilities. ICML, 2015.
- [15] Zheng Qu, Peter Richtárik, Martin Takáč, and Olivier Fercoq. Stochastic dual Newton ascent for empirical risk minimization. arXiv:1502.02268, 2015.
- [16] Peter Richtárik and Martin Takáč. On optimal probabilities in stochastic coordinate descent methods. arXiv:1310.3438, 2013.
- [17] Zheng Qu and Peter Richtárik. Coordinate descent methods with arbitrary sampling I: Algorithms and complexity. arXiv:1412.8060, 2014.
- [18] Shai Shalev-Shwartz. SDCA without duality. CoRR, abs/1502.06177, 2015.
- [19] Zheng Qu and Peter Richtárik. Coordinate descent with arbitrary sampling II: Expected separable overapproximation. arXiv:1412.8063, 2014.
- [20] Peter Richtárik and Martin Takáč. Distributed coordinate descent method for learning with big data. arXiv:1310.2059, 2013.
- [21] Peter Richtárik and Martin Takáč. Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function. Mathematical Programming, 144(2):1–38, 2014.