1 Introduction
We consider a general global noisy optimization problem:
(1)  $\min_{x \in \mathcal{D}} f(x), \qquad f(x) = \mathbb{E}_\xi\left[\, y(x, \xi) \,\right],$
where $y$ is an expensive black-box function, $\xi$ captures noise (randomness) in the function evaluation, and the dimension $d$ of the search domain $\mathcal{D} \subset \mathbb{R}^d$ is low to medium (up to tens of dimensions). We assume that only noisy evaluations $y(x, \xi)$ are observed and the underlying objective function $f$ is unknown. The problem in Eq. 1 is a standard optimization problem [2, 22] that appears in many applications including operations [14], engineering design [20], biology [34, 27], transportation [7, 19] and machine learning [29, 32].
Another problem relevant to Eq. 1 is a stochastic bandit with infinitely many arms [31, 5, 6, 16]. In this type of problem, the goal is to find the optimal strategy within a continuous space so that the expected cumulative reward is maximized. Compared to the problem considered in this work, the objective function in a bandit problem is typically not expensive. As a result, the solution strategies for these two types of problems are generally different.
Surrogate-based optimization algorithms are often used to solve the expensive problem in Eq. 1 [2]. Because the algorithm exploits the shape of the underlying objective function, surrogate-based optimization can often make good progress towards the optimum using relatively few function evaluations compared to derivative-free algorithms such as Nelder-Mead and Direct Search algorithms [8, 25]. Generally speaking, this method works as follows: in each iteration, a surrogate function that approximates the objective is first constructed using available evaluations, and then a new set of point(s) is carefully proposed for the next iteration based on the surrogate. Because the function is expensive, spending extra computation in determining which points to evaluate is often worthwhile.
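To make this loop concrete, the following toy sketch fits a cheap surrogate to all evaluations and then evaluates the candidate that minimizes it. It is purely illustrative: the polynomial surrogate, the pure-exploitation proposal rule, and all names are our own simplifications, not the method of any algorithm cited here.

```python
import numpy as np

def surrogate_opt(f, bounds, n_init=8, n_iter=20, n_cand=500, rng=None):
    """Toy 1-D surrogate-based loop: fit a surrogate to all evaluations,
    then evaluate the candidate point that minimizes it (pure exploitation;
    a real algorithm also forces exploration)."""
    rng = np.random.default_rng(rng)
    lo, hi = bounds
    X = rng.uniform(lo, hi, n_init)          # initial space-filling design
    Y = np.array([f(x) for x in X])          # expensive evaluations
    for _ in range(n_iter):
        coef = np.polyfit(X, Y, deg=4)       # cheap polynomial surrogate
        cand = rng.uniform(lo, hi, n_cand)   # random candidate points
        x_new = cand[np.argmin(np.polyval(coef, cand))]
        X = np.append(X, x_new)
        Y = np.append(Y, f(x_new))
    return X[np.argmin(Y)]                   # best evaluated point

# noisy quadratic with optimum at x = 1
f = lambda x: (x - 1.0) ** 2 + 0.01 * np.random.default_rng(0).normal()
x_best = surrogate_opt(f, (-5.0, 5.0), rng=0)
```

Even this naive version illustrates the economics: each iteration spends extra (cheap) computation on the surrogate to choose where to spend the next (expensive) evaluation.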
Within the family of surrogate-based optimization methods, parallel surrogate optimization algorithms propose multiple points in each iteration, and the expensive evaluations of these points are performed in parallel [13]. Compared to its serial counterpart, parallel surrogate optimization uses parallel computing resources more efficiently, thereby achieving better progress per unit wall time.
A popular method for noisy parallel surrogate optimization is Bayesian optimization [29, 9, 10, 28, 3, 12, 32]. Bayesian optimization typically works by assuming a Gaussian process prior over the objective function, constructing a Gaussian process (GP) surrogate [23] with the evaluations, and proposing new points by optimizing an acquisition function. Common acquisition functions are expected improvement (EI) [29], upper confidence bound (UCB) or lower confidence bound (LCB) [9, 10], and information-theoretic criteria [28, 32].
One issue with Bayesian methods is their high computational cost. Typically, training a GP surrogate requires solving a maximum likelihood problem, for which operations of complexity proportional to the cube of the number of evaluations are performed many times [21]. To propose new points, Bayesian optimization usually requires the solution of sub-optimization problems (e.g., maximizing expected improvement), possibly with Monte Carlo procedures [29, 3]. When many points are evaluated per iteration, so that the number of evaluations accumulates quickly with the number of iterations, the Bayesian optimization algorithm itself can be even more expensive than the evaluation of the function $y$, and this is indeed observed in real hyperparameter-tuning problems (see Section 4.3).
In this work, we develop a novel algorithm called ProSRS for noisy parallel surrogate optimization. Unlike Bayesian optimization, which uses a GP model, our algorithm uses a radial basis function (RBF) surrogate, which is computationally more efficient. We adopt an efficient framework, known as the stochastic response surface (SRS) method [25, 26], for proposing new points in each iteration. The sub-optimization problems in the SRS method are discrete minimization problems. Compared to the original parallel SRS work [26], our work: (1) introduces a new tree-based technique, known as the "zoom strategy", for efficiency improvement, (2) extends the original work to the noisy setting (i.e., an objective function corrupted with random noise) through the development of a radial basis regression procedure, (3) introduces weighting to the regression to enhance exploitation, and (4) implements a new SRS method combining the two types of candidate points that were originally proposed in SRS [25].
We compare our algorithm to three well-established parallel Bayesian optimization algorithms. We find that our algorithm shows superior optimization performance on both benchmark problems and real hyperparameter-tuning problems, and yet its cost is orders of magnitude lower. The fact that our algorithm is significantly cheaper means it is suitable for a wider range of optimization problems, not just very expensive ones.
2 The ProSRS algorithm
Conventional surrogate optimization algorithms use all the expensive function evaluations from past iterations to construct the surrogate. As the number of evaluations grows over iterations, the cost of conventional methods thus increases. Indeed, the cost can increase rather quickly with the number of iterations, especially when a large number of points are evaluated in parallel per iteration.
To overcome this limitation, we develop a novel algorithm that does not necessarily use all the past evaluations while still being able to achieve good optimization performance. The key intuition here is that once an optimal region is approximately located, progress can be made by focusing on the evaluations within this region. This idea is illustrated in Fig. 1, where the red curve is a surrogate built with all the evaluations. Now suppose we restrict the domain to a smaller region as indicated by the dashed black box and only fit the evaluation data within that region. We still obtain a good surrogate (blue curve) around the optimum, and it is cheaper as we are using fewer evaluations to do so. We now proceed with our optimization, treating the restricted region as our new domain and the local fit as our surrogate for optimization. This idea of recursively optimizing over hierarchical domains lies at the heart of our algorithm. In this paper, we call this technique the “zoom strategy”. Because it requires less evaluation data to build a local surrogate than to build a global one, the zoom strategy can significantly reduce the cost of the algorithm.
For ease of describing the relationships between different domains, we introduce the notion of a node. A node consists of a domain together with all the information needed by an optimization algorithm to make progress for that domain. We call the process of restricting the domain to a smaller domain the "zoom-in" process, in which case the node associated with the original domain is a "parent" node and the node for the restricted domain is a "child" node. The reverse process of zooming in is referred to as the "zoom-out" process (i.e., the transition from a child node to its parent node). See Fig. 2 for an illustration of this structure.
2.1 Overview of the algorithm
We now present our algorithm, namely Progressive Stochastic Response Surface (ProSRS), in Alg. 1 (code is publicly available at https://github.com/compdyn/ProSRS). Like most surrogate optimization algorithms, ProSRS starts with a space-filling design of experiments (DOE). Here we use Latin hypercube sampling with the maximin criterion for the initial design. In our algorithm, a node is formally defined by a quadruplet:
(2)  $\text{node} = \left( \mathcal{D}, X, \gamma, \beta \right),$
where $X$ is the evaluation data in the domain $\mathcal{D}$. The variable $\gamma$ characterizes the exploitation (versus exploration) strength of ProSRS. Mathematically, it is a tuple:
(3)  $\gamma = \left( w, p, \sigma \right),$
where $w$ is a radial basis regression parameter (see Section 2.2) and $p$, $\sigma$ are two parameters in the step of proposing new points (see Section 2.3). The variable $\beta$ in Eq. 2 is the zoom-out probability.
For each iteration, we first construct a radial-basis surrogate using the evaluation data $X$ (Line 7), followed by the step of proposing new points for parallel evaluation (Line 8). The proposed points must not only exploit the optimal locations of the surrogate, but also explore the untapped regions of the domain to improve the quality of the surrogate. Indeed, achieving the appropriate balance between exploitation and exploration is the key to the success of a surrogate optimization algorithm. For this, we use an efficient procedure, known as the Stochastic Response Surface (SRS) method, that was first developed by Regis and Shoemaker [25] and later extended to the parallel setting in their subsequent work [26].
After performing expensive evaluations in parallel, we update the exploitation strength variable $\gamma$ (Line 11) so that for a specific node, the exploitation strength progressively increases with the number of iterations (see Section 2.4 for the update rule). The purpose of this step is to help locate the optimal region for zooming in. Once the exploitation strength reaches some prescribed threshold (Line 12; see Section 2.5 for details), the algorithm will decide to zoom in (Line 16) by setting a child to be the current node (neglecting the restart step in Line 14 for now). The updating of the variable $\gamma$ and the zoom-in mechanism generally make ProSRS "greedier" as the number of iterations increases. To balance out this increasing greediness, we implement a simple probabilistic policy that allows the algorithm to zoom out with some small probability $\beta$ in each iteration (Line 21). Because of the mechanism of zooming in and out, ProSRS will generally form a "tree" during the optimization process, as illustrated in Fig. 2.
Finally, we would like to address the restart steps (Lines 13 and 14) in Alg. 1. We make the algorithm restart completely from scratch when it reaches some prescribed resolution after several rounds of zooming in. Specifically, to check whether to restart, we first perform the step of creating or updating a child node as in the normal zoom-in process (Line 16). Suppose the resulting child node has domain $\mathcal{D}_c$; then ProSRS will restart if for all dimensions $i$,
(4)  $l_i \le r \left( b_i - a_i \right),$
where $r$ is a prescribed resolution parameter, $l_i$ denotes the length of the domain $\mathcal{D}_c$ in the $i$-th dimension, and $a_i$, $b_i$ are the bounds of the optimization domain $\mathcal{D}$ (Eq. 1). The reason for restarting from a DOE is to avoid the new runs being biased by the old runs, so that the algorithm has a better chance of discovering other potentially optimal regions. Indeed, extensive study [25, 26, 24] has shown that restarting from the initial DOE is better than continuing the algorithm with past evaluations.
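The restart test in Eq. 4 amounts to comparing the side lengths of the child domain against a fixed fraction of the original domain. A minimal sketch (the function name and the bounds-as-pairs layout are our own):

```python
import numpy as np

def should_restart(child_bounds, opt_bounds, r=0.01):
    """ProSRS-style restart test: restart when, in every dimension i,
    the child-domain side length l_i has shrunk to at most r * (b_i - a_i),
    where [a_i, b_i] is the original optimization domain (Eq. 4)."""
    l = np.array([hi - lo for lo, hi in child_bounds])      # child side lengths
    full = np.array([b - a for a, b in opt_bounds])         # original side lengths
    return bool(np.all(l <= r * full))

unit = [(0.0, 1.0), (0.0, 1.0)]
# a child domain that has shrunk below 1% of the unit box in every dimension
fine = [(0.40, 0.405), (0.70, 0.708)]
# a child domain that is still coarse
coarse = [(0.0, 0.5), (0.0, 0.5)]
```

Note that the condition must hold in every dimension; a domain that is narrow in one dimension but wide in another does not trigger a restart.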
2.2 Weighted radial basis surrogate functions
Given the evaluation data $\{(x_j, y_j)\}_{j=1}^{n}$, a radial basis surrogate takes the form
(5)  $\hat{f}(x) = \sum_{j=1}^{n} \lambda_j\, \phi\!\left( \|x - x_j\| \right),$
where the function $\phi$ is a radial basis function. In this work, we choose $\phi$ to be a multiquadric function. The radial basis coefficients $\lambda_j$ are obtained by minimizing the regularized weighted square loss:
(6)  $\min_{\lambda_1, \ldots, \lambda_n}\; \sum_{j=1}^{n} e^{w \tilde{y}_j} \left( \hat{f}(x_j) - y_j \right)^2 + \mu \sum_{j=1}^{n} \lambda_j^2,$
where $w$ is a non-positive weight parameter (one component of the variable $\gamma$; see Eq. 3) and $\mu$ is a regularization constant determined automatically through cross validation. This loss function is quadratic in the coefficients $\lambda_j$, so the minimization problem admits a unique solution and can be solved efficiently. The term $e^{w \tilde{y}_j}$ in Eq. 6 represents the weight for the $j$-th data point, and $\tilde{y}_j$ can be interpreted as the normalized $y_j$ value, with the understanding that $\tilde{y}_j = 0$ if all the $y_j$ are equal. It is clear that $w = 0$ disables the weighting in the RBF regression. When $w$ is negative, the points with smaller $y$ values gain more weight, so the RBF regression produces a better fit for the points with low $y$ values (the "best" points). Consequently, smaller (more negative) weight parameter values imply greater exploitation.
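The weighted regression above can be sketched in a few lines of numpy. This is a simplified sketch, not the released implementation: the regularizer is fixed rather than cross-validated, the multiquadric shape constant c is an assumed default, and the normalization of y follows our reading of the text.

```python
import numpy as np

def fit_weighted_rbf(X, y, w=-1.0, mu=1e-6, c=1.0):
    """Weighted multiquadric RBF regression (sketch of Eqs. 5-6).
    w <= 0: more negative w puts weight exp(w * y_normalized) on low-y points.
    mu is the ridge regularizer (chosen by cross validation in the paper,
    fixed here for simplicity). Returns a callable surrogate."""
    X = np.atleast_2d(X)
    spread = y.max() - y.min()
    y_norm = (y - y.min()) / spread if spread > 0 else np.zeros_like(y)
    weights = np.exp(w * y_norm)                      # per-point weights
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    Phi = np.sqrt(D ** 2 + c ** 2)                    # multiquadric basis matrix
    W = np.diag(weights)
    # weighted ridge normal equations: (Phi' W Phi + mu I) lambda = Phi' W y
    lam = np.linalg.solve(Phi.T @ W @ Phi + mu * np.eye(len(y)), Phi.T @ W @ y)
    def surrogate(x):
        d = np.linalg.norm(np.atleast_2d(x)[:, None, :] - X[None, :, :], axis=-1)
        return np.sqrt(d ** 2 + c ** 2) @ lam
    return surrogate

X = np.linspace(0, 1, 20).reshape(-1, 1)
y = (X.ravel() - 0.3) ** 2          # noiseless data for illustration
s = fit_weighted_rbf(X, y, w=-2.0)
```

With $n$ basis centers for $n$ data points the fit is close to interpolation; the weighting only matters once regularization or noise forces a trade-off between points, which is exactly the noisy regime the algorithm targets.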
2.3 Stochastic response surface method
To propose new points for parallel evaluation, we use the general Stochastic Response Surface (SRS) framework [25, 26]. The first step of the stochastic response surface method is to randomly generate candidate points in the domain $\mathcal{D}$. In the original SRS work [25], the authors introduced two types of candidate points and proposed one algorithm for each type. Here we consider the candidate points to be a mixture of both types.
Type I candidate points are sampled uniformly over the domain. Type II candidate points are generated by adding Gaussian perturbations around the current best point $x_b$, where $x_b$ is the point in the evaluation data with the lowest value of the RBF surrogate $\hat{f}$. The covariance matrix for the Gaussian perturbation is a diagonal matrix with diagonal entries $(\sigma^2 l_1^2, \ldots, \sigma^2 l_d^2)$, where $\sigma$ is one component of the variable $\gamma$ (see Eq. 3) and $l_i$ is the length of the domain in the $i$-th dimension. Any generated point that lies outside the domain is replaced by the nearest point in the domain, so that all the Type II candidate points are within $\mathcal{D}$. The proportion of these two types of candidate points is controlled by a parameter $p$, which is another component of the variable $\gamma$. Specifically, we generate candidate points with a fraction $p$ of points being Type I and the remainder being Type II.
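The candidate-generation step can be sketched as follows. All names are ours; on a box domain, per-coordinate clipping coincides with replacing an outside point by its nearest point in the domain, so we use clipping as the projection.

```python
import numpy as np

def generate_candidates(x_best, bounds, n_cand, p_frac, sigma, rng=None):
    """SRS candidate mix: a fraction p_frac of Type I points (uniform over
    the domain) and the rest Type II points (Gaussian perturbations of the
    current best point with per-dimension std sigma * side_length, clipped
    back into the domain)."""
    rng = np.random.default_rng(rng)
    lo, hi = np.array(bounds).T
    n1 = int(round(p_frac * n_cand))
    type1 = rng.uniform(lo, hi, size=(n1, len(lo)))          # Type I: explore
    std = sigma * (hi - lo)                                  # per-dimension spread
    type2 = x_best + rng.normal(size=(n_cand - n1, len(lo))) * std  # Type II: local
    type2 = np.clip(type2, lo, hi)                           # project into domain
    return np.vstack([type1, type2])

cand = generate_candidates(np.array([0.5, 0.5]),
                           [(0.0, 1.0), (0.0, 1.0)],
                           n_cand=100, p_frac=0.3, sigma=0.1, rng=0)
```

As $p$ and $\sigma$ shrink over iterations (Section 2.4), this mix shifts from global uniform sampling toward tight perturbations of the incumbent, i.e., from exploration toward exploitation.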
The second step is to measure the quality of each candidate point using two criteria: the value of the response surface (RBF surrogate) and the minimum distance from previously evaluated points. The points with low response values are of high exploitation value, while the ones with large minimum distances are of high exploration value. In the SRS method, every candidate point is given a score on each of the two criteria, and a weight between 0 and 1 is used for trading off one criterion against the other. For our algorithm, we generate an array of weights equally spaced in the interval $[0.3, 1]$, with the number of weights equal to the number of proposed points per iteration (if the number of proposed points per iteration is one, we alternate weights between 0.3 and 1 from iteration to iteration). This weight array, also known as the "weight pattern" in the original work [25], is used to balance between exploitation and exploration among the proposed points. The procedures for scoring the candidate points and selecting the proposed points from the candidate points based on the weight pattern are described in detail in Regis and Shoemaker [26].
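The scoring-and-selection step, in simplified form, looks like the sketch below. This is our condensed version of the Regis-Shoemaker procedure (the exact normalizations in their papers differ in detail): each candidate gets a normalized response score (low surrogate value is good) and a normalized distance score (far from chosen points is good), combined by a weight in [0, 1], and points are picked sequentially so that each pick repels the next.

```python
import numpy as np

def select_points(cand, surrogate_vals, evaluated, weights):
    """Simplified SRS selection: for each weight v in the weight pattern,
    score = v * resp_score + (1 - v) * (1 - dist_score) and pick the
    candidate with the lowest score (v = 1 is pure exploitation)."""
    def normalize(v):
        span = v.max() - v.min()
        return (v - v.min()) / span if span > 0 else np.zeros_like(v)
    resp = normalize(surrogate_vals)          # low surrogate value -> low score
    chosen, pts = [], list(evaluated)
    for v in weights:
        # distance of every candidate to its nearest already-chosen/evaluated point
        dists = np.min(np.linalg.norm(
            cand[:, None, :] - np.array(pts)[None, :, :], axis=-1), axis=1)
        score = v * resp + (1.0 - v) * (1.0 - normalize(dists))
        idx = int(np.argmin(score))
        chosen.append(cand[idx])
        pts.append(cand[idx])                 # repel subsequent picks
    return np.array(chosen)

cand = np.array([[0.1], [0.5], [0.9]])
vals = np.array([1.0, 0.2, 0.8])
picked = select_points(cand, vals, evaluated=[np.array([0.1])], weights=[1.0])
```

With weight 1.0 the selection reduces to taking the candidate with the lowest surrogate value; with weight 0.3 the distance term dominates and the pick moves away from previously evaluated points.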
2.4 Update procedure for the variable $\gamma$
After obtaining new evaluations, we update the variable $\gamma$ of the current node (Line 11 of Alg. 1). The goal of this updating step is to gradually increase the exploitation strength. As listed in Eq. 3, the variable $\gamma$ of a node consists of 3 parameters: (1) a weight parameter $w$ for radial basis regression, (2) a parameter $p$ that controls the proportion of Type I candidate points in the SRS method, and (3) a parameter $\sigma$ that determines the spread of Type II candidate points. The exploitation strength is enhanced by decreasing any of these 3 parameters.
The update rule is specified in Alg. 2, which can be understood as having two separate phases. The first phase is when there are still some Type I candidate points generated in the SRS method (i.e., $p \ge 0.1$). During this phase, the values of $w$ and $\sigma$ are unchanged but the value of $p$ is decreased with each iteration. The rate of decrease is determined by the effective number of evaluations $n_{\text{eff}}$ for the current iteration. The effective number of evaluations is computed by first uniformly partitioning the domain $\mathcal{D}$ into cells, with the number of cells per dimension approximately $n^{1/d}$ (so that the total number of cells is comparable to $n$), where $n$ is the number of points in the evaluation data $X$. Then $n_{\text{eff}}$ is the number of cells that are occupied by at least one point. The quantity $n_{\text{eff}}$ can be viewed as a measurement of the density of the evaluated points in the domain $\mathcal{D}$. Therefore, we essentially make the rate of decrease proportional to the evaluation density.
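The effective-number-of-evaluations computation is a grid-occupancy count, which we sketch below. The choice of round(n**(1/d)) cells per dimension is our reading of the text; the released code may pick the grid differently.

```python
import numpy as np

def effective_num_evals(X, bounds):
    """Effective number of evaluations: partition the box domain into a
    uniform grid with roughly n**(1/d) cells per dimension, then count the
    cells occupied by at least one evaluated point."""
    X = np.atleast_2d(X)
    n, d = X.shape
    k = max(1, int(round(n ** (1.0 / d))))           # cells per dimension
    lo, hi = np.array(bounds).T
    # map each point to its integer cell index in every dimension
    idx = np.floor((X - lo) / (hi - lo) * k).astype(int)
    idx = np.clip(idx, 0, k - 1)                     # points on the upper edge
    return len({tuple(row) for row in idx})

# 4 points in 1-D -> 4 cells of width 0.25; two points share a cell
X = np.array([[0.05], [0.1], [0.6], [0.9]])
n_eff = effective_num_evals(X, [(0.0, 1.0)])
```

Clustered points occupy few cells and give a small $n_{\text{eff}}$, while well-spread points give $n_{\text{eff}}$ close to $n$, which is why this count serves as a density proxy.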
When the value of $p$ drops below 0.1, so that essentially all the candidate points are Type II, we enter the second phase of the state transition, where the parameter $p$ does not change but $\sigma$ and $w$ are reduced. Just as in Regis and Shoemaker [25], we use the number of consecutive failures as the condition for deciding when to reduce the value of $\sigma$. Here an iteration is counted as a failure if the best value of the proposed points for the current iteration does not improve on the best value of the evaluations prior to the proposing step. The failure counter is set to zero at the beginning of the algorithm, and starts to count the number of consecutive failures only when $p < 0.1$. Whenever the number of consecutive failures reaches some prescribed threshold $C_{\text{fail}}$, we reduce $\sigma$ by half and decrease $w$ by $\Delta w$ (see Table 1).
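The phase-two update can be written compactly as follows. This is a sketch under our notation gamma = (w, p, sigma); the threshold and step values are placeholders standing in for the Table 1 settings.

```python
def update_gamma(gamma, n_fail, c_fail=2, delta_w=2.0):
    """Phase-2 update of the exploitation strength gamma = (w, p, sigma):
    once p has bottomed out, every c_fail consecutive failed iterations
    halve sigma (tighter Type II perturbations) and decrease w by delta_w
    (regression weighted more toward low-y points)."""
    w, p, sigma = gamma
    if n_fail >= c_fail:
        sigma *= 0.5        # tighten the Type II candidate spread
        w -= delta_w        # strengthen the regression weighting
        n_fail = 0          # reset the consecutive-failure counter
    return (w, p, sigma), n_fail

# starting from sigma = 0.1, two halvings reach the zoom-in trigger region
gamma, n_fail = update_gamma((0.0, 0.1, 0.1), n_fail=2)
```

Both moves in the same direction (smaller $\sigma$, more negative $w$) monotonically increase exploitation, which is what eventually triggers the zoom-in test of Section 2.5.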
2.5 Zoom strategy
The updating of the variable $\gamma$ (Line 11) makes the parameter $\sigma$ gradually decrease over iterations. Once $\sigma$ drops below some critical value $\sigma_c$ (Line 12) and the restart condition is not satisfied, the algorithm will zoom in by either creating a new child node or updating an existing child node. Specifically, we start the zoom-in process by finding the point that has the lowest fit value among the evaluation data $X$, which we denote $x_b$. Depending on the location of $x_b$ and the locations of the children of the current node, there are two possible scenarios.
The first scenario is that $x_b$ does not belong to the domain of any of the existing child nodes, or the current node has no children. In this case a new child node is created. The domain of this child node is generated by shrinking the domain of the current node, with the center at $x_b$ and the length of each dimension being a fraction $\rho$ of that of the current domain. The parameter $\rho$ is called the zoom-in factor, which is a constant set prior to the start of the algorithm. After shrinkage, any part of the new domain that lies outside the current domain is clipped off, so that the domain of a child is always contained in that of its parent. Given the domain of the new child node, its evaluation data are all the past evaluations that fall within this domain. The zoom-out probability and the variable $\gamma$ of this child node are set to their initial values $\beta_0$ and $\gamma_0$ respectively.
The other possibility is that $x_b$ belongs to at least one child of the current node. Among all children whose domains contain $x_b$, we select the child whose domain center is closest to $x_b$. The evaluation data of this selected child node is updated by including all the past evaluations that are within its domain. Since the selected child node is being revisited, we reduce its zoom-out probability, but never below a constant lower bound $\beta_{\min}$.
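The child-domain construction (shrink around the best point, then clip to the parent) is a small geometric operation, sketched here with our own names and a zoom-in factor of 0.4:

```python
import numpy as np

def child_domain(x_star, parent_bounds, rho=0.4):
    """Zoom-in step: shrink the parent box around x_star so each side is a
    fraction rho of the parent side, then clip to the parent so the child
    domain is always contained in it."""
    lo, hi = np.array(parent_bounds).T
    half = 0.5 * rho * (hi - lo)                 # half side length of the child
    c_lo = np.maximum(x_star - half, lo)         # clip lower corner
    c_hi = np.minimum(x_star + half, hi)         # clip upper corner
    return list(zip(c_lo, c_hi))

# center well inside the parent: the child is just the scaled box
dom = child_domain(np.array([0.5]), [(0.0, 1.0)])
# center near the boundary: the child is clipped to the parent
edge = child_domain(np.array([0.05]), [(0.0, 1.0)])
```

Note that clipping can make a boundary child smaller than the nominal fraction of the parent; the containment guarantee (child inside parent) is what the tree structure relies on.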
3 Convergence
In this section we state a convergence theorem for our ProSRS algorithm (Alg. 1). More specifically, if ProSRS is run for sufficiently long, then with probability converging to one there will be at least one point among all the evaluations that is arbitrarily close to the global minimizer of the objective function. Because the point returned in each iteration is the one with the lowest noisy evaluation (not necessarily the one with the lowest expected value), as the underlying expectation function is generally unknown, this theoretical result does not immediately imply convergence of the algorithm output. However, in practice one may implement posterior selection procedures for choosing the true best point from the evaluations using discrete optimization algorithms such as those by Nelson et al. [17] and Ni et al. [18].
Theorem 1.
Suppose the objective function $f$ in Eq. 1 is continuous on the domain $\mathcal{D}$ and $x^*$ is the unique minimizer of $f$, characterized by $f(x^*) \le f(x)$ for all $x \in \mathcal{D}$ and $\inf_{x \in \mathcal{D},\, \|x - x^*\| \ge \epsilon} f(x) > f(x^*)$ for all $\epsilon > 0$ (here we adopt the convention that the infimum over an empty set is $+\infty$). Let $\hat{x}_t$ be the point with the minimum objective value among all the evaluated points up to iteration $t$. Then $\hat{x}_t \to x^*$ almost surely as $t \to \infty$.
Proof.
See Appendix A. ∎
4 Numerical results
We compare our algorithm to three state-of-the-art parallel Bayesian optimization algorithms: GP-EI-MCMC [29] and GP-LP [12] with the LCB and EI acquisitions. The parameter values of the ProSRS algorithm are listed in Table 1, where $d$ is the optimization dimension and $m$ is the number of points evaluated in parallel per iteration.
Parameter  Meaning  Value
$n_0$  number of DOE samples
$\gamma_0$  initial value of the exploitation strength variable $\gamma$  (0, 1, 0.1)
$\sigma_c$  critical $\sigma$ value  0.025
$\beta_0$  initial zoom-out probability  0.02
$\beta_{\min}$  minimum zoom-out probability  0.01
$\rho$  zoom-in factor  0.4
$r$  resolution parameter for restart  0.01
$C_{\text{fail}}$  critical value for the number of consecutive failures
$\Delta w$  change of the $w$ value  2
For test problems we used a suite of standard optimization benchmark problems from the literature and two hyperparameter-tuning problems: (1) tuning 5 hyperparameters of a random forest, and (2) tuning 7 hyperparameters of a deep neural network. The details of the benchmark problems and the hyperparameter-tuning problems are given in Appendix B and Appendix C respectively.
4.1 Optimization performance versus iteration
The first results that we consider are the optimization result (function value) versus iteration number. All the algorithms are proposing and evaluating the same number of points per iteration, so these results measure the quality of these proposed points. As we will see, ProSRS does significantly better than the existing methods.
Figure 3 shows the performance of the algorithms on 12 optimization benchmark functions, for which the dimension varies from 2 to 10 (the final number in the function name indicates the dimension). The objective function on the y axis is the evaluation of the underlying true expectation function (not the noisy function) at the algorithm output. The error bar shows the standard deviation over 20 independent runs. All algorithms are run using 12 parallel cores of a Blue Waters (https://bluewaters.ncsa.illinois.edu) XE compute node.
As we can see from Fig. 3, our algorithm performs the best on almost all of the problems. In particular, ProSRS is significantly better on high-dimensional functions such as Ackley and Levy, as well as highly complex functions such as Dropwave and Schaffer. Excellent performance on these benchmark problems shows that our algorithm can cope with various optimization landscape types.
Figure 4 shows optimization performance on the two hyperparameter-tuning problems. Here we also include the random search algorithm, since random search is a popular hyperparameter-tuning method [4]. First, we see that surrogate optimization algorithms are in general significantly better than the random search algorithm. This is no surprise, as a surrogate optimization algorithm selects every evaluation point carefully in each iteration. Second, among the surrogate optimization algorithms, our ProSRS algorithm is better than the GP-EI-MCMC algorithm (particularly on the random forest tuning problem), and is much better than the two GP-LP algorithms.
4.2 Optimization performance analysis
In Section 4.1 we demonstrated that our ProSRS algorithm generally achieves superior optimization performance compared to the Bayesian optimization algorithms. In this section, we give some insight into why our algorithm may be better. We performed the analysis with a numerical experiment that studied the modeling capability of RBF models (as used in ProSRS) and GP models (as used in the Bayesian optimization methods).
More specifically, we investigated RBF and GP regression on the twelve optimization benchmark functions shown in Fig. 3, varying the number of training data points $n$ from 10 to 100. For each test function and each $n$, we first randomly sampled $n$ points over the function domain using Latin hypercube sampling, and then evaluated these sampled points to get noisy responses. Then, given the data $(x_1, y_1), \ldots, (x_n, y_n)$, we trained 4 models: an RBF model using the cross-validation procedure developed in the ProSRS algorithm with no weighting, and 3 GP models with commonly used GP kernels: Matern1.5, Matern2.5 and RBF.
We used the Python scikit-learn package (http://scikit-learn.org/stable/modules/gaussian_process.html) for the implementations of GP regression. We set the number of restarts for the optimizer in GP regression to 10. We evaluated each regression model by measuring the relative error in terms of the norm of the difference between the model and the underlying true function over the function domain. We repeated the training and evaluation procedure 10 times, and report the mean and the standard deviation of the measured relative errors.
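The GP side of this experiment is a few lines of scikit-learn. The sketch below reproduces the setup in miniature on a synthetic 1-D function of our choosing (the test function, noise level, and grid-based estimate of the norm are our illustrative assumptions, not the paper's benchmark suite):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, RBF

def relative_error(model, f, grid):
    """Relative L2 error of a fitted model against the true function,
    estimated on a dense grid (a stand-in for the norm over the domain)."""
    pred = model.predict(grid)
    true = f(grid.ravel())
    return np.linalg.norm(pred - true) / np.linalg.norm(true)

rng = np.random.default_rng(0)
f = lambda x: np.sin(3 * x)                      # synthetic "true" function
X = rng.uniform(0, 1, size=(40, 1))              # training inputs
y = f(X.ravel()) + 0.01 * rng.normal(size=40)    # noisy responses

grid = np.linspace(0, 1, 200).reshape(-1, 1)
errors = {}
for name, kern in [("matern1.5", Matern(nu=1.5)),
                   ("matern2.5", Matern(nu=2.5)),
                   ("rbf", RBF())]:
    gp = GaussianProcessRegressor(kernel=kern, alpha=1e-3,
                                  n_restarts_optimizer=10).fit(X, y)
    errors[name] = relative_error(gp, f, grid)
```

The `alpha` argument plays the role of the noise variance on the diagonal of the kernel matrix; `n_restarts_optimizer=10` matches the restart setting described above.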
The results are shown in Fig. 5. We can see that cross-validated RBF regression (as used in our ProSRS method) generally produces a better model than GP regression (as used in the Bayesian optimization methods). Specifically, the RBF model from ProSRS is significantly better for the test functions Griewank, Levy, Goldstein and PowerSum, and is on par with the GP models for Schaffer, Dropwave and Hartmann.
From this numerical study, we can draw two conclusions. First, the ProSRS RBF models seem to better capture the objective functions than GP regression models. One possible explanation is that the ProSRS RBF regression uses a cross-validation procedure, so that the best model is selected directly according to the data, whereas GP regression builds models relying on prior distributional assumptions about the data (i.e., a Gaussian process with some kernel). Therefore, in a way the ProSRS regression procedure makes fewer assumptions about the data and is more "data-driven" than GP. Since the quality of a surrogate has a direct impact on how well the proposed points exploit the objective function, we believe that the superiority of the RBF models plays an important part in the success of our ProSRS algorithm over those Bayesian optimization algorithms.
Second, for those test functions where ProSRS RBF and GP have similar modeling performance (i.e., Schaffer, Dropwave and Hartmann), the optimization performance of ProSRS (using RBF) is nonetheless generally better than Bayesian optimization (using the GP models), as we can see from Fig. 3. This suggests that, with surrogate modeling performance being equal, the ProSRS point selection strategy (i.e., SRS and the zoom strategy) may still have an edge over the probability-based selection criteria (e.g., EI-MCMC) of Bayesian optimization.
4.3 Algorithm cost
In Section 4.1 we saw that ProSRS achieved faster convergence per iteration, meaning that it was proposing better points to evaluate in each iteration. In this section we compare the cost of the algorithms and show that ProSRS is, in addition, much cheaper per iteration. The main focus here is comparing the cost of the algorithms themselves, not the cost of evaluating the function, since the function-evaluation cost is roughly the same across algorithms.
Figure 6 and Figure 7 show the computational costs of running the different algorithms on the twelve optimization benchmark problems and the two hyperparameter-tuning problems. The time was benchmarked on Blue Waters XE compute nodes. We observe that our ProSRS algorithm is about 1–2 orders of magnitude cheaper than the two GP-LP algorithms, and about 3–4 orders of magnitude cheaper than the GP-EI-MCMC algorithm. It is worth noting that for the hyperparameter-tuning problems (Fig. 7), the cost of the GP-EI-MCMC algorithm is in fact consistently higher than that of the training and evaluation of a machine learning model, and the cost gap becomes larger as the number of iterations increases.
From Fig. 7 we can also see that the cost of our algorithm scales roughly linearly with the number of iterations in the long run (i.e., when ProSRS is run for a large number of iterations, the general trend of the per-iteration cost stays flat). This scaling behavior is generally true for our algorithm, and is a consequence of the zoom strategy and the restart mechanism exploited by our algorithm.
4.4 Overall optimization efficiency
In this section, we show the overall optimization efficiency for the two hyperparameter-tuning problems, which takes into account not only the optimization performance per iteration but also the cost of the algorithm and of the expensive function evaluations. From Fig. 8, we can see that our ProSRS algorithm is the best among all the algorithms. Because of the high cost of the GP-EI-MCMC algorithm, the advantage of our algorithm over GP-EI-MCMC becomes even more pronounced compared to the iteration-based performance measurement (Fig. 4).
5 Conclusion
In this paper we introduced a novel parallel surrogate optimization algorithm, ProSRS, for noisy expensive optimization problems. We developed a “zoom strategy” for efficiency improvement, a weighted radial basis regression procedure, and a new SRS method combining the two types of candidate points in the original SRS work. We proved an analytical result for our algorithm (Theorem 1): if ProSRS is run for sufficiently long, with probability converging to one there will be at least one point among all the evaluations that will be arbitrarily close to the global minimizer of the objective function. Numerical experiments show that our algorithm outperforms three current Bayesian optimization algorithms on both optimization benchmark problems and real machine learning hyperparametertuning problems. Our algorithm not only shows better optimization performance per iteration but also is orders of magnitude cheaper to run.
6 Acknowledgments
This research was supported by NSF CMMI1150490 and is part of the Blue Waters sustainedpetascale computing project, which is supported by the National Science Foundation (awards OCI0725070 and ACI1238993) and the state of Illinois. Blue Waters is a joint effort of the University of Illinois at UrbanaChampaign and its National Center for Supercomputing Applications.
References
 [1] Tensorflow tutorial. URL https://www.tensorflow.org/versions/r1.1/get_started/mnist/beginners.
 Amaran et al. [2016] S. Amaran, N. V. Sahinidis, B. Sharda, and S. J. Bury. Simulation optimization: a review of algorithms and applications. Annals of Operations Research, 240(1):351–380, May 2016.
 Azimi et al. [2010] J. Azimi, A. Fern, and X. Z. Fern. Batch Bayesian optimization via simulation matching. In Advances in Neural Information Processing Systems, pages 109–117, 2010.
 Bergstra and Bengio [2012] J. Bergstra and Y. Bengio. Random search for hyperparameter optimization. Journal of Machine Learning Research, 13(2):281–305, 2012.
 Bubeck et al. [2011] S. Bubeck, R. Munos, G. Stoltz, and C. Szepesvári. X-armed bandits. Journal of Machine Learning Research, 12(May):1655–1695, 2011.
 Carpentier and Valko [2015] A. Carpentier and M. Valko. Simple regret for infinitely many armed bandits. In International Conference on Machine Learning, pages 1133–1141, 2015.
 Chen et al. [2014] X. M. Chen, L. Zhang, X. He, C. Xiong, and Z. Li. Surrogate-based optimization of expensive-to-evaluate objective for optimal highway toll charges in transportation network. Computer-Aided Civil and Infrastructure Engineering, 29(5):359–381, 2014.
 Conn et al. [1997] A. R. Conn, K. Scheinberg, and Ph. L. Toint. Recent progress in unconstrained nonlinear optimization without derivatives. Mathematical Programming, 79(1):397–414, 1997.
 Contal et al. [2013] E. Contal, D. Buffoni, A. Robicquet, and N. Vayatis. Parallel Gaussian process optimization with upper confidence bound and pure exploration. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 225–240. Springer, 2013.
 Desautels et al. [2014] T. Desautels, A. Krause, and J. W. Burdick. Parallelizing exploration-exploitation tradeoffs in Gaussian process bandit optimization. The Journal of Machine Learning Research, 15(1):3873–3923, 2014.
 Dheeru and Karra Taniskidou [2017] D. Dheeru and E. Karra Taniskidou. UCI machine learning repository, 2017. URL https://archive.ics.uci.edu/ml.
 González et al. [2016] J. González, Z. Dai, P. Hennig, and N. Lawrence. Batch Bayesian optimization via local penalization. In Artificial Intelligence and Statistics, pages 648–657, 2016. URL https://github.com/SheffieldML/GPyOpt.
 Haftka et al. [2016] R. T. Haftka, D. Villanueva, and A. Chaudhuri. Parallel surrogate-assisted global optimization with expensive functions – a survey. Structural and Multidisciplinary Optimization, 54(1):3–13, 2016.
 Köchel and Nieländer [2005] P. Köchel and U. Nieländer. Simulationbased optimisation of multiechelon inventory systems. International Journal of Production Economics, 93:505–513, 2005.

 LeCun et al. Y. LeCun, C. Cortes, and C. J. C. Burges. The MNIST database of handwritten digits. URL http://yann.lecun.com/exdb/mnist/.
 Li and Xia [2017] H. Li and Y. Xia. Infinitely many-armed bandits with budget constraints. In Thirty-First AAAI Conference on Artificial Intelligence, 2017.
 Nelson et al. [2001] B. L. Nelson, J. Swann, D. Goldsman, and W. Song. Simple procedures for selecting the best simulated system when the number of alternatives is large. Operations Research, 49(6):950–963, 2001.
 Ni et al. [2013] E. C. Ni, S. R. Hunter, and S. G. Henderson. Ranking and selection in a high performance computing environment. In Simulation Conference (WSC), 2013 Winter, pages 833–845. IEEE, 2013.
 Osorio and Bierlaire [2010] C. Osorio and M. Bierlaire. A simulation-based optimization approach to perform urban traffic control. In TRISTAN VII, Triennial Symposium on Transportation Analysis, number EPFLTALK152410, 2010.
 Prakash et al. [2008] P. Prakash, G. Deng, M. C. Converse, J. G. Webster, D. M. Mahvi, and M. C. Ferris. Design optimization of a robust sleeve antenna for hepatic microwave ablation. Physics in Medicine and Biology, 53(4):1057, 2008.
 Quinonero-Candela and Rasmussen [2005] J. Quinonero-Candela and C. E. Rasmussen. Analysis of some methods for reduced rank Gaussian process regression. In Switching and Learning in Feedback Systems, pages 98–127. Springer, 2005.

 Rakshit et al. [2016] P. Rakshit, A. Konar, and S. Das. Noisy evolutionary optimization algorithms – a comprehensive survey. Swarm and Evolutionary Computation, 2016.
 Rasmussen and Williams [2006] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning, volume 1. MIT Press, Cambridge, 2006.
 Regis [2016] R. G. Regis. Trust regions in Kriging-based optimization with expected improvement. Engineering Optimization, 48(6):1037–1059, 2016.
 Regis and Shoemaker [2007] R. G. Regis and C. A. Shoemaker. A stochastic radial basis function method for the global optimization of expensive functions. INFORMS Journal on Computing, 19(4):497–509, 2007.
 Regis and Shoemaker [2009] R. G. Regis and C. A. Shoemaker. Parallel stochastic global optimization using radial basis functions. INFORMS Journal on Computing, 21(3):411–426, 2009.
 Romero et al. [2013] P. A. Romero, A. Krause, and F. H. Arnold. Navigating the protein fitness landscape with Gaussian processes. Proceedings of the National Academy of Sciences, 110(3):E193–E201, 2013.
 Shah and Ghahramani [2015] A. Shah and Z. Ghahramani. Parallel predictive entropy search for batch global optimization of expensive objective functions. In Advances in Neural Information Processing Systems, pages 3330–3338, 2015.
 Snoek et al. [2012] J. Snoek, H. Larochelle, and R. P. Adams. Practical Bayesian optimization of machine learning algorithms. In Advances in Neural Information Processing Systems, pages 2951–2959, 2012. URL https://github.com/JasperSnoek/spearmint.
 Spall [2005] J. C. Spall. Introduction to Stochastic Search and Optimization: Estimation, Simulation, and Control, volume 65. John Wiley & Sons, 2005.
 Wang et al. [2009] Y. Wang, J.-Y. Audibert, and R. Munos. Algorithms for infinitely many-armed bandits. In Advances in Neural Information Processing Systems, pages 1729–1736, 2009.
 Wu and Frazier [2016] J. Wu and P. Frazier. The parallel knowledge gradient method for batch Bayesian optimization. In Advances in Neural Information Processing Systems, pages 3126–3134, 2016.
 Wu et al. [2017] J. Wu, M. Poloczek, A. G. Wilson, and P. Frazier. Bayesian optimization with gradients. In Advances in Neural Information Processing Systems, pages 5267–5278, 2017.
 Xie et al. [2012] J. Xie, P. I. Frazier, S. Sankaran, A. Marsden, and S. Elmohamed. Optimization of computationally expensive simulations with Gaussian processes and parameter uncertainty: Application to cardiovascular surgery. In Communication, Control, and Computing (Allerton), 2012 50th Annual Allerton Conference on, pages 406–413. IEEE, 2012.
Appendix A Proof of Theorem 1
Theorem 1.
Suppose the objective function $f$ in Eq. 1 is continuous on the domain and $x^*$ is the unique minimizer of $f$, characterized by $f(x^*) < f(x)$ for all $x \neq x^*$ in the domain. Let $x_n^{\mathrm{best}}$ be the point with the minimum objective value among all the evaluated points up to iteration $n$. Then $x_n^{\mathrm{best}} \to x^*$ almost surely as $n \to \infty$.
Proof.
We define the zoom level to be zero for the root node and, whenever zooming in occurs, the zoom level of the child node is one plus that of its parent node so that every node in the tree is associated with a unique zoom level (see Fig. 2).
First, we argue that there is an upper bound on the zoom level for the ProSRS algorithm. Since after each zoom-in step the size of the domain shrinks by at least the zoom-in factor $\gamma \in (0, 1)$, the domain length for a node at zoom level $l$ is upper bounded by $\gamma^{l} (b_i - a_i)$ in each dimension $i$, where $a_i$ and $b_i$ are the domain boundaries for the root node (Eq. 1). Now let us consider a node with zoom level $l > \log_\gamma r$, where $r$ is the prescribed resolution parameter for the restart (see Eq. 4). Denoting the domain length of this node in dimension $i$ by $L_i$ and the number of evaluation points within its domain by $m$, we then have, for all $i$, $L_i \le \gamma^{l} (b_i - a_i) < r\,(b_i - a_i)$,
which would satisfy the restart condition (Eq. 4). This implies that the zoom level of the ProSRS algorithm cannot exceed $\lceil \log_\gamma r \rceil$. In other words, the zoom level is upper bounded.
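For intuition, the zoom-level bound described above can be computed numerically. The sketch below uses generic labels $\gamma$ for the zoom-in factor and $r$ for the resolution parameter; these symbols are illustrative choices here, not necessarily the paper's own notation.

```python
import math

def max_zoom_level(gamma, r):
    """Upper bound on the zoom level: roughly ceil(log_gamma(r)) for a
    zoom-in factor gamma in (0, 1) and resolution r in (0, 1). Beyond
    this level the per-dimension domain length falls below the restart
    resolution, so zooming in cannot continue."""
    assert 0.0 < gamma < 1.0 and 0.0 < r < 1.0
    return math.ceil(math.log(r) / math.log(gamma))

# Halving the domain on each zoom-in (gamma = 0.5) with resolution
# r = 0.01 caps the zoom level at 7, since 0.5**7 < 0.01 <= 0.5**6.
```

The bound is geometric: each zoom-in multiplies every domain length by at most $\gamma$, so only logarithmically many zoom-ins fit above the resolution threshold.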
Now fix some $\epsilon > 0$, and let $n_0$ denote the number of iterations for the initial space-filling design. The main idea of the following proof is similar to that in the original SRS paper [25].
Since the objective function $f$ is continuous at the unique minimizer $x^*$, there exists $\delta > 0$ so that whenever $x$ is within the open ball $B(x^*, \delta)$, $f(x) < f(x^*) + \epsilon$.
The probability that a candidate point generated in the root node (of either Type I or Type II) is located within $B(x^*, \delta) \cap \mathcal{D}$ can be shown to be bounded from below by some positive constant $p_0$ (see Section 2 of Regis and Shoemaker [25]). Here $\mathcal{D}$ is the domain of the optimization problem (Eq. 1). Since all the candidate points are generated independently, the probability that all the candidate points are within $B(x^*, \delta) \cap \mathcal{D}$ is greater than or equal to $p_0^{\,c}$, where $c$ is a constant denoting the number of candidate points.
Now we define a positive quantity , where is the minimum zoomout probability (see Section 2.5). We further define the event
Let probability with the understanding that . Then
(7) 
For now, let us assume . For the iteration , there are 3 possible events that could happen when we are about to run Line 20 of the ProSRS algorithm (Alg. 1):
Let $l$ be the zoom level of the current node at this moment. Then we have the following inequalities:
That is, for any , for all . Hence, , which implies for any . Now if , again we have because the probability that all the candidates are within for the iteration is greater than or equal to . Therefore, holds true for all . Using Eq. 7, we have
(8) 
The bound in Eq. 8 converges to zero as $n \to \infty$; equivalently, the corresponding probability converges to one. Observe that
Hence, $f(x_n^{\mathrm{best}})$ converges to $f(x^*)$ in probability as $n \to \infty$. Therefore, there is a subsequence of $f(x_n^{\mathrm{best}})$ that converges almost surely to $f(x^*)$. Because $f(x_n^{\mathrm{best}})$ is monotonically decreasing, so that the limit always exists, $f(x_n^{\mathrm{best}})$ converges to $f(x^*)$ almost surely. Finally, by the uniqueness of the minimizer, $x_n^{\mathrm{best}}$ converges to $x^*$ almost surely. The arguments for the last two almost-sure convergences are essentially the same as those used in proving the convergence of a simple random search algorithm (see the proof of Theorem 2.1 in Spall [30]).
∎
Appendix B Optimization benchmark functions
Table 2 summarizes the benchmark test problems. For each problem, Gaussian noise was added to the true underlying function. We used commonly-used optimization domains, and the standard deviation of the noise was chosen to roughly match the range of each function.
Function                | Optimization Domain | Std. of Gaussian noise
Ackley-10               |                     | 1
Alpine-10               |                     | 1
Griewank-10             |                     | 2
Levy-10                 |                     | 1
SumPower-10             |                     | 0.05
SixHumpCamel-2          |                     | 0.1
Schaffer-2              |                     | 0.02
Dropwave-2              |                     | 0.02
GoldsteinPrice-2        |                     | 2
Rastrigin-2             |                     | 0.5
Hartmann-6              |                     | 0.05
PowerSum-4              |                     | 1
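For concreteness, a noisy benchmark evaluation can be sketched as follows, using the Ackley function with noise standard deviation 1 as listed in Table 2. The Ackley constants below are the usual defaults and are assumptions here, since the exact domains and parameterizations are not reproduced in this appendix.

```python
import math
import random

def ackley(x, a=20.0, b=0.2, c=2 * math.pi):
    """Standard Ackley test function; its global minimum is 0 at the origin."""
    d = len(x)
    s1 = sum(xi * xi for xi in x) / d
    s2 = sum(math.cos(c * xi) for xi in x) / d
    return -a * math.exp(-b * math.sqrt(s1)) - math.exp(s2) + a + math.e

def noisy_ackley(x, sigma=1.0, rng=None):
    """Noisy evaluation: the true value plus additive Gaussian noise
    (sigma = 1 for the 10-dimensional Ackley problem, per Table 2)."""
    rng = rng or random.Random()
    return ackley(x) + rng.gauss(0.0, sigma)
```

An optimizer only ever sees `noisy_ackley`; the underlying `ackley` values are used solely to assess progress after the fact.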
Appendix C Hyperparametertuning problems
Hyperparameter tuning can be formulated as the optimization problem in Eq. 1. In this case, the function $f$ is a validation or cross-validation error for a machine learning model, and the vector $x$ represents the hyperparameters to be tuned. The function is typically expensive, since one evaluation of $f$ involves training and scoring one or multiple machine learning models. The noise associated with $f$ may come from the fact that a machine learning algorithm (e.g., random forest) contains random elements or that a stochastic optimization method (e.g., SGD) is invoked during the training process.
Next, we describe the details of the two hyperparameter-tuning problems considered in this work. For both problems, when tuning an integer-valued hyperparameter, we rounded the continuous output from an optimization algorithm to the nearest integer before feeding it to the machine learning algorithm.
Random forest.
We tuned a random forest, one of the most widely used classification algorithms, on the well-known Adult dataset [11]. The dataset consists of 48842 instances with 14 attributes, and the task is to classify income based on census information. We tuned 5 hyperparameters: the number of trees, the number of features, the maximum depth of a tree, the minimum number of samples for a node split, and the minimum number of samples for a leaf node. We minimized the 5-fold cross-validation error.
Deep neural network.
We tuned a feed-forward deep neural network with 2 hidden layers on the popular MNIST dataset [15]. This tuning problem is also considered in [33]. We used the same training-validation data split as in the TensorFlow tutorial [1], with the training set having 55000 data points and the validation set having 5000 data points. We tuned 7 hyperparameters: the number of units in each hidden layer, two regularization constants (both on a log scale), the learning rate (on a log scale), the batch size, and the number of epochs. We minimized the validation error.
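For the hyperparameters tuned on a log scale, the optimizer works in log space and the value is transformed back before training. A minimal sketch for the learning rate follows; the bounds are hypothetical, since the exact search ranges are not reproduced here.

```python
# Hypothetical log10-scale bounds for the learning rate (placeholder range).
LOG10_LR_BOUNDS = (-5.0, -1.0)

def decode_learning_rate(u):
    """Map the optimizer's continuous variable u = log10(learning rate)
    back to the actual learning rate, clipping to the search bounds."""
    lo, hi = LOG10_LR_BOUNDS
    return 10.0 ** min(hi, max(lo, u))
```

Optimizing in log space makes the search treat, say, the step from 1e-4 to 1e-3 and the step from 1e-3 to 1e-2 as equally large moves.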