Preprint
Review

Tutorial on Bayesian Optimization

This version is not peer-reviewed.

Submitted: 15 March 2023

Posted: 16 March 2023


Abstract
Machine learning has three main branches: supervised learning, unsupervised learning, and reinforcement learning. Reinforcement learning holds much potential for artificial intelligence (AI) applications because it solves real problems by a progressive process in which candidate solutions are improved and fine-tuned continuously. This progressive approach, which reflects the ability to adapt, suits the real world, where most events occur and change continuously and unexpectedly; moreover, data is growing too large for supervised and unsupervised learning to draw valuable knowledge from it all at once. Bayesian optimization (BO) models an optimization problem in a probabilistic form called a surrogate model and then directly maximizes an acquisition function created from that surrogate model in order to maximize the target function implicitly and indirectly, thereby finding the solution of the optimization problem. A popular surrogate model is the Gaussian process regression model. The process of maximizing the acquisition function is based on repeatedly updating the posterior probability of the surrogate model, which improves after every iteration. Taking advantage of an acquisition (utility) function is also common in decision theory, but the semantic meaning behind BO is that it solves problems by a progressive and adaptive approach, updating the surrogate model from a small piece of data at a time, according to the ideology of reinforcement learning. In this sense, BO is a reinforcement learning algorithm with many potential applications, and thus it is surveyed in this research with attention to its mathematical ideas. Moreover, the solution of the optimization problem is important not only to applied mathematics but also to AI.

1. Introduction

Given a target function y = f(x), the optimization problem is to find an extremizer x* such that f(x*) is an extreme value y* = f(x*). As a convention, f(x) is a scalar-by-vector function whose output (observed value or evaluated value) y is a scalar and whose variable x is an n-dimension vector. The extremizer x* can be a minimizer or a maximizer, so that y* = f(x*) is a minimum or a maximum, respectively. The optimization problem is specified as follows:
x* = argmin_x f(x)   or   x* = argmax_x f(x)
If the extremizer x* is a local minimizer or local maximizer, the optimization problem is a local optimization problem, for which traditional methods such as Newton–Raphson and gradient descent are excellent solutions, but they require f(x) to be totally convex for minimization (concave for maximization). The problem becomes much more complex if f(x) is not totally convex (concave), in which case x* is a global extremizer. This global optimization problem is the one addressed in this research. In the literature, x* is a minimizer by default, but in the context of Bayesian optimization (BO) it is better to consider x* a maximizer, because BO mainly relates to probabilistic distributions, whose peaks are of most concern. This causes no difficulty, because minimization is the inverse of maximization, for example:
argmin_x f(x) = argmax_x (−f(x))
There are three approaches to solving the (global) optimization problem: the analytic approach, the probabilistic approach, and the heuristic approach. The analytic approach applies purely mathematical tools to finding optimizers, such as approximation, cutting plane, branch and bound, and interval methods, which focus on the analytic essence of the algebraic target function. The probabilistic approach treats the search for optimizers as random selection, but the random selection is guided by some probabilistic model so as to reach an optimizer. The heuristic approach, the most flexible of the three, tries to apply or imitate heuristic assumptions in searching for optimizers. It does not concern itself much with mathematical reasoning, because feasibility and effectiveness are most important. As usual, the heuristic approach imitates natural activities; for example, particle swarm optimization (PSO) simulates how a flock of birds searches for food. Evolutionary algorithms like PSO and artificial bee colony (ABC), which are inspired by biological activities, are popular heuristic methods. However, there are some implicit connections between the heuristic approach (concretely, evolutionary algorithms) and the probabilistic approach, which I mentioned in a research on minima distribution (Nguyen, 2022).
Bayesian optimization (BO) belongs to the probabilistic approach. It is based on Bayesian inference, which considers a parameter as a random variable and updates the posterior probability of the parameter based on evidence and prior probability. Because BO does not act directly on the target function f(x), it must model f(x) as a probabilistic model and then define an acquisition function for that probabilistic model. BO solves the optimization problem by maximizing the acquisition function instead of maximizing the target function. In short, the two main tasks of BO are:
  • Modeling f(x) by the probabilistic model called surrogate model (Shahriari, Swersky, Wang, Adams, & Freitas, 2016, pp. 149-150).
  • Defining the acquisition function for the surrogate model so that it is possible to maximize the acquisition function.
The posterior probability of the surrogate model in BO is updated continuously, along with maximizing the acquisition function continuously, until a maximizer of the target function is reached. The following is the pseudo code of BO.
Table 1.1. BO algorithm.
Model f(x) by the surrogate model S(f | Dn) where sample Dn is a set of variable values xi.
Define the acquisition function α(x | f, S) based on both f(x) and S(f | Dn) whose variable is x.
Initialize n.
Initialize randomly Dn = {x1, x2,…, xn}.

While maximizer x* is not reached and the number of iterations does not exceed the limit
Update posterior probability of S(f | Dn) with sample Dn.
Determine acquisition function α(x | f, S) with S(f | Dn).
Find xn+1 as a maximizer of α(x | f, S) with regard to x.
x_{n+1} = argmax_x α(x | f, S)
(Checking whether xn+1 is maximizer x*).
Add xn+1 to sample Dn.
D_n = D_n ∪ {x_{n+1}}
Increase n = n + 1.
End while
In the table above, there is a question about how to check whether a given xn+1 is the global maximizer x*. The checking task is implicitly executed by BO applications; for example, if two or more sequential iterations produce the same value xn+1 = xn (or a small enough deviation |xn+1 − xn|), such a value xn+1 can be taken as the maximizer x*, because the BO algorithm drives xn+1 higher and higher by maximizing the updated acquisition function after every iteration. Because the checking task is not an essential task of BO, it is often not listed in the BO literature. Besides, how the acquisition function is defined can decide whether the optimization problem is maximization or minimization.
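The loop of Table 1.1, including the simple convergence check just described, can be sketched in Python. This is only a toy sketch: the nearest-neighbour "surrogate" and the upper-confidence-bound "acquisition" below are hypothetical stand-ins for a real surrogate such as Gaussian process regression, chosen so the sketch is self-contained.

```python
def bayes_opt(f, update_surrogate, acquisition, x0, n_iter=12, tol=1e-6):
    D = list(x0)                       # sample D_n = {x_1, ..., x_n}
    x_prev = None
    for _ in range(n_iter):
        post = update_surrogate(D, f)  # update posterior of S(f | D_n)
        x_next = acquisition(post)     # x_{n+1} = argmax_x alpha(x | f, S)
        if x_prev is not None and abs(x_next - x_prev) < tol:
            break                      # deviation |x_{n+1} - x_n| small enough
        D.append(x_next)               # D_n = D_n U {x_{n+1}}
        x_prev = x_next
    return max(D, key=f)               # best point evaluated so far

def update_surrogate(D, f):
    """Crude nearest-neighbour surrogate: the mean is the value at the
    nearest sampled point; 'uncertainty' grows with distance to it."""
    ys = {x: f(x) for x in D}
    def post(x):
        xn = min(ys, key=lambda xi: abs(xi - x))
        return ys[xn], abs(x - xn)     # (mean, sigma) stand-ins
    return post

def acquisition(post):
    """Upper-confidence-bound acquisition maximized over a grid on [-5, 5]."""
    grid = [i / 50 for i in range(-250, 251)]
    return max(grid, key=lambda x: post(x)[0] + post(x)[1])

f = lambda x: -(x - 2.0) ** 2          # black-box target, maximizer x* = 2
x_star = bayes_opt(f, update_surrogate, acquisition, x0=[-4.0, 0.0])
```

The loop only ever queries f at the points it decides to sample, which is exactly the black-box behaviour discussed below.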
If the surrogate model S(f | Dn) has explicit parameters related directly to f(x), updating the posterior probability of S(f | Dn) indeed means updating the posterior probability of those explicit parameters, and BO is called parametric BO (PBO). If the surrogate model S(f | Dn) has no explicit parameters, or its parameters are not related directly to f(x), then BO is called nonparametric BO (NBO). As seen in the BO algorithm, BO does not concern itself with the algebraic formulation of the target function f(x); it only observes the output y = f(x), and hence f(x) is a black box with regard to BO. In other words, f(x) is an arbitrary mapping from BO's viewpoint, so BO is a flexible solution for global optimization with many potential applications. For BO, especially NBO, f(x) is a black box except for its output y = f(x), and so BO maximizes the acquisition function α(x | f, S) instead of maximizing f(x) directly, because BO knows α(x | f, S), which is created from the surrogate model S(f | Dn) that BO built up before. How the acquisition function is defined depends on the kind of BO (PBO or NBO) and on how the surrogate model is defined. BO and acquisition functions will be detailed in the next sections.
In general, the essence of BO is continuous improvement via updating posterior probability, which follows the ideology of reinforcement learning (Shahriari, Swersky, Wang, Adams, & Freitas, 2016, p. 151). Therefore, it is possible to consider BO a reinforcement learning technique. Before describing subjects related to Bayesian optimization (BO) in detail, it is necessary to state conventions about mathematical notation. Lowercase un-bold letters like a, b, c, x, y, and z denote scalars, whereas lowercase bold letters denote vectors. In some cases, uppercase un-bold letters like A, B, C, X, Y, and Z can denote vectors. Uppercase bold or un-bold letters like A, B, C, X, Y, and Z denote matrices. Variables can be denoted as x, y, z, X, Y, and Z, whereas constants can be denoted as a, b, c, A, B, and C. Uppercase letters like X, Y, and Z can also denote random variables. However, scalars, vectors, matrices, variables, and random variables are stated explicitly in concrete cases. A vector is a column vector by default if there is no additional explanation. The superscript "T" denotes the transposition operator of vectors and matrices; for example, given a column vector x, the notation xT is the row vector that is the transpose of x.

2. Bayesian Optimization

As aforementioned, Bayesian optimization (BO) solves the optimization problem by maximizing the acquisition function α(x | f, S), which is derived from the surrogate model S(f | Dn), where Dn is the current sample. The surrogate model S(f | Dn), in turn, is a probabilistic representation of the target function f(x). The way the surrogate model is defined decides the kind of BO, of which there are two: parametric BO (PBO) and nonparametric BO (NBO). PBO implies that parameters of the surrogate model S(f | Dn) are included explicitly in the target function f(x). Otherwise, the surrogate model S(f | Dn) of NBO has no explicit parameters, or its parameters are not included explicitly in f(x). Firstly, PBO is surveyed and then NBO is researched.
The simplest PBO, inspired by a system consisting of r events Xk, is known as the Bernoulli system; for example, a lottery machine or bandit system has r arms represented by r random variables Xk, and the probability of arm k yielding a lottery prize is P(Xk=1) (Shahriari, Swersky, Wang, Adams, & Freitas, 2016, pp. 152-153). Such a PBO is called Bernoulli PBO, where each binary random variable Xk, whose outcome is successful (Xk=1) or failed (Xk=0), follows a Bernoulli distribution with parameter θk (0 ≤ θk ≤ 1).
P(X_k | θ_k) = θ_k if X_k = 1, and 1 − θ_k if X_k = 0
This means:
θ_k = P(X_k = 1 | θ_k)
Note, the general notation P(.) denotes probability function in both discrete case and continuous case. In Bayesian inference, parameter is considered as random variable which also follows distribution and so suppose each θk has prior probability P(θk) and hence, the probability P(Xk=1) of event Xk=1 is expectation of Bernoulli distribution P(Xk=1 | θk) within the prior probability P(θk).
E(θ_k) = P(X_k = 1) = ∫ P(X_k = 1 | θ_k) P(θ_k) dθ_k = ∫ θ_k P(θ_k) dθ_k
Note, the notation E(.) denotes the expectation of a random variable. Going back to the example of the lottery machine, P(Xk=1) is the probability that arm k yields a lottery prize, which is called the winning probability of arm k. A gambler does not know exactly which arm is the best one to win a prize, so he tries to choose the arm k having the largest winning probability P(Xk=1). Consequently, if he wins (Xk=1) then the winning probability P(Xk=1) is increased, and otherwise, if he loses, P(Xk=1) is decreased. The next time, the gambler will continue to choose the arm whose winning probability is largest, and so on. Suppose the gambler's choices of arms over n times are recorded as the sample Dn = {Xv1, Xv2,…, Xvn}, where each observation Xvi indicates that the gambler selected arm vi at the ith time with result Xvi (equal to 0 or 1). Let Dk denote the sub-sample of Dn in which arm k was selected, and let sk and tk be the numbers of outcomes Xk=1 and Xk=0, respectively, extracted from Dk. Supposing Dk obeys the independent and identically distributed (iid) criterion, the likelihood function of Dk is:
P(D_k | θ_k) = θ_k^{s_k} (1 − θ_k)^{t_k}
Sample Dk here is called binomial sample when Xk follows Bernoulli distribution which leads to the event that the likelihood function of Dk above follows binomial distribution. Marginal probability of Dk within the prior probability P(θk) is:
P(D_k) = ∫ P(D_k | θ_k) P(θ_k) dθ_k = E(θ_k^{s_k} (1 − θ_k)^{t_k})
Posterior probability of θk given sample Dk according to Bayes’ rule is:
P(θ_k | D_k) = P(D_k | θ_k) P(θ_k) / P(D_k)
The probability P(Xk=1 | Dk) of Xk=1 given Dk is:
E(θ_k | D_k) = P(X_k = 1 | D_k) = ∫ P(X_k = 1 | θ_k) P(θ_k | D_k) dθ_k = ∫ θ_k P(θ_k | D_k) dθ_k
Obviously, for the example of lottery machine, the winning probability of arm k is expectation of parameter θk within its posterior probability given sample Dk. Therefore, given r parameters as parameter vector Θ = (θ1, θ2,…, θr)T, the target function of Bernoulli PBO is essentially an index function specified as follows:
f(x = k) = f(k) = E(θ_k | D_k)   where k = 1, 2,…, r
The optimization problem is specified as follows:
k* = argmax_k f(k) = argmax_k E(θ_k | D_k)
Obviously, the target function is defined based on the posterior probability of θk which is now considered as surrogate model of Bernoulli PBO.
S(f | D_n) ≡ P(θ_k | D_k)
Bernoulli PBO is a PBO because the surrogate model and the target function share the same parameter θk. The acquisition function of Bernoulli PBO is the same as the target function:
α(k | f, S) = f(k) = E(θ_k | D_k)
The problem now is how to specify the posterior probability of θk. Suppose the prior probability of each θk follows beta distribution:
P(θ_k) = beta(θ_k | a_k, b_k) = (Γ(a_k + b_k) / (Γ(a_k) Γ(b_k))) θ_k^{a_k − 1} (1 − θ_k)^{b_k − 1}
There are two hyperparameters of the beta distribution, in which ak indicates the number of successful outcomes and bk indicates the number of failed outcomes in ak + bk trials. Note, Γ(.) denotes the gamma function (Neapolitan, 2003, p. 298), which is the continuous form of the factorial function.
Γ(x) = ∫_0^{+∞} t^{x−1} e^{−t} dt
Given the prior beta distribution, the probability of Xk=1 is:
E(θ_k) = P(X_k = 1) = ∫ θ_k beta(θ_k | a_k, b_k) dθ_k = a_k / (a_k + b_k)
Given sub-sample Dk the posterior probability of each θk is:
P(θ_k | D_k) = beta(θ_k | a_k + s_k, b_k + t_k) = (Γ(a_k + s_k + b_k + t_k) / (Γ(a_k + s_k) Γ(b_k + t_k))) θ_k^{a_k + s_k − 1} (1 − θ_k)^{b_k + t_k − 1}
Where sk and tk are the numbers of outcomes Xk=1 and Xk=0, respectively. Recall that this posterior probability is the surrogate model of Bernoulli PBO. Because the prior probability P(θk) and the posterior probability P(θk|Dk) have the same form, they are called conjugate distributions, and such conjugation is very important to Bayesian inference. The probability of Xk=1 given Dk, which is also the target function, is:
E(θ_k | D_k) = P(X_k = 1 | D_k) = ∫ θ_k P(θ_k | D_k) dθ_k = (a_k + s_k) / (a_k + s_k + b_k + t_k)
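As a quick numeric illustration of this conjugate update (with hypothetical counts), the posterior mean reduces to simple arithmetic:

```python
# Hypothetical arm: prior beta(a_k = 2, b_k = 2), after which the gambler
# observes s_k = 7 wins and t_k = 1 loss on this arm.
a_k, b_k, s_k, t_k = 2, 2, 7, 1
prior_mean = a_k / (a_k + b_k)                          # E(theta_k) = 2/4
posterior_mean = (a_k + s_k) / (a_k + s_k + b_k + t_k)  # E(theta_k | D_k) = 9/12
```

The winning probability of this arm rises from 0.5 under the prior to 0.75 under the posterior.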
The formulation of the surrogate model becomes neat because of the assumption of a beta distribution with a binomial sample. When the prior probability P(θk) follows a beta distribution and Xk follows a Bernoulli distribution (so that the likelihood function P(Dk | θk) follows a binomial distribution with binomial sample Dk), we obtain probabilistic conjugation in which the posterior probability P(θk|Dk) follows a beta distribution. Going back to the example of the lottery machine, it is easy to derive the Bernoulli PBO algorithm with beta distribution as follows (Shahriari, Swersky, Wang, Adams, & Freitas, 2016, p. 153):
Table 2.1. Bernoulli PBO algorithm.
Initialize hyper parameters ak and bk of r distributions beta(θk | ak, bk).

While the number of iterations does not exceed the limit
Find kn+1 as an index maximizer among r distributions beta(θk | ak, bk) so that E(θk | Dk) is maximum.
k_{n+1} = argmax_k E(θ_k | D_k) = argmax_k (a_k + s_k) / (a_k + s_k + b_k + t_k)
If X_{k_{n+1}} = 1 then increase a_{k_{n+1}} = a_{k_{n+1}} + 1.
Otherwise, if X_{k_{n+1}} = 0 then increase b_{k_{n+1}} = b_{k_{n+1}} + 1.
Increase n = n + 1.
End while
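Table 2.1 can be sketched in Python. One caveat: the pure argmax rule can lock onto a bad arm after unlucky early pulls, so this sketch draws θk from each beta posterior (Thompson sampling, a common softening of the greedy rule) while still reporting the arm with the largest posterior mean; the lottery probabilities below are hypothetical.

```python
import random

def bernoulli_pbo(pull, r, a0=1, b0=1, n_iter=500, seed=1):
    """Beta-Bernoulli bandit in the spirit of Table 2.1: each arm k keeps a
    beta(theta_k | a_k + s_k, b_k + t_k) posterior updated from wins/losses."""
    rng = random.Random(seed)
    a = [a0] * r  # a_k + s_k: prior plus observed successful outcomes
    b = [b0] * r  # b_k + t_k: prior plus observed failed outcomes
    for _ in range(n_iter):
        # draw theta_k from each posterior and play the largest draw
        k = max(range(r), key=lambda j: rng.betavariate(a[j], b[j]))
        if pull(k):   # X_k = 1: one more successful outcome
            a[k] += 1
        else:         # X_k = 0: one more failed outcome
            b[k] += 1
    # report the arm with the largest posterior mean E(theta_k | D_k)
    return max(range(r), key=lambda j: a[j] / (a[j] + b[j]))

# Hypothetical three-armed lottery machine with winning probabilities
# 0.2, 0.5 and 0.8; the algorithm should settle on the last arm.
rng = random.Random(7)
probs = [0.2, 0.5, 0.8]
best_arm = bernoulli_pbo(lambda k: rng.random() < probs[k], r=3)
```

Here `pull(k)` plays the role of observing the black-box outcome Xk of arm k.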
Because the expectation E(θk | Dk) is essentially an estimate of θk given the binomial sample according to Bayesian statistics, the target function of Bernoulli PBO can be rewritten:
f(k) = θ̂_k
Where,
θ̂_k = E(θ_k | D_k)
This implies that the target function in Bernoulli PBO is "identified" with the probabilistic parameter; hence, in the lottery machine example, each arm has no associated properties, which prevents application to more complex systems. Therefore, suppose each arm k is associated with a real number vector Xk, so that the target function is a linear function of the p-dimension Xk and a vector parameter W as follows (Shahriari, Swersky, Wang, Adams, & Freitas, 2016, p. 154):
f(x = k) = f(k) = W^T X_k   where k = 1, 2,…, r
Note, the superscript "T" denotes the transposition operator of vectors and matrices. There are r property vectors {X1, X2,…, Xr} when the number of arms is r. Suppose the output variable Y is distributed normally with mean WTXk and variance σ2 as follows:
Y ~ N(Y | W^T X_k, σ²) = (1 / √(2πσ²)) exp(−(Y − W^T X_k)² / (2σ²))
Note, the tilde sign “~” indicates probabilistic distribution of a random variable and notation N . . denotes normal distribution. Essentially, this normal distribution is likelihood function:
P(Y | Θ) = N(Y | W^T X_k, σ²)
Normal distribution is also called Gaussian distribution. According to Bayesian inference, suppose parameter W distributes normally in prior with mean μ0 and covariance matrix Σ0 as follows:
W ~ N(W | μ_0, Σ_0) = (2π)^{−p/2} |Σ_0|^{−1/2} exp(−(1/2) (W − μ_0)^T Σ_0^{-1} (W − μ_0))
The distribution above is multinormal distribution because W is random vector variable. Multinormal distribution is also called multivariate normal distribution or multivariate Gaussian distribution. Suppose parameter σ2 follows inverse gamma distribution in prior with shape parameter α0 and scale parameter β0 as follows:
σ² ~ IG(σ² | α_0, β_0) = (β_0^{α_0} / Γ(α_0)) (σ²)^{−α_0 − 1} exp(−β_0 / σ²)
Therefore, the parameter of the entire system (such as the lottery machine) is Θ = {W, σ2}, whose prior is governed by the hyperparameters {μ0, Σ0, α0, β0}, and the PBO for this parameter is called normal–inverse-gamma PBO (NIG-PBO) because it combines the multinormal distribution and the inverse-gamma distribution by the product operator. The prior probability of Θ follows the NIG distribution (Shahriari, Swersky, Wang, Adams, & Freitas, 2016, p. 154):
P(Θ) = N(W | μ_0, Σ_0) · IG(σ² | α_0, β_0)
The prior parameter is Θ0 = {μ0, Σ0, α0, β0}. The sample Dn now is Dn = X = (x1T, x2T,…, xnT)T of size n where xi is some Xk that the gambler selects at the ith time, as follows:
D_n = X = (x_1^T, x_2^T,…, x_n^T)^T =
[ x_11  x_12  …  x_1p ]
[ x_21  x_22  …  x_2p ]
[  ⋮     ⋮    ⋱   ⋮   ]
[ x_n1  x_n2  …  x_np ],   x_i = (x_i1, x_i2,…, x_ip)^T
Let y = (y1, y2,…, yn)T be the output vector of X under the target function, where yi = WTxi:
y = (y_1 = W^T x_1, y_2 = W^T x_2,…, y_n = W^T x_n)^T
Fortunately, the posterior probability of Θ given sample Dn is also a NIG distribution which is the probabilistic conjugation with the prior probability. Of course, such posterior probability is NIG-PBO surrogate model.
S(f | D_n) ≡ P(Θ | D_n) = N(W | μ_n, Σ_n) · IG(σ² | α_n, β_n)
The posterior parameter is Θn = {μn, Σn, αn, βn}. Fortunately, Θn was calculated from Dn in literature (Shahriari, Swersky, Wang, Adams, & Freitas, 2016, p. 154).
μ_n = Σ_n (Σ_0^{-1} μ_0 + X^T y)
Σ_n = (Σ_0^{-1} + X^T X)^{-1}
α_n = α_0 + n/2
β_n = β_0 + (1/2) (μ_0^T Σ_0^{-1} μ_0 + y^T y − μ_n^T Σ_n^{-1} μ_n)
Acquisition function of NIG-PBO is defined from target function and the posterior mean μn:
α(k | f, S) = α(k | μ_n) = μ_n^T X_k
Note, α(k|μn) is a function of the index k. When Θn is determined, it is easy to derive the NIG-PBO algorithm.
Table 2.2. NIG-PBO algorithm.
Initialize prior parameters Θ0 = {μ0, Σ0, α0, β0}.

While the number of iterations does not exceed the limit
Update posterior parameter Θn = {μn, Σn, αn, βn} from X=Dn.
              μ_n = Σ_n (Σ_0^{-1} μ_0 + X^T y)
              Σ_n = (Σ_0^{-1} + X^T X)^{-1}
              α_n = α_0 + n/2
              β_n = β_0 + (1/2) (μ_0^T Σ_0^{-1} μ_0 + y^T y − μ_n^T Σ_n^{-1} μ_n)
Find kn+1 as an index maximizer of the acquisition function α(k|μn) among r property vectors Xk.
k_{n+1} = argmax_k μ_n^T X_k
Add X_{k_{n+1}} to sample D_n.
D_n = D_n ∪ {X_{k_{n+1}}}
Increase n = n + 1.
End while
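The posterior update at the heart of Table 2.2 is ordinary Bayesian linear regression and can be checked numerically. Below is a sketch with NumPy on hypothetical synthetic data; note the inverse in Σn = (Σ0^{-1} + X^T X)^{-1}, which the update needs in order to produce a covariance matrix.

```python
import numpy as np

def nig_posterior(X, y, mu0, Sigma0, alpha0, beta0):
    """Posterior parameters {mu_n, Sigma_n, alpha_n, beta_n} of the
    normal-inverse-gamma model for y = W^T x (sketch of Table 2.2)."""
    n = X.shape[0]
    Sigma0_inv = np.linalg.inv(Sigma0)
    Sigma_n = np.linalg.inv(Sigma0_inv + X.T @ X)
    mu_n = Sigma_n @ (Sigma0_inv @ mu0 + X.T @ y)
    alpha_n = alpha0 + n / 2
    beta_n = beta0 + 0.5 * (mu0 @ Sigma0_inv @ mu0 + y @ y
                            - mu_n @ np.linalg.inv(Sigma_n) @ mu_n)
    return mu_n, Sigma_n, alpha_n, beta_n

# Hypothetical demo: data generated from W = (1, 2)^T with tiny noise;
# the posterior mean mu_n should recover W closely.
rng = np.random.default_rng(0)
w_true = np.array([1.0, 2.0])
X = rng.normal(size=(50, 2))
y = X @ w_true + 0.01 * rng.normal(size=50)
mu_n, Sigma_n, alpha_n, beta_n = nig_posterior(
    X, y, mu0=np.zeros(2), Sigma0=np.eye(2), alpha0=1.0, beta0=1.0)
```

With 50 observations the prior shrinkage is small, so μn lands close to the generating weights.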
When f(x) does not have specific (explicit) aspect or property which becomes an explicit parameter of the surrogate model S(f | Dn), the corresponding BO becomes nonparametric BO (NBO). The most popular technique to establish S(f | Dn) for NBO is to use Gaussian process regression (GPR) for modeling S(f | Dn). In other words, GPR is a surrogate model of NBO. This research focuses on NBO with GPR.
The kernel function is very important to GPR: it is used not only to build up GPR but also to make the GPR curve smoother. A kernel function measures the similarity between two variables (two points): the closer the two variables are, the larger their kernel function value is. Let Σ(xi, xj) denote the kernel function of two variables xi and xj; for example, a popular kernel function is the simple squared exponential function.
Σ_SE(x_i, x_j) = exp(−|x_i − x_j|² / 2)
In this research the notation |.| can denote absolute value of scalar, length (module) of vector, determinant of matrix, and cardinality of set. Concretely, kernel function Σ(xi, xj) is used to define covariance function in GPR. As a convention, let xi:j where ij denote a sequential subset of X such that xi:j = {xi, xi+1,…, xj}. Of course, we have xi:i = xi. Given two subsets xi:j and xk:l, their covariance function is:
Σ(x_{i:j}, x_{k:l}) =
[ Σ(x_i, x_k)      Σ(x_i, x_{k+1})      …  Σ(x_i, x_l)     ]
[ Σ(x_{i+1}, x_k)  Σ(x_{i+1}, x_{k+1})  …  Σ(x_{i+1}, x_l) ]
[  ⋮                ⋮                   ⋱   ⋮              ]
[ Σ(x_j, x_k)      Σ(x_j, x_{k+1})      …  Σ(x_j, x_l)     ]
Output of a covariance function is a covariance matrix if such output is invertible and symmetric. For NBO, given sample Dn = x1:n = {x1, x2,…, xn}, the consequential sequence y1:n = f(x1:n) = f(Dn) = {y1 = f(x1), y2 = f(x2),…, yn = f(xn)} is established to distribute normally with prior probability density function (prior PDF) as follows:
y_{1:n} | x_{1:n} ~ f(y_{1:n} | x_{1:n}, μ(.), Σ(.)) = N(y_{1:n} | μ(y_{1:n}), Σ(x_{1:n}, x_{1:n}))
Where, in this research, the notation f(.|.) often denotes a probability density function (PDF), which is not a concrete function, and the notation N(.|.) denotes the normal distribution; note that the general notation P(.) denotes a probability function in both the discrete case and the continuous case, whereas the term "PDF" focuses on the probability function of a continuous variable. Note, μ(.) and Σ(.) are the mean function and covariance function, respectively. In this special case, the covariance function Σ(.) is a matrix function defined based on the kernel function; exactly, Σ(.) is a matrix-by-vector function if each xi is a vector.
Σ(x_{1:n}, x_{1:n}) =
[ Σ(x_1, x_1)  Σ(x_1, x_2)  …  Σ(x_1, x_n) ]
[ Σ(x_2, x_1)  Σ(x_2, x_2)  …  Σ(x_2, x_n) ]
[  ⋮            ⋮           ⋱   ⋮          ]
[ Σ(x_n, x_1)  Σ(x_n, x_2)  …  Σ(x_n, x_n) ]
Of course, each element Σ(xi, xj) of covariance function Σ(x1:n, x1:n) is kernel function. Recall that
Σ(x, x_{1:n}) = (Σ(x, x_1), Σ(x, x_2),…, Σ(x, x_n)),   Σ(x_{1:n}, x) = (Σ(x_1, x), Σ(x_2, x),…, Σ(x_n, x))^T
It is popular to define μ(y1:n) based on μ(y) separately:
μ(y_{1:n}) = (μ(y_1), μ(y_2),…, μ(y_n))^T
Moreover, for BO, mean function μ(.) is often set to be zero as follows:
μ(y) = 0   so that   μ(y_{1:n}) = (0, 0,…, 0)^T = 0^T
Given variable x, GPR surrogate model is represented by the posterior PDF of y = f(x) given y1:n, x1:n, and x as follows:
y | y_{1:n}, x_{1:n}, x ~ f(y | y_{1:n}, x_{1:n}, x, μ(.), Σ(.)) = N(y | μ(y | y_{1:n}, x_{1:n}, x), σ²(y | x_{1:n}, x))
This posterior PDF is derived from interesting properties of normal distribution which will be mentioned in the next section. Note that μ(y | y1:n, x1:n, x) and σ2(y | x1:n, x) are mean and variance of the multinormal posterior PDF of y given y1:n, x1:n, and x, respectively.
μ(y | y_{1:n}, x_{1:n}, x) = μ(y) + Σ(x, x_{1:n}) (Σ(x_{1:n}, x_{1:n}))^{-1} (y_{1:n} − m(y_{1:n}))
σ²(y | x_{1:n}, x) = Σ(x, x) − Σ(x, x_{1:n}) (Σ(x_{1:n}, x_{1:n}))^{-1} Σ(x_{1:n}, x)
Note, (Σ(x_{1:n}, x_{1:n}))^{-1} denotes the inverse of the covariance matrix (output of the covariance function) Σ(x_{1:n}, x_{1:n}). Please pay attention that m(y_{1:n}) is a realistic mean whose elements are determined recurrently from the previously known posterior mean:
m(y_n) = μ(y | y_{1:n−1}, x_{1:n−1}, x_n)
The variance σ2(y | x1:n, x) is a function of only x, and so, in practice, the mean function μ(y) is set to zero so that the mean function μ(y | y1:n, x1:n, x) is also a function of only x, as follows (Frazier, 2018, p. 4):
μ_n(x) = μ(y | y_{1:n}, x_{1:n}, x) = Σ(x, x_{1:n}) (Σ(x_{1:n}, x_{1:n}))^{-1} (y_{1:n} − m(y_{1:n}))
σ_n²(x) = σ²(y | x_{1:n}, x) = Σ(x, x) − Σ(x, x_{1:n}) (Σ(x_{1:n}, x_{1:n}))^{-1} Σ(x_{1:n}, x)
Please pay attention that m(y1:n) is a realistic mean which is recurrently determined by previously known μn–1(x):
m(y_n) = μ_{n−1}(x_n)
The event that both μn(x) and σn2(x) are functions of only x is necessary to determine acquisition function of BO later with note that x1:n and y1:n were known. GPR surrogate model is rewritten:
y | x_{1:n}, x ~ f(y | x_{1:n}, μ(.), Σ(.)) = N(y | μ_n(x), σ_n²(x)) = S(f | D_n)
Where,
μ_n(x) = Σ(x, x_{1:n}) (Σ(x_{1:n}, x_{1:n}))^{-1} (y_{1:n} − m(y_{1:n}))
σ_n²(x) = Σ(x, x) − Σ(x, x_{1:n}) (Σ(x_{1:n}, x_{1:n}))^{-1} Σ(x_{1:n}, x)
Therefore, the acquisition function α(x | f, S) which will be based on μn(x) and σn2(x) is denoted as follows:
α(x | f, S) = α(x | μ_n(x), σ_n²(x))
Indeed, GPR is represented by the two parameters μn(x) and σn2(x), but these parameters are not included in the target function f(x), so this is an NBO. Given the acquisition function α(x | μn(x), σn2(x)) based on μn(x) and σn2(x), and the known sample Dn = x1:n = {x1, x2,…, xn}, let xn+1 be a maximizer of α(x | μn(x), σn2(x)) with regard to x; xn+1 will be updated continuously after every iteration until it reaches the global maximizer x* of f(x).
x_{n+1} = argmax_x α(x | μ_n(x), σ_n²(x))
As a result, the pseudo code of NBO with GPR is finetuned as follows:
Table 2.3. NBO algorithm with GPR.
Initialize randomly Dn = x1:n = {x1, x2,…, xn}.
Initialize m(y1:n) = μ(y1:n).

While maximizer x* is not reached and the number of iterations does not exceed the limit
Update posterior mean μn(x) and variance σn2(x) of GPR with sample Dn as follows:
μ_n(x) = Σ(x, x_{1:n}) (Σ(x_{1:n}, x_{1:n}))^{-1} (y_{1:n} − m(y_{1:n}))
σ_n²(x) = Σ(x, x) − Σ(x, x_{1:n}) (Σ(x_{1:n}, x_{1:n}))^{-1} Σ(x_{1:n}, x)
Determine acquisition function α(x | μn(x), σn2(x)) based on μn(x) and σn2(x) of GPR.
Find xn+1 as a maximizer of α(x | μn(x), σn2(x)) with regard to x.
x_{n+1} = argmax_x α(x | μ_n(x), σ_n²(x))
Set
m(y_{n+1}) = μ_n(x_{n+1})
Add xn+1 to sample Dn.
D_n = D_n ∪ {x_{n+1}} = x_{1:n+1}
Increase n = n + 1.
End while
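The two GPR updates at the heart of Table 2.3 can be written compactly with NumPy. This is a sketch assuming zero prior mean, m(y1:n) = 0, and the squared exponential kernel above; the sine target and training points are hypothetical, and a small jitter term keeps the covariance matrix invertible:

```python
import numpy as np

def se_kernel(A, B):
    """Squared exponential kernel Sigma(x_i, x_j) = exp(-|x_i - x_j|^2 / 2)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2)

def gpr_posterior(X, y, Xs, jitter=1e-10):
    """Posterior mean mu_n(x) and variance sigma_n^2(x) of GPR at query
    points Xs, conditioned on sample (X, y); zero prior mean assumed."""
    K = se_kernel(X, X) + jitter * np.eye(len(X))  # Sigma(x_1:n, x_1:n)
    Ks = se_kernel(Xs, X)                          # Sigma(x, x_1:n) rows
    mu = Ks @ np.linalg.solve(K, y)                # mu_n(x)
    v = np.linalg.solve(K, Ks.T)
    var = 1.0 - np.sum(Ks * v.T, axis=1)           # sigma_n^2(x); Sigma(x,x)=1
    return mu, var

# Hypothetical 1-D demo: condition on three evaluations of f(x) = sin(x).
X = np.array([[0.0], [1.0], [2.0]])
y = np.sin(X).ravel()
Xs = np.array([[0.0], [1.5]])
mu, var = gpr_posterior(X, y, Xs)
```

At a training point the posterior mean reproduces the observation and the variance collapses toward zero, while between training points the mean interpolates and the variance stays positive.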
In general, there are two important tasks of NBO: the first is to determine the posterior mean μn(x) and variance σn2(x) of GPR; the second is to specify the acquisition function α(x | μn(x), σn2(x)), which is described shortly here. Detailed descriptions of GPR and acquisition functions will be given in the next sections. Note that α(x | μn(x), σn2(x)) is a function of x. Recall that BO maximizes the acquisition function α(x | f, S) so as to search for the maximizer x*, because the target function f(x) is assumed to be a black box for BO, and BO first creates the surrogate model from which the acquisition function is derived later. The acquisition function is especially important to NBO because NBO does not know f(x) and the parameters of the NBO surrogate model are not relevant to f(x). An acquisition function may not be strictly required for PBO, but it is indispensable for NBO. Moreover, finding the maximizer of the acquisition function should be cheaper than finding the maximizer of the target function f(x), so that researchers in the optimization domain will pay more attention to BO. There are some acquisition functions, for example, probability of improvement, expected improvement, entropy search, and upper confidence bound, but expected improvement (EI) is the most popular one. EI is mentioned here and the others are described in the next section.
Given sample Dn = x1:n = {x1, x2,…, xn} and its evaluations y1:n = {y1, y2,…, yn}, let yn* = max{y1, y2,…, yn} be the temporary maximum at the current iteration of the NBO algorithm. The EI acquisition function is determined as follows:
α_EI(x) = α(x | μ_n(x), σ_n²(x)) = (μ_n(x) − y_n*) Φ((μ_n(x) − y_n*) / σ_n(x)) + σ_n(x) φ((μ_n(x) − y_n*) / σ_n(x))
Where σn(x) is the standard deviation of y given y1:n, x1:n, and x.
σ_n(x) = √(σ_n²(x))
Most importantly, φ(.) and Φ(.) are the standard normal PDF and the standard normal cumulative distribution function (standard normal CDF), respectively. The standard normal distribution is also called the standard Gaussian distribution.
φ(z) = (1 / √(2π)) exp(−z² / 2)
Φ(z_0) = P(z < z_0) = P(z ≤ z_0) = ∫_{−∞}^{z_0} φ(z) dz
The mean and variance of the standard normal distribution are 0 and 1, respectively. Moreover, values of the standard normal CDF are readily available in practice. As a result, the maximizer xn+1 of the EI acquisition function in the NBO algorithm at the current iteration is determined as follows:
x_{n+1} = argmax_x α_EI(x) = argmax_x [(μ_n(x) − y_n*) Φ((μ_n(x) − y_n*) / σ_n(x)) + σ_n(x) φ((μ_n(x) − y_n*) / σ_n(x))]
Recall that defining and maximizing the acquisition function is one of the two most important tasks in BO; the other is to build up the surrogate model. The essential criterion is that maximizing an acquisition function like EI should be cheaper than maximizing the analytic target function f(x) by analytic methods. Anyhow, BO is always significant because f(x) can be arbitrary or have no analytic formulation at all.
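The EI formula is easy to implement directly; below is a small sketch using only the standard library, where the μn(x), σn(x), and y* values fed in are hypothetical:

```python
import math

def phi(z):
    """Standard normal PDF."""
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def Phi(z):
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def expected_improvement(mu_n, sigma_n, y_best):
    """EI: (mu_n(x) - y*) Phi(u) + sigma_n(x) phi(u),
    with u = (mu_n(x) - y*) / sigma_n(x)."""
    if sigma_n <= 0:
        return max(mu_n - y_best, 0.0)  # no uncertainty: plain improvement
    u = (mu_n - y_best) / sigma_n
    return (mu_n - y_best) * Phi(u) + sigma_n * phi(u)

# Two illustrative points: one whose posterior mean already beats the
# incumbent y* = 1.0, one well below it but with large uncertainty.
ei_good = expected_improvement(mu_n=1.5, sigma_n=0.2, y_best=1.0)
ei_uncertain = expected_improvement(mu_n=0.5, sigma_n=1.0, y_best=1.0)
```

Note that EI is always non-negative: even a point whose mean falls below the incumbent earns positive EI from its uncertainty, which is how EI trades off exploitation and exploration.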

3. Gaussian Process Regression

Nonparametric BO is based on Gaussian process regression (GPR), which, in turn, is based on the Gaussian process; therefore, we start the description of GPR with the concept of a Gaussian process. Given a random process X = (X1, X2,…, Xn) over n timepoints, in which each Xt with t in {1, 2,…, n} is a random variable, X is called a Gaussian random process, or Gaussian process (GP) in brief, if and only if, for any finite index set {t1, t2,…, tk} of {1, 2,…, n}, the subset {Xt1, Xt2,…, Xtk}, considered as the k-dimension random vector (Xt1, Xt2,…, Xtk)T, follows a multinormal distribution, known as a multivariate Gaussian distribution. Note, each Xtj represents one dimension of the k-dimension random vector variable (Xt1, Xt2,…, Xtk)T. Moreover, please pay attention that any combination of Xt1, Xt2,…, Xtk follows a multinormal distribution too. Without loss of generality, we denote the random process as the random variable X = (X1, X2,…, Xn)T, where {t1, t2,…, tk} = {1, 2,…, n}, obeying a multinormal distribution, also called a multivariate normal distribution or multivariate Gaussian distribution, as follows:
X = (X_1, X_2,…, X_n)^T ~ N(X | μ(X), Σ(X))
Where μ(X) and Σ(X) are the mean function and covariance function of X, respectively. Note, the superscript "T" denotes the transposition operator of vectors and matrices, whereas the tilde sign "~" indicates the probabilistic distribution of a random variable. A GP is known as an infinite multinormal distribution, which is the generalized form of the n-dimension multinormal distribution because n can approach positive infinity, n = +∞. Let f(X | μ(X), Σ(X)) be the probability density function (PDF) of X when X is continuous; we have:
f(X | μ(X), Σ(X)) = N(X | μ(X), Σ(X))
Where mean μ(X) and covariance Σ(X) are functions of X, respectively. Note, in this research, notation f(.|.) often denotes PDF, which is not concrete function, and notation N . . denotes normal distribution. Probability function, distribution function, cumulative distribution function (CDF), and PDF are terms which can be exchangeable in some cases, but the term “PDF” focuses on probability function of continuous variable whereas the general notation P(.) denotes probability function in both discrete case and continuous case.
In the literature, μ(X) is assumed to be zero, μ(X) = 0T, for convenience, but it can also be defined as the random process X itself, μ(X) = X. Besides, μ(X) can be customized according to concrete applications; for example, it can be constant, μ(X) = μ. However, it is better to set μ(X) = 0T, in my opinion. As a convention in Gaussian processes, the output of a mean function is a mean, and so μ(X) also denotes the theoretical mean of X. In the general case, μ(X) is a vector function of combinations of the variables X1, X2,…, Xn belonging to X, but, as a convention, each μ(Xi) depends only on Xi, and moreover, μ(Xi) has the same formulation for every Xi. Therefore, μ(X) should be identified with μ(Xi), or with μ(X) where X denotes any Xi belonging to X.
$$\mu(X) \text{ should be } \begin{pmatrix} \mu(X_1) \\ \mu(X_2) \\ \vdots \\ \mu(X_n) \end{pmatrix}$$
The covariance function Σ(X) measures the correlation between random variables as the random process “moves” them, in which the closer two given random variables are, the larger their covariance. The two most important properties based on the covariance function Σ(X) of a random process are stationarity and isotropy, among four basic properties: stationarity, isotropy, smoothness, and periodicity. Stationarity implies that the PDF f(X | μ(X), Σ(X)) of the random process X is not changed when the process is moved in time; for example, if a new random variable Xn+1 arises to be added, then the means and covariances of the old (previous) variables Xi, where 1 ≤ i ≤ n, in X are not changed. It is proved that if GP X satisfies stationarity, Σ(X) depends only on the deviation Xi – Xj, but the converse is not asserted. However, if Σ(X) depends only on the Euclidean distance |Xi – Xj|, then GP X satisfies isotropy. If X satisfies both stationarity and isotropy, X is called a homogeneous process. In cases where each element of the matrix function Σ(X) depends only on Xi and Xj, as in the stationarity and isotropy cases, it results in the following matrix.
$$\Sigma(X) = \Sigma(X, X) = \begin{pmatrix} \Sigma(X_1, X_1) & \Sigma(X_1, X_2) & \cdots & \Sigma(X_1, X_n) \\ \Sigma(X_2, X_1) & \Sigma(X_2, X_2) & \cdots & \Sigma(X_2, X_n) \\ \vdots & \vdots & \ddots & \vdots \\ \Sigma(X_n, X_1) & \Sigma(X_n, X_2) & \cdots & \Sigma(X_n, X_n) \end{pmatrix}$$
In these cases, the covariance function Σ(X) can be “identified” with its element function Σ(Xi, Xj) when the formulation of Σ(Xi, Xj) is not changed formally; hence, Σ(Xi, Xj) is called the kernel function. Please pay attention that the term “covariance function” is slightly different from the term “kernel function”. When referring to only two variables Xi and Xj, they are the same. When referring to one or two sets of variables such as X and X*, the notations Σ(X, X*), Σ(X, X), or Σ(X) mention the covariance function as a matrix function whose elements are defined by the kernel function, if each element depends only on two variables like Xi and Xj. Exactly, Σ(.) is a matrix-by-vector function if each Xi is a vector. The output of a covariance function is a covariance matrix if such output is symmetric and invertible; in this case, the covariance function is “identified” with the covariance matrix. For explanation, suppose each Xi belonging to X is scalar (so that X is a vector) and the output of the covariance function Σ(X) is a covariance matrix; the multinormal PDF of X is formulated as follows:
$$f(X \mid \mu(X), \Sigma(X)) = \mathcal{N}(X \mid \mu(X), \Sigma(X)) = (2\pi)^{-\frac{n}{2}} |\Sigma(X)|^{-\frac{1}{2}} \exp\left(-\frac{1}{2}(X - \mu(X))^T \Sigma(X)^{-1} (X - \mu(X))\right)$$
However, each Xi in GPR can be arbitrary, such as a scalar, vector, or matrix. Note, Σ(X)–1 denotes the inverse of the covariance matrix Σ(X). The kernel function is not only essential to define the covariance function of a Gaussian process but is also used to make the GPR line smoother. The following are some kernel functions.
$$\text{Linear: } \Sigma_L(X_i, X_j) = X_i^T X_j$$
$$\text{Squared exponential: } \Sigma_{SE}(X_i, X_j) = \exp\left(-\frac{|X_i - X_j|^2}{2l^2}\right)$$
Where l is the characteristic length-scale of the process, which reinforces the similarity of Xi and Xj. In this research, the notation |.| denotes the absolute value of a scalar, length (module) of a vector, determinant of a matrix, or cardinality of a set. As a convention, let Xi:j, where i ≤ j, denote a sequential subset of X such that Xi:j = {Xi, Xi+1,…, Xj}. Of course, we have Xi:i = Xi. In the general case, Xi:j is an arbitrary subset of distinct variables. Let Σ(Xi, Xj:k) denote a row-vector covariance function of Xi and Xj:k as follows:
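As a concrete illustration, the two kernels above can be sketched in Python with NumPy (the function names here are ours, chosen for illustration, not taken from any particular library):

```python
import numpy as np

def linear_kernel(xi, xj):
    # Linear kernel: Sigma_L(Xi, Xj) = Xi^T Xj
    return float(np.dot(xi, xj))

def squared_exponential_kernel(xi, xj, length_scale=1.0):
    # Squared exponential kernel: exp(-|Xi - Xj|^2 / (2 l^2))
    d2 = float(np.sum((np.asarray(xi, float) - np.asarray(xj, float)) ** 2))
    return float(np.exp(-d2 / (2.0 * length_scale ** 2)))
```

The squared exponential kernel equals 1 when Xi = Xj and decays toward 0 as the distance grows, which is what makes nearby points highly correlated in the resulting covariance matrix.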
$$\Sigma(X_i, X_{j:k}) = \big(\Sigma(X_i, X_j), \Sigma(X_i, X_{j+1}), \dots, \Sigma(X_i, X_k)\big) = \begin{pmatrix} \Sigma(X_i, X_j) \\ \Sigma(X_i, X_{j+1}) \\ \vdots \\ \Sigma(X_i, X_k) \end{pmatrix}^T$$
Let Σ(Xj:k, Xi) denote a column-vector covariance function of Xj:k and Xi as follows:
$$\Sigma(X_{j:k}, X_i) = \begin{pmatrix} \Sigma(X_j, X_i) \\ \Sigma(X_{j+1}, X_i) \\ \vdots \\ \Sigma(X_k, X_i) \end{pmatrix}$$
As a convention, let Σ(Xi:j, Xk:l) denote a covariance function of Xi:j and Xk:l as follows:
$$\Sigma(X_{i:j}, X_{k:l}) = \begin{pmatrix} \Sigma(X_i, X_k) & \Sigma(X_i, X_{k+1}) & \cdots & \Sigma(X_i, X_l) \\ \Sigma(X_{i+1}, X_k) & \Sigma(X_{i+1}, X_{k+1}) & \cdots & \Sigma(X_{i+1}, X_l) \\ \vdots & \vdots & \ddots & \vdots \\ \Sigma(X_j, X_k) & \Sigma(X_j, X_{k+1}) & \cdots & \Sigma(X_j, X_l) \end{pmatrix}$$
Obviously, the output of Σ(Xi:j, Xk:l) cannot be a covariance matrix if it is not square. Note, Σ(Xi:j, Xk:l) is a partition of Σ(X), and we have Σ(X) = Σ(X1:n, X1:n). If denoting X1 = (Xi, Xi+1,…, Xj)T and X2 = (Xk, Xk+1,…, Xl)T, we denote:
$$\Sigma(X_1, X_2) = \Sigma(X_{i:j}, X_{k:l}), \quad \Sigma(X_1, X_1) = \Sigma(X_1) = \Sigma(X_{i:j}, X_{i:j}) = \Sigma(X_{i:j})$$
Gaussian process regression (GPR) is based on a Gaussian process (GP) when there is a target function attaching to each X, where X represents any Xi in X, such that Y = f(X). Please distinguish the target function f from the formal notation f(.|.) of a probability density function (PDF). Of course, Y or Yi is also a random variable, and we also have Yi = f(Xi). Besides, in the context of a regression model, the target function f(X) is not a formal function with arithmetic operators; exactly, it is a mapping between X and Y. For example, sample {X, Y} has two paired datasets X and Y in which for every Xi belonging to X there is a Yi belonging to Y; hence, the equation Y = f(X) only indicates such mapping. The GPR model tries to represent or draw a regressive PDF of Y from its previous ones (which will be explained later) and X. Suppose we had a GP X = (X1, X2,…, Xn)T and target function Y = f(X), assuming that the prior PDF of Y = (Y1, Y2,…, Yn)T = f(X) given X is also derived from the multinormal PDF of X.
$$Y \mid X \sim f(X \mid \mu(X), \Sigma(X)) = \mathcal{N}(X \mid \mu(X), \Sigma(X))$$
Therefore, we have:
$$Y \mid X \sim f(Y \mid X, \mu(\cdot), \Sigma(\cdot)) = \mathcal{N}(Y \mid \mu(Y), \Sigma(X, X))$$
Where μ(.) and Σ(.) denote the mean function and covariance function in formality, respectively, and Σ(X, X) = Σ(X). The equation above implies that the prior PDF of GP Y is initialized with the PDF of GP X. The mean vector and covariance matrix of Y are μ(Y) and Σ(X, X) within the PDF f(Y | X, μ(.), Σ(.)), respectively. The mean function μ(Y) is redefined here with variable Y, but it should be the same as the mean function μ(X) in formulation if μ(X) can be extended to the lower/higher dimensional space. For instance, if μ(X) is defined as itself, μ(X) = X, then μ(Y) will be defined as itself, μ(Y) = Y = (Y1, Y2,…, Yn)T. In the literature, for simplicity, μ(Y) is set to be zero, μ(Y) = 0T. It is also better to set μ(Y) = 0T.
Now suppose we randomize a new set of variables X* = (Xn+1, Xn+2,…, Xn+k)T and obtain their evaluated values Y* = f(X*) = (Yn+1, Yn+2,…, Yn+k)T. Similarly, mean vector and covariance matrix of Y* are μ(Y*) and Σ(X*, X*) within the PDF f(Y* | X*, μ(.), Σ(.)), respectively. Of course, the mean function μ(.) for both Y and Y* is the same. Consequently, the joint PDF of Y and Y* which is of course multinormal distribution is denoted as follows (Wang, 2022, p. 11):
$$f(Y, Y^* \mid X, X^*, \mu(\cdot), \Sigma(\cdot)) = \mathcal{N}\left( \begin{pmatrix} Y \\ Y^* \end{pmatrix} \;\middle|\; \begin{pmatrix} \mu(Y) \\ \mu(Y^*) \end{pmatrix}, \begin{pmatrix} \Sigma(X, X) & \Sigma(X, X^*) \\ \Sigma(X^*, X) & \Sigma(X^*, X^*) \end{pmatrix} \right)$$
Where the mean vector and covariance matrix of the joint PDF of Y and Y* are the stacked vector of μ(Y) and μ(Y*) and the block matrix composed of Σ(X, X), Σ(X, X*), Σ(X*, X), and Σ(X*, X*), respectively. The equation above is one of the interesting properties of the multinormal distribution. As a result, the Gaussian process regression (GPR) model of dependent variable Y is the conditional PDF of Y* given Y, X, and X*, namely f(Y* | Y, X, X*, μ(.), Σ(.)), which is also the predictive PDF of Y* because it can be used to predict the next occurrence of Y. From the interesting properties of the multinormal distribution, it is easy to derive the GPR from the joint PDF f(Y, Y* | X, X*, μ(.), Σ(.)) as follows:
$$Y^* \mid Y, X, X^* \sim f(Y^* \mid Y, X, X^*, \mu(\cdot), \Sigma(\cdot)) = \mathcal{N}(Y^* \mid \mu(Y^* \mid Y, X, X^*), \Sigma(Y^* \mid X, X^*))$$
Where mean vector μ(Y* | Y, X, X*) and covariance matrix Σ(Y* | X, X*) are:
$$\mu(Y^* \mid Y, X, X^*) = \mu(Y^*) + \Sigma(X^*, X)\,\Sigma(X, X)^{-1}\,(Y - \mu(Y))$$
$$\Sigma(Y^* \mid X, X^*) = \Sigma(X^*, X^*) - \Sigma(X^*, X)\,\Sigma(X, X)^{-1}\,\Sigma(X, X^*)$$
When Y* is a variable, μ(Y*) is still a mean function, but μ(Y) was already determined; hence, to clear up this vagueness, let m(Y) replace μ(Y), so that the mean vector μ(Y* | Y, X, X*) and covariance matrix Σ(Y* | X, X*) are rewritten as follows:
$$\mu(Y^* \mid Y, X, X^*) = \mu(Y^*) + \Sigma(X^*, X)\,\Sigma(X, X)^{-1}\,(Y - m(Y))$$
$$\Sigma(Y^* \mid X, X^*) = \Sigma(X^*, X^*) - \Sigma(X^*, X)\,\Sigma(X, X)^{-1}\,\Sigma(X, X^*)$$
The quantity m(Y) is a realistic mean. At the beginning of any GPR algorithm, it is defined the same as the mean function μ(.), for example, m(Y) = μ(Y) = 0T, m(Y) = μ(Y) = Y, etc., but at the later phase of any GPR algorithm it is recurrently determined by the previously known μ(Y | Y1:n0, X1:n0, X) as follows:
$$m(Y) = \mu(Y \mid Y_{1:n_0}, X_{1:n_0}, X) = \begin{pmatrix} \mu(Y \mid Y_{1:n_0}, X_{1:n_0}, X_1) \\ \mu(Y \mid Y_{1:n_0}, X_{1:n_0}, X_2) \\ \vdots \\ \mu(Y \mid Y_{1:n_0}, X_{1:n_0}, X_n) \end{pmatrix}$$
Obviously, the GPR of Y* distributes normally with mean vector μ(Y* | Y, X, X*) and covariance matrix Σ(Y* | X, X*). Indeed, the GPR f(Y* | Y, X, X*, μ(.), Σ(.)) is the posterior PDF of Y, whose prior PDF is f(Y | X, μ(.), Σ(.)). Initializing such a prior PDF by the PDF of X, as f(Y | X, μ(.), Σ(.)) = f(X | μ(.), Σ(.)), is not totally correct, but implicit biases will decrease after the posterior PDF f(Y* | Y, X, X*, μ(.), Σ(.)) is updated. We have the following summary:
$$\text{Prior: } Y \mid X \sim \mathcal{N}(Y \mid \mu(X), \Sigma(X))$$
$$\text{Posterior (GPR): } Y^* \mid Y, X, X^* \sim \mathcal{N}(Y^* \mid \mu(Y^* \mid Y, X, X^*), \Sigma(Y^* \mid X, X^*))$$
Although the GPR of Y* depends on Y, X, and X*, the main semantic meaning of the regression model here is that Y* is determined based on its previous one, Y, when both of them are assumed to be based on X via the PDF of Y, f(Y | X, μ(.), Σ(.)), and the PDF of Y*, f(Y* | X*, μ(.), Σ(.)). This implies that X is the intermediate point for the probabilistic dependence of Y* on Y.
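To make the conditioning formulas above concrete, the following Python sketch computes the posterior mean vector and covariance matrix of Y* given Y, X, and X* with the squared exponential kernel, assuming zero mean functions (μ(Y*) = 0, m(Y) = 0); the small jitter term added to the diagonal for numerical stability is our addition, not part of the formulas above:

```python
import numpy as np

def se_cov(A, B, l=1.0):
    # Covariance matrix Sigma(A, B) built from the squared exponential kernel.
    A, B = np.atleast_2d(A), np.atleast_2d(B)
    d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * l * l))

def gp_posterior(X, Y, X_star, l=1.0, jitter=1e-9):
    """Posterior mean vector and covariance matrix of Y* given Y, X, X*,
    assuming zero mean functions (mu(Y*) = 0 and m(Y) = 0)."""
    K = se_cov(X, X, l) + jitter * np.eye(len(X))   # Sigma(X, X)
    K_s = se_cov(X_star, X, l)                      # Sigma(X*, X)
    K_ss = se_cov(X_star, X_star, l)                # Sigma(X*, X*)
    K_inv = np.linalg.inv(K)
    mu_star = K_s @ K_inv @ np.asarray(Y, float)    # posterior mean
    cov_star = K_ss - K_s @ K_inv @ K_s.T           # posterior covariance
    return mu_star, cov_star
```

At a training point, the posterior mean recovers the observed value and the posterior variance shrinks toward zero, which is exactly the behavior the conditioning formulas encode.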
If X* and Y* are 1-element vectors such that X* = Xn+1 and Y* = Yn+1 and let x1:n = X, x = X*, y1:n = Y, and y = Y*, the GPR of Y* which is now GPR of y becomes:
$$y \mid y_{1:n}, x_{1:n}, x \sim \mathcal{N}\big(y \mid \mu(y \mid y_{1:n}, x_{1:n}, x), \sigma^2(y \mid x_{1:n}, x)\big)$$
Where μ(y | y1:n, x1:n, x) and σ2(y | x1:n, x) are mean and variance of the posterior PDF of y given y1:n, x1:n, and x.
$$\mu(y \mid y_{1:n}, x_{1:n}, x) = \mu(y) + \Sigma(x, x_{1:n})\,\Sigma(x_{1:n}, x_{1:n})^{-1}\,(y_{1:n} - m(y_{1:n}))$$
$$\sigma^2(y \mid x_{1:n}, x) = \Sigma(x, x) - \Sigma(x, x_{1:n})\,\Sigma(x_{1:n}, x_{1:n})^{-1}\,\Sigma(x_{1:n}, x)$$
The equation above with single variables x and y (single posterior processes) is popular in BO, where typically x is a vector and y is a scalar, although x and y can be arbitrary, such as scalars, vectors, or matrices, with the note that high-dimensional spaces require tensor products, for example. Please pay attention that m(y1:n) in the GPR algorithm is the realistic mean, which is recurrently determined by the previously known μ(y | y1:n–1, x1:n–1, x):
$$m(y_n) = \mu(y \mid y_{1:n_0+n-1}, x_{1:n_0+n-1}, x_n)$$
Note,
$$\Sigma(x_{1:n}, x_{1:n}) = \begin{pmatrix} \Sigma(x_1, x_1) & \Sigma(x_1, x_2) & \cdots & \Sigma(x_1, x_n) \\ \Sigma(x_2, x_1) & \Sigma(x_2, x_2) & \cdots & \Sigma(x_2, x_n) \\ \vdots & \vdots & \ddots & \vdots \\ \Sigma(x_n, x_1) & \Sigma(x_n, x_2) & \cdots & \Sigma(x_n, x_n) \end{pmatrix}$$
$$\Sigma(x, x_{1:n}) = \big(\Sigma(x, x_1), \Sigma(x, x_2), \dots, \Sigma(x, x_n)\big), \quad \Sigma(x_{1:n}, x) = \begin{pmatrix} \Sigma(x_1, x) \\ \Sigma(x_2, x) \\ \vdots \\ \Sigma(x_n, x) \end{pmatrix}$$
Suppose only x is considered a variable, whereas y1:n and x1:n are known; the mean function μ(.) is set to zero, μ(y) = 0, so that both the mean μ(y | y1:n, x1:n, x) and the variance σ2(y | x1:n, x) are functions of only x, as follows:
$$\mu_n(x) = \mu(y \mid y_{1:n}, x_{1:n}, x) = \Sigma(x, x_{1:n})\,\Sigma(x_{1:n}, x_{1:n})^{-1}\,(y_{1:n} - m(y_{1:n}))$$
$$\sigma_n^2(x) = \sigma^2(y \mid x_{1:n}, x) = \Sigma(x, x) - \Sigma(x, x_{1:n})\,\Sigma(x_{1:n}, x_{1:n})^{-1}\,\Sigma(x_{1:n}, x)$$
Recall that m(y1:n) is the realistic mean which is recurrently determined by previously known μn–1(x):
$$m(y_n) = \mu_{n-1}(x_n)$$
The fact that μn(x) and σn2(x) are functions of only x is necessary to define the acquisition function for the optimization task in BO.
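Specializing to a single test point x, the two quantities μn(x) and σn2(x) can be sketched as plain functions of x (assuming the squared exponential kernel and zero realistic mean m(y1:n) = 0 by default; names and the jitter term are illustrative assumptions):

```python
import numpy as np

def se_kernel(a, b, l=1.0):
    # Squared exponential kernel for scalar or vector points.
    return float(np.exp(-np.sum((np.asarray(a, float) - np.asarray(b, float)) ** 2) / (2.0 * l * l)))

def posterior_mean_var(x, x1n, y1n, m=None, l=1.0):
    """mu_n(x) and sigma_n^2(x) for one test point x, given the sample
    x_{1:n}, y_{1:n}; m is the realistic mean vector m(y_{1:n}), zero by default."""
    n = len(x1n)
    m_vec = np.zeros(n) if m is None else np.asarray(m, float)
    K = np.array([[se_kernel(x1n[i], x1n[j], l) for j in range(n)] for i in range(n)])
    k = np.array([se_kernel(x, x1n[i], l) for i in range(n)])  # Sigma(x, x_{1:n})
    K_inv = np.linalg.inv(K + 1e-9 * np.eye(n))                # jitter for stability
    mu_n = k @ K_inv @ (np.asarray(y1n, float) - m_vec)
    var_n = se_kernel(x, x, l) - k @ K_inv @ k
    return float(mu_n), float(var_n)
```

Both outputs depend only on x once the sample is fixed, which is precisely what allows an acquisition function α(x | μn(x), σn2(x)) to be defined over x.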
GPR can be executed continuously with new X* while the old X* is incorporated into the larger GP X. In practice, such continuous execution is implemented as an iterative algorithm, each iteration of which has the following two steps:
Table 3.1. GPR algorithm.
Step 1:
Take new X*.
$$\mu(Y^* \mid Y, X, X^*) = \mu(Y^*) + \Sigma(X^*, X)\,\Sigma(X, X)^{-1}\,(Y - m(Y))$$
$$\Sigma(Y^* \mid X, X^*) = \Sigma(X^*, X^*) - \Sigma(X^*, X)\,\Sigma(X, X)^{-1}\,\Sigma(X, X^*)$$
Step 2:
$$m(Y) = \mu(Y^* \mid Y, X, X^*)$$
$$X = X \cup X^*$$
In step 1, the covariance matrix Σ(X, X) is not recomputed entirely because some of its elements Σ(Xi, Xj) were determined before.
Because GP X will get huge when the iterative algorithm repeats many iterations and X* is incorporated into X over and over again, it is possible to apply the first-order Markov property so that each iteration remembers only one previous X. Therefore, we can assign X* to X, as X = X*, in step 2 so that the iterative algorithm runs faster and saves computational resources, as follows:
Table 3.2. GPR algorithm with first-order Markov property.
Step 1:
Take new X*.
$$\mu(Y^* \mid Y, X, X^*) = \mu(Y^*) + \Sigma(X^*, X)\,\Sigma(X, X)^{-1}\,(Y - m(Y))$$
$$\Sigma(Y^* \mid X, X^*) = \Sigma(X^*, X^*) - \Sigma(X^*, X)\,\Sigma(X, X)^{-1}\,\Sigma(X, X^*)$$
Step 2:
$$m(Y) = \mu(Y^* \mid Y, X, X^*)$$
$$X = X^*$$
However, this advantage of the first-order Markov property cannot be applied to BO (which defines how to take new X* with acquisition functions) because it is necessary to look over the entire X domain, forward and backward, to find a potential global optimizer. The use of the Markov property is appropriate to tasks of prediction and estimation in which entire time series are split into many smaller time windows, where it is necessary to remember w windows with the w-order Markov property. Shortly, the assignment X = X* is suitable for forward modeling.
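The two-step iteration of Tables 3.1 and 3.2 can be sketched as a loop over batches of scalar points, with a flag switching between merging X* into X and the first-order Markov assignment X = X*. This is an illustrative sketch under our own assumptions (zero prior mean, squared exponential kernel, X* supplied externally rather than chosen by an acquisition function):

```python
import numpy as np

def se_cov(a, b, l=1.0):
    # Sigma(a, b) for 1-D points via the squared exponential kernel.
    a = np.asarray(a, float).reshape(-1)
    b = np.asarray(b, float).reshape(-1)
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2.0 * l * l))

def gpr_iterate(batches, f, markov=False, l=1.0, jitter=1e-9):
    """Run the two-step GPR algorithm over successive batches X*.
    markov=True keeps only the latest batch (Table 3.2); otherwise
    each batch is merged into X (Table 3.1)."""
    X, Y, m = None, None, None
    for X_star in batches:
        Y_star = np.asarray([f(x) for x in X_star], float)
        if X is None:  # the first batch initializes the process, m(Y) = 0
            X, Y, m = np.asarray(X_star, float), Y_star, np.zeros(len(X_star))
            continue
        # Step 1: posterior mean of Y* given Y, X, X* (zero prior mean mu(Y*) = 0)
        K_inv = np.linalg.inv(se_cov(X, X, l) + jitter * np.eye(len(X)))
        mu_star = se_cov(X_star, X, l) @ K_inv @ (Y - m)
        # Step 2: remember the posterior mean as the realistic mean m(Y)
        if markov:
            X, Y, m = np.asarray(X_star, float), Y_star, mu_star
        else:
            X = np.concatenate([X, np.asarray(X_star, float)])
            Y = np.concatenate([Y, Y_star])
            m = np.concatenate([m, mu_star])
    return X, Y, m
```

With markov=False the process grows by |X*| points per iteration; with markov=True its size stays bounded by the last batch, which is the memory saving discussed above.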
GPR focuses mainly on the regressive (posterior) covariance matrix Σ(Y* | X, X*) via the kernel function Σ(Xi, Xj), because adjusting the regressive (posterior) mean vector μ(Y* | Y, X, X*) via the mean function μ(Y) is unnecessary, due to:
$$\mu(Y^* \mid Y, X, X^*) = \mu(Y^*) + \Sigma(X^*, X)\,\Sigma(X, X)^{-1}\,(Y - m(Y))$$
As usual, the mean function μ(Y*) is set to zero, μ(Y*) = 0T, so that
$$\mu(Y^* \mid Y, X, X^*) = \Sigma(X^*, X)\,\Sigma(X, X)^{-1}\,(Y - m(Y))$$
This is indeed an arithmetic regression of Y* on Y, X, and X*, in addition to the main semantic meaning that the posterior PDF of Y* given Y is the main regression. However, it is still better to define the mean function μ(Y) exactly when predicting more precise confidence intervals of Y based on X, because the equation above,
$$\mu(Y^* \mid Y, X, X^*) = \Sigma(X^*, X)\,\Sigma(X, X)^{-1}\,(Y - m(Y))$$
implies only the variation of Y according to the variation of X from the origin; it is not a real value of Y. Alternatively, if setting μ(Y*) = Y* (also deriving μ(Y) = Y), then
$$\mu(Y^* \mid Y, X, X^*) = Y^* + \Sigma(X^*, X)\,\Sigma(X, X)^{-1}\,(Y - m(Y))$$
This is not a regression function, as it is impossible to predict Y* based on X* if Y* is unknown. Therefore, I propose a technique based on the linear regression model to define μ(Y*), with the constraint that each element μ(Y*) of the vector μ(Y*) depends only on Y*, for instance:
$$\mu(Y^*) = \begin{pmatrix} \mu(Y_{n+1}) \\ \mu(Y_{n+2}) \\ \vdots \\ \mu(Y_{n+k}) \end{pmatrix}$$
Note, Y* represents any Yi belonging to the subprocess Y* = (Yn+1, Yn+2,…, Yn+k)T and X* represents any Xi belonging to the subprocess X* = (Xn+1, Xn+2,…, Xn+k)T. Let φ(X) be the transformation function which transforms X space into Y space such that φ(.) is invertible.
$$Y = \varphi(X), \quad Y^* = \varphi(X^*) \quad \Leftrightarrow \quad X = \varphi^{-1}(Y), \quad X^* = \varphi^{-1}(Y^*)$$
Therefore, φ(.) should be defined with constraint that X space and Y space have the same dimension for information preservation. The simplest form of φ(.) is identity function.
$$\varphi(X) = X$$
Let Z* be a regressive estimate of X*:
$$Z^* = A^* X^*$$
Where A* is the regressive coefficient. How to calculate A* from the sample {X*, φ–1(Y*)} will be described later. Please pay attention that this linear regression is totally different from the regression meaning of GPR via the posterior PDF. Let Ŷ* be an estimate of Y* from X* with the association of the linear regression model and the transformation function φ(.).
$$\hat{Y}^* = \varphi(Z^*) = \varphi(A^* X^*)$$
Obviously, we have:
$$Z^* = \varphi^{-1}(\hat{Y}^*)$$
Let |X|, |X*|, |Y|, and |Y*| be cardinalities (also dimensions) of X, X*, Y, and Y*, respectively. Of course, we have |X| = |Y| and |X*| = |Y*|, for example, we also have |X| = |Y| = n and |X*| = |Y*| = k. Concentration ratio (CR) of a subprocess which is point density of such subprocess is defined by the ratio of cardinality of such subprocess to the cardinality of entire process, for example, CR of X* over {X, X*} is:
$$cr(X^* \mid X, X^*) = \frac{|X^*|}{|X| + |X^*|}$$
Suppose A, the regressive coefficient estimated from the sample {X, φ–1(Y)}, was determined before; the mean function μ*(X*) is proposed as a weighted estimation with the concentration ratios of X and X*, as follows:
$$\mu^*(X^*) = \varphi\left(\frac{|X|}{|X| + |X^*|} A X^* + \frac{|X^*|}{|X| + |X^*|} A^* X^*\right)$$
As a result, the two steps at each iteration of the iterative algorithm for estimating GPR are finetuned as follows:
Table 3.3. GPR algorithm with linear regression.
Step 1:
Take new X*.
Calculate A* from sample {X*, φ–1(Y*)}.
$$\mu(Y^* \mid X, X^*) = \mu^*(X^*) + \Sigma(X^*, X)\,\Sigma(X, X)^{-1}\,(Y - m(Y))$$
$$\Sigma(Y^* \mid X, X^*) = \Sigma(X^*, X^*) - \Sigma(X^*, X)\,\Sigma(X, X)^{-1}\,\Sigma(X, X^*)$$
Step 2:
$$m(Y) = \mu(Y^* \mid X, X^*)$$
$$A = A^*$$
$$X = X \cup X^* \text{ or } X = X^*$$
Where the vector μ*(X*) is determined based on the element function μ*(X*) as follows:
$$\mu^*(X^*) = \begin{pmatrix} \mu^*(X_{n+1}) \\ \mu^*(X_{n+2}) \\ \vdots \\ \mu^*(X_{n+k}) \end{pmatrix}$$
Note, the vector φ–1(Y*) is determined as:
$$\varphi^{-1}(Y^*) = \begin{pmatrix} \varphi^{-1}(Y_{n+1}) \\ \varphi^{-1}(Y_{n+2}) \\ \vdots \\ \varphi^{-1}(Y_{n+k}) \end{pmatrix}$$
Both mean vector μ(Y* | X, X*) and covariance matrix Σ(Y* | X, X*) of GPR with linear regression are free from Y because they are totally based on only X and X*.
For interval estimation of Y* with given X*, suppose the GPR algorithm finished, obtaining regression coefficient A after some iterations on X; the mean function μ(Y*) is reduced with X* as follows:
$$\mu^*(X^*) = \varphi(A X^*)$$
As a result, the confidence interval of Y*, which is the pair {μ(Y* | X, X*), Σ(Y* | X, X*)}, is determined as follows:
$$\mu(Y^* \mid X, X^*) = \varphi(A X^*) + \Sigma(X^*, X)\,\Sigma(X, X)^{-1}\,(Y - m(Y))$$
$$\Sigma(Y^* \mid X, X^*) = \Sigma(X^*, X^*) - \Sigma(X^*, X)\,\Sigma(X, X)^{-1}\,\Sigma(X, X^*)$$
Recall that m(Y) is a realistic mean vector which is recurrently determined by previously known μ(Y | X1:n0, X):
$$m(Y) = \mu(Y \mid X_{1:n_0}, X) = \begin{pmatrix} \mu(Y \mid X_{1:n_0}, X_1) \\ \mu(Y \mid X_{1:n_0}, X_2) \\ \vdots \\ \mu(Y \mid X_{1:n_0}, X_n) \end{pmatrix}$$
Because the GPR algorithm here defines the mean function μ(Y*) based on the multiple linear regression (MLR) model, it is necessary to describe MLR briefly. Given a dependent random vector variable Z = (z1, z2,…, zm)T and an independent random variable X = (1, x1, x2,…, xn)T, MLR tries to establish a linear relationship between Z and X so that Z is the sum of a linear combination of X and a random error vector ε.
$$Z = AX + \varepsilon$$
As a convention, X is called the regressor and Z is called the responsor, whereas A is the m×(n+1) regressive coefficient matrix.
$$A = \begin{pmatrix} a_{10} & a_{11} & a_{12} & \cdots & a_{1n} \\ a_{20} & a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_{m0} & a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}$$
Suppose ε is distributed normally with mean vector 0T and covariance matrix Σ; then Z is distributed normally with mean vector AX and covariance matrix Σ, due to:
$$E(Z) = E(AX + \varepsilon) = AX$$
$$V(Z) = V(AX + \varepsilon) = \Sigma$$
Note, E(.) and V(.) denote the theoretical expectation and variance, respectively, and Σ is an m×m invertible matrix. This implies that the PDF of random variable Z is:
$$Z \mid X \sim f(Z \mid X, A, \Sigma) = \mathcal{N}(Z \mid AX, \Sigma) = (2\pi)^{-\frac{m}{2}} |\Sigma|^{-\frac{1}{2}} \exp\left(-\frac{1}{2}(Z - AX)^T \Sigma^{-1} (Z - AX)\right)$$
The MLR of Z given X is built from a sample {X, Z} of size N, in the form of data matrices, as follows:
$$\mathbf{X} = \begin{pmatrix} x_1^T \\ x_2^T \\ \vdots \\ x_N^T \end{pmatrix} = \begin{pmatrix} 1 & x_{11} & x_{12} & \cdots & x_{1n} \\ 1 & x_{21} & x_{22} & \cdots & x_{2n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & x_{N1} & x_{N2} & \cdots & x_{Nn} \end{pmatrix}, \quad x_i = \begin{pmatrix} 1 \\ x_{i1} \\ x_{i2} \\ \vdots \\ x_{in} \end{pmatrix}$$
$$\mathbf{Z} = \begin{pmatrix} z_1^T \\ z_2^T \\ \vdots \\ z_N^T \end{pmatrix} = \begin{pmatrix} z_{11} & z_{12} & \cdots & z_{1m} \\ z_{21} & z_{22} & \cdots & z_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ z_{N1} & z_{N2} & \cdots & z_{Nm} \end{pmatrix}, \quad z_i = \begin{pmatrix} z_{i1} \\ z_{i2} \\ \vdots \\ z_{im} \end{pmatrix}$$
Therefore, xi and zi are the ith instances of regressor X and responsor Z at the ith row of the matrix (X, Z). As a convention, we can connect the datasets X and Z of MLR here with the GP X* and the set φ–1(Y*) aforementioned, respectively, if removing the first column of values 1 from dataset X.
$$N \leftrightarrow k, \quad x_i \leftrightarrow X_{n+i}, \quad \mathbf{X} \leftrightarrow X^*, \quad z_i \leftrightarrow \varphi^{-1}(Y_{n+i}), \quad \mathbf{Z} \leftrightarrow \varphi^{-1}(Y^*)$$
The essence of MLR is to estimate the regressive coefficient matrix A and the covariance matrix Σ. By applying maximum likelihood estimation (MLE) method, we obtain estimates A ^ and Σ ^ of A and Σ.
$$\hat{A} = \mathbf{Z}^T \mathbf{X} (\mathbf{X}^T \mathbf{X})^{-1}$$
$$\hat{\Sigma} = \frac{1}{N} \sum_{i=1}^{N} (z_i - \hat{A} x_i)(z_i - \hat{A} x_i)^T$$
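The two MLE estimates above translate directly into NumPy, where X is the N×(n+1) design matrix with a leading column of ones and Z is N×m (a minimal sketch; the function name is ours):

```python
import numpy as np

def mlr_mle(X, Z):
    """MLE estimates of A and Sigma for the model Z = A X + epsilon.
    X is N x (n+1) with a leading column of ones; Z is N x m."""
    X, Z = np.asarray(X, float), np.asarray(Z, float)
    N = X.shape[0]
    A_hat = Z.T @ X @ np.linalg.inv(X.T @ X)   # A^ = Z^T X (X^T X)^{-1}
    R = Z - X @ A_hat.T                         # residuals z_i - A^ x_i
    Sigma_hat = (R.T @ R) / N                   # Sigma^ = (1/N) sum r_i r_i^T
    return A_hat, Sigma_hat
```

On exactly linear data the residual covariance estimate collapses to zero, as expected.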

4. Acquisition Functions

Recall that Bayesian optimization (BO) maximizes the acquisition function α(x | f, S) so as to search for the maximizer x*. Especially, nonparametric BO (NBO), based on Gaussian process regression (GPR), requires the support of an acquisition function to guide the movement of x in the search space toward the maximizer x*. Moreover, the acquisition function of NBO is created from the GPR surrogate model; concretely, it is defined by two essential parameters: the posterior mean μn(x) and variance σn2(x) of GPR.
$$\mu_n(x) = \Sigma(x, x_{1:n})\,\Sigma(x_{1:n}, x_{1:n})^{-1}\,(y_{1:n} - m(y_{1:n}))$$
$$\sigma_n^2(x) = \Sigma(x, x) - \Sigma(x, x_{1:n})\,\Sigma(x_{1:n}, x_{1:n})^{-1}\,\Sigma(x_{1:n}, x)$$
Note that m(y1:n) is a realistic mean which is recurrently determined by previously known μn–1(x):
$$m(y_n) = \mu_{n-1}(x_n)$$
This is the reason that the acquisition function of NBO is denoted α(x | μn(x), σn2(x)). Therefore, this section focuses on the NBO acquisition function. The GPR surrogate model of NBO is represented by the posterior PDF of y = f(x) given y1:n, x1:n, and x as follows:
$$y \mid y_{1:n}, x_{1:n}, x \sim f(y \mid y_{1:n}, x_{1:n}, x, \mu(\cdot), \Sigma(\cdot)) = \mathcal{N}(y \mid \mu_n(x), \sigma_n^2(x))$$
Both the mean μn(x) and variance σn2(x) are functions of only x because y1:n and x1:n are known, which is necessary to determine the acquisition function, which is a function of x too. Note, in this research, the notation f(.|.) often denotes a PDF, which is not a concrete function, and the notation N(.|.) denotes the normal distribution. The normal PDF of y = f(x) given y1:n, x1:n, and x is formulated as follows:
$$y \mid y_{1:n}, x_{1:n}, x \sim \mathcal{N}(y \mid \mu_n(x), \sigma_n^2(x)) = \frac{1}{\sqrt{2\pi \sigma_n^2(x)}} \exp\left(-\frac{(y - \mu_n(x))^2}{2\sigma_n^2(x)}\right)$$
This PDF should be standardized into the standard normal PDF ϕ(z) because values of the standard normal cumulative distribution function (standard normal CDF) Φ(z) are tabulated in practice. The standard normal distribution is also called the standard Gaussian distribution. If variable y is standardized into variable z such that z = (y – μn(x)) / σn(x), then the distribution of y is equivalent to the distribution of z via its PDF ϕ(z) and its CDF Φ(z).
$$z = \frac{y - \mu_n(x)}{\sigma_n(x)}$$
$$\phi(z) = \mathcal{N}(z \mid 0, 1) = \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{z^2}{2}\right)$$
$$\Phi(z_0) = P(z < z_0) = \int_{-\infty}^{z_0} \phi(z)\,dz$$
Where the quantity σn(x) = √(σn2(x)) is called the standard deviation of y given y1:n, x1:n, and x. Note, the standard normal PDF has mean 0 and variance 1. We have some important properties of the standard normal distribution.
$$P(z < z_0) = P(z \le z_0), \quad P(z > z_0) = P(z \ge z_0), \quad \Phi'(z) = \phi(z)$$
$$\Phi(-z_0) = 1 - \Phi(z_0) = P(z > z_0) = \int_{z_0}^{+\infty} \phi(z)\,dz$$
If the PDF of y is standardized, the statement that y distributes normally given y1:n, x1:n, and x with mean μn(x) and variance σn2(x) will be equivalent to the statement that the distribution of z given y1:n, x1:n, and x is standard normal distribution where:
$$z = \frac{y - \mu_n(x)}{\sigma_n(x)}$$
Of course, we have:
$$y = \mu_n(x) + \sigma_n(x)\,z$$
Recall that the acquisition function α(x | f, S) based on mean μn(x) and variance σn2(x) of GPR surrogate model is denoted as follows, which implies that it is function of x.
$$\alpha(x \mid f, S) = \alpha(x \mid \mu_n(x), \sigma_n^2(x))$$
Recall, given acquisition function α(x | μn(x), σn2(x)) based on μn(x) and σn2(x), and also known sample Dn = x1:n = {x1, x2,…, xn}, let xn+1 be a maximizer of α(x | μn(x), σn2(x)) in NBO algorithm at a current iteration.
$$x_{n+1} = \underset{x}{\mathrm{argmax}}\ \alpha(x \mid \mu_n(x), \sigma_n^2(x))$$
There are some acquisition functions, for example, probability of improvement, expected improvement, entropy search, and upper confidence bound. Let x* and y* = f(x*) be the maximizer and its respective maximum value of the target function, to which all acquisitions aim. Both the probability of improvement (PI) technique and the expected improvement (EI) technique belong to the improvement approach, in which their acquisition functions, which are also utility functions in decision theory, are expectations of predefined respective improvement functions, but their definitions of the improvement function differ. For instance, the improvement function, denoted I(x), takes the deviation between the next value y = f(x) and the current, temporal maximum yn* = f(xn*), with the expectation that y will be larger than yn* for searching the realistic maximizer x*; otherwise, I(x) is 0.
$$I(x) = \max(y - y_n^*, 0) = \max(f(x) - y_n^*, 0)$$
Note, given sample Dn = x1:n = {x1, x2,…, xn} and its evaluations y1:n = {y1, y2,…, yn}, we have yn* = max{y1, y2,…, yn } which is the temporal maximum at current iteration of NBO algorithm. When the PDF of y is standardized, y = μn(x) + σn(x)z then the improvement function is rewritten:
$$I(x) = \max(\mu_n(x) + \sigma_n(x)z - y_n^*, 0)$$
Probability of improvement (PI) technique defines acquisition function as the probability of I(x) > 0, which means that the probability of the event that next candidate of target maximizer is larger than the old one must be maximized (Shahriari, Swersky, Wang, Adams, & Freitas, 2016, p. 160).
$$\alpha_{PI}(x) = \alpha(x \mid \mu_n(x), \sigma_n^2(x)) = P(I(x) > 0)$$
Note, notation P(.) denotes probability. As a result, PI acquisition function is determined as follows:
$$\alpha_{PI}(x) = \Phi\left(\frac{\mu_n(x) - y_n^*}{\sigma_n(x)}\right)$$
Proof,
$$\alpha_{PI}(x) = P(I(x) > 0) = P(\mu_n(x) + \sigma_n(x)z - y_n^* > 0) = P\left(z > \frac{y_n^* - \mu_n(x)}{\sigma_n(x)}\right) = \Phi\left(\frac{\mu_n(x) - y_n^*}{\sigma_n(x)}\right)$$
due to P(z > z0) = Φ(–z0). ∎
Therefore, the maximizer xn+1 of PI acquisition function in NBO algorithm at a current iteration is determined as follows:
$$x_{n+1} = \underset{x}{\mathrm{argmax}}\ \Phi\left(\frac{\mu_n(x) - y_n^*}{\sigma_n(x)}\right)$$
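A minimal sketch of PI and its maximization over a finite grid of candidates, using the standard normal CDF computed via the error function (the function names are ours; in practice the inner maximization would use a continuous optimizer rather than a grid):

```python
import math

def std_normal_cdf(z):
    # Phi(z) via the error function: Phi(z) = (1 + erf(z / sqrt(2))) / 2
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def pi_acquisition(mu_n, sigma_n, y_best):
    # alpha_PI(x) = Phi((mu_n(x) - y_n*) / sigma_n(x))
    return std_normal_cdf((mu_n - y_best) / sigma_n)

def next_candidate(grid, mu_fn, sigma_fn, y_best):
    # x_{n+1} = argmax over a finite grid of alpha_PI(x)
    scores = [pi_acquisition(mu_fn(x), sigma_fn(x), y_best) for x in grid]
    return grid[scores.index(max(scores))]
```

PI equals 0.5 where the posterior mean matches the incumbent yn* and increases as μn(x) exceeds it, which is the exploitation-leaning behavior discussed for this acquisition.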
Expected improvement (EI) technique defines acquisition function as expectation of I(x), which means that the mean of the event that next candidate of target maximizer is larger than the old one must be maximized.
$$\alpha_{EI}(x) = \alpha(x \mid \mu_n(x), \sigma_n^2(x)) = \int_{-\infty}^{+\infty} I(x)\,\phi(z)\,dz = \int_{-\infty}^{+\infty} \max(\mu_n(x) + \sigma_n(x)z - y_n^*, 0)\,\phi(z)\,dz$$
Note, such expectation of I(x) is determined with the standard PDF ϕ(z) given y1:n, x1:n, and x. As a result, EI acquisition function is determined as follows (Frazier, 2018, p. 7):
$$\alpha_{EI}(x) = (\mu_n(x) - y_n^*)\,\Phi\left(\frac{\mu_n(x) - y_n^*}{\sigma_n(x)}\right) + \sigma_n(x)\,\phi\left(\frac{\mu_n(x) - y_n^*}{\sigma_n(x)}\right)$$
Proof (Kamperis, 2021),
Let z0 = (yn* – μn(x)) / σn(x), so that μn(x) + σn(x)z – yn* ≥ 0 if and only if z ≥ z0. Then:
$$\alpha_{EI}(x) = \int_{-\infty}^{+\infty} \max(\mu_n(x) + \sigma_n(x)z - y_n^*, 0)\,\phi(z)\,dz = \int_{z_0}^{+\infty} (\mu_n(x) + \sigma_n(x)z - y_n^*)\,\phi(z)\,dz$$
$$= (\mu_n(x) - y_n^*)\int_{z_0}^{+\infty} \phi(z)\,dz + \sigma_n(x)\int_{z_0}^{+\infty} z\,\phi(z)\,dz$$
$$= (\mu_n(x) - y_n^*)\,\Phi(-z_0) + \sigma_n(x)\left[-\phi(z)\right]_{z_0}^{+\infty} \quad \left(\text{due to } \frac{d\phi(z)}{dz} = -z\,\phi(z)\right)$$
$$= (\mu_n(x) - y_n^*)\,\Phi\left(\frac{\mu_n(x) - y_n^*}{\sigma_n(x)}\right) + \sigma_n(x)\,\phi(z_0)$$
$$= (\mu_n(x) - y_n^*)\,\Phi\left(\frac{\mu_n(x) - y_n^*}{\sigma_n(x)}\right) + \sigma_n(x)\,\phi\left(\frac{\mu_n(x) - y_n^*}{\sigma_n(x)}\right) \quad (\text{due to } \phi(-z_0) = \phi(z_0)) \;\blacksquare$$
Therefore, the maximizer xn+1 of EI acquisition function in NBO algorithm at a current iteration is determined as follows:
$$x_{n+1} = \underset{x}{\mathrm{argmax}} \left[ (\mu_n(x) - y_n^*)\,\Phi\left(\frac{\mu_n(x) - y_n^*}{\sigma_n(x)}\right) + \sigma_n(x)\,\phi\left(\frac{\mu_n(x) - y_n^*}{\sigma_n(x)}\right) \right]$$
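EI can be sketched the same way, with the degenerate case σn(x) = 0 handled separately (an illustrative stdlib-only implementation of the closed form above; the names are ours):

```python
import math

def std_normal_pdf(z):
    # phi(z) = exp(-z^2 / 2) / sqrt(2 pi)
    return math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)

def std_normal_cdf(z):
    # Phi(z) via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def ei_acquisition(mu_n, sigma_n, y_best):
    # alpha_EI(x) = (mu_n - y*) Phi(d) + sigma_n phi(d), d = (mu_n - y*) / sigma_n
    if sigma_n <= 0.0:
        return max(mu_n - y_best, 0.0)  # no uncertainty: improvement is deterministic
    d = (mu_n - y_best) / sigma_n
    return (mu_n - y_best) * std_normal_cdf(d) + sigma_n * std_normal_pdf(d)
```

Unlike PI, EI stays strictly positive wherever σn(x) > 0, which is what gives it its built-in exploration term.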
An important point of NBO is that maximizing the acquisition function should be cheaper than maximizing the target function f(x), even when f(x) is an analytic function amenable to the analytic approach. Fortunately, finding the global maximizer of the acquisition function is not a prerequisite because, essentially, NBO is a reinforcement algorithm whose solution candidate for x* is improved progressively; moreover, the acquisition function itself makes the tradeoff between exploitation (convergence in searching the maximizer) and exploration (searching for the global maximizer) for NBO. Therefore, it is possible to apply traditional mathematical methods such as Newton-Raphson and gradient descent to obtain a local maximizer xn+1 of the acquisition function. It is only necessary to check the first-order derivative, known as the gradient, of the acquisition function, since traditional methods like Newton-Raphson and gradient descent require the existence of the gradient. Because PI is simpler than EI, how to maximize PI is mentioned here as an example of acquisition maximization.
$$x_{n+1} = \underset{x}{\mathrm{argmax}}\ \alpha_{PI}(x) = \underset{x}{\mathrm{argmax}}\ \Phi\left(\frac{\mu_n(x) - y_n^*}{\sigma_n(x)}\right)$$
Now we only need to determine the gradient of αPI(x) with regard to variable x. Let Σ’(xi, x) be the gradient (first-order derivative) of Σ(xi, x) with regard to x, given known xi = (xi1, xi2,…, xin)T and unknown x = (x1, x2,…, xn)T. As a convention, Σ’(xi, x) is a row vector. If the covariance function produces a symmetric matrix, then Σ’(xi, x) = Σ’(x, xi). For example, given the simple squared exponential kernel function ΣSE(x, xi) = exp(–|x – xi|2/2), its gradient with regard to x is:
$$\Sigma'(x, x_i) = \frac{d}{dx} \exp\left(-\frac{(x - x_i)^T (x - x_i)}{2}\right) = -(x - x_i)^T \exp\left(-\frac{(x - x_i)^T (x - x_i)}{2}\right)$$
Let Σ’(x1:n, x) be gradient of Σ(x1:n, x), essentially it is a matrix but written as column vector whose each element is row vector:
$$\Sigma'(x_{1:n}, x) = \begin{pmatrix} \Sigma'(x_1, x) \\ \Sigma'(x_2, x) \\ \vdots \\ \Sigma'(x_n, x) \end{pmatrix}$$
If covariance function produces symmetric matrix, we have:
$$\Sigma'(x, x_{1:n}) = \Sigma'(x_{1:n}, x)^T$$
Gradients of μn(x) and σn2(x) with regard to x are:
$$\mu_n'(x) = (y_{1:n} - m(y_{1:n}))^T\,\Sigma(x_{1:n}, x_{1:n})^{-1}\,\Sigma'(x_{1:n}, x)$$
$$(\sigma_n^2)'(x) = \Sigma'(x, x) - 2\,\Sigma(x, x_{1:n})\,\Sigma(x_{1:n}, x_{1:n})^{-1}\,\Sigma'(x_{1:n}, x)$$
Note, Σ(x1:n, x1:n) is invertible and symmetric. Because σn(x) = √(σn2(x)), we have the gradient of σn(x) as follows:
$$\sigma_n'(x) = \frac{(\sigma_n^2)'(x)}{2\sqrt{\sigma_n^2(x)}} = \frac{(\sigma_n^2)'(x)}{2\sigma_n(x)}$$
Let zn(x) = (yn* – μn(x)) / σn(x) as a function of x; its gradient with regard to x is:
$$z_n'(x) = -\frac{\mu_n'(x)\,\sigma_n(x) + (y_n^* - \mu_n(x))\,\sigma_n'(x)}{\sigma_n^2(x)}$$
Gradient of PI is totally determined as follows:
$$\alpha_{PI}'(x) = \left[\Phi\left(\frac{\mu_n(x) - y_n^*}{\sigma_n(x)}\right)\right]' = \left[\Phi(-z_n(x))\right]' = -\phi(z_n(x))\,z_n'(x)$$
Where,
$$z_n(x) = \frac{y_n^* - \mu_n(x)}{\sigma_n(x)}$$
It is possible to apply traditional methods like Newton-Raphson and gradient descent to find a local maximizer xn+1 of PI because the PI gradient is determined given x. Similarly, it is easy to obtain the gradient of the EI acquisition function, because the gradients of μn(x) and σn2(x) are the most important parts, being the base for calculating the gradients of both PI and EI.
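As a sketch of gradient-based maximization, the chain rule above yields the PI gradient for scalar x from user-supplied μn, σn and their derivatives, followed by plain gradient ascent (the function names and the fixed learning rate are illustrative assumptions, not prescriptions):

```python
import math

def std_normal_pdf(z):
    # phi(z) = exp(-z^2 / 2) / sqrt(2 pi)
    return math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)

def pi_gradient(x, mu, dmu, sigma, dsigma, y_best):
    """Gradient of alpha_PI at scalar x from the chain rule:
    z_n(x) = (y* - mu_n(x)) / sigma_n(x),
    z_n'(x) = -(mu_n'(x) sigma_n(x) + (y* - mu_n(x)) sigma_n'(x)) / sigma_n(x)^2,
    alpha_PI'(x) = -phi(z_n(x)) z_n'(x)."""
    z = (y_best - mu(x)) / sigma(x)
    dz = -(dmu(x) * sigma(x) + (y_best - mu(x)) * dsigma(x)) / sigma(x) ** 2
    return -std_normal_pdf(z) * dz

def gradient_ascent(x0, grad, lr=0.1, steps=200):
    # Plain gradient ascent toward a local maximizer x_{n+1} of alpha_PI.
    x = x0
    for _ in range(steps):
        x += lr * grad(x)
    return x
```

With a constant σn, PI is maximized exactly where μn is maximized, so gradient ascent on the PI gradient converges to the maximizer of the posterior mean in that case.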

5. Conclusions

This research focuses on nonparametric Bayesian optimization (NBO), although there are interesting aspects of parametric Bayesian optimization (PBO). Strictness is emphasized in some studies, while others need only reasonableness, especially applied or heuristic research. Although NBO belongs to applied mathematics, its mathematical base is strict, even if its outlook of surrogate model and acquisition function maximization seems tricky. The cost is that NBO is, in turn, based on traditional convex optimization methods like Newton-Raphson and gradient descent to maximize the acquisition function, which means that NBO requires knowledge outside its own circle, while heuristic methods like particle swarm optimization (PSO) and artificial bee colony (ABC) do not require such mathematical constraints. Anyhow, NBO is a significant solution to the global optimization problem because, within the requisite assumption of too-strict mathematical constraints, the progress of applied mathematics would be hesitant. Therefore, the main problem here is the effectiveness of NBO in balancing exploration and exploitation when searching for the global optimizer. Heuristic methods like PSO have proved their simplicity and feasibility when they do not focus on complicated mathematical tools, although some studies try to apply mathematical theories to explain them. For human research, whether thinking via theory is prior to natural imitation, or vice versa, is a big question whose answer can depend on individuals. This question is equivalent to the question of which of logicality and imagination is more important.

References

  1. Frazier, P. I. (2018, July 8). A Tutorial on Bayesian Optimization. arXiv. [CrossRef]
  2. Kamperis, S. (2021, June 11). Acquisition functions in Bayesian Optimization. Retrieved from Stathis Kamperis's GitHub page: https://ekamperi.github.io/machine%20learning/2021/06/11/acquisition-functions.html.
  3. Neapolitan, R. E. (2003). Learning Bayesian Networks. Upper Saddle River, New Jersey, USA: Prentice Hall.
  4. Nguyen, L. (2022, June 27). A short study on minima distribution. Preprints. [CrossRef]
  5. Shahriari, B., Swersky, K., Wang, Z., Adams, R. P., & Freitas, N. d. (2016, January). Taking the Human Out of the Loop: A Review of Bayesian Optimization. (J. H. Trussell, Ed.) Proceedings of the IEEE, 104(1), 148 - 175. [CrossRef]
  6. Wang, J. (2022, April 18). An Intuitive Tutorial to Gaussian Processes Regression. arXiv. [CrossRef]
