Preprint
Review

Overview of High-dimensional Measurement Error Regression Models

A peer-reviewed article of this preprint also exists. This version is not peer-reviewed.
Submitted: 15 June 2023
Posted: 16 June 2023

Abstract
High-dimensional measurement error data are becoming more prevalent across various fields. Research on measurement error regression models has attracted increasing attention because ignoring measurement errors can lead to inaccurate conclusions. When the dimension p is larger than the sample size n, it is challenging to develop statistical inference methods for high-dimensional measurement error regression models owing to the bias induced by the measurement errors, the nonconvexity of the objective function, high computational cost and other difficulties. Over the past few years, a number of works have overcome these difficulties and proposed statistical inference methods. This paper reviews the current developments in estimation, hypothesis testing and variable screening methods for high-dimensional measurement error regression models, summarizes the theoretical results of these methods, and points out some directions worth exploring in future research.
Keywords: 
Subject: Computer Science and Mathematics  -   Probability and Statistics

1. Introduction

Measurement error data inevitably exists in applications and has raised significant concerns in various fields including biology, medicine, epidemiology, economics, finance and remote sensing. So far, there have been a wealth of research achievements on classical low-dimensional measurement error regression models under various assumptions. Numerous studies focus on parameter estimation for low-dimensional measurement error regression models, with primary techniques listed below: (1) corrected regression estimation methods [1]; (2) Simulation-Extrapolation (SIMEX) estimation methods [2,3]; (3) deconvolution methods [4]; (4) corrected empirical likelihood methods [5,6]. For more detailed discussions on other estimation and hypothesis testing methods for classical low-dimensional measurement error models, please refer to the literature [7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29], as well as the monographs [30,31,32,33,34,35].
As one of the most popular research fields in statistics, high-dimensional regression has been widely used in various areas including genetics, economics, medical imaging, meteorology and sensor networks. Over the past two decades, many high-dimensional regression methods have been proposed, such as the Lasso [36], the smoothly clipped absolute deviation (SCAD) penalty [37], the Elastic Net [38], the Adaptive Lasso [39], the Dantzig Selector [40], smooth integration of counting and absolute deviation (SICA) [41], and the minimax concave penalty (MCP) [42], among many others. These methods estimate the regression coefficients while simultaneously achieving variable selection by adding penalties to the objective function; please refer to the reviews [43,44,45] as well as the monographs [46,47,48].
For variable screening in ultrahigh-dimensional regression models, where the dimension p and the sample size n satisfy $\log p = O(n^{\kappa})$ for some $\kappa > 0$, Fan and Lv [49] proposed the sure independence screening (SIS) method, a pioneering method in this field. For estimation and variable selection in ultrahigh-dimensional regression models, it is suggested to apply the SIS method for variable screening first. Then, based on the variables retained in the first step, regularization methods with penalties can be used to estimate the regression coefficients and identify the significant variables simultaneously. Due to the operability and effectiveness of the SIS method in applications, numerous works have extended it; see [50,51,52,53,54,55,56,57,58,59].
However, most of the aforementioned theories and applications for high-dimensional regression models focus on clean data. In the era of big data, researchers frequently collect high-dimensional data with measurement errors. Typical instances include gene expression data [61] and sensor network data [60]. Such imprecise measurements result from poorly managed or defective data collection processes as well as imprecise measuring instruments. It is well known that ignoring the influence of measurement errors leads to biased estimators and erroneous conclusions. Therefore, developing statistical inference methods for high-dimensional measurement error regression models has drawn a lot of interest.
Based on the type of measurement error, research on high-dimensional measurement error regression models can be divided into three categories: covariates containing measurement errors; response variables containing measurement errors; and both covariates and response variables containing measurement errors. In this paper, we mainly focus on the category in which the covariates contain measurement errors. When the dimension p is larger than the sample size n, parameter estimation is challenging because the bias correction renders the penalized objective function nonconvex, which in turn makes it difficult to obtain the global solution of the optimization problem. We use the following linear regression model to illustrate this problem
$$ y = X\beta + \varepsilon, \tag{1} $$
where $y=(y_1,\ldots,y_n)^T \in \mathbb{R}^n$ is the $n\times 1$ response vector, $X=(X_1,\ldots,X_n)^T \in \mathbb{R}^{n\times p}$ is the $n\times p$ fixed design matrix with $X_i=(x_{i1},\ldots,x_{ip})^T$, $\beta=(\beta_1,\ldots,\beta_p)^T \in \mathbb{R}^p$ is the sparse regression coefficient vector with only $s$ nonzero components, and the model error vector $\varepsilon=(\varepsilon_1,\ldots,\varepsilon_n)^T \in \mathbb{R}^n$ is assumed to be independent of $X$. In order to obtain a sparse estimator of the true regression coefficient vector $\beta_0=(\beta_{01},\ldots,\beta_{0p})^T \in \mathbb{R}^p$, we can minimize the following penalized least squares objective function
$$ \frac{1}{2n}\|y - X\beta\|_2^2 + \|p_\lambda(\beta)\|_1, \tag{2} $$
which is equivalent to minimizing
$$ \frac{1}{2}\beta^T\Sigma\beta - \rho^T\beta + \|p_\lambda(\beta)\|_1, \tag{3} $$
where $\Sigma = n^{-1}X^TX$, $\rho = n^{-1}X^Ty$, and $p_\lambda(\cdot)$ is a penalty function with regularization parameter $\lambda \geq 0$. If the covariate matrix $X$ can be measured precisely, the penalized objective functions (2) and (3) are convex. Thus, we can obtain a sparse estimator of $\beta_0$ by minimizing the penalized objective function (2) or (3).
However, the covariate matrix $X$ often cannot be observed accurately in practice. Let $W=(W_1,\ldots,W_n)^T=(w_{ij})_{n\times p}$ be the observed covariate matrix with additive measurement errors satisfying $W = X + U$, where $U=(U_1,\ldots,U_n)^T$ is the matrix of measurement errors, $U_i=(u_{i1},\ldots,u_{ip})^T$ follows a sub-Gaussian distribution with mean zero and covariance matrix $\Sigma_u$, and $U$ is assumed to be independent of $(X, y)$. To reduce the influence of the measurement errors, Loh and Wainwright [62] proposed replacing $\Sigma$ and $\rho$ in the penalized objective function (3) by the consistent estimators $\hat\Sigma = n^{-1}W^TW - \Sigma_u$ and $\tilde\rho = n^{-1}W^Ty$, respectively. Then a sparse estimator of $\beta_0$ can be obtained by minimizing the following penalized objective function
$$ \frac{1}{2}\beta^T\hat\Sigma\beta - \tilde\rho^T\beta + \|p_\lambda(\beta)\|_1. \tag{4} $$
Note that when the dimension p is fixed or smaller than the sample size n, $\hat\Sigma$ is guaranteed to be positive definite or positive semi-definite, which in turn ensures that the penalized objective function (4) remains convex. Thus, the global minimizer of (4) can be obtained.
However, for high-dimensional or ultrahigh-dimensional regression models, i.e., $p > n$ or $p \gg n$, there are two key problems: (i) the penalized objective function (4) is no longer convex and is unbounded from below, because the corrected estimator $\hat\Sigma$ of $\Sigma$ is no longer positive semi-definite; this makes it impossible to obtain an estimator of $\beta_0$ by directly minimizing (4); (ii) in order to construct an objective function similar to that of the standard Lasso and solve the corresponding optimization problem using the R packages "glmnet" or "lars", it is necessary to decompose $\hat\Sigma$ by the Cholesky decomposition and obtain substitutes for the response vector and the covariate matrix; however, this process leads to error accumulation and makes it challenging to guarantee valid theoretical results; see the detailed discussions in [63,64].
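To illustrate problem (i), the following small numerical sketch (a hypothetical simulation; the dimensions and error variance are chosen only for illustration) shows that the corrected matrix $\hat\Sigma = n^{-1}W^TW - \Sigma_u$ typically has negative eigenvalues when $p > n$, so the quadratic loss in (4) is unbounded from below.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, sigma_u = 50, 200, 0.5           # p > n; error s.d. is an illustrative value

X = rng.normal(size=(n, p))            # unobserved true design
U = sigma_u * rng.normal(size=(n, p))  # additive measurement errors
W = X + U                              # observed error-prone design

Sigma_hat = W.T @ W / n - (sigma_u ** 2) * np.eye(p)   # corrected covariance estimator
eigvals = np.linalg.eigvalsh(Sigma_hat)

# With p > n, W'W/n has rank at most n, so subtracting Sigma_u pushes many
# eigenvalues below zero and the objective (4) is no longer convex.
print("smallest eigenvalue:", eigvals.min())
print("number of negative eigenvalues:", int(np.sum(eigvals < 0)))
```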
For problem (i), Loh and Wainwright [62] changed the unconstrained optimization problem into a constrained optimization problem by adding restrictions to β . They suggested applying the projected gradient descent algorithm to solve the restricted optimization problem and acquire the global optimal solution of true regression coefficient vector β 0 . Nevertheless, the penalized objective function of the optimization problem is still nonconvex. To address this issue, Datta and Zou [63] suggested substituting Σ ^ by its semi-positive definite projection matrix Σ ˜ , and they proposed convex conditioned Lasso (CoCoLasso). Further, Zheng et al. [64] introduced a balanced estimation that prevented overfitting while maintaining the estimation accuracy by combining l 1 and concave penalty. Tao et al. [65] constructed a modified least-squares loss function using a semi-positive definite projection matrix for estimated covariance matrix and proposed calibrated zero-norm regularized least squares (CaZnRLS) estimation of regression coefficients. Rosenbaum and Tsybakov [66,67] proposed a matrix uncertainty (MU) selector and its improved version compensated MU selector for high-dimensional linear models with additive measurement errors in covariates. Sørensen et al. [68] extended MU selector to generalized linear models and developed the generalized matrix uncertainty (GMU) selector. Sørensen et al. [69] showed the theoretical results of relevant variable selection methods. Based on MU selector, Belloni et al. [70] introduced an estimator that can achieve the minimax efficiency bound. They proved that the corresponding optimization problem can be converted into a second-order cone programming problem, which can be solved in polynomial time. Romeo and Thoresen [71] evaluated the performance of MU selector in [66], nonconvex Lasso in [62], and CoCoLasso in [63] using simulation studies. Brown et al. [72] proposed a path-following iterative algorithm called Measurement Error Boosting (MEBoost), which is a computationally effective method for variable selection in high-dimensional measurement error regression models. Nghiem and Potgieter [73] introduced a new estimation method called simulation-selection-extrapolation (SIMSELEX), which used Lasso in the simulation step and group Lasso in the selection step. Jiang and Ma [74] drew on the idea of nonconvex Lasso in [62] and proposed an estimator of the regression coefficients for high-dimensional Poisson models with measurement errors. Byrd and McGee [75] developed an iterative estimation method for high-dimensional generalized linear models with additive measurement errors based on the imputation-regularized optimization (IRO) algorithm in [76]. However, the error accumulation issue mentioned in problem (ii) has not been addressed in the literature.
The aforementioned works place more emphasis on estimation and variable selection problems rather than hypothesis testing. For high-dimensional regression models with clean data, research on hypothesis testing problems has made significant progress under various settings in [77,78,79,80,81,82,83,84]. For high-dimensional measurement error models, the hypothesis testing methods are equally crucial. However, the bias and instability caused by measurement errors make hypothesis testing extremely difficult. Recently, some progress has been achieved in statistical inference methods. Based on multiplier bootstrap, Belloni [85] constructed simultaneous confidence intervals for the target parameters in high-dimensional linear measurement error models. Focused on the case where a fixed number of covariates contain measurement errors, Li et al. [86] proposed a corrected decorrelated score test for parameters corresponding to the error-prone covariates and created asymptotic confidence intervals for them. Huang et al. [87] proposed a new variable selection method based on debiased CoCoLasso and proved that it can achieve false discovery rate (FDR) control. Jiang et al. [88] developed Wald and score tests for high-dimensional Poisson measurement error models.
Compared with the estimation and hypothesis testing methods above, screening techniques for ultrahigh-dimensional measurement error models are relatively scarce. Nghiem et al. [89] introduced two screening methods, corrected penalized marginal screening (PMSc) and corrected sure independence screening (SISc), for ultrahigh-dimensional linear measurement error models.
This paper gives an overview of the estimation and hypothesis testing methods for high-dimensional measurement error regression models, as well as the variable screening methods for ultrahigh-dimensional measurement error models. The rest of this paper is organized as follows. In Section 2, we review some estimation methods for linear models. We survey the estimation methods for generalized linear models in Section 3. Section 4 presents the recent advances in hypothesis testing methods for high-dimensional measurement error models. Section 5 introduces the variable screening techniques for ultrahigh-dimensional linear measurement error models. We conclude the paper with some discussions in Section 6.
Notations. Let $\mathbb{S}^p$ be the set of all $p\times p$ real symmetric matrices and $\mathbb{S}^p_+$ be the subset of $\mathbb{S}^p$ containing all positive semi-definite matrices. We use $|A|$ to denote the cardinality of a set $A$. Let $S=\{j: \beta_{0j}\neq 0,\ j=1,\ldots,p\}$ be the index set of the nonzero parameters. For a vector $a=(a_1,\ldots,a_m)^T\in\mathbb{R}^m$, let $\|a\|_q=(\sum_{\ell=1}^m |a_\ell|^q)^{1/q}$, $1\le q<\infty$, denote its $l_q$ norm, and write $\|a\|_\infty=\max_{1\le \ell\le m}|a_\ell|$. Denote by $a_A\in\mathbb{R}^{|A|}$ the subvector of $a$ with index set $A\subseteq\{1,\ldots,m\}$, and by $e$ the vector of all ones. For a matrix $B=(b_{ij})$, let $\|B\|_1=\max_j\sum_i|b_{ij}|$, $\|B\|_{\max}=\max_{i,j}|b_{ij}|$ and $\|B\|_\infty=\max_i\sum_j|b_{ij}|$. For constants $a$ and $b$, define $a\vee b=\max\{a,b\}$. We use $c$ and $C$ to denote positive constants that may vary throughout the paper. Finally, let $\xrightarrow{d}$ denote convergence in distribution.

2. Estimation Methods for Linear Models

This section mainly focuses on the linear model (1) with high-dimensional settings where the dimension p is larger than the sample size n. When the data can be observed precisely, we can estimate the true regression coefficient vector β 0 by minimizing the penalized objective function (2) or (3). However, we frequently come across cases where the measured covariates contain measurement errors. There are various types of measurement error data, and we primarily focus on the two categories below.
(1) Covariates with additive errors. The observed error-prone covariate is $W_i = X_i + U_i$, where the measurement error $U_i$ is independent of $X_i$ and independently generated from a distribution with mean zero and known covariance matrix $\Sigma_u$.
(2) Covariates with multiplicative errors. The observed error-prone covariate is $W_i = X_i \odot M_i$, where $\odot$ denotes the Hadamard product, and the measurement error $M_i$ is independent of $X_i$ and follows a distribution with mean $\mu_M$ and known covariance matrix $\Sigma_M$.
Our main goal is to obtain a sparse estimator $\hat\beta$ of the true regression coefficient vector $\beta_0$ in the presence of measurement errors. As introduced in Section 1, after correcting the bias caused by the measurement errors, the penalized objective function becomes nonconvex and unbounded from below, which prevents us from directly solving the optimization problem. Several works have focused on this issue and proposed estimation methods.

2.1. Nonconvex Lasso

In order to resolve the issue that the objective function is unbounded from below and unsolvable in the presence of measurement errors, Loh and Wainwright [62] added a restriction on the regression coefficients $\beta$ and adopted the $l_1$ penalty. The estimator of $\beta_0$ is then obtained from the following $l_1$-constrained quadratic program
$$ \hat\beta_{\mathrm{NCL}} \in \arg\min_{\|\beta\|_1 \le c_0\sqrt{s}} \left\{\frac{1}{2}\beta^T\hat\Sigma\beta - \tilde\rho^T\beta + \lambda\|\beta\|_1\right\} =: \arg\min_{\|\beta\|_1 \le c_0\sqrt{s}} \left\{L(\beta) + \lambda\|\beta\|_1\right\}, \tag{5} $$
where $c_0 > 0$ is a constant, $s = |S|$ denotes the number of nonzero components of $\beta_0$, $L(\beta) = 2^{-1}\beta^T\hat\Sigma\beta - \tilde\rho^T\beta$ is the loss function, and $\hat\Sigma$ and $\tilde\rho$ are consistent estimators of the covariance matrix $\Sigma$ of $X_i$ and the marginal correlation vector $\rho$ of $(X_i, y_i)$, respectively; they differ according to the type of measurement error. Under the additive error setting,
$$ \hat\Sigma_{\mathrm{add}} = n^{-1}W^TW - \Sigma_u, \qquad \tilde\rho_{\mathrm{add}} = n^{-1}W^Ty. \tag{6} $$
Under the multiplicative error setting,
$$ \hat\Sigma_{\mathrm{mul}} = n^{-1}W^TW \oslash (\Sigma_M + \mu_M\mu_M^T), \qquad \tilde\rho_{\mathrm{mul}} = n^{-1}W^Ty \oslash \mu_M, \tag{7} $$
where $\oslash$ denotes the elementwise division operator; throughout the sequel, let $\hat\Sigma = \hat\Sigma_{\mathrm{add}}$ or $\hat\Sigma_{\mathrm{mul}}$ according to the error type. The reason for using "$\in$" rather than "$=$" in (5) is that the objective function may have several local minima. Note that this method still relies on a nonconvex objective function to obtain the estimator of $\beta_0$; thus, we refer to it as the "nonconvex Lasso".
The nonconvexity of the penalized objective function makes it challenging to obtain the global minimum of the optimization problem (5). To solve the optimization problem (5), Loh and Wainwright [62] applied the projected gradient descent algorithm and demonstrated that even if the penalized objective function is nonconvex, the solution produced by this algorithm can reach the global minimum with high probability. The algorithm finds the global minimum in an iterative way as follows. At ( k + 1 ) th iteration,
$$ \beta_{\mathrm{NCL}}^{(k+1)} = \arg\min_{\|\beta\|_1 \le c_0\sqrt{s}} \left\{ L(\beta_{\mathrm{NCL}}^{(k)}) + \nabla L(\beta_{\mathrm{NCL}}^{(k)})^T(\beta - \beta_{\mathrm{NCL}}^{(k)}) + \frac{\eta}{2}\|\beta - \beta_{\mathrm{NCL}}^{(k)}\|_2^2 + \lambda\|\beta\|_1 \right\}, \tag{8} $$
where $\nabla L(\beta) = \hat\Sigma\beta - \tilde\rho$ is the gradient of the loss function $L(\beta)$ and $\eta > 0$ denotes the stepsize parameter. For details of this algorithm, please see [62,90,91,92]. Loh and Wainwright [62] proved that the iterates in (8) come quite close to the global minimum in both the $l_1$-norm and the $l_2$-norm under some conditions. Specifically, for all $k \ge 0$,
$$ \|\beta_{\mathrm{NCL}}^{(k)} - \hat\beta_{\mathrm{NCL}}\|_2^2 \le \gamma^k \|\beta_{\mathrm{NCL}}^{(0)} - \hat\beta_{\mathrm{NCL}}\|_2^2 + C_1\frac{\log p}{n}\|\hat\beta_{\mathrm{NCL}} - \beta_0\|_1^2 + C_2\|\hat\beta_{\mathrm{NCL}} - \beta_0\|_2^2, $$
$$ \|\beta_{\mathrm{NCL}}^{(k)} - \hat\beta_{\mathrm{NCL}}\|_1 \le 2\sqrt{s}\,\|\beta_{\mathrm{NCL}}^{(k)} - \hat\beta_{\mathrm{NCL}}\|_2 + 2\sqrt{s}\,\|\hat\beta_{\mathrm{NCL}} - \beta_0\|_2 + 2\|\hat\beta_{\mathrm{NCL}} - \beta_0\|_1, $$
where $C_1$ and $C_2$ are positive constants and $\gamma \in (0,1)$ is a contraction coefficient independent of $(n, p, k)$. For the estimator $\hat\beta_{\mathrm{NCL}}$ of the true regression coefficient vector $\beta_0$, Loh and Wainwright [62] showed that, with any $c_0 \ge \|\beta_0\|_2$ and $\lambda = O(\sqrt{\log p / n})$, the $l_q$-estimation error of $\hat\beta_{\mathrm{NCL}}$ satisfies the bounds
$$ \|\hat\beta_{\mathrm{NCL}} - \beta_0\|_q = O\!\left(s^{1/q}\sqrt{\frac{\log p}{n}}\right), \qquad q = 1, 2. $$
When $q=1$, the $l_1$-estimation error attains the convergence rate $s\sqrt{\log p / n}$; when $q=2$, the $l_2$-estimation error attains the convergence rate $\sqrt{s\log p / n}$. However, Loh and Wainwright [62] did not establish the variable selection consistency or an oracle inequality for the prediction error of the nonconvex Lasso estimator.
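The following is a minimal sketch of the projected composite gradient iteration (8) under the additive-error surrogates (6). The soft-thresholding step and the sort-based $l_1$-ball projection are standard tools (not code from [62]), and all tuning constants are illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding operator."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def project_l1_ball(v, radius):
    """Euclidean projection onto {x : ||x||_1 <= radius} (sort-based algorithm)."""
    if np.abs(v).sum() <= radius:
        return v
    u = np.sort(np.abs(v))[::-1]
    cssv = np.cumsum(u) - radius
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > cssv)[0][-1]
    theta = cssv[rho] / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def nonconvex_lasso(W, y, Sigma_u, lam, radius, eta=1.0, n_iter=500):
    """Projected composite gradient sketch for the corrected quadratic loss
    L(beta) = beta' Sigma_hat beta / 2 - rho_tilde' beta (additive errors)."""
    n, p = W.shape
    Sigma_hat = W.T @ W / n - Sigma_u        # corrected covariance estimator (6)
    rho_tilde = W.T @ y / n
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = Sigma_hat @ beta - rho_tilde  # gradient of the quadratic loss
        beta = soft_threshold(beta - grad / eta, lam / eta)
        beta = project_l1_ball(beta, radius) # enforce ||beta||_1 <= c_0 sqrt(s)
    return beta
```

In practice the regularization parameter, the radius $c_0\sqrt{s}$ and the stepsize $\eta$ must be tuned, for instance by cross-validation, and a stopping rule as discussed in [62] should be monitored.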

2.2. Convex Conditioned Lasso

Nonconvex Lasso [62] overcomes the problem of unsolvability caused by nonconvex objective function in the presence of measurement errors. However, there are some drawbacks to this method. First, the nonconvex Lasso solves the problem by adding constraints to β , but the penalized objective function remains nonconvex. It is well recognized that the convexity of the penalized objective function will be incredibly useful for theoretical analysis and computation. Second, two important unknown parameters c 0 and s are included in the optimization problem (5). These two parameters have a direct impact on the estimation results, but we are not sure about their magnitudes in applications. Third, Loh and Wainwright [62] have not established the variable selection results of nonconvex Lasso estimator. To remedy these issues, Datta and Zou [63] proposed Convex Conditioned Lasso (CoCoLasso) based on a convex objective function, which possesses computational and theoretical superiority brought by convexity.
In order to construct the convex objective function, Datta and Zou [63] introduced a nearest positive semi-definite matrix projection operator for the square matrix, which is defined as
$$ (A)_+ = \arg\min_{A_1 \succeq 0} \|A - A_1\|_{\max}, \tag{9} $$
where A is a square matrix. Let Σ ˜ = ( Σ ^ ) + , and the alternating direction method of multipliers (ADMM) algorithm [93] can be utilized to derive Σ ˜ from Σ ^ . Based on Σ ˜ , the following convex objective function can be constructed, and it yields CoCoLasso estimator
$$ \hat\beta_{\mathrm{coco}} = \arg\min_{\beta} \left\{\frac{1}{2}\beta^T\tilde\Sigma\beta - \tilde\rho^T\beta + \lambda\|\beta\|_1\right\}. \tag{10} $$
When the covariates contain additive measurement errors,
$$ \tilde\Sigma_{\mathrm{add}} = (\hat\Sigma_{\mathrm{add}})_+, \qquad \tilde\rho_{\mathrm{add}} = n^{-1}W^Ty, \qquad \hat\Sigma_{\mathrm{add}} = n^{-1}W^TW - \Sigma_u. \tag{11} $$
When the covariates contain multiplicative measurement errors,
$$ \tilde\Sigma_{\mathrm{mul}} = (\hat\Sigma_{\mathrm{mul}})_+, \qquad \tilde\rho_{\mathrm{mul}} = n^{-1}W^Ty \oslash \mu_M, \qquad \hat\Sigma_{\mathrm{mul}} = n^{-1}W^TW \oslash (\Sigma_M + \mu_M\mu_M^T). \tag{12} $$
Note that $\tilde\Sigma$ not only makes the objective function convex but also possesses the same level of estimation accuracy as $\hat\Sigma$ in [62], which is guaranteed by the following inequality
$$ \|\tilde\Sigma - \Sigma\|_{\max} \le \|\tilde\Sigma - \hat\Sigma\|_{\max} + \|\hat\Sigma - \Sigma\|_{\max} \le 2\|\hat\Sigma - \Sigma\|_{\max}. $$
Since $\tilde\Sigma$ is positive semi-definite, we can perform a Cholesky decomposition on it. The Cholesky factor of $\tilde\Sigma$ can then be used to simplify the computation by rewriting (10) as
$$ \hat\beta_{\mathrm{coco}} = \arg\min_{\beta} \left\{\frac{1}{2n}\|\tilde y - \tilde W\beta\|_2^2 + \lambda\|\beta\|_1\right\}, \tag{13} $$
where $\tilde W$ denotes the Cholesky factor of $\tilde\Sigma$ satisfying $n^{-1}\tilde W^T\tilde W = \tilde\Sigma$, and $\tilde y$ is the vector satisfying $n^{-1}\tilde W^T\tilde y = \tilde\rho$. The penalized objective function in (13) has the same form as that of the standard Lasso. Thus, we can utilize the coordinate descent algorithm to obtain the CoCoLasso estimator; see the details in [63,94,95]. Theoretically, Datta and Zou [63] established the $l_q$-estimation ($q = 1, 2$) and prediction error bounds of the CoCoLasso estimator. Suppose that
$$ \psi = \min_{\delta \neq 0,\ \|\delta_{S^c}\|_1 \le 3\|\delta_S\|_1} \frac{\delta^T\Sigma\delta}{\|\delta\|_2^2} > 0. $$
For $s\,\zeta\sqrt{\log p/n} < \lambda \le \min\{\epsilon_0,\ 12\epsilon_0\|\beta_{0S}\|_\infty\}$, where $\zeta = \max\{\sigma_\varepsilon^4, \sigma_U^4, 1\}$, $\epsilon_0 = \sigma_U^2$, and $\sigma_\varepsilon^2$ and $\sigma_U^2$ are the sub-Gaussian parameters of the model error and the measurement error, respectively, the CoCoLasso estimator $\hat\beta_{\mathrm{coco}}$ satisfies, with probability at least $1 - C\exp(-c\log p)$,
$$ \|\hat\beta_{\mathrm{coco}} - \beta_0\|_q = O\!\left(\frac{\lambda s^{1/q}}{\psi}\right), \qquad q = 1, 2, \tag{14} $$
$$ n^{-1/2}\|X(\hat\beta_{\mathrm{coco}} - \beta_0)\|_2 = O\!\left(\frac{\lambda\sqrt{s}}{\sqrt{\psi}}\right). \tag{15} $$
The formulas (14) and (15) give oracle inequalities for the $l_q$-estimation error with $q = 1, 2$ and for the prediction error. Further, Datta and Zou [63] established the sign consistency of the CoCoLasso estimator under an additional irrepresentable condition and a minimum signal strength condition, whereas no variable selection result was provided for the nonconvex Lasso estimator $\hat\beta_{\mathrm{NCL}}$ in [62]. Thus, the CoCoLasso estimation method not only enjoys the computational convenience of convexity but also possesses excellent theoretical properties. However, when the dimension p is large, the computation of $\tilde\Sigma$ is expensive. To improve the computational efficiency, Escribe et al. [96] applied a two-step block descent algorithm and proposed the block coordinate descent convex conditioned Lasso (BDCoCoLasso), which is designed for the case in which the covariate matrix is only partially corrupted.
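The following is a minimal sketch of the CoCoLasso pipeline under additive errors: project the corrected matrix onto the positive semi-definite cone, form surrogate data by a Cholesky-type factorization, and run a standard Lasso solver. For brevity the nearest-PSD step is approximated by eigenvalue clipping rather than the max-norm ADMM projection (9) of [63], and scikit-learn's Lasso is used as a generic solver; these substitutions are illustrative only.

```python
import numpy as np
from sklearn.linear_model import Lasso

def cocolasso_additive(W, y, Sigma_u, lam, eps=1e-8):
    """Sketch of CoCoLasso for additive measurement errors.

    The PSD step clips negative eigenvalues (a Frobenius-norm projection),
    which only approximates the max-norm projection used by Datta and Zou."""
    n, p = W.shape
    Sigma_hat = W.T @ W / n - Sigma_u                 # corrected covariance (11)
    rho_tilde = W.T @ y / n

    # Approximate nearest-PSD projection by eigenvalue clipping.
    vals, vecs = np.linalg.eigh(Sigma_hat)
    Sigma_tilde = (vecs * np.maximum(vals, eps)) @ vecs.T

    # Surrogate data: n^{-1} W~'W~ = Sigma_tilde and n^{-1} W~'y~ = rho_tilde.
    L = np.linalg.cholesky(Sigma_tilde)               # Sigma_tilde = L L'
    W_tilde = np.sqrt(n) * L.T
    y_tilde = np.sqrt(n) * np.linalg.solve(L, rho_tilde)

    # Standard Lasso on the surrogate data, as in (13).
    # (alpha follows scikit-learn's 1/(2*n_rows) normalization; rescale as needed.)
    fit = Lasso(alpha=lam, fit_intercept=False, max_iter=10000)
    fit.fit(W_tilde, y_tilde)
    return fit.coef_
```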

2.3. Balanced Estimation

CoCoLasso is effective in parameter estimation of high-dimensional measurement error models, but it suffers from overfitting. To overcome this drawback, Zheng et al. [64] replaced Lasso penalty in CoCoLasso with the combined l 1 and concave penalty and developed the balanced estimator, which can be obtained by
$$ \hat\beta_{\mathrm{bal}} = \arg\min_{\beta} \left\{\frac{1}{2}\beta^T\tilde\Sigma\beta - \tilde\rho^T\beta + \lambda_0\|\beta\|_1 + \|p_\lambda(\beta)\|_1\right\}, \tag{16} $$
where $\lambda_0 = c_1\sqrt{\log p/n}$ is the regularization parameter for the $l_1$ penalty with $c_1$ a positive constant, $p_\lambda(\beta) = [p_\lambda(|\beta_1|), \ldots, p_\lambda(|\beta_p|)]^T$, and $p_\lambda(u)$, $u \in [0, +\infty)$, is a concave penalty function with tuning parameter $\lambda \ge 0$. The definitions of $\tilde\Sigma$ and $\tilde\rho$ are the same as those in (11) and (12) for the two kinds of measurement error data. In contrast to the CoCoLasso estimator, the balanced estimator strikes a balance between prediction and variable selection, and the improved variable selection in turn promotes the estimation and prediction accuracy. The simulation studies in [64] demonstrate the estimation and prediction accuracy, as well as the better variable selection performance, of the balanced estimator. As for the asymptotic properties of $\hat\beta_{\mathrm{bal}}$, Zheng et al. [64] established the following oracle inequalities for the $l_q$-estimation and prediction errors:
$$ \|\hat\beta_{\mathrm{bal}} - \beta_0\|_q = O_p\!\left(\frac{\lambda_0 s^{1/q}}{\phi^2}\right), \qquad q = 1, 2, \tag{17} $$
$$ n^{-1/2}\|X(\hat\beta_{\mathrm{bal}} - \beta_0)\|_2 = O_p\!\left(\frac{\lambda_0\sqrt{s}}{\phi}\right), \tag{18} $$
where
$$ \phi = \min_{\delta \neq 0,\ \|\delta_{S^c}\|_1 \le 7\|\delta_S\|_1} \frac{n^{-1/2}\|X\delta\|_2}{\|\delta_S\|_2 \vee \|\delta_{S^c}^*\|_2} > 0, $$
and $\delta_{S^c}^* \in \mathbb{R}^s$ contains the $s$ largest absolute values of $\delta_{S^c}$. It can be seen from (17) and (18) that the bounds for the $l_q$-estimation ($q = 1, 2$) and prediction errors are free of the regularization parameter $\lambda$ of the concave penalty. The upper bound on the number of falsely discovered signs is also provided in [64]. Denote $\mathrm{FS}(\hat\beta) = |\{1 \le j \le p: \mathrm{sgn}(\hat\beta_j) \neq \mathrm{sgn}(\beta_{0,j})\}|$; then
$$ \mathrm{FS}(\hat\beta) = O_p\!\left(\frac{\lambda_0^2 s}{\lambda^2\phi^4}\right). \tag{19} $$
From (19), we can see that if $\min_{j\in S}|\beta_{0j}| \gg \sqrt{s\log p/n}$ so that $\lambda^2 \gg \lambda_0^2 s$, the balanced estimator achieves sign consistency, which is stronger than variable selection consistency. In comparison, the CoCoLasso estimator requires an additional irrepresentable condition to achieve this property.
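One common way to handle a combined $l_1$ plus concave penalty in practice is a local linear approximation (LLA) step, which turns the concave part into coefficient-specific weights for a weighted Lasso on the surrogate data $(\tilde W, \tilde y)$ from the CoCoLasso sketch above. The sketch below is not the exact algorithm of [64]; SCAD is used as an example concave penalty, and the column-rescaling trick stands in for a weighted Lasso solver.

```python
import numpy as np
from sklearn.linear_model import Lasso

def scad_derivative(u, lam, a=3.7):
    """Derivative of the SCAD penalty (one example of a concave penalty)."""
    u = np.abs(u)
    return np.where(u <= lam, lam, np.maximum(a * lam - u, 0.0) / (a - 1.0))

def balanced_lla_step(W_tilde, y_tilde, beta_init, lam0, lam, a=3.7):
    """One LLA step for the combined penalty: minimize
    (2n)^{-1}||y~ - W~ beta||^2 + sum_j (lam0 + p'_lam(|beta_init_j|)) |beta_j|,
    implemented as a weighted Lasso via column rescaling (illustrative only)."""
    weights = lam0 + scad_derivative(beta_init, lam, a)  # per-coefficient penalty levels
    scale = weights / weights.min()
    fit = Lasso(alpha=weights.min(), fit_intercept=False, max_iter=10000)
    fit.fit(W_tilde / scale, y_tilde)                    # rescaled columns mimic weights
    return fit.coef_ / scale
```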

2.4. Calibrated Zero-norm Regularized Least Square Estimation

The nearest positive semi-definite matrix projection defined in [63] resolves the nonconvexity of the penalized objective function in high-dimensional measurement error models. However, under the positive semi-definiteness constraint, the computational cost of $\tilde\Sigma$ is high: Tao et al. [65] demonstrated that, as the dimension p increases, the time required to compute $\tilde\Sigma$ by the ADMM algorithm increases significantly. Thus, Tao et al. [65] suggested replacing $\tilde\Sigma$ with an approximation of $\hat\Sigma$ that is easier to obtain, albeit less precise. To this end, consider the eigendecomposition of $\hat\Sigma$:
$$ \hat\Sigma = V\,\mathrm{diag}(\theta_1, \ldots, \theta_p)\,V^T, $$
where diag ( θ 1 , , θ p ) is a diagonal matrix containing the eigenvalues of Σ ^ with θ 1 θ 2 θ p , V R p × p is an orthonormal matrix consisting of the corresponding eigenvectors. Then, Tao et al. [65] substituted Frobenius norm for elementwise maximum norm in (9) and obtained a positive definite approximation of Σ ^ as follows
$$ \tilde\Sigma_F = \arg\min_{W \succeq \xi I} \|\hat\Sigma - W\|_F \quad \text{for some } \xi > 0. \tag{20} $$
Note that the optimal solution of (20) is the same as that of the problem
$$ \min_{W \succeq \xi I} \|\hat\Sigma - W\|_F^2. \tag{21} $$
Thus, we have
$$ \tilde\Sigma_F = \xi I + \Pi_{\mathbb{S}^p_+}(\hat\Sigma - \xi I) = V\,\mathrm{diag}[\max(\theta_1, \xi), \ldots, \max(\theta_p, \xi)]\,V^T, \tag{22} $$
where Π S + p ( · ) denotes the projection of a matrix on S + p . Similar to Σ ˜ , we have Σ ˜ F = n 1 W ˜ F T W ˜ F , where n 1 / 2 W ˜ F is Cholesky factor of Σ ˜ F . Let y ˜ F be the vector satisfying n 1 W ˜ F T y ˜ F = ρ ˜ . By some simple calculation, we can obtain that
$$ \tilde W_F = \sqrt{n}\,V\,\mathrm{diag}\!\left[\sqrt{\max(\theta_1, \xi)}, \ldots, \sqrt{\max(\theta_p, \xi)}\right]V^T, \qquad \tilde y_F = \sqrt{n}\,V\,\mathrm{diag}\!\left[\frac{1}{\sqrt{\max(\theta_1, \xi)}}, \ldots, \frac{1}{\sqrt{\max(\theta_p, \xi)}}\right]V^T\tilde\rho. \tag{23} $$
Based on equation (22), Σ ˜ F can be obtained easily. This implies that computing Σ ˜ F requires substantially less time than computing Σ ˜ . However, the approximation accuracy of Σ ˜ F to Σ ^ is not as good as that of Σ ˜ because minimizing Frobenius norm may yield larger components compared with the elementwise maximum norm. To get an excellent estimator of β 0 , it is reasonable to find a more effective regression method to replace Lasso. Tao et al. [65] considered the zero norm penalty and defined the following calibrated zero-norm regularized least squares (CaZnRLS) estimator
$$ \hat\beta_{\mathrm{zn}} \in \arg\min_{\beta \in \mathbb{R}^p} \left\{\frac{1}{2n\lambda}\|\tilde W_F\beta - \tilde y_F\|_2^2 + \|\beta\|_0\right\}. \tag{24} $$
However, it is difficult to solve (24) directly. Thus, to give an equivalent form for (24) that can be solved, Tao et al. [65] defined
$$ \phi(u) := \frac{a-1}{a+1}u^2 + \frac{2}{a+1}u \quad (a > 1), \qquad u \in \mathbb{R}. $$
It is easy to verify that for any β R p ,
$$ \|\beta\|_0 = \min_{w \in \mathbb{R}^p} \left\{\sum_{i=1}^p \phi(w_i): (e - w)^T|\beta| = 0,\ 0 \le w \le e\right\}, \tag{25} $$
where | β | = ( | β 1 | , , | β p | ) T . The formula (25) implies that the optimization problem (24) can be rewritten as the following mathematical program with equilibrium constraints (MPEC)
$$ \min_{\beta, w \in \mathbb{R}^p} \left\{\frac{1}{2n\lambda}\|\tilde W_F\beta - \tilde y_F\|_2^2 + \sum_{i=1}^p \phi(w_i): (e - w)^T|\beta| = 0,\ 0 \le w \le e\right\}. \tag{26} $$
Note that if the optimal solution of optimization problem (24) is β ^ * , then the corresponding optimal solution of optimization problem (26) is ( β ^ * , sign ( | β ^ * | ) ) .
However, it can be seen that the annoying nonconvexity is introduced by the restriction ( e w ) T | β | = 0 in (26), and it is the cause of the difficulty in obtaining the estimator β ^ zn . Accordingly, Tao et al. [65] considered the following penalized version of optimization problem (26)
$$ \min_{\beta, w \in \mathbb{R}^p} \left\{\frac{1}{2n\lambda}\|\tilde W_F\beta - \tilde y_F\|_2^2 + \sum_{i=1}^p \phi(w_i) + \rho(e - w)^T|\beta|:\ 0 \le w \le e\right\}, \tag{27} $$
where $\rho > 0$ is the penalty parameter. Tao et al. [65] proved that the global optimal solution of problem (27) with $\rho \ge \bar\rho := (4aL_f)[(a+1)\lambda]^{-1}$ coincides with that of problem (26), where $L_f$ is the Lipschitz constant of the function $f(\beta) := (2n)^{-1}\|\tilde W_F\beta - \tilde y_F\|_2^2$ on the ball $\{\beta \in \mathbb{R}^p: \|\beta\|_2 \le R\}$ and $R$ is a constant. Thus, $\hat\beta_{\mathrm{zn}}$ can be obtained by solving the following optimization problem with $\rho \ge \bar\rho$
$$ \hat\beta_{\mathrm{zn}} \in \arg\min_{\beta \in \mathbb{R}^p,\ w \in [0, e]} \left\{\frac{1}{2n}\|\tilde W_F\beta - \tilde y_F\|_2^2 + \sum_{i=1}^p \left[\lambda\phi(w_i) + \rho(1 - w_i)|\beta_i|\right]\right\}. \tag{28} $$
Tao et al. [65] recommended using the multi-stage convex relaxation approach (GEP–MSCRA) to obtain β ^ zn . This approach solves (28) in an iterative way with the main steps summarized as follows.
Step 1. Initialize the algorithm with w ( 0 ) [ 0 , 2 1 e ] , ρ ( 0 ) = 1 , λ > 0 , k = 1 .
Step 2. Solve the following optimization problem and get β ^ zn ( k )
$$ \hat\beta_{\mathrm{zn}}^{(k)} = \arg\min_{\beta \in \mathbb{R}^p} \left\{\frac{1}{2n}\|\tilde W_F\beta - \tilde y_F\|_2^2 + \lambda\sum_{i=1}^p \big(1 - w_i^{(k-1)}\big)|\beta_i|\right\}. $$
Step 3. If k = 1 , choose an appropriate ρ ( 1 ) > ρ ( 0 ) using the information from β ^ zn ( 1 ) ; if 1 < k 3 , choose ρ ( k ) satisfying ρ ( k ) > ρ ( k 1 ) ; if k > 3 , let ρ ( k ) = ρ ( k 1 ) .
Step 4. Obtain w i ( k ) ( i = 1 , , p ) through the following optimization problem
$$ w_i^{(k)} = \arg\min_{0 \le w_i \le 1} \left\{\phi(w_i) - \rho^{(k)}w_i\big|\hat\beta_{\mathrm{zn},i}^{(k)}\big|\right\}. $$
Step 5. Let k k + 1 and repeat Steps 2–4 until the stopping conditions are satisfied.
Note that the initial w ( 0 ) in Step 1 is an arbitrary vector from the interval [ 0 , 2 1 e ] rather than the feasible set [ 0 , e ] in (28). The reason is to obtain a better initial estimator β ^ zn ( 1 ) . In addition, w i ( k ) in Step 4 has the following closed form based on the convexity of ϕ
$$ w_i^{(k)} = \min\left\{1,\ \max\left(\frac{(a+1)\rho^{(k)}\big|\hat\beta_{\mathrm{zn},i}^{(k)}\big| - 2}{2(a-1)},\ 0\right)\right\}, \qquad i = 1, \ldots, p. $$
Consequently, the primary calculation in each iteration is to solve a weighted l 1 -norm regularized least square problem. Under some regularity conditions, β ^ zn ( k ) satisfies
$$ \|\hat\beta_{\mathrm{zn}}^{(k)} - \beta_0\|_2 = O_p(\lambda\sqrt{s}), \qquad k \in \mathbb{N}_+. \tag{29} $$
It can be seen from (29) that the $l_2$-estimation error bound of the CaZnRLS estimator has the same order as those of the nonconvex Lasso and CoCoLasso estimators. Tao et al. [65] further showed that the error bound of $\hat\beta_{\mathrm{zn}}^{(k+1)}$ is better than that of $\hat\beta_{\mathrm{zn}}^{(k)}$ for all $k \in \mathbb{N}_+$. Furthermore, Tao et al. [65] demonstrated that GEP-MSCRA produces a $\hat\beta_{\mathrm{zn}}^{(k)}$ with $\mathrm{supp}(\hat\beta_{\mathrm{zn}}^{(k)}) = \mathrm{supp}(\beta_0)$ within a finite number of iterations, provided the smallest nonzero entry of $\beta_0$ is not too small.
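A minimal sketch of the GEP-MSCRA iteration under the assumptions above: the surrogate data $(\tilde W_F, \tilde y_F)$ are built from the eigendecomposition formulas (22)-(23), and each stage solves a weighted Lasso (here via scikit-learn, with column rescaling standing in for coefficient-specific weights). The $\rho$-updating schedule of Step 3 is simplified to a fixed $\rho$, so this is illustrative rather than the exact algorithm of [65].

```python
import numpy as np
from sklearn.linear_model import Lasso

def surrogate_data(W, y, Sigma_u, xi=1e-3):
    """Build (W_F, y_F) from the eigendecomposition-based approximation (22)-(23)."""
    n = W.shape[0]
    Sigma_hat = W.T @ W / n - Sigma_u
    rho_tilde = W.T @ y / n
    theta, V = np.linalg.eigh(Sigma_hat)
    d = np.maximum(theta, xi)                    # eigenvalues clipped at xi
    W_F = np.sqrt(n) * (V * np.sqrt(d)) @ V.T
    y_F = np.sqrt(n) * (V * (1.0 / np.sqrt(d))) @ V.T @ rho_tilde
    return W_F, y_F

def gep_mscra(W_F, y_F, lam, rho=5.0, a=3.7, n_stage=4):
    """Simplified multi-stage convex relaxation: each stage is a weighted Lasso."""
    p = W_F.shape[1]
    w = np.zeros(p)                              # w^(0) in [0, e/2]
    beta = np.zeros(p)
    for _ in range(n_stage):
        weights = np.maximum(lam * (1.0 - w), 1e-8 * lam)   # penalty level per coefficient
        scale = weights / weights.min()
        fit = Lasso(alpha=weights.min(), fit_intercept=False, max_iter=10000)
        fit.fit(W_F / scale, y_F)
        beta = fit.coef_ / scale
        # Closed-form w-update (Step 4), clipped to [0, 1].
        w = np.clip(((a + 1) * rho * np.abs(beta) - 2) / (2 * (a - 1)), 0.0, 1.0)
    return beta
```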

2.5. Linear and Conic Programming Estimation

In addition to the approaches mentioned above, another class of methods is based on the idea of Dantzig selector to acquire estimator of true regression coefficients β 0 . Rosenbaum and Tsybakov [66] proposed the following matrix uncertainty (MU) selector
$$ \hat\beta_{\mathrm{MU}} = \arg\min_{\beta} \left\{\|\beta\|_1: \|n^{-1}W^T(y - W\beta)\|_\infty \le \delta\|\beta\|_1 + \lambda\right\}, \tag{30} $$
where δ 0 and λ 0 are tuning parameters depending on the level of measurement error U and model error ε , respectively.
However, (30) involves $n^{-1}W^TW$ rather than $n^{-1}X^TX$ owing to the unobservability of $X$, and the matrix $n^{-1}W^TW$ contains a bias caused by the measurement errors. To address this issue, Rosenbaum and Tsybakov [67] proposed an improved version of the MU selector, called the compensated MU selector. It is applicable to the case in which the entries of the measurement error $U_i$ are independent and $\sigma_{U,j}^2 = n^{-1}\sum_{i=1}^n E(U_{ij}^2)$ is finite for $j = 1, \ldots, p$. The compensated MU selector is defined as
$$ \hat\beta_{\mathrm{CMU}} = \arg\min_{\beta} \left\{\|\beta\|_1: \|n^{-1}W^T(y - W\beta) + \hat D\beta\|_\infty \le \delta\|\beta\|_1 + \lambda\right\}, \tag{31} $$
where $\hat D$ is a diagonal matrix with diagonal entries $\hat\sigma_{U,j}^2$, $j = 1, \ldots, p$, and the constants $\delta$ and $\lambda$ are the same as those in (30). Rosenbaum and Tsybakov [67] showed that the $l_q$-estimation error of $\hat\beta_{\mathrm{CMU}}$ satisfies
$$ \|\hat\beta_{\mathrm{CMU}} - \beta_0\|_q = O_p\!\left(s^{1/q}(\|\beta_0\|_1 + 1)\sqrt{\frac{\log p}{n}}\right), \qquad 1 \le q \le \infty. $$
MU selector and compensated MU selector provide two alternative estimation methods for high-dimensional measurement error models, but there remains an issue. The optimization problem in (31) may be nonconvex, and Rosenbaum and Tsybakov [67] did not offer a suitable algorithm to the general case. To remedy this issue, Belloni et al. [70] proposed the conic-programming based estimator β ^ cp . Consider the following optimization problem
$$ \min_{\beta, t} \left\{\|\beta\|_1 + \kappa t\right\}, \quad \text{s.t.}\ \ \|n^{-1}W^T(y - W\beta) + \hat D\beta\|_\infty \le \delta t + \lambda, \quad \|\beta\|_2 \le t, \quad t \in \mathbb{R}_+, \tag{32} $$
where $\kappa$, $\delta$ and $\lambda$ are positive tuning parameters. Suppose that the solution of (32) is $(\hat\beta_{\mathrm{cp}}, \hat t)$; then $\hat\beta_{\mathrm{cp}}$ is the conic-programming-based estimator of the true regression coefficient vector $\beta_0$. The optimization problem (32) is a second-order cone program and hence can be solved in polynomial time. To analyze the asymptotic properties of $\hat\beta_{\mathrm{cp}}$, assume that $\kappa \in [2^{-1}, 2]$, $\delta = O(\sqrt{\log p/n})$, and $\lambda = O(\sqrt{\log p/n})$. Then, Belloni et al. [70] showed that the $l_q$-estimation ($1 \le q \le \infty$) and prediction errors of $\hat\beta_{\mathrm{cp}}$ satisfy
$$ \|\hat\beta_{\mathrm{cp}} - \beta_0\|_q = O_p\!\left(s^{1/q}(\|\beta_0\|_2 + 1)\sqrt{\frac{\log p}{n}}\right), \qquad 1 \le q \le \infty, \tag{33} $$
$$ n^{-1/2}\|X(\hat\beta_{\mathrm{cp}} - \beta_0)\|_2 = O_p\!\left(s^{1/2}(\|\beta_0\|_2 + 1)\sqrt{\frac{\log p}{n}}\right). \tag{34} $$
In contrast to the nonconvex Lasso in [62], the conic-programming-based estimator $\hat\beta_{\mathrm{cp}}$ achieves the convergence rates in (33) and (34) without any knowledge of $\|\beta_0\|_1$, $\|\beta_0\|_2$ or s. Compared with the compensated MU selector in [67], $\hat\beta_{\mathrm{cp}}$ can be computed in the general case without the computational difficulty caused by nonconvexity.
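Because (32) is a second-order cone program, it can be handed to a generic convex solver. Below is a minimal sketch using CVXPY (an assumed dependency; the tuning parameters $\kappa$, $\delta$, $\lambda$ are illustrative), not an implementation taken from [70].

```python
import cvxpy as cp
import numpy as np

def conic_programming_estimator(W, y, D_hat, kappa=1.0, delta=0.05, lam=0.05):
    """Sketch of the conic-programming estimator (32) for the compensated MU loss."""
    n, p = W.shape
    beta = cp.Variable(p)
    t = cp.Variable(nonneg=True)
    # n^{-1} W'(y - W beta) + D_hat beta, with D_hat diagonal.
    residual_score = W.T @ (y - W @ beta) / n + cp.multiply(np.diag(D_hat), beta)
    constraints = [cp.norm(residual_score, "inf") <= delta * t + lam,
                   cp.norm(beta, 2) <= t]
    problem = cp.Problem(cp.Minimize(cp.norm1(beta) + kappa * t), constraints)
    problem.solve()
    return beta.value
```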

3. Estimation Methods for Generalized Linear Models

The above methods are mainly for linear models. This section introduces the estimation methods for high-dimensional generalized linear models with measurement errors.

3.1. Estimation Method for Poisson Models

Count data are commonly encountered in various fields including finance, economics and the social sciences, and Poisson regression models are a popular choice for analyzing them. Jiang and Ma [74] studied high-dimensional Poisson regression models with additive measurement errors and proposed an optimization algorithm to obtain an estimator of the true regression coefficient vector $\beta_0$. Suppose that the response $Y_i$ follows a Poisson distribution with $E(Y_i | X_i) = \exp(X_i^T\beta)$, where $X_i \in \mathbb{R}^p$ is an unobservable covariate. Its error-prone surrogate is $W_i = X_i + U_i$, where the measurement error $U_i$ follows a sub-Gaussian distribution with known covariance matrix $\Sigma_u$. It is easy to verify that
$$ E\!\left[Y_iW_i^T\beta - \exp\!\left(\beta^TW_i - \beta^T\Sigma_u\beta/2\right) \,\middle|\, X_i, Y_i\right] = Y_iX_i^T\beta - \exp(\beta^TX_i). \tag{35} $$
From (35), Jiang and Ma [74] imposed restriction on β similar to it in [62] and estimated β by solving the following optimization problem
$$ \hat\beta_{\mathrm{p}} = \arg\min_{\|\beta\|_1 \le c_p\sqrt{s},\ \|\beta\|_2 \le c_p} \left\{L(\beta) + \lambda\|\beta\|_1\right\}, \tag{36} $$
where
$$ L(\beta) = -\frac{1}{n}\sum_{i=1}^n \left[Y_iW_i^T\beta - \exp\!\left(\beta^TW_i - \beta^T\Sigma_u\beta/2\right)\right]. \tag{37} $$
The estimator β ^ p can be obtained by the composite gradient descent algorithm. Specifically, at ( k + 1 ) th iteration, first solve the following optimization problem without any restrictions on β
$$ \tilde\beta_{\mathrm{p}}^{(k+1)} = \arg\min_{\beta} \left\{\left[\partial L(\beta_{\mathrm{p}}^{(k)})/\partial\beta\right]^T(\beta - \beta_{\mathrm{p}}^{(k)}) + \frac{\eta}{2}\|\beta - \beta_{\mathrm{p}}^{(k)}\|_2^2 + \lambda\|\beta\|_1\right\}, $$
where $\eta > 0$ is a stepsize parameter. Next, apply the projection method in [90] to project $\tilde\beta_{\mathrm{p}}^{(k+1)}$ onto the $l_1$ ball with radius $c_p\sqrt{s}$, producing $\breve\beta_{\mathrm{p}}^{(k+1)}$. If $\|\breve\beta_{\mathrm{p}}^{(k+1)}\|_2 > c_p$, let $\hat\beta_{\mathrm{p}}^{(k+1)} = \breve\beta_{\mathrm{p}}^{(k+1)}c_p/\|\breve\beta_{\mathrm{p}}^{(k+1)}\|_2$; otherwise let $\hat\beta_{\mathrm{p}}^{(k+1)} = \breve\beta_{\mathrm{p}}^{(k+1)}$. Repeat the above steps until the stopping condition is satisfied. Jiang and Ma [74] proved the convergence of this algorithm. Under some regularity conditions, they further showed that the global minimizer $\hat\beta_{\mathrm{p}}$ of (36) satisfies
$$ \|\hat\beta_{\mathrm{p}} - \beta_0\|_q = O(s^{1/q}\lambda). $$
There is a usual requirement that $\lambda \ge 2\|\partial L(\beta)/\partial\beta\|_\infty$ in Poisson models, and this gradient term is of larger order than $\sqrt{\log p/n}$. Thus, the convergence rate of $\hat\beta_{\mathrm{p}}$ is slower than those of the nonconvex Lasso, CoCoLasso and balanced estimators in linear models.
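A minimal sketch of the corrected Poisson loss (37) and a composite gradient update, reusing the soft-thresholding and $l_1$-projection helpers from the earlier sketch; the gradient formula is worked out from (37) under the stated assumptions, and all constants are illustrative.

```python
import numpy as np

def corrected_poisson_loss_grad(beta, W, Y, Sigma_u):
    """Corrected Poisson loss (37) and its gradient under additive errors."""
    n = W.shape[0]
    eta = W @ beta - 0.5 * beta @ Sigma_u @ beta        # beta'W_i - beta'Sigma_u beta / 2
    mu = np.exp(eta)
    loss = -np.mean(Y * (W @ beta) - mu)
    grad = -(W.T @ Y - W.T @ mu + mu.sum() * (Sigma_u @ beta)) / n
    return loss, grad

def poisson_me_estimator(W, Y, Sigma_u, lam, radius_l1, radius_l2,
                         eta_step=1.0, n_iter=300):
    """Composite gradient descent with l1-ball and l2-ball projections (sketch)."""
    beta = np.zeros(W.shape[1])
    for _ in range(n_iter):
        _, grad = corrected_poisson_loss_grad(beta, W, Y, Sigma_u)
        beta = soft_threshold(beta - grad / eta_step, lam / eta_step)
        beta = project_l1_ball(beta, radius_l1)          # ||beta||_1 <= c_p * sqrt(s)
        norm2 = np.linalg.norm(beta)
        if norm2 > radius_l2:                            # ||beta||_2 <= c_p
            beta = beta * radius_l2 / norm2
    return beta
```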

3.2. Generalized Matrix Uncertainty Selector

The method in [74] is only designed for high-dimensional Poisson models with measurement errors. To develop a method that is applicable to generalized linear models, Sørensen et al. [68] drew on the idea of MU selector and proposed the generalized matrix uncertainty (GMU) selector for high-dimensional generalized linear models with additive measurement errors.
Consider a generalized linear model with response variable Y distributed according to
$$ f_Y(y; \theta, \phi) = \exp\!\left\{\frac{y\theta - b(\theta)}{a(\phi)} + c(y, \phi)\right\}, $$
where θ = X T β 0 , X R p are the covariates. The expected response is given by the mean function μ ( θ ) = b ( θ ) , and Taylor expansion of the mean function μ ( X i T β 0 ) at point W i T β 0 is
$$ \mu(X_i^T\beta_0) = \sum_{\ell=0}^{\infty} \frac{\mu^{(\ell)}(W_i^T\beta_0)}{\ell!}\,(-U_i^T\beta_0)^{\ell}, \tag{39} $$
where μ ( ) ( · ) is the th derivative of function μ ( · ) . With Taylor expansion (39) of the mean function, the generalized matrix uncertainty selector can be defined as
$$ \hat\beta_{\mathrm{GMU}}^{L} = \arg\min_{\beta} \left\{\|\beta\|_1: \beta \in \Theta_L\right\}, \qquad \Theta_L = \left\{\beta \in \mathbb{R}^p: \max_{1\le j\le p}\left|\frac{1}{n}\sum_{i=1}^n w_{ij}\left[Y_i - \mu(W_i^T\beta)\right]\right| \le \lambda + \sum_{\ell=1}^{L}\frac{\delta^{\ell}}{\ell!\sqrt{n}}\|\beta\|_1^{\ell}\,\big\|\mu^{(\ell)}(W\beta)\big\|_2\right\}, \tag{40} $$
where $\delta$ is a positive parameter bounding the magnitude of the measurement errors, $\|U\|_\infty \le \delta$, and $\mu^{(\ell)}(W\beta) = [\mu^{(\ell)}(W_1^T\beta), \ldots, \mu^{(\ell)}(W_n^T\beta)]^T$.
In practice, Sørensen et al. [68] recommended using L = 1 for computational convenience and demonstrated that the first-order approximation produces satisfactory results.
To solve the optimization problem (40) and obtain the estimator $\hat\beta_{\mathrm{GMU}}^{L}$, we can utilize an iterative reweighting algorithm, whose main iteration step is as follows
$$ \hat\beta_{\mathrm{GMU}}^{(k+1)} = \arg\min_{\beta} \left\{\|\beta\|_1: \left\|\frac{1}{n}\tilde W_g^{(k)T}\big(\tilde z^{(k)} - \tilde W_g^{(k)}\beta\big)\right\|_\infty \le \lambda + \sum_{\ell=1}^{L}\frac{\delta^{\ell}}{\ell!\sqrt{n}}\|\beta\|_1^{\ell}\,\big\|V^{(\ell,k)}\big\|_2\right\}, \tag{41} $$
where $\tilde W_g \in \mathbb{R}^{n\times p}$ is the weighted error-prone covariate matrix with elements $\tilde w_{g,ij}^{(k)} = w_{ij}V_i^{(1,k)}$, and $\tilde z^{(k)} \in \mathbb{R}^n$ is the working response vector with elements $\tilde z_i^{(k)} = z_i^{(k)}V_i^{(1,k)}$,
$$ z_i^{(k)} = W_i^T\hat\beta_{\mathrm{GMU}}^{(k)} + \left[Y_i - \mu\big(W_i^T\hat\beta_{\mathrm{GMU}}^{(k)}\big)\right]\left[\mu'\big(W_i^T\hat\beta_{\mathrm{GMU}}^{(k)}\big)\right]^{-1}, \qquad i = 1, \ldots, n, $$
and
$$ V^{(\ell,k)} = \left[\mu^{(\ell)}\big(W_1^T\hat\beta_{\mathrm{GMU}}^{(k)}\big), \ldots, \mu^{(\ell)}\big(W_n^T\hat\beta_{\mathrm{GMU}}^{(k)}\big)\right]^T = \left[V_1^{(\ell,k)}, \ldots, V_n^{(\ell,k)}\right]^T, \qquad \ell = 1, \ldots, L, $$
is the weight vector in Taylor expansion with L terms. When L = 1 is applied, it is easy to verify that (41) is a linear program. For more details about the algorithm, please see [68,97]. However, Sørensen et al. [68] did not establish any asymptotic properties of GMU selector.

4. Hypothesis Testing Methods

The aforementioned works on high-dimensional measurement error models mainly investigate estimation problems and numerical algorithms of optimization problems, as well as the theoretical properties of estimators. Recently, some works have studied the hypothesis testing problems for high-dimensional measurement error regression models, which will be introduced in this section.

4.1. Corrected Decorrelated Score Test

The above methods are proposed under the setting that all covariates are corrupted. In practice, it is common that not all covariates are measured with errors. Thus, Li et al. [86] investigated high-dimensional measurement error models where a fixed number of covariates contain measurement errors and proposed statistical inference methods for the regression coefficients corresponding to these covariates.
Consider the following high-dimensional linear model with one of the covariates containing additive errors
$$ y_i = \beta_0X_i + \gamma_0^TZ_i + \varepsilon_i, \qquad W_i = X_i + U_i, \qquad i = 1, \ldots, n, $$
where $X_i \in \mathbb{R}$ is an unobservable covariate, $W_i$ is its error-prone surrogate, and $Z_i \in \mathbb{R}^{p-1}$ is a precisely observed covariate vector. The measurement error $U_i$ follows a sub-Gaussian distribution with mean zero and variance $\sigma_U^2$, and $U_i$ is independent of $(X_i, Z_i, \varepsilon_i)$. Denote $y = (y_1, \ldots, y_n)^T$, $X = (X_1, \ldots, X_n)^T$, $W = (W_1, \ldots, W_n)^T$ and $Z = (Z_1, \ldots, Z_n)^T$. The aim of this subsection is to test the hypothesis
$$ H_0: \beta_0 = \beta^* \quad \text{versus} \quad H_1: \beta_0 \neq \beta^* \qquad (\beta^* \in \mathbb{R}), $$
and construct a confidence interval for β 0 under high-dimensional settings.
Since we are only concerned with inference on the parameter $\beta$, the parameter $\gamma$ is regarded as a nuisance. Following the idea in [81], Li et al. [86] defined the corrected score function as
$$ S^{\theta}(\theta) = \hat\Sigma\theta - \hat\rho = \frac{1}{n}\sum_{i=1}^n S_i^{\theta}(\theta) = \begin{pmatrix} S^{\beta}(\beta, \gamma) \\ S^{\gamma}(\beta, \gamma) \end{pmatrix} = \begin{pmatrix} \hat\Sigma_{11}\beta + \hat\Sigma_{12}\gamma - \hat\rho_1 \\ \hat\Sigma_{21}\beta + \hat\Sigma_{22}\gamma - \hat\rho_2 \end{pmatrix}, $$
where θ = ( β , γ T ) T ,
$$ \hat\Sigma = \begin{pmatrix} \hat\Sigma_{11} & \hat\Sigma_{12} \\ \hat\Sigma_{21} & \hat\Sigma_{22} \end{pmatrix} = \begin{pmatrix} W^TW/n - \sigma_U^2 & W^TZ/n \\ Z^TW/n & Z^TZ/n \end{pmatrix} \quad \text{and} \quad \hat\rho = \begin{pmatrix} \hat\rho_1 \\ \hat\rho_2 \end{pmatrix} = \begin{pmatrix} W^Ty/n \\ Z^Ty/n \end{pmatrix} $$
are consistent estimators of Σ = ( X , Z ) T ( X , Z ) / n and ρ = ( X , Z ) T y / n , respectively. The corrected score covariance matrix is defined as
$$ I(\theta) = E\!\left[S_i^{\theta}(\theta)S_i^{\theta}(\theta)^T\right] = \begin{pmatrix} I_{\beta\beta} & I_{\beta\gamma} \\ I_{\gamma\beta} & I_{\gamma\gamma} \end{pmatrix}. $$
To conduct statistical inference on the target parameter β , it is crucial to eliminate the influence of nuisance parameter γ . Thus, Li et al. [86] developed the corrected decorrelated score function for target parameter β as
$$ S(\beta, \gamma) = S^{\beta}(\beta, \gamma) - \omega^TS^{\gamma}(\beta, \gamma), $$
where $\omega^T = I_{\beta\gamma}I_{\gamma\gamma}^{-1} = E(X_iZ_i^T)\left[E(Z_iZ_i^T)\right]^{-1}$. It is easy to verify that $E[S(\beta_0, \gamma_0)S^{\gamma}(\beta_0, \gamma_0)] = 0$, which indicates that $S(\beta, \gamma)$ and the nuisance score function $S^{\gamma}(\beta, \gamma)$ are uncorrelated. We also obtain $\mathrm{Var}[S(\beta, \gamma)] = I_{\beta\beta} - I_{\beta\gamma}I_{\gamma\gamma}^{-1}I_{\gamma\beta} =: \sigma_{\beta\gamma}^2$. Then, Li et al. [86] constructed the test statistic and the confidence interval for $\beta_0$ based on the estimated decorrelated score function. This statistical inference procedure is summarized as follows.
Step 1. Apply the CoCoLasso estimation method in [63] to compute an initial estimator $\tilde\theta = (\tilde\beta, \tilde\gamma^T)^T$, and utilize the following Dantzig-type estimator to estimate $\omega$:
$$ \hat\omega = \arg\min_{\omega} \|\omega\|_1, \quad \text{s.t.}\ \ \|\hat\Sigma_{12} - \omega^T\hat\Sigma_{22}\|_\infty \le \lambda, $$
where λ = O ( log p / n ) .
Step 2. Estimate the decorrelated score function by
$$ \hat S(\beta, \tilde\gamma) = S^{\beta}(\beta, \tilde\gamma) - \hat\omega^TS^{\gamma}(\beta, \tilde\gamma), $$
and calculate the test statistic $\hat T = \sqrt{n}\,\hat S(\beta^*, \tilde\gamma)\,(\hat\sigma_{\beta\gamma, H_0}^2)^{-1/2}$, where
$$ \hat\sigma_{\beta\gamma, H_0}^2 = \left.\left(\hat I_{\beta\beta} - \hat\omega^T\hat I_{\gamma\beta}\right)\right|_{\beta = \beta^*} = (\hat\sigma_{\varepsilon, H_0}^2 + \beta^{*2}\sigma_U^2)(1 - \hat\omega^T\hat\Sigma_{21}) + \beta^{*2}E(U_i^4) + \hat\sigma_{\varepsilon, H_0}^2\sigma_U^2 - \beta^{*2}\sigma_U^4. $$
Step 3. Estimate β as
$$ \hat\beta = \tilde\beta - \hat S(\tilde\theta)\big/\big(\hat\Sigma_{11} - \hat\omega^T\hat\Sigma_{21}\big), $$
and construct ( 1 α ) 100 % confidence interval for β 0 as
$$ \left[\hat\beta - u_{1-\alpha/2}\sqrt{\hat\sigma_\beta^2/n},\ \ \hat\beta + u_{1-\alpha/2}\sqrt{\hat\sigma_\beta^2/n}\right], $$
where u 1 α / 2 is the ( 1 α / 2 ) quantile of standard normal distribution,
$$ \hat\sigma_\beta^2 = (1 - \hat\omega^T\hat\Sigma_{21})^{-2}\left[(\hat\sigma_\varepsilon^2 + \hat\beta^2\sigma_U^2)(1 - \hat\omega^T\hat\Sigma_{21}) + \hat\beta^2E(U_i^4) + \hat\sigma_\varepsilon^2\sigma_U^2 - \hat\beta^2\sigma_U^4\right] $$
is the estimator of the asymptotic variance $\sigma_\beta^2$ of $\hat\beta$, and $\hat\sigma_\varepsilon^2 = n^{-1}\sum_{i=1}^n(y_i - \hat\beta W_i - \tilde\gamma^TZ_i)^2 - \hat\beta^2\sigma_U^2$ is the estimator of the variance $\sigma_\varepsilon^2$ of $\varepsilon_i$.
Note that the methods used to estimate $\theta$ and $\omega$ in Step 1 can vary, as long as the corresponding estimators are consistent; see more discussion in [86]. Li et al. [86] showed that, under some regularity conditions,
$$ \sqrt{n}\,\hat S(\beta^*, \tilde\gamma)\,(\hat\sigma_{\beta\gamma, H_0}^2)^{-1/2} \xrightarrow{d} N(0, 1) \quad \text{as } n \to \infty. $$
Further, the asymptotic normality of the test statistic T ^ n at local alternatives was also established in [86] without any additional condition. Li et al. [86] also constructed the asymptotic confidence interval for target parameter β in Step 3 based on the asymptotic normality of β ^ , which is given as follows
$$ \sqrt{n}(\hat\beta - \beta_0) = -\left[E\!\left(\left.\frac{\partial S(\beta, \gamma_0)}{\partial\beta}\right|_{\beta=\beta_0}\right)\right]^{-1}\sqrt{n}\,S(\beta_0, \gamma_0) + o_P(1) \xrightarrow{d} N(0, \sigma_\beta^2) \quad \text{as } n \to \infty, $$
where $\sigma_\beta^2 = \left[E(X_i^2) - \omega^TE(X_iZ_i)\right]^{-2}\sigma_{\beta\gamma,0}^2$, and
$$ \sigma_{\beta\gamma,0}^2 = (\sigma_\varepsilon^2 + \beta_0^2\sigma_U^2)\left[1 - \omega^TE(X_iZ_i)\right] + \beta_0^2E(U_i^4) + \sigma_\varepsilon^2\sigma_U^2 - \beta_0^2\sigma_U^4. $$
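A minimal sketch of the test-statistic computation in Steps 1-3, assuming an initial CoCoLasso-type fit (beta_init, gamma_init) and an estimate omega_hat are already available (for instance from the CoCoLasso sketch above together with any Dantzig-selector solver). The fourth moment of U is taken as $3\sigma_U^4$, i.e., Gaussian measurement error, purely for illustration.

```python
import numpy as np
from scipy.stats import norm

def corrected_decorrelated_score_test(W, Z, y, sigma_u2, beta_star,
                                      gamma_init, omega_hat, alpha=0.05):
    """Sketch of the corrected decorrelated score test of Li et al. [86]."""
    n = len(y)
    S11 = W @ W / n - sigma_u2            # corrected entries of Sigma_hat
    S12 = W @ Z / n                       # length p-1
    S21 = Z.T @ W / n
    rho1 = W @ y / n
    rho2 = Z.T @ y / n

    # Decorrelated score at (beta_star, gamma_init).
    S_beta = S11 * beta_star + S12 @ gamma_init - rho1
    S_gamma = S21 * beta_star + (Z.T @ Z / n) @ gamma_init - rho2
    S_dec = S_beta - omega_hat @ S_gamma

    # Plug-in variance under H0 (Gaussian errors assumed: E U^4 = 3 sigma_u2^2).
    resid = y - beta_star * W - Z @ gamma_init
    sig_eps2 = np.mean(resid ** 2) - beta_star ** 2 * sigma_u2
    EU4 = 3.0 * sigma_u2 ** 2
    a = 1.0 - omega_hat @ S21
    var_H0 = (sig_eps2 + beta_star ** 2 * sigma_u2) * a \
             + beta_star ** 2 * EU4 + sig_eps2 * sigma_u2 - beta_star ** 2 * sigma_u2 ** 2

    T_hat = np.sqrt(n) * S_dec / np.sqrt(var_H0)
    p_value = 2 * (1 - norm.cdf(abs(T_hat)))   # two-sided test
    return T_hat, p_value
```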

4.2. Wald and Score Tests for Poisson Models

In addition to linear models, researchers have made some progress on hypothesis testing problems for Poisson models. Jiang et al. [88] studied hypothesis testing problems for high-dimensional Poisson measurement error models, and they proposed Wald and score tests for the linear function of regression coefficients.
Consider the following hypothesis test
$$ H_0: C\beta_{0M} = b \quad \text{versus} \quad H_1: C\beta_{0M} = b + h_n \ \text{ for some } h_n \in \mathbb{R}^r, $$
where $C \in \mathbb{R}^{r\times m}$ is a matrix with $r \le m$, and $\beta_{0M} \in \mathbb{R}^{m}$ is the subvector of the true regression coefficient vector $\beta_0 = (\beta_{01}, \ldots, \beta_{0p})^T$ formed by $\beta_{0j}$ ($j \in M$). To construct a valid test statistic, Jiang et al. [88] drew on the estimation method in [74] and suggested estimating the regression coefficients under the null hypothesis by
$$ \hat\beta_{\mathrm{pn}} = \arg\min_{\|\beta\|_1 \le R_1,\ \|\beta\|_2 \le R_2} \left\{L(\beta) + \|p_\lambda(\beta_{M^c})\|_1\right\}, \quad \text{s.t.}\ \ C\beta_M = b, \tag{42} $$
where p λ ( · ) is a penalty function, and L ( β ) is defined in (37). Similarly, the following estimator of β 0 can be considered without assuming the null hypothesis
$$ \hat\beta_{\mathrm{pw}} = \arg\min_{\|\beta\|_1 \le R_1,\ \|\beta\|_2 \le R_2} \left\{L(\beta) + \|p_\lambda(\beta_{M^c})\|_1\right\}. \tag{43} $$
The estimators $\hat\beta_{\mathrm{pn}}$ and $\hat\beta_{\mathrm{pw}}$ can be obtained by the ADMM algorithm; see more details in [88]. The optimization problems (42) and (43) differ from the method in (36) in that no penalty is imposed on the components of the target parameter $\beta_M$, so as to avoid shrinking them to zero. Then, based on the above estimators of $\beta_0$, Jiang et al. [88] proposed the following score statistic and Wald statistic to test whether $C\beta_{0M} = b$:
$$ T_S = n\left[\frac{\partial L(\hat\beta)}{\partial\beta_{M\cup S}}\right]^T A^T\,\Psi^{-1}(\hat\Sigma_r, \hat Q, \hat\beta)\,A\left[\frac{\partial L(\hat\beta)}{\partial\beta_{M\cup S}}\right], \qquad T_W = n\,(C\hat\beta_{\mathrm{pw},M} - b)^T\,\Psi(\hat\Sigma_r, \hat Q, \hat\beta_{\mathrm{pw}})^{-1}(C\hat\beta_{\mathrm{pw},M} - b), $$
where $A = C\left[I_{m\times m},\ 0_{m\times k}\right]\hat Q_{M\cup S, M\cup S}^{-1}(\hat\beta)$,
$$ \Psi(\Sigma, Q, \beta) \equiv C\left[I_{m\times m},\ 0_{m\times k}\right]Q_{M\cup S, M\cup S}^{-1}(\beta)\,\Sigma_{M\cup S, M\cup S}(\beta)\,Q_{M\cup S, M\cup S}^{-1}(\beta)\left[I_{m\times m},\ 0_{m\times k}\right]^TC^T, $$
Σ ^ r ( β ) and Q ^ ( β ) are estimators of Σ r ( β ) and Q ( β ) = E exp β T X X X T respectively, and
$$ \Sigma_r(\beta) = E\!\left[\left\{Y_iW_i - \exp\!\left(\beta^TW_i - \beta^T\Sigma_u\beta/2\right)(W_i - \Sigma_u\beta)\right\}^{\otimes 2}\right] $$
is the covariance of the residuals.
Jiang et al. [88] established the consistency of $\hat\beta_{\mathrm{pn}}$ and $\hat\beta_{\mathrm{pw}}$ with $\lambda$ of order larger than $(\log p/n)^{1/4}$, $m = o(\{\log p/n\}^{-1/2})$ and $s = o(\{\log p/n\}^{-1/2})$. Further, the asymptotic distributions of the two test statistics were established: as $n \to \infty$,
$$ T_S \xrightarrow{d} \chi^2\!\left(r,\ n h_n^T\Psi^{-1}(\Sigma, Q, \beta_t)h_n\right), \qquad T_W \xrightarrow{d} \chi^2\!\left(r,\ n h_n^T\Psi^{-1}(\Sigma, Q, \beta_t)h_n\right). $$
Thus, at nominal significance level $\alpha > 0$, we reject the null hypothesis if $T_S > \chi^2_{1-\alpha}$ for the score test, and reject it if $T_W > \chi^2_{1-\alpha}$ for the Wald test, where $\chi^2_{1-\alpha}$ is the $(1-\alpha)$ quantile of the chi-square distribution with $r$ degrees of freedom.

5. Screening Methods

As the dimension of data becomes higher and higher, we often encounter ultrahigh-dimensional data. For the ultrahigh-dimensional models, we frequently reduce dimension using variable screening techniques and then apply other estimation or hypothesis testing methods. The variable screening technique SIS [49] designed for ultrahigh-dimensional clean data has achieved great success and has been extended to various settings. SIS screens the variables according to the magnitudes of their marginal correlations with the response variable. Nghiem et al. [89] drew inspiration from the ideas of SIS in [49] and marginal bridge estimation in [98], and proposed the corrected sure independence screening (SISc) method and corrected penalized marginal screening method (PMSc). Consider the following optimization problem
$$ \tilde\beta_{\mathrm{sc}} = \arg\min_{\beta} L(\beta) = \arg\min_{\beta} \sum_{j=1}^p L_j(\beta_j) = \arg\min_{\beta} \sum_{j=1}^p\left[\frac{1}{n}\sum_{i=1}^n (y_i - w_{ij}\beta_j)^2 - \sigma_{u,j}^2\beta_j^2 + p_\lambda(\beta_j)\right], \tag{44} $$
where p λ ( · ) is a penalty function, and the bridge penalty is adopted in [89]. Based on (44), Nghiem et al. [89] proposed PMSc and SISc methods. For PMSc method, it suggested taking the selected submodel as
$$ \hat S_{\mathrm{PMSc}} = \left\{j: \tilde\beta_{\mathrm{sc},j} \neq 0\right\}. $$
Under some regularity conditions, Nghiem et al. [89] showed that P ( S S ^ PMSc ) 1 . Furthermore, when λ = 0 , we can obtain that
$$ \tilde\beta_{\mathrm{sc},j} = \frac{\sum_{i=1}^n w_{ij}y_i}{\sum_{i=1}^n w_{ij}^2 - n\sigma_{u,j}^2}, \qquad j = 1, \ldots, p, $$
which measures the marginal correlation between the jth variable and the response variable. The SISc selects the variable according to the magnitude of β ˜ sc , j . The corresponding selected set is
$$ \hat S_{\mathrm{SISc}} = \left\{1 \le j \le p: |\tilde\beta_{\mathrm{sc},j}| \text{ is among the } d \text{ largest of all}\right\}. $$
Nghiem et al. [89] proved that $P(S \subseteq \hat S_{\mathrm{SISc}}) = 1 - O\{p\exp(-Cn)\}$ for some constant $C > 0$ under some regularity conditions.
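A minimal sketch of SISc screening under additive errors: compute the corrected marginal coefficients in closed form, as in the display above, and keep the d covariates with the largest absolute values. The screening size d (for example, of the order n / log n, as commonly used for SIS-type methods) is an illustrative choice.

```python
import numpy as np

def sisc_screening(W, y, sigma_u2_diag, d):
    """Corrected sure independence screening (SISc) sketch.

    W            : (n, p) observed error-prone design
    y            : (n,) response
    sigma_u2_diag: (p,) known measurement error variances sigma_{u,j}^2
    d            : number of covariates to retain
    """
    n = W.shape[0]
    numer = W.T @ y                                     # sum_i w_ij y_i
    denom = (W ** 2).sum(axis=0) - n * sigma_u2_diag    # sum_i w_ij^2 - n sigma_{u,j}^2
    beta_sc = numer / denom                             # corrected marginal coefficients
    # (The correction may make the denominator small for very noisy covariates.)
    keep = np.argsort(-np.abs(beta_sc))[:d]             # indices of the d largest |beta_sc_j|
    return np.sort(keep), beta_sc
```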

6. Conclusions

With the advent of big data era, high-dimensional measurement error data have proliferated in various fields. Over the past few years, many statistical inference methods for high-dimensional measurement error regression models have been developed to overcome the difficulties in scientific research and provide effective approaches for tackling problems in applications. This paper reviews the research advances in estimation and hypothesis testing methods for high-dimensional measurement error models, as well as variable screening methods for ultrahigh-dimensional measurement error models. Due to the prevalence of high-dimensional measurement error data in daily life and the growing demand for the statistical inference methods of measurement error regression models in applications, the related research is still one of the crucial aspects in statistical research. At present, the statistical inference methods and the theoretical system of high-dimensional measurement error models are far from complete. Further research in this area includes the following aspects.
  • Existing estimation methods for high-dimensional measurement error regression models are mainly for linear or generalized linear models. Therefore, it is urgent to develop estimation methods for nonlinear models with high-dimensional measurement error data such as nonparametric and semiparametric models.
  • Existing works mainly focus on independent and identically distributed data. It is worthwhile to extend the estimation and hypothesis testing methods to measurement error models with complex data such as panel data and functional data.
  • In most studies of high-dimensional measurement error models, it is assumed that the covariance structure of the measurement errors is specific or that the covariance matrix of measurement errors is known. Thus, it is a challenging problem to develop estimation and hypothesis testing methods in the case that the covariance matrix of measurement errors is completely unknown.

Author Contributions

Conceptualization, G.L.; methodology, J.L.; validation, L.Y.; formal analysis, G.L.; investigation, J.L.; writing—original draft preparation, J.L.; writing—review and editing, G.L. and L.Y.; supervision, G.L.; project administration, G.L.; funding acquisition, G.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Natural Science Foundation of China (grant numbers: 12271046, 11971001, 12131006 and 12001277).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
SIMEX               Simulation-extrapolation
SCAD Smoothly clipped absolute deviation
SICA Smooth integration of counting and absolute deviation
MCP Minimax concave penalty
SIS Sure independence screening
CoCoLasso Convex conditioned Lasso
CaZnRLS Calibrated zero-norm regularized least squares
MU Matrix uncertainty
MEBoost Measurement error boosting
SIMSELEX Simulation-selection-extrapolation
IRO Imputation-regularized optimization
FDR False discovery rate
PMSc Corrected penalized marginal screening
SISc Corrected sure independence screening
ADMM Alternating direction method of multipliers
BDCoCoLasso Block coordinate descent convex conditioned Lasso
MPEC Mathematical program with equilibrium constraints
GEP–MSCRA Multi-stage convex relaxation approach
GMU Generalized matrix uncertainty

References

  1. Liang, H.; Härdle, W.; Carroll, R.J. Estimation in a semiparametric partially linear errors-in-variables model. The Annals of Statistics 1999, 27(5), 1519–1535. [Google Scholar] [CrossRef]
  2. Cook, J.; Stefanski, L.A. Simulation-extrapolation estimation in parametric measurement error models. Journal of the American Statistical Association 1994, 89(428), 1314–1328. [Google Scholar] [CrossRef]
  3. Carroll, R.J.; Lombard, F.; Kuchenhoff, H.; Stefanski, L.A. Asymptotics for the SIMEX estimator in structural measurement error models. Journal of the American Statistical Association 1996, 91(433), 242–250. [Google Scholar] [CrossRef]
  4. Fan, J.Q.; Truong, Y.K. Nonparametric regression with errors in variables. The Annals of Statistics 1993, 21(4), 1900–1925. [Google Scholar] [CrossRef]
  5. Cui, H.J.; Chen, S.X. Empirical likelihood confidence region for parameter in the errors-in-variables models. Journal of Multivariate Analysis 2003, 84(1), 101–115. [Google Scholar] [CrossRef]
  6. Cui, H.J.; Kong, E.F. Empirical likelihood confidence region for parameters in semi-linear errors-in-variables models. Scandinavian Journal of statistics 2006, 33(1), 153–168. [Google Scholar] [CrossRef]
  7. Cheng, C.L.; Tsai, J.R.; Schneeweiss, H. Polynomial regression with heteroscedastic measurement errors in both axes: estimation and hypothesis testing. Statistical Methods in Medical Research 2019, 28(9), 2681–2696. [Google Scholar] [CrossRef]
  8. He, X.M.; Liang, H. Quantile regression estimates for a class of linear and partially linear errors-in-variables models. Statistica Sinica 2000, 10, 129–140. [Google Scholar]
  9. Carroll, R.J.; Delaigle, A.; Hall, P. Nonparametric prediction in measurement error models. Journal of the American Statistical Association 2009, 104(487), 993–1003. [Google Scholar] [CrossRef]
  10. Jeon, J.M.; Park, B.U.; Keilegom, I.V. Nonparametric regression on lie groups with measurement errors. The Annals of Statistics 2022, 50(5), 2973–3008. [Google Scholar] [CrossRef]
  11. Chen, L.P.; Yi, G.Y. Model selection and model averaging for analysis of truncated and censored data with measurement error. Electronic Journal of Statistics 2020, 14(2), 4054–4109. [Google Scholar] [CrossRef]
  12. Shi, P.X.; Zhou, Y.C.; Zhang, A.R. High-dimensional log-error-in-variable regression with applications to microbial compositional data analysis. Biometrika 2022, 109(2), 405–420. [Google Scholar] [CrossRef]
  13. Li, B.; Yin, X.R. On surrogate dimension reduction for measurement error regression: an invariance law. The Annals of Statistics 2007, 35(5), 2143–2172. [Google Scholar] [CrossRef]
  14. Staudenmayer, J.; Buonaccorsi, J.P. Measurement error in linear autoregressive models. Journal of the American Statistical Association 2005, 100(471), 841–852. [Google Scholar] [CrossRef]
  15. Wei, Y.; Carroll, R.J. Quantile regression with measurement error. Journal of the American Statistical Association 2009, 104(487), 1129–1143. [Google Scholar] [CrossRef] [PubMed]
  16. Liang, H.; Li, R.Z. Variable selection for partially linear models with measurement errors. Journal of the American Statistical Association 2009, 104(485), 234–248. [Google Scholar] [CrossRef]
  17. Hall, P.; Ma, Y.Y. Estimation in a semiparametric partially linear errors-in-variables model. The Annals of Statistics 2007, 35(6), 2620–2638. [Google Scholar]
  18. Hall, P.; Ma, Y.Y. Semiparametric estimators of functional measurement error models with unknown error. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 2007, 69, 429–446. [Google Scholar] [CrossRef]
  19. Ma, Y.Y.; Carroll, R.J. Locally efficient estimators for semiparametric models with measurement error. Journal of the American Statistical Association 2006, 101(476), 1465–1474. [Google Scholar] [CrossRef]
  20. Ma, Y.Y.; Li, R.Z. Variable selection in measurement error models. Bernoulli 2010, 16(1), 274–300. [Google Scholar] [CrossRef]
  21. Ma, Y.Y.; Hart, J.D.; Janicki, R.; Carroll, R.J. Local and omnibus goodness-of-fit tests in classical measurement error models. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 2011, 73, 81–98. [Google Scholar] [CrossRef] [PubMed]
  22. Wang, L.Q. Estimation of nonlinear models with Berkson measurement errors. The Annals of Statistics 2004, 32(6), 2559–2579. [Google Scholar] [CrossRef]
  23. Nghiem, L.H.; Byrd, M.C.; Potgieter, C.J. Estimation in linear errors-in-variables models with unknown error distribution. Biometrika 2020, 107(4), 841–856. [Google Scholar] [CrossRef]
  24. Pan, W.Q.; Zeng, D.L.; Lin, X.H. Estimation in semiparametric transition measurement error models for longitudinal data. Biometrics 2009, 65(3), 728–736. [Google Scholar] [CrossRef] [PubMed]
  25. Zhang, J.; Zhou, Y. Calibration procedures for linear regression models with multiplicative distortion measurement errors. Brazilian Journal of Probability and Statistics 2020, 34(3), 519–536. [Google Scholar] [CrossRef]
  26. Zhang, J. Estimation and variable selection for partial linear single-index distortion measurement errors models. Statistical Papers 2021, 62, 887–913. [Google Scholar] [CrossRef]
  27. Wang, L.Q.; Hsiao, C. Method of moments estimation and identifiability of semiparametric nonlinear errors-in-variables models. Journal of Econometrics 2011, 165, 30–44. [Google Scholar] [CrossRef]
  28. Schennach, S.M.; Hu, Y.Y. Nonparametric identification and semiparametric estimation of classical measurement error models without side information. Journal of the American Statistical Association 2013, 108(501), 177–186. [Google Scholar] [CrossRef]
  29. Zhang, X.Y.; Ma, Y.Y.; Carroll, R.J. MALMEM: model averaging in linear measurement error models. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 2019, 81, 763–779. [Google Scholar] [CrossRef]
  30. Carroll, R.J.; Ruppert, D.; Stefanski, L.A.; Crainiceanu, C.M. Measurement Error in Nonlinear Models, 2nd ed.; Chapman and Hall: New York, America, 2006. [Google Scholar]
  31. Cheng, C.L.; Van Ness, J.W. Statistical Regression With Measurement Error; Oxford University Press: New York, America, 1999. [Google Scholar]
  32. Fuller, W.A. Measurement Error Models; John Wiley & Sons: New York, America, 1987. [Google Scholar]
  33. Li, G.R.; Zhang, J.; Feng, S.Y. Modern Measurement Error Models; Science Press: Beijing, China, 2016. [Google Scholar]
  34. Yi, G.Y. Statistical Analysis with Measurement Error or Misclassification; Springer: New York, America, 2017. [Google Scholar]
  35. Yi, G.Y.; Delaigle, A.; Gustafson, P. Handbook of Measurement Error Models; Chapman and Hall: New York, America, 2021. [Google Scholar]
  36. Tibshirani, R. Regression shrinkage and selection via the Lasso. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 1996, 58, 267–288. [Google Scholar] [CrossRef]
  37. Fan, J.Q.; Li, R.Z. Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association 2001, 96(456), 1348–1360. [Google Scholar] [CrossRef]
  38. Zou, H.; Hastie, T. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 2005, 67, 301–320. [Google Scholar] [CrossRef]
  39. Zou, H. The adaptive Lasso and its oracle properties. Journal of the American Statistical Association 2006, 101(476), 1418–1429. [Google Scholar] [CrossRef]
  40. Candès, E.J.; Tao, T. The Dantzig selector: statistical estimation when p is much larger than n. The Annals of Statistics 2007, 35(6), 2313–2351. [Google Scholar]
  41. Lv, J.C.; Fan, Y.Y. A unified approach to model selection and sparse recovery using regularized least squares. The Annals of Statistics 2009, 37(6A), 3498–3528. [Google Scholar] [CrossRef]
  42. Zhang, C.-H. Nearly unbiased variable selection under minimax concave penalty. The Annals of Statistics 2010, 38(2), 894–942. [Google Scholar] [CrossRef] [PubMed]
  43. Fan, J.Q.; Lv, J.C. A selective overview of variable selection in high dimensional feature space. Statistica Sinica 2010, 20, 101–148. [Google Scholar]
  44. Wu, Y.N.; Wang, L. A survey of tuning parameter selection for high-dimensional regression. Annual Review of Statistics and Its Application 2020, 7, 209–226. [Google Scholar] [CrossRef]
  45. Kuchibhotla, A.K.; Kolassa, J.E.; Kuffner, T.A. Post-selection inference. Annual Review of Statistics and Its Application 2022, 9, 1–23. [Google Scholar] [CrossRef]
  46. Bühlmann, P.; van de Geer, S. Statistics for High-Dimensional Data: Methods, Theory and Applications; Springer-Verlag: Heidelberg, Germany, 2011. [Google Scholar]
  47. Hastie, T.; Tibshirani, R.; Wainwright, M. Statistical Learning with Sparsity: The Lasso and Generalizations; Taylor & Francis Group, CRC: Boca Raton, America, 2015. [Google Scholar]
  48. Fan, J.Q.; Li, R.Z.; Zhang, C.-H.; Zou, H. Statistical Foundations of Data Science; Chapman and Hall: Boca Raton, America, 2020. [Google Scholar]
  49. Fan, J.Q.; Lv, J.C. Sure independence screening for ultrahigh dimensional feature space. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 2008, 70, 849–911. [Google Scholar] [CrossRef]
  50. Barut, E.; Fan, J.Q.; Verhasselt, A. Conditional sure independence screening. Journal of the American Statistical Association 2016, 111(515), 1266–1277. [Google Scholar] [CrossRef] [PubMed]
  51. Fan, J.Q.; Song, R. Sure independence screening in generalized linear models with NP-dimensionality. The Annals of Statistics 2010, 38(6), 3567–3604. [Google Scholar] [CrossRef]
  52. Fan, J.Q.; Feng, Y.; Song, R. Nonparametric independence screening in sparse ultrahigh-dimensional additive models. Journal of the American Statistical Association 2011, 106(494), 544–557. [Google Scholar] [CrossRef] [PubMed]
  53. Li, G.R.; Peng, H.; Zhang, J.; Zhu, L.X. Robust rank correlation based screening. The Annals of Statistics 2012, 40(3), 1846–1877. [Google Scholar] [CrossRef]
  54. Ma, S.J.; Li, R.Z.; Tsai, C.L. Variable screening via quantile partial correlation. Journal of the American Statistical Association 2017, 112(518), 650–663. [Google Scholar] [CrossRef] [PubMed]
  55. Pan, W.L.; Wang, X.Q.; Xiao, W.N.; Zhu, H.T. A generic sure independence screening procedure. Journal of the American Statistical Association 2019, 114(526), 928–937. [Google Scholar] [CrossRef]
  56. Tong, Z.X.; Cai, Z.R.; Yang, S.S.; Li, R.Z. Model-free conditional feature screening with FDR control. Journal of the American Statistical Association 2022, in press. [Google Scholar] [CrossRef]
  57. Wen, C.H.; Pan, W.L.; Huang, M.; Wang, X.Q. Sure independence screening adjusted for confounding covariates with ultrahigh dimensional data. Statistica Sinica 2018, 28(1), 293–317. [Google Scholar]
  58. Wang, L.M.; Li, X.X.; Wang, X.Q.; Lai, P. Unified mean-variance feature screening for ultrahigh-dimensional regression. Computational Statistics 2022, 37, 1887–1918. [Google Scholar] [CrossRef]
  59. Zhao, S.F.; Fu, G.F. Distribution-free and model-free multivariate feature screening via multivariate rank distance correlation. Journal of Multivariate Analysis 2022, 192, Article 105081. [Google Scholar] [CrossRef]
  60. Slijepcevic, S.; Megerian, S.; Potkonjak, M. Location errors in wireless embedded sensor networks: sources, models, and effects on applications. Mobile Computing and Communications Review 2002. [Google Scholar]
  61. Purdom, E.; Holmes, S.P. Error distribution for gene expression data. Statistical Applications in Genetics and Molecular Biology 2005, 4(1), Article 16. [Google Scholar] [CrossRef] [PubMed]
  62. Loh, P.-L.; Wainwright, M.J. High-dimensional regression with noisy and missing data: provable guarantees with nonconvexity. The Annals of Statistics 2012, 40(3), 1637–1664. [Google Scholar] [CrossRef]
  63. Datta, A.; Zou, H. CoCoLasso for high-dimensional error-in-variables regression. The Annals of Statistics 2017, 45(6), 2400–2426. [Google Scholar] [CrossRef]
  64. Zheng, Z.M.; Li, Y.; Yu, C.X.; Li, G.R. Balanced estimation for high-dimensional measurement error models. Computational Statistics & Data Analysis 2018, 126, 78–91. [Google Scholar]
  65. Tao, T.; Pan, S.H.; Bi, S.J. Calibrated zero-norm regularized LS estimator for high-dimensional error-in-variables regression. Statistica Sinica 2021, 31(2), 909–933. [Google Scholar] [CrossRef]
  66. Rosenbaum, M.; Tsybakov, A. Sparse recovery under matrix uncertainty. The Annals of Statistics 2010, 38(5), 2620–2651. [Google Scholar] [CrossRef]
  67. Rosenbaum, M.; Tsybakov, A. Improved matrix uncertainty selector. From Probability to Statistics and Back: High-Dimensional Models and Processes 2013, 9, 276–290. [Google Scholar]
  68. Sørensen, Ø.; Hellton, K.H.; Frigessi, A.; Thoresen, M. Covariate selection in high-dimensional generalized linear models with measurement error. Journal of Computational and Graphical Statistics 2018, 27, 739–749. [Google Scholar]
  69. Sørensen, Ø.; Frigessi, A.; Thoresen, M. Measurement error in Lasso: impact and likelihood bias correction. Statistica Sinica 2015, 25, 809–829. [Google Scholar]
  70. Belloni, A.; Rosenbaum, M.; Tsybakov, A.B. Linear and conic programming estimators in high dimensional errors-in-variables models. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 2017, 79, 939–956. [Google Scholar] [CrossRef]
  71. Romeo, G.; Thoresen, M. Model selection in high-dimensional noisy data: a simulation study. Journal of Statistical Computation and Simulation 2019, 89(11), 2031–2050. [Google Scholar] [CrossRef]
  72. Brown, B.; Weaver, T.; Wolfson, J. Meboost: variable selection in the presence of measurement error. Statistics in Medicine 2019, 38, 2705–2718. [Google Scholar] [CrossRef] [PubMed]
  73. Nghiem, L.H.; Potgieter, C.J. Simulation-selection-extrapolation: estimation in high-dimensional errors-in-variables models. Biometrics 2019, 75, 1133–1144. [Google Scholar] [CrossRef]
  74. Jiang, F.; Ma, Y.Y. Poisson regression with error corrupted high dimensional features. Statistica Sinica 2022, 32, 2023–2046. [Google Scholar] [CrossRef]
  75. Byrd, M.; McGee, M. A simple correction procedure for high-dimensional generalized linear models with measurement error. arXiv preprint 2019, arXiv:1912.11740. [Google Scholar]
  76. Liang, F.M.; Jia, B.C.; Xue, J.N.; Li, Q.Z.; Luo, Y. An imputation–regularized optimization algorithm for high dimensional missing data problems and beyond. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 2018, 80, 899–926. [Google Scholar] [CrossRef]
  77. van de Geer, S.; Bühlmann, P.; Ritov, Y.; Dezeure, R. On asymptotically optimal confidence regions and tests for high-dimensional models. The Annals of Statistics 2014, 42(3), 1166–1202. [Google Scholar] [CrossRef]
  78. Zhang, C.-H.; Zhang, S.S. Confidence intervals for low dimensional parameters in high dimensional linear models. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 2014, 76, 217–242. [Google Scholar] [CrossRef]
  79. Ma, S.J.; Carroll, R.J.; Liang, H.; Xu, S.Z. Estimation and inference in generalized additive coefficient models for nonlinear interactions with high-dimensional covariates. The Annals of Statistics 2015, 43(5), 2102–2131. [Google Scholar] [CrossRef] [PubMed]
  80. Dezeure, R.; Bühlmann, P.; Meier, L.; Meinshausen, N. High-dimensional inference: confidence intervals, p-values and R-software hdi. Statistical Science 2015, 30(4), 533–558. [Google Scholar] [CrossRef]
  81. Ning, Y.; Liu, H. A general theory of hypothesis tests and confidence regions for sparse high dimensional models. The Annals of Statistics 2017, 45(1), 158–195. [Google Scholar] [CrossRef]
  82. Zhang, X.Y.; Cheng, G. Simultaneous inference for high-dimensional linear models. Journal of the American Statistical Association 2017, 112(518), 757–768. [Google Scholar] [CrossRef]
  83. Vandekar, S.N.; Reiss, P.T.; Shinohara, R.T. Interpretable high-dimensional inference via score projection with an application in neuroimaging. Journal of the American Statistical Association 2019, 114(526), 820–830. [Google Scholar] [CrossRef]
  84. Ghosh, S.; Tan, Z.Q. Doubly robust semiparametric inference using regularized calibrated estimation with high-dimensional data. Bernoulli 2022, 28(3), 1675–1703. [Google Scholar] [CrossRef]
  85. Belloni, A.; Chernozhukov, V.; Kaul, A. Confidence bands for coefficients in high dimensional linear models with error-in-variables. arXiv preprint 2017, arXiv:1703.00469. [Google Scholar]
  86. Li, M.Y.; Li, R.Z.; Ma, Y.Y. Inference in high dimensional linear measurement error models. Journal of Multivariate Analysis 2021, 184, Article 104759. [Google Scholar] [CrossRef]
  87. Huang, X.D.; Bao, N.N.; Xu, K.; Wang, G.P. Variable selection in high-dimensional error-in-variables models via controlling the false discovery proportion. Communications in Mathematics and Statistics 2022, 10, 123–151. [Google Scholar] [CrossRef]
  88. Jiang, F.; Zhou, Y.Q.; Liu, J.X.; Ma, Y.Y. On high dimensional Poisson models with measurement error: hypothesis testing for nonlinear nonconvex optimization. The Annals of Statistics 2023, 51(1), 233–259. [Google Scholar] [CrossRef]
  89. Nghiem, L.H.; Hui, F.K.C.; Müller, S.; Welsh, A.H. Screening methods for linear errors-in-variables models in high dimensions. Biometrics 2022, in press. [Google Scholar] [CrossRef]
  90. Duchi, J.; Shalev-Shwartz, S.; Singer, Y.; Chandra, T. Efficient projections onto the ℓ1-ball for learning in high dimensions. In Proceedings of the 25th International Conference on Machine Learning, Helsinki, Finland, July 2008. [Google Scholar]
  91. Agarwal, A.; Negahban, S.; Wainwright, M.J. Fast global convergence of gradient methods for high-dimensional statistical recovery. The Annals of Statistics 2012, 40(5), 2452–2482. [Google Scholar] [CrossRef]
  92. Chen, Y.D.; Caramanis, C. Noisy and missing data regression: distribution-oblivious support recovery. Journal of Machine Learning Research 2013, 28, 383–391. [Google Scholar]
  93. Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning 2011, 3(1), 1–122. [Google Scholar] [CrossRef]
  94. Efron, B.; Hastie, T.; Johnstone, I.; Tibshirani, R. Least angle regression. The Annals of Statistics 2004, 32(2), 407–499. [Google Scholar] [CrossRef]
  95. Friedman, J.; Hastie, T.; Tibshirani, R. Regularization paths for generalized linear models via coordinate descent. Journal of Statistical Software 2010, 33(1), 1–22. [Google Scholar] [CrossRef] [PubMed]
  96. Escribe, C.; Lu, T.Y.; Keller-Baruch, J.; Forgetta, V.; Xiao, B.W.; Richards, J.B.; Bhatnagar, S.; Oualkacha, K.; Greenwood, C.M.T. Block coordinate descent algorithm improves variable selection and estimation in error-in-variables regression. Genetic Epidemiology 2021, 45, 874–890. [Google Scholar] [CrossRef]
  97. James, G.M.; Radchenko, P. A generalized Dantzig selector with shrinkage tuning. Biometrika 2009, 96(2), 323–337. [Google Scholar] [CrossRef]
  98. Huang, J.; Horowitz, J.L.; Ma, S.G. Asymptotic properties of bridge estimators in sparse high-dimensional regression models. The Annals of Statistics 2008, 36(2), 587–613. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.