Model selection and model averaging have been popular approaches for handling modelling uncertainty. Fan and Li (2006) laid out a unified framework for variable selection via penalized likelihood. Tuning parameter selection is vital in the optimization problem for penalized estimators, as it determines whether consistent selection and optimal estimation are achieved. Since the OLS post-LASSO estimator of Belloni and Chernozhukov (2013), few studies have focused on the finite sample performance of the class of OLS post-penalty estimators under different tuning parameter selection approaches. We aim to supplement the existing model selection literature by studying this class of OLS post-selection estimators.
Inspired by the Shrinkage Averaging Estimator (SAE) of Schomaker (2012) and the Mallows Model Averaging (MMA) criterion of Hansen (2007), we further propose a Shrinkage Mallows Model Averaging (SMMA) estimator for averaging high-dimensional sparse models. Building on the Monte Carlo design of Wang et al. (2009), which features a sparse parameter space that expands with the sample size, our design further considers the effect of the effective sample size and the degree of model sparsity on the finite sample performance of model selection and model averaging estimators. From our data examples, we find that in finite samples the OLS post-SCAD(BIC) estimator outperforms most current penalized least squares estimators as long as the number of parameters does not exceed the sample size. In addition, the SMMA estimator performs better given sparser models, which supports its use when averaging high-dimensional sparse models.
Keywords:
Subject: Business, Economics and Management - Econometrics and Statistics
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.