We study how to distinguish the parameters of the sparse autoregressive (AR) process from zero using a non-convex penalized estimation. A class of non-convex penalties is considered that includes the smoothly clipped absolute deviation and minimax concave penalties as special examples. We prove that the penalized estimators achieve some standard theoretical properties, such as the weak and strong oracle properties, which have been proved in the sparse linear regression framework. The results hold when the maximal order of the AR process increases to infinity and the minimal size of the true non-zero parameters decreases toward zero as the sample size increases. Further, we construct a practical method to select tuning parameters using the generalized information criterion, whose minimizer asymptotically recovers the best theoretical non-penalized estimator of the sparse AR process. Simulation studies are given to confirm the theoretical results.

Keywords: autoregressive process, subset selection, non-convex penalty, oracle property, tuning parameter selection

The autoregressive (AR) process is a basic and important process for time series data analysis. The usual least squares estimation may yield severe modeling biases when the AR model is sparse, including zero parameters. Various information criteria (Akaike, 1969, 1973, 1979; Schwarz, 1978; Hannan and Quinn, 1979; Claeskens and Hjort, 2003) have been proposed to identify the true non-zero parameters of the AR process, which we call the subset selection problem in this paper. Theoretical properties, such as the asymptotic efficiency and selection consistency of the final sub-process chosen by these information criteria, have also been investigated (Shibata, 1976; Hannan and Quinn, 1979; Tsay, 1984; Claeskens and Hjort, 2003; Claeskens et al., 2007). Recently, Na (2017) introduced the generalized information criterion (GIC) (Kim et al., 2012) for the AR process, which includes most information criteria such as the Akaike information criterion (AIC) (Akaike, 1973), the Hannan-Quinn criterion (HQC) (Hannan, 1980), and the Bayesian information criterion (BIC) (Schwarz, 1978). Na (2017) proved that there is a large class of GICs that are selection consistent, including the BIC as an example. However, these approaches suffer from computational complexity since it is almost impossible to compare all the candidate sub-processes when the maximal order is very large (McClave, 1975; Sarkar and Kanjilal, 1995; Chen, 1999; McLeod and Zhang, 2006).

For years, penalized estimation has been studied as an alternative for the subset selection problem (Nardi and Rinaldo, 2011; Schmidt and Makalic, 2013; Sang and Sun, 2015; Kwon et al., 2017). Penalized estimation has nice asymptotic properties, such as selection consistency and minimax optimality, for various statistical models including the generalized linear regression model (Fan and Peng, 2004; Zou, 2006; Ye and Zhang, 2010; Kwon and Kim, 2012). Another advantage of penalized estimation is its computational efficiency, since there exist many fast and efficient algorithms (Friedman et al., 2007; Kim et al., 2008; Lee et al., 2016). Hence we need not exhaustively search all the possible candidate sub-models when the AR process has a very large model order. There are many possible penalty functions for penalized estimation, such as the least absolute shrinkage and selection operator (LASSO) (Tibshirani, 1996) and the smoothly clipped absolute deviation (SCAD) (Fan and Li, 2001). Some non-convex penalties, including the SCAD, have very distinct advantages over others such as the bridge (Huang et al., 2008; Kim et al., 2016) and log (Zou and Li, 2008; Kwon et al., 2016) penalties. First, they produce unbiased estimators of the parameters, which helps us understand the final model without considering the penalty effect.
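To make the two penalties named in the abstract concrete, the following is a small illustrative sketch (not code from the paper) of the SCAD and MCP penalty functions in their standard piecewise forms, using the commonly cited default shape parameters a = 3.7 for SCAD and gamma = 3 for MCP; both defaults are assumptions for the example, not choices made by the authors here.

```python
import numpy as np

def scad_penalty(t, lam, a=3.7):
    """SCAD penalty of Fan and Li (2001); requires a > 2 (a = 3.7 is conventional).

    Linear (LASSO-like) near zero, quadratic transition, then constant,
    so large coefficients are not shrunk (the source of unbiasedness).
    """
    t = np.abs(np.asarray(t, dtype=float))
    return np.where(
        t <= lam,
        lam * t,                                                # linear region
        np.where(
            t <= a * lam,
            (2 * a * lam * t - t**2 - lam**2) / (2 * (a - 1)),  # quadratic region
            lam**2 * (a + 1) / 2,                               # constant: no shrinkage bias
        ),
    )

def mcp_penalty(t, lam, gamma=3.0):
    """Minimax concave penalty (MCP) of Zhang (2010); requires gamma > 1."""
    t = np.abs(np.asarray(t, dtype=float))
    return np.where(
        t <= gamma * lam,
        lam * t - t**2 / (2 * gamma),  # concave transition from the origin
        gamma * lam**2 / 2,            # constant for large |t|
    )
```

Both penalties flatten out beyond a threshold (a*lam for SCAD, gamma*lam for MCP), which is exactly why they yield unbiased estimates of large parameters, unlike the LASSO whose penalty keeps growing linearly.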