Classic books on high-dimensional statistics, sparse inference, and penalized estimation, plus a list of highly cited papers on Lasso variable selection


    In recent years, high-dimensional statistical inference related to variable selection has been a very active area, especially the lasso, which is based on the L1 penalty, and its many extensions; for background, see the blog posts 《Lasso思想及算法》, 《统计学习那些事》, and 《The Lasso》, and the videos at http://videolectures.net/site/search/?q=LASSO and http://videolectures.net/site/search/?q=+High-Dimensional+Data. Variable selection has grown rapidly into an important branch of modern mathematical statistics, with wide applications in biology, medicine, networks, economics and finance, and image processing. The leading researchers (their names appear in the list of highly cited Lasso papers below; citation counts are as of 2015-08-12) publish regularly in the four flagship statistics journals:

    Journal of the Royal Statistical Society Series B (Statistical Methodology), Annals of Statistics, Biometrika, and Journal of the American Statistical Association.

    Many excellent papers also appear in the six next-tier statistics journals:

    Bernoulli, Statistica Sinica, Scandinavian Journal of Statistics, Electronic Journal of Statistics, Statistical Science, and Technometrics.

    As science and technology advance, the dimensionality of the data we collect keeps growing, so how to mine useful information effectively from massive data has drawn wide attention; high-dimensional statistical modeling is undoubtedly one of the most effective tools for this problem. In classical low-dimensional modeling, one tends to include as many covariates as possible in order to reduce the model bias caused by omitting important variables. In high-dimensional modeling, however, the curse of dimensionality (see Chapter 1 of Introduction to High-Dimensional Statistics by Christophe Giraud for a detailed account) makes keeping all the variables impractical, so we must select a subset of variables to improve the interpretability and predictive accuracy of the model. Variable selection also follows the spirit of Occam's Razor: in his Commentary on the Sentences (Book II, Question 15), Occam wrote, "It is futile to do with more things that which can be done with fewer." The principle is named after William of Occam (c. 1285-1349), a 14th-century logician and Franciscan friar.

    "Occam's Razor is a well known principle of 'parsimony of explanations' which is influential in scientific thinking in general and in problems of statistical inference in particular." (Rasmussen)


    Studying high-dimensional statistics is not easy either; the following foundational courses are prerequisites: mathematical statistics (classical statistical inference), advanced probability (limit theory and large-sample theory), linear and generalized linear models (matrix theory, the classical linear model), and statistical computing (optimization methods). For a reading list, see the blog post 《概率统计金融数学计量精算一些内容利于自学,新而全的教科书》.

    The books below are devoted to, or contain chapters on, high-dimensional statistics, sparse inference, and penalized estimation, listed in chronological order.

    Theory-oriented books on high-dimensional statistics (sparse inference, penalized estimation):
    2002, Subset Selection in Regression, 2nd ed. by Miller, A.
    2005, The Concentration of Measure Phenomenon by Ledoux, M.
    2007, Introduction to Clustering Large and High-Dimensional Data by Jacob Kogan
    2007, Concentration Inequalities and Model Selection by Massart, P.
    2008, Modern Multivariate Statistical Techniques by Izenman, A. J.
    2008, High-Dimensional Data Analysis in Cancer Research by Xiaochun Li and Ronghui Xu
    2009, Spectral Analysis of Large Dimensional Random Matrices by Zhidong Bai and Jack W. Silverstein
    2010, High-dimensional Data Analysis by Tony Cai and Xiaotong Shen
    2010, Large-Scale Inference: Empirical Bayes Methods for Estimation, Testing, and Prediction by Efron, B.
    2011, Statistics for High-Dimensional Data: Methods, Theory and Applications by Peter Bühlmann and Sara van de Geer
    2011, Oracle Inequalities in Empirical Risk Minimization and Sparse Recovery Problems by Koltchinskii, V.
    2012, 大维统计分析 by 白志东 (Zhidong Bai)
    2013, Multivariate Statistical Analysis: A High-Dimensional Approach by Serdobolskii, V. I.
    2013, High Dimensional Probability VI: The Banff Volume
    2013, High-Dimensional Covariance Estimation: With High-Dimensional Data by Mohsen Pourahmadi
    2013, Penalty, Shrinkage and Pretest Strategies: Variable Selection and Estimation by S. Ejaz Ahmed
    2014, Multivariate Statistics: High-Dimensional and Large-Sample Approximations by Fujikoshi
    2014, Introduction to High-Dimensional Statistics by Christophe Giraud
    2014, An Introduction to Sparse Stochastic Processes by M. Unser and P. D. Tafti
    2015, Statistical Learning for High-Dimensional Data by Jianqing Fan and Runze Li
    2015, Multivariate Density Estimation: Theory, Practice, and Visualization, 2nd ed. by David W. Scott (Chapter 7)
    2015, Applied Multivariate Statistical Analysis, 4th ed. by Härdle, W., & Simar, L.
    2015, Statistical Learning with Sparsity: The Lasso and Generalizations by Hastie, T., Tibshirani, R., & Wainwright, M.
    2015, Modeling and Stochastic Learning for Forecasting in High Dimensions by Antoniadis, A., Poggi, J. M., & Brossat, X.
    2015, Large Sample Covariance Matrices and High-Dimensional Data Analysis by Jianfeng Yao and Shurong Zheng

    Application-oriented statistical learning books:
    1998, Statistical Learning Theory by Vapnik
    2004, All of Statistics: A Concise Course in Statistical Inference by Larry Wasserman (the second half of the book is essentially statistical learning: the bootstrap, graphical models, causal inference, classification, and nonparametrics are all covered)
    2008, Statistical Learning from a Regression Perspective by Richard A. Berk
    2009, The Elements of Statistical Learning: Data Mining, Inference, and Prediction by Trevor Hastie, Robert Tibshirani, and Jerome Friedman (the authors are among the most active researchers on boosting and variable selection; the gradient boosting method they invented offered a new angle for understanding boosting and greatly extended its range of application. The book treats today's most popular methods in a fairly comprehensive and deep way, so it may be of particular value to practitioners; it not only summarizes mature techniques but also gives concise accounts of topics still under development, conveying that machine learning remains a very active research field, and even academic researchers should find something new on each rereading)
    2009, Algebraic Geometry and Statistical Learning Theory by Sumio Watanabe
    2012, 统计学习方法 by 李航 (Hang Li) (the author is one of the leading machine-learning researchers in China; he was a senior researcher at MSRA and is now at Huawei's Noah's Ark Lab. The book presents ten algorithms, each introduced crisply, going straight to the formulas, and the references at the end of each chapter make it easy for readers who want a deeper understanding to locate the classic papers)
    2012, Machine Learning: A Probabilistic Perspective by Kevin P. Murphy
    2013, Machine Learning with R by Brett Lantz
    2013, Probability for Statistics and Machine Learning by Anirban DasGupta (covers essentially all the probability theory used in statistical learning)
    2013, An Introduction to Statistical Learning: with Applications in R by Gareth James, Daniela Witten, Trevor Hastie, and Robert Tibshirani
    2014, Applied Linear Regression, 4th ed. by Sanford Weisberg (Chapter 10)


    Below is a list of highly cited papers related to Lasso variable selection (each with more than 100 Google Scholar citations):

    The lasso for variable selection was introduced by Tibshirani in a 1996 JRSS-B paper:
    Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society. Series B (Methodological), 267-288. Citations: 13667

    Lasso stands for Least Absolute Shrinkage and Selection Operator; its idea can be expressed as the following optimization problem:
    $$\hat{\beta}^{\mathrm{lasso}} = \arg\min_{\beta}\ \sum_{i=1}^{n}\Big(y_i - \beta_0 - \sum_{j=1}^{p} x_{ij}\beta_j\Big)^2 \quad \text{subject to} \quad \sum_{j=1}^{p}|\beta_j| \le t$$
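    To see the method in action, here is a minimal sketch of fitting the lasso to simulated high-dimensional data. It assumes scikit-learn and NumPy; the simulated design (n = 100, p = 200, five active variables) and the penalty weight alpha = 0.1 are illustrative choices, not part of the original post.

    import numpy as np
    from sklearn.linear_model import Lasso

    # Simulated high-dimensional design: more variables (p) than observations (n).
    rng = np.random.default_rng(0)
    n, p = 100, 200
    X = rng.standard_normal((n, p))
    beta_true = np.zeros(p)
    beta_true[:5] = [3.0, -2.0, 1.5, 1.0, -1.0]  # only 5 truly active variables
    y = X @ beta_true + 0.5 * rng.standard_normal(n)

    # alpha weights the L1 penalty (the Lagrangian form of the constraint above);
    # larger alpha drives more coefficients exactly to zero.
    fit = Lasso(alpha=0.1).fit(X, y)
    print("nonzero coefficients:", np.count_nonzero(fit.coef_))
    print("selected variables:", np.flatnonzero(fit.coef_))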

    Highly cited variable-selection papers predating Tibshirani (1996):
    Akaike, H. (1973). Information theory and an extension of the maximum likelihood principle. In Petrov, B. N., & Csáki, F. (Eds.), 2nd International Symposium on Information Theory, Tsahkadsor, Armenia, USSR, September 2-8, 1971 (pp. 267-281). Budapest: Akadémiai Kiadó. (the AIC criterion) Citations: 14906
    Mallows, C. L. (1973). Some comments on Cp. Technometrics, 15(4), 661-675. (Mallows' Cp) Citations: 3336
    Schwarz, G. E. (1978). Estimating the dimension of a model. Annals of Statistics, 6(2), 461-464. (the BIC criterion) Citations: 24512
    Frank, I. E., & Friedman, J. H. (1993). A statistical view of some chemometrics regression tools. Technometrics, 35(2), 109-135. (the bridge estimator) Citations: 1630
    Breiman, L. (1995). Better subset regression using the nonnegative garrote. Technometrics, 37(4), 373-384. Citations: 737
    Mallows, C. L. (1995). More comments on Cp. Technometrics, 37(4), 362-372. Citations: 127
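    For readers meeting these criteria for the first time, their standard forms (textbook material restated here for convenience, not part of the original post) are

    $$\mathrm{AIC} = 2k - 2\ln\hat{L}, \qquad \mathrm{BIC} = k\ln n - 2\ln\hat{L}, \qquad C_p = \frac{\mathrm{SSE}_k}{\hat{\sigma}^2} - n + 2k,$$

    where k is the number of fitted parameters, n the sample size, $\hat{L}$ the maximized likelihood, $\mathrm{SSE}_k$ the residual sum of squares of the candidate submodel, and $\hat{\sigma}^2$ the error-variance estimate from the full model; one selects the submodel minimizing the criterion.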


    Highly cited papers following Tibshirani (1996):
    1-10
    Efron, B., Hastie, T., Johnstone, I., & Tibshirani, R. (2004). Least angle regression. The Annals of Statistics, 32(2), 407-499. (proposes least angle regression) Citations: 5125
    Zou, H., & Hastie, T. (2005). Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(2), 301-320. (proposes the elastic net) Citations: 3872
    Fan, J., & Li, R. (2001). Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association, 96(456), 1348-1360. (proposes SCAD) Citations: 2888
    Yuan, M., & Lin, Y. (2006). Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 68(1), 49-67. (proposes the group lasso) Citations: 2686
    Zou, H. (2006). The adaptive lasso and its oracle properties. Journal of the American Statistical Association, 101(476), 1418-1429. (proposes the adaptive lasso) Citations: 2303
    Friedman, J., Hastie, T., & Tibshirani, R. (2010). Regularization paths for generalized linear models via coordinate descent. Journal of Statistical Software, 33(1), 1. Citations: 2207
    Candes, E., & Tao, T. (2007). The Dantzig selector: statistical estimation when p is much larger than n. The Annals of Statistics, 2313-2351. (the Dantzig selector) Citations: 1893
    Meinshausen, N., & Bühlmann, P. (2006). High-dimensional graphs and variable selection with the lasso. The Annals of Statistics, 1436-1462. (lasso in graphical models) Citations: 1489
    Zhao, P., & Yu, B. (2006). On model selection consistency of Lasso. The Journal of Machine Learning Research, 7, 2541-2563. (consistency of the lasso) Citations: 1241
    Zou, H., Hastie, T., & Tibshirani, R. (2006). Sparse principal component analysis. Journal of Computational and Graphical Statistics, 15(2), 265-286. (sparse principal component analysis) Citations: 1176
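    As a quick reference (standard definitions restated here, not part of the original post), most of the methods above solve a penalized least-squares problem $\frac{1}{2n}\|y - X\beta\|_2^2 + P_\lambda(\beta)$ and differ only in the penalty:

    $$P_\lambda^{\mathrm{enet}}(\beta) = \lambda_1\|\beta\|_1 + \lambda_2\|\beta\|_2^2, \qquad P_\lambda^{\mathrm{adapt}}(\beta) = \lambda\sum_{j=1}^{p} w_j|\beta_j|, \qquad P_\lambda^{\mathrm{group}}(\beta) = \lambda\sum_{g=1}^{G}\sqrt{p_g}\,\|\beta_{(g)}\|_2,$$

    where the adaptive weights are typically $w_j = 1/|\hat{\beta}_j^{\mathrm{init}}|^{\gamma}$ for some initial estimator, and $\beta_{(g)}$ is the coefficient sub-vector of group g with size $p_g$. SCAD replaces the L1 penalty with a nonconvex function that levels off for large coefficients, which removes the lasso's bias on strong signals.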
    11-20
    Friedman, J., Hastie, T., Höfling, H., & Tibshirani, R. (2007). Pathwise coordinate optimization. The Annals of Applied Statistics, 1(2), 302-332. Citations: 1024
    Bickel, P. J., Ritov, Y. A., & Tsybakov, A. B. (2009). Simultaneous analysis of Lasso and Dantzig selector. The Annals of Statistics, 1705-1732. Citations: 970
    Tibshirani, R., Saunders, M., Rosset, S., Zhu, J., & Knight, K. (2005). Sparsity and smoothness via the fused lasso. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(1), 91-108. Citations: 935
    Park, T., & Casella, G. (2008). The Bayesian lasso. Journal of the American Statistical Association, 103(482), 681-686. (the Bayesian lasso) Citations: 886
    Knight, K., & Fu, W. (2000). Asymptotics for lasso-type estimators. Annals of Statistics, 1356-1378. (a must-read classic on the asymptotic properties of the lasso) Citations: 774
    Fu, W. J. (1998). Penalized regressions: the bridge versus the lasso. Journal of Computational and Graphical Statistics, 7(3), 397-416. Citations: 703
    Meier, L., Van De Geer, S., & Bühlmann, P. (2008). The group lasso for logistic regression. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 70(1), 53-71. Citations: 709
    Fan, J., & Lv, J. (2008). Sure independence screening for ultrahigh dimensional feature space. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 70(5), 849-911. Citations: 713
    Wainwright, M. J. (2009). Sharp thresholds for high-dimensional and noisy sparsity recovery using ℓ1-constrained quadratic programming (Lasso). IEEE Transactions on Information Theory, 55(5), 2183-2202. Citations: 686
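    The coordinate-descent papers above (Friedman, Hastie, Höfling, & Tibshirani, 2007; Friedman, Hastie, & Tibshirani, 2010) rest on the fact that the one-dimensional lasso problem is solved exactly by soft-thresholding. The following is a minimal sketch of that idea (an illustration, not the glmnet implementation) for the objective $\frac{1}{2n}\|y - X\beta\|_2^2 + \lambda\|\beta\|_1$:

    import numpy as np

    def soft_threshold(z, t):
        # S(z, t) = sign(z) * max(|z| - t, 0): the exact 1-D lasso solution.
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    def lasso_coordinate_descent(X, y, lam, n_sweeps=200):
        # Cyclic coordinate descent for (1/2n)||y - X b||^2 + lam * ||b||_1.
        n, p = X.shape
        beta = np.zeros(p)
        col_norm = (X ** 2).sum(axis=0) / n   # x_j' x_j / n for each column
        resid = y - X @ beta                  # full residual, kept up to date
        for _ in range(n_sweeps):
            for j in range(p):
                resid += X[:, j] * beta[j]    # drop feature j: partial residual
                rho = X[:, j] @ resid / n     # univariate least-squares signal
                beta[j] = soft_threshold(rho, lam) / col_norm[j]
                resid -= X[:, j] * beta[j]    # restore residual with new beta_j
        return beta

    Running this over a decreasing grid of lambda values, warm-starting each fit from the previous solution, is the regularization-path strategy of Friedman et al. (2010).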
    21-30
    Schäfer, J., & Strimmer, K. (2005). A shrinkage approach to large-scale covariance matrix estimation and implications for functional genomics. Statistical Applications in Genetics and Molecular Biology, 4(1). Citations: 678
    Tibshirani, R. (1997). The lasso method for variable selection in the Cox model. Statistics in Medicine, 16(4), 385-395. Citations: 680
    Zhu, J., Rosset, S., Hastie, T., & Tibshirani, R. (2004). 1-norm support vector machines. Advances in Neural Information Processing Systems, 16(1), 49-56. Citations: 635
    Meinshausen, N., & Bühlmann, P. (2010). Stability selection. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 72(4), 417-473. Citations: 672
    Arlot, S., & Celisse, A. (2010). A survey of cross-validation procedures for model selection. Statistics Surveys, 4, 40-79. Citations: 647
    Yuan, M., & Lin, Y. (2007). Model selection and estimation in the Gaussian graphical model. Biometrika, 94(1), 19-35. Citations: 608
    Zhang, C. H. (2010). Nearly unbiased variable selection under minimax concave penalty. The Annals of Statistics, 894-942. Citations: 588
    Zou, H., & Li, R. (2008). One-step sparse estimates in nonconcave penalized likelihood models. Annals of Statistics, 36(4), 1509. Citations: 565
    Park, M. Y., & Hastie, T. (2007). L1-regularization path algorithm for generalized linear models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 69(4), 659-677. Citations: 554
    Osborne, M. R., Presnell, B., & Turlach, B. A. (2000). On the lasso and its dual. Journal of Computational and Graphical Statistics, 9(2), 319-337. Citations: 545
    Hastie, T., Rosset, S., Tibshirani, R., & Zhu, J. (2004). The entire regularization path for the support vector machine. The Journal of Machine Learning Research, 5, 1391-1415. Citations: 520
    31-40
    Koenker, R. (2004). Quantile regression for longitudinal data. Journal of Multivariate Analysis, 91(1), 74-89. Citations: 502
    Meinshausen, N., & Yu, B. (2009). Lasso-type recovery of sparse representations for high-dimensional data. The Annals of Statistics, 246-270. Citations: 485
    Fan, J., & Peng, H. (2004). Nonconcave penalized likelihood with a diverging number of parameters. The Annals of Statistics, 32(3), 928-961. Citations: 483
    Zou, H., Hastie, T., & Tibshirani, R. (2007). On the "degrees of freedom" of the lasso. The Annals of Statistics, 35(5), 2173-2192. Citations: 479
    Bach, F. R. (2008). Consistency of the group lasso and multiple kernel learning. The Journal of Machine Learning Research, 9, 1179-1225. Citations: 476
    Blei, D. M., & Lafferty, J. D. (2007). A correlated topic model of science. The Annals of Applied Statistics, 17-35. Citations: 477
    Antoniadis, A., & Fan, J. (2001). Regularization of wavelet approximations. Journal of the American Statistical Association. Citations: 467
    Genkin, A., Lewis, D. D., & Madigan, D. (2007). Large-scale Bayesian logistic regression for text categorization. Technometrics, 49(3), 291-304. Citations: 455
    Hofmann, T., Schölkopf, B., & Smola, A. J. (2008). Kernel methods in machine learning. The Annals of Statistics, 1171-1220. Citations: 451
    Portnoy, S., & Koenker, R. (1997). The Gaussian hare and the Laplacian tortoise: computability of squared-error versus absolute-error estimators. Statistical Science, 12(4), 279-300. Citations: 415
    41-50
    Bühlmann, P., & Hothorn, T. (2007). Boosting algorithms: Regularization, prediction and model fitting. Statistical Science, 477-505. Citations: 407
    Zhang, C. H., & Huang, J. (2008). The sparsity and bias of the lasso selection in high-dimensional linear regression. The Annals of Statistics, 1567-1594. Citations: 407
    Jacob, L., Obozinski, G., & Vert, J. P. (2009, June). Group lasso with overlap and graph lasso. In Proceedings of the 26th Annual International Conference on Machine Learning (pp. 433-440). ACM. Citations: 405
    Koh, K., Kim, S. J., & Boyd, S. P. (2007). An interior-point method for large-scale l1-regularized logistic regression. Journal of Machine Learning Research, 8(8), 1519-1555. Citations: 420
    Wu, T. T., & Lange, K. (2008). Coordinate descent algorithms for lasso penalized regression. The Annals of Applied Statistics, 224-244. Citations: 385
    Jolliffe, I. T., Trendafilov, N. T., & Uddin, M. (2003). A modified principal component technique based on the LASSO. Journal of Computational and Graphical Statistics, 12(3), 531-547.
    Van de Geer, S. A. (2008). High-dimensional generalized linear models and the lasso. The Annals of Statistics, 614-645. Citations: 357
    Bair, E., Hastie, T., Paul, D., & Tibshirani, R. (2006). Prediction by supervised principal components. Journal of the American Statistical Association, 101(473). Citations: 356
    Candès, E. J., & Plan, Y. (2009). Near-ideal model selection by ℓ1 minimization. The Annals of Statistics, 37(5A), 2145-2177. Citations: 345
    Ramsay, J. O., Hooker, G., Campbell, D., & Cao, J. (2007). Parameter estimation for differential equations: a generalized smoothing approach. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 69(5), 741-796. Citations: 341
    51-60
    Wang, H., Li, R., & Tsai, C. L. (2007). Tuning parameter selectors for the smoothly clipped absolute deviation method. Biometrika, 94(3), 553-568. Citations: 329
    Rosset, S., & Zhu, J. (2007). Piecewise linear regularized solution paths. The Annals of Statistics, 1012-1030. Citations: 328
    Zhao, P., Rocha, G., & Yu, B. (2009). The composite absolute penalties family for grouped and hierarchical variable selection. The Annals of Statistics, 3468-3497. Citations: 331
    Bunea, F., Tsybakov, A., & Wegkamp, M. (2007). Sparsity oracle inequalities for the Lasso. Electronic Journal of Statistics, 1, 169-194. Citations: 321
    Wu, T. T., Chen, Y. F., Hastie, T., Sobel, E., & Lange, K. (2009). Genome-wide association analysis by lasso penalized logistic regression. Bioinformatics, 25(6), 714-721. Citations: 321
    Kuo, L., & Mallick, B. (1998). Variable selection for regression models. Sankhyā: The Indian Journal of Statistics, Series B, 65-81. Citations: 314
    Leeb, H., & Pötscher, B. M. (2005). Model selection and inference: Facts and fiction. Econometric Theory, 21(01), 21-59. Citations: 298
    Fan, J., & Li, R. (2002). Variable selection for Cox's proportional hazards model and frailty model. Annals of Statistics, 74-99. Citations: 303
    Negahban, S., Yu, B., Wainwright, M. J., & Ravikumar, P. K. (2009). A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. In Advances in Neural Information Processing Systems (pp. 1348-1356). Citations: 319
    Jenatton, R., Audibert, J. Y., & Bach, F. (2011). Structured variable selection with sparsity-inducing norms. The Journal of Machine Learning Research, 12, 2777-2824. Citations: 301
    61-70
    Chen, J., & Chen, Z. (2008). Extended Bayesian information criteria for model selection with large model spaces. Biometrika, 95(3), 759-771. Citations: 312
    Fan, J., & Li, R. (2004). New estimation and model selection procedures for semiparametric modeling in longitudinal data analysis. Journal of the American Statistical Association, 99(467), 710-723. Citations: 291
    Fan, J., & Lv, J. (2010). A selective overview of variable selection in high dimensional feature space. Statistica Sinica, 20(1), 101. Citations: 294
    Sauerbrei, W., & Royston, P. (1999). Building multivariable prognostic and diagnostic models: transformation of the predictors by using fractional polynomials. Journal of the Royal Statistical Society. Series A (Statistics in Society), 71-94. Citations: 286
    Guo, Y., Hastie, T., & Tibshirani, R. (2007). Regularized linear discriminant analysis and its application in microarrays. Biostatistics, 8(1), 86-100. Citations: 283
    Wainwright, M. J. (2009). Information-theoretic limits on sparsity recovery in the high-dimensional and noisy setting. IEEE Transactions on Information Theory, 55(12), 5728-5741. Citations: 285
    George, E. I. (2000). The variable selection problem. Journal of the American Statistical Association, 95(452), 1304-1308. Citations: 272
    Huang, J., Ma, S., & Zhang, C. H. (2008). Adaptive Lasso for sparse high-dimensional regression models. Statistica Sinica, 18(4), 1603. Citations: 275
    Shen, H., & Huang, J. Z. (2008). Sparse principal component analysis via regularized low rank matrix approximation. Journal of Multivariate Analysis, 99(6), 1015-1034. Citations: 280
    Friedman, J. H., & Popescu, B. E. (2008). Predictive learning via rule ensembles. The Annals of Applied Statistics, 916-954. Citations: 265
    71-80
    Shevade, S. K., & Keerthi, S. S. (2003). A simple and efficient algorithm for gene selection using sparse logistic regression. Bioinformatics, 19(17), 2246-2253. Citations: 265
    Buehlmann, P. (2006). Boosting for high-dimensional linear models. The Annals of Statistics, 559-583. Citations: 264
    Hunter, D. R., & Li, R. (2005). Variable selection using MM algorithms. Annals of Statistics, 33(4), 1617. Citations: 262
    Ravikumar, P., Lafferty, J., Liu, H., & Wasserman, L. (2009). Sparse additive models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 71(5), 1009-1030. Citations: 260
    Zhang, H. H., & Lu, W. (2007). Adaptive Lasso for Cox's proportional hazards model. Biometrika, 94(3), 691-703. Citations: 255
    Huang, J. Z., Liu, N., Pourahmadi, M., & Liu, L. (2006). Covariance matrix selection and estimation via penalised normal likelihood. Biometrika, 93(1), 85-98. Citations: 255
    Lange, N., & Zeger, S. L. (1997). Non-linear Fourier time series analysis for human brain mapping by functional magnetic resonance imaging. Journal of the Royal Statistical Society: Series C (Applied Statistics), 46(1), 1-29. Citations: 252
    Obozinski, G., Taskar, B., & Jordan, M. I. (2010). Joint covariate selection and joint subspace selection for multiple classification problems. Statistics and Computing, 20(2), 231-252. Citations: 255
    Greenshtein, E., & Ritov, Y. A. (2004). Persistence in high-dimensional linear predictor selection and the virtue of overparametrization. Bernoulli, 10(6), 971-988. Citations: 252
    Huang, J., Horowitz, J. L., & Ma, S. (2008). Asymptotic properties of bridge estimators in sparse high-dimensional regression models. The Annals of Statistics, 587-613. Citations: 253
    81-90
    Bunea, F., Tsybakov, A. B., & Wegkamp, M. H. (2007). Aggregation for Gaussian regression. The Annals of Statistics, 35(4), 1674-1697. Citations: 243
    Wand, M. P. (2003). Smoothing and mixed models. Computational Statistics, 18(2), 223-249. Citations: 238
    Bai, J., & Ng, S. (2008). Forecasting economic time series using targeted predictors. Journal of Econometrics, 146(2), 304-317. Citations: 238
    Peng, J., Wang, P., Zhou, N., & Zhu, J. (2009). Partial correlation estimation by joint sparse regression models. Journal of the American Statistical Association, 104(486). Citations: 243
    Huang, J., Zhang, T., & Metaxas, D. (2011). Learning with structured sparsity. The Journal of Machine Learning Research, 12, 3371-3412. Citations: 243
    Mazumder, R., Hastie, T., & Tibshirani, R. (2010). Spectral regularization algorithms for learning large incomplete matrices. The Journal of Machine Learning Research, 11, 2287-2322. Citations: 241
    Li, C., & Li, H. (2008). Network-constrained regularization and variable selection for analysis of genomic data. Bioinformatics, 24(9), 1175-1182. Citations: 239
    Zou, H., & Zhang, H. H. (2009). On the adaptive elastic-net with a diverging number of parameters. Annals of Statistics, 37(4), 1733. Citations: 239
    Meinshausen, N. (2007). Relaxed lasso. Computational Statistics & Data Analysis, 52(1), 374-393. Citations: 238
    De Mol, C., Giannone, D., & Reichlin, L. (2008). Forecasting using a large number of predictors: Is Bayesian shrinkage a valid alternative to principal components? Journal of Econometrics, 146(2), 318-328. Citations: 235
    91-100
    O'Hara, R. B., & Sillanpää, M. J. (2009). A review of Bayesian variable selection methods: what, how and which. Bayesian Analysis, 4(1), 85-117. Citations: 234
    Turlach, B. A., Venables, W. N., & Wright, S. J. (2005). Simultaneous variable selection. Technometrics, 47(3), 349-363. Citations: 233
    Yuan, M., & Lin, Y. (2007). On the non-negative garrotte estimator. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 69(2), 143-161. Citations: 218
    Gui, J., & Li, H. (2005). Penalized Cox regression analysis in the high-dimensional and low-sample size settings, with applications to microarray gene expression data. Bioinformatics, 21(13), 3001-3008. Citations: 216
    Goeman, J. J. (2010). L1 penalized estimation in the Cox proportional hazards model. Biometrical Journal, 52(1), 70-84. Citations: 223
    Steyerberg, E. W., Borsboom, G. J., van Houwelingen, H. C., Eijkemans, M. J., & Habbema, J. D. F. (2004). Validation and updating of predictive logistic regression models: a study on sample size and shrinkage. Statistics in Medicine, 23(16), 2567-2586. Citations: 217
    Kadane, J. B., & Lazar, N. A. (2004). Methods and criteria for model selection. Journal of the American Statistical Association, 99(465), 279-290. Citations: 218
    Aliferis, C. F., Statnikov, A., Tsamardinos, I., Mani, S., & Koutsoukos, X. D. (2010). Local causal and Markov blanket induction for causal discovery and feature selection for classification part I: Algorithms and empirical evaluation. The Journal of Machine Learning Research, 11, 171-234. Citations: 214
    Bondell, H. D., & Reich, B. J. (2008). Simultaneous regression shrinkage, variable selection, and supervised clustering of predictors with OSCAR. Biometrics, 64(1), 115-123. Citations: 209
    Fan, J., & Li, R. (2006). Statistical challenges with high dimensionality: feature selection in knowledge discovery. In Proceedings of the International Congress of Mathematicians: Madrid, August 22-30, 2006: invited lectures (pp. 595-622). Citations: 208
    101-110
    Journée, M., Nesterov, Y., Richtárik, P., & Sepulchre, R. (2010). Generalized power method for sparse principal component analysis. The Journal of Machine Learning Research, 11, 517-553. Citations: 206
    Chun, H., & Keleş, S. (2010). Sparse partial least squares regression for simultaneous dimension reduction and variable selection. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 72(1), 3-25. Citations: 206
    Sardy, S., Bruce, A. G., & Tseng, P. (2000). Block coordinate relaxation methods for nonparametric wavelet denoising. Journal of Computational and Graphical Statistics, 9(2), 361-379. Citations: 203
    Seeger, M. W. (2008). Bayesian inference and optimal design for the sparse linear model. The Journal of Machine Learning Research, 9, 759-813. Citations: 203
    Tibshirani, R., & Wang, P. (2008). Spatial smoothing and hot spot detection for CGH data using the fused lasso. Biostatistics, 9(1), 18-29. Citations: 204
    Leng, C., Lin, Y., & Wahba, G. (2006). A note on the lasso and related procedures in model selection. Statistica Sinica, 16(4), 1273. Citations: 196
    Wasserman, L., & Roeder, K. (2009). High dimensional variable selection. Annals of Statistics, 37(5A), 2178. Citations: 193
    Meier, L., Van de Geer, S., & Bühlmann, P. (2009). High-dimensional additive modeling. The Annals of Statistics, 37(6B), 3779-3821. Citations: 192
    Li, R., & Liang, H. (2008). Variable selection in semiparametric regression modeling. Annals of Statistics, 36(1), 261. Citations: 191
    Zhang, H. H., Ahn, J., Lin, X., & Park, C. (2006). Gene selection using support vector machines with non-convex penalty. Bioinformatics, 22(1), 88-95. Citations: 186
    111-120
    Lin, Y., & Zhang, H. H. (2006). Component selection and smoothing in multivariate nonparametric regression. The Annals of Statistics, 34(5), 2272-2297. Citations: 185
    d'Aspremont, A., Bach, F., & Ghaoui, L. E. (2008). Optimal solutions for sparse principal component analysis. The Journal of Machine Learning Research, 9, 1269-1294. Citations: 187
    Clemmensen, L., Hastie, T., Witten, D., & Ersbøll, B. (2011). Sparse discriminant analysis. Technometrics, 53(4). Citations: 186
    Wang, H., Li, G., & Jiang, G. (2007). Robust regression shrinkage and consistent variable selection through the LAD-Lasso. Journal of Business & Economic Statistics, 25(3), 347-355. Citations: 181
    Sauerbrei, W. (1999). The use of resampling methods to simplify regression models in medical statistics. Journal of the Royal Statistical Society: Series C (Applied Statistics), 48(3), 313-329. Citations: 181
    Carvalho, C. M., Polson, N. G., & Scott, J. G. (2010). The horseshoe estimator for sparse signals. Biometrika, asq017. Citations: 179
    Barron, A. R., Cohen, A., Dahmen, W., & DeVore, R. A. (2008). Approximation and learning by greedy algorithms. The Annals of Statistics, 64-94. Citations: 177
    Breheny, P., & Huang, J. (2011). Coordinate descent algorithms for nonconvex penalized regression, with applications to biological feature selection. The Annals of Applied Statistics, 5(1), 232. Citations: 179
    Liu, H., Lafferty, J., & Wasserman, L. (2009). The nonparanormal: Semiparametric estimation of high dimensional undirected graphs. The Journal of Machine Learning Research, 10, 2295-2328. Citations: 172
    Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., & Salakhutdinov, R. (2014). Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1), 1929-1958. Citations: 173
    121-130
    Wang, H., & Leng, C. (2007). Unified LASSO estimation by least squares approximation. Journal of the American Statistical Association, 102(479). Citations: 168
    Huang, J., Horowitz, J. L., & Wei, F. (2010). Variable selection in nonparametric additive models. Annals of Statistics, 38(4), 2282. Citations: 166
    Raskutti, G., Wainwright, M. J., & Yu, B. (2011). Minimax rates of estimation for high-dimensional linear regression over ℓq-balls. IEEE Transactions on Information Theory, 57(10), 6976-6994. Citations: 162
    Kyung, M., Gill, J., Ghosh, M., & Casella, G. (2010). Penalized regression, standard errors, and Bayesian lassos. Bayesian Analysis, 5(2), 369-411. Citations: 159
    Teo, C. H., Vishwanthan, S. V. N., Smola, A. J., & Le, Q. V. (2010). Bundle methods for regularized risk minimization. The Journal of Machine Learning Research, 11, 311-365. Citations: 157
    Zou, H., & Yuan, M. (2008). Composite quantile regression and the oracle model selection theory. The Annals of Statistics, 1108-1126. Citations: 157
    Hesterberg, T., Choi, N. H., Meier, L., & Fraley, C. (2008). Least angle and ℓ1 penalized regression: A review. Statistics Surveys, 2, 61-93. Citations: 154
    Hans, C. (2009). Bayesian lasso regression. Biometrika, 96(4), 835-845. Citations: 156
    Negahban, S., & Wainwright, M. J. (2011). Estimation of (near) low-rank matrices with noise and high-dimensional scaling. The Annals of Statistics, 1069-1097. Citations: 156
    Fukumizu, K., Bach, F. R., & Jordan, M. I. (2009). Kernel dimension reduction in regression. The Annals of Statistics, 1871-1905. Citations: 152
    131-140
    Shalev-Shwartz, S., & Tewari, A. (2011). Stochastic methods for ℓ1-regularized loss minimization. The Journal of Machine Learning Research, 12, 1865-1892. Citations: 152
    Fan, J., & Song, R. (2010). Sure independence screening in generalized linear models with NP-dimensionality. The Annals of Statistics, 38(6), 3567-3604. Citations: 152
    Rothman, A. J., Levina, E., & Zhu, J. (2009). Generalized thresholding of large covariance matrices. Journal of the American Statistical Association, 104(485), 177-186. Citations: 151
    Lounici, K. (2008). Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators. Electronic Journal of Statistics, 2, 90-102. Citations: 150
    Mazumder, R., Friedman, J. H., & Hastie, T. (2011). SparseNet: Coordinate descent with nonconvex penalties. Journal of the American Statistical Association, 106(495). Citations: 149
    Fan, J. (1997). Comments on "Wavelets in statistics: A review" by A. Antoniadis. Journal of the Italian Statistical Society, 6(2), 131-138. Citations: 149
    Chen, Z., & Dunson, D. B. (2003). Random effects selection in linear mixed models. Biometrics, 59(4), 762-769. Citations: 143
    Wu, Y., & Liu, Y. (2009). Variable selection in quantile regression. Statistica Sinica, 19(2), 801. Citations: 148
    Wang, L., Li, H., & Huang, J. Z. (2008). Variable selection in nonparametric varying-coefficient models for analysis of repeated measurements. Journal of the American Statistical Association, 103(484), 1556-1569. Citations: 139
    Yuan, M., & Lin, Y. (2005). Efficient empirical Bayes variable selection and estimation in linear models. Journal of the American Statistical Association, 100(472). Citations: 138
    141-150
    Lin, Y., & Zhang, H. H. (2006). Component selection and smoothing in smoothing spline analysis of variance models. Annals of Statistics, 34(5), 2272-2297. Citations: 135
    Pan, W., & Shen, X. (2007). Penalized model-based clustering with application to variable selection. The Journal of Machine Learning Research, 8, 1145-1164. Citations: 136
    Fan, J., Samworth, R., & Wu, Y. (2009). Ultrahigh dimensional feature selection: beyond the linear model. The Journal of Machine Learning Research, 10, 2013-2038. Citations: 137
    Belloni, A., & Chernozhukov, V. (2011). ℓ1-penalized quantile regression in high-dimensional sparse models. The Annals of Statistics, 39(1), 82-130. Citations: 142
    Fan, J., & Lv, J. (2011). Nonconcave penalized likelihood with NP-dimensionality. IEEE Transactions on Information Theory, 57(8), 5467-5484. Citations: 142
    Lv, J., & Fan, Y. (2009). A unified approach to model selection and sparse recovery using regularized least squares. The Annals of Statistics, 3498-3528. Citations: 138
    Wang, H., Li, G., & Tsai, C. L. (2007). Regression coefficient and autoregressive order shrinkage and selection via the lasso. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 69(1), 63-78. Citations: 136
    Kim, Y., Kim, J., & Kim, Y. (2006). Blockwise sparse regression. Statistica Sinica, 16(2), 375. Citations: 131
    Wang, L., Zhu, J., & Zou, H. (2006). The doubly regularized support vector machine. Statistica Sinica, 16(2), 589. Citations: 133
    Sauerbrei, W., Royston, P., & Binder, H. (2007). Selection of important variables and determination of functional form for continuous predictors in multivariable model building. Statistics in Medicine, 26(30), 5512-5528. Citations: 130
    151-160
    Chen, X., Lin, Q., Kim, S., Carbonell, J. G., & Xing, E. P. (2012). Smoothing proximal gradient method for general structured sparse regression. The Annals of Applied Statistics, 6(2), 719-752. Citations: 136
    Bach, F. R. (2008). Consistency of trace norm minimization. The Journal of Machine Learning Research, 9, 1019-1048. Citations: 129
    Fan, J., Feng, Y., & Wu, Y. (2009). Network exploration via the adaptive LASSO and SCAD penalties. The Annals of Applied Statistics, 3(2), 521. Citations: 131
    Zhang, N. R., & Siegmund, D. O. (2007). A modified Bayes information criterion with applications to the analysis of comparative genomic hybridization data. Biometrics, 63(1), 22-32. Citations: 130
    Li, H., & Gui, J. (2006). Gradient directed regularization for sparse Gaussian concentration graphs, with applications to inference of genetic networks. Biostatistics, 7(2), 302-317. Citations: 128
    Biau, G. (2012). Analysis of a random forests model. The Journal of Machine Learning Research, 13(1), 1063-1095. Citations: 134
    Zhao, P., Rocha, G., & Yu, B. (2006). Grouped and hierarchical model selection through composite absolute penalties. Department of Statistics, UC Berkeley, Tech. Rep. 703. Citations: 127
    Huang, J., Ma, S., Xie, H., & Zhang, C. H. (2009). A group bridge approach for variable selection. Biometrika, 96(2), 339-355. Citations: 127
    Wang, H., Li, B., & Leng, C. (2009). Shrinkage tuning parameter selection with a diverging number of parameters. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 71(3), 671-683. Citations: 130
    Levina, E., Rothman, A., & Zhu, J. (2008). Sparse estimation of large covariance matrices via a nested Lasso penalty. The Annals of Applied Statistics, 2(1), 245-263. Citations: 126
    161-170
    Xu, S. (2007). An empirical Bayes method for estimating epistatic effects of quantitative trait loci. Biometrics, 63(2), 513-521. Citations: 125
    Kanamori, T., Hido, S., & Sugiyama, M. (2009). A least-squares approach to direct importance estimation. The Journal of Machine Learning Research, 10, 1391-1445. Citations: 133
    Xu, Z., Zhang, H., Wang, Y., Chang, X., & Liang, Y. (2010). L1/2 regularization. Science China Information Sciences, 53(6), 1159-1169. Citations: 127
    James, G. M., Radchenko, P., & Lv, J. (2009). DASSO: connections between the Dantzig selector and lasso. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 71(1), 127-142. Citations: 127
    Griffin, J. E., & Brown, P. J. (2010). Inference with normal-gamma prior distributions in regression problems. Bayesian Analysis, 5(1), 171-188. Citations: 127
    Simon, N., Friedman, J., Hastie, T., & Tibshirani, R. (2013). A sparse-group lasso. Journal of Computational and Graphical Statistics, 22(2), 231-245. Citations: 130
    Yuan, G. X., Chang, K. W., Hsieh, C. J., & Lin, C. J. (2010). A comparison of optimization methods and software for large-scale l1-regularized linear classification. The Journal of Machine Learning Research, 11, 3183-3234. Citations: 124
    Wang, L., Zhu, J., & Zou, H. (2008). Hybrid huberized support vector machines for microarray classification and gene selection. Bioinformatics, 24(3), 412-419. Citations: 124
    Wang, H., & Xia, Y. (2009). Shrinkage estimation of the varying coefficient model. Journal of the American Statistical Association, 104(486). Citations: 124
    Simon, N., Friedman, J., Hastie, T., & Tibshirani, R. (2011). Regularization paths for Cox's proportional hazards model via coordinate descent. Journal of Statistical Software, 39(5), 1-13. Citations: 126
    171-180
    Yuan, M., Ekici, A., Lu, Z., & Monteiro, R. (2007). Dimension reduction and coefficient estimation in multivariate linear regression. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 69(3), 329-346. Citations: 127
    Belloni, A., Chernozhukov, V., & Wang, L. (2011). Square-root lasso: pivotal recovery of sparse signals via conic programming. Biometrika, 98(4), 791-806. Citations: 124
    Raskutti, G., Wainwright, M. J., & Yu, B. (2010). Restricted eigenvalue properties for correlated Gaussian designs. The Journal of Machine Learning Research, 11, 2241-2259. Citations: 121
    van Houwelingen, H. C., Bruinsma, T., Hart, A. A., van't Veer, L. J., & Wessels, L. F. (2006). Cross-validated Cox regression on microarray gene expression data. Statistics in Medicine, 25(18), 3201-3216. Citations: 119
    Cai, T. T., Xu, G., & Zhang, J. (2009). On recovery of sparse signals via ℓ1 minimization. IEEE Transactions on Information Theory, 55(7), 3388-3397. Citations: 119
    Donoho, D. L., Maleki, A., & Montanari, A. (2011). The noise-sensitivity phase transition in compressed sensing. IEEE Transactions on Information Theory, 57(10), 6920-6941. Citations: 119
    Hansen, M. H., & Kooperberg, C. (2002). Spline adaptation in extended linear models (with comments and a rejoinder by the authors). Statistical Science, 17(1), 2-51. Citations: 117
    Witten, D. M., & Tibshirani, R. (2009). Covariance-regularized regression and classification for high dimensional problems. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 71(3), 615-636. Citations: 117
    Peng, J., Zhu, J., Bergamaschi, A., Han, W., Noh, D. Y., Pollack, J. R., & Wang, P. (2010). Regularized multivariate regression for identifying master predictors with application to integrative genomics study of breast cancer. The Annals of Applied Statistics, 4(1), 53. Citations: 115
    Li, Y., & Zhu, J. (2008). L1-norm quantile regression. Journal of Computational and Graphical Statistics, 17(1). Citations: 115
    181-190
    Belloni, A., Chen, D., Chernozhukov, V., & Hansen, C. (2012). Sparse models and methods for optimal instruments with an application to eminent domain. Econometrica, 80(6), 2369-2429. Citations: 117
    Yuan, M. (2010). High dimensional inverse covariance matrix estimation via linear programming. The Journal of Machine Learning Research, 11, 2261-2286. Citations: 116
    Goeman, J. J., Van De Geer, S. A., & Van Houwelingen, H. C. (2006). Testing against a high dimensional alternative. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 68(3), 477-493. Citations: 114
    Lockhart, R., Taylor, J., Tibshirani, R. J., & Tibshirani, R. (2014). A significance test for the lasso. Annals of Statistics, 42(2), 413. Citations: 113
    Hastie, T., Taylor, J., Tibshirani, R., & Walther, G. (2007). Forward stagewise regression and the monotone lasso. Electronic Journal of Statistics, 1, 1-29. Citations: 113
    Koltchinskii, V. (2009). Sparsity in penalized empirical risk minimization. Annales de l'Institut Henri Poincaré, Probabilités et Statistiques, 45(1), 7-57. Citations: 112
    Fan, J., Feng, Y., & Song, R. (2011). Nonparametric independence screening in sparse ultra-high-dimensional additive models. Journal of the American Statistical Association, 106(494). Citations: 112
    Witten, D. M., & Tibshirani, R. (2011). Penalized classification using Fisher's linear discriminant. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 73(5), 753-772. Citations: 112
    Dudík, M., Phillips, S. J., & Schapire, R. E. (2007). Maximum entropy density estimation with generalized regularization and an application to species distribution modeling. Journal of Machine Learning Research, 8(6). Citations: 111
    Tsiatis, A. A., Davidian, M., Zhang, M., & Lu, X. (2008). Covariate adjustment for two-sample treatment comparisons in randomized clinical trials: A principled yet flexible approach. Statistics in Medicine, 27(23), 4658-4677. Citations: 110
    191-200
    Seaman, S. R., & White, I. R. (2013). Review of inverse probability weighting for dealing with missing data. Statistical Methods in Medical Research, 22(3), 278-295. Citations: 109
    Parkhomenko, E., Tritchler, D., & Beyene, J. (2009). Sparse canonical correlation analysis with application to genomic data integration. Statistical Applications in Genetics and Molecular Biology, 8(1), 1-34. Citations: 107
    Kim, Y., Choi, H., & Oh, H. S. (2008). Smoothly clipped absolute deviation on high dimensions. Journal of the American Statistical Association, 103(484), 1665-1673. Citations: 108
    Bickel, P. J., Li, B., Tsybakov, A. B., van de Geer, S. A., Yu, B., Valdés, T., ... & van der Vaart, A. (2006). Regularization in statistics. Test, 15(2), 271-344. Citations: 107
    Liu, H., Han, F., Yuan, M., Lafferty, J., & Wasserman, L. (2012). High-dimensional semiparametric Gaussian copula graphical models. The Annals of Statistics, 40(4), 2293-2326. Citations: 104
    Lamarche, C. (2010). Robust penalized quantile regression estimation for panel data. Journal of Econometrics, 157(2), 396-408. Citations: 104
    Li, R., & Sudjianto, A. (2012). Analysis of computer experiments using penalized likelihood in Gaussian kriging models. Technometrics. Citations: 103
    Inoue, A., & Kilian, L. (2008). How useful is bagging in forecasting economic time series? A case study of US consumer price inflation. Journal of the American Statistical Association, 103(482), 511-522. Citations: 102
    Leeb, H., & Pötscher, B. M. (2008). Sparse estimators and the oracle property, or the return of Hodges' estimator. Journal of Econometrics, 142(1), 201-211. Citations: 101
    200-
    Witten, D. M., Friedman, J. H., & Simon, N. (2011). New insights and faster computations for the graphical lasso. Journal of Computational and Graphical Statistics, 20(4), 892-900. Citations: 98
    Wang, H., & Leng, C. (2008). A note on adaptive group lasso. Computational Statistics & Data Analysis, 52(12), 5277-5286. Citations: 95
    Koenker, R., & Mizera, I. (2004). Penalized triograms: total variation regularization for bivariate smoothing. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 66(1), 145-163. Citations: 94
    Park, M. Y., Hastie, T., & Tibshirani, R. (2007). Averaged gene expressions for regression. Biostatistics, 8(2), 212-227. Citations: 94
    Li, Q., & Lin, N. (2010). The Bayesian elastic net. Bayesian Analysis, 5(1), 151-170. Citations: 97
    Meinshausen, N., Meier, L., & Bühlmann, P. (2012). P-values for high-dimensional regression. Journal of the American Statistical Association. Citations: 99
    Huang, X., & Pan, W. (2003). Linear regression and two-class classification with gene expression data. Bioinformatics, 19(16), 2072-2078. Citations: 95
    Wang, H. (2009). Forward regression for ultra-high dimensional variable screening. Journal of the American Statistical Association, 104(488), 1512-1524. Citations: 95
    Städler, N., Bühlmann, P., & Van De Geer, S. (2010). ℓ1-penalization for mixture regression models. Test, 19(2), 209-256. Citations: 93
    Zhao, P., & Yu, B. (2007). Stagewise lasso. The Journal of Machine Learning Research, 8, 2701-2726. Citations: 92
    Kozumi, H., & Kobayashi, G. (2011). Gibbs sampling methods for Bayesian quantile regression. Journal of Statistical Computation and Simulation, 81(11), 1565-1578. Citations: 94
    Guan, Y., & Stephens, M. (2011). Bayesian variable selection regression for genome-wide association studies and other large-scale problems. The Annals of Applied Statistics, 1780-1815. Citations: 97
    Marx, B. D. (1996). Iteratively reweighted partial least squares estimation for generalized linear regression. Technometrics, 38(4), 374-381. Citations: 90
    Wand, M. P. (2000). A comparison of regression spline smoothing procedures. Computational Statistics, 15(4), 443-462. Citations: 88
    Bach, F., Jenatton, R., Mairal, J., & Obozinski, G. (2012). Structured sparsity through convex optimization. Statistical Science, 27(4), 450-468. Citations: 93
    Danaher, P., Wang, P., & Witten, D. M. (2014). The joint graphical lasso for inverse covariance estimation across multiple classes. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 76(2), 373-397. Citations: 96
    Khan, J. A., Van Aelst, S., & Zamar, R. H. (2007). Robust linear model selection based on least angle regression. Journal of the American Statistical Association, 102(480), 1289-1299. Citations: 88
    Polson, N. G., & Scott, J. G. (2010). Shrink globally, act locally: Sparse Bayesian regularization and prediction. Bayesian Statistics, 9, 501-538. Citations: 89
    Grömping, U. (2009). Variable importance assessment in regression: linear regression versus random forest. The American Statistician, 63(4). Citations: 90
    Zhang, Y., Li, R., & Tsai, C. L. (2010). Regularization parameter selections via generalized information criterion. Journal of the American Statistical Association, 105(489), 312-323. Citations: 91
    Qiu, P., Zou, C., & Wang, Z. (2010). Nonparametric profile monitoring by mixed effects modeling. Technometrics, 52(3). Citations: 88
    Friedman, J. H. (2012). Fast sparse regression and classification. International Journal of Forecasting, 28(3), 722-738. Citations: 86
    Wang, S., & Zhu, J. (2008). Variable selection for model-based high-dimensional clustering and its application to microarray data. Biometrics, 64(2), 440-448. Citations: 83
    Tibshirani, R., Bien, J., Friedman, J., Hastie, T., Simon, N., Taylor, J., & Tibshirani, R. J. (2012). Strong rules for discarding predictors in lasso-type problems. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 74(2), 245-266. Citations: 89
    Hastie, T., & Tibshirani, R. (2004). Efficient quadratic regularization for expression arrays. Biostatistics, 5(3), 329-340. Citations: 82
    Li, J., Das, K., Fu, G., Li, R., & Wu, R. (2011). The Bayesian lasso for genome-wide association studies. Bioinformatics, 27(4), 516-523. Citations: 85
    Scott, J. G., & Carvalho, C. M. (2008). Feature-inclusion stochastic search for Gaussian graphical models. Journal of Computational and Graphical Statistics, 17(4). Citations: 81
    Bai, J., & Ng, S. (2010). Instrumental variable estimation in a data rich environment. Econometric Theory, 26(06), 1577-1606. Citations: 81
    Zou, H., & Yuan, M. (2008). The F∞-norm support vector machine. Statistica Sinica, 18, 379-398.
    Van de Geer, S., Bühlmann, P., Ritov, Y. A., & Dezeure, R. (2014). On asymptotically optimal confidence regions and tests for high-dimensional models. The Annals of Statistics, 42(3), 1166-1202.
    Xie, H., & Huang, J. (2009). SCAD-penalized regression in high-dimensional partially linear models. The Annals of Statistics, 673-696.
    Castle, J. L., Doornik, J. A., & Hendry, D. F. (2012). Model selection when there are multiple breaks. Journal of Econometrics, 169(2), 239-246.
    Liang, H., & Li, R. (2009). Variable selection for partially linear models with measurement errors. Journal of the American Statistical Association, 104(485), 234-248.
    Li, F., & Zhang, N. R. (2010). Bayesian variable selection in structured high-dimensional covariate spaces with applications in genomics. Journal of the American Statistical Association, 105(491).
    Hoefling, H. (2010). A path algorithm for the fused lasso signal approximator. Journal of Computational and Graphical Statistics, 19(4), 984-1006.
    Lee, A. B., Nadler, B., & Wasserman, L. (2008). Treelets: an adaptive multi-scale basis for sparse unordered data. The Annals of Applied Statistics, 435-471.
    Pötscher, B. M., & Leeb, H. (2009). On the distribution of penalized maximum likelihood estimators: The LASSO, SCAD, and thresholding. Journal of Multivariate Analysis, 100(9), 2065-2082.
    Chatterjee, A., & Lahiri, S. N. (2011). Bootstrapping lasso estimators. Journal of the American Statistical Association, 106(494), 608-625.
    Xue, L., & Zou, H. (2012). Regularized rank-based estimation of high-dimensional nonparanormal graphical models. The Annals of Statistics, 40(5), 2541-2571.
    Liang, H., Liu, X., Li, R., & Tsai, C. L. (2010). Estimation and testing for partially linear single-index models. Annals of statistics, 38(6), 3811.
    Bach, F. (2010). Self-concordant analysis for logistic regression. Electronic Journal of Statistics, 4, 384-414.
    Zhang, C. H., & Zhang, S. S. (2014). Confidence intervals for low dimensional parameters in high dimensional linear models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 76(1), 217-242.
    Greenshtein, E. (2006). Best subset selection, persistence in high-dimensional statistical learning and optimization under l1 constraint. The Annals of Statistics, 34(5), 2367-2386.
    Huang, J., Ma, S., & Xie, H. (2006). Regularized estimation in the accelerated failure time model with high-dimensional covariates. Biometrics, 62(3), 813-820.
    Qian, M., & Murphy, S. A. (2011). Performance guarantees for individualized treatment rules. Annals of statistics, 39(2), 1180.
    Zhang, C. H., & Zhang, T. (2012). A general theory of concave regularization for high-dimensional sparse estimation problems. Statistical Science, 27(4), 576-593.
    Bühlmann, P., & Yu, B. (2006). Sparse boosting. The Journal of Machine Learning Research, 7, 1001-1024.
    Lafferty, J., & Wasserman, L. (2008). Rodeo: sparse, greedy nonparametric regression. The Annals of Statistics, 28-63.
    Fan, J., Lin, H., & Zhou, Y. (2006). Local partial-likelihood estimation for lifetime data. The Annals of Statistics, 34(1), 290-325.
    Johnson, B. A., Lin, D. Y., & Zeng, D. (2008). Penalized estimating functions and variable selection in semiparametric regression models. Journal of the American Statistical Association, 103(482), 672-680.
    Harchaoui, Z., & Lévy-Leduc, C. (2010). Multiple change-point estimation with a total variation penalty. Journal of the American Statistical Association, 105(492).
    Breheny, P., & Huang, J. (2009). Penalized methods for bi-level variable selection. Statistics and its interface, 2(3), 369.
    Bradic, J., Fan, J., & Wang, W. (2011). Penalized composite quasi‐likelihood for ultrahigh dimensional variable selection. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 73(3), 325-349.
    Bien, J., Taylor, J., & Tibshirani, R. (2013). A lasso for hierarchical interactions. The Annals of Statistics, 41(3), 1111-1141.
    Li, L., Dennis Cook, R., & Nachtsheim, C. J. (2005). Model‐free variable selection. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(2), 285-299.
    Cai, J., Fan, J., Li, R., & Zhou, H. (2005). Variable selection for multivariate failure time data. Biometrika, 92(2), 303-316.
    Hall, P., & Miller, H. (2009). Using generalized correlation to effect variable selection in very high dimensional problems. Journal of Computational and Graphical Statistics, 18(3).
    Caner, M. (2009). Lasso-type GMM estimator. Econometric Theory, 25(01), 270-290.
    Tibshirani, R. J., & Taylor, J. (2012). Degrees of freedom in lasso problems. The Annals of Statistics, 40(2), 1198-1232.
    Khalili, A., & Chen, J. (2007). Variable selection in finite mixture of regression models. Journal of the american Statistical association, 102(479).
    Bien, J., & Tibshirani, R. J. (2011). Sparse estimation of a covariance matrix. Biometrika, 98(4), 807.
    Li, R., Zhong, W., & Zhu, L. (2012). Feature screening via distance correlation learning. Journal of the American Statistical Association, 107(499), 1129-1139.
    Hsu, N. J., Hung, H. L., & Chang, Y. M. (2008). Subset selection for vector autoregressive processes using lasso. Computational Statistics & Data Analysis, 52(7), 3645-3657.
    Meinshausen, N., Rocha, G., & Yu, B. (2007). Discussion: A tale of three cousins: Lasso, L2Boosting and Dantzig. The Annals of Statistics, 2373-2384.
    Rothman, A. J., Levina, E., & Zhu, J. (2010). Sparse multivariate regression with covariance estimation. Journal of Computational and Graphical Statistics, 19(4), 947-962.
    Li, R., & Lin, D. K. (2002). Data analysis in supersaturated designs. Statistics & Probability Letters, 59(2), 135-144.
    Goddard, M. E., Wray, N. R., Verbyla, K., & Visscher, P. M. (2009). Estimating effects and making predictions from genome-wide marker data. Statistical Science, 24(4), 517-529.
    Boysen, L., Kempe, A., Liebscher, V., Munk, A., & Wittich, O. (2009). Consistencies and rates of convergence of jump-penalized least squares estimators. The Annals of Statistics, 157-183.
    Liu, Y., & Wu, Y. (2012). Variable selection via a combination of the L0 and L1 penalties. Journal of Computational and Graphical Statistics.
    McShane, B. B., & Wyner, A. J. (2011). A statistical analysis of multiple temperature proxies: Are reconstructions of surface temperatures over the last 1000 years reliable?. The Annals of Applied Statistics, 5-44.
    Schelldorfer, J., Bühlmann, P., & van de Geer, S. (2011). Estimation for high-dimensional linear mixed-effects models using ℓ1-penalization. Scandinavian Journal of Statistics, 38(2), 197-214.
    James, G. M., & Radchenko, P. (2009). A generalized Dantzig selector with shrinkage tuning. Biometrika, 96(2), 323-337.
    Loubes, J. M., & Van De Geer, S. (2002). Adaptive estimation with soft thresholding penalties. Statistica Neerlandica, 56(4), 453-478.
    Paul, D., Bair, E., Hastie, T., & Tibshirani, R. (2008). "Preconditioning" for feature selection and regression in high-dimensional problems. The Annals of Statistics, 1595-1618.
    Liu, Y., Zhang, H. H., Park, C., & Ahn, J. (2007). Support vector machines with adaptive Lq penalty. Computational Statistics & Data Analysis, 51(12), 6380-6394.
    Yuan, M., Joseph, V. R., & Lin, Y. (2007). An efficient variable selection approach for analyzing designed experiments. Technometrics, 49(4), 430-439.
    Cai, T., Tian, L., Wong, P. H., & Wei, L. J. (2011). Analysis of randomized comparative clinical trial data for personalized treatment selections. Biostatistics, 12(2), 270-282.
    Zou, C., & Qiu, P. (2012). Multivariate statistical process control using LASSO. Journal of the American Statistical Association.
    Shen, X., Pan, W., & Zhu, Y. (2012). Likelihood-based selection and sharp parameter estimation. Journal of the American Statistical Association, 107(497), 223-232.
    Radchenko, P., & James, G. M. (2008). Variable inclusion and shrinkage algorithms. Journal of the American Statistical Association, 103(483).
    Marra, G., & Wood, S. N. (2011). Practical variable selection for generalized additive models. Computational Statistics & Data Analysis, 55(7), 2372-2387.
    Xie, X., & Geng, Z. (2008). A recursive method for structural learning of directed acyclic graphs. The Journal of Machine Learning Research, 9, 459-483.
    Wu, Y., Boos, D. D., & Stefanski, L. A. (2012). Controlling variable selection by the addition of pseudovariables. Journal of the American Statistical Association.
    Wei, F., & Huang, J. (2010). Consistent group selection in high-dimensional linear regression. Bernoulli, 16(4), 1369. Citations: 53
    Mishchencko, Y., Vogelstein, J. T., & Paninski, L. (2011). A Bayesian approach for inferring neuronal connectivity from calcium fluorescent imaging data. The Annals of Applied Statistics, 1229-1261.
    Amato, U., Antoniadis, A., & Pensky, M. (2006). Wavelet kernel penalized estimation for non-equispaced design regression. Statistics and Computing, 16(1), 37-55.
    Bühlmann, P. (2013). Statistical significance in high-dimensional linear models. Bernoulli, 19(4), 1212-1242.
    Cook, R. D., & Forzani, L. (2008). Principal fitted components for dimension reduction in regression. Statistical Science, 23(4), 485-501.
    Li, Q., Xi, R., & Lin, N. (2010). Bayesian regularized quantile regression. Bayesian Analysis, 5(3), 533-556.
    Ni, L., Cook, R. D., & Tsai, C. L. (2005). A note on shrinkage sliced inverse regression. Biometrika, 92(1), 242-247.
    Clarke, B. (2003). Comparing Bayes model averaging and stacking when model approximation error cannot be ignored. The Journal of Machine Learning Research, 4, 683-712.
    Tutz, G., & Ulbricht, J. (2009). Penalized regression with correlation-based penalty. Statistics and Computing, 19(3), 239-253.
    Shen, X., & Huang, H. C. (2006). Optimal model assessment, selection, and combination. Journal of the American Statistical Association, 101(474), 554-568.
    Vansteelandt, S., Bekaert, M., & Claeskens, G. (2012). On model selection and model misspecification in causal inference. Statistical methods in medical research, 21(1), 7-30.
    She, Y. (2009). Thresholding-based iterative selection procedures for model selection and shrinkage. Electronic Journal of Statistics, 3, 384-415.
    Hebiri, M., & Van De Geer, S. (2011). The Smooth-Lasso and other ℓ1+ ℓ2-penalized methods. Electronic Journal of Statistics, 5, 1184-1226.
    Li, R., & Lin, D. K. (2003). Analysis methods for supersaturated design: some comparisons. Journal of Data Science, 1(3), 249-260.
    Hautsch, N., Schaumburg, J., & Schienle, M. (2014). Financial network systemic risk contributions. Review of Finance, rfu010.
    Carbonetto, P., & Stephens, M. (2012). Scalable variational inference for Bayesian variable selection in regression, and its accuracy in genetic association studies. Bayesian Analysis, 7(1), 73-108.
    Balakrishnan, S., & Madigan, D. (2008). Algorithms for sparse linear classifiers in the massive data setting. The Journal of Machine Learning Research, 9, 313-337.
    Xie, B., Pan, W., & Shen, X. (2008). Penalized model-based clustering with cluster-specific diagonal covariance matrices and grouped variables. Electronic journal of statistics, 2, 168.
    Chesneau, C., & Hebiri, M. (2008). Some theoretical results on the grouped variables Lasso. Mathematical Methods of Statistics, 17(4), 317-326.
    Fan, J., Lv, J., & Qi, L. (2011). Sparse high dimensional models in economics. Annual review of economics, 3, 291.
    Li, L., & Nachtsheim, C. J. (2006). Sparse sliced inverse regression. Technometrics, 48(4).
    Wang, S., Nan, B., Zhu, N., & Zhu, J. (2009). Hierarchically penalized Cox regression with grouped variables. Biometrika, 96(2), 307-322.
    Li, P., Chen, J., & Marriott, P. (2009). Non-finite Fisher information and homogeneity: an EM approach. Biometrika, asp011.
    Chen, Y. H., Chatterjee, N., & Carroll, R. J. (2009). Shrinkage estimators for robust and efficient inference in haplotype-based case-control studies. Journal of the American Statistical Association, 104(485), 220-233.
    Meinshausen, N. (2004, May). Consistent neighbourhood selection for sparse high-dimensional graphs with the Lasso. Seminar für Statistik, Eidgenössische Technische Hochschule (ETH), Zürich.
    Mai, Q., Zou, H., & Yuan, M. (2012). A direct approach to sparse discriminant analysis in ultra-high dimensions. Biometrika, asr066.
    Yuan, M., Joseph, V. R., & Zou, H. (2009). Structured variable selection and estimation. The Annals of Applied Statistics, 1738-1757.
    Belitz, C., & Lang, S. (2008). Simultaneous selection of variables and smoothing parameters in structured additive regression models. Computational Statistics & Data Analysis, 53(1), 61-81.
    Sardy, S., Antoniadis, A., & Tseng, P. (2004). Automatic smoothing with wavelets for a wide class of distributions. Journal of Computational and Graphical Statistics, 13(2), 399-421. (Cited 45 times)
    Liu, H., & Zhang, J. (2009). Estimation consistency of the group lasso and its applications. In International Conference on Artificial Intelligence and Statistics (pp. 376-383).
    Lu, Y., Zhou, Y., Qu, W., Deng, M., & Zhang, C. (2011). A Lasso regression model for the construction of microRNA-target regulatory networks. Bioinformatics, 27(17), 2406-2413.
    Zhou, N., & Zhu, J. (2010). Group variable selection via a hierarchical lasso and its oracle property. arXiv preprint arXiv:1006.2871.
    Hans, C. (2010). Model uncertainty and variable selection in Bayesian lasso regression. Statistics and Computing, 20(2), 221-229.
    Pötscher, B. M., & Schneider, U. (2009). On the distribution of the adaptive LASSO estimator. Journal of Statistical Planning and Inference, 139(8), 2775-2790.


    Zou, H. (2008). A note on path-based variable selection in the penalized proportional hazards model. Biometrika, 95(1), 241-247.
    van de Geer, S., Bühlmann, P., & Zhou, S. (2011). The adaptive and the thresholded Lasso for potentially misspecified models (and a lower bound for the Lasso). Electronic Journal of Statistics, 5, 688-749.
    Shi, W., Wahba, G., Wright, S., Lee, K., Klein, R., & Klein, B. (2008). LASSO-Patternsearch algorithm with application to ophthalmology and genomic data. Statistics and its Interface, 1(1), 137.
    Javanmard, A., & Montanari, A. (2014). Hypothesis testing in high-dimensional regression under the Gaussian random design model: Asymptotic theory. Information Theory, IEEE Transactions on, 60(10), 6522-6554.
    Zhou, S., van de Geer, S., & Bühlmann, P. (2009). Adaptive Lasso for high dimensional regression and Gaussian graphical modeling. arXiv preprint arXiv:0903.2515.
    Genovese, C. R., Jin, J., Wasserman, L., & Yao, Z. (2012). A comparison of the lasso and marginal regression. The Journal of Machine Learning Research, 13(1), 2107-2143.
    Bunea, F. (2008). Consistent selection via the Lasso for high dimensional approximating regression models. In Pushing the limits of contemporary statistics: contributions in honor of Jayanta K. Ghosh (pp. 122-137). Institute of Mathematical Statistics.
    Zou, C., Ning, X., & Tsung, F. (2012). LASSO-based multivariate linear profile monitoring. Annals of operations research, 192(1), 3-19.
    Hansen, N. R., Reynaud-Bouret, P., & Rivoirard, V. (2015). Lasso and probabilistic inequalities for multivariate point processes. Bernoulli, 21(1), 83-143. (Cited 31 times)
    Alhamzawi, R., Yu, K., & Benoit, D. F. (2012). Bayesian adaptive Lasso quantile regression. Statistical Modelling, 12(3), 279-297.
    Kim, S., & Xing, E. P. (2012). Tree-guided group lasso for multi-response regression with structured sparsity, with an application to eQTL mapping. The Annals of Applied Statistics, 6(3), 1095-1117.
    Massart, P., & Meynet, C. (2011). The Lasso as an ℓ1-ball model selection procedure. Electronic Journal of Statistics, 5, 669-687.
    Radchenko, P., & James, G. M. (2011). Improved variable selection with forward-lasso adaptive shrinkage. The Annals of Applied Statistics, 5(1), 427-448.
    Nardi, Y., & Rinaldo, A. (2011). Autoregressive process modeling via the lasso procedure. Journal of Multivariate Analysis, 102(3), 528-549.
    Simon, N., & Tibshirani, R. (2012). Standardization and the group lasso penalty. Statistica Sinica, 22(3), 983.
    Fraley, C., & Hesterberg, T. (2009). Least angle regression and LASSO for large datasets. Statistical Analysis and Data Mining: The ASA Data Science Journal, 1(4), 251-259.
    Hebiri, M., & Lederer, J. (2013). How correlations influence Lasso prediction. Information Theory, IEEE Transactions on, 59(3), 1846-1854.
    Lykou, A., & Whittaker, J. (2010). Sparse CCA using a Lasso with positivity constraints. Computational Statistics & Data Analysis, 54(12), 3144-3157.
    Liu, J., & Ye, J. (2010). Fast overlapping group lasso. arXiv preprint arXiv:1009.0306.
    Zhao, Y., Ogden, R. T., & Reiss, P. T. (2012). Wavelet-based LASSO in functional linear regression. Journal of Computational and Graphical Statistics, 21(3), 600-617.
    Zou, C., Jiang, W., & Tsung, F. (2012). A LASSO-based diagnostic framework for multivariate statistical process control. Technometrics.
    Lambert-Lacroix, S., & Zwald, L. (2011). Robust regression through the Huber’s criterion and adaptive lasso penalty. Electronic Journal of Statistics, 5, 1015-1053.
    Kim, J., Kim, Y., & Kim, Y. (2012). A gradient-based optimization algorithm for lasso. Journal of Computational and Graphical Statistics.
    Charbonnier, C., Chiquet, J., & Ambroise, C. (2010). Weighted-LASSO for structured network inference from time course data. Statistical applications in genetics and molecular biology, 9(1).
    Lee, J. D., Sun, D. L., Sun, Y., & Taylor, J. E. (2013). Exact post-selection inference with the lasso. arXiv preprint arXiv:1311.6238.
    Kamarianakis, Y., Shen, W., & Wynter, L. (2012). Real‐time road traffic forecasting using regime‐switching space‐time models and adaptive LASSO. Applied Stochastic Models in Business and Industry, 28(4), 297-315.
    Huang, J., Ma, S., & Zhang, C. H. (2008). The iterated lasso for high–dimensional logistic regression. The University of Iowa Department of Statistical and Actuarial Science Technical Report, (392).
    Huang, H. C., Hsu, N. J., Theobald, D. M., & Breidt, F. J. (2010). Spatial LASSO with applications to GIS model selection. Journal of Computational and Graphical Statistics, 19(4), 963-983.
    Johnson, B. A. (2009). On lasso for censored data. Electronic Journal of statistics, 3, 485-506.
    Chatterjee, S., Steinhaeuser, K., Banerjee, A., Chatterjee, S., & Ganguly, A. R. (2012, April). Sparse Group Lasso: Consistency and Climate Applications. In SDM (pp. 47-58).
    Leng, C., & Ma, S. (2007). Path consistent model selection in additive risk model via Lasso. Statistics in medicine, 26(20), 3753-3770.
    Kato, T., & Uemura, M. (2012). Period Analysis using the Least Absolute Shrinkage and Selection Operator (Lasso). Publications of the Astronomical Society of Japan, 64(6), 122.
    Leng, C., Tran, M. N., & Nott, D. (2014). Bayesian adaptive lasso. Annals of the Institute of Statistical Mathematics, 66(2), 221-244.
    Xu, J., & Ying, Z. (2010). Simultaneous estimation and variable selection in median regression using Lasso-type penalty. Annals of the Institute of Statistical Mathematics, 62(3), 487-514.
    Wang, D., Eskridge, K. M., & Crossa, J. (2011). Identifying QTLs and epistasis in structured plant populations using adaptive mixed LASSO. Journal of Agricultural, Biological, and Environmental Statistics, 16(2), 170-184.
    Chatterjee, A., & Lahiri, S. N. (2013). Rates of convergence of the adaptive LASSO estimators to the oracle distribution and higher order refinements by the bootstrap. The Annals of Statistics, 41(3), 1232-1259. (Cited 19 times)
    Kalouptsidi, M. (2014). Time to build and fluctuations in bulk shipping. The American Economic Review, 104(2), 564-608.
    Meinshausen, N. (2008). A note on the Lasso for Gaussian graphical model selection. Statistics & Probability Letters, 78(7), 880-884.
    Huang, J., Sun, T., Ying, Z., Yu, Y., & Zhang, C. H. (2013). Oracle inequalities for the lasso in the Cox model. Annals of statistics, 41(3), 1142.
    Percival, D. (2012). Theoretical properties of the overlapping groups lasso. Electronic Journal of Statistics, 6, 269-288.
    Reid, S., Tibshirani, R., & Friedman, J. (2013). A study of error variance estimation in lasso regression. arXiv preprint arXiv:1311.5274.
    Chang, C., & Tsay, R. S. (2010). Estimation of covariance matrix via the sparse Cholesky factor with lasso. Journal of Statistical Planning and Inference, 140(12), 3858-3873.
    Chatterjee, A., & Lahiri, S. (2010). Asymptotic properties of the residual bootstrap for lasso estimators. Proceedings of the American Mathematical Society, 138(12), 4497-4509.
    Kato, K. (2011). Group Lasso for high dimensional sparse quantile regression models. arXiv preprint arXiv:1103.1458.
    Chiquet, J., Grandvalet, Y., & Charbonnier, C. (2012). Sparsity with sign-coherent groups of variables via the cooperative-lasso. The Annals of Applied Statistics, 6(2), 795-830.
    Zeng, P., He, T., & Zhu, Y. (2012). A Lasso-type approach for estimation and variable selection in single index models. Journal of Computational and Graphical Statistics, 21(1), 92-109.
    Biswas, S., & Lin, S. (2012). Logistic Bayesian LASSO for Identifying Association with Rare Haplotypes and Application to Age‐Related Macular Degeneration. Biometrics, 68(2), 587-597.
    Ren, Y., & Zhang, X. (2010). Subset selection for vector autoregressive processes via adaptive Lasso. Statistics & probability letters, 80(23), 1705-1712.
    Silver, M., Montana, G., & Alzheimer's Disease Neuroimaging Initiative. (2012). Fast identification of biological pathways associated with a quantitative trait using group lasso with overlaps. Statistical applications in genetics and molecular biology, 11(1), 1-43.
    Gefang, D. (2014). Bayesian doubly adaptive elastic-net Lasso for VAR shrinkage. International Journal of Forecasting, 30(1), 1-11.
    Ahmed, S. E., Hossain, S., & Doksum, K. A. (2012). LASSO and shrinkage estimation in Weibull censored regression models. Journal of Statistical Planning and Inference, 142(6), 1273-1284.
    Gao, X., & Huang, J. (2010). Asymptotic analysis of high-dimensional lad regression with lasso. Statistica Sinica, 20(4), 1485.
    Alquier, P. (2008). Lasso, iterative feature selection and the correlation selector: Oracle inequalities and numerical performances. Electronic Journal of Statistics, 2, 1129-1152.
    Lykou, A., & Ntzoufras, I. (2013). On Bayesian lasso variable selection and the specification of the shrinkage parameter. Statistics and Computing, 23(3), 361-390. (Cited 10 times)
    Wagener, J., & Dette, H. (2012). Bridge estimators and the adaptive Lasso under heteroscedasticity. Mathematical Methods of Statistics, 21(2), 109-126.
    Tian, G. L., Tang, M. L., Fang, H. B., & Tan, M. (2008). Efficient methods for estimating constrained parameters with applications to regularized (lasso) logistic regression. Computational statistics & data analysis, 52(7), 3528-3542.
    Tateishi, S., Matsui, H., & Konishi, S. (2010). Nonlinear regression modeling via the lasso-type regularization. Journal of Statistical Planning and Inference, 140(5), 1125-1134.
    Osborne, M. R., & Turlach, B. A. (2011). A homotopy algorithm for the quantile regression lasso and related piecewise linear problems. Journal of Computational and Graphical Statistics, 20(4).
    Liu, J., Huang, J., Ma, S., & Wang, K. (2013). Incorporating group correlations in genome-wide association studies using smoothed group Lasso. Biostatistics, 14(2), 205-219.
    Arslan, O. (2012). Weighted LAD-LASSO method for robust parameter estimation and variable selection in regression. Computational Statistics & Data Analysis, 56(6), 1952-1965.
    Vidaurre, D., Bielza, C., & Larrañaga, P. (2012). Lazy lasso for local regression. Computational Statistics, 27(3), 531-550.
    Guo, R., Zhu, H., Chow, S. M., & Ibrahim, J. G. (2012). Bayesian lasso for semiparametric structural equation models. Biometrics, 68(2), 567-577.
    Jia, J., Rohe, K., & Yu, B. (2010). The lasso under heteroscedasticity. arXiv preprint arXiv:1011.1026.
    Foster, S. D., Verbyla, A. P., & Pitchford, W. S. (2008). A random model approach for the LASSO. Computational Statistics, 23(2), 217-233.
    Wang, H., Zou, G., & Wan, A. T. (2013). Adaptive LASSO for varying-coefficient partially linear measurement error models. Journal of Statistical Planning and Inference, 143(1), 40-54.
    Huang, F. (2003). Prediction error property of the lasso estimator and its generalization. Australian & New Zealand journal of statistics, 45(2), 217-228.
    Witten, D., & Friedman, J. (2011). A fast screening rule for the graphical lasso. Journal of Computational and Graphical Statistics, to appear.
    Foster, S. D., Verbyla, A. P., & Pitchford, W. S. (2009). Estimation, prediction and inference for the LASSO random effects model. Australian & New Zealand Journal of Statistics, 51(1), 43-61.
    Peterson, C., Vannucci, M., Karakas, C., Choi, W., Ma, L., & Maletić-Savatić, M. (2013). Inferring metabolic networks using the Bayesian adaptive graphical lasso with informative priors. Statistics and Its Interface, 6(4), 547.
    Meynet, C. (2013). An ℓ1-oracle inequality for the Lasso in finite mixture Gaussian regression models. ESAIM: Probability and Statistics, 17, 650-671.
    De Castro, Y. (2013). A remark on the lasso and the dantzig selector. Statistics & Probability Letters, 83(1), 304-314.
    Bergersen, L. C., Glad, I. K., & Lyng, H. (2011). Weighted lasso with data integration. Statistical applications in genetics and molecular biology, 10(1), 1-29.
    Frank, L. E., & Heiser, W. J. (2008). Feature selection in Feature Network Models: Finding predictive subsets of features with the Positive Lasso. British Journal of Mathematical and Statistical Psychology, 61(1), 1-27.
    Park, H., & Sakaori, F. (2013). Lag weighted lasso for time series model. Computational Statistics, 28(2), 493-504.
    Masarotto, G., & Varin, C. (2012). The ranking lasso and its application to sport tournaments. The Annals of Applied Statistics, 6(4), 1949-1970.
    Wagener, J., & Dette, H. (2013). The adaptive lasso in high-dimensional sparse heteroscedastic models. Mathematical Methods of Statistics, 22(2), 137-154.
    Hossain, S., & Ahmed, E. (2012). Shrinkage and penalty estimators of a Poisson regression model. Australian & New Zealand Journal of Statistics, 54(3), 359-373. (Cited 6 times)
    Fang, Z., & Meinshausen, N. (2012). LASSO isotone for high-dimensional additive isotonic regression. Journal of Computational and Graphical Statistics, 21(1), 72-91.
    Sampson, J. N., Chatterjee, N., Carroll, R. J., & Müller, S. (2013). Controlling the local false discovery rate in the adaptive Lasso. Biostatistics, kxt008.
    Hirose, K., & Konishi, S. (2012). Variable selection via the weighted group lasso for factor analysis models. Canadian Journal of Statistics, 40(2), 345-361.
    Lu, W., Goldberg, Y., & Fine, J. P. (2012). On the robustness of the adaptive lasso to model misspecification. Biometrika, 99(3), 717-731.
    Li, J., & Gu, M. (2012). Adaptive LASSO for general transformation models with right censored data. Computational Statistics & Data Analysis, 56(8), 2583-2597.
    Alquier, P., & Hebiri, M. (2012). Transductive versions of the LASSO and the Dantzig Selector. Journal of Statistical Planning and Inference, 142(9), 2485-2500.
    Chen, K., & Chan, K. S. (2011). Subset ARMA selection via the adaptive Lasso. Statistics and Its Interface, 4, 197-205.
    Chatterjee, A., & Lahiri, S. N. (2011). Strong consistency of Lasso estimators. Sankhya A, 73(1), 55-78.
    Massart, P., & Meynet, C. (2012, January). Some rates of convergence for the selected Lasso estimator. In Algorithmic learning theory (pp. 17-33). Springer Berlin Heidelberg.
    Gupta, S. (2012). A note on the asymptotic distribution of LASSO estimator for correlated data. Sankhya A, 74(1), 10-28.
    Ye, F., & Zhang, C. H. (2009). Rate minimaxity of the lasso and dantzig estimators. Technical report, Department of Statistics and Biostatistics, Rutgers University.
    Tran, M. N., Nott, D. J., & Leng, C. (2012). The predictive lasso. Statistics and computing, 22(5), 1069-1084.
    Sabbe, N., Thas, O., & Ottoy, J. P. (2013). EMLasso: logistic lasso with missing data. Statistics in medicine, 32(18), 3143-3157.
    Hussami, N., & Tibshirani, R. (2013). A Component Lasso. arXiv preprint arXiv:1311.4472.
    Benoit, D. F., Alhamzawi, R., & Yu, K. (2013). Bayesian lasso binary quantile regression. Computational Statistics, 28(6), 2861-2873.
    Wang, X., & Song, L. (2011). Adaptive Lasso Variable Selection for the Accelerated Failure Models. Communications in Statistics-Theory and Methods, 40(24), 4372-4386.
    Wu, T. T. (2013). Lasso penalized semiparametric regression on high-dimensional recurrent event data via coordinate descent. Journal of Statistical Computation and Simulation, 83(6), 1145-1155.
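
    (Before the year-by-year lists below, a small aside: almost every paper in this list studies some variant of the ℓ1-penalized least-squares problem min_b ||y - Xb||^2/(2n) + λ||b||_1. For readers who want to experiment while reading, here is a minimal Python sketch using scikit-learn's Lasso; the synthetic data and the penalty level alpha=0.3 are purely illustrative choices, not taken from any cited paper.)

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    n, p, s = 100, 200, 5              # n samples, p >> n predictors, s nonzero
    X = rng.standard_normal((n, p))
    beta = np.zeros(p)
    beta[:s] = 3.0                     # true sparse coefficient vector
    y = X @ beta + rng.standard_normal(n)

    # scikit-learn minimizes ||y - X b||^2 / (2n) + alpha * ||b||_1
    fit = Lasso(alpha=0.3).fit(X, y)
    print("selected predictors:", np.flatnonzero(fit.coef_))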

    2014
    Fan, J., Xue, L., & Zou, H. (2014). Strong oracle optimality of folded concave penalized estimation. Annals of statistics, 42(3), 819.
    Wang, Z., Liu, H., & Zhang, T. (2014). Optimal computational and statistical rates of convergence for sparse nonconvex learning problems. Annals of statistics, 42(6), 2164.
    Allen, G. I., Grosenick, L., & Taylor, J. (2014). A generalized least-square matrix decomposition. Journal of the American Statistical Association, 109(505), 145-159.
    Pakman, A., & Paninski, L. (2014). Exact Hamiltonian Monte Carlo for truncated multivariate Gaussians. Journal of Computational and Graphical Statistics, 23(2), 518-542.

    Viallon, V., Lambert-Lacroix, S., Hoefling, H., & Picard, F. (2014). On the robustness of the generalized fused lasso to prior specifications. Statistics and Computing, 1-17.
    Bühlmann, P., Kalisch, M., & Meier, L. (2014). High-dimensional statistics with a view toward applications in biology. Annual Review of Statistics and Its Application, 1, 255-278.
    Cuevas, A. (2014). A partial overview of the theory of statistics with functional data. Journal of Statistical Planning and Inference, 147, 1-23.
    Lv, J., & Liu, J. S. (2014). Model selection principles in misspecified models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 76(1), 141-167.
    Chi, E. C., & Lange, K. (2014). Splitting methods for convex clustering. Journal of Computational and Graphical Statistics, (just-accepted), 00-00.
    Ročková, V., & George, E. I. (2014). EMVS: The EM approach to Bayesian variable selection. Journal of the American Statistical Association, 109(506), 828-846.
    Schelldorfer, J., Meier, L., & Bühlmann, P. (2014). Glmmlasso: an algorithm for high-dimensional generalized linear mixed models using ℓ1-penalization. Journal of Computational and Graphical Statistics, 23(2), 460-477.
    Groll, A., & Tutz, G. (2014). Variable selection for generalized linear mixed models by ℓ1-penalized estimation. Statistics and Computing, 24(2), 137-154.
    Yao, Y., & Lee, Y. (2014). Another look at linear programming for feature selection via methods of regularization. Statistics and Computing, 24(5), 885-905.
    Kim, H. H., & Swanson, N. R. (2014). Forecasting financial and macroeconomic variables using data reduction methods: New empirical evidence. Journal of Econometrics, 178, 352-367.
    Ciuperca, G. (2014). Model selection by LASSO methods in a change-point model. Statistical Papers, 55(2), 349-374.
    Vincent, M., & Hansen, N. R. (2014). Sparse group lasso and high dimensional multinomial classification. Computational Statistics & Data Analysis, 71, 771-786.
    Yang, Y., & Zou, H. (2014). A fast unified algorithm for solving group-lasso penalized learning problems. Statistics and Computing, 1-13.
    Xu, H. K. (2014). Properties and iterative methods for the Lasso and its variants. Chinese Annals of Mathematics, Series B, 35(3), 501-518.
    Arribas-Gil, A., Bertin, K., Meza, C., & Rivoirard, V. (2014). LASSO-type estimators for semiparametric nonlinear mixed-effects models estimation. Statistics and Computing, 24(3), 443-460.
    Chretien, S., & Darses, S. (2014). Sparse recovery with unknown variance: a LASSO-type approach. Information Theory, IEEE Transactions on, 60(7), 3970-3988.
    Bühlmann, P., & Mandozzi, J. (2014). High-dimensional variable screening and bias in subsequent inference, with an empirical comparison. Computational Statistics, 29(3-4), 407-430.
    Efron, B. (2014). Estimation and accuracy after model selection. Journal of the American Statistical Association, 109(507), 991-1007.
    Caner, M., & Zhang, H. H. (2014). Adaptive elastic net for generalized methods of moments. Journal of Business & Economic Statistics, 32(1), 30-47.
    Ke, T., Jin, J., & Fan, J. (2014). Covariance assisted screening and estimation. Annals of statistics, 42(6), 2202.
    Wainwright, M. J. (2014). Structured regularizers for high-dimensional problems: Statistical and computational issues. Annual Review of Statistics and Its Application, 1, 233-253.
    Covas, F. B., Rump, B., & Zakrajšek, E. (2014). Stress-testing US bank holding companies: A dynamic panel quantile regression approach. International Journal of Forecasting, 30(3), 691-713.
    Baraud, Y., Giraud, C., & Huet, S. (2014). Estimator selection in the Gaussian setting. In Annales de l'Institut Henri Poincaré, Probabilités et Statistiques (Vol. 50, No. 3, pp. 1092-1119). Institut Henri Poincaré.
    Kong, S., & Nan, B. (2014). Non-Asymptotic Oracle Inequalities for the High-Dimensional Cox Regression via Lasso. Statistica Sinica, 24(1), 25-42.
    Fan, J., Fan, Y., & Barut, E. (2014). Adaptive robust variable selection. Annals of statistics, 42(1), 324.
    Mohan, K., London, P., Fazel, M., Witten, D., & Lee, S. I. (2014). Node-based learning of multiple gaussian graphical models. The Journal of Machine Learning Research, 15(1), 445-488.
    Liu, J., Li, R., & Wu, R. (2014). Feature selection for varying coefficient models with ultrahigh-dimensional covariates. Journal of the American Statistical Association, 109(505), 266-274.
    Bhattacharya, A., Pati, D., Pillai, N. S., & Dunson, D. B. (2014). Dirichlet-Laplace priors for optimal shrinkage. Journal of the American Statistical Association, (just-accepted), 00-00.
    van der Pas, S. L., Kleijn, B. J. K., & van der Vaart, A. W. (2014). The horseshoe estimator: Posterior concentration around nearly black vectors. Electronic Journal of Statistics, 8(2), 2585-2618.
    Fan, J., & Liao, Y. (2014). Endogeneity in high dimensions. Annals of statistics, 42(3), 872.
    Fan, J., Ma, Y., & Dai, W. (2014). Nonparametric independence screening in sparse ultra-high-dimensional varying coefficient models. Journal of the American Statistical Association, 109(507), 1270-1284.
    Bühlmann, P., Peters, J., & Ernest, J. (2014). CAM: Causal additive models, high-dimensional order search and penalized regression. The Annals of Statistics, 42(6), 2526-2556.
    Belloni, A., Chernozhukov, V., & Hansen, C. (2014). High-dimensional methods and inference on structural and treatment effects. The Journal of Economic Perspectives, 29-50.
    Lange, K., Papp, J. C., Sinsheimer, J. S., & Sobel, E. M. (2014). Next generation statistical genetics: Modeling, penalization, and optimization in high-dimensional data. Annual review of statistics and its application, 1(1), 279.
    Qian, J., & Su, L. (2014). Shrinkage estimation of common breaks in panel data models via adaptive group fused Lasso. Available at SSRN 2417560.
    Homrighausen, D., & McDonald, D. J. (2014). Leave-one-out cross-validation is risk consistent for lasso. Machine Learning, 97(1-2), 65-78.
    Hong, Z., & Lian, H. (2011). Inference of genetic networks from time course expression data using functional regression with lasso penalty. Communications in Statistics-Theory and Methods, 40(10), 1768-1779.
    Lin, J., & Li, S. (2014). Sparse recovery with coherent tight frames via analysis Dantzig selector and analysis LASSO. Applied and Computational Harmonic Analysis, 37(1), 126-139.
    Zhang, T., & Zou, H. (2014). Sparse precision matrix estimation via lasso penalized D-trace loss. Biometrika, 101(1), 103-120.
    Curtis, S. M., Banerjee, S., & Ghosal, S. (2014). Fast Bayesian model assessment for nonparametric additive regression. Computational Statistics & Data Analysis, 71, 347-358.
    Narisetty, N. N., & He, X. (2014). Bayesian variable selection with shrinking and diffusing priors. The Annals of Statistics, 42(2), 789-817.
    Luo, S., & Chen, Z. (2014). Sequential Lasso cum EBIC for feature selection with ultra-high dimensional feature space. Journal of the American Statistical Association, 109(507), 1229-1240.
    Picchini, U. (2014). Inference for SDE models via approximate Bayesian computation. Journal of Computational and Graphical Statistics, 23(4), 1080-1100.
    van de Geer, S. (2014). Weakly decomposable regularization penalties and structured sparsity. Scandinavian Journal of Statistics, 41(1), 72-86.
    Chavez-Demoulin, V., Embrechts, P., & Sardy, S. (2014). Extreme-quantile tracking for financial time series. Journal of Econometrics, 181(1), 44-52.
    Li, Y., Dicker, L., & Zhao, S. D. (2014). The Dantzig selector for censored linear regression models. Statistica Sinica, 24(1), 251.
    Yen, Y. M., & Yen, T. J. (2014). Solving norm constrained portfolio optimization via coordinate-wise descent algorithms. Computational Statistics & Data Analysis, 76, 737-759.
    Alquier, P., Friel, N., Everitt, R., & Boland, A. (2014). Noisy Monte Carlo: Convergence of Markov chains with approximate transition kernels. Statistics and Computing, 1-19.
    Fastrich, B., Paterlini, S., & Winker, P. (2014). Cardinality versus q-norm constraints for index tracking. Quantitative Finance, 14(11), 2019-2032.
    Zeng, P., Wei, Y., Zhao, Y., Liu, J., Liu, L., Zhang, R., ... & Chen, F. (2014). Variable selection approach for zero-inflated count data via adaptive lasso. Journal of Applied Statistics, 41(4), 879-894.
    Zhou, H., & Wu, Y. (2014). A generic path algorithm for regularized statistical estimation. Journal of the American Statistical Association, 109(506), 686-699.
    Zhao, Y., Chen, H., & Ogden, R. T. (2014). Wavelet-based weighted LASSO and screening approaches in functional linear regression. Journal of Computational and Graphical Statistics, (just-accepted), 00-00.
    Kundu, S., & Dunson, D. B. (2014). Bayes variable selection in semiparametric linear models. Journal of the American Statistical Association, 109(505), 437-447.
    Zhou, J., Bhattacharya, A., Herring, A. H., & Dunson, D. B. (2014). Bayesian factorizations of big sparse tensors. Journal of the American Statistical Association, (just-accepted), 00-00.
    Zhao, W., Zhang, R., Liu, J., & Lv, Y. (2014). Robust and efficient variable selection for semiparametric partially linear varying coefficient model based on modal regression. Annals of the Institute of Statistical Mathematics, 66(1), 165-191.
    Meinshausen, N. (2014). Group bound: confidence intervals for groups of variables in sparse high dimensional regression without assumptions on the design. Journal of the Royal Statistical Society: Series B (Statistical Methodology).
    Zhang, J., Wang, X., Yu, Y., & Gai, Y. (2014). Estimation and variable selection in partial linear single index models with error-prone linear covariates. Statistics, 48(5), 1048-1070.
    Oelker, M. R., Gertheiss, J., & Tutz, G. (2014). Regularization and model selection with categorical predictors and effect modifiers in generalized linear models. Statistical Modelling, 14(2), 157-177.
    Chatterjee, S. (2014). A new perspective on least squares under convex constraint. The Annals of Statistics, 42(6), 2340-2381.
    Hao, N., & Zhang, H. H. (2014). Interaction Screening for Ultrahigh-Dimensional Data. Journal of the American Statistical Association, 109(507), 1285-1301.
    Wen, X. (2014). Bayesian model selection in complex linear systems, as illustrated in genetic association studies. Biometrics, 70(1), 73-83.
    Lin, W., Shi, P., Feng, R., & Li, H. (2014). Variable selection in regression with compositional covariates. Biometrika, asu031.
    De Bin, R., Sauerbrei, W., & Boulesteix, A. L. (2014). Investigating the prediction ability of survival models based on both clinical and omics data: two case studies. Statistics in medicine, 33(30), 5310-5329.
    Hall, P., & Xue, J. H. (2014). On selecting interacting features from high-dimensional data. Computational Statistics & Data Analysis, 71, 694-708.
    Chi, E. C., & Scott, D. W. (2014). Robust parametric classification and variable selection by a minimum distance criterion. Journal of Computational and Graphical Statistics, 23(1), 111-128.
    Liu, F., Chakraborty, S., Li, F., Liu, Y., & Lozano, A. C. (2014). Bayesian regularization via graph Laplacian. Bayesian Analysis, 9(2), 449-474.
    McKeague, I. W., & Qian, M. (2014). Estimation of treatment policies based on functional predictors. Statistica Sinica, 24(3), 1461.
    Hirose, K., & Yamamoto, M. (2014). Sparse estimation via nonconcave penalized likelihood in factor analysis model. Statistics and Computing, 1-13.
    Zheng, Z., Fan, Y., & Lv, J. (2014). High dimensional thresholded regression and shrinkage effect. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 76(3), 627-649.
    Cheng, M. Y., Honda, T., Li, J., & Peng, H. (2014). Nonparametric independence screening and structure identification for ultra-high dimensional longitudinal data. The Annals of Statistics, 42(5), 1819-1849.
    Lan, W., Wang, H., & Tsai, C. L. (2014). Testing covariates in high-dimensional regression. Annals of the Institute of Statistical Mathematics, 66(2), 279-301.
    Belloni, A., Chernozhukov, V., & Kato, K. (2014). Uniform post-selection inference for least absolute deviation regression and other Z-estimation problems. Biometrika, asu056.
    Bhattacharya, S., & McNicholas, P. D. (2014). A LASSO-penalized BIC for mixture model selection. Advances in Data Analysis and Classification, 8(1), 45-61.
    Meinshausen, N., & Bühlmann, P. (2014). Maximin effects in inhomogeneous large-scale data. arXiv preprint arXiv:1406.0596.
    Wang, J. C., & Hastie, T. (2014). Boosted varying-coefficient regression models for product demand prediction. Journal of Computational and Graphical Statistics, 23(2), 361-382.
    Milanzi, E., Alonso, A., Buyck, C., Molenberghs, G., & Bijnens, L. (2014). A permutational-splitting sample procedure to quantify expert opinion on clusters of chemical compounds using high-dimensional data. The Annals of Applied Statistics, 8(4), 2319-2335.
    Yu, Y., & Feng, Y. (2014). Modified cross-validation for penalized high-dimensional linear regression models. Journal of Computational and Graphical Statistics, 23(4), 1009-1027.
    Rashid, N., Sun, W., & Ibrahim, J. G. (2014). Some Statistical Strategies for DAE-seq Data Analysis: Variable Selection and Modeling Dependencies Among Observations. Journal of the American Statistical Association, 109(505), 78-94.
    Kuk, A. Y., Li, J., & John Rush, A. (2014). Variable and threshold selection to control predictive accuracy in logistic regression. Journal of the Royal Statistical Society: Series C (Applied Statistics), 63(4), 657-672.
    Fan, Y., Foutz, N., James, G. M., & Jank, W. (2014). Functional response additive model estimation with online virtual stock markets. The Annals of Applied Statistics, 8(4), 2435-2460.
    Hansen, C., & Kozbur, D. (2014). Instrumental variables estimation with many weak instruments using regularized JIVE. Journal of Econometrics, 182(2), 290-308.
    Hirose, K., & Yamamoto, M. (2014). Estimation of an oblique structure via penalized likelihood factor analysis. Computational Statistics & Data Analysis, 79, 120-132.
    Ando, T., & Bai, J. (2014). Asset pricing with a general multifactor structure. Journal of Financial Econometrics, nbu026.
    Song, R., Yi, F., & Zou, H. (2014). On varying-coefficient independence screening for high-dimensional varying-coefficient models. Statistica Sinica, 24(4), 1735.
    Jiang, D., & Huang, J. (2014). Majorization minimization by coordinate descent for concave penalized generalized linear models. Statistics and computing, 24(5), 871-883.
    Yu, Y., & Feng, Y. (2014). APPLE: approximate path for penalized likelihood estimators. Statistics and Computing, 24(5), 803-819.
    Thurman, A. L., & Zhu, J. (2014). Variable selection for spatial Poisson point processes via a regularization method. Statistical Methodology, 17, 113-125.
    Wu, H., Lu, T., Xue, H., & Liang, H. (2014). Sparse Additive Ordinary Differential Equations for Dynamic Gene Regulatory Network Modeling. Journal of the American Statistical Association, 109(506), 700-716.
    Park, H., Sakaori, F., & Konishi, S. (2014). Robust sparse regression and tuning parameter selection via the efficient bootstrap information criteria. Journal of Statistical Computation and Simulation, 84(7), 1596-1607.
    Tan, K. M., & Witten, D. M. (2014). Sparse biclustering of transposable data. Journal of Computational and Graphical Statistics, 23(4), 985-1008.
    Tian, R., & Xue, L. (2014). Variable selection for semiparametric errors-in-variables regression model with longitudinal data. Journal of Statistical Computation and Simulation, 84(8), 1654-1669.
    Zou, C., Yin, G., Feng, L., & Wang, Z. (2014). Nonparametric maximum likelihood approach to multiple change-point problems. The Annals of Statistics, 42(3), 970-1002.
    Wang, X., Nan, B., Zhu, J., & Koeppe, R. (2014). Regularized 3D functional regression for brain image data via Haar wavelets. The Annals of Applied Statistics, 8(2), 1045-1064.
    Lai, R. C., Hannig, J., & Lee, T. C. (2014). Generalized fiducial inference for ultrahigh dimensional regression. Journal of the American Statistical Association, (just-accepted), 00-00.
    Kaufman, S., & Rosset, S. (2014). When does more regularization imply fewer degrees of freedom? Sufficient conditions and counterexamples. Biometrika, 101(4), 771-784.
    Chang, C. J., & Joseph, V. R. (2014). Model Calibration Through Minimal Adjustments. Technometrics, 56(4), 474-482.
    Müller, P., & van de Geer, S. (2014). Censored linear model in high dimensions. TEST, 1-18.
    Fan, Y., & Lv, J. (2014). Asymptotic properties for combined L1 and concave regularization. Biometrika, 101(1), 57-70.
    Wilson, A., & Reich, B. J. (2014). Confounder selection via penalized credible regions. Biometrics, 70(4), 852-861.
    Pan, J., & Huang, C. (2014). Random effects selection in generalized linear mixed models via shrinkage penalty function. Statistics and Computing, 24(5), 725-738.
    Li, J., Zhong, W., Li, R., & Wu, R. (2014). A fast algorithm for detecting gene–gene interactions in genome-wide association studies. The Annals of Applied Statistics, 8(4), 2292-2318.
    Roberts, S., & Nowak, G. (2014). Stabilizing the lasso against cross-validation variability. Computational Statistics & Data Analysis, 70, 198-211.
    Fu, G. H., Zhang, W. M., Dai, L., & Fu, Y. Z. (2014). Group variable selection with oracle property by weight-fused adaptive elastic net model for strongly correlated data. Communications in Statistics-Simulation and Computation, 43(10), 2468-2481.
    Chen, J., & Ye, J. (2014). Sparse trace norm regularization. Computational Statistics, 29(3-4), 623-639.
    Steyerberg, E. W., Ploeg, T., & Calster, B. (2014). Risk prediction with machine learning and regression methods. Biometrical Journal, 56(4), 601-606.
    Freytag, S., & Bickeböller, H. (2014). Comparison of three summary statistics for ranking genes in genome‐wide association studies. Statistics in medicine, 33(11), 1828-1841.
    Marchetti, Y., & Zhou, Q. (2014). Solution path clustering with adaptive concave penalty. Electronic Journal of Statistics, 8(1), 1569-1603.
    Wu, L., Yang, Y., & Liu, H. (2014). Nonnegative-lasso and application in index tracking. Computational Statistics & Data Analysis, 70, 116-126.
    Lian, H. (2014). Semiparametric Bayesian information criterion for model selection in ultra-high dimensional additive models. Journal of Multivariate Analysis, 123, 304-310.
    Wu, L. C. (2014). Variable Selection in Joint Location and Scale Models of the Skew-t-Normal Distribution. Communications in Statistics-Simulation and Computation, 43(3), 615-630.
    Stefanski, L. A., Wu, Y., & White, K. (2014). Variable selection in nonparametric classification via measurement error model selection likelihoods. Journal of the American Statistical Association, 109(506), 574-589.
    Jiang, B., & Liu, J. S. (2014). Variable selection for general index models via sliced inverse regression. The Annals of Statistics, 42(5), 1751-1786.
    Sun, Z., Su, Z., & Ma, J. (2014). Focused vector information criterion model selection and model averaging regression with missing response. Metrika, 77(3), 415-432.
    Wang, K., & Lin, L. (2014). Variable selection in robust semiparametric modeling for longitudinal data. Journal of the Korean Statistical Society, 43(2), 303-314.
    Jansen, M. (2014). Information criteria for variable selection under sparsity. Biometrika, 101(1), 37-55.
    Yanagimoto, T., & Ohnishi, T. (2014). Permissible boundary prior function as a virtually proper prior density. Annals of the Institute of Statistical Mathematics, 66(4), 789-809.
    Hansen, B. E. (2014). Shrinkage efficiency bounds. Econometric Theory, forthcoming.
    Bleich, J., Kapelner, A., George, E. I., & Jensen, S. T. (2014). Variable selection for BART: An application to gene regulation. The Annals of Applied Statistics, 8(3), 1750-1781.
    Storlie, C., Anderson, B., Vander Wiel, S., Quist, D., Hash, C., & Brown, N. (2014). Stochastic identification of malware with dynamic traces. The Annals of Applied Statistics, 8(1), 1-18.
    Bertsimas, D., & Mazumder, R. (2014). Least quantile regression via modern optimization. The Annals of Statistics, 42(6), 2494-2525.
    Aue, A., Cheung, R. C., Lee, T. C., & Zhong, M. (2014). Segmented model selection in quantile regression using the minimum description length principle. Journal of the American Statistical Association, 109(507), 1241-1256.
    Chen, Z., Tang, M. L., Gao, W., & Shi, N. Z. (2014). New robust variable selection methods for linear regression models. Scandinavian Journal of Statistics, 41(3), 725-741.

    Matsui, H. (2014). Variable and boundary selection for functional data via multiclass logistic regression modeling. Computational Statistics & Data Analysis, 78, 176-185.
    Zeng, L., & Xie, J. (2014). Group variable selection via SCAD-L2. Statistics, 48(1), 49-66.
    Jiang, L., Bondell, H. D., & Wang, H. J. (2014). Interquantile shrinkage and variable selection in quantile regression. Computational statistics & data analysis, 69, 208-219.
    Kalli, M., & Griffin, J. E. (2014). Time-varying sparsity in dynamic regression models. Journal of Econometrics, 178(2), 779-793.
    Xu, D., Zhang, Z., & Wu, L. (2014). Variable selection in high-dimensional double generalized linear models. Statistical Papers, 55(2), 327-347.
    Brouste, A., Fukasawa, M., Hino, H., Iacus, S. M., Kamatani, K., Koike, Y., ... & Yoshida, N. (2014). The yuima project: A computational framework for simulation and inference of stochastic differential equations. Journal of Statistical Software, 57(4), 1-51.
    Champion, M., Cierco-Ayrolles, C., Gadat, S., & Vignes, M. (2014). Sparse regression and support recovery with L2-boosting algorithms. Journal of Statistical Planning and Inference, 155, 19-41.
    Hall, P., Jin, J., & Miller, H. (2014). Feature selection when there are many influential features. Bernoulli, 20(3), 1647-1671.
    Trendafilov, N. T. (2014). From simple structure to sparse components: a review. Computational Statistics, 29(3-4), 431-454.
    Zhu, H., Yao, F., & Zhang, H. H. (2014). Structured functional additive regression in reproducing kernel Hilbert spaces. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 76(3), 581-603.
    Zheng, S. (2008). Selection of components and degrees of smoothing via lasso in high dimensional nonparametric additive models. Computational Statistics & Data Analysis, 53(1), 164-175.
    Tang, Y., Xiang, L., & Zhu, Z. (2014). Risk Factor Selection in Rate Making: EM Adaptive LASSO for Zero‐Inflated Poisson Regression Models. Risk Analysis, 34(6), 1112-1127.
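
    (Many of the 2014 entries above, e.g. Wagener & Dette and Leng, Tran & Nott, study the adaptive lasso, which replaces the uniform ℓ1 penalty with data-driven weights w_j = 1/|b̃_j|^γ built from a pilot estimate b̃. Below is a rough sketch of the standard column-rescaling implementation; the ridge pilot estimator, γ = 1, and alpha=0.1 are illustrative assumptions, not choices from any cited paper.)

    import numpy as np
    from sklearn.linear_model import Lasso, Ridge

    rng = np.random.default_rng(1)
    n, p = 100, 20
    X = rng.standard_normal((n, p))
    beta = np.r_[[2.0, -1.5, 1.0], np.zeros(p - 3)]
    y = X @ beta + rng.standard_normal(n)

    gamma = 1.0
    beta_init = Ridge(alpha=1.0).fit(X, y).coef_    # pilot (initial) estimator
    w = 1.0 / (np.abs(beta_init) ** gamma + 1e-8)   # adaptive penalty weights
    # Column rescaling: substituting b_j = c_j / w_j turns the weighted
    # l1 penalty into a plain one, so we fit a standard lasso on X / w
    # and then map the solution back.
    fit = Lasso(alpha=0.1).fit(X / w, y)
    beta_adapt = fit.coef_ / w
    print("selected predictors:", np.flatnonzero(beta_adapt))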

    2015
    Lee, S., Seo, M. H., & Shin, Y. (2015). The lasso for high dimensional regression with a possible change point. Journal of the Royal Statistical Society: Series B (Statistical Methodology).
    Breheny, P. (2015). The group exponential lasso for bi‐level variable selection. Biometrics.
    Arnold, T. B., & Tibshirani, R. J. (2015). Efficient implementations of the generalized lasso dual path algorithm. Journal of Computational and Graphical Statistics, (just-accepted).
    Giordani, P. (2015). Lasso-constrained regression analysis for interval-valued data. Advances in Data Analysis and Classification, 9(1), 5-19.
    Chan, N. H., Yau, C. Y., & Zhang, R. M. (2015). LASSO estimation of threshold autoregressive models. Journal of Econometrics.
    Buja, A., & Brown, L. (2014). Discussion: A significance test for the lasso. The Annals of Statistics, 42(2), 509-517.
    Yu, D., Won, J. H., Lee, T., Lim, J., & Yoon, S. (2015). High-Dimensional Fused Lasso Regression Using Majorization–Minimization and Parallel Processing. Journal of Computational and Graphical Statistics, 24(1), 121-153.
    Hossain, S., Ahmed, S. E., & Doksum, K. A. (2015). Shrinkage, pretest, and penalty estimators in generalized linear models. Statistical Methodology, 24, 52-68.
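
    (A recurring practical question in this list, see e.g. Roberts & Nowak (2014) and Homrighausen & McDonald (2014) above, is how to choose the lasso penalty level by cross-validation. A minimal sketch of that workflow with scikit-learn's LassoCV follows; the synthetic data and the fold count are illustrative assumptions. The papers above study how consistent and how variable this CV choice can be.)

    import numpy as np
    from sklearn.linear_model import LassoCV

    rng = np.random.default_rng(2)
    n, p = 120, 50
    X = rng.standard_normal((n, p))
    y = X[:, 0] - 2.0 * X[:, 1] + rng.standard_normal(n)

    # 10-fold CV over an automatically generated grid of penalty levels
    fit = LassoCV(cv=10, n_alphas=100).fit(X, y)
    print("CV-chosen alpha:", fit.alpha_)
    print("number of nonzero coefficients:", np.flatnonzero(fit.coef_).size)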