Robust Fixed-b Inference in the Presence of Time-Varying Volatility
Matei Demetrescu, Christoph Hanck, Robinson Kruse-Becher
Pub Date: 2026-01-01 | Epub Date: 2023-06-01 | DOI: 10.1016/j.ecosta.2023.05.003
Time-varying volatility arises in many macroeconomic and financial applications. While “fixed-b” arguments provide refinements in the use of estimators for the asymptotic variance of GMM estimators, the resulting fixed-b distributions of test statistics are not pivotal under time-varying volatility. Three approaches to robustify inference are investigated: (i) wild bootstrapping, (ii) time transformations and (iii) selection of test statistics and critical values according to the outcome of a pretest for heteroskedasticity. Simulations quantify the distortions from using the original fixed-b approach and compare the effectiveness of the proposed corrections. Overall, the wild bootstrap is to be recommended. An empirical application to the Fama and French five-factor model illustrates the relevance of the procedures.
{"title":"Robust Fixed-b Inference in the Presence of Time-Varying Volatility","authors":"Matei Demetrescu , Christoph Hanck , Robinson Kruse-Becher","doi":"10.1016/j.ecosta.2023.05.003","DOIUrl":"10.1016/j.ecosta.2023.05.003","url":null,"abstract":"<div><div><span>Time-varying volatility arises in many macroeconomic and financial applications. While “fixed-</span><span><math><mi>b</mi></math></span>” arguments provide refinements in the use of estimators for the asymptotic variance of GMM estimators, the resulting fixed-<span><math><mi>b</mi></math></span> distributions of test statistics are not pivotal under time-varying volatility. Three approaches to robustify inference are investigated: (i) wild bootstrapping, (ii) time transformations and (iii) selection of test statistics and critical values according to the outcome of a pretest for heteroskedasticity. Simulations quantify the distortions from using the original fixed-<span><math><mi>b</mi></math></span> approach and compare the effectiveness of the proposed corrections. Overall, the wild bootstrap is to be recommended. An empirical application to the Fama & French five factor model illustrates the relevance of the procedures.</div></div>","PeriodicalId":54125,"journal":{"name":"Econometrics and Statistics","volume":"37 ","pages":"Pages 154-173"},"PeriodicalIF":2.5,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88515536","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Estimating a discrete distribution subject to random left-truncation with an application to structured finance
Jackson P. Lautier, Vladimir Pozdnyakov, Jun Yan
Pub Date: 2026-01-01 | Epub Date: 2023-06-02 | DOI: 10.1016/j.ecosta.2023.05.005
Proper econometric analysis should be informed by data structure. Many forms of financial data are recorded in discrete time and relate to products of a finite term. If the data are sampled from a financial trust, they will often be further subject to random left-truncation. The estimation of a distribution function from left-truncated data has been extensively addressed, but the case of discrete data over a known, finite number of possible values has not yet been thoroughly investigated. A precise discrete framework and a suitable sampling procedure for the Woodroofe-type estimator in this setting are therefore established. Subsequently, the resulting vector of hazard rate estimators is proved to be asymptotically normal with independent components. Asymptotic normality of the survival function estimator is then established. Sister results for the left-truncating random variable are also proved. Taken together, the resulting joint vector of hazard rate estimates for the lifetime and left-truncation random variables is proved to be the maximum likelihood estimate of the parameters of the conditional joint lifetime and left-truncation distribution, given that the lifetime has not been left-truncated. A hypothesis test for the shape of the distribution function based on these asymptotic results is derived. Such a test is useful to formally assess the plausibility of the stationarity assumption in length-biased sampling. The finite-sample performance of the estimators is investigated in a simulation study. Applicability of the theoretical results in an econometric setting is demonstrated with a subset of data from the Mercedes-Benz 2017-A securitized bond.
{"title":"Estimating a discrete distribution subject to random left-truncation with an application to structured finance","authors":"Jackson P. Lautier , Vladimir Pozdnyakov , Jun Yan","doi":"10.1016/j.ecosta.2023.05.005","DOIUrl":"10.1016/j.ecosta.2023.05.005","url":null,"abstract":"<div><div>Proper econometric analysis should be informed by data structure. Many forms of financial data are recorded in discrete-time and relate to products of a finite term. If the data is sampled from a financial trust, it will often be further subject to random left-truncation. The estimation of a distribution function from left-truncated data has been extensively addressed, but the case of discrete data over a known, finite number of possible values has not yet been thoroughly investigated. A precise discrete framework and suitable sampling procedure for the Woodroofe-type estimator for discrete data over a known, finite number of possible values is therefore established. Subsequently, the resulting vector of hazard rate estimators is proved to be asymptotically normal with independent components. Asymptotic normality of the survival function estimator is then established. Sister results for the left-truncating random variable are also proved. Taken together, the resulting joint vector of hazard rate estimates for the lifetime and left-truncation random variables is proved to be the maximum likelihood estimate of the parameters of the conditional joint lifetime and left-truncation distribution given the lifetime has not been left-truncated. A hypothesis test for the shape of the distribution function based on our asymptotic results is derived. Such a test is useful to formally assess the plausibility of the stationarity assumption in length-biased sampling. The finite sample performance of the estimators is investigated in a simulation study. Applicability of the theoretical results in an econometric setting is demonstrated with a subset of data from the Mercedes-Benz 2017-A securitized bond.</div></div>","PeriodicalId":54125,"journal":{"name":"Econometrics and Statistics","volume":"37 ","pages":"Pages 174-198"},"PeriodicalIF":2.5,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84208774","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A new test for common breaks in heterogeneous panel data models
Peiyun Jiang, Eiji Kurozumi
Pub Date: 2026-01-01 | Epub Date: 2023-02-01 | DOI: 10.1016/j.ecosta.2023.01.005
A new test is proposed to detect whether break points are common in heterogeneous panel data models in which the time series dimension T can be large relative to the cross-section dimension N. The error process is assumed to be cross-sectionally independent. The test is based on the cumulative sum (CUSUM) of ordinary least squares (OLS) residuals. The asymptotic distribution of the test statistic is derived under the null hypothesis, and the test is shown to be consistent under the alternative. Monte Carlo simulations and an empirical example show the good performance of the test.
{"title":"A new test for common breaks in heterogeneous panel data models","authors":"Peiyun Jiang , Eiji Kurozumi","doi":"10.1016/j.ecosta.2023.01.005","DOIUrl":"10.1016/j.ecosta.2023.01.005","url":null,"abstract":"<div><div>A new test is proposed to detect whether break points are common in heterogeneous panel data models where the time series dimension T could be large relative to cross-section dimension N. The error process is assumed to be cross-sectionally independent. The test is based on the cumulative sum (CUSUM) of ordinary least squares (OLS) residuals. The asymptotic distribution<span> of the detecting statistic is derived under the null hypothesis, while the test is shown to be consistent under the alternative. Monte Carlo simulations and an empirical example show good performance of the test.</span></div></div>","PeriodicalId":54125,"journal":{"name":"Econometrics and Statistics","volume":"37 ","pages":"Pages 87-125"},"PeriodicalIF":2.5,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76090848","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Calibrating with a smile: A Mellin transform approach to volatility surface calibration
M. Rodrigo, A. Lo
Pub Date: 2025-10-01 | Epub Date: 2022-05-17 | DOI: 10.1016/j.ecosta.2022.05.004
The implied volatility in the Black-Scholes framework is not a constant but a function of both the strike price (“smile/skew”) and the time to expiry. A popular approach to recovering the volatility surface is through the use of deterministic volatility function models via Dupire’s equation. A new method for volatility surface calibration based on the Mellin transform is proposed. An explicit formula for the volatility surface is obtained in terms of the Mellin transform of the call option price with respect to the strike price, and a numerical algorithm is provided. Results of numerical simulations are presented and the stability of the method is numerically verified. The proposed Mellin transform approach provides a simpler and more direct fitting of generalised forms of the volatility surface given previously in the literature.
{"title":"Calibrating with a smile: A Mellin transform approach to volatility surface calibration","authors":"M. Rodrigo , A. Lo","doi":"10.1016/j.ecosta.2022.05.004","DOIUrl":"10.1016/j.ecosta.2022.05.004","url":null,"abstract":"<div><div>The implied volatility<span><span> in the Black-Scholes framework is not a constant but a function of both the strike price (“smile/skew”) and the time to expiry. A popular approach to recovering the volatility surface is through the use of deterministic volatility function models via Dupire’s equation. A new method for volatility surface calibration based on the Mellin transform is proposed. An explicit formula for the volatility surface is obtained in terms of the Mellin transform of the </span>call option price with respect to the strike price, and a numerical algorithm is provided. Results of numerical simulations are presented and the stability of the method is numerically verified. The proposed Mellin transform approach provides a simpler and more direct fitting of generalised forms of the volatility surface given previously in the literature.</span></div></div>","PeriodicalId":54125,"journal":{"name":"Econometrics and Statistics","volume":"36 ","pages":"Pages 73-80"},"PeriodicalIF":2.5,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90997204","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
GMM Model Averaging Using Higher Order Approximations
Luis F. Martins, Vasco J. Gabriel
Pub Date: 2025-10-01 | Epub Date: 2022-09-22 | DOI: 10.1016/j.ecosta.2022.09.004
Moment conditions model averaging (MA) estimators in the GMM framework are considered. Under finite-sample considerations, MA estimators with optimal weights are proposed, in the sense that the weights minimize the corresponding higher-order asymptotic mean squared error (AMSE). It is shown that the higher-order AMSE objective function has a closed-form expression, which makes this procedure applicable in practice. In addition, and as an alternative, different averaging schemes based on moment selection criteria are considered, in which weights for averaging across GMM estimates can be obtained by direct smoothing or by numerical minimization of a specific criterion. Asymptotic properties assuming correctly specified models are derived, and the performance of the proposed averaging approaches is contrasted with existing model selection alternatives (i) analytically, for a simple IV example, and (ii) by means of Monte Carlo experiments in a nonlinear setting, showing that MA compares favourably in many relevant setups. The usefulness of MA methods is illustrated by studying the effect of institutions on economic performance.
{"title":"GMM Model Averaging Using Higher Order Approximations","authors":"Luis F. Martins , Vasco J. Gabriel","doi":"10.1016/j.ecosta.2022.09.004","DOIUrl":"10.1016/j.ecosta.2022.09.004","url":null,"abstract":"<div><div><span>Moment conditions model averaging (MA) estimators in the GMM framework are considered. Under finite sample considerations, MA estimators with optimal weights are proposed, in the sense that weights minimize the corresponding higher-order asymptotic mean squared error (AMSE). It is shown that the higher-order AMSE objective function has a closed-form expression, which makes this procedure applicable in practice. In addition, and as an alternative, different averaging schemes based on moment selection criteria are considered, in which weights for averaging across GMM estimates can be obtained by direct smoothing or by numerical minimization of a specific criterion. Asymptotic properties assuming correctly specified models are derived and the performance of the proposed averaging approaches is contrasted with existing model selection alternatives </span><span><math><mrow><mi>i</mi><mo>)</mo></mrow></math></span> analytically, for a simple IV example, and <span><math><mrow><mi>i</mi><mi>i</mi><mo>)</mo></mrow></math></span> by means of Monte Carlo experiments in a nonlinear setting, showing that MA compares favourably in many relevant setups. The usefulness of MA methods is illustrated by studying the effect of institutions on economic performance.</div></div>","PeriodicalId":54125,"journal":{"name":"Econometrics and Statistics","volume":"36 ","pages":"Pages 37-54"},"PeriodicalIF":2.5,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77529352","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Nonparametric estimation of copulas and copula densities by orthogonal projections
Yves I. Ngounou Bakam, Denys Pommeret
Pub Date: 2025-10-01 | Epub Date: 2023-04-29 | DOI: 10.1016/j.ecosta.2023.04.002
A nonparametric copula density estimator based on Legendre orthogonal polynomials is proposed. A nonparametric copula estimator is then deduced by integration. Their asymptotic properties are reviewed. Both estimators are based on a sequence of moments that characterize the copulas and that we shall call the copula coefficients. A data-driven method is proposed to select the number of copula coefficients to use. An intensive simulation study shows the good performance of both the copula and copula density estimators compared to a large panel of competitors. Two real datasets illustrate the approach.
{"title":"Nonparametric estimation of copulas and copula densities by orthogonal projections","authors":"Yves I. Ngounou Bakam , Denys Pommeret","doi":"10.1016/j.ecosta.2023.04.002","DOIUrl":"10.1016/j.ecosta.2023.04.002","url":null,"abstract":"<div><div><span>A nonparametric copula<span> density estimator based on Legendre orthogonal polynomials is proposed. A nonparametric copula estimator is then deduced by integration. Their asymptotic properties are reviewed. Both estimators are based on a sequence of moments that characterize the copulas and that we shall call the </span></span><em>copula coefficients</em>. A data-driven method is proposed to select the number of copula coefficients to use. An intensive simulation study shows the good performance of both copulas and copula densities estimators compared to a large panel of competitors. Two real datasets illustrate this approach.</div></div>","PeriodicalId":54125,"journal":{"name":"Econometrics and Statistics","volume":"36 ","pages":"Pages 90-118"},"PeriodicalIF":2.5,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82080471","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The benefits of returns and options in the estimation of GARCH models. A Heston-Nandi GARCH insight
Marcos Escobar-Anel, Lars Stentoft, Xize Ye
Pub Date: 2025-10-01 | Epub Date: 2022-12-21 | DOI: 10.1016/j.ecosta.2022.12.001
In a controlled, simulated setting, three questions are addressed: what the benefits are of including option prices in the estimation of GARCH models, the extent to which options can replace returns, and which type of options is best for estimation. The computational advantages of affine GARCH models for option pricing make these questions numerically tractable; the experiments therefore focus on the Heston-Nandi GARCH model. Three estimation methods, namely returns-only estimation, options-only calibration and joint returns-options estimation-calibration, are compared. The study reveals that, although the benefit is insignificant for the risk premium factor, adding options significantly reduces the standard errors of the GARCH dynamic parameters. This conclusion holds true under both linear and variance-dependent pricing kernels. The results suggest that, in a realistic setting, practitioners can use a large and recent sample of option prices to compensate for the lack of available return data. As a by-product, evidence also shows that out-of-the-money, short-maturity options are the best choice to improve the quality of the estimation.
{"title":"The benefits of returns and options in the estimation of GARCH models. A Heston-Nandi GARCH insight","authors":"Marcos Escobar-Anel , Lars Stentoft , Xize Ye","doi":"10.1016/j.ecosta.2022.12.001","DOIUrl":"10.1016/j.ecosta.2022.12.001","url":null,"abstract":"<div><div>In a controlled, simulated setting, the questions of what the benefits are of including option prices in the estimation of GARCH models<span>, the extent to which options can replace returns, and what the best type of options is for estimation, are addressed. The computational advantages of affine GARCH models for option pricing make these questions numerically tractable, therefore the experiments focus on the Heston-Nandi GARCH model. Three estimation methods, namely, returns-only estimation, options-only calibration and joint returns-options estimation-calibration are compared. The study reveals that, although the benefit is insignificant for the risk premium factor, adding options significantly reduces the standard errors of the GARCH dynamic parameters. This conclusion holds true under both linear and variance-dependent pricing kernels. The results suggest that, in a realistic setting, practitioners can use a large and recent sample of option prices to compensate for the lack of available return data. As a by-product, evidence also shows that out-of-the-money, short-maturity options are the best choice to improve the quality of the estimation.</span></div></div>","PeriodicalId":54125,"journal":{"name":"Econometrics and Statistics","volume":"36 ","pages":"Pages 1-18"},"PeriodicalIF":2.5,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72408877","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Nearest neighbor matching: M-out-of-N bootstrapping without bias correction vs. the naive bootstrap
Christopher Walsh, Carsten Jentsch
Pub Date: 2025-10-01 | Epub Date: 2023-04-28 | DOI: 10.1016/j.ecosta.2023.04.005
It is well known that the limiting variance of nearest neighbor matching estimators cannot be consistently estimated by a naive Efron-type bootstrap, as the conditional variance of the bootstrap estimator does not generally converge to the correct limit in expectation. In essence, this is caused by the fact that the bootstrap sample contains ties with positive probability even when the sample size becomes large. This negative result was originally derived in a simple setting by Abadie and Imbens (Econometrica, 76(6), 2008, pp. 235–267). A proof of concept for a direct M-out-of-N bootstrap on the data is provided in this setting: the conditional variance of a direct M-out-of-N-type bootstrap estimator without bias correction is proven to converge to the correct limit in expectation. The key to the proof lies in the fact that, asymptotically, with probability one there are no ties in the bootstrap sample. The potential of the direct M-out-of-N-type bootstrap is investigated in simulations.
{"title":"Nearest neighbor matching: M-out-of-N bootstrapping without bias correction vs. the naive bootstrap","authors":"Christopher Walsh , Carsten Jentsch","doi":"10.1016/j.ecosta.2023.04.005","DOIUrl":"10.1016/j.ecosta.2023.04.005","url":null,"abstract":"<div><div><span>It is well known that the limiting variance of nearest neighbor matching estimators cannot be consistently estimated by a naive Efron-type bootstrap<span><span> as the conditional variance of the bootstrap estimator does not generally converge to the correct limit in expectation. In essence this is caused by the fact that the </span>bootstrap sample<span><span> contains ties with positive probability even when the </span>sample size becomes large. This negative result was originally derived in a simple setting by Abadie and Imbens (ECONOMETRICA, pp. 235–267, 76(6), 2008). A proof of concept for a direct M-out-of-N bootstrap on the data is provided in this setting. It is proven that in this setting the conditional variance of a direct M-out-of-N-type bootstrap estimator </span></span></span><em>without</em> bias-correction does converge to the correct limit in expectation. The key to the proof lies in the fact that asymptotically with probability one there are no ties in the bootstrap sample. The potential of the direct M-out-of-N-type bootstrap is investigated in simulations.</div></div>","PeriodicalId":54125,"journal":{"name":"Econometrics and Statistics","volume":"36 ","pages":"Pages 81-89"},"PeriodicalIF":2.5,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78326349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Technical efficiency and inefficiency: Reliability of standard SFA models and a misspecification problem
Subal C. Kumbhakar, A. Peresetsky, Y. Shchetynin, A. Zaytsev
Pub Date: 2025-10-01 | Epub Date: 2021-12-23 | DOI: 10.1016/j.ecosta.2021.12.006
It is formally proven that if inefficiency (u) is modelled through its variance, considered as a function of exogenous variables z, then the marginal effects of z on technical inefficiency (TI) and technical efficiency (TE) have opposite signs in the typical setup with a normally distributed random error and an exponentially or half-normally distributed u. This is true for both conditional and unconditional TI and TE. An example is provided to show that the signs of the marginal effects of z on TI and TE may coincide for some ranges of z. If the real data comes from a bimodal distribution of u, and a model is estimated with an exponential or half-normal distribution for u, the estimated efficiency and the marginal effect of z on TE could be wrong. Moreover, the rank correlations between the true and the estimated values of TE could be small and even negative for some subsamples of the data. This is a warning that in the case when the true (real life) distribution of the inefficiency is bimodal, commonly used standard SFA models could lead to wrong policy recommendations. The kernel density plot of the residuals is suggested as a diagnostic plot. The results are illustrated by simulations.
{"title":"Technical efficiency and inefficiency: Reliability of standard SFA models and a misspecification problem","authors":"Subal C. Kumbhakar , A. Peresetsky , Y. Shchetynin , A. Zaytsev","doi":"10.1016/j.ecosta.2021.12.006","DOIUrl":"10.1016/j.ecosta.2021.12.006","url":null,"abstract":"<div><div>It is formally proven that if inefficiency (<span><math><mi>u</mi></math></span>) is modelled through its variance, considered as a function of exogenous variables <span><math><mi>z</mi></math></span>, then the marginal effects of <span><math><mi>z</mi></math></span> on technical inefficiency (<span><math><mrow><mi>T</mi><mi>I</mi></mrow></math></span>) and technical efficiency (<span><math><mrow><mi>T</mi><mi>E</mi></mrow></math></span>) have opposite signs in the typical setup with a normally distributed random error and an exponentially or half-normally distributed <span><math><mi>u</mi></math></span>. This is true for both conditional and unconditional <span><math><mrow><mi>T</mi><mi>I</mi></mrow></math></span> and <span><math><mrow><mi>T</mi><mi>E</mi></mrow></math></span>. An example is provided to show that the signs of the marginal effects of <span><math><mi>z</mi></math></span> on <span><math><mrow><mi>T</mi><mi>I</mi></mrow></math></span> and <span><math><mrow><mi>T</mi><mi>E</mi></mrow></math></span> may coincide for some ranges of <span><math><mi>z</mi></math></span>. If the real data comes from a bimodal distribution of <span><math><mi>u</mi></math></span>, and a model is estimated with an exponential or half-normal distribution for <span><math><mi>u</mi></math></span>, the estimated efficiency and the marginal effect of <span><math><mi>z</mi></math></span> on <span><math><mrow><mi>T</mi><mi>E</mi></mrow></math></span> could be wrong. Moreover, the rank correlations between the true and the estimated values of <span><math><mrow><mi>T</mi><mi>E</mi></mrow></math></span> could be small and even negative for some subsamples of the data. This is a warning that in the case when the true (real life) distribution of the inefficiency is bimodal, commonly used standard SFA models could lead to wrong policy recommendations. The kernel density plot of the residuals is suggested as a diagnostic plot. The results are illustrated by simulations.</div></div>","PeriodicalId":54125,"journal":{"name":"Econometrics and Statistics","volume":"36 ","pages":"Pages 55-72"},"PeriodicalIF":2.5,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145223591","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Variable Selection in Macroeconomic Forecasting with Many Predictors
Zhenzhong Wang, Zhengyuan Zhu, Cindy Yu
Pub Date: 2025-10-01 | Epub Date: 2023-01-22 | DOI: 10.1016/j.ecosta.2023.01.003
In the data-rich environment, using many economic predictors to forecast a few key variables has become a new trend in econometrics. The commonly used approach is the factor augmentation (FA) approach. This paper pursues another direction, the variable selection (VS) approach, to handle high-dimensional predictors. VS is an active topic in statistics and computer science, but it does not receive as much attention as FA in economics. This paper introduces several cutting-edge VS methods to economic forecasting, including: (1) classical greedy procedures; (2) ℓ1 regularization; (3) false-discovery-rate control methods; (4) gradient descent with sparsification; and (5) meta-heuristic algorithms. Comprehensive simulation studies are conducted to compare their variable selection accuracy and prediction performance under different scenarios. Among the reviewed methods, a meta-heuristic algorithm called the sequential Monte Carlo algorithm performs best. Surprisingly, classical forward selection is comparable to it and better than other, more sophisticated algorithms. In addition, these VS methods are applied to economic forecasting and compared with the popular FA approach. It turns out that, for the employment rate and CPI inflation, some VS methods achieve considerable improvement over FA, and the selected predictors can be well explained by economic theory.
{"title":"Variable Selection in Macroeconomic Forecasting with Many Predictors","authors":"Zhenzhong Wang, Zhengyuan Zhu, Cindy Yu","doi":"10.1016/j.ecosta.2023.01.003","DOIUrl":"10.1016/j.ecosta.2023.01.003","url":null,"abstract":"<div><div><span><span>In the data-rich environment, using many economic predictors to forecast a few key variables has become a new trend in econometrics. The commonly used approach is factor augment (FA) approach. This paper pursues another direction, variable selection (VS) approach, to handle high-dimensional predictors. VS is an active topic in statistics and computer science. However, it does not receive as much attention as FA in economics. This paper introduces several cutting-edge VS methods to </span>economic forecasting, which includes: (1) classical greedy procedures; (2) </span><span><math><msub><mi>l</mi><mn>1</mn></msub></math></span><span><span> regularization; (3) false-discovery-rate control methods, (4) </span>gradient descent<span><span><span> with sparsification and (5) meta-heuristic algorithms. Comprehensive simulation studies are conducted to compare their variable selection accuracy and prediction performance under different scenarios. Among the reviewed methods, a meta-heuristic algorithm called sequential Monte Carlo algorithm performs the best. Surprisingly the classical forward selection is comparable to it and better than other more sophisticated algorithms. In addition, these VS methods are applied on economic forecasting<span> and compared with the popular FA approach. It turns out for employment rate and CPI </span></span>inflation, some VS methods can achieve considerable improvement over FA, and the selected predictors can be well explained by </span>economic theories.</span></span></div></div>","PeriodicalId":54125,"journal":{"name":"Econometrics and Statistics","volume":"36 ","pages":"Pages 19-36"},"PeriodicalIF":2.5,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83123283","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}