Generalized Method of Moments
Marcin Majka
Project Manager | Business Trainer | Business Mentor | Doctor of Physics
The Generalized Method of Moments (GMM) is a flexible econometric technique that has gained substantial prominence in empirical research across various disciplines, including economics, finance, and the social sciences. Originating from the seminal work of Lars Peter Hansen in 1982, GMM provides a unified framework for estimating parameters in models where traditional assumptions of ordinary least squares (OLS) and maximum likelihood estimation (MLE) may not hold. Unlike these conventional methods, GMM leverages moment conditions derived from economic theory, making it particularly valuable in situations where the underlying distribution of the data is unknown or difficult to specify.
The importance of GMM lies in its ability to handle complex models with multiple endogenous variables and its applicability to a wide range of problems characterized by heteroskedasticity, autocorrelation, and other forms of statistical irregularities. By exploiting the information contained in moment conditions—equations that relate population moments to model parameters—GMM estimators achieve consistency and asymptotic normality under relatively weak assumptions. This methodological versatility has made GMM an indispensable tool for researchers aiming to draw reliable inferences from empirical data, especially in the presence of measurement errors or instrumental variables.
This article aims to elucidate the theoretical underpinnings of the Generalized Method of Moments and to illustrate its practical applications. We begin by delving into the mathematical foundation of GMM, exploring the concept of moment conditions and the derivation of the GMM objective function. Subsequent sections will guide the reader through the GMM estimation procedure, including the selection of moment conditions and the construction of the optimal weighting matrix. This article will also discuss the asymptotic properties of GMM estimators, highlighting their consistency, asymptotic normality, and efficiency relative to other estimation techniques.
Furthermore, I will present a range of empirical applications to demonstrate the versatility of GMM in practice. These examples will span various fields, showcasing how GMM can be implemented to address real-world econometric challenges. To facilitate practical implementation, we will review software packages that support GMM estimation and provide illustrative code snippets. Additionally, I will address common challenges and limitations associated with GMM, such as issues of identification, finite sample properties, and computational complexity, offering strategies to mitigate these concerns.
Theoretical Background
The Generalized Method of Moments is a powerful and general estimation technique introduced by Lars Peter Hansen in 1982. It stands out in the econometric toolkit due to its flexibility and robustness in handling a variety of models that do not conform to the stringent assumptions of other estimation methods, such as Ordinary Least Squares (OLS) and Maximum Likelihood Estimation (MLE). GMM is particularly advantageous in dealing with models where the likelihood function is either unknown or too complex to be specified accurately. By leveraging moment conditions derived from the underlying economic or statistical model, GMM can provide consistent and efficient estimates without requiring a complete specification of the data-generating process. This methodological versatility allows GMM to be applied across a wide range of empirical contexts, making it an essential tool for econometricians and applied researchers.
Central to the GMM framework is the concept of moment conditions. Moment conditions are equations that express the expected values of certain functions of the data and model parameters as zero. These conditions arise naturally from economic theories or statistical models, reflecting fundamental relationships that should hold in the population. For example, in the context of the classical linear regression model, the orthogonality condition between the error term and the regressors is a moment condition. Mathematically, if (θ) represents the vector of parameters to be estimated and (g) denotes a vector of moment functions based on the data (X_t), the moment conditions are typically written as:

E[g(X_t, θ0)] = 0,

where (θ0) denotes the true parameter value and the expectation is taken over the population distribution of the data.
These conditions form the backbone of the GMM approach, guiding the estimation process by providing the necessary information to identify and estimate the model parameters.
The mathematical foundation of GMM is built upon the notion of minimizing a quadratic form in the sample analog of the moment conditions. Given a set of (T) observations, the sample moment conditions can be expressed as:

g_T(θ) = (1/T) Σ_{t=1}^{T} g(X_t, θ).
The GMM estimator is obtained by minimizing a weighted quadratic form of these sample moments, specifically:

θ̂ = argmin_θ g_T(θ)' W g_T(θ),
where (W) is a positive definite weighting matrix. The choice of the weighting matrix (W) is important, as it influences the efficiency of the estimator. In practice, the optimal weighting matrix is often estimated iteratively, leading to a two-step or iterative GMM procedure. Identification of the model parameters requires that the moment conditions provide enough independent information; formally, the rank condition must be satisfied, ensuring that the Jacobian matrix of the moment conditions with respect to the parameters is of full rank. These theoretical foundations ensure that under suitable regularity conditions, the GMM estimator is consistent and asymptotically normally distributed, providing a solid basis for inference.
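To make this concrete, here is a minimal sketch in Python (a simplified illustration, not a production implementation) of the GMM objective for a linear instrumental-variables model y_t = βx_t + u_t with an endogenous regressor. The data are simulated and all variable names are hypothetical; the first step uses the identity weighting matrix:

```python
import numpy as np
from scipy.optimize import minimize

# Moment functions for a linear IV model: g_t(beta) = z_t * (y_t - x_t * beta),
# so E[g_t(beta0)] = 0 when the instruments are valid.
def sample_moments(beta, y, x, Z):
    # Sample analog g_T(beta) = (1/T) * Z'(y - x*beta)
    return Z.T @ (y - x * beta) / len(y)

def gmm_objective(beta, y, x, Z, W):
    # Quadratic form g_T(beta)' W g_T(beta) minimized by the GMM estimator
    g = sample_moments(beta, y, x, Z)
    return float(g @ W @ g)

# Simulated data: x is endogenous (correlated with the error u); the three
# columns of Z are valid instruments. The true beta is 2.
rng = np.random.default_rng(42)
T = 1000
Z = rng.normal(size=(T, 3))
u = rng.normal(size=T)
x = Z @ np.array([1.0, 0.5, 0.2]) + 0.8 * u + rng.normal(size=T)
y = 2.0 * x + u

W = np.eye(Z.shape[1])  # preliminary weighting matrix (identity)
fit = minimize(gmm_objective, np.zeros(1), args=(y, x, Z, W))
print("one-step GMM estimate of beta:", fit.x[0])
```

With valid instruments this one-step estimate is already consistent; the choice of (W) affects only its efficiency, which motivates the two-step refinement described below.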
The selection of moment conditions is another critical step in the GMM estimation process. These conditions should be carefully chosen to ensure that they accurately capture the underlying economic or statistical relationships intended to be modeled. Moment conditions can be derived from various sources, including theoretical models, auxiliary assumptions, or empirical regularities observed in the data. For example, in a consumption-based asset pricing model, moment conditions might be derived from the Euler equations, reflecting the optimal consumption choices of economic agents. The validity of the chosen moment conditions is paramount, as incorrect or weak moment conditions can lead to biased and inconsistent estimates. Moreover, over-identifying restrictions, where the number of moment conditions exceeds the number of parameters to be estimated, provide a useful tool for model specification tests, allowing researchers to assess the overall adequacy of the specified model.
The role of the weighting matrix in GMM estimation cannot be overstated, as it directly affects the efficiency of the estimator. The optimal weighting matrix is the inverse of the variance-covariance matrix of the moment conditions, which ensures that the GMM estimator achieves the lowest possible asymptotic variance. In practice, this matrix is often unknown and must be estimated from the data. A common approach is to use a consistent estimator of the variance-covariance matrix in a two-step procedure, where an initial estimate of the parameters is obtained using an arbitrary weighting matrix, followed by re-estimation using the optimal weighting matrix derived from the first-step residuals. Iterative GMM procedures refine this approach further by continuously updating the weighting matrix and parameter estimates until convergence. This iterative process enhances the efficiency of the GMM estimator, making it more robust to potential misspecifications and heteroskedasticity in the data.
Implementing GMM involves several well-defined steps. First, the researcher must specify the model and derive the appropriate moment conditions based on theoretical or empirical considerations. Next, an initial estimate of the parameters is obtained by minimizing the GMM objective function using a preliminary weighting matrix, often chosen to be the identity matrix. In the second step, the optimal weighting matrix is estimated using the residuals from the initial estimates. The parameters are then re-estimated using this optimal weighting matrix, yielding the two-step GMM estimator. For increased efficiency, iterative GMM can be employed, where the weighting matrix and parameter estimates are updated iteratively. Throughout this process, careful attention must be paid to issues of convergence, numerical stability, and the choice of optimization algorithms, as these factors can significantly influence the accuracy and reliability of the final estimates.
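Continuing the sketch above, the second step replaces the identity matrix with the inverse of the estimated variance-covariance matrix of the moment contributions and re-minimizes the objective (again a hedged illustration under the i.i.d. assumption, reusing the simulated data and functions defined earlier):

```python
# Step 2 of two-step GMM: build S_hat from the first-step moment
# contributions and re-minimize with W = S_hat^{-1}.
def moment_contributions(beta, y, x, Z):
    # T x q matrix whose t-th row is g_t(beta) = z_t * (y_t - x_t * beta)
    return Z * (y - x * beta)[:, None]

G = moment_contributions(fit.x, y, x, Z)
S_hat = G.T @ G / len(y)        # valid under independence; a HAC estimator
W_opt = np.linalg.inv(S_hat)    # is needed for serially correlated moments
fit2 = minimize(gmm_objective, x0=fit.x, args=(y, x, Z, W_opt))
print("two-step GMM estimate of beta:", fit2.x[0])
```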
By laying out these theoretical foundations, we can set the stage for a deeper exploration of the properties and applications of GMM estimators. Understanding the intricacies of moment conditions, the role of the weighting matrix, and the implementation steps is crucial for effectively applying GMM in empirical research and for appreciating its robustness and versatility as an estimation technique.
GMM Estimation Procedure
The choice of moment conditions is pivotal to the successful implementation of GMM. These conditions must be derived with careful consideration from the underlying theoretical model or the empirical characteristics of the data. The primary goal is to select moment conditions that are valid, meaning they accurately represent the relationships and constraints inherent in the model. For instance, in a basic linear regression context, the orthogonality condition between the error term and the regressors can serve as a moment condition. More complex models, such as those involving dynamic panel data or instrumental variables, require more sophisticated moment conditions. These might be derived from Euler equations in consumption models, market equilibrium conditions in asset pricing, or conditional moment restrictions in models of financial returns. The selection process must ensure that these conditions are neither too weak to identify the parameters effectively nor too numerous, which could lead to overfitting and inefficiency. Over-identification, where the number of moment conditions exceeds the number of parameters to be estimated, provides a robust framework for testing the model's validity through statistical tests such as Hansen's J-test.
The weighting matrix is a critical component of GMM estimation, as it directly influences the efficiency of the estimator. The optimal weighting matrix is the inverse of the variance-covariance matrix of the sample moments, ensuring that the estimator achieves the smallest possible asymptotic variance. Formally, if (W) represents the weighting matrix and (S) denotes the variance-covariance matrix of the moment conditions, the optimal weighting matrix is given by (W = S^{-1}). In practice, this matrix is unknown and must be estimated from the data. The typical approach involves a two-step estimation procedure. In the first step, an initial estimate of the parameters is obtained using an arbitrary weighting matrix, often the identity matrix, to simplify computation. In the second step, the residuals from the initial estimates are used to construct a consistent estimate of the variance-covariance matrix of the moment conditions, which then serves as the optimal weighting matrix for re-estimation. This two-step procedure can be further refined through iterative GMM, where the weighting matrix and parameter estimates are updated iteratively until convergence. The iterative process enhances the robustness and efficiency of the GMM estimator, particularly in the presence of heteroskedasticity or autocorrelation in the data.
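When the moment conditions are serially correlated, the simple average of outer products used in the earlier snippet is no longer a consistent estimate of (S). A standard remedy is the Newey-West (1987) HAC estimator; a sketch of that construction, taking as input the T x q matrix of moment contributions, might look like this:

```python
import numpy as np

def newey_west_S(G, lags):
    # Newey-West HAC estimate of S from a T x q matrix G of moment
    # contributions: a Bartlett-weighted sum of sample autocovariances.
    T = G.shape[0]
    Gc = G - G.mean(axis=0)
    S = Gc.T @ Gc / T
    for j in range(1, lags + 1):
        w = 1.0 - j / (lags + 1)            # Bartlett kernel weight
        Gamma_j = Gc[j:].T @ Gc[:-j] / T    # j-th sample autocovariance
        S += w * (Gamma_j + Gamma_j.T)
    return S

# Example use with the earlier sketch: W_opt = np.linalg.inv(newey_west_S(G, lags=4))
```

The Bartlett weights guarantee that the resulting estimate of (S) is positive semi-definite, which is what makes its inverse usable as a weighting matrix.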
Implementing GMM involves a systematic procedure that begins with the specification of the model and the derivation of the appropriate moment conditions. The first step is to define the model parameters and formulate the moment conditions based on theoretical or empirical grounds. Once the moment conditions are established, the next step is to compute the sample moments, which are the empirical counterparts of the theoretical moments. These are obtained by averaging the moment functions over the sample observations. The initial parameter estimates are then obtained by minimizing the GMM objective function using a preliminary weighting matrix, usually chosen to be the identity matrix for simplicity.
Mathematically, the GMM objective function to be minimized is:

J(θ) = g_T(θ)' W g_T(θ),
where (W) is the initial weighting matrix and (g_T(θ)) denotes the vector of sample moment conditions evaluated at the data and the parameter vector (θ).
In the second step, the residuals from these initial estimates are used to construct an estimate of the variance-covariance matrix of the moment conditions, denoted as (S). The optimal weighting matrix is then the inverse of this estimated variance-covariance matrix, (W = S^{-1}). With this optimal weighting matrix, the parameters are re-estimated by minimizing the GMM objective function again, yielding the two-step GMM estimator.
For enhanced efficiency, iterative GMM can be employed, where the process of updating the weighting matrix and re-estimating the parameters is repeated until convergence is achieved. This iterative approach ensures that the final parameter estimates are as efficient as possible, given the available data and moment conditions.
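A sketch of iterated GMM, reusing the simulated data and functions from the earlier snippets: alternate between updating the weighting matrix and re-estimating the parameter until the estimate stops changing.

```python
# Iterated GMM: update W and beta until convergence (simplified sketch).
beta_hat = np.zeros(1)
W = np.eye(Z.shape[1])
for _ in range(100):
    res = minimize(gmm_objective, x0=beta_hat, args=(y, x, Z, W))
    if np.max(np.abs(res.x - beta_hat)) < 1e-8:
        beta_hat = res.x
        break
    beta_hat = res.x
    G = moment_contributions(beta_hat, y, x, Z)
    W = np.linalg.inv(G.T @ G / len(y))   # or newey_west_S(G, lags) for dependent data
print("iterated GMM estimate of beta:", beta_hat[0])
```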
Throughout the implementation process, special attention must be paid to issues of convergence and numerical stability. The choice of optimization algorithms can significantly affect the accuracy and reliability of the estimates. Commonly used algorithms include the Newton-Raphson method, the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm, and other gradient-based methods. Ensuring convergence to the global minimum of the objective function, rather than a local minimum, is crucial for obtaining valid and reliable parameter estimates. Additionally, diagnostics and model specification tests, such as Hansen's J-test for over-identifying restrictions, should be performed to assess the adequacy of the model and the validity of the moment conditions.
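Because the sketch model has q = 3 moment conditions and p = 1 parameter, its two over-identifying restrictions can be tested with Hansen's J statistic: T times the minimized objective evaluated with the efficient weighting matrix, which is asymptotically chi-square with q − p degrees of freedom under the null that all moment conditions are valid. Continuing the iterated-GMM snippet:

```python
from scipy.stats import chi2

# Hansen's J-test on the iterated estimates from the previous snippet.
g_bar = sample_moments(beta_hat, y, x, Z)
J = len(y) * float(g_bar @ W @ g_bar)   # W approximates S^{-1} at convergence
dof = Z.shape[1] - len(beta_hat)        # q - p over-identifying restrictions
print("J statistic:", J, "p-value:", chi2.sf(J, dof))
```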
By following these steps and carefully considering the choice of moment conditions and the construction of the weighting matrix, researchers can effectively apply GMM to a wide range of empirical problems, ensuring robust and efficient parameter estimation.
Properties of GMM Estimators
One of the fundamental properties of GMM estimators is consistency. Consistency ensures that as the sample size (T) tends to infinity, the GMM estimator converges in probability to the true value of the parameter vector (θ). This property is crucial because it guarantees that with a sufficiently large sample, the GMM estimator will yield parameter estimates that are arbitrarily close to the true parameters governing the data-generating process. The consistency of GMM estimators hinges on several key conditions: the validity of the moment conditions, the identification of the parameters, and the regularity conditions regarding the behavior of the data and the moment functions. Mathematically, if the true parameter vector is denoted by (θ0), consistency implies that

θ̂ → θ0 in probability as T → ∞.
The proof of consistency typically involves showing that the sample moments converge uniformly to their population counterparts and that the GMM objective function is well-behaved (e.g., continuous, with the population objective uniquely minimized at the true parameter vector). Moreover, the identification condition requires that the population moment conditions uniquely determine the true parameter vector, which is often ensured by verifying that the Jacobian matrix of the moment conditions with respect to the parameters is of full rank.
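Consistency can also be illustrated numerically. The following rough sketch (reusing the data-generating process and functions from the earlier snippets, one simulated draw per sample size) shows the one-step estimate tightening around the true value of 2 as T grows:

```python
# Numerical illustration of consistency: the estimate approaches beta = 2.
for T_n in (100, 1000, 10000, 100000):
    Zn = rng.normal(size=(T_n, 3))
    un = rng.normal(size=T_n)
    xn = Zn @ np.array([1.0, 0.5, 0.2]) + 0.8 * un + rng.normal(size=T_n)
    yn = 2.0 * xn + un
    f = minimize(gmm_objective, np.zeros(1), args=(yn, xn, Zn, np.eye(3)))
    print(T_n, f.x[0])
```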
Another key property of GMM estimators is asymptotic normality. Asymptotic normality implies that, when appropriately scaled, the distribution of the GMM estimator converges to a multivariate normal distribution as the sample size increases. This property is vital for conducting statistical inference, such as constructing confidence intervals and performing hypothesis tests. Formally, if (θ̂) is the GMM estimator of the parameter vector (θ0), then under regularity conditions, it holds that:

√T (θ̂ − θ0) → N(0, Σ) in distribution,

where (Σ) is the asymptotic variance-covariance matrix of the estimator. In general, this matrix takes the sandwich form:

Σ = (D'WD)^{-1} D'WSWD (D'WD)^{-1},

where (D) is the Jacobian matrix of the moment conditions with respect to the parameters, (W) is the weighting matrix, and (S) is the variance-covariance matrix of the moment conditions; with the optimal choice (W = S^{-1}), this reduces to Σ = (D'S^{-1}D)^{-1}. The asymptotic normality result relies on the central limit theorem and the law of large numbers, ensuring that the sample moments behave well as the sample size grows. This property allows researchers to make probabilistic statements about the parameter estimates and to construct hypothesis tests using standard normal distribution critical values.
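For the linear IV sketch used throughout, every ingredient of this formula is available in closed form: the Jacobian is D = −Z'x/T, (S) is estimated from the moment contributions, and with the efficient weighting matrix the variance simplifies to (D'S^{-1}D)^{-1}/T. A short continuation of the earlier snippets:

```python
# Asymptotic standard error for beta_hat under the efficient weighting
# matrix: Avar = (D' S^{-1} D)^{-1} / T, with D the Jacobian of g_T.
D = -(Z.T @ x / len(y)).reshape(-1, 1)      # q x p Jacobian of g_T(beta)
avar = np.linalg.inv(D.T @ W @ D) / len(y)  # W = S^{-1} from the iterated fit
print("standard error of beta_hat:", float(np.sqrt(avar[0, 0])))
```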
The efficiency of GMM estimators is another property that determines the quality of the estimator in terms of the precision of the parameter estimates. An estimator is considered efficient if it achieves the lowest possible asymptotic variance among a class of estimators. In the context of GMM, efficiency is achieved when the weighting matrix is chosen to be the inverse of the variance-covariance matrix of the moment conditions. This choice of the weighting matrix ensures that the GMM estimator attains the smallest asymptotic variance among all GMM estimators that use the same set of moment conditions. The efficiency of the GMM estimator can be compared to other estimators, such as the ordinary least squares (OLS) estimator or the maximum likelihood estimator (MLE), depending on the context and the assumptions of the model. In cases where the model is correctly specified and the likelihood function is known, the MLE is asymptotically efficient. However, in many practical situations where the likelihood function is complex or unknown, GMM provides a more flexible and robust estimation approach. The efficiency of GMM estimators is particularly important in empirical applications where precision in parameter estimates can significantly impact the conclusions drawn from the analysis.
By understanding these properties—consistency, asymptotic normality, and efficiency—researchers can appreciate the strengths and limitations of GMM estimators. These properties provide the theoretical foundation for using GMM in empirical research, ensuring that the estimators yield reliable and accurate results under appropriate conditions. This theoretical understanding also guides the practical implementation of GMM, helping researchers to make informed choices about the specification of moment conditions, the selection of the weighting matrix, and the interpretation of the estimation results.
Applications of GMM
The Generalized Method of Moments has been widely applied across various fields, demonstrating its versatility and robustness in empirical research. In economics, GMM is frequently employed in the estimation of dynamic models, such as those involving panel data and time series analysis. For instance, GMM has become a standard tool for estimating production functions where endogeneity of input choices poses significant challenges. Researchers leverage GMM to address these issues by using instrumental variables derived from lagged values of inputs, ensuring consistent parameter estimates. Another prominent application is in the estimation of consumption-based asset pricing models. Here, the moment conditions are derived from Euler equations that describe the optimal consumption choices of economic agents. By matching these theoretical moments to their empirical counterparts, GMM allows for the estimation of risk aversion parameters and discount factors that are central to understanding intertemporal consumption decisions and asset returns.
In finance, GMM is extensively used to estimate models of stock returns and volatility. One notable application is the estimation of the Capital Asset Pricing Model (CAPM) and its extensions. GMM facilitates the estimation of the parameters of these models by utilizing the moment conditions implied by the linear relationship between expected returns and risk factors. This approach is particularly advantageous when dealing with heteroskedasticity or serial correlation in the data, as GMM estimators remain robust under such conditions. Additionally, GMM is applied in the estimation of stochastic volatility models and GARCH (Generalized Autoregressive Conditional Heteroskedasticity) models, where the complex nature of the likelihood function makes traditional estimation methods cumbersome. By relying on moment conditions derived from the model dynamics, GMM provides a feasible and efficient estimation technique.
Implementing GMM in empirical research has been greatly facilitated by the availability of various software packages that provide built-in functions for GMM estimation. Popular statistical software such as R, Stata, and Python offer comprehensive support for GMM, including functions for specifying moment conditions, choosing weighting matrices, and conducting diagnostic tests. For example, in R, the gmm package provides a user-friendly interface for specifying and estimating GMM models. Users can define moment conditions and optimize the GMM objective function using various algorithms. Similarly, Stata's gmm command allows for flexible specification of moment conditions and offers robust options for handling different types of data structures, including panel data and time series. Python, with its extensive libraries such as statsmodels and linearmodels, also supports GMM estimation, providing researchers with the tools needed to implement complex models efficiently. These software tools typically include diagnostic tests for over-identifying restrictions and other specification checks, which are crucial for validating the GMM estimates.
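As an illustration, the same instrumental-variables regression from the earlier sketches could be estimated with the linearmodels package in Python. The snippet below assumes the IVGMM interface of linearmodels 4.x (consult the package documentation for the exact API of your installed version):

```python
import numpy as np
import pandas as pd
from linearmodels.iv import IVGMM

# Simulated IV data, as in the earlier sketches (true beta = 2).
rng = np.random.default_rng(0)
T = 1000
Z = rng.normal(size=(T, 3))
u = rng.normal(size=T)
x = Z @ np.array([1.0, 0.5, 0.2]) + 0.8 * u + rng.normal(size=T)
y = 2.0 * x + u
data = pd.DataFrame({"y": y, "x": x,
                     "z1": Z[:, 0], "z2": Z[:, 1], "z3": Z[:, 2]})

# Arguments: dependent, exogenous regressors (none here), endogenous
# regressors, instruments; fit() performs efficient GMM estimation.
model = IVGMM(data["y"], None, data[["x"]], data[["z1", "z2", "z3"]])
results = model.fit()
print(results.params)
print(results.j_stat)   # Hansen's J-test of over-identifying restrictions
```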
While GMM offers considerable advantages, it also presents several challenges and limitations that researchers must navigate. One of the primary challenges is the issue of identification. Weak identification occurs when the moment conditions do not provide sufficient information to uniquely determine the parameter estimates. This can lead to biased and inconsistent estimates, particularly in small samples. Researchers must carefully select strong and relevant instruments to mitigate this risk and ensure robust identification. Another limitation is related to the finite sample properties of GMM estimators. In small samples, GMM estimators can exhibit significant bias and variance, which can distort inference. To address this, researchers often employ bootstrap methods or other resampling techniques to improve the finite sample performance of GMM estimators.
The computational burden associated with GMM is another significant challenge, especially when dealing with high-dimensional models or large datasets. The iterative nature of GMM, particularly when estimating the optimal weighting matrix, can be computationally intensive. Efficient numerical algorithms and parallel computing techniques are often necessary to handle these demands. Additionally, the choice of moment conditions and weighting matrices can greatly influence the efficiency and reliability of GMM estimates. Researchers must exercise caution in specifying these elements to avoid problems such as multicollinearity among instruments or incorrect weighting matrix specification.
Despite these challenges, the flexibility and robustness of GMM make it an invaluable tool in empirical research. By carefully addressing the potential pitfalls and leveraging the available software tools, researchers can effectively apply GMM to a wide range of empirical problems, ensuring robust and reliable parameter estimation. The continued development of computational techniques and diagnostic tools further enhances the applicability of GMM, solidifying its role as a cornerstone of modern econometric analysis.
Challenges and Limitations
The primary challenge associated with the Generalized Method of Moments is the issue of identification. For GMM estimators to be consistent and reliable, the moment conditions must provide sufficient information to uniquely determine the parameter vector. This is known as the identification condition. Weak identification occurs when the instruments or moment conditions are not sufficiently correlated with the endogenous variables, leading to imprecise and unreliable parameter estimates. In such cases, the GMM estimator can suffer from substantial finite sample bias and increased variance, making inference problematic. The rank condition, which requires that the Jacobian matrix of the moment conditions with respect to the parameters is of full rank, is crucial for ensuring proper identification. Researchers must carefully select strong and relevant instruments to mitigate weak identification. This often involves using lagged variables or external instruments that are theoretically justified and empirically valid. Moreover, the presence of too many weak instruments can exacerbate the problem, leading to overfitting and multicollinearity among the instruments. Diagnostic tests such as Hansen's J-test for over-identifying restrictions and the Cragg-Donald statistic for weak instruments are essential tools for assessing the adequacy of the chosen instruments and ensuring robust identification.
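A quick diagnostic along these lines, sketched for the single-endogenous-regressor model used in this article's snippets: regress the endogenous variable on the instruments and inspect the first-stage F statistic (the conventional rule of thumb treats values much below 10 as a warning sign of weak instruments; see Stock and Wright (2000) for a formal treatment of weak identification).

```python
# First-stage strength check: F-test that the instruments jointly explain x.
Zc = np.column_stack([np.ones(len(x)), Z])          # instruments plus intercept
coef, *_ = np.linalg.lstsq(Zc, x, rcond=None)
rss = float(((x - Zc @ coef) ** 2).sum())           # unrestricted RSS
tss = float(((x - x.mean()) ** 2).sum())            # intercept-only RSS
q = Z.shape[1]
F = ((tss - rss) / q) / (rss / (len(x) - q - 1))
print("first-stage F statistic:", F)
```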
The finite sample properties of GMM estimators present another significant limitation. While GMM estimators are asymptotically efficient and normally distributed under regular conditions, their performance in small samples can be suboptimal. In finite samples, GMM estimators can exhibit considerable bias and variance, which can distort statistical inference. The asymptotic properties of GMM rely on large sample approximations, and deviations from these conditions in small samples can lead to misleading results. To address these issues, researchers often employ bootstrap methods or other resampling techniques to improve the finite sample performance of GMM estimators. Bootstrapping involves repeatedly resampling the data to generate empirical distributions of the estimator, providing more accurate standard errors and confidence intervals. Additionally, finite sample corrections, such as those proposed by Newey and Windmeijer, can be applied to adjust the standard errors and improve the reliability of inference. These methods help to mitigate the adverse effects of small sample sizes and enhance the robustness of GMM estimates.
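A minimal nonparametric bootstrap for the two-step estimator from the earlier sketches is shown below. This is a rough illustration for independent observations; for time series, the resampling scheme must respect the dependence structure of the data (e.g., a block bootstrap).

```python
# Bootstrap standard error: resample observations with replacement and
# re-run two-step GMM on each resample.
def two_step_beta(y_b, x_b, Z_b):
    f1 = minimize(gmm_objective, np.zeros(1),
                  args=(y_b, x_b, Z_b, np.eye(Z_b.shape[1])))
    G_b = moment_contributions(f1.x, y_b, x_b, Z_b)
    W_b = np.linalg.inv(G_b.T @ G_b / len(y_b))
    return minimize(gmm_objective, f1.x, args=(y_b, x_b, Z_b, W_b)).x[0]

rng_b = np.random.default_rng(1)
replicates = []
for _ in range(200):
    idx = rng_b.integers(0, len(y), size=len(y))
    replicates.append(two_step_beta(y[idx], x[idx], Z[idx]))
print("bootstrap standard error of beta_hat:", np.std(replicates, ddof=1))
```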
The computational complexity of GMM estimation poses another challenge, particularly when dealing with high-dimensional models or large datasets. The iterative nature of GMM, especially when estimating the optimal weighting matrix, can be computationally intensive and time-consuming. In the two-step GMM procedure, the initial estimation using a preliminary weighting matrix is followed by the re-estimation using the optimal weighting matrix derived from the first-step residuals. This process can be further complicated in iterative GMM, where the weighting matrix and parameter estimates are updated iteratively until convergence. Efficient numerical algorithms and optimization techniques are crucial for managing this computational burden. Gradient-based methods, such as the Newton-Raphson or Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithms, are commonly used to optimize the GMM objective function. Parallel computing and advanced hardware capabilities, such as multicore processors and GPUs, can also enhance computational efficiency. However, ensuring numerical stability and convergence to the global minimum of the objective function remains a critical consideration. Poor convergence or local minima can lead to inaccurate parameter estimates and unreliable inference.
Other practical issues can affect the implementation and reliability of GMM estimators. The choice of moment conditions and the specification of the weighting matrix can significantly influence the efficiency and accuracy of the estimates. Incorrect or misspecified moment conditions can lead to biased estimates, while an inappropriate weighting matrix can reduce the efficiency of the GMM estimator. Diagnostic tests and sensitivity analyses are essential to validate the chosen specifications and ensure the robustness of the results. Furthermore, the presence of heteroskedasticity, autocorrelation, or other forms of model misspecification can complicate the estimation process and affect the validity of the GMM estimates. Robust standard errors and generalized GMM techniques, which account for these issues, are often necessary to obtain reliable results.
Conclusion
The Generalized Method of Moments stands as a cornerstone of modern econometric analysis, offering a flexible and robust framework for parameter estimation in models where traditional methods such as Ordinary Least Squares (OLS) and Maximum Likelihood Estimation (MLE) may fall short. Its capacity to leverage moment conditions derived from economic theory or empirical regularities provides a unique advantage, allowing researchers to estimate parameters without the need for fully specified likelihood functions. This flexibility is particularly beneficial in complex models involving endogeneity, heteroskedasticity, and autocorrelation, making GMM an invaluable tool in both theoretical and applied econometric research.
The theoretical foundations of GMM, encompassing the concepts of consistency, asymptotic normality, and efficiency, provide a solid basis for its application. Consistency ensures that GMM estimators converge to the true parameter values as the sample size increases, provided the moment conditions are valid and identification conditions are met. Asymptotic normality facilitates statistical inference, enabling the construction of confidence intervals and hypothesis tests that are asymptotically valid. Efficiency, achieved through the optimal weighting matrix, ensures that GMM estimators have the smallest possible asymptotic variance among a class of estimators, enhancing the precision of the parameter estimates. These properties collectively underscore the robustness and reliability of GMM as an estimation technique.
However, the practical implementation of GMM is not without challenges. Identification issues, finite sample properties, and computational complexity pose significant hurdles that researchers must navigate carefully. The selection of strong and relevant instruments is crucial to mitigate weak identification, while bootstrapping and finite sample corrections can address the biases and variance issues in small samples. The computational demands of GMM, particularly in iterative procedures, necessitate the use of efficient numerical algorithms and advanced computational resources to ensure timely and accurate estimation. Moreover, the specification of moment conditions and the weighting matrix requires meticulous attention to avoid biases and inefficiencies. Diagnostic tests and sensitivity analyses play a critical role in validating the chosen specifications and ensuring the robustness of the results.
Despite these challenges, the versatility and robustness of GMM make it a powerful tool for empirical research. Its applications span a wide range of fields, from economics and finance to social sciences and beyond, demonstrating its broad utility and impact. The continued development of computational techniques and diagnostic tools further enhances the applicability of GMM, enabling researchers to tackle increasingly complex models and datasets. By addressing the potential pitfalls and leveraging the strengths of GMM, researchers can achieve reliable and accurate parameter estimates, thereby contributing to the rigor and advancement of empirical analysis.
In conclusion, the Generalized Method of Moments is an important methodological innovation that has significantly advanced the field of econometrics. Its theoretical rigor, combined with its practical flexibility, allows researchers to derive meaningful insights from empirical data, even in the presence of complex modeling challenges. As empirical research continues to evolve, the principles and techniques of GMM will remain indispensable, guiding researchers toward more robust and insightful findings. By continuing to refine the application of GMM and addressing its inherent challenges, the econometric community can further enhance the precision and reliability of empirical research, fostering a deeper understanding of economic phenomena and informing policy decisions with greater accuracy.
Literature:
1. Arellano, M., & Bond, S. (1991). Some tests of specification for panel data: Monte Carlo evidence and an application to employment equations. The Review of Economic Studies, 58(2), 277-297.
2. Blundell, R., & Bond, S. (1998). Initial conditions and moment restrictions in dynamic panel data models. Journal of Econometrics, 87(1), 115-143.
3. Hansen, L. P. (1982). Large sample properties of generalized method of moments estimators. Econometrica, 50(4), 1029-1054.
4. Hall, A. R. (2005). Generalized Method of Moments. Oxford University Press.
5. Hayashi, F. (2000). Econometrics. Princeton University Press.
6. Hansen, L. P., & Singleton, K. J. (1982). Generalized instrumental variables estimation of nonlinear rational expectations models. Econometrica, 50(5), 1269-1286.
7. Newey, W. K., & West, K. D. (1987). A simple, positive semi-definite, heteroskedasticity and autocorrelation consistent covariance matrix. Econometrica, 55(3), 703-708.
8. Stock, J. H., & Wright, J. H. (2000). GMM with weak identification. Econometrica, 68(5), 1055-1096.
9. Windmeijer, F. (2005). A finite sample correction for the variance of linear efficient two-step GMM estimators. Journal of Econometrics, 126(1), 25-51.
10. Wooldridge, J. M. (2010). Econometric Analysis of Cross Section and Panel Data (2nd ed.). MIT Press.