Statistical Identification and Estimability Research Paper

A statistical identification problem arises when a sample of infinite size, assumed to be generated by an element of a class of statistical models indexed by a parameter, does not lead to certain knowledge of one or more components of that parameter. Roughly, a model is identified if enough observed data could allow one to distinguish elements of the parameter set from one another.

Statistical identification problems are ubiquitous, occurring in fields as diverse as reliability, optimal control theory, experimental design, and epidemiology. A rich array of statistical identification problems appears in the social sciences: latent structure analysis, factor analysis, and analysis of covariance structures are based on multivariate statistical models that possess, by construction, this problem. Nonresponse and data censoring in the conduct of sample surveys often lead to an identification problem, and the problem is a central feature of the analysis of econometric structural equation models and of errors-in-variables regression models. Experimental designs in which treatments are confounded lead to an identification problem. A nonparametric identification problem occurs in the theory of competing risks. Understanding whether or not a statistical model is identified is a necessary prelude to the construction of a sensible method for making inferences about model parameters from observed data.

1. Background

The problem of statistical identifiability appears in economics as early as Pigou (1910). Thurstone (1935, 1947) discusses identification in his applications of factor analysis to intelligence and, in his latent structure analysis of opinion research and attitudes, Lazarsfeld (1950) deals with identification issues.




Principal contributions to a rigorous mathematical treatment of the problem began in the 1950s with the work of Haavelmo (1944), Hurwicz (1950), Koopmans et al. (1950), and Koopmans and Reiersøl (1950) and spawned a huge literature on the subject. Fisher (1966) provides a thorough, rigorous treatment of identification for simultaneous equation systems. He laid to rest the issue of whether identifiability is a discrete or a continuous property of parameter estimation by demonstrating that estimators that are consistent given a correct model specification converge in probability to true parameter values as specification error approaches zero. Thus identifiability may be regarded as a continuous property of a set of parameters, and small specification errors lead to small inconsistencies in parameter estimation. Rothenberg (1971) and Bowden (1973) extend identification theory to embrace nonlinear models, distinguishing local from global identifiability. Hannan (1969, 1971) generalizes the theory of identification for linear structural equation systems with serially uncorrelated residual errors in another direction: he establishes conditions for identifiability for the important case of autoregressive moving average error terms.

Probabilistic models employed in linear system optimal control theory have much in common with econometric structural (simultaneous) equation systems in which observations are measured with random error. In the psychometric literature, the latter class of models goes under the acronym LISREL (Linear Structural Relationships). Joreskog and Sorbom (1984) have developed methods for specifying LISREL model structure, algorithms for calculating various types of parameter estimators, and a framework for interpretation of results. As with econometric structural equation systems, applications of LISREL span a wide range of social science fields, marketing, psychology, sociology, and organizational behavior among them.

A Bayesian treatment of identification began with Dawid's (1979) and Kadane's (1974) studies of identification in Bayesian theory and Dreze's (1972, 1974) analyses of econometric simultaneous equation systems from a Bayesian perspective. Identification through a Bayesian lens is currently a vigorous area of research, yielding new insights into how to construct meaningful prior distributions for parameters.

2. Example 1

Table 1. Double dichotomy model for respondents and non-respondents

              Respondent   Non-respondent
Favorable     pFR          pFS
Unfavorable   pUR          pUS

A sample survey is conducted to determine attitudes of individuals in a population toward a particular subject. The sample frame consists of N individuals, each of whose response is classified as either ‘Favorable’ or ‘Unfavorable.’ This sample yields NR respondents and NS = N – NR non-respondents. Of the NR respondents, NFR are classified as ‘Favorable’ and NUR = NR – NFR are classified as ‘Unfavorable.’ In the absence of information bearing directly on the fraction of non-respondents who would be classified as ‘Favorable,’ the statistics available for inference are NFR, NUR, and NS, subject to NFS + NUS = NS and NFR + NUR + NS = N, where NFS and NUS denote the (unobservable) numbers of non-respondents who would be classified as ‘Favorable’ and ‘Unfavorable,’ respectively. Suppose that the process generating data about both respondents and non-respondents is modeled as a double dichotomy with probabilities pFR, pUR, pFS, pUS > 0 and pFR + pUR + pFS + pUS = 1, as shown in Table 1. Table 2 shows the corresponding model for the observed data.

Table 2. Model for the observed data (non-response cells collapsed)

Favorable respondent      pFR
Unfavorable respondent    pUR
Non-respondent            pS = pFS + pUS

The probability distribution for observations NFR, NUR and NS in Table 1 is proportional to

(pFR)^NFR (pUR)^NUR (pFS + pUS)^NS

Observations NFR, NUR, and NS are sufficient for joint inference about pFR, pUR, and pS = pFS + pUS, but even an infinite sample will not lead to knowledge of pFS and pUS with certainty: pFS and pUS are not identifiable in the presence of non-response, as an infinite sample yields knowledge only that pFS and pUS satisfy pFS + pUS = pS.
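A minimal numerical sketch of this non-identifiability (all probability values hypothetical; Python with numpy assumed): two structures that split the non-response mass pS = 0.30 differently generate observed frequencies that are statistically indistinguishable at any sample size.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000  # one large sample standing in for the infinite-sample limit

# Two hypothetical structures that allocate p_S = p_FS + p_US = 0.3
# differently between the two non-response cells.
structures = [
    dict(p_FR=0.4, p_UR=0.3, p_FS=0.25, p_US=0.05),
    dict(p_FR=0.4, p_UR=0.3, p_FS=0.05, p_US=0.25),
]

for s in structures:
    # Only (N_FR, N_UR, N_S) are observable; non-respondents are collapsed.
    cell_probs = [s["p_FR"], s["p_UR"], s["p_FS"] + s["p_US"]]
    n_fr, n_ur, n_s = rng.multinomial(N, cell_probs)
    print(f"p_FS={s['p_FS']:.2f}: observed frequencies "
          f"{n_fr / N:.4f}, {n_ur / N:.4f}, {n_s / N:.4f}")

# Both structures induce the same distribution over the observables, so no
# amount of data of this type can separate p_FS from p_US.
```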

3. Definitions

A statistical ‘model’ is a family {Pθ, θ ∈ Θ} of probability distributions indexed by a parameter θ ∈ Θ. A statistical ‘structure’ is Pθ for a particular θ = θ0 ∈ Θ. Parameters θ and θ′ ∈ Θ are ‘observationally equivalent’ if and only if Pθ(A) = Pθ′(A) for all A in a measurable space J.

Definition: Θ is ‘identifiable’ if and only if θ = θ′ for all observationally equivalent pairs θ, θ′ ∈ Θ.

In practice, it is useful to couch these definitions in terms of distribution functions: let X be an observable random variable with range JX and distribution function Fθ(x) = P(X ≤ x|θ) belonging to a model {Fθ, θ ∈ Θ}. If there exists at least one pair (θ, θ′) with θ, θ′ ∈ Θ and θ ≠ θ′ such that Fθ(x) = Fθ′(x) for all x ∈ JX, then we say that Θ is ‘non-identifiable’ by X. If Θ is not non-identifiable by X, it is identifiable. These definitions of identifiability can be recast to embrace nonparametric models as well (Basu 1981, Berman 1963).


4. Example 2

In general, unobservables in an ‘econometric simultaneous equation model’ are restricted to residuals (errors in model equations). A ‘factor analysis model,’ on the other hand, possesses both unobservable latent and unobservable explanatory variables. Each is a special case of a more general ‘structural equation model’ in which a ‘latent variable model’ displaying interactions among latent variables and a ‘measurement model’ specifying how observable variables are related to latent variables are conjoined. Properties of the covariance matrix of observable variables are at center stage, and structural equation test procedures are designed to measure the quality of fit between an observed (sample) covariance matrix Σ and a model-based covariance matrix Σ(θ) indexed by a vector θ of model parameters.

Even when the structure of relations between observable variables and latent variables and the structure of relations among latent variables are both linear, the model covariance matrix Σ(θ) may be a complicated, nonlinear function of elements of the parameter vector θ. In this setting, for an element of the parameter vector θ to be identified, it must be an explicit, known function of one or more elements of the sample covariance matrix Σ under the hypothesis that Σ = Σ(θ). (See Bollen (1989) for a comprehensive treatment and Hoyle (1995) for applications.) If the PDS matrix Σ(θ) is (k × k), it has at most k(k + 1)/2 functionally independent components, so for θ to be identifiable it can possess at most k(k + 1)/2 functionally independent components. This is a generic, easy-to-apply, necessary but not sufficient condition for identifiability of θ.
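This counting condition is easy to mechanize. Below is a small sketch of the check (the parameter counts are hypothetical); in the structural equation literature it is sometimes called the t-rule (Bollen 1989).

```python
def counting_condition_ok(n_free_params: int, k: int) -> bool:
    """Necessary (but not sufficient) condition for identifiability:
    a (k x k) symmetric covariance matrix Sigma(theta) supplies at most
    k(k + 1)/2 functionally independent moment equations, so theta can
    carry at most that many functionally independent components."""
    return n_free_params <= k * (k + 1) // 2

# With k = 8 observed variables there are 8 * 9 / 2 = 36 usable moments:
print(counting_condition_ok(20, 8))  # True: may be identifiable
print(counting_condition_ok(40, 8))  # False: certainly not identifiable
```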

In the notation of Bollen (1989), we have a

Latent Variable Model:

η = B*η + Γξ + ζ

The (m × m) coefficient matrix B* induces interactions among the (m × 1) latent endogenous variables η, the (m × r) coefficient matrix Γ establishes the effect of the (r × 1) exogenous variables ξ on η, and the (m × 1) ζ represents a random residual error term assumed to be uncorrelated with ξ and to possess mean zero and finite (PDS) variance matrix Ψ. The (r × r) PDS matrix Φ = Var(ξ) denotes the covariance matrix of the exogenous variables. In addition, B = I – B* is assumed to be nonsingular.

Measurement Model:

y = Λy η + ε,  x = Λx ξ + δ

The elements of the (p × m) coefficient matrix Λy determine the magnitude of the effect of levels, and changes in levels, of latent variables η on observable variables y; the (q × r) matrix Λx relates exogenous variables ξ to observable variables x in a similar fashion. Residual errors (p × 1) ε and (q × 1) δ are random variables, generally assumed to be uncorrelated with one another, with η, and with ξ, and to possess common mean zero but possibly different (PDS) covariance matrices Θε and Θδ, respectively.

To be specific, focus on the composition of the variance matrix Var(y) of y as a function of latent variable model parameters B, Γ, Φ, and Ψ and measurement model parameters Λy and Θε:

Var(y) = Λy B⁻¹ (ΓΦΓ′ + Ψ) (B⁻¹)′ Λy′ + Θε

The (p × p) PDS matrix Var(y) possesses p(p + 1)/2 functionally independent components. The variance matrix Var(ε) = Θε alone also possesses p(p + 1)/2 functionally independent components and, in the absence of restrictions other than that Φ and Ψ are PDS, the matrices Λy, B, Γ, Φ, and Ψ possess m(m + 1)/2 + r(r + 1)/2 + m(m + p + r) additional functionally independent components. As the above equation suggests, it can be a formidable task to verify by direct algebraic analysis whether or not a particular model parameter is, or is not, identifiable.
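To make the mapping θ ↦ Σ(θ) concrete, the following sketch (numpy; all matrices hypothetical) assembles the implied Var(y) from latent variable and measurement model parameters. Asking whether a component of θ is identified amounts to asking whether this map can be inverted for that component.

```python
import numpy as np

def implied_var_y(B_star, Gamma, Phi, Psi, Lambda_y, Theta_eps):
    """Implied covariance of y for the latent variable model
    eta = B* eta + Gamma xi + zeta and measurement model
    y = Lambda_y eta + epsilon, with B = I - B*:
        Var(y) = Lambda_y B^{-1} (Gamma Phi Gamma' + Psi) (B^{-1})' Lambda_y'
                 + Theta_eps."""
    B_inv = np.linalg.inv(np.eye(B_star.shape[0]) - B_star)
    var_eta = B_inv @ (Gamma @ Phi @ Gamma.T + Psi) @ B_inv.T
    return Lambda_y @ var_eta @ Lambda_y.T + Theta_eps

# Hypothetical dimensions: m = 2 latent endogenous, r = 1 exogenous,
# p = 3 observed indicators.
B_star = np.array([[0.0, 0.5], [0.0, 0.0]])
Gamma = np.array([[1.0], [0.3]])
Phi = np.array([[2.0]])
Psi = np.diag([1.0, 0.5])
Lambda_y = np.array([[1.0, 0.0], [0.8, 0.0], [0.0, 1.0]])
Theta_eps = np.diag([0.3, 0.3, 0.2])
print(implied_var_y(B_star, Gamma, Phi, Psi, Lambda_y, Theta_eps))
```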

Clearly, parameter matrices Λy, B, Γ, Φ, and Θε cannot be expressed as unique functions of elements of Var(y) unless restrictions are placed upon them. When one or more components of B cannot be recovered uniquely as functions of elements of Var(y), such components are called ‘under-identified’ or ‘non-identified.’ A central task, then, is to select constraints on latent variable model parameters and measurement model parameters that render desired parameters ‘identifiable.’

If a priori restrictions on latent variable and measurement model parameters are specified in a way that leads to two or more distinct representations of some components of B as functions of components of Var(y), those components of B are called ‘over-identified.’ It can happen, of course, that some elements of B are identified, some are under-identified, and others are over-identified. Bartels (1985) provides an easily accessible treatment of these concepts.

A common form of a priori constraint is to ‘specify’ that some parameter values equal zero. For example, suppose the modeler assumes that y and x are measured without error, with Λy = (m × m) I, Λx = (r × r) I, and Θε = Θδ = 0; the variance of y then becomes

Var(y) = B⁻¹ (ΓΦΓ′ + Ψ) (B⁻¹)′

If, in turn, as is often the case in econometric applications, exogenous variables x are assumed to be predetermined and known with certainty, Φ = 0. With all of these restrictions in force, η = y and ξ = x, and the latent variable model becomes By = Γz + ζ, where z denotes the predetermined variables. Then observable y = Πz + u, with Π = B⁻¹Γ, u = B⁻¹ζ, and Var(y) = B⁻¹Ψ(B⁻¹)′ = Ω, is called the ‘reduced form’ of this model.

Provided that predetermined variables z are uncorrelated with disturbances u (weaker assumptions are possible) and that the asymptotic variance matrix of the z’s exists, both Π and Ω can be consistently estimated by ordinary least squares. However, the distribution of y given z now depends exclusively on parameters Π and Ω. Premultiplication of both sides of By = Γz + ζ by any nonsingular (m × m) matrix leads to the same reduced form, so all such structures are observationally equivalent and, without further restrictions, the system is under-identified.
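The observational equivalence argument is easy to verify numerically. In the sketch below (hypothetical, randomly drawn matrices), premultiplying the structure (B, Γ) by an arbitrary nonsingular M leaves the reduced form coefficients Π = B⁻¹Γ unchanged.

```python
import numpy as np

rng = np.random.default_rng(1)
m, r = 2, 3
# A hypothetical structure (B, Gamma); B is nonsingular with probability one.
B = np.eye(m) + 0.3 * rng.standard_normal((m, m))
Gamma = rng.standard_normal((m, r))

# Premultiplying the structural equations by any nonsingular M gives a new
# structure (M B, M Gamma) with the same reduced form Pi = B^{-1} Gamma.
M = np.eye(m) + 0.5 * rng.standard_normal((m, m))
Pi_original = np.linalg.solve(B, Gamma)
Pi_transformed = np.linalg.solve(M @ B, M @ Gamma)
print(np.allclose(Pi_original, Pi_transformed))  # True
```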

Fisher (1966) provides an elegant and thorough exposition of conditions for identifiability of linear simultaneous equation system model parameters. Among the most important are the ‘order’ and ‘rank’ conditions established by Koopmans et al. (1950). Suppose interest focuses on a single equation and that exclusion restrictions alone on y and z are in force. A (necessary but not sufficient) ‘order condition’ for the parameters of this equation to be identifiable is that the number of variables excluded from it be greater than or equal to m – 1, one less than the number of components in (endogenous) y. Equivalently, the total number of excluded predetermined variables must be greater than or equal to the number of endogenous variables not excluded from this equation, less one (Fisher 1966, pp. 40–1).

The following argument, due to Fisher, is a transparent explanation of the origin of the ‘rank condition’ for linear simultaneous equation systems, first proved by Koopmans et al. (1950). Exclusion of the lth element of the first row a1 of the m × (m + r) matrix A = [B, Γ] is representable as a1c(l) = 0, where c(l) is an (m + r) × 1 column vector with a 1 at element l and zeros elsewhere. Thus k such exclusions of distinct variables may be written, in terms of an (m + r) × k matrix C, as a1C = 0. Clearly, if a1 is a linear combination of other rows of A, it is not identifiable. Thus, a sufficient condition for a1 to be identified is that xAC = 0 holds only for (1 × m) x of the form x = (α, 0, …, 0), α a scalar. In the absence of other a priori restrictions on elements of a1, this condition is also necessary. When it obtains, the rank of AC must be m – 1. The order condition follows from the observation that the rank of AC must be less than or equal to that of C, so the number of independent restrictions imposed by C must be at least m – 1.
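Both conditions reduce to elementary linear algebra and can be checked mechanically. A sketch follows (hypothetical two-equation system; numpy); the selection matrix C is built exactly as in Fisher's argument above, using 0-based column indices.

```python
import numpy as np

def order_condition(n_excluded: int, m: int) -> bool:
    """Necessary order condition: at least m - 1 variables excluded."""
    return n_excluded >= m - 1

def rank_condition(A: np.ndarray, excluded: list[int], m: int) -> bool:
    """Rank condition for the first equation under exclusion restrictions:
    with C selecting the excluded columns of A = [B, Gamma] (which carry
    zeros in row 1), identification requires rank(A C) = m - 1."""
    C = np.zeros((A.shape[1], len(excluded)))
    for j, col in enumerate(excluded):
        C[col, j] = 1.0
    return np.linalg.matrix_rank(A @ C) == m - 1

# Hypothetical system with m = 2, r = 2: the first equation excludes the
# second predetermined variable (column index 3 of A).
A = np.array([[1.0, -0.5, 2.0, 0.0],
              [0.3,  1.0, 0.0, 1.5]])
print(order_condition(1, 2), rank_condition(A, [3], 2))  # True True
```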

5. Estimation

When a linear statistical model is just identified, both consistent and unbiased estimators of its parameters exist. A more general notion of estimability for linear statistical models, first proposed by Bose (1944, 1947), says that a parameter θ of such a model is (linearly) ‘estimable’ if there exists a linear function of the observations with expectation equal to θ for each θ in a prespecified set Θ. This special case of identifiability, apparently developed independently of the more general theory of identification outlined in this review, has generated a large literature of its own.
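Bose's criterion has a simple computational form for a linear model with E[y] = Xβ: a combination λ′β is estimable if and only if λ lies in the row space of the design matrix X. The sketch below (a hypothetical one-way layout with two groups) illustrates that non-estimability of µ alone is precisely a non-identifiability of the over-parametrized design.

```python
import numpy as np

def is_estimable(lam: np.ndarray, X: np.ndarray) -> bool:
    """lam' beta admits a linear unbiased estimator a'y (with E[y] = X beta)
    iff lam lies in the row space of X, i.e., appending lam as a row does
    not raise the rank of X."""
    return np.linalg.matrix_rank(np.vstack([X, lam])) == np.linalg.matrix_rank(X)

# One-way layout y_ij = mu + alpha_i + e_ij with two groups of two:
X = np.array([[1, 1, 0], [1, 1, 0], [1, 0, 1], [1, 0, 1]], dtype=float)
print(is_estimable(np.array([1.0, 0.0, 0.0]), X))   # False: mu alone
print(is_estimable(np.array([1.0, 1.0, 0.0]), X))   # True: mu + alpha_1
print(is_estimable(np.array([0.0, 1.0, -1.0]), X))  # True: the contrast
```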

Deistler and Seifert (1978) address the interplay between identification and estimability for a quite general (possibly nonlinear) version of an econometric structural equation model without measurement error. They answer the question ‘When, for such systems, does identifiability imply existence of consistent estimators?’

Joreskog (1973) and Wiley (1973) were among the first to specify practical procedures for estimation of parameters by analysis of the covariance structure of observations generated by a general structural equation model of the form described in Example 2. For a detailed description of parameter estimation via maximum likelihood, instrumental variables and least squares see, for example, Joreskog and Sorbom (1984).

The article ‘Simultaneous Equation Estimates (Exact and Approximate), Distribution of’ is a crisp summary of methods designed to recover consistent estimates of structural equation parameters, such as Full Information Maximum Likelihood (FIML), Limited Information Maximum Likelihood (LIML), k-class estimators, and two- and three-stage least squares (2SLS, 3SLS). A more leisurely presentation of econometric simultaneous equation system identification issues, couched in terms of a two-equation supply and demand model, and of the nature of the estimation problems they pose appears in ‘Simultaneous Equation Estimation: Overview.’

6. Bayesian Identification

Non-Bayesians typically assign exact numerical values to a subset of parameters of a statistical model in order to achieve identification of one or more other parameters. Fisher (1961) suggested broadening this special type of a priori specification by replacing exact numerical values with probabilistic constraints. Dreze (1962) took up the challenge in his treatment of Bayesian identification for simultaneous equation estimation models. As before, let By + Γz = u be a ‘structural equation system’ indexed by matrix parameters consisting of nonsingular B, Γ, and PDS Var(u) = Σ, with reduced form y = Πz + v, Π = −B⁻¹Γ and disturbance variance Var(v) = B⁻¹Σ(B⁻¹)′ = Ω. Because the likelihood function for observations y can be written as a function of reduced form parameters Π and Ω alone, full knowledge of the exact numerical values of elements of Π and Ω does not permit recovery of unique values for elements of B, Γ, and Σ. The map from Π and Ω to B, Γ, and Σ is one to many. In effect, all nonsingular matrices B are observationally equivalent, and B is identifiable only in the special case when f(B|Π, Ω) is concentrated on a single point in the space of nonsingular matrices B. Nevertheless, one can assign a prior density g(B, Γ, Σ) to structural parameters, observe data (Y, Z), and compute a posterior density f(B, Γ, Σ|Y, Z) for B, Γ, and Σ whether or not the model is identified in a classical sense. Because identifiability is a property of the likelihood function alone, if a proper prior distribution is assigned to a complete set of parameters (B, Π, and Ω here), the posterior for these parameters is proper whether or not the likelihood function is identified. Such priors must be chosen judiciously: too tight a prior will anchor the posterior on the prior until sample size becomes very large; a very loose proper prior may fail to reflect relevant prior knowledge.

In order to streamline computation of posteriors, Raiffa and Schlaifer (1961) introduced the concept of ‘natural conjugate’ priors. The functional form of a natural conjugate prior is derived by interpreting parameter as sufficient statistic and sufficient statistic as parameter in the likelihood function. A consequence is that both prior and posterior possess the same functional form, a form dictated by the shape of the likelihood function. For simultaneous equation systems, multivariate-t, matric-t, and poly-t posterior distributions for parameters follow and figure prominently in work by Dreze (1976), Dreze and Morales (1976), Dreze and Richard (1983), and Mouchard (1976).

To minimize the impact of a priori evidence or, in some cases, to exploit invariance with respect to mapping from one representation of a parameter to another, a diffuse (vague, noninformative) prior can be the prior of choice. However, in some settings this choice comes at a cost, as a diffuse prior may lead to an improper posterior. Lindley and El Sayeed (1968) foreshadow much recent work on the character of diffuse priors. They were perhaps the first to recognize several important features of prior-to-posterior analysis. First, a proper prior always yields a proper posterior. Second, ‘… in the functional relationship problem [improper priors] can cause considerable difficulties … .’ Third, even when the likelihood function is not identifiable, observed data can lead to a posterior that is informative, i.e., different from the prior. Fourth, ‘Uncritical use of improper distributions can lead to inconsistent estimates.’

A simple example often used to introduce the idea of nonidentifiability highlights issues raised by diffuse priors.

7. Example 3

Suppose observations are independent, identically distributed normal with mean θ = µ1 + µ2 and variance 1. Then µ1 and µ2 are non-identifiable. Nevertheless, assignment of a proper prior to µ1 and µ2 leads to a proper posterior. As the number of observations approaches infinity, the posterior for θ concentrates on the true value θ0 of θ, and the (proper) posterior for µ1 and µ2 concentrates on the line θ0 = µ1 + µ2.

If, however, a diffuse prior dµ1, – ∞ < µ1 < ∞, is assigned to µ1 and a proper prior g(θ) to θ, the posterior for µ1 remains diffuse irrespective of sample size. Alternatively, replace the proper prior g(θ) for θ with a proper prior h(µ2) for µ2. Then, provided the prior for µ1 remains diffuse, the posterior for µ2 is the same as the prior h(µ2), no matter how many observations are made.
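A conjugate sketch of the proper-prior case of Example 3 (hypothetical true values µ1 = 1, µ2 = 2; numpy only): with proper N(0, 1) priors on µ1 and µ2, the posterior mean of θ converges to θ0 = 3, while the posterior standard deviation of µ1 never falls below √(1/2), the spread of the conditional prior of µ1 given θ; the posterior concentrates on a line, not at a point.

```python
import numpy as np

rng = np.random.default_rng(2)
mu1_true, mu2_true = 1.0, 2.0  # hypothetical truth, so theta0 = 3

for n in (10, 1_000, 100_000):
    y = rng.normal(mu1_true + mu2_true, 1.0, size=n)
    # Proper iid N(0, 1) priors on mu1 and mu2 imply theta ~ N(0, 2);
    # the conjugate normal update given n unit-variance observations:
    post_prec = 0.5 + n
    theta_mean = n * y.mean() / post_prec
    theta_var = 1.0 / post_prec
    # A priori mu1 | theta ~ N(theta / 2, 1/2), and the likelihood involves
    # theta only, so Var(mu1 | y) = 1/2 + Var(theta | y) / 4 never falls
    # below 1/2.
    print(f"n={n:>7}  E[theta|y]={theta_mean:.3f}  "
          f"sd(mu1|y)={np.sqrt(0.5 + theta_var / 4):.3f}")
```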

These features of Example 3 suggest that uncritical choice of a diffuse prior can lead to a posterior with unsatisfactory or even unacceptable properties, in particular when the parameter space is of large dimension. For example, Kleibergen and Van Dijk (1997) point out that for simultaneous equation systems ‘… the order condition reflects overall identification while the rank condition reflects local (non) identification.’ They show that local non-identification coupled with a diffuse prior leads to posterior pathology when one attempts Limited Information analysis of a single equation and suggest an alternative approach. (See Gelfand and Sahu (1999) for other examples.)

Dawid (1979) and Kadane (1974) use conditional independence as a device for a definition of nonidentifiability which embraces both Bayesian and non-Bayesian perspectives. Suppose that an observable X possesses a distribution Fθ determined by a parameter θ which we partition into two pieces θ1 and θ2. If the distribution of X is fully determined by θ1 alone, θ = (θ1, θ2) is non-identifiable.

When a Bayesian assigns a prior distribution g(θ) to θ, Bayes' theorem tells her that the posterior distribution f(θ|y) for a parameter θ given data y is proportional to the product of the likelihood function L(θ|y) for θ given y and the prior g(θ) assigned to θ. The posterior distribution for θ = (θ1, θ2) given data y under the prior g(θ) = g(θ2|θ1)h(θ1) is

f(θ1, θ2 | y) ∝ L(θ1, θ2 | y) g(θ2 | θ1) h(θ1)

If θ2 is absent from the likelihood function,

f(θ1, θ2 | y) ∝ L(θ1 | y) g(θ2 | θ1) h(θ1)

so that while data will modify her prior judgements about θ1, no amount of observable data y will change her a priori probability assessment g(θ2|θ1) for θ2 given θ1. Data y are, however, unconditionally informative about θ2; as data accrue and we learn more about θ1, prior judgements about θ2 are modified in accord with

f(θ2 | y) = ∫ g(θ2 | θ1) h(θ1 | y) dθ1

provided that both g(θ2|θ1) and h(θ1|y) are proper distributions.
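A Monte Carlo sketch of this mixture update, in a hypothetical setup chosen only so that h(θ1|y) has closed form: the conditional prior g(θ2|θ1) is never revised, yet the marginal posterior for θ2 moves because the posterior for θ1 does.

```python
import numpy as np

rng = np.random.default_rng(3)
S = 100_000  # Monte Carlo draws

# Hypothetical setup: y | theta1 ~ Binomial(n, theta1); theta2 is absent
# from the likelihood, with conditional prior theta2 | theta1 ~ U(0, theta1)
# and marginal prior theta1 ~ U(0, 1).
n, y = 50, 36

# Prior draws of theta2: E[theta2] = E[theta1] / 2 = 0.25.
theta1_prior = rng.uniform(0.0, 1.0, S)
theta2_prior = rng.uniform(0.0, theta1_prior)

# h(theta1 | y) is Beta(y + 1, n - y + 1); mixing the unchanged conditional
# prior over it, as in the display above, updates theta2 unconditionally.
theta1_post = rng.beta(y + 1, n - y + 1, S)
theta2_post = rng.uniform(0.0, theta1_post)

print(f"prior E[theta2]     = {theta2_prior.mean():.3f}")  # about 0.25
print(f"posterior E[theta2] = {theta2_post.mean():.3f}")   # about 0.36
```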

Return to Example 1. As Bayesians (noting that pFR + pUR + pFS + pUS = 1), we assign a prior distribution g(pFR, pUR, pFS) to (pFR, pUR, pFS). If observed data y = (NS, NFR, NUR) are generated in accord with Table 2, the likelihood function for pFR, pUR, pFS,

L(pFR, pUR, pFS | y) ∝ (pFR)^NFR (pUR)^NUR (1 − pFR − pUR)^NS

does not depend on pFS. Consequently,

f(pFS | pFR + pUR = α, y) = g(pFS | pFR + pUR = α)

This matches our intuition: given pFR + pUR = α, we know with certainty that pFS + pUS = 1 – α, pFS, pUS ≥ 0. Data of type (NS, NFR, NUR) refine our knowledge about the value of α, but even an infinite sample of this type will not lead to certain knowledge of pFS. The conditional distribution of pFS given pFR + pUR = α prior to observing y is the same as the conditional distribution of pFS given pFR + pUR = α posterior to observing y.
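The same updating can be exhibited in closed form for Example 1 under a hypothetical Dirichlet prior on the four cell probabilities (the prior weights and observed counts below are illustrative): the data sharpen the posterior for (pFR, pUR, pS), while the conditional law of w = pFS/pS remains exactly its prior Beta distribution.

```python
import numpy as np

rng = np.random.default_rng(4)
S = 100_000
a_FR, a_UR, a_FS, a_US = 2.0, 2.0, 1.0, 1.0  # hypothetical Dirichlet prior
N_FR, N_UR, N_S = 420, 310, 270              # hypothetical observed counts

# The likelihood touches only (p_FR, p_UR, p_S), so under a Dirichlet prior
# the posterior factors: (p_FR, p_UR, p_S) | y is Dirichlet with the counts
# added, while w = p_FS / p_S keeps its prior Beta(a_FS, a_US) law exactly.
draws = rng.dirichlet([a_FR + N_FR, a_UR + N_UR, a_FS + a_US + N_S], S)
w = rng.beta(a_FS, a_US, S)   # untouched by the data
p_S = draws[:, 2]
p_FS = w * p_S

print(f"E[p_S  | y] = {p_S.mean():.3f}")   # sharpened by the data
print(f"E[w    | y] = {w.mean():.3f}")     # still the prior mean 0.5
print(f"E[p_FS | y] = {p_FS.mean():.3f}")
```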

While the theory of identification in a classical statistical context is now well understood, the story of identification from a Bayesian viewpoint is still unfolding. Markov chain Monte Carlo (MCMC) methods breathe new life into Bayesian inference, allowing efficient computation of normalizing constants and parameter estimates for complex models. An explosion of Bayesian applications of MCMC methods surfaces new identification issues. In their study of Bayesian identifiability for ‘Generalized Linear Models’ (GLMs), Gelfand and Sahu (1999) present conditions under which assignment of an improper prior distribution to some components of a parameter set leads to a proper posterior distribution. More particularly, if a posterior distribution for parameters is improper, under what conditions does there exist a lower-dimensional parameter set that possesses a proper posterior distribution? Ghosh et al. (2000) prove that when data y is uninformative about θ2 given θ1, so that θ2 is not identifiable, the posterior density f(θ2, θ1|y) is proper if and only if both h(θ1|y) and g(θ2|θ1) are proper, a fact that Gelfand and Sahu use to establish that, for GLMs at least, even when the posterior distribution for all parameters is improper, there are embedded lower-dimensional models that possess a proper posterior distribution, although in general that distribution is not uniquely determined. (See Gelfand and Sahu (1999) for conditions that guarantee uniqueness.) They warn, however, that model fitting using MCMC methods is ‘… somewhat of an art form, requiring suitable trickery and tuning to obtain results in which we can have confidence.’

Bibliography:

  1. Bartels R 1985 Identification in Econometrics. The American Statistician 39(2): 102–4
  2. Basu A P 1981 In: Taillie C, Patil G P, Baldessari B A (eds.) Statistical Distributions in Scientific Work, Vol. 5. D Reidel, Dordrecht, The Netherlands, pp. 335–48
  3. Berman S M 1963 Annals of Mathematical Statistics 34: 1104–6
  4. Bollen K A 1989 Structural Equations with Latent Variables. Wiley, New York
  5. Bose R C 1944 Proceedings of the 31st Indian Sci. Congr., Delhi, Vol. 3, pp. 5–6
  6. Bose R C 1947 Proceedings of the 34th Indian Sci. Congr., Delhi, Part II, pp. 1–25
  7. Bowden R 1973 The theory of parametric identification. Econometrica 41(6): 1069–74
  8. Dawid A P 1979 Conditional independence in statistical theory (with discussion). Journal of the Royal Statistical Society Ser. B 41: 1–31
  9. Deistler M, Seifert H-G 1978 Identifiability and consistent estimability in econometric models. Econometrica 46(4): 969–80
  10. Dreze J H 1962 The Bayesian approach to simultaneous equation estimation. ONR Research Memorandum No. 67. Northwestern University
  11. Dreze J H 1972 Econometrics and Decision Theory. Econometrica 40: 1–17
  12. Dreze J H 1974 Bayesian theory of identification in simultaneous equations models. Studies in Bayesian Econometrics and Statistics. North-Holland, Amsterdam, pp. 159–174 (presented to Third NBER-NSF Seminar on Bayesian Inference Econometrics, Harvard Univ., October 1971)
  13. Dreze J H 1976 Bayesian limited information analysis of the simultaneous equations model. Econometrica 44: 1045–75
  14. Dreze J H, Morales J A 1976 Bayesian full information analysis of simultaneous equations. Journal of the American Statistical Association 71(376): 919–23
  15. Dreze J H, Richard J F 1983 Bayesian analysis of simultaneous equation systems. Handbook of Econometrics 1
  16. Fisher F M 1961 On the cost of approximate specification in simultaneous equation estimation. Econometrica 29(2)
  17. Fisher F M 1966 The Identification Problem in Econometrics. McGraw-Hill, New York
  18. Gelfand A, Sahu S K 1999 Identifiability, improper priors, and Gibbs sampling for generalized linear models. Journal of the American Statistical Association 94(445): 247–53
  19. Ghosh M, Chen M H, Ghosh M, Agresti A 2000 Noninformative priors for one parameter item response models. Journal of Statistical Planning and Inference 88: 99–115
  20. Hannan E J 1969 The identification of vector mixed autoregressive moving average systems. Biometrika 56: 223–5
  21. Hannan E J 1971 The identification problem for multiple equation systems with moving average errors. Econometrica 39: 751–65
  22. Haavelmo T 1944 The probability approach in econometrics. Econometrica 12, Supplement
  23. Hoyle R H (ed.) 1995 Structural Equation Modeling: Concepts, Issues and Applications. Sage, Thousand Oaks, CA
  24. Hurwicz L 1950 Generalization of the concept of identification. Statistical Inference in Dynamic Economic Models. Cowles Commission Monograph 10. Wiley, New York
  25. Johnson W O, Gastwirth J L 1991 Bayesian inference for medical screening tests: Approximations useful for the analysis of AIDS. Journal of the Royal Statistical Society Ser. B 53: 427–39
  26. Joreskog K G 1973 Analysis of covariance structures. In: Krishnaiah P R (ed.) Multivariate Analysis III. Academic Press, New York
  27. Joreskog K G, Sorbom D 1984 LISREL VI Analysis of Linear Structural Relations by Maximum Likelihood, Instrumental Variables, and Least Square Methods. User’s Guide. Department of Statistics, University of Uppsala, Uppsala, Sweden
  28. Kadane J B 1974 The role of identification in Bayesian theory. Studies in Bayesian Econometrics and Statistics. North-Holland, Amsterdam, pp. 175–91 (presented to Winter Meeting of the Econometric Society, Toronto, December 1972)
  29. Kleibergen F, Van Dijk H K 1997 Bayesian Simultaneous Equations Analysis using Reduced Rank Structures. Erasmus University Technical Report 9714 A
  30. Koopmans T C, Reiersøl O 1950 The identification of structural characteristics. Annals of Mathematical Statistics 21: 165–81
  31. Koopmans T C, Rubin H, Leipnik R B 1950 Measuring the equation systems of dynamic economics. Statistical Inference in Dynamic Economic Models. Cowles Commission Monograph 10. Wiley, New York
  32. Lazarsfeld P F 1950 The logical and mathematical foundation of latent structure analysis; The interpretation of some latent structures. In: Measurement and Prediction. Studies in Social Psychology in World War II, Vol. 4. Princeton University Press, Princeton, NJ
  33. Leamer E E 1978 Specification Searches. Wiley, New York
  34. Mouchard J-F 1976 Posterior and Predictive Densities for Simultaneous Equation Models. Springer-Verlag, New York
  35. Neath A A, Samaniego F J 1997 On the efficacy of Bayesian inference for nonidentifiable models. The American Statistician 51(3): 225–32
  36. Pigou C 1910 A method of determining the numerical values of elasticities of demand. Economic Journal 20: 636–40
  37. Poirier D J 1996 Revising Beliefs in Non-Identified Models. Technical Report, University of Toronto, Canada
  38. Raiffa H, Schlaifer R O 1961 Applied Statistical Decision Theory. Harvard Business School, Boston
  39. Rothenberg T J 1971 Identification in parametric models. Econometrica 39: 577–91
  40. Thurstone L L 1935 The Vectors of Mind. University of Chicago Press, Chicago
  41. Thurstone L L 1947 Multiple-Factor Analysis. University of Chicago Press, Chicago
  42. Van der Genugten B B 1977 Statistica Neerlandica 31: 69–89
  43. Wald A 1950 Note on the identification of economic relations. Statistical Inference in Dynamic Economic Models. Cowles Commission Monograph 10. Wiley, New York
  44. Wiley D E 1973 The identification problem for structural equation models with unmeasured variables. In: Goldberger A S, Duncan O D (eds.) Structural Equation Models in the Social Sciences. Seminar Press, New York