An Introduction to Causal Inference by Judea Pearl

Published: 2014-04-29T00:00:00+00:00


5.2. Problem formulation and the demystification of “ignorability”

The main drawback of this black-box approach surfaces in problem formulation, namely, the phase where a researcher begins to articulate the “science” or “causal assumptions” behind the problem of interest. Such knowledge, as we have seen in Section 1, must be articulated at the outset of every problem in causal analysis – causal conclusions are only as valid as the causal assumptions upon which they rest.

To communicate scientific knowledge, the potential-outcome analyst must express assumptions as constraints on P*, usually in the form of conditional independence assertions involving counterfactual variables. For instance, in our example of Fig. 5, to communicate the understanding that Z is randomized (hence independent of UX and UY), the potential-outcome analyst would use the independence constraint Z ⊥⊥ {Yz1, Yz2, ..., Yzk}.[14] To further formulate the understanding that Z does not affect Y directly, except through X, the analyst would write a so-called “exclusion restriction”: Yxz = Yx.
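To make these two constraints concrete, the sketch below encodes a toy structural model with the qualitative features just described – Z randomized, no direct Z → Y edge – and verifies by exact enumeration that Z is independent of the counterfactual Yz. The structural equations and fair-coin noise distributions are invented for illustration, not taken from the text; note also that the exclusion restriction Yxz = Yx holds structurally here, since the equation for Y takes no Z argument.

```python
from itertools import product

# Toy structural model encoding the stated assumptions for Fig. 5:
# Z -> X -> Y, with Z randomized and no direct Z -> Y edge. The
# structural equations and fair-coin noises are invented for this sketch.
def f_x(z, u_x):
    return (z + u_x) % 2

def f_y(x, u_y):
    return (x + u_y) % 2

def y_counterfactual(z_set, u_x, u_y):
    """Y_z: the value Y would take had Z been set to z_set."""
    return f_y(f_x(z_set, u_x), u_y)

# Randomization of Z implies Z independent of Y_z: enumerate the exact
# joint distribution of (observed Z, counterfactual Y_z) and check that
# it factorizes into its marginals.
for z_set in (0, 1):
    joint = {}
    for z, u_x, u_y in product((0, 1), repeat=3):
        key = (z, y_counterfactual(z_set, u_x, u_y))
        joint[key] = joint.get(key, 0.0) + 1 / 8
    for z in (0, 1):
        for y in (0, 1):
            p_z = sum(v for (zz, _), v in joint.items() if zz == z)
            p_y = sum(v for (_, yy), v in joint.items() if yy == y)
            assert abs(joint.get((z, y), 0.0) - p_z * p_y) < 1e-12
```

The check passes precisely because Z and the noises UX, UY are mutually independent, which is what “Z is randomized” asserts in this model.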

A collection of constraints of this type might sometimes be sufficient to permit a unique solution to the query of interest. For example, if one can plausibly assume that, in Fig. 4, a set Z of covariates satisfies the conditional independence

(35) Yx ⊥⊥ X | Z

(an assumption termed “conditional ignorability” by Rosenbaum and Rubin (1983)), then the causal effect P(y|do(x)) = P*(Yx = y) can readily be evaluated to yield

(36) P(y|do(x)) = P*(Yx = y)
     = Σz P*(Yx = y | z) P(z)
     = Σz P*(Yx = y | x, z) P(z)    (using (35))
     = Σz P*(Y = y | x, z) P(z)     (using the consistency rule (34))
     = Σz P(y | x, z) P(z)

The last expression contains no counterfactual quantities (thus permitting us to drop the asterisk from P*) and coincides precisely with the standard covariate-adjustment formula of Eq. (25).
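As a numerical sanity check on this derivation, the following sketch enumerates a small hypothetical binary model in which Z is an admissible covariate, and confirms that the adjustment formula, computed purely from the observational joint distribution, recovers the interventional distribution, while plain conditioning on X does not. All structure and numbers below are invented for illustration.

```python
from itertools import product

# Hypothetical binary model with an admissible covariate Z:
# Z -> X, Z -> Y, X -> Y. All numeric probabilities are made up.
P_Z1 = 0.4                                   # P(Z = 1)

def p_z(z):
    return P_Z1 if z == 1 else 1 - P_Z1

def p_x_given_z(x, z):
    p1 = 0.2 + 0.5 * z                       # P(X = 1 | Z = z)
    return p1 if x == 1 else 1 - p1

def p_y_given_xz(y, x, z):
    p1 = 0.1 + 0.3 * x + 0.4 * z             # P(Y = 1 | X = x, Z = z)
    return p1 if y == 1 else 1 - p1

# Observational joint P(z, x, y) = P(z) P(x|z) P(y|x, z)
joint = {(z, x, y): p_z(z) * p_x_given_z(x, z) * p_y_given_xz(y, x, z)
         for z, x, y in product((0, 1), repeat=3)}

def p_do(y, x):
    # Ground-truth P(y | do(x)): truncated factorization drops P(x|z).
    return sum(p_z(z) * p_y_given_xz(y, x, z) for z in (0, 1))

def adjust(y, x):
    # Adjustment formula (36), computed only from the observational joint.
    total = 0.0
    for z in (0, 1):
        pz = sum(joint[(z, xx, yy)] for xx in (0, 1) for yy in (0, 1))
        pzx = sum(joint[(z, x, yy)] for yy in (0, 1))
        total += pz * (joint[(z, x, y)] / pzx)
    return total

def naive(y, x):
    # Plain conditioning P(y | x), biased here because Z confounds X and Y.
    px = sum(joint[(z, x, yy)] for z in (0, 1) for yy in (0, 1))
    return sum(joint[(z, x, y)] for z in (0, 1)) / px

print(round(p_do(1, 1), 3), round(adjust(1, 1), 3), round(naive(1, 1), 3))
# → 0.56 0.56 0.68
```

The gap between the adjusted estimate (0.56) and the naive conditional (0.68) is exactly the confounding bias that adjustment for the admissible covariate Z removes.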

We see that the assumption of conditional ignorability (35) qualifies Z as an admissible covariate for adjustment; it therefore mirrors the “back-door” criterion of Definition 3, which bases the admissibility of Z on an explicit causal structure encoded in the diagram.

The derivation above may explain why the potential-outcome approach appeals to mathematical statisticians; instead of constructing new vocabulary (e.g., arrows), new operators (do(x)) and new logic for causal analysis, almost all mathematical operations in this framework are conducted within the safe confines of probability calculus. Save for an occasional application of rule (34) or (32), the analyst may forget that Yx stands for a counterfactual quantity—it is treated as any other random variable, and the entire derivation follows the course of routine probability exercises.

This orthodoxy exacts a high cost: Instead of bringing the theory to the problem, the problem must be reformulated to fit the theory; all background knowledge pertaining to a given problem must first be translated into the language of counterfactuals (e.g., ignorability conditions) before analysis can commence. This translation may in fact be the hardest part of the problem. The reader may appreciate this aspect by attempting to judge whether the assumption of conditional ignorability (35), the key to the derivation of (36), holds in any familiar situation, say in the experimental setup of Fig. 2(a). This assumption reads: “the value that Y would obtain had X been x, is independent of X, given Z”. Even the most experienced potential-outcome expert would be unable to discern whether any subset Z of covariates in Fig.


