Bryan Caplan

bcaplan@gmu.edu

http://www.gmu.edu/departments/economics/bcaplan

Econ 637

Spring, 1998

Week 6: Applications and Reflections, I

1. Correlation and Causation
1. After doing all of this math, it is very easy to overestimate how far we have actually gotten.
2. We can describe the correlation between variables, but does this show that one thing is causing the other? Could there be third factors causing both?
3. Examples:
1. Russian doctors
2. Police and crime
3. Price of eggs and price of chickens
4. Others?
4. Could it be a pure coincidence?
1. Ex: The curse of Tecumseh.
5. Controlling for extra variables can help us to avoid confusing correlation and causation. But the danger is still there.
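The danger, and the partial remedy, can both be seen in a simulation. The sketch below is illustrative only: it assumes, hypothetically, that feed cost is a common cause of both egg and chicken prices. The raw correlation between the two prices is sizable, but the partial correlation, controlling for the confounder, is near zero.

```python
import random

random.seed(2)
n = 10_000
# Hypothetical common cause: feed cost drives both prices.
feed = [random.gauss(0, 1) for _ in range(n)]
egg_price = [f + random.gauss(0, 1) for f in feed]
chicken_price = [f + random.gauss(0, 1) for f in feed]

def corr(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    ca = [x - ma for x in a]
    cb = [x - mb for x in b]
    num = sum(x * y for x, y in zip(ca, cb))
    den = (sum(x * x for x in ca) * sum(y * y for y in cb)) ** 0.5
    return num / den

def resid(y, x):
    """Residuals from a simple regression of y on x (demeaned)."""
    my, mx = sum(y) / len(y), sum(x) / len(x)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    return [(yi - my) - b * (xi - mx) for xi, yi in zip(x, y)]

raw = corr(egg_price, chicken_price)           # sizable: looks causal
controlled = corr(resid(egg_price, feed),      # near zero once the common
                  resid(chicken_price, feed))  # cause is partialled out
print(raw, controlled)
```

Of course, this only works because the simulation hands us the confounder. Problems #1 and #2 below are precisely that, in practice, we may be unable to measure it or obtain data on it.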
6. Problem #1: Many variables are hard to quantify. Suppose, for example, that people get paid more for high ability, not education. But how do you measure "ability"? Just because you can't measure it doesn't mean it isn't important.
7. Problem #2: Data limitations. Even if you can quantify something, it may be hard to get the data. It may be easy to get data on loan approval and race, but hard to get data on credit-worthiness.
8. Problem #3: What's exogenous? In principle, IV techniques can correct for endogeneity, but only on the assumption that we know that something else is exogenous. And how do you really know that?
1. Regress cancer treatment on cancer severity. You'll see a positive correlation if you don't use a randomized selection procedure, since very sick people will also get more powerful medicine.
2. Suppose that the central bank increases the money supply in order to "accommodate the needs of trade." You observe a positive correlation between money and output. But does money cause the increase in output? You need a randomized selection procedure to get good results.
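The IV logic can be sketched in a few lines, under assumptions supplied purely for illustration: x is endogenous because an unobserved factor u shifts both x and y, so OLS is biased upward; an instrument z that moves x but is unrelated to u recovers the true coefficient. The catch above still stands: the code simply *assumes* z is exogenous.

```python
import random

random.seed(0)
n = 20_000
beta = 1.0                                   # true causal effect (assumed)
z = [random.gauss(0, 1) for _ in range(n)]   # instrument (assumed exogenous)
u = [random.gauss(0, 1) for _ in range(n)]   # unobserved confounder
x = [zi + ui + random.gauss(0, 1) for zi, ui in zip(z, u)]
y = [beta * xi + ui + random.gauss(0, 1) for xi, ui in zip(x, u)]

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)

beta_ols = cov(x, y) / cov(x, x)  # biased upward: x picks up u
beta_iv = cov(z, y) / cov(z, x)   # consistent -- IF z really is exogenous
print(beta_ols, beta_iv)
```

If the exogeneity assumption on z is wrong, beta_iv is no better than beta_ols; nothing inside the data tells you which case you are in.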
2. A Plea for Pluralism
1. In reality, (virtually) no one abandons a belief about economics or anything else just because a statistical test comes out against it.
2. To some, this merely shows that most economists (people?) are unscientific, irrational, dogmatic, etc.
3. Yet perhaps it just reflects the fact that before doing any statistics we already possess a wealth of non-statistical information from:
1. Common sense
2. Introspection
3. History
4. Thought experiments
4. Moreover, this information is often of much higher quality than the statistical evidence.
5. To many, econometrics is an atheoretical way to decide between unproven theories. But as we've learned, econometrics is itself a theory, relying on a host of untested assumptions.
6. Since econometrics relies upon untested assumptions, it merits "testing" just as much as any other purported source of information. I.e., try out econometrics in cases where you already know the answer by other means, and see if it gives the right answer.
7. Pretending that you are ready to change your mind about basic issues as soon as the next battery of econometric tests comes in has many bad consequences:
1. Cynicism - since no one does this, it appears that no one lives up to scientific standards.
2. Trickery: Econometrics gets less reliable as people torture the data to make it "come out right."
3. Dogmatism: If econometric evidence is the only evidence that counts, then people must conceal the real basis for their beliefs. If these beliefs cannot be debated, people are likely to become dogmatic about them.
4. Ignorance: If you actually abandoned all of your beliefs without econometric foundation, you would throw out most of the knowledge that you have.
3. Card, Krueger, and the Method of Natural Experiments
1. Exogeneity is almost always debatable. Dissatisfaction with standard exogeneity assumptions sparked an interest in the method of "natural experiments."
2. Card and Krueger wanted to determine the impact of the minimum wage on employment; a New Jersey decision to raise the state minimum wage provided the opportunity they were looking for.
3. [see sub-handout]
1. S-s employment rule? (Dougan)
2. GE effects on other workers? (Akerlof, Dickens, and Perry)
4. "Has Leviathan Been Bound?", I: Theory
1. Theory: Paper develops a model in which both the size of government and the probability of victory increase as the party's "endowment" of non-policy support increases. I call this the "Leviathan" hypothesis.
2. (p.17) Alternate theories:
1. Ideologue-type theories, according to which the big-G party expands government as its probability of victory grows, and the small-G party contracts government as its probability of victory grows. (Also consistent with: voter preference shift).
2. Fully-constrained theories, according to which elections are so binding that politicians have no leeway within which to change the size of government. (Also consistent with: parties have no policy preferences).
5. "Has Leviathan Been Bound?", II: Empirics
1. How to measure victory probability? % of seats of majority party.
2. Fiscal variables measured in real per-capita terms; taken from the 48 contiguous states over the 1950-1989 period.
3. Distinguishing competing hypotheses:
1. Dempercent defined as #Dem/(#Dem+#Rep). If Ideologue-type theories were true, this would be positively related to the size of G.
2. Distance defined as abs(Dempercent-.5) - the majority party's seat share in excess of an even 50/50 split. If Leviathan were true, this would be positively related to the size of G.
3. (Graphs); p.22.
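Both variables are easy to construct from seat counts; a quick sketch (the seat numbers are hypothetical):

```python
def dempercent(dem_seats, rep_seats):
    """Democratic share of two-party seats."""
    return dem_seats / (dem_seats + rep_seats)

def distance(dem_seats, rep_seats):
    """Majority party's seat share in excess of an even split."""
    return abs(dempercent(dem_seats, rep_seats) - 0.5)

# Hypothetical chamber: 60 Democrats, 40 Republicans.
print(round(dempercent(60, 40), 6))  # 0.6
print(round(distance(60, 40), 6))    # 0.1
```

Note the asymmetry the design exploits: a 60-40 Democratic chamber and a 60-40 Republican chamber have the same Distance but different Dempercent, which is what lets a joint regression separate the two hypotheses.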
4. Controls:
1. State dummies
2. Year dummies
3. Personal income
4. Federal grants
5. Regress fiscal variables on both Dempercent and Distance to "race" the hypotheses against each other.
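What the "race" amounts to, in a stripped-down sketch: simulate a panel in which, by assumption for illustration, both channels operate, then regress spending on Dempercent and Distance jointly. The panel and the coefficient magnitudes are made up; only the design mirrors the paper.

```python
import random

random.seed(1)
n = 5_000
dempct = [random.uniform(0.3, 0.7) for _ in range(n)]
dist = [abs(p - 0.5) for p in dempct]
# Assumed data-generating process: both channels matter.
g = [2.0 * p + 3.0 * d + random.gauss(0, 0.5) for p, d in zip(dempct, dist)]

def demean(v):
    m = sum(v) / len(v)
    return [x - m for x in v]

p, d, y = demean(dempct), demean(dist), demean(g)
# Two-regressor OLS via the normal equations (demeaning absorbs the intercept).
spp = sum(a * a for a in p)
sdd = sum(a * a for a in d)
spd = sum(a * b for a, b in zip(p, d))
spy = sum(a * b for a, b in zip(p, y))
sdy = sum(a * b for a, b in zip(d, y))
det = spp * sdd - spd * spd
b_dempct = (spy * sdd - sdy * spd) / det
b_distance = (spp * sdy - spd * spy) / det
print(b_dempct, b_distance)  # both positive: neither hypothesis "wins" outright
```

The paper's actual specifications add the controls listed above (state and year dummies, personal income, federal grants); they are omitted here to keep the algebra visible.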
6. Results: Both coefficients are positive and significant throughout a wide range of specifications.
7. Other results: Decomposing spending reveals some interesting patterns too.
8. My interpretation: For Democrats, power motive and ideology augment each other; for Republicans, power motive and ideology dampen each other.