Prof. Bryan Caplan

bcaplan@gmu.edu

http://www.bcaplan.com

Econ 812

 

Week 11: Behavioral Economics and Irrationality, II

I.             The Behavioral Approach and Belief Formation

A.           Last week we reviewed empirical evidence on choice theory.  This week we pursue a parallel agenda on belief formation.

B.           Belief formation gets less attention than choice theory in basic micro, but there are nevertheless definite standard assumptions, and most work relies on them.

C.           Economists, especially those who earned their Ph.D.s after the RE revolution, frequently refer to the violation of these assumptions as "irrationality," as distinguished from ignorance.  Intuitively, there are two quite different reasons you might make mistakes:

1.            Lack of information

2.            Irrationality/stupidity

D.           While the distinction is uncontroversial, in practice, economists are reluctant to blame errors on anything other than lack of information.  However, claims about rationality are empirically testable.

E.           Weakest rationality assumption: Bayesianism.  Even if you put no restrictions on agents' prior probabilities, there are testable empirical implications.  Examples (a numerical sketch follows the list):

1.            P(A&B) ≤ P(A).

2.            Bayes' Rule
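A minimal numerical sketch of both implications, with made-up probabilities (every number below is an illustrative assumption, not data):

```python
# Two testable Bayesian implications, checked on made-up probabilities.

p_A = 0.30           # P(A): e.g., "recession next year"
p_B_given_A = 0.50   # P(B|A): e.g., "unemployment rises, given a recession"
p_AB = p_B_given_A * p_A   # P(A&B) = 0.15

# 1. Conjunction constraint: P(A&B) can never exceed P(A).
assert p_AB <= p_A

# 2. Bayes' Rule: P(A|B) = P(B|A) * P(A) / P(B).
p_B_given_notA = 0.10
p_B = p_B_given_A * p_A + p_B_given_notA * (1 - p_A)   # law of total probability
p_A_given_B = p_B_given_A * p_A / p_B
print(f"P(A|B) = {p_A_given_B:.3f}")   # ~0.682: observing B raises P(A)
```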

F.            Stronger rationality assumption: RE.  Almost all modern models explicitly rely on RE, and a great deal of earlier work implicitly relies on it.  And RE has definite empirical implications (a simulation sketch follows the list):

1.            No systematic errors

2.            Errors uncorrelated with available info
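A small simulation of what these two implications look like in forecast data.  The data-generating process below is purely an assumption for illustration:

```python
# RE implications on simulated forecasts: errors average zero and are
# uncorrelated with the information agents had.  DGP is illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
info = rng.normal(size=n)                   # information available to agents
truth = 2.0 * info + rng.normal(size=n)     # outcome = signal + unforecastable noise
forecast = 2.0 * info                       # RE forecast: E[truth | info]
errors = truth - forecast

print(f"mean error:        {errors.mean():+.4f}")                     # ~0
print(f"corr(error, info): {np.corrcoef(errors, info)[0, 1]:+.4f}")   # ~0

# A biased forecaster (say, forecast = 1.5 * info) fails the second test:
# its errors contain 0.5 * info, so they correlate with available info.
```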

G.           In what sense do earlier models implicitly rely on RE?  Take a simple story about price controls.  The standard shortage prediction assumes suppliers correctly perceive the controlled price.  If suppliers systematically and persistently underestimated the price control's bite, no shortage would arise: they would keep responding optimally to the market as they imagine it.

H.           A large empirical literature has uncovered a variety of deviations not only from RE, but from elementary probability theory itself.  Once again, I will partly be playing devil's advocate, but I will also indicate some reservations along the way.

II.            Cognitive versus Motivational Biases

A.           Psychologists distinguish between two sorts of bias: cognitive and motivational.

B.           Motivational biases are biases where our emotions steer our intellectual faculties away from the sensible answer they would otherwise reach.

C.           Cognitive biases are biases where our intellectual faculties give us mistaken answers in the absence of any emotional commitment. 

D.           Many psychologists - especially those who specialize in cognitive bias - maintain that all biases are, in fact, cognitive.  These psychologists have been especially influential in economics.

E.           As you might guess, other psychologists disagree.  Their objections have received less attention from economists, but they have nevertheless had some influence.

F.            People occasionally equate cognitive biases with "not sensitive to incentives" and motivational biases with "sensitive to incentives."  But this mapping is hardly clear; incentives could work on diverse margins.

III.          Belief Perseverance and Confirmatory Biases

A.           The Bayesian framework is all about updating.  Empirically, though, a number of experiments show "belief perseverance": people stick with their initial view despite subsequent contrary evidence.

B.           What is particularly striking is that people can actually be more accurate with less information.  Someone who views the complete history of a blurry image gradually coming into focus has more trouble identifying the image than another person who saw only the later part of the history.

C.           Other experiments find an even stronger effect: Once people believe a hypothesis, they tend to grow increasingly confident in it.  Why?  They are more likely to notice confirming evidence, and to misinterpret ambiguous evidence as additional support.  This is known as "confirmatory bias."
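One way to formalize confirmatory bias (loosely in the spirit of Rabin and Schrag's model): an agent who sometimes misreads signals that contradict the current belief as supporting it.  Everything below, parameters included, is an illustrative assumption:

```python
# Confirmatory bias as misreading disconfirming signals.  Illustrative only.
import random

random.seed(1)
TRUE_STATE = "B"     # the world is actually B
P_CORRECT = 0.6      # signals match the true state 60% of the time
MISREAD = 0.5        # chance a signal against the current belief is
                     # perceived as supporting it instead

belief_a = 0.7       # first impression: agent starts out favoring A
for _ in range(200):
    signal = TRUE_STATE if random.random() < P_CORRECT else "A"
    current = "A" if belief_a > 0.5 else "B"
    if signal != current and random.random() < MISREAD:
        signal = current                     # the biased misperception
    # Honest Bayesian update on the (possibly misread) signal:
    like_a = P_CORRECT if signal == "A" else 1 - P_CORRECT
    like_b = P_CORRECT if signal == "B" else 1 - P_CORRECT
    belief_a = like_a * belief_a / (like_a * belief_a + like_b * (1 - belief_a))

print(f"P(A) after 200 signals: {belief_a:.3f}")
# With these parameters the agent typically grows ever more confident
# in A, even though the truth is B: perceived evidence is self-confirming.
```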

D.           In one particularly interesting experiment on the death penalty, subjects were first sorted into supporters and opponents.  Both groups were then shown the same mixed evidence, and both became more confident in their initial judgments!

E.           In more general terms, there is some evidence of systematic over-confidence.  It usually shows up when you graph the probabilities that people assign to their beliefs against the fraction of those beliefs that are correct; a sketch follows below.

1.            However, people are more accurate when they give their average accuracy rate instead of rating their accuracy question-by-question.
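A sketch of the calibration exercise described in E, on made-up belief/outcome pairs (the data are pure assumptions):

```python
# Calibration: stated probabilities vs. fraction actually correct.
# The (stated probability, was correct) pairs below are made up.
from collections import defaultdict

answers = [(0.6, True), (0.9, True), (0.9, False), (0.7, False),
           (0.8, True), (0.9, False), (0.6, False), (0.99, True),
           (0.99, False), (0.8, False), (0.7, True), (0.8, True)]

buckets = defaultdict(list)
for stated_p, correct in answers:
    buckets[stated_p].append(correct)

for stated_p in sorted(buckets):
    hits = buckets[stated_p]
    print(f"stated {stated_p:4.2f} -> fraction correct {sum(hits) / len(hits):.2f}")
# Overconfidence shows up as accuracy running below the stated probability;
# a well-calibrated agent's points lie on the 45-degree line.
```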

F.            Real world significance?  The experiments demonstrate that these problems exist, but what real-world mistakes can be attributed to them?  Once you have a lot of evidence, it should take a lot of additional evidence to noticeably change your mind.  And how often do people actually keep getting more and more certain of their views?  Moreover, few issues are as emotional as the death penalty, so perhaps this evidence is not so impressive.

IV.          Availability and Representativeness Biases

A.           People often estimate probabilities according to the ease of thinking of examples.  This is known as the "availability heuristic."

B.           While this is sometimes a useful heuristic, it also predictably generates biased judgments.  If examples of something are especially vivid or memorable, we tend to overestimate its probability.

C.           Example: Are there more words in the dictionary that (a) start with "a" or (b) have "i" as the third-to-last letter?  It is easier to come up with examples of the former, and people normally conclude - falsely - that such words are more common.  (Hint: How many words end in "ing"?)
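The claim is easy to check with any machine-readable word list.  A sketch, assuming a Unix-style list at /usr/share/dict/words (the path and word list are assumptions about your system):

```python
# Availability check: words starting with "a" vs. words with "i" as the
# third-to-last letter.  Assumes a word list at /usr/share/dict/words.
with open("/usr/share/dict/words") as f:
    words = [w.strip().lower() for w in f if len(w.strip()) >= 3]

starts_a = sum(w.startswith("a") for w in words)
third_last_i = sum(w[-3] == "i" for w in words)   # catches every "-ing" word

print(f'start with "a":     {starts_a}')
print(f'"i" third-to-last:  {third_last_i}')
# On typical word lists the second count is larger, even though examples
# of it are far harder to bring to mind.
```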

D.           Availability bias has often been used to explain why, e.g., people overestimate the risk of flying.  Plane crashes are vivid and memorable, so people infer they are likely.

E.           Another common technique for estimating probabilities is to compare particular cases to stereotypes, and go with the "better match."  This is known as the "representativeness" heuristic. 

1.            Example: Suppose you learn that a professor is Chinese, and are asked which is more likely: that he teaches Chinese literature, or that he teaches psychology.  Your stereotype of Chinese literature professors is probably that almost all of them are Chinese, while your stereotypical psych prof is not.

F.            What is wrong with this?  Oftentimes, nothing.  However, many experiments have documented a tendency to ignore "base rates."  If there are many more psych profs than Chinese lit profs, this must raise the probability that the Chinese prof is a psychologist.  In practice, people often suffer from "representativeness bias," where they look only at the stereotype and ignore base rates.

G.           Classic experiment: You walk into a joint engineer/psychologist convention.  70% [or, in the other treatment, 30%] of the attendees are engineers.  You meet a guy with horn-rimmed glasses and a pocket protector.  What is the probability he is an engineer?

1.            You generally get the same answer regardless of whether the base rates are 70/30 or 30/70.
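What the answers should look like: for any fixed likelihood of the description, flipping the base rate has to move the posterior.  The 4:1 likelihood ratio below (the description being four times as likely for an engineer) is a purely hypothetical assumption:

```python
# Base rates matter: same description, different convention mix.
# The 4:1 likelihood ratio for the description is hypothetical.
def posterior_engineer(base_rate, likelihood_ratio=4.0):
    """P(engineer | description), via the odds form of Bayes' Rule."""
    prior_odds = base_rate / (1 - base_rate)
    post_odds = likelihood_ratio * prior_odds
    return post_odds / (1 + post_odds)

print(f"70% engineers: {posterior_engineer(0.70):.2f}")   # ~0.90
print(f"30% engineers: {posterior_engineer(0.30):.2f}")   # ~0.63
# Rational answers differ sharply across treatments; subjects' answers don't.
```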

H.           False positives.  Suppose a medical test always detects an illness if it is present, but gives a false positive 5% of the time.  One person in a thousand has the disease.  What is the probability you have the disease conditional on testing positive?  Our stereotypical sick person tests positive; our stereotypical well person does not.  But the conditional probability of having the disease if you test positive is only 1.96%!
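The 1.96% figure follows directly from Bayes' Rule with the numbers given in the text; a quick check:

```python
# Bayes' Rule check of the medical-test example above.
p_disease = 1 / 1000         # prevalence: one person in a thousand
p_pos_given_sick = 1.0       # the test always detects the illness
p_pos_given_well = 0.05      # false-positive rate

p_pos = p_pos_given_sick * p_disease + p_pos_given_well * (1 - p_disease)
p_sick_given_pos = p_pos_given_sick * p_disease / p_pos
print(f"P(disease | positive) = {p_sick_given_pos:.4f}")   # 0.0196
# The stereotype fits, but the base rate dominates: almost all positives
# come from the ~999 healthy people, not the 1 sick one.
```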

I.             Real world significance?

V.           Risk Misperceptions

A.           The basic RE assumption is that actors' risk estimates are, on average, correct.  A large empirical literature examines this question, and often concludes that this is not so.

B.           A standard finding is that estimates of low-probability events are particularly biased.  In particular, it seems as if people either:

1.            Treat low-probability events as if they had 0 probability.

2.            Or, treat low-probability events as if they were much more likely than they really are.

C.           While advocates of paternalistic safety regulations often appeal to this literature, the policy link is tenuous.  If you took this literature seriously and wanted to use policy to correct misperceptions, you would want to reduce the level of safety in the many areas where people overestimate risks, not just raise it where they underestimate them.

VI.          Systematically Biased Beliefs About Economics

A.           Most intro econ classes try to correct students' pre-existing, systematically biased beliefs about economics.  Many famous historical economists operated from a similar perspective.

B.           But almost all academic work in economics assumes that people's economic beliefs satisfy RE.

C.           I have a series of empirical papers that examine this question.  I find overwhelming evidence of systematic errors in the public's beliefs about the economy.

D.           Data: The Survey of Americans and Economists on the Economy (SAEE)

E.           Method: Estimating beliefs as a function of an Econ dummy and control variables.  RE says the Econ dummy's coefficient should equal 0, at least after appropriate controls.
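A sketch of this test in regression form, on simulated data.  The variable names, coefficients, and data-generating process are all illustrative; the actual test uses the SAEE:

```python
# RE test sketch: regress beliefs on an economist dummy plus controls.
# Simulated data for illustration only; the actual papers use the SAEE.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2_000
econ = rng.integers(0, 2, n)          # 1 = trained economist
income = rng.normal(50, 15, n)        # control: income
ideology = rng.normal(0, 1, n)        # control: conservative ideology
# Simulate a systematic economist/public belief gap of 0.8:
belief = 0.8 * econ + 0.01 * income + rng.normal(size=n)

X = sm.add_constant(np.column_stack([econ, income, ideology]))
fit = sm.OLS(belief, X).fit()
print(fit.params)   # RE predicts the Econ coefficient (second entry) is ~0;
                    # in this simulated data it comes out near 0.8 instead.
```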

F.            Why the controls?  Many critics of the profession say it is the economists who are biased, not the public.  Two main versions:

1.            Self-serving bias

2.            Ideological bias

G.           Clusters of error:

1.            Anti-market bias

2.            Anti-foreign bias

3.            Make-work bias

4.            Pessimistic bias

H.           Other findings: The public is heterogeneous.  Neither income nor conservative ideology makes people "think like economists," but the following do:

1.            Education

2.            Being male

3.            Job security

4.            Income growth

I.             Real world implications?  At least in my judgment, it is rather easy to link these biases to specific real-world outcomes.  Most policies that economists think are foolish can be naturally linked to the public's confused beliefs about economics.