Prof. Bryan Caplan

Spring, 2002

Part 1: True, False, and Explain

(10 points each - 3 for the right answer, and 7 for the explanation)

State whether each of the following nine propositions is true or false.  Using 2-3 sentences AND/OR equations, explain your answer.

1.  You and a friend both read the same article in the AER finding that the minimum wage does not increase unemployment.  You both agree that P(article finds minimum wage raises unemployment | the minimum wage really does raise unemployment)=.75, and P(article finds minimum wage raises unemployment | the minimum wage really does not raise unemployment)=.25.  But the two of you DISAGREE in your final estimates: your P(minimum wage raises unemployment| article's findings)=.9, while your friend sets the same probability at .45.

True, False, and Explain: Your prior probability that the minimum wage increases unemployment must be exactly double your friend's prior probability that the minimum wage increases unemployment.

FALSE.  Let A be "minimum wage raises unemployment" and B be "AER article finds that minimum wage does not raise unemployment."  We want to solve for P(A) knowing P(A|B), P(~B|A), and P(~B|~A).  P(~B|A)=1-P(B|A), so using Bayes' Law, we can calculate P(A) for me and my friend:

For me: .9 = .25P(A)/[.25P(A) + .75(1-P(A))], implying my P(A) = 27/28 ≈ .964

For my friend: .45 = .25P(A)/[.25P(A) + .75(1-P(A))], implying my friend's P(A) = 27/38 ≈ .711

By inspection, .964 ≠ 2*.711 = 1.421; indeed, a prior double my friend's would exceed 1.

Most answers treated .75 as P(B|A) instead of P(~B|A).  But the text gives you the probability that the article finds the minimum wage RAISES unemployment conditional on various things, while the article in fact FINDS that the minimum wage does NOT raise unemployment!  One sign that this answer is wrong is that it implies that we get more confident that the minimum wage causes unemployment after we see conflicting evidence!
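As a quick arithmetic check (my addition, not part of the original key), Bayes' Law can be inverted to recover each prior from the stated posterior; the function name below is my own:

```python
# Invert Bayes' Law to recover the prior P(A) from the posterior P(A|B).
# A = "the minimum wage really does raise unemployment";
# B = "the article finds it does NOT" -- so P(B|A) = .25 and P(B|~A) = .75.

def prior_from_posterior(post, p_b_given_a=0.25, p_b_given_not_a=0.75):
    # post = p_b_given_a*p / (p_b_given_a*p + p_b_given_not_a*(1-p));
    # solving for p gives the expression below.
    return post * p_b_given_not_a / (
        p_b_given_a * (1 - post) + post * p_b_given_not_a)

print(round(prior_from_posterior(0.9), 3))    # my prior: 0.964
print(round(prior_from_posterior(0.45), 3))   # my friend's prior: 0.711
```

The ratio of the two priors is about 1.36, not 2, confirming the FALSE verdict.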

Problems 2 and 3 refer to the following information.

60% of all agents in an economy have U=ln x + ln y, and the other 40% have U=2 ln x + ln y.  All agents start with 1 unit of x and 1 unit of y.

2. True, False, and Explain: The equilibrium price of x (taking y as the numeraire) is approximately 1.31.

TRUE.  Using the formula from the homework, and adjusting the ratios of agents from 50/50 to 60/40 (type 1 spends 1/2 of its income on x, type 2 spends 2/3, and with p_y=1 each agent's income is p+1, where p is the price of x):

.6(p+1)/(2p) + .4(2/3)(p+1)/p = 1, implying p = 17/13 ≈ 1.31.

3. True, False, and Explain: Agents of the first type will buy approximately .155 units of x.

FALSE.  Each agent has total income of 1*1.31+1*1=2.31.  Agents of type 1 spend half of their income on x.  Their consumption of x therefore equals their total spending on x divided by the price: (2.31/2)/1.31=.882.  Since they start with 1 unit of x, they are actually net sellers of about .118 units.
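A minimal numeric sketch of Problems 2 and 3 (my own reconstruction, with p_y normalized to 1): type 1 spends half its income on x, type 2 spends two-thirds, and market clearing pins down the price.

```python
# Market clearing for x with per-capita supply 1 and income m = p + 1:
#   .6 * m/(2p) + .4 * (2/3) * m/p = 1   =>   (p + 1) * (17/30) = p
p = 17 / 13                        # equilibrium price of x, ~1.31
m = p + 1                          # each agent's income, ~2.31
x1 = 0.5 * m / p                   # type 1's consumption of x, ~0.882
print(round(p, 2), round(m, 2), round(x1, 3))
```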

4.  There are two players in the "Big Brother/Little Brother" game.  The players simultaneously decide whether to Read or Play Sports.  Player 1 earns a payoff of 10 if he does the SAME thing Player 2 does, and 0 otherwise; Player 2 earns a payoff of 10 if he does something DIFFERENT from what Player 1 does, and 0 otherwise.

True, False, and Explain:  The following normal form accurately represents this game.

                          Player 2
                      Read        Play Sports
Player 1  Read        10,0        0,10
          Play Sports 0,10        10,0

TRUE.  The left payoff goes to Player 1, the right to Player 2.  When they both Read or both Play Sports, the payoffs are therefore 10 to Player 1 and 0 to Player 2.  In the two other boxes, the payoffs are reversed.

5.  True, False, and Explain: In a game of complete and perfect information, every Nash equilibrium is subgame perfect, and MSNE can never exist.

FALSE.  Kreps Figure 12.11(a) shows a game of complete and perfect information with two NE, only one of which is subgame perfect.  Every game of complete and perfect information has a SGPNE, but that does NOT mean that every NE is SGP.

MSNE can exist in a game of complete and perfect information if there are tied payoffs.

6.  Suppose the Ultimatum Game is played simultaneously rather than sequentially.  One player writes down an offer, and the other player writes down a minimum acceptable offer.  Both notes are then opened; if Player 1's offer is greater than or equal to Player 2's minimum acceptable offer, they get Player 1's allocation.  Otherwise both players get nothing.

True, False, and Explain:  There is only one weakly dominant strategy in this game and one subgame perfect equilibrium, but (assuming players' offers do not have to be whole numbers) an infinity of Nash equilibria.

FALSE.  It is indeed weakly dominant for Player 2 to write a minimum acceptable offer of 0, and Player 1's best response to that strategy is to offer 0.

Moreover, it is indeed the case that there are an infinity of NE.  Whenever Player 1's offer exactly equals Player 2's minimum acceptable offer, you have a NE.

HOWEVER, in a simultaneous game, all of these infinitely many NE are subgame perfect!  In a simultaneous-move game, there is only one subgame, so all NE are SGP.  Intuitively, in the simultaneous game, Player 2 is in just as strong a bargaining position as Player 1.  Player 1 can threaten to make a low offer, but Player 2 can just as easily threaten to reject such an offer.  In contrast, in the sequential game, Player 1 does not just threaten to make a low offer.  He makes the offer irrevocably, which then leaves Player 2 with a definite choice between something and nothing.

7.  Suppose two bargainers infinitely repeat the following game.  Their strategy is to always play Soft as long as both players have always played Soft in the past.  If one player ever plays Hard, both of them play the MSNE of the game forever afterwards.

                   Player 2
                 Hard      Soft
Player 1  Hard   0,0       5,1
          Soft   1,5       4,4

True, False, and Explain:  The smallest value of b able to sustain cooperation is .5.

FALSE.  To answer this problem, we first need to solve for the MSNE.  Let s be P2's probability of playing Hard.  Then P1 is indifferent if 0s+5(1-s)=1s+4(1-s), implying s=.5.  Since payoffs are symmetric, both players play Hard 50% of the time.  Thus, players are equally likely to get the (Hard,Hard) payoff of 0, the (Hard,Soft) payoff of 5, the (Soft,Hard) payoff of 1, or the (Soft,Soft) payoff of 4.  The expected payoff is thus .25*0+.25*5+.25*1+.25*4=2.5.

Once we know this, we simply solve for the critical b:

4/(1-b) ≥ 5 + 2.5b/(1-b).  Thus: 4 ≥ 5(1-b) + 2.5b, implying that b ≥ .4.
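The arithmetic can be checked with a short script (mine, not the key's); it compares the value of cooperating forever with the value of a one-shot deviation followed by the MSNE:

```python
# Expected MSNE payoff: each of the four cells occurs with probability .25.
msne = 0.25 * 0 + 0.25 * 5 + 0.25 * 1 + 0.25 * 4     # = 2.5

def cooperate_value(b):
    return 4 / (1 - b)             # (Soft, Soft) every period

def defect_value(b):
    return 5 + msne * b / (1 - b)  # deviate once, then MSNE forever

b_crit = (5 - 4) / (5 - msne)      # from 4 >= 5(1-b) + 2.5b  =>  b >= .4
print(b_crit)
print(cooperate_value(0.5) > defect_value(0.5))   # cooperation holds above .4
print(cooperate_value(0.3) > defect_value(0.3))   # ...but fails below .4
```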

8.  N firms are trying to form a stable cartel.  One of them makes the following suggestion: "Let's set our cartel price somewhat below the monopoly level.  That way, we'll make lower profits every turn, but the incentive to cheat will also be a lot smaller.  Moderate collusion will be easier to sustain than full collusion."

True, False, and Explain:  Assuming firms play trigger strategies, this reduces the critical value of b under both Bertrand and Cournot competition.

FALSE.  Under Bertrand collusion, it makes no difference.  If the cartel price and profits fall, a defecting Bertrand firm slightly undercuts the reduced cartel price and thereby steals only the reduced cartel profits.  The cooperation condition becomes:

If π is the (reduced) cartel profit, the condition is (π/N)/(1-b) ≥ π, so the profits still cancel out, leaving the requirement that b ≥ (N-1)/N.

Under Cournot collusion, however, this strategy could work.  If the cartel expands output, the market price falls, and the optimal defection quantity accordingly falls (and falls even more relative to the higher level of output the cartel allows).  So defection profits fall relative to a 1/N share of cartel profits, making cooperation more sustainable.
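To illustrate the Bertrand half of the answer (a sketch of mine, with a hypothetical N and hypothetical profit levels): the critical b, found by grid search, comes out at (N-1)/N whether cartel profits are high or "moderate."

```python
def critical_b(n, pi, grid=100000):
    # Smallest b (on a grid) satisfying (pi/n)/(1-b) >= pi: a defector
    # steals the whole cartel profit pi once, then earns zero forever.
    for i in range(grid):
        b = i / grid
        if (pi / n) / (1 - b) >= pi:
            return b
    return 1.0

# Full collusion (pi = 100) vs. "moderate" collusion (pi = 60), N = 4 firms:
print(critical_b(4, 100.0), critical_b(4, 60.0))   # 0.75 0.75
```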

9.  Suppose there is costless entry and exit in the lobbying/rent-seeking "industry."

True, False, and Explain:  Repeated interaction between lobbyists will neither increase nor reduce the total social cost of rent-seeking.

TRUE.  Without entry and exit, repeated interaction could either increase or reduce the total social cost of rent-seeking.  However, with costless entry and exit, the rate of return in the lobbying industry has to exactly equal the overall market rate of return.  Thus, if lobbyists collude to raise the return to lobbying, they attract new entry until the return falls back to normal.  Similarly, if lobbyists push returns below the normal level in an effort to build up a "tough" reputation, firms exit until the return rises back to normal.

Part 2: Short Answer

(20 points each)

In 4-6 sentences AND/OR equations, answer all three of the following questions.

1.  Democratic governments are often lobbied to extend monopoly protection to various firms.  Carefully analyze and diagram the full Kaldor-Hicks efficiency consequences (allocative, productive, lobbying) of this practice if (a) productive efficiency and lobbying ability are perfectly correlated (i.e., the lowest-cost firm always defeats higher-cost rivals in political battles if they spend the same amount), and (b) productive efficiency and lobbying ability are imperfectly correlated.

In case (a), there will be allocative and lobbying inefficiency, but no productive inefficiency.  That is because the most productively efficient firm wins every time.  In contrast, in case (b), there is a probability of productive inefficiency in addition to allocative and lobbying inefficiency, because the higher-cost firm might win the lobbying contest.  It is also worth noting that a more productively efficient firm will always be willing to pay more to get a monopoly privilege, because the lower your costs, the greater the monopoly profits the privilege enables you to earn.  With unequal costs, you will not necessarily see full rent dissipation, especially in case (a).  There, the most productively efficient firm need only spend slightly more than the monopoly profits the second most-efficient firm could earn if it won the privilege.
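A hypothetical numeric example of the case-(a) point (my numbers, not the exam's): with linear demand P = 10 - Q and constant marginal costs of 2 and 4, the low-cost firm earns more from the privilege and need only outbid the most its rival would pay.

```python
def monopoly_profit(c, intercept=10.0):
    # Monopolist facing inverse demand P = intercept - Q with marginal cost c.
    q = (intercept - c) / 2
    return q * (intercept - q - c)

pi_low = monopoly_profit(2.0)    # efficient firm's monopoly profit: 16
pi_high = monopoly_profit(4.0)   # less efficient rival's: 9
# The efficient firm wins by bidding just over 9, so at most 9 of its 16
# in potential rents is dissipated -- no full dissipation in case (a).
print(pi_low, pi_high)
```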

2.  Why does Landsburg say that "prices are good"?  Carefully explain Landsburg's position.  What role do the Welfare Theorems and game theory play in his argument?

Landsburg explains that competition by itself could be good or bad.  In general, there is no guarantee that competition leads to socially desirable outcomes.  However, when you combine competition, rationality, and market prices, you do have a (limited) guarantee of social optimality.  The First Welfare Theorem in particular shows that if rational agents compete subject to market prices, you normally get an efficient outcome.  Landsburg uses some game theory examples to illustrate cases where competition and rationality lead to socially suboptimal outcomes.

3.  In some graduate economics programs, like the University of Chicago's, the students rarely study cooperatively.  In other programs, practically every student belongs to a study group.  Some observers attribute this to Chicago's high failure rate and the conditionality of funding on passing first-year exams.  Others attribute this to a Chicago "culture."  Use game theory to model both of these competing hypotheses.  To what extent do your two accounts rely on repeated game considerations?

To model the first hypothesis, I would suggest modeling study group participation as a repeated PD game.  Because of the high failure rate, the probability that the game continues is smaller than at other schools.  Moreover, because funding depends on passing exams, the temptation payoff if you defect at the last minute (taking your friends' notes right before the big exam and then refusing to share yours) is greater than in other programs.  Both of these factors require greater patience to sustain cooperation.
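With hypothetical PD payoffs (my numbers: cooperate = 3, punishment = 1, temptation = T), the grim-trigger condition 3/(1-d) ≥ T + d/(1-d) gives a critical d = (T-3)/(T-1), where d bundles patience with the probability of surviving to the next period:

```python
def critical_d(temptation, reward=3.0, punishment=1.0):
    # Grim trigger: cooperate forever vs. deviate once, then punishment forever.
    return (temptation - reward) / (temptation - punishment)

# A larger temptation (grabbing notes before the big exam) raises the bar,
# and a high failure rate lowers the effective d -- both undermine groups:
print(critical_d(5.0))   # 0.5
print(critical_d(8.0))   # ~0.714
```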

To model the second hypothesis, I would suggest modeling study group participation as a coordination game.  If other students join groups, you'd better join too.  But if no one else is joining, you look like an idiot if you go around trying to form one.  Thus, equilibria with and without study groups both exist.
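The coordination story can be verified by brute force (my illustrative payoffs): joining pays 3 if others join, costs 1 if they don't, and both all-join and all-stay-out are pure-strategy Nash equilibria.

```python
actions = ["Join", "Stay out"]
payoff = {("Join", "Join"): (3, 3), ("Join", "Stay out"): (-1, 0),
          ("Stay out", "Join"): (0, -1), ("Stay out", "Stay out"): (0, 0)}

def is_pure_ne(a1, a2):
    # Neither player can gain by unilaterally switching actions.
    u1, u2 = payoff[(a1, a2)]
    return (all(payoff[(d, a2)][0] <= u1 for d in actions) and
            all(payoff[(a1, d)][1] <= u2 for d in actions))

print([cell for cell in payoff if is_pure_ne(*cell)])
# [('Join', 'Join'), ('Stay out', 'Stay out')]
```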