Bryan Caplan

bcaplan@gmu.edu

http://www3.gmu.edu/departments/economics/bcaplan

Econ 637

Spring, 1999

Week 12: Discrete and Limited Dependent Variable Models

  1. Discrete Choices
    1. Independent dummy variables appear frequently in empirical work in economics, and pose no special econometric problems.
    2. Dependent dummy variables appear less often, but are still common - and they do pose some special econometric problems.
    3. Many different kinds of variables pose similar problems when they appear as dependent variables:
      1. Regular dummy variables
      2. Dummy variables derived from unobserved continuous variables - the underlying continuous variable is called a "latent" or "index" variable.
      3. Unordered polychotomous variables (e.g. y=1 if you walk, 2 if you drive, 3 if you take the train)
      4. Ordered polychotomous variables (e.g. y=1 if you are in the top third of your class, 2 if you are in the middle third, and 3 if you are in the bottom third).
      5. Sequential polychotomous variables (e.g. y=1 if you finished high school, 2 if you had some college, 3 if you finished college, 4 if you hold an advanced degree).
      6. "Count data" - if the variable must take on an integer value.
  2. The Linear Probability Model
    1. The simplest way to deal with a discrete dependent variable is to ignore the problem: just treat it like any other left-hand-side variable and run OLS.
    2. Natural interpretation: the fitted value X_iβ is the conditional probability that Y=1 given X.
    3. Problem #1: Predictions are not constrained to lie between 0 and 1! This can make the interpretation of the results quite puzzling.
    4. Problem #2: The linear probability model is heteroscedastic. The residual ε_i equals either 1-X_iβ or -X_iβ, since y equals either 1 or 0. The implied variance of the disturbance term, conditional on X, is var(ε_i|X_i) = X_iβ(1-X_iβ).
    5. Heteroscedasticity can be corrected with the White procedure, but the problem of out-of-sensible-range predictions is harder to solve.
    6. Example of linear probability model.
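    7. A minimal sketch of the linear probability model in Python with statsmodels (synthetic data; the variable names are illustrative, not the war data from the attached example):

```python
import numpy as np
import statsmodels.api as sm

# Synthetic data: a dummy outcome y and one regressor x.
rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
y = (0.5 * x + rng.normal(size=n) > 0).astype(float)

# OLS on the dummy, with White (HC0) standard errors to handle
# the built-in heteroscedasticity.
X = sm.add_constant(x)
lpm = sm.OLS(y, X).fit(cov_type="HC0")
print(lpm.summary())

# Problem #1 in action: fitted "probabilities" can escape [0, 1].
fitted = lpm.predict(X)
print("Fitted values outside [0, 1]:", int(((fitted < 0) | (fitted > 1)).sum()))
```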
  3. The Probit Model
    1. How can you constrain your predictions to lie between 0 and 1? Why not take some function of your result that maps the whole real line into the 0-1 range? I.e., find a suitable F such that P(y_i=1) = F(X_iβ).
    2. First attempt at this: the probit model. Let F be the cumulative distribution function of the standard normal, traditionally denoted Φ(·). Notice that the range of this F lies between 0 and 1.
    3. Some rationale: Suppose your dependent dummy variable is derived from an unobserved continuous "latent" variable y*, such that y_i=1 if y_i* > 0, and 0 otherwise, where y_i* = X_iβ + ε_i. (ε_i ~ N(0, σ²), so y*|X is normally distributed.)
    4. Then note that P(y_i=1) = P(y_i* > 0) = P(X_iβ + ε_i > 0) = P(ε_i > -X_iβ) = P(ε_i/σ > -X_iβ/σ).
    5. Since ε_i is normally distributed, and the normal is symmetric about zero, this equals P(ε_i/σ < X_iβ/σ) = Φ(X_iβ/σ), the probit model. (Note that only the ratio β/σ is identified, which is why the probit normalizes σ=1.)
    6. Simply running a linear regression and plugging the OLS estimates into the standard normal cdf would make no sense; the parameters have to be estimated within the nonlinear model itself.
    7. Rather, we want to estimate Φ(X_iβ) using MLE. This technique makes sense, since it will tell us what value of β is most likely to have generated our observations, given that P(y_i=1|X_i) = Φ(X_iβ).
    8. There is no closed-form solution for the probit MLE; rather, the maximum is found numerically by your canned software. As explained earlier, the computer begins with some initial values and performs a search algorithm (like checking the derivatives and moving in the suggested direction) until further iterations yield no significant improvement.
    9. A nice feature of the probit is that its log-likelihood function is globally concave, so a local max is also the global max.
    10. Example of probit.
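    11. Concretely, the MLE maximizes the log-likelihood Σ_i [y_i·ln Φ(X_iβ) + (1-y_i)·ln(1-Φ(X_iβ))]. A hedged sketch using statsmodels' canned probit routine (synthetic data again):

```python
import numpy as np
import statsmodels.api as sm

# Synthetic data, as in the linear probability sketch above.
rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = (0.5 * x + rng.normal(size=500) > 0).astype(float)
X = sm.add_constant(x)

# Probit via MLE; the numerical search (initial values, iterations,
# convergence check) happens inside .fit().
probit = sm.Probit(y, X).fit()
print(probit.summary())

# Predictions are Phi(X*b_hat), so they always lie in [0, 1].
p_hat = probit.predict(X)
print("All predictions in [0, 1]:", bool(((p_hat >= 0) & (p_hat <= 1)).all()))
```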
  4. The Logit Model
    1. The logit is extremely similar to the probit, and serves essentially the same purpose. The only difference is that the logit uses a different F.
    2. Instead of F = Φ(X_iβ), the logit uses F = Λ(X_iβ), where Λ is the logistic cdf: Λ(z) = e^z/(1 + e^z). Notice that this must lie between 0 and 1.
    3. The logit, like the probit, is calculated numerically using an MLE routine.
    4. Like the probit's, the logit's log-likelihood is globally concave, so a local max is also the global max.
    5. Example of a logit.
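    6. A matching sketch for the logit; the only change from the probit sketch above is the choice of F. (Synthetic data; as a rule of thumb, logit coefficients come out roughly 1.6 times their probit counterparts, since the logistic distribution has a larger variance.)

```python
import numpy as np
import statsmodels.api as sm

# Same synthetic data as in the probit sketch.
rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = (0.5 * x + rng.normal(size=500) > 0).astype(float)
X = sm.add_constant(x)

# Logit via MLE, i.e. F = Lambda(z) = exp(z) / (1 + exp(z)).
logit = sm.Logit(y, X).fit()
print(logit.summary())
```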
  5. Linear Probability, Probit, and Logit Compared
    1. In the attached example, all three models were applied to estimate the conditional probability of being at war. In the linear probability case, the War dummy was simply regressed on a constant, real output growth, lagged real output growth, inflation, and lagged inflation.
    2. For the probit and logit estimation, the same regression model X_iβ was simply inserted inside the brackets of Φ(·) and Λ(·); then EViews estimated the coefficients using MLE.
    3. The t-stats look similar, but nothing else does on the initial page of output.
    4. But doing simulations shows that the predictions are virtually identical. The three pages show the three models' predictions of the probability of war, conditional on the growth rate of real output (-3% to +7%), for three different rates of inflation (0%, 10%, 150%).
      1. W1 is linear probability.
      2. W2 is logit.
      3. W3 is probit.
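    5. The attached output is not reproduced here, but the flavor of the comparison can be sketched in Python: fit all three models to the same dummy and tabulate their predictions over a grid of regressor values (synthetic data; the W1/W2/W3 labels follow the list above):

```python
import numpy as np
import statsmodels.api as sm

# Synthetic stand-in for the war data: dummy y, single regressor x.
rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = (0.5 * x + rng.normal(size=500) > 0).astype(float)
X = sm.add_constant(x)

w1 = sm.OLS(y, X).fit()            # W1: linear probability
w2 = sm.Logit(y, X).fit(disp=0)    # W2: logit
w3 = sm.Probit(y, X).fit(disp=0)   # W3: probit

# Predicted probabilities over a grid of x values, analogous to
# conditioning on the growth rate of real output.
grid = sm.add_constant(np.linspace(-3, 3, 7))
for name, model in [("W1", w1), ("W2", w2), ("W3", w3)]:
    print(name, np.round(model.predict(grid), 3))
```

Typically the three sets of predictions track each other closely in the middle of the data; only in the tails does the linear probability model wander outside [0, 1].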
  6. The Ordered Probit
    1. Suppose that you have defined three discrete variables by partitioning an unobserved continuous variable.
      1. E.g.: y1=1 if you don't work at all; y2=1 if you work part-time; y3=1 if you work full-time (and 0 otherwise in all three cases). These three discrete variables are a function of the continuous variable y* = hours worked: y1=1 if y* < c1; y2=1 if c1 ≤ y* < c2; y3=1 if y* ≥ c2.
    2. You could then estimate the ordered probit (or logit) using MLE:
      1. P(y1=1) = Φ(c1 - Xβ)
      2. P(y2=1) = Φ(c2 - Xβ) - Φ(c1 - Xβ)
      3. P(y3=1) = 1 - P(y1=1) - P(y2=1) = 1 - Φ(c2 - Xβ)
    3. Your output estimates not only β but also the c's.
      1. What are the c's? The c's are estimates of the "limit points" that sub-divide the sample. (If the model includes a constant, one of the c's - let it be c1 - is not separately identified, but you still get estimates of the relative spacing of the categories.)
    4. In general, if you are doing probit-type estimation but realize that you can sub-divide your dependent variable into 3 or more ordered categories - and think that might be interesting - ordered probit (or ordered logit) is the way to go.
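    5. Recent versions of statsmodels include a canned ordered-probit routine (OrderedModel); a minimal sketch with made-up work-hours categories, where the cutoffs play the role of the c's above:

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Latent "hours worked" index, partitioned into three ordered categories.
rng = np.random.default_rng(0)
x = rng.normal(size=500)
ystar = 1.0 * x + rng.normal(size=500)
y = pd.Categorical(np.digitize(ystar, [-0.5, 0.5]),
                   categories=[0, 1, 2], ordered=True)  # none/part/full-time

# No constant among the regressors: the estimated cutoffs play that
# role, which is the identification point made above.
mod = OrderedModel(pd.Series(y), pd.DataFrame({"x": x}), distr="probit")
res = mod.fit(method="bfgs", disp=0)
# Summary reports beta plus the threshold ("limit point") parameters;
# see the statsmodels docs for their exact parameterization.
print(res.summary())
```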
  7. The Tobit
    1. Data can be either truncated or censored.
      1. Data is "truncated" when both the X's and the Y's are missing.
      2. Data is "censored" if we have the X's but not the Y's.
    2. Censoring issues sometimes arise with discrete estimation. Why? Because when the dummy is 1, we may actually see what happens, and when the dummy is 0, we don't.
      1. Ex: Predicting the taste for cars. If you buy a car, we can also observe how much you pay; but if you don't buy, we don't know anything specific about your taste for cars.
      2. Contrast with the classic truncation problem of the Roman soldier.
    3. One way to cope with censored dummy data is to use the Tobit (aka "Tobin's probit"), which is a simple extension of the probit. A Tobit applies MLE to estimate: y_i = max{0, X_iβ + ε_i}. See text for specifics.
    4. Estimating the regular (OLS) model on censored data leads to attenuation bias.
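    5. statsmodels has no canned Tobit, but the MLE is short enough to sketch by hand; a minimal version for left-censoring at zero, assuming normal errors (all names illustrative):

```python
import numpy as np
from scipy import optimize, stats

# Synthetic censored data: latent y* = Xb + e, observed y = max(0, y*).
rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
ystar = 1.0 + 2.0 * x + rng.normal(size=n)
y = np.maximum(0.0, ystar)
X = np.column_stack([np.ones(n), x])

def neg_loglike(params):
    b, log_s = params[:-1], params[-1]
    s = np.exp(log_s)                      # keeps sigma positive
    xb = X @ b
    censored = y <= 0
    # Censored obs contribute P(y* <= 0) = Phi(-Xb/sigma);
    # uncensored obs contribute the normal density of y - Xb.
    ll = np.where(censored,
                  stats.norm.logcdf(-xb / s),
                  stats.norm.logpdf(y, loc=xb, scale=s))
    return -ll.sum()

res = optimize.minimize(neg_loglike, x0=np.zeros(X.shape[1] + 1), method="BFGS")
print("beta_hat:", res.x[:-1], "sigma_hat:", float(np.exp(res.x[-1])))
```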