Restat – Water Quality Awareness and Breastfeeding: Evidence of Health Behavior Change in Bangladesh

A paper by Keskin, Shastry, and Willis.

In the late 1990s, shallow wells in Bangladesh were screened for arsenic; about 55% of the country's 8.5 million wells were tested. Wells were painted red if they had arsenic and green if they didn't. There was also a campaign in 1999 to disseminate information on the dangers of arsenic exposure.

Breastfeeding can be used to limit young infants' arsenic exposure, and exclusive breastfeeding limits exposure dramatically. The authors ask whether lacking easy access to an arsenic-free well leads to more breastfeeding.

While breastfeeding is nearly universal in Bangladesh (about 97% of children have been breastfed), exclusive breastfeeding for long periods is rare. The authors are thus unlikely to find much of an effect on whether a child is breastfed at all, but may find effects on the duration of exclusive breastfeeding and the duration of any breastfeeding. So they concentrate on those two LHS variables.

They use "probability of living within 1 mile of a contaminated well" as their measure of "arsenic contaminated residence." The reason this is a probability (rather than just a dummy) is complicated noise in their data; they spend a lot of time discussing this, but I won't get into it here. They interact that probability with a "post information campaign" dummy, which yields a generalized diff-in-diff specification:

breastfed/exclusively breastfed  ~ (post info campaign)*(prob of contaminated residency) + FE_residential_location + FE_child_yob + FE_child_current_age
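As a hedged sketch, here is roughly what that specification looks like in R with the lfe package. All variable names below are my inventions, not the authors'; the un-interacted main effects are absorbed by the fixed effects (the campaign dummy is a function of birth cohort, the contamination probability a function of location), and clustering by location is my guess, not necessarily theirs.

library(lfe)

# bdhs: hypothetical child-level data frame
#   breastfed         - outcome (e.g., months of any breastfeeding)
#   post_campaign     - 1 if the child was exposed to the 1999 info campaign
#   prob_contaminated - prob. of living within 1 mile of a contaminated well
m <- felm(breastfed ~ post_campaign:prob_contaminated |
            location + yob + age_months |  # fixed effects
            0 |                            # no IV
            location,                      # cluster SEs by location
          data = bdhs)
summary(m)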

They find that, post campaign, being near a contaminated well increases the chance of being breastfed relative to pre campaign, which is as we would expect. We could say the information campaign increased the marginal effect of being near an arsenic well on breastfeeding; or we could equally say the effect of being near an arsenic well on breastfeeding rose after the information campaign.

Their breastfeeding data and GIS residence locations come from the Bangladesh Demographic and Health Surveys (BDHS), a national survey of about 10k households per round.

Heterogeneity – they find larger effects:

  • In rural areas
  • For very young infants (< 6 months)

Also, the post × contamination variable enters negatively when you put (a) child death or (b) incidence of diarrhea on the LHS, and positively when you put the child's weight on the LHS. That is, post × contamination improves child health. So they can conclude, roughly, that painting wells red or green based on arsenic contamination and then running a public health campaign about it is likely to lead to more breastfeeding by mothers and improved infant health in developing countries.

While breastfeeding is a likely response to contaminated drinking water in developing countries, I would expect that in a developed country like the US, bottled water purchase is the main response and breastfeeding barely moves. It might move for very poor and disadvantaged mothers, I suppose.

 

Difference-in-differences when many cases have no within variation

Quite often I find myself confused about observations in difference-in-differences that have no within variation.

To be concrete, suppose we observe drinking water contamination violations for public water systems (PWSs) — call that “MCL” — as our LHS variable. And we observe some measure of “threat” as our RHS variable. For simplicity, suppose both variables are dummies.

It will surely be the case that a large number of PWSs have no MCLs, a large number have no threats, and a large number have neither. We're going to use some sort of fixed-effects regression, with time dummies and PWS dummies. Given that these PWSs have no within variation in y, in x, or sometimes in both, are these observations important? Or can they just be dropped?

The answer, I think, is that they cannot be dropped. These “all zeroes” observations are important for pinning down the counterfactual.

It's quite possible that the main reason we get confused about this is that a logistic fixed-effects regression, estimated by conditional maximum likelihood, removes all observations for which there is no within variation in the dependent variable. That, I suspect, makes us think those observations must not matter.

For an -xtlogit, fe- example: simulate a small binary panel in Stata and run -xtlogit, fe-. You'll find that something like 15-25 ids are dropped (about 1/4 of the data) because the LHS is always 0 or always 1 within the id.

It seems intuitively reasonable that -xtlogit, fe- would remove these observations, since their "fixed effects" would be negative or positive infinity. By contrast, in a linear model (an LPM in this case) the fixed effects are finite numbers even for the all-zero cases.
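Here is a minimal R analogue, using survival::clogit, the conditional-logit counterpart of -xtlogit, fe-; the toy DGP is my own, so the exact number of dropped ids differs from the Stata example above.

library(survival)

set.seed(1)
n <- 100; tp <- 4
d <- data.frame(id = rep(1:n, each = tp))
d$x <- rbinom(n * tp, 1, 0.5)
d$y <- rbinom(n * tp, 1, plogis(-2 + 1.5 * d$x))  # low base rate: many all-zero ids

# conditional (fixed-effects) logit, the analogue of -xtlogit, fe-
m <- clogit(y ~ x + strata(id), data = d)

# ids whose y never varies contribute nothing to the conditional likelihood
no_var <- tapply(d$y, d$id, function(v) all(v == 0) || all(v == 1))
sum(no_var)  # around a third of the ids are effectively dropped here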

A quick example

Here’s one case where it’s obvious that this matters.

Suppose we have two periods, year = 0 and year = 1, and treatment happens in year 1 for some subsample. The cross-sectional dimension is indexed by the variable "id." As before, the outcome variable is "MCL"; the RHS is "threat."

Suppose the DGP is as follows: MCL is always zero, unless the PWS is treated; then it happens with 50:50 chance.

We want to run the diff-in-diff,

MCL ~ year + pws + treated

Now suppose we drop all the cases with no within variation in MCL. By construction, all cases that are never treated have no within variation in MCL. So all that’s left are treated cases!

So then what does the “treated” variable look like on the “some within variation in LHS” subsample?

Well, treated = 0 iff year = 0, treated = 1 iff year = 1… in other words, on this subsample, treated = year.

To put it bluntly, the zero-within-variation observations are not merely important in this case: if we drop them, the model cannot be estimated at all, because treated and year are perfectly collinear.

This suggests a general rule: even if by chance the "all zeroes" observations don't noticeably change the point estimate of the regression, their exclusion will generally affect the standard errors.

A detailed simulation

The following is an R simulation giving an example where the coefficients and standard errors from a diff-in-diff change upon exclusion of the "all zero LHS (or LHS and RHS)" cases. Including these observations leads to reasonably accurate estimates of the population parameter; excluding them leads to large bias.
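Here is a minimal sketch of such a simulation. One loudly-flagged assumption of mine: untreated observations get a small baseline MCL rate, because under the exact DGP of the previous section (MCL identically zero absent treatment) the subsample regression is not even estimable, per the collinearity argument above.

set.seed(42)
n    <- 500                         # number of PWS ids, two periods each
id   <- rep(1:n, each = 2)
year <- rep(0:1, n)                 # period dummy
treated_id <- rep(rbinom(n, 1, 0.3), each = 2)
treat <- treated_id * year          # treatment switches on in year 1

# DGP: 2% baseline MCL rate (my addition), treatment adds 50pp
mcl  <- rbinom(2 * n, 1, 0.02 + 0.50 * treat)
full <- data.frame(id, year, treat, mcl)

# diff-in-diff LPM with id fixed effects, full sample: recovers roughly 0.50
coef(lm(mcl ~ factor(id) + year + treat, data = full))["treat"]

# drop ids with no within variation in mcl, then re-run: badly biased
varies <- ave(full$mcl, full$id, FUN = function(v) length(unique(v)) > 1)
sub    <- full[varies == 1, ]
coef(lm(mcl ~ factor(id) + year + treat, data = sub))["treat"]

The full-sample estimate lands near the true 0.50, while the subsample estimate lands near 1: among treated ids, the ones that "vary" are almost exactly the ones that went 0 to 1.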

 

 

The Political Economy of the US Mortgage Default Crisis — Mian, Sufi and Trebbi (2010 AER)

"The Political Economy of the US Mortgage Default Crisis," hereafter MST2010.

The authors run variants of the model

Congressman voted for the Foreclosure Prevention Act ~ mortgage default rate in the congressman's congressional district (CD) + congressman's ideology + controls
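As a concrete (and hedged) R sketch, the baseline specification might look like the following linear probability model; all variable names are hypothetical, and MST's actual estimator and controls may differ:

# cd_data: one row per congressman (hypothetical names)
reps <- subset(cd_data, party == "R")  # all Democrats voted yes anyway
m <- lm(vote_yes ~ default_rate + dw_nominate + factor(state), data = reps)
summary(m)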

The foreclosure prevention act they're talking about, I think, has the long name "American Housing Rescue and Foreclosure Prevention Act," abbreviated AHRFPA.

They find that Republicans who represent CDs with greater default rates, and in particular greater increases in default rates, are more likely to vote "Yes" on the foreclosure prevention act. (All Democrats voted yes, so all the variation is among Republicans.)

Note that

  • The collapse of the housing market and the two key votes on this act (the first failed, the second passed) both happened just before the 2008 election, making this a perfect opportunity to study a high-intensity political economy event. The housing market continued to collapse afterward, and unemployment continued to rise, but foreclosures were definitely a big thing when the AHRFPA was voted on (twice) in May and July 2008; that's quarters 2 and 3 of 2008, about when the rate of change in foreclosures was at its maximum. For a quick descriptive plot, I find this picture from Currie & Tekin (AEJ: Economic Policy 2015, hereafter CT2015) very good:

[Figure: foreclosures and unemployment over time, from CT2015]

  • Democrats seem more likely to be affected by foreclosure than Republicans. This can be seen from the MST summary stats: Democrat districts saw a 2.9pp increase in the default rate while Republican districts saw a 2.2pp increase. Moreover, the 2007:IV default rate for Democrats is higher than for Republicans within both Democrat and Republican districts. Democrats had more defaults than Republicans.

    Note that the calculations for "Democrats in Republican CDs" etc. use the fact that zip codes partition CDs, and some zip codes are more Democratic than others even within a CD. So:

    (Democrat default rate in the CD) = Σ_zz [(population in zz) × (% Dem in zz) × (default rate in zz)] / Σ_zz [(population in zz) × (% Dem in zz)],

    where the sums run over zip codes zz in the CD. For example, if a CD has two zip codes, one 100% Dem and the other 100% Rep, this expression gives the default rate of the first zip code. This approach, I think, is biased toward finding no contrast between Democrats and Republicans within CD. One way to see this is to assume every CD was exactly one zip code: then this approach says the Democrat default rate equals the Republican default rate in every CD. I think similar problems arise in the across-CD comparisons. For example, suppose every CD was 51% one party and 49% the other; then the Dem-vs-Rep contrast reported here would be something like 2% of the size of the actual contrast.

    In any case, these attenuation problems all run toward finding no contrasts, and the partisan contrasts in default rates they find are quite large even so. Looking at the change in mortgage default rates from 2005:IV to 2007:IV, we're talking 0.029 for Dem CDs versus 0.022 for Republican CDs. Even if we assume this is an upper bound, that's a (0.029 - 0.022)/((0.022 + 0.029)/2) = 27% difference in the change, which is gigantic.

    CT2015 study the small but detailed data in the PSID and find that those hit by foreclosure were younger, probably because older people are much more likely to own their homes free and clear, so they don't face mortgage issues like this. Young people are also more likely to be Democrats. So everything lines up to the following stylized fact: Democrats were more likely to be foreclosed on or to default on their mortgages, while Republicans were more likely to own their houses outright and just lose a ton of accumulated savings. This suggests Democrat political donors are more likely to stop donating, while Republican donors may continue donating (because they're more likely not to "lose their shirts," just to lose a lot of wealth) and perhaps, because they are angry, switch allegiances somehow.

[Table: summary statistics from MST2010]

  • There is very, very little variation in voting behavior within party. See their Table 3, reproduced below. Like most bills, this was a very partisan bill; indeed, they restrict to the Republican party because of this. Moreover, the bill was probably sure to pass by the second vote, as Bush's veto threat had been lifted in July; as they write:

This is a useful robustness check given a substantial difference from roll call 519: a presidential veto threat on the bill (possible to overcome by a 290-vote majority). In May 2008, President Bush opposed the AHRFPA and in particular the $300 billion insurance provision, while the July vote was brought to the House floor the same day the veto threat was lifted. We should note that the “cost” for Republicans abandoning President Bush may have been low as of May 2008, given his low popularity ratings and his imminent departure from the presidency. However, it was still likely more costly in terms of party standing to vote for the legislation in May than in July. While the coefficient on the mortgage default rate is smaller, it is not statistically significantly different from the estimate on the July 26 vote, which confirms that politicians respond to constituents even when doing so may harm their standing within the party.

  • So these roll call votes probably reflect entirely what the party members want to signal to the world about their preferences, rather than their true "if I were pivotal" preferences. Given that the LHS is really pure signaling, one might start asking whether we can do better: is there a way to come up with a latent continuous measure of support for a bill like this using observables? One could imagine text-mining the speeches congressmen made on the bill. It shouldn't be too hard to classify congressmen as super supportive or super opposed, remembering all the while that these congressmen probably don't want to convince other congressmen of their support, but rather their constituents (at least those who may be paying attention, like donors). If this could be done, the variation here might be expanded a bit. As it stands the variation is very limited, and it is amazing MST found anything at all: we're talking 45 yeas (23%) out of 194 Republican votes. Small data.

[Table: MST2010 Table 3, votes on the AHRFPA by party]

Their results are nothing short of perfect. They're actually so good they're hard to believe:

  • A one-standard-deviation increase in the default rate leads to a 12.6pp increase in the chance of voting for the AHRFPA
    • This changes only slightly when you add state fixed-effects or census demographic controls. It’s luck.
    • This doesn’t change at all when you control for DW-Nominate score. In fact, DW-Nominate is uncorrelated with the level of mortgage defaults in 2007. It’s magic.
    • The vote is associated with the change in mortgage default rates in the last two years, but not with the level. It’s a miracle.
    • Mortgage default rates are related to the vote, but there is no independent effect (in a multiple regression) of nonhome default rate, even though the two measures are correlated at 0.58. So politicians completely separate, I guess, mortgage default rates from other credit problems that could be happening. Remarkable.
    • The effects are twice as large in close races, and the difference is statistically significant at the 5% level. Even though close races are probably only 20% of the Republican CDs or less (hard to tell; I don't think they report these numbers). Amazing.
    • But they're not done yet. Remember how they have CD-level measures of "democrat default" and "republican default" rates? Even though their approach pounds these gaps toward zero, even though CDs usually have no more than a 2pp estimated gap, and even though the pairwise correlation between democrat default and republican default in the same CD is 0.90: even with all of these problems, they find that Republican support is ONLY positively predicted by the Republican constituents' rate of default. The effect of Democrat constituents' default rates is small, and in most specifications the point estimate is in fact negative. Interacting with competitive CD doubles the coefficient sizes, and makes the Democrat constituent default rate coefficient significantly negative! It's a miracle.

So every specification fell into place for this paper, I guess.

As an aside, while there may be theoretical reasons to believe that special interests may be less important in these high salience roll-call votes (I think that’s what the rest of the polisci literature suggests), they write that “our study provides quantitative upper bounds on the elasticity of congressional voting to constituent interests, special interests, and ideology, which are widely studied parameters in the political economy literature.” So their opinion is that special interests and constituent interests are more salient, not less, in a case like this.

OK, so here's what I'm thinking about this:

  • Suppose we could measure default rates for donors versus non-donors. MST find that politicians respond to own-party constituents; do politicians respond more to own-party donors? With good micro data we could get pretty detailed on this question. First, contrasting default rates of Republican donors versus Democrat donors in the CD gives a measure that is likely more precise than MST's, with which we could run the same "Fenno voting-bloc hypothesis" specification as MST. Second, we could test the voting-bloc hypothesis on Republican donors versus Republican voters. It's still hard to figure out where the Republican voters are; we could try big Republican donors who donate to the incumbent congressman versus smaller (though still itemized) Republican donors who donate, e.g., to the RNC. That sort of thing.
  • As I stated above, if we could come up with a method to turn the LHS into a continuous variable rather than a binary one, that would be amazing. As I've said, I think the LHS in practice measures "how much they want to signal" that they support the bill. There should be other easily observable measures of congressional signaling.
  • The fact that all signs point toward ultimately more financial trouble for Democrats suggests that Democrat donors may have disappeared at a greater rate than Republican donors as a consequence of the housing crash. Republican donors may instead have changed their allegiances, given that they lost a large amount of money. Calculating these potential losses and seeing whether they influence donor behavior would be interesting. Moreover, CT2015 cite evidence that residents of foreclosed properties do not immediately move, suggesting that we may be able to observe donations even from those who own foreclosed residences.

Paying students for performance: Levitt, List, Neckermann, and Sadoff (2016)

"The Behavioralist Goes to School: Leveraging Behavioral Economics to Improve Educational Performance." I see this paper as mainly just another attack on the question, "does paying students for performance improve their outcomes?"

They have a new angle though. Because payments with delay ("$20 if you improve this quarter," etc.) don't seem to work well, they shorten the delay and pay students immediately after they take a low-stakes test, if they improve performance relative to their last test.

They run a bunch of different experiments varying parameters of the randomization and the size of the incentive, but the upshot is that $10 doesn't work (no effect on test scores), while $20 works, raising test scores by on average about 0.07 (the lower end of the 95% CI is about 0.014).

Unfortunately their effects are much higher outside CPS. Let me explain. They run the experiment in three districts: Chicago Public Schools (CPS) and two smaller districts (though all three have relatively low student performance). In the two smaller districts they see effect sizes of the immediate $20 of 0.24, 0.13, and 0.30, very large numbers. In CPS they see effect sizes more like 0.0, 0.05, and 0.09, in all cases statistically insignificant. Only by pooling these studies do they get significance.

They find larger effects when they frame things as financial losses, but I thought the experimental design there was a little clunky: they put a $20 bill in front of the kids while they take the test and say, "if you don't do well on the test, you have to give this $20 back to us." That sounds strange, and I can't imagine implementing it in a non-experimental setting. In any case, the larger effects were not significantly different from the plain "pay $20" treatment, so the authors may have just gotten lucky here.

 

ILR Review October 2016 – Who Benefits from a Minimum Wage Increase?

Here's a sweet paper from an awesome journal: "Who Benefits from a Minimum Wage Increase?" by Lopresti and Mumford, ILR Review, October 2016.

To give the gist of the paper:

  • Step 1. Match CPS respondents, forming an individual two-period panel. Although the CPS rotation is 4-8-4 months, they use only the interviews with enough information, leaving two interviews, about a year apart, for each respondent. (CPS respondents can be matched over time, but it requires some work; they implement the matching algorithm themselves, though I believe IPUMS has very recently done this for everyone.)
  • Step 2. Find all times when states (or the Federal government) changed their minimum wage. They focus on the period 2005-2008.
  • Step 3. For each CPS respondent, figure out whether a minimum wage increase occurred during the CPS respondent’s sample period. Call that a “treated” respondent.
  • Step 4. Interact the respondent's base-period wage with the size of the treatment in a series of dummies, in a median regression with percentage wage change on the LHS, producing this beautiful matrix (a rough code sketch follows the figure):

[Table: estimated median wage changes by base-period wage and size of the minimum wage increase, from Lopresti & Mumford (2016)]
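A rough sketch of that regression in R with quantreg; the variable names are hypothetical and the bootstrap standard errors are my choice, not necessarily theirs:

library(quantreg)

# cps_panel: one row per matched respondent (hypothetical names)
#   pct_wage_chg - percent wage change between the two interviews
#   wage_bin     - factor: respondent's base-period wage bin
#   mw_bin       - factor: size of the minimum wage increase (untreated = "none")
m <- rq(pct_wage_chg ~ wage_bin:mw_bin, tau = 0.5, data = cps_panel)
summary(m, se = "boot")

Each wage_bin × mw_bin coefficient corresponds to one cell of the matrix above.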

I don't have any real criticism of their paper. But the interpretation of the results is odd: essentially, "if you want to raise the wages of workers, you must enact large minimum wage increases, not small ones." Strange.

Kirabo Jackson CEPA – a flip between Behavioral and Test Score factors across the income distro

A quick note about Kirabo Jackson’s CEPA talk regarding the flip between behavioral and test score factors across the income distro.

Link to Jackson’s CEPA talk.

This is rather unrelated to his work, but at around 46:00 we get to see this nice table he constructed using the NELS:88 longitudinal data from NCES:

[Table: quantile regressions of log income on test-score and behavioral factors, from Jackson's NELS:88 tabulation]

It's a quantile regression of log income on standardized test-score and behavioral measures. Test scores "don't matter" at the bottom of the income distribution while behavioral measures do. This then reverses, sort of, at the top of the distribution.

My knee-jerk reaction to the table is to think that, for students we expect to be below median income at age 25, holding schools accountable for test scores is counterproductive; we should instead be holding them accountable for behavioral factors.

That went against my prior belief, which was that test scores are better measures of basic skills than of advanced ones, so they would be more predictive at the lower end of the distribution.

While I'm willing to throw away that belief, I get the feeling that the econometrics here is a lot more complicated than the table suggests. For example, the NELS:88 test scores could have a floor while the behavioral measures might have a ceiling. In any case this probably deserves more attention in the accountability literature, even if it is a dead horse in psychometrics.

(Since I have a lot of experience with the NELS:88 maybe I should be the one to give this more attention. Unfortunately I probably have more important things to do with my time.)

How to tell R where to find a library for compiling — on linux without root access

I faced this problem before and struggled with it. The solution turns out to be very easy once you understand what’s going on.

I tried to install Simen Gaure's awesome lfe package in R (I will definitely write about how awesome this package is in a future post), and ran into a snag: compilation failed with a linker error about -lgfortran.

Now of course, given that I don't compile things myself (hoping that will change soon), I don't have any idea what this means.

Doing some digging around though, I think this is a rough explanation of what is happening:

gcc is being called (obviously) on some *.so and *.o files (*.so files are shared-object libraries and *.o files are compiled object files), with some flags. The flags include -lgfortran, which means: link against the Fortran runtime library, libgfortran.so. Unfortunately the linker can't find this library at any of the flagged locations, or the default locations. The flagged locations are given by the -L/…/ flags. As for the default locations, I thought they were in the environment variable C_INCLUDE_PATH, but now I'm not so sure; see the end of the blog post for details.

Investigating further shows that there is nothing at the most likely location for this system, /usr/local/lib64, which is already called in one of the -L flags. There is, however, a libgfortran.so.3 located at /usr/lib64. But this file has the wrong name and is in the wrong location. What to do?

After a bunch of fiddling I figured out what to do. First, I got a copy of libgfortran.so.3, renamed it to libgfortran.so, and put it into /home/rdisalv2/usr/lib. (I could get this copy either from /usr/lib64 on the system, or via the yumdownloader and rpm2cpio commands.)
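In shell terms, the first step was something like this (paths specific to my account on this system):

# copy the runtime library under the exact name that -lgfortran searches for
mkdir -p /home/rdisalv2/usr/lib
cp /usr/lib64/libgfortran.so.3 /home/rdisalv2/usr/lib/libgfortran.so

Then, in ~/.R/Makevars, put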

PKG_LIBS = -L/home/rdisalv2/usr/lib

A bunch of links helped me with this but probably the most important was https://cran.r-project.org/doc/manuals/r-release/R-exts.html#Using-Makevars.

Once I put this in place, the gcc call that threw an error earlier picks up a new -L/home/rdisalv2/usr/lib flag. Pretty cool, eh?

Now the lfe package compiled successfully.

A note about C_INCLUDE_PATH

I thought adding to the C_INCLUDE_PATH environment variable would work; something like:

export C_INCLUDE_PATH=/home/rdisalv2/usr/lib:$C_INCLUDE_PATH

 

$ printenv C_INCLUDE_PATH
/home/rdisalv2/usr/lib:/software/r/3.2.4/b1/lib64/R/include:/software/slurm/current/include:/software/slurm/current/include/slurm

But this did not fix the error, and in hindsight that makes sense: C_INCLUDE_PATH tells the compiler where to search for header files during compilation, whereas the missing-libgfortran error comes from the linker, which searches the -L paths (plus LIBRARY_PATH and the default library directories). Hence the Makevars approach above.

A new web host for a new year

Now that it is almost 2017, I figured I might as well transition to more developer-friendly web hosting.

DigitalOcean provides a system whereby you essentially purchase an always-on Ubuntu computer with a static IP, which you can use for anything. Web hosting is just one example.

The key advantage for me is that both my desktop and laptop run Ubuntu, so my web server is now essentially just a repeat of what I do every day. Pretty cool.

After all that, this blog still looks the same though. I asked myself deep questions about whether to move to Jekyll, but I think WordPress makes it easy for me to write posts quickly. I also learned that Apache can proxy any port: e.g., I can run R Shiny or CherryPy on my web host on any (ufw'ed) port, and then forward the port to a reasonable-looking URL like richarddisalvo.com/myapp. So I can have full interactivity. WordPress is indeed very constraining, but I can always link to a separate page with full interactivity, so I can have the best of all worlds.
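For reference, the Apache side is just a couple of directives in the site config. A minimal sketch, assuming mod_proxy and mod_proxy_http are enabled and Shiny is running on its default port 3838:

# forward richarddisalvo.com/myapp to a locally-running Shiny server
ProxyPass        /myapp http://localhost:3838/
ProxyPassReverse /myapp http://localhost:3838/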

A Bartik instrument in Bolsa Família

Here's an EER 2015 paper with a nice, simple example of a Bartik instrument:

"Spillovers from conditional cash transfer programs: Bolsa Família and crime in urban Brazil," by Chioda, de Mello, and Soares. http://www.sciencedirect.com/science/article/pii/S0272775715000552#bib0018

I typically consider an instrument "Bartik" if it is constructed by interacting a time-varying exogenous treatment with a time-invariant individual characteristic. For example, to exogenously vary labor demand in a particular city, you can interact national employment by industry with the city's initial industry employment shares (times initial city employment). E.g., suppose this city is 50% construction and has 1,000 people employed in 2000. If national employment in construction falls from, say, 10,000 to 2,000 over the sample period, a fall of 80% in that industry, then construction employment in this city would be expected to fall from 500 to 100: a loss of 400 jobs, or 40% of total city employment. Doing this for all industries provides an instrument whose over-time variation is driven only by national trends, yet which has statistical power in the panel (i.e., even after city FEs are included) because the initial shares moderate those national trends.
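As a toy R version of that arithmetic (assuming, for the example, that the non-construction industry is flat nationally):

shares_2000 <- c(construction = 0.5, other = 0.5)  # city's initial industry shares
emp_2000    <- 1000                                # city employment in 2000
natl_growth <- c(construction = -0.80, other = 0)  # national employment growth rates

predicted_emp <- emp_2000 * sum(shares_2000 * (1 + natl_growth))
predicted_emp  # 600: construction falls 500 -> 100, a 40% fall in total employment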

In this paper, the Bartik instrument arises as follows. A policy change in 2008 expanded a conditional cash transfer (CCT) program from families with children aged 0-15 to families with children aged 0-18. The CCT requires that the kid attend school, so we might anticipate that it reduces youth crime through incapacitation (though they find no evidence that incapacitation is particularly important relative to the cash itself). The time-series-variation-only variable in their IV construction is a dummy for the expansion of the CCT program (0 0 1 1). The time-invariant moderator is the number of children aged 16 to 18 in the school before the policy change (2006), which moderates because the expansion applied to this age group: e.g., a grades 7-12 school that is relatively small in each grade should see a smaller expansion impact than a grades 9-12 school with larger enrollment in each grade. They then predict CCT coverage, in an xtivreg specification, using the expansion dummy times the pre-policy-change number of children in those grades. This is obviously a Bartik strategy.
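In R, one compact way to write the corresponding IV specification is with lfe's felm syntax. This is a hedged sketch with hypothetical names; the paper itself uses an xtivreg-style setup, and the clustering choice here is mine:

library(lfe)

# panel: school-by-year data (hypothetical names)
#   log_crime     - log number of crimes near the school
#   log_students  - log enrollment (their normalization control)
#   log_cct       - log students covered by Bolsa Família (endogenous)
#   post          - expansion dummy (0 0 1 1)
#   kids1618_2006 - children aged 16-18 in the school in 2006 (time-invariant)
m <- felm(log_crime ~ log_students |
            school + year |                   # FEs absorb both main effects
            (log_cct ~ post:kids1618_2006) |  # Bartik instrument
            school,                           # cluster SEs by school
          data = panel)
summary(m)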

Of interest to my work, they use log total crime rather than crime per person because they don’t have a denominator. We have roughly the same problem when studying subgroup arrest rates in the US. I quote from their paper:

As we do not have a measure of local population, which would be the natural way to normalize the number of crimes in a given area, we use the natural logarithm of number of crimes as the dependent variable and the natural logarithm of students covered by the CCT as the treatment variable. If the populations of the respective areas are roughly constant over the short period of time that we are analyzing, which seems like a reasonable assumption, this would be roughly equivalent to running the same specification in per capita terms. In addition, we also control in all specification for the log of the number students in each school, which can also be seen as a reasonable approximation to this normalization. The log–log specification implicitly assumes that a proportional change in the number of students covered by Bolsa Família in a certain area is associated with a roughly fixed proportional change in the number of crimes in the area. The number of crimes is also typically used as dependent variable in the United States literature (see, for example, Jacob and Lefgren, 2003 and Luallen, 2006).

The cited papers, Jacob and Lefgren (2003) and Luallen (2006), are both top-notch empirical education papers, so clearly it is acceptable to put log(something) on the LHS when you don't have a denominator. (I suppose we do this all the time when we put log(income) on the LHS instead of wage per hour, though of course we might also put log(wage per hour) under some assumption that the production function is multiplicative.) This was an idea I had a few months ago but dismissed as awkward; finding it here suggests it is not so awkward.

 

Kang (ReStud 2016) – causal identification in lobbying through structure?

When Kang writes,

“In this article, I quantify the effect of lobbying expenditures on the enactment probability of a policy, controlling the endogeneity of lobbying decisions and exploiting the structure of the model described in the next section.” (page 276)

My face contorts…

Clearly an interesting question: if a firm increases its pro-bill lobbying on a bill by $1,000, what is the impact on that bill's chance of adoption? And ditto for increasing anti-bill lobbying?

But firms don’t want to lobby on bills that have no chance of passing. So why wouldn’t there just be a positive relationship between all lobbying and bill enactment?

And even if we found a negative relationship between opposition spending and enactment, don't you think that relationship is biased upward (toward positive) because of selection into lobbying?

Table 6 (page 279) of her paper shows that the bills “Lobbied by Opposition Only” have a 4.4% chance of (one-cycle) success while bills that are not lobbied at all have a 0% chance of success.

This raw data at least suggests that lobbying by opposition raises the chance of success. Of course the correlations could reverse when you pool in bills that are lobbied by both sides.

It seems her model does a great job "solving" (or hiding) this problem, or else the data actually do run in the opposition-leads-to-lower-chance-of-success direction after all. The latter would be an amazing finding, given that my prior is that greater lobbying (even opposition lobbying) should only happen on bills with a much greater ex ante chance of success, and the endogeneity would probably overwhelm any small impact of opposition spending.

In particular, Table 7 on page 285 shows positive estimates for beta_f and beta_a, meaning for-spending has a positive impact on the enactment probability while against-spending has a negative impact! Wow.

Actually though, I'm not sure; maybe this is just baked into the model. On page 280, equation (3.1) defines a production function in which opposition spending can only reduce the probability of policy enactment. The model could, I think, have driven beta_a to a negative number if opposition spending predicted greater enactment probability. But in fact the model gives a positive number for beta_a.

Remarkable!

The method

Which makes me think: if this worked so well for Kang, maybe I should add it to my arsenal of "ways to get published in ReStud." I think the main selling point of her paper was the original data, but original data with descriptive analysis alone is unacceptable in economics, so the model was likely a critical part of getting the paper into a journal.

What does the model add? It seems to solve the serious endogeneity problem through structure alone.

Some quotes from her paper:

page 283

[Screenshot: excerpt from page 283 of the paper]

Here are those equations 3.1, 3.2, and 3.3:

[Screenshot: equation 3.1 of the paper]

So 3.1 is the production function mapping spending into probability of the bill passing.

[Screenshot: equations 3.2 and 3.3 of the paper]

Equations 3.2 and 3.3 look like selection-on-observables. Why? Well, she assumes the xi and the eta are independent. And the production function for the probability of adoption is forced into a special form where the marginal return to lobbying a bill is decreasing in pi; see the following derivative:

[Screenshot: the derivative of the enactment probability with respect to lobbying spending]

This is evaluated at roughly the parameter value she estimates, but you could plug in a different number by eye if you want.

Anyway, when pi is small the marginal return is ALWAYS large. This is an assumption of "no junk bills" that have, say, pi = 0 and are insensitive to lobbying, crying babies, a meteor, etc. I find this assumption hard to swallow. I imagine there are a lot of junk bills…

But given that you can always raise p() by lobbying in her model, the only thing that seems to drive differential rates of lobbying across different bills is different Vs for different lobbyists. Yet, V is uncorrelated with pi. Therefore, there is no way for bills that are intrinsically more likely to pass to be intrinsically more “exciting to lobby.”

She also points to equations “(4.6) and (4.6),” but I think those are just the structural assumptions discussed above (and presented in equations 3.1-3.3) combined with rationality on the part of the lobbyists (conditional on the Vs etc.), so I don’t think they add anything new.

This makes me think that:

  • Either there is a positive relationship between opposition lobbying and pass rates in the raw data, as my intuition expects, but the model makes enough restrictive assumptions to stop this from coming through; or,
  • there actually is a negative conditional relationship, i.e. the second coefficient in -reg enactment lobby_for lobby_against- is actually negative!

It seems I could rule out the second case if I just pulled her data and ran the reg.

I tried pulling her replication materials and running this reg. Unfortunately they are quite difficult to work with; in particular, it seems really hard to replicate her coding of industries into four sectors (coal, oil/gas, nuclear, renewable). That coding seems to be buried somewhere deep in Fortran code, and I am not familiar with Fortran and didn't want to spend a few days learning it just to run this reg. So for now the coefficient in this reg will remain a mystery, and it isn't clear to me whether the data just "let her" get the negative relationship she wanted, or whether the model is somehow undoing an expected positive endogeneity relationship.