Cooking the Numbers
Why the Harvard "Study" on Kids and Guns is Bunk
by Sean Oberle
[email protected]
March 4, 2002
The factoid fry-cooks at Harvard School of Public Health have pulled a sleight of hand. You've probably seen the press reports[1] of their new "study" in the Journal of Trauma, which supposedly shows:
"The higher death rates in high gun states are due to differences in deaths from firearms" and "the strong and robust association between gun ownership and children's violent death is compelling"[2] (see footnote for link to full-text article).
They even (ahem) "show" that overall violence in kids is higher in the five most gun-prevalent states than in the five least. They do this by using a gun-prevalence proxy that two of them have recently acknowledged elsewhere is an inferior way to estimate gun prevalence -- this comes in a paper for the Duke University Sanford Institute of Public Policy[3]. Not only that, but the creator of the proxy coauthored this Duke paper -- even he, apparently, thinks it is inferior (see footnote for link to full-text paper).
What's a proxy? Gun ownership levels can be difficult to determine, so there are two ways to estimate them: conduct surveys, or use a proxy -- a formula based on something that involves guns, from which ownership levels are deduced.
The proxy the Harvard crew used is called Cook's index. Named for Philip Cook[4], it is based on the proportion of guns used in two types of violent death -- homicide and suicide. It is the average of two fractions: firearm suicides (FS) over all suicides (S), and firearm homicides (FH) over all homicides (H). Its formula is (FS/S + FH/H)/2. The assumption behind it is that the more firearms are represented in overall violent death, the more prevalent they are.
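To make the formula concrete, here is a minimal Python sketch of Cook's index. The function name and the sample figures are my own illustration, not numbers from the Harvard article.

    def cooks_index(fs, s, fh, h):
        """Cook's index: the average of the firearm share of suicides
        and the firearm share of homicides, i.e. (FS/S + FH/H) / 2."""
        return (fs / s + fh / h) / 2

    # Hypothetical jurisdiction, for illustration only: 60 of 100 suicides
    # and 40 of 80 homicides involved a firearm.
    print(cooks_index(fs=60, s=100, fh=40, h=80))  # 0.55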
The Harvard trio also used a second proxy (sort of Cook's lite) -- simply FS/S. While the Duke paper identifies it as the better of the two, it has problems similar to Cook's index, though to a lesser degree. As well, they looked at 21-state gun-prevalence surveys done by the Centers for Disease Control (CDC) in the 1990s and a regional-level survey[5]. However, they focused on Cook's index in their article. It is from their Cook's index calculations that they derive their pronouncements about the top and bottom five states, and it is to Cook's index that I devote most of my attention.
In the Journal of Trauma article, the Harvard group claims there is no 50-state survey of gun prevalence. While that is perhaps technically true, there are 48-state surveys (excluding Hawaii and Alaska).
Could the Harvard gang simply have not known about them? Nope. In the Duke paper, two of them identify these 1996 and 1999 surveys. Indeed, the surveys were conducted for them -- or at least for the Harvard Injury Control Research Center, which is their part of the Harvard School of Public Health. In any event, all of them have used this data in other articles, in 1998 and 2000 -- prior to submitting the Journal of Trauma article (submitted in December 2000)[6].
This becomes important when reviewing the Harvard gang's Journal of Trauma work -- they imply that the 21-state survey is all they have to check their numbers against, and they note that their Cook's and FS/S calculations correspond with the 21-state survey -- thus they must be relatively accurate, right? Well, not exactly.
Chart 1 shows the relationships between four state-level measurements identified in the Duke and
Trauma papers:
1) Harvard's Cook's index calculations
2) Harvard's FS/S calculations
3) Harvard�s 48-state surveys
4) CDC�s 21-state surveys
(Note: see data sets by clicking charts, excepting chart 2)
If a state is red/blue and underlined, it shows variance but was excluded from the Harvard threesome's check for their Cook's index and FS/S calculations. In other words, Harvard's Cook's index and FS/S match survey-based estimates if you exclude most of the states that show variance with them.
Indeed, high-variance states are the most likely to be excluded, and low-variance states (black) are the least likely to be excluded. Using the CDC 21-state survey as a check excludes 17 of 24 (70.8%) high-variance states and 9 of 17 (52.9%) moderate-variance states. But it excludes only 3 of 9 (33.3%) low-variance states.
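Those exclusion rates follow directly from the counts just cited; a quick check in Python:

    # Exclusion rates by variance category, using the counts cited above.
    for label, excluded, total in [("high", 17, 24), ("moderate", 9, 17), ("low", 3, 9)]:
        print(f"{label}-variance states excluded: {excluded}/{total} = {excluded / total:.1%}")
    # high: 70.8%, moderate: 52.9%, low: 33.3%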
Cook's index, incidentally, does not measure the quantity or rate of misuse. A town of 100,000 people with just one murder and two suicides in a year nonetheless could have a high Cook's index if, say, the homicide and one of the suicides were committed with guns. Its index would be 0.75 (the maximum is 1.00). The formula: (1/1 + 1/2)/2.
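Plugging the town's numbers into the formula confirms the 0.75 figure (again, a sketch; the town is hypothetical):

    # Hypothetical town: 1 homicide (with a gun), 2 suicides (1 with a gun).
    fs, s = 1, 2   # firearm suicides / all suicides
    fh, h = 1, 1   # firearm homicides / all homicides
    print((fs / s + fh / h) / 2)  # 0.75 -- a high index despite only three deaths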
Cook's index supposedly corrects for high rates of gun homicide and/or suicide in low-prevalence areas. For example, moving to the state level, Hawaii has a very low homicide rate, while New York has a higher rate. But both have a low proportion of gun homicides/suicides, so Hawaii ranks 1st and New York ranks 8th in low gun prevalence -- 49 states have higher gun prevalence than Hawaii, and 42 have higher than New York.
OK, nothing odd so far. Now let's look at two states, one with high violence and another with low violence -- Maryland (high) and South Dakota (low). Maryland has a higher Cook's index than South Dakota; therefore, Maryland must have more gun prevalence, right?
Yes, it must if you accept Cook's index as valid. Indeed, in the Harvard analysis, South Dakota ranks as having a lower prevalence of guns than Maryland. In fact, South Dakota's ratios of gun homicide/suicide to overall homicide/suicide are so low that the state ranks 6th on the low-prevalence scale. That's right; the Harvard crew calculated that in 44 states -- including New York (8th), Connecticut (11th), Illinois (12th), California (23rd) and Maryland (28th) -- guns are more prevalent than in South Dakota.
In the FS/S proxy, however, South Dakota drops from 6th to 25th. In the 48-state survey, it drops from 6th to 44th. On the other hand, in the FS/S proxy, Maryland rises from 29th to 11th. In the 48-state survey, it rises from 29th to 13th.
Note that all those high-homicide, high-suicide states at the bottom under either Cook's index or FS/S move out of the bottom -- the bottom five states based on the 48-state survey are South Dakota (44, moves down 38 slots), Vermont (45, down 13), Montana (46, down 19), Idaho (47, down 11) and Wyoming (48, down 7).
The flaw (this is my analysis, not Duke's) is that Cook's index assumes that gun prevalence and the proportion of gun use in homicide and suicide move together along one linear spectrum, when in reality their relationship must be looked at on a plane (see chart 2).
Cook's index wrongly assumes there are only two basic types of localities: (B) higher prevalence, higher proportion and (D) lower prevalence, lower proportion. However, two additional types exist: (A) higher prevalence, lower proportion and (C) lower prevalence, higher proportion.
The problem caused by Cook's linear flaw is that (A) and (C) localities get pushed towards the wrong ends of the prevalence spectrum.
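One way to picture the two-dimensional point is to classify localities by the two measures separately, as in the illustrative Python sketch below. The 50% cutoffs are arbitrary placeholders of my own, not values from either paper.

    # Illustrative only: classify a locality on the prevalence/proportion plane.
    def quadrant(prevalence, gun_proportion):
        high_prev = prevalence >= 0.5       # e.g. share of households with a gun
        high_prop = gun_proportion >= 0.5   # gun share of homicides/suicides
        if high_prev and high_prop:
            return "B: higher prevalence, higher proportion"
        if not high_prev and not high_prop:
            return "D: lower prevalence, lower proportion"
        if high_prev:
            return "A: higher prevalence, lower proportion"
        return "C: lower prevalence, higher proportion"

    print(quadrant(0.6, 0.3))  # an (A) locality -- the kind Cook's index misplaces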
Indeed, there is some circular reasoning going on: overall homicides and suicides will be affected by the proportion of gun misuse because guns are the primary weapon of choice in both. Any change in the use of the dominant mechanism will affect the total more than changes in lesser-used mechanisms.
Thus, (A) states tend to have lower overall violence because they have lower proportions of gun misuse (despite their higher gun prevalence), and (C) states tend to have higher overall violence because they have higher proportions of gun misuse (despite their lower prevalence of guns).
In other words, the up and down movements of Cook's index and violence levels are tied more to each other than to gun prevalence. This means that using Cook's index as a proxy for firearm prevalence is not much different from using overall violence as the proxy. Though hidden by omitting the first step of the thought process, the circular logic in the Harvard Journal of Trauma article is:
High violence = high Cook's index (now, take a measurement of violence) = high violence.
The flaws in Cook's index show up in other ways. Looking at Texas, for example, that state's Cook's index[7] falls from 0.72 in 1994 -- (1,652/2,337 + 1,568/2,113)/2 -- to 0.59 in 1999 -- (1,224/2,005 + 794/1,391)/2. If Cook's index is a valid measure of gun availability, then guns became 18% less available in Texas in the late 1990s. Yep, Cook's index tells us that nearly one in five guns left the state. Wow, those Texans must really hate their guns!
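Running the cited Texas figures through the formula reproduces both index values and the roughly 18 percent apparent decline (a quick Python check, using the numbers as given above):

    # Texas figures cited above (from CDC WISQARS).
    tx_1994 = (1652 / 2337 + 1568 / 2113) / 2   # ~0.72
    tx_1999 = (1224 / 2005 + 794 / 1391) / 2    # ~0.59
    print(round(tx_1994, 2), round(tx_1999, 2))
    print(round((tx_1994 - tx_1999) / tx_1994 * 100))  # ~18% apparent drop in "availability"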
Warning:
Next 3 Charts Are Based on Bogus Cook's Index Measurements
Take State Rankings with Grain of Salt
In any event, since Cook's index and violence levels move up and down together -- independently of gun prevalence -- a Cook's index ranking of states will have high or low violence clustered at the extreme ends[8].
This means that by looking at only the extreme ends (see Chart 3) -- which the Harvard threesome emphasizes in the Journal of Trauma article -- wrong impressions emerge, leading to inaccurate pronouncements.
The error, however, begins to become apparent when you look at all states (see Chart 4). The correlation exists only at the skewed ends. Notice that except for the ends, the trend is a fairly flat line. Why?
I can think of numerous hypotheses, including that the Cook's index used by Harvard is derived from the whole population, while the yellow violence line refers to a subset of that population -- children aged 5-14. The subset might have its own index to match its violence trend.
More likely is what we see by breaking "overall violence" out into its subgroups: homicide and suicide (see chart 5).
By doing this -- even though the order of supposed gun prevalence is bogus -- we see that there seems to be some substitution -- and thus balancing -- between the categories of violence.
Excepting South Dakota, each marked state is noted twice (find matches by looking straight up or straight down). States with higher homicide rates have lower suicide rates for this age group and vice versa. The phenomenon isn't limited to just a few states -- it occurs in most states.
Why is that? I don't know, and I don't want to make half-cooked pronouncements the way the Harvard crew does.
Now Let's Look at Something Closer to Reality
What I do know is that when we look at the trend in violence based on the 48-state survey, rather than the faulty Cook's index (or FS/S), the lines do get rather flat -- especially when you take into account two facts: the District of Columbia is not in these comparisons, and Idaho's high violence is due to a single-year aberration.
Note that on Chart 6 the trend shows rising violence from 0% to 30% gun prevalence. Then it flattens out. Then, around 70% gun prevalence, it starts going down again. (Does it go down farther as ownership approaches 100%? I have no way of knowing.)
First Idaho: I've marked Idaho (ID) to the far right. Idaho's violence is so high because, in 1990, it had an aberrantly high number of suicides.
Typically the state has 2 to 5 suicides per year for this age group, but in 1990 it had 11 (5 non-gun). In fact, that year's suicides account for about a quarter of the entire 10-year tally -- 11 out of 43. Remove 1990 (look at a different 10-year period) and Idaho's overall violence level drops, and along with it drops the trend line -- the high-prevalence end begins to look a lot like the low-prevalence end.
There's also a lesson here: many other states have similarly low numbers year to year, and so have similarly unstable numbers. While the 10-year pooling gets the numbers up, stabilizing them a bit, there's enough instability that it's risky to draw conclusions no matter what proxy or survey is used.
Now D.C.: What do we do with the District of Columbia? It should be included. It has more people (about 572,000) than the entire state of Wyoming (about 494,000), and similar numbers to a few other states like North Dakota and
Vermont[9].
On the other hand, Washington is a city, and simply throwing it into the mix is improper (apples and oranges). Cities generally have much higher violence rates than states because state rates moderate as high- and low-violence areas cancel each other out. D.C.'s violence rate for this period and age group is 9.66, more than three times that of the closest state -- putting it in the mix would cause a huge spike, bending the trend way out of whack.
In my opinion, combining D.C. and Maryland into a single unit makes the most sense. Geographically and socio-economically, D.C. fits into Maryland. Indeed, the west half of the city has more in common with bordering Montgomery County, Md., than it does with the east half of the city, and the east half of the city has more in common with bordering Prince George's County, Md., than it does with the west half.
Combining D.C. and Maryland would do two things. It would move Maryland closer to the left of the gun-prevalence scale, in much the way that other cities with extreme gun control -- like New York City and Chicago -- move their states in that direction. However, Washington's high violence rate would push Maryland's violence rate up (although, thanks to state-level moderation, there would be no spike of the sort that counting D.C. alone would cause).
Farther to the left and higher up than it is now, Maryland would pull the trend upwards, making the two ends of the trend look even more similar.
We've got a soft bell curve with a slight dip in the middle.
Hawaii and Alaska: These states also are not included. Hawaii would pull the low-prevalence end towards low violence, and Alaska probably wouldn't change the trend much at all.
Conclusion: I'm not going to draw any conclusions about guns and kids. We are dealing with many low-quantity numbers. As the Idaho example illustrates, these numbers can be unstable, making conclusions risky.
What I will conclude is that the Harvard study used a faulty proxy with unstable numbers. It proves nothing.
Sean Oberle is a Featured Writer and gun control analyst for KeepAndBearArms.com. He can be reached at
[email protected]. View other articles from him at
http://www.KeepAndBearArms.com/Oberle.
Footnotes
1 For example: Harvard School of Public Health Feb. 19 press release, available on the web at:
http://www.hsph.harvard.edu/press/releases/press2192002.html, or Erica Nagourney, Feb. 26, New York Times, available on the web at:
http://www.nytimes.com/2002/02/26/health/26SAFE.html.
2 Miller, Matthew; Azrael, Deborah; Hemenway, David. "Firearm Availability and Unintentional Firearms Deaths, Suicide, and Homicide among 5-14 Year Olds." The Journal of Trauma, Feb. 2002, Vol. 52, No. 2, pp. 267-275. Full text (PDF):
http://www.hsph.harvard.edu/press/releases/trauma_article.pdf.
3 Azrael, Deborah; Cook, Philip; Miller, Matthew. "State and Local Prevalence of Firearms Ownership: Measurements, Structure, and Trends." Duke University Terry Sanford Institute of Public Policy, Working Paper Series (SAN01-24), Sept. 2001. Full text (PDF):
http://www.pubpol.duke.edu/people/faculty/cook/SAN01-25.pdf.
4 Cook, Philip. "The Effect of Gun Availability on Robbery and Robber Murder: A Cross Sectional Study of 50 Cities." Policy Studies Reviews Annual 3, 1979, pp. 743-781.
5 See footnote 2.
6 See footnote 3.
7 All violence data in this article is from the Centers for Disease Control Injury Mortality database (WISQARS), which can be accessed online at
http://www.cdc.gov/ncipc/wisqars. Click on charts to see full data sets.
8 See footnote 7.
9 U.S. Census Bureau, Table: "Time Series of State Population Estimates," available at:
http://eire.census.gov/popest/data/states/populartables/table01.php.