In the well-known cases, political actors band together with researchers who continually produce results favoring the politicos' pet topics. It's not that hard to produce the desired results, even when the mass of evidence doesn't support your side. It simply requires that these researchers restrict themselves to tiny slivers of the available information on their topic. Global warming deniers look at temperatures in only one location or across one short period of time. Evolution deniers focus on unanswered questions and stay far away from the genetic evidence.
The results are what you would expect. They see what they want to see. They support what they want to support. If I were to do what they do, I could declare downtown Minneapolis to be a residential district, based solely on the condo high-rises.
Someone would come along very quickly and point out how badly I had bungled my research, but by then, the damage might be done. A politician could still push through a zoning decision using my study (or one slightly less obviously biased). And if I wanted to make it easier for the politician, I could do another study focused on riverfront condos to support my original bad research. Two studies! The "evidence" mounts!
It shouldn't be a surprise that more groups than just global warming and evolution deniers use this strategy of designing bad studies and legislating from them. They might be the best known, however, because their motivations are so easily understood. They're downright transparent. A few scattered cranks (there are always stray cranks) aside, the political forces behind evolution denial are religious. Those behind global warming denial represent economic interests that are threatened by our need to reduce our reliance on fossil fuels. These groups are easy to spot because we understand their motivations for winnowing information down to only what they want to believe.
There are topics, however, where the deniers are less obvious, even when they engage in similar tactics. Their motivations are subtle or complex, or they form unlikely coalitions, bound together only by their views on a single subject. The strict marginalization of sex-oriented businesses is one of those topics. It unites pro-business conservatives who are appalled by sex and pro-sex liberals who consider profit equal to exploitation, plus a lot of people whose reasons are as varied as their sexual interests.
Whatever their motivation, those who argue that the presence of adult businesses has a detrimental effect on crime rates and property values are still engaging in the same kind of denialism. They're relying on just a small portion of the available information to make their case.
Why would anyone feel the need to produce anti-sex-business research? At least within the U.S., sex-related expression is protected under the First Amendment, with a few exceptions. Expression for profit falls under those protections. Those who would prefer such things not happen where they can see them have to find another reason to ban stripping, purveyors of pornography, and toy stores for grown-ups. They need a legal basis that amounts to more than "Ew."
Rising crime rates and declining property values can provide that basis. Want to say, "Not in my town/neighborhood"? Just produce a few studies saying bad things happened in other communities when they allowed adult businesses, and you have a non-speech-related reason for putting your foot down. Plenty of other communities have already done it. (I'm simplifying this drastically. For a really in-depth discussion of the legal standard, called the Secondary Effects Doctrine, check out this article. PDF available here.)
There's just one little problem: The studies themselves. In 2001, Paul, Linz, and Shafer took a look at what kind of evidence was being used by those who wanted to marginalize sex-related businesses. What they found was impressive...but not in the way one would hope.
The researchers started with a list of four requirements that would need to be met for a study on the topic to be considered scientific. In situations like this, where laws and regulations may be challenged in court, scientific evidence isn't just a good idea. It's the legal standard, so meeting these scientific criteria is important.
- The control areas (areas without sex-related businesses used to measure the effects of everything else happening during the study period) must be, well, comparable to the areas with new sex-related businesses. Because we can't just randomly assign adult businesses to various areas and see what happens, these studies should use a matched control approach when possible. That means the study and control areas should match in factors known to affect crime, if crime rate is the topic of interest, or factors known to affect property values, if that's what's being studied. This means they should be comparable in things like population density, traffic, median income, land use, industry mix, etc.
- The study should cover as much time as possible both before and after the adult business is established. Crime rates and home values both have a seasonal component that can make short-term studies nonrepresentative of longer trends. Crime rates, particularly for individual types of crimes with low overall rates, can fluctuate wildly in the short term. As an example, at this time last year, Minneapolitans were flipping out over the murder rate. In the first eight days of January, we'd had five murders, after a total of 19 in 2009. Our city was falling apart. However, checking again at the end of January, we'd had only two more. By the end of June, we stood at 24. In all of 2010, we had 39. That's still more than twice the count in 2009, but it's the same as in 2008, which makes it the second lowest rate in the last decade, with 2009 being the lowest. The study period matters.
- The source of the data must be valid and comparable across study areas and times. The second part of this is simple. If you use one type of report or source of data to measure crime or property values, use the same measurement everywhere. That's standard research methodology. So is the first part, validity, but in the context of these studies, it deserves a special mention. Why? Because despite crime rates, property tax valuations, and sale prices being public information, many of the "studies" cited didn't use this information. They relied instead on asking people what they thought their exposure to crime would be or what property values in the area would do if a sex-related business opened. In other words, in order to show that they weren't exercising bias against sex-related businesses, communities were relying on studies that measured people's biases.
- Survey data that is used should come from properly conducted surveys. The authors mention this benchmark as something of an afterthought. While they didn't find circumstances in which surveys would be appropriate, they did note several surveys that didn't clear this hurdle.
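The matching idea in the first benchmark can be sketched in a few lines. This is a toy illustration only, not the procedure any of the cited studies used: the tract names and covariate values below are invented, and real matched-control designs use many more variables and more careful scaling.

```python
# A toy sketch of matched-control selection: pick the candidate control area
# closest to the study area on covariates known to affect the outcome.
# All areas and numbers here are invented for illustration.
import math

# Each area: (population_density, median_income_in_thousands, traffic_index)
study_area = (5200, 48, 0.7)

candidate_controls = {
    "tract_a": (5100, 46, 0.65),
    "tract_b": (1200, 90, 0.20),
    "tract_c": (5400, 50, 0.72),
}

def distance(a, b, scales=(5000, 50, 1.0)):
    """Euclidean distance with each covariate rescaled to comparable units."""
    return math.sqrt(sum(((x - y) / s) ** 2 for x, y, s in zip(a, b, scales)))

# The best-matched control is the one at minimum covariate distance.
best = min(candidate_controls, key=lambda k: distance(study_area, candidate_controls[k]))
print(best)
```

The point of the rescaling step is that raw covariates live on wildly different scales (thousands of people per square mile versus a 0-to-1 traffic index), and without it the largest-magnitude variable would dominate the match.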
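The Minneapolis murder figures quoted in the second benchmark show why short study periods mislead. Here's the back-of-the-envelope arithmetic, just a naive linear extrapolation to make the distortion concrete, not anyone's actual forecasting method:

```python
# Naive annual projections from short observation windows vs. the actual
# year-end count, using the Minneapolis murder figures quoted above.
DAYS_IN_YEAR = 365

def annualize(count, days_observed):
    """Linearly extrapolate a short-window count to a full-year projection."""
    return count * DAYS_IN_YEAR / days_observed

proj_8_days = annualize(5, 8)    # 5 murders in the first 8 days of 2010
proj_31_days = annualize(7, 31)  # 7 murders by the end of January
actual_2010 = 39                 # the real year-end total

print(f"Projection from 8 days:  {proj_8_days:.0f}")
print(f"Projection from 31 days: {proj_31_days:.0f}")
print(f"Actual 2010 total:       {actual_2010}")
```

Eight days of data project a year several times worse than what actually happened, and even a full month still overshoots by a factor of two. A before/after study of an adult business that covers only a few months is vulnerable to exactly this kind of noise.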
The results were dismal by scientific standards. A full 73% of these reports were records of political discussions on the topic, not studies of any sort. After removing these, along with anecdotes such as reports of arrests that happened near sex-related businesses, the authors were left with 29 actual studies.
They rated the ten most frequently cited reports on whether they met the four requirements above, as well as how clearly their results demonstrated secondary effects (see the table).
None of these ten reports met all of the applicable requirements. Two were not even studies. One study, with its flaws, showed positive evidence of undesirable secondary effects. Four of the remainder showed mixed evidence for and against negative secondary effects, and fully half of the top ten most-cited reports completely failed to support the idea that sex-related businesses lead to higher crime rates or lower property values.
In other words, towns and cities that were using these reports to justify marginalizing sex-related businesses were relying on poorly produced information. Beyond that, they were using only the bits of information that supported what they already wanted to do, and misrepresenting much of it at that.
That was 2001. Has the situation changed since the Paul, Linz, and Shafer paper? It's not easy to say. I wasn't able to find a summary of recent use of secondary effects reports in zoning or other government decisions, so I can't say whether the bad reports are successfully being challenged.
In the peer-reviewed literature, the situation is a little brighter. Studies are addressing the scientific requirements above. McCleary and Weinstein's 2009 study on secondary effects in Sioux Falls, SD (pdf here) reports what it did to match its study and control areas, covers a substantial period of time, and reports an estimated error rate. McCleary's 2008 study on the crime rates before, during, and after the operation of a rural porn and adult toy store (pdf here) does no matching to a control, and it has some other problems with drawing conclusions that aren't supported by the data as presented, but it does cover an extended period of time and report error rates.
These two studies found evidence for secondary effects. However, that doesn't mean the post-2001 peer-reviewed literature unambiguously supports the idea that sex-related businesses lead to higher crime rates. Linz, Paul, Land, Williams & Ezell found in 2004 (pdf here) that, when sites in Charlotte, NC were closely matched to controls on variables already known to be related (statistically) to crime, there were largely no significant differences between the sex-related and other businesses. Where there were significant differences, there was less crime surrounding the sex-related businesses. Linz, Paul & Yao also failed to find any higher crime rates surrounding sex-related businesses in San Diego in a 2006 study (pdf here).
The picture is neither clear nor simple, unless care is taken to only look at the evidence that tells you what you want to hear. Unfortunately, that does still seem to be happening.
I don't know what the legal status of sex-related businesses is in Britain. I'm sure the topic is just as complicated and nuanced as it is here in the U.S. What I do know is that I am seeing a picking and choosing of evidence on the relationship, if any, of sex-related businesses and crime.
Dr. Brooke Magnanti (yes, aka Belle de Jour) recently published a green paper, a report meant to stimulate public and governmental discussion of an issue. The issue at hand? A reanalysis of a 2003 study suggesting a link between the addition of a lap-dancing club in Camden and increased rates of sexual assault.
Rather than go into a great deal of detail about the study or the reanalysis, I'll let the paper do the talking. The original 2003 results:
In 2003, a report was released by Lilith Research and Development, a subsidiary project of Eaves Women’s Aid, a London women’s housing agency. The report examined the phenomenon of lap-dancing clubs in the north London borough of Camden and its effects on crime rates from the late 1990s onward. One conclusion that received considerable attention was the statement that following the introduction of lap-dancing clubs, rape in Camden rose by 50%. In 2009, corrections to the statistics were reported in the Guardian stating that the change between 1999 and 2002 was a somewhat lower increase of 33% (Bell 2008). It still however implies evidence of a cause-and-effect relationship between lap dancing clubs and rape. The uncorrected claims that rapes rose by 50% after lap dancing clubs opened and that Camden’s incidence of rape is three times the national average are still reported in national and international media (Hunt 2009, Guest 2010).
In her paper, Magnanti extended the time frame, adjusted for population increases, and included other rates (all of England and Wales, plus two other boroughs) for comparison. Islington was included in the original report and has lap-dancing clubs. Lambeth was chosen by Magnanti as being similar in size and ethnic makeup to Camden but without the clubs. Here is the same information presented visually once the additional data is incorporated (red added to show the information from the original report):
As the graph shows, adding information changes the picture considerably. It no longer appears that adding lap-dancing clubs leads to an increase in rapes. The original study is shown for the artifact it likely was.
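One piece of Magnanti's correction, adjusting for population growth, is simple rate arithmetic, and it's worth seeing why it matters. The counts and populations below are made up for illustration; they are not Camden's real figures:

```python
# Hypothetical illustration of why population-adjusted rates matter.
# All counts and populations below are invented, not Camden's real figures.

def rate_per_100k(count, population):
    """Incidents per 100,000 residents."""
    return count / population * 100_000

def percent_change(old, new):
    return (new - old) / old * 100

# Raw counts rise 50%...
old_count, new_count = 100, 150
# ...but suppose the population grew over the same period.
old_pop, new_pop = 190_000, 220_000

raw_change = percent_change(old_count, new_count)
rate_change = percent_change(rate_per_100k(old_count, old_pop),
                             rate_per_100k(new_count, new_pop))

print(f"Change in raw counts: {raw_change:.0f}%")
print(f"Change in the rate:   {rate_change:.0f}%")
```

With these made-up numbers, a 50% jump in raw counts shrinks to roughly a 30% rise in the per-capita rate, which is exactly the kind of gap a study that ignores population change will misreport.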
However, just as with the citations presented under the U.S. secondary effects doctrine, the reaction to Magnanti's green paper suggests that finding out the truth about the societal impact of sex-related businesses is not the point for many people. The Lilith report she examined received lots of press. It was cited repeatedly in the shaping of public policy. Her analysis has...not.
Picking and choosing the studies that support your existing position. Picking and choosing the data within studies that do the same. What is that but scientific denialism?
Paul, B., Shafer, B., & Linz, D. (2001). Government Regulation of "Adult" Businesses Through Zoning and Anti-Nudity Ordinances: Debunking the Legal Myth of Negative Secondary Effects. Communication Law and Policy, 6 (2), 355-391. DOI: 10.1207/S15326926CLP0602_4
Linz, D., Paul, B., Land, K., Williams, J., & Ezell, M. (2004). An Examination of the Assumption that Adult Businesses Are Associated with Crime in Surrounding Areas: A Secondary Effects Study in Charlotte, North Carolina. Law & Society Review, 38 (1), 69-104. DOI: 10.1111/j.0023-9216.2004.03801003.x
Linz, D., Paul, B., & Yao, M. (2006). Peep show establishments, police activity, public place, and time: A study of secondary effects in San Diego, California. Journal of Sex Research, 43 (2), 182-193. DOI: 10.1080/00224490609552313
McCleary, R. (2008). Rural Hotspots: The Case of Adult Businesses. Criminal Justice Policy Review, 19 (2), 153-163. DOI: 10.1177/0887403408315111
McCleary, R., & Weinstein, A. (2009). Do “Off-Site” Adult Businesses Have Secondary Effects? Legal Doctrine, Social Theory, and Empirical Evidence. Law & Policy, 31 (2), 217-235. DOI: 10.1111/j.1467-9930.2009.00295.x