Why cite the press release and not the paper? Well, that's because the conclusions of the paper said the following.
Other artifacts have yet to be quantified, and as a result, the extent to which the continued rise represents a true increase in the occurrence of autism remains unclear.
A true increase... remains unclear. Is that clear enough?
The paper is titled "The rise in autism and the role of age at diagnosis." I'll refer to it as Hertz-Picciotto & Delwiche (2009). I've already criticized multiple aspects of the paper (last time here) but I'd like to say a few more things about it.
The conclusions of the paper seem refreshingly honest, but I'm guessing they are that way simply to get through peer-review. The missing artifacts are not just any artifacts, either. One of the artifacts that did not enter the calculations is key: awareness. It would be nonsensical to assume that awareness of autism has not changed since the early 1990s.
But what did the press release say?
“It’s time to start looking for the environmental culprits responsible for the remarkable increase in the rate of autism in California,” said UC Davis M.I.N.D. Institute researcher Irva Hertz-Picciotto, a professor of environmental and occupational health and epidemiology, and an internationally respected autism researcher.
Hertz-Picciotto said that many researchers, state officials and advocacy organizations have viewed the rise in autism's incidence in California with skepticism.
This is completely at odds with the conclusions of the paper, and I find it quite dishonest. To make it perfectly clear, yes, I'm accusing Dr. Hertz-Picciotto of intellectual dishonesty.
Dr. H-P further states:
“These are fairly small percentages compared to the size of the increase that we’ve seen in the state,” Hertz-Picciotto said.
Is that even true? The paper finds that a 2.2-fold increase may be explained by changes in diagnostic criteria, a 1.56-fold increase by the inclusion of "milder" cases, and a 1.24-fold increase by changes in age at diagnosis. Because these factors operate multiplicatively, combining them yields a 4.26-fold increase that may be explained by just these three artifacts.
That's 62% of the entire rise the authors were attempting to explain. Maybe 62% is a "fairly small percentage." In this case I'm going to give Dr. H-P the benefit of the doubt and say that she likely didn't know the factors needed to be multiplied. There is no indication in the paper that a calculation of the combined contribution of the three artifacts was even attempted. I won't even get into statistical uncertainty.
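Since each artifact scales the counts as a ratio, the combined effect is the product of the individual factors, not their sum. A quick sanity check of the arithmetic, using the three factor values quoted above:

```python
# Each artifact inflates the counts multiplicatively, so the combined
# effect of the three artifacts is the product of the individual factors.
criteria_change = 2.2    # changes in diagnostic criteria
milder_cases = 1.56      # inclusion of "milder" cases
age_at_diagnosis = 1.24  # changes in age at diagnosis

combined = criteria_change * milder_cases * age_at_diagnosis
print(round(combined, 2))  # → 4.26
```

Note that adding the excess portions instead (1.2 + 0.56 + 0.24) would understate the combined effect, which is presumably why the individual factors look "fairly small" when quoted in isolation.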
The flaws of the analysis are not what bother me the most, however. Consider this. What is the value of this study? Why was it carried out? If you're trying to determine whether artifacts can explain the rise in autism service classifications in California, and you cannot estimate the contribution of all relevant artifacts, what is the point of the analysis?
Here's an analogy. Suppose you wanted to determine if global warming can be explained by the greenhouse effect. In order to do this, you estimate the contribution of methane, water vapor and nitrous oxide to recent temperature increases, but you leave out CO2. You conclude that those 3 gases alone cannot explain the entire rise in temperatures, but perhaps that's because you did not consider the biggest contributor to the greenhouse effect: CO2. Then you tell the media that greenhouse gases cannot fully explain the rise in temperatures.
Wouldn't such a study be better understood as a propaganda effort, rather than a contribution to scientific knowledge?
Someone might complain that Hertz-Picciotto & Delwiche (2009) does contribute to scientific knowledge, because it tells us about the impact of certain artifacts. But does it actually do that?
Take what is perhaps the most important artifact the paper does take into account: changes in diagnostic criteria. What would you do if you wanted to determine the impact of changes in criteria? You might carry out a prevalence study with good case-finding that applies two different sets of criteria to the same population: DSM-IV and DSM-III (or perhaps Kanner criteria). Hertz-Picciotto & Delwiche (2009) do nothing that even resembles this. They rely on data from a separate Finnish study, so they don't even contribute new data, and we can't be sure how well data from Finland apply to California. The case-finding of the Finnish study is not necessarily very good either. Plus, it's just one data point, with all the uncertainty that implies.
To determine the impact of the inclusion of "milder" cases (i.e. anything that is not "autistic disorder"), what would you do? I think you could evaluate a random sample of CalDDS autistic children and diagnose them with either autistic disorder, PDD-NOS or Asperger's. What the researchers did instead was use data from a separate MIND Institute study in which CalDDS children had been evaluated with the Autism Diagnostic Observation Schedule (ADOS) and the Autism Diagnostic Interview (ADI). Are these diagnostic tools even able to accurately distinguish between autism spectrum diagnoses? I'm not aware of any evidence that they can.
In other words, the MIND Institute study is not even informative about the impact of the artifacts it did take into account. I frankly can't see this study as a contribution to scientific knowledge at all. It gives the appearance of being part of a propaganda effort.