Tuesday, January 13, 2009

The Age of Diagnosis Analysis is Also Wrong

I've written a theoretical critique of H-P et al., and I've also looked at the figures from the paper to check whether the artifacts it does take into account really cannot explain the observed rise, as the primary author and other people associated with the MIND Institute have emphasized in the media.

In my first critique I basically skipped the section on Age of Diagnosis. I did not consider it the most important section, and the result (1.2-fold rise for the proportion of diagnoses by age 5) seemed plausible. The more I look at the paper, however, the more I come away thinking it's an exceedingly naive paper that got past peer review who knows how.

So I decided it was probably a good idea to have a closer look at the section on Age of Diagnosis. As it turns out, that section is also wrong.

Wrong Assumptions

What the paper does is compare the proportion of diagnoses before age 5 in the 1990 vs. the 1996 birth year cohorts. It finds that the proportion increased by only 12% in the 1996 cohort. Then it extrapolates from this to 2002. (I'll look at the extrapolation method later.)

That seems fine, right? You basically find out to what extent diagnoses by age 5 have changed, relative to all diagnoses you might expect to have in the cohort.

Except that's not what the paper does, nor would it be able to do that. What the paper looks at is the proportion of diagnoses by age 5 relative to diagnoses by age 10.

If there are few if any diagnoses after the age of 10, then that would work, correct? Intuitively, it seems reasonable that there wouldn't be too many diagnoses after the age of 10. But intuition and reality don't always agree. I knew that was an incorrect assumption because I've been looking at California DDS data for a number of years. (For example, see my post titled The Epidemic of Autism... Among 18-21 Year Olds.)

I have birth year data that California DDS provides on request (a file named Job5028.zip). Let's look at the number of autistic clients born in 1990 as reported at different times.

In June, 1995 (approx. age 5): 404
In June, 2000 (approx. age 10): 663
In March, 2007 (approx. age 17): 918


Clearly, there is a non-trivial number of diagnoses after the age of 10. Of all the diagnoses by age 17, about 28% occur after the age of 10. There will no doubt be diagnoses after the age of 17 too.

Suppose things have changed since 1990. Perhaps in the 2002 birth year cohort close to 100% of California autistics are diagnosed before age 10. We can't know this, but if this were the case, I estimate that the impact of age of diagnosis would be about 1.6-fold and not 1.2-fold. With this, the total rise explained would get pushed over a factor of 5.
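
To make that estimate concrete, here is a rough back-of-envelope sketch in Python. The caseload counts are the ones quoted above; the assumption that essentially the whole 2002 cohort is diagnosed by age 10, and the resulting denominator correction, are mine, so treat this as an illustration of the order of magnitude rather than a reconstruction of the paper's method.

```python
# Rough back-of-envelope sketch, not the paper's calculation.
# California DDS counts for the 1990 birth cohort, quoted above.
by_age_5 = 404    # reported June 1995 (approx. age 5)
by_age_10 = 663   # reported June 2000 (approx. age 10)
by_age_17 = 918   # reported March 2007 (approx. age 17)

# Share of the age-17 caseload added after age 10.
after_10 = (by_age_17 - by_age_10) / by_age_17
print(f"share of diagnoses after age 10: {after_10:.0%}")  # ~28%

# The paper's measure uses the age-10 caseload as the denominator;
# a fuller denominator would be the age-17 caseload (or later still).
print(f"proportion by age 5, age-10 denominator: {by_age_5 / by_age_10:.2f}")  # ~0.61
print(f"proportion by age 5, age-17 denominator: {by_age_5 / by_age_17:.2f}")  # ~0.44

# If essentially all of the 2002 cohort is diagnosed by age 10 (the
# assumption above), the paper's 1.24-fold figure understates the shift
# by roughly the ratio of the two denominators in the 1990 cohort.
correction = by_age_17 / by_age_10            # ~1.38
print(f"corrected age-of-diagnosis factor: ~{1.24 * correction:.1f}-fold")  # ~1.7
```

This crude version lands around 1.7-fold; the ~1.6-fold figure above is my somewhat more conservative estimate, but either way the correction is large enough to matter.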

Of course, diagnoses after age 10 are themselves confounded by changes in diagnostic criteria. Some of the paper's issues partially compensate for one another, which makes the paper difficult, if not impossible, to interpret.

Wrong Math

The statistical analysis of age of diagnosis in the paper consists of exactly the following.

A shift toward younger age at diagnosis was clear but not huge: 12% more children were diagnosed before age 5 years in the 1996 birth cohort (the most recent with 10 years of follow-up) in comparison with those in the 1990 cohort.
Extrapolation into the later birth cohorts (eg, 2002) would suggest a 24% rise in the proportion of diagnoses by age 5.


Basically, they do a linear extrapolation: 12% for 1990-1996, then assume it's probably another 12% for 1996-2002, which gives a total of 24%.

Is a linear extrapolation reasonable here? What if the shift toward earlier diagnosis accelerates after 1996?

It would be a good idea to look at the trend, wouldn't it? That's why I made the following graph of the proportion of clients at age 5 vs. those at age 10 for birth years 1990-1997.

[Graph: proportion of clients at age 5 relative to age 10, by birth year, 1990-1997]

You tell me, is a linear extrapolation reasonable there?

There are also considerable random fluctuations in the series, so the authors should have calculated a confidence interval on the slope of the linear regression, which is easy to do.
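
For what it's worth, the missing analysis takes only a few lines. Below is a minimal sketch: an ordinary least-squares fit of the age-5/age-10 proportion against birth year, with a confidence interval on the slope, before extrapolating anywhere. The proportions are hypothetical placeholders standing in for the values plotted above; only the 1990 value (404/663, about 0.61) comes from the counts quoted earlier.

```python
# Minimal sketch of the analysis the paper omits: a slope estimate with a
# confidence interval for the age-5/age-10 proportion across birth cohorts.
# The proportions below are hypothetical placeholders, except the 1990
# value, which is 404/663 from the DDS counts quoted earlier.
import numpy as np
from scipy import stats

years = np.arange(1990, 1998)  # birth cohorts 1990-1997
prop = np.array([0.61, 0.60, 0.63, 0.62, 0.65, 0.64, 0.68, 0.71])  # placeholders

fit = stats.linregress(years, prop)

# 95% confidence interval on the slope (t distribution, n - 2 degrees of freedom).
t_crit = stats.t.ppf(0.975, len(years) - 2)
ci = (fit.slope - t_crit * fit.stderr, fit.slope + t_crit * fit.stderr)
print(f"slope: {fit.slope:.4f} per year, 95% CI ({ci[0]:.4f}, {ci[1]:.4f})")

# Any extrapolation to the 2002 cohort should carry that uncertainty with it,
# and a look at the residuals (or a quadratic term) would say whether a
# straight line is even the right model past 1996.
print(f"naive linear prediction for 2002: {fit.intercept + fit.slope * 2002:.2f}")
```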

Comment

Let me recap. There appear to be major issues throughout the paper.

Age of Diagnosis - As noted, the assumption that there are few if any diagnoses after age 10 is mistaken, plus the statistical analysis is basically non-existent and naive.

Changes in Criteria - It gets its result from a single Finnish epidemiological study of a population of intellectually disabled children. Finland and California are not necessarily equivalent genetically and environmentally. The ascertainment methods are also not equivalent in the least.

Milder Cases - It assumes that only Asperger's and PDD-NOS would have been missed by a study such as the Finnish one. (There also seems to be a contradiction between what California DDS says in regard to Asperger's and PDD-NOS and what the authors believe, which probably needs clarification; the contradiction was noted by Kev.)

What's left?

Awareness - Not considered at all, but noted in the paper as an artifact that should be evaluated later.

Diagnostic substitution - Not addressed at all. The authors probably assume that diagnostic substitution is subsumed by the other artifacts, but it's non-obvious that this would be the case.

Migration - Dismissed in one paragraph as probably not having much of an impact.

Access - There's discussion on access, but no statistical analysis of its impact at all. It's unclear why it's included in the paper.

Statistical Analysis - Basically non-existent. No ranges of statistical confidence are provided. The authors seem to be under the impression that because they are looking at whole population numbers, there's no room for uncertainty in their figures.

Claims about results - The paper claims that artifacts account for a 4.26-fold rise, which does not come close to explaining a 6.85-fold rise. How so? Furthermore, if they had used a 3.6-fold figure for the impact of criteria (a figure from a meta-study), the entire rise would have been explained.
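
The bookkeeping behind those figures is purely multiplicative, so it's easy to check. The sketch below uses only numbers quoted in this post; the ~1.6-fold corrected age-of-diagnosis factor is my estimate from the Wrong Assumptions section, not a number from the paper.

```python
# Quick multiplicative check, using only figures quoted in this post.
observed = 6.85    # observed rise reported by the paper
artifacts = 4.26   # rise the paper attributes to its artifacts

print(f"unexplained residual: {observed / artifacts:.2f}-fold")  # ~1.6-fold left over

# Swapping the paper's 1.24 age-of-diagnosis factor for the ~1.6-fold
# corrected estimate above (my assumption, not the paper's figure):
adjusted = artifacts / 1.24 * 1.6
print(f"artifact total with corrected age factor: ~{adjusted:.1f}-fold")  # ~5.5, i.e. over 5
```

Checking the 3.6-fold criteria claim the same way would require the paper's own criteria factor, which I haven't quoted here.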

OK. I've read papers having to do with autism epidemiology that are quite poor. For example, I've read several papers by the Geiers. Even so, I'm debating whether H-P et al. is the worst such paper I've ever come across.

In my view, the credibility of the MIND Institute and of the authors has dropped a notch with this paper. Perhaps a big part of the problem is the way the paper was described in the media; the language in the paper itself is more guarded by comparison.

I think they need to consider the implications of being associated with something so naive, so mistaken, and so poorly communicated to the public. I wouldn't be surprised if some of the authors decide to retract the paper at some point in the future. That's also something the editors of the journal Epidemiology should think about.

5 comments:

  1. There are significant problems with using the delivery of social services as a surrogate for a medical diagnosis. These problems are further compounded when even the medical diagnosis - in this case, "autism" - is largely based on subjective findings and has criteria that are continually shifting.

    As far as I can recall, none of the "studies" using the California DDS client data (H-P et al included) have addressed the shifts and fluctuations due to changes in funding or the differences between the policies - written and unwritten, formal and informal - of the various regional centers that generate these data.

    H-P et al is - at best - a feeble first approximation at trying to quantify the impact of some of the administrative and societal causes of the "autism epidemic", but it is far from being definitive. It isn't even accurate enough to be considered wrong.

    At worst, the H-P et al study is a clumsy attempt by the MIND Institute to justify continued funding by the state of California during a budgetary crisis of epic proportions.

    I hope that other researchers, spurred on by this feeble attempt, will rouse themselves to do a better job of quantifying the contributing factors in the current rise in autism prevalence. Not because I think there is anything to be gained by this effort, but because it would be folly to let such a flimsy and obviously inaccurate "study" stand unopposed.

    Prometheus

  2. "There are significant problems with using the delivery of social services as a surrogate for a medical diagnosis."

    Definitely. I've had an ASD diagnosis since age 10, and officially re-diagnosed as classically autistic at 18, yet they wouldn't let me have any services beyond maybe 30 minutes speech every other week through the school until I went to college and really had trouble in a dorm, and now it's taking forever but at least they're putting me through the process of services. But yeah, as a kid my parents thought that autistic kids could never speak and never showed emotion, so they didn't think I was autistic (specially since my dad is also autistic but didn't know it), so when I couldn't speak some of the time or didn't understand things, or repeated words and phrases, rocking back and forth saying "Mommy come here" (regardless of whether I needed anything! :P), or sorted beads or stared at the ceiling instead of doing homework, then the most thought this was given was "attention difficulties" that were unspecified, never looked into further.

    Even today there's a lot of times when I have extreme auditory processing difficulty, and I don't understand what was said or need it said slower, and my parents just don't get it. Because when I was younger, I would repeat a phrase I knew to mask my misunderstanding, but now that I've got older in high school and college I can't get away with pretending to understand and need to get clarification when necessary. So ironically, when I communicate more, I look more impaired.

  3. I thought that the issue of diagnostic substitution had been discussed as part of the California data. Mark Blaxill and co-authors sort of disputed that diagnostic substitution could take place in California when they reanalyzed Lisa Croen's data. So, I might be mistaken, but I thought it was sort of shown that diagnostic substitution between autism and mental retardation had not been supported in California. I realize that is not the case with some other areas of the country, where diagnostic substitution could be a factor in the data. Also, in California, there could be diagnostic substitution between ADHD and autism in order to get certain services, and this could explain the relatively large number of diagnoses past age 10 that you cite, but is there really any evidence of diagnostic substitution between autism and mental retardation in the CDDS data?

  4. Yes, I noticed Lisa Croen's work was mentioned, and I was familiar with the discussion between Croen and Blaxill. Additionally, Shattuck et al. noted that diagnostic substitution from mental retardation was not clear in a handful of states, including California.

    That, however, doesn't mean that diagnostic substitution did not take place in California. Autism Street has an interesting post where he looks at an aggregate of all special education categories. The aggregate trend is completely flat. It seems very likely that diagnostic substitution did take place in California, although perhaps it's not a straightforward substitution like in other states.

    In California DDS you can also see more recent "substitution" if you compare mental retardation without autism vs. autism with mental retardation. Of course, the ways categories work in California DDS and in IDEA are very different.

  5. The parents are brainwashed. Most parents want extra services whether they are helpful or ideal. Classical autism is a rare disease. ASD seems to envelop children who are too radically different to be collectively grouped. To generically categorize focused interests or social language deficits as ASD (and this seems to be what's happening) is horrific.
