Monday, November 20, 2006

The Verstraeten et al. Gambit

It is common knowledge by now that proponents of a link between thimerosal-containing vaccines (TCVs) and autism have been left without any epidemiology to speak of. This is because the credibility of the only two researchers who have tried to demonstrate an epidemiological link between TCVs and autism, Mark and David Geier, has taken a substantial hit, due to the work Kathleen Seidel has single-handedly done to expose dubious activities related to the Lupron Protocol.

As a result, it appears that an old talking point of mercury militants has acquired renewed importance. Roughly, they claim that an early draft of Verstraeten et al. (2003), which I'll refer to as Verstraeten et al. (2000), showed a link between thimerosal and autism, and that this link was later covered up by the CDC. It's of interest to note that Verstraeten et al. (2000) is the same draft that was plagiarized by Geier & Geier. (No need to dance around the word plagiarism; it is patently obvious.)

In order to verify the claims about Verstraeten et al. (2000) I decided to, you know, go and read the draft. Let's analyze the various claims.

Is it true that Verstraeten et al. (2000) showed there was a link between autism and thimerosal?

Simply put, no. The abstract does not mention autism. To see why, the reader should look at the relative risks for various outcomes after exceeding EPA mercury exposure guidelines at 1 and 3 months of age in Table 4. The relative risks for autism are 1.01 (CI 0.71, 1.48) and 0.94 (CI 0.62, 1.42). In other words, the relative risks are very close to 1.0, and the confidence intervals comfortably straddle 1.0.
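As a quick refresher on how these numbers are read, a relative risk is statistically distinguishable from 1.0 ("no effect") at the 95% level only when its confidence interval excludes 1.0. A minimal sketch, using the autism figures quoted above:

```python
# A relative risk is statistically significant (at the 95% level)
# only when its confidence interval excludes 1.0 ("no effect").

def is_significant(ci_low, ci_high):
    """True if the 95% CI excludes a relative risk of 1.0."""
    return not (ci_low <= 1.0 <= ci_high)

# Autism relative risks from Table 4 of the 2000 draft:
# RR 1.01 (CI 0.71, 1.48) at 1 month; RR 0.94 (CI 0.62, 1.42) at 3 months.
print(is_significant(0.71, 1.48))  # False: the CI straddles 1.0
print(is_significant(0.62, 1.42))  # False: the CI straddles 1.0
```

By this criterion, neither autism figure in the draft comes anywhere close to significance.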

It's pretty astonishing that many people have claimed and assumed the draft demonstrated such a link. Let's take, for example, Ginger of the Adventures in Autism blog, who claimed the following:

Verstraten's first draft of the study finds a relative risk above 7 for children who receive the highest dose of thimerosal to develop autism.

What is she talking about? Graph 4 shows relative risks at different exposures. At greater than 62.5 micrograms (by 3 months of age), the relative risk is 1.69. But note that 1.0 is well within the confidence interval. Given the sample of 28 children, there is no way to conclude that a relative risk greater than 1.0 is statistically significant in this case. I will also discuss other reasons why these relative risks can't be taken at face value. Either way, it's not clear where Ginger got a relative risk of 7.

Is it true that Verstraeten et al. (2003) covered up the findings of Verstraeten et al. (2000)?

From the abstract of Verstraeten et al. (2003):

RESULTS: In phase I at HMO A, cumulative exposure at 3 months resulted in a significant positive association with tics (relative risk [RR]: 1.89; 95% confidence interval [CI]: 1.05-3.38). At HMO B, increased risks of language delay were found for cumulative exposure at 3 months (RR: 1.13; 95% CI: 1.01-1.27) and 7 months (RR: 1.07; 95% CI: 1.01-1.13). In phase II at HMO C, no significant associations were found. In no analyses were significant increased risks found for autism or attention-deficit disorder.

These results appear to be similar to some of those in the early draft. The difference is that the 2003 study includes additional HMOs. But maybe Verstraeten et al. are pretty bad at covering up data.

There's a whole mythology of conspiracy surrounding Verstraeten, the VSD and, you guessed it, the Geiers. See Kev's post titled Mark Geier, David Geier and the VSD.

What about the risk ratios for outcomes other than autism?

That autism is not involved doesn't mean I will just ignore the rest of the findings. The early draft does find statistically significant risk ratios for several outcomes. But is there good reason to take these ratios at face value?

Those familiar with vaccine safety studies generally understand that VSD is a much more solid database than VAERS. That is, you can't literally plant data in VSD, and VSD is not likely affected by prevailing causation hype or the interests of litigants. This, however, doesn't mean VSD is impervious to confounds beyond random error. VSD still records outcomes based on existing diagnoses, not the results of whole-population screenings. In a cohort of children spanning many years, recent diagnoses are not necessarily equivalent to older diagnoses. Surely, things like coincidental trends and left censoring could have an impact on any findings.

After looking at graphs in the 2000 draft, I came across a number of peculiar results. I will list some examples, so the reader can get a clear idea of whether risk ratios from the draft can be taken at face value.

In Graph 1 we find that if you inject children with more than 62.5 micrograms of mercury by 3 months of age, their risk of developing degenerative neurological disorders is only 40% of what it would be otherwise. And this is a statistically significant finding. In Graph 3 we find something similar about renal disorders. Apparently mercury is good for your kidneys. Graph 15 says about the same about infantile cerebral palsy. In other words, thimerosal prevents brain damage.

Graph 8 shows that the risk of sleeping disorders increases by a factor of 1.75 at an exposure of 50 micrograms. The solution? Inject more thimerosal. Apparently, at an exposure over 62.5 micrograms, the risk drops back down to 1.09.

Graph 13 tells us that the risk of developmental speech disorder is only 0.59 at 25 micrograms of exposure, even though the risk is between 1.25 and 1.47 at higher exposures. I guess you have to be careful to choose the right dose if you want to prevent almost half the cases.

Graph 20 is the most peculiar of all. It tells us that premature infants would have about a fifth of their normal risk of developing neurological disorders if only they are injected with 50 micrograms of thimerosal. And this finding is quite statistically significant, as the sample sizes in this graph are considerably larger than in other graphs.

It is clear that it would've been irresponsible for Verstraeten et al. to just publish these findings without clearly outlining limitations and potential confounds. This is not only because of the anti-vaccination scare that would result, obviously. So maybe they did think about that and that's why they decided to look at more HMOs. Which is a good thing.


  1. Oh, Joseph, you're spoiling all the conspiracist fun. All these reasonable explanations! What are they going to natter about on EOH now?

  2. isles: "What are they going to natter about on EOH now?"

    How stupid each other's hairdo is?

  3. And look at those error bars. Given those error bars, the findings are barely significant at best. And, please note, G&G left those graphs out of their paper. I guess they just weren't so convincing.

    Wouldn't you think, that if mercury exposure was really, really important, that children with a high exposure would have a 100x (or more) greater relative risk of autism?

    And on EoH, they are pretty ignorant. Despite their big talk about going to the source, and really knowing the truth, they are happy to just accept the words of their heroes. There was a post the other day with someone saying nonchalantly that the autism rates in California were decreasing. And no one bothered to correct them.

  4. Well, Jennifer, I had a look at those error bars.

    Holy crap!

    Those error bars could allow one to reach any conclusion from that data!

  5. Yes, those error bars are important. That's what I refer to as the confidence interval in the post. Mind you, there are some error bars that stay above 1.0, but not by much. Then there are error bars that are well below 1.0 -- but those results are really not believable to me.

  6. Ginger Taylor has no idea what she's talking about. She's as ill-informed as the rest of the mercury parents, Johns Hopkins affiliation or no.

  7. I think she confused 1.7 with 7. Honest mistake? Maybe it's one of those "order of magnitude" errors.

  8. "What is she talking about? Graph 4 shows relative risks at different exposures."

    I was not referring to the 2000 version of the Verstraeten study here, but the 1999 version of the data. Apparently on his initial run of the data he found a relative risk close to 7, which is why the parameters of the study were changed so drastically.

    I have a more detailed discussion of the changes the study went through between its inception in 1999 and its publication in 2003 here: the Verstraeten discussion is at the end of the article.

    I should have footnoted both articles, sorry about the confusion. I will go back and try to find the source material.

    Ginger Taylor

  9. Ginger: Is there something in writing by the authors from 1999 where we can check the data, confidence ranges, and so forth? Or is this basically a rumor? Or erroneous early data?

    If the 2000 draft is bogus, then how come Geier & Geier replicated it exactly? (I'm half joking - that one is easy to answer).

  10. As I recall the charts for the Nov 1999 results were found randomly through a FOIA request. I have a lot of source files and as it has been more than a year I can't remember where I put them. I will keep looking today.

    My problem with Verstraeten is not so much that the numbers are wrong for what they were sampling, but that the samples were a joke. They cherry-picked who would be included in the study, and under what conditions, to the point that it no longer meaningfully measured anything and could not be applied to the US population at large, or any subpopulation.

    The post I referred to is long, but well worth the read. The way they manipulated the study to get the results that they did is really horrible. Even by the time the 2000 draft was published they had already dropped any children that had received no thimerosal, any child that had any pregnancy or birth complications, and a bunch of other shenanigans that dropped the association between thimerosal and neurodevelopmental disorders dramatically.

    I really do need to go back and footnote both of those articles so that people can evaluate it all for themselves.

  11. BTW, there is an earlier draft from February 2000 where we see a relative risk of 2.48 for over 62.5 micrograms by age 3 months. That one is here. This one is not statistically significant either, as the error bar is huge. To emphasize, this draft did not find a statistically significant risk ratio for autism either.

    People may rightly wonder, though, how come the risk got reduced by the next draft? To answer that we just need to look more closely at the data. In the Feb. 2000 draft we see n=16, a very small sample, hence the huge error bar. In the later draft, n=28 and the error bar got reduced in size, and not surprisingly, the risk moved in the direction of 1.0.

    That's the reasonable explanation which does not involve black helicopters. Were the researchers worried about how the results might be taken? Sure, that's probable.

    I bet there's somewhat of a correlation simply because children in the earlier cohorts might have received less thimerosal than children in the more recent cohorts, and autism is diagnosed more often in recent times. It probably evens out a little because of left censoring (not being able to diagnose the very young). So if you increase the sample size by adding a more recent cohort or an older cohort, the risk can move in either direction.

    In general, a VSD study might be a lot more accurate if the risk factor didn't happen to be in a moving trend.

    Let's suppose the study looks at a cohort between 1998 and 2004, and let's assume thimerosal use dropped sharply in 2001. Let's also assume the incidence of autism has been rising that whole time. Obviously, the study would find that children who are not injected with thimerosal tend to develop autism more often than children who are. It would find a risk ratio below 1.0, which is just as wrong as a risk ratio above 1.0 due to coincidental trends. I'm not sure if Verstraeten et al. realized this was a possible confound.
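    To make the scenario concrete, here is a toy simulation of that confound. All numbers are illustrative, not taken from VSD: exposure ends in 2001 while diagnosed prevalence keeps rising, and the crude risk ratio comes out well below 1.0 even though exposure does nothing.

```python
# Toy simulation of the coincidental-trend confound: thimerosal use
# stops in 2001 while diagnosed autism prevalence keeps rising, so a
# crude exposed-vs-unexposed comparison finds RR < 1.0. All numbers
# here are made up for illustration, not taken from VSD.

births_per_year = 10_000
# Diagnosed prevalence (cases per 10,000 births) rises steadily.
prevalence = {1998: 30, 1999: 35, 2000: 40, 2001: 45,
              2002: 50, 2003: 55, 2004: 60}

exposed_cases = exposed_n = unexposed_cases = unexposed_n = 0
for year, per_10k in prevalence.items():
    cases = births_per_year * per_10k // 10_000
    if year < 2001:          # thimerosal still in use
        exposed_cases += cases
        exposed_n += births_per_year
    else:                    # thimerosal withdrawn
        unexposed_cases += cases
        unexposed_n += births_per_year

rr = (exposed_cases / exposed_n) / (unexposed_cases / unexposed_n)
print(round(rr, 2))  # 0.67 -- below 1.0, though exposure does nothing
```

    The spurious "protective" effect is entirely an artifact of the two trends coinciding in time.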

  12. There's also a claim by Blaxill of an 11-fold risk. See here. So is it 7 or 11? I'd like to get to the bottom of all these conflicting claims.

  13. John Gilmore, president of the New York Metro chapter of the National Autism Association, claims

    "There's only been one study done in the US of the relationship of thimerosal exposure via vaccines and autism. That was the CDC's notorious Verstraeten study, which came to conlcusion that no conclusion can be drawn. When parents started asking questions about the obvious methodological problems with the study and asked to see the data sets, the CDC said they were lost (i.e. the dog ate it) which is a federal crime. Therefore, the one and only study done in the US is irreproducible and therefore scientifically useless."

    Any idea what he's talking about, Ginger? Anybody?

  14. Heraldblog,

    I refer to it as well, both in the piece I mentioned and the piece that Joseph refers to.

    I think it was during the congressional hearings that Dan Burton called a few years ago. A private contractor testified that he destroyed the original data sets upon the instruction of the CDC, who told him it was to protect patient confidentiality.

    I seem to remember that was documented in Evidence of Harm.

    My computer was stolen two weeks ago and I am trying to reconstruct my files from backups, but I am in the middle of moving so some of my stuff is on a truck heading to the east coast. It might take me a while to find all the documentation that I used on this stuff.

    You might get faster results by just emailing Blaxill or SafeMinds directly.

  15. Joseph,

    Again, I am not taking issue with the fact that n=28 is a better measure than n=16, but rather I want to know who the extra 12 n's were. Were they 1-year-olds that would not be diagnosed with anything?

    If you want the relative risk to move in a certain direction, all you have to do is find a way to add in children who were not diagnosed with anything, or add in stop dates so that children diagnosed with mild disorders are never shown to go on to be diagnosed with serious disorders.

    And it is not paranoid thinking to say that they did this. They tell us that they did it in the various versions of the study.

    And your last scenario has problems. First, you cannot assume that the number of autism cases is rising at the same rate after the beginning of the withdrawal of thimerosal. The broad strokes (from the crappy California measure that we have) say that it is going up much more slowly. Add to that the fact that two years or so ago the CDC started pushing the flu vaccine on pregnant women and young children, and it throws hitches in the purity of the post-2001 autism rates to thimerosal numbers.

    More importantly, you have to remember the theory is not that Thimerosal=Autism. It is that mercury is one of the triggers of autism in a genetically vulnerable subset of the population, and thimerosal is the most frequently occurring toxic insult that causes SOME to begin to display autistic symptoms. So even if we eliminate thimerosal in all its forms, we still have increased mercury in the American diet, in the air, lead in the air and in the dirt, aluminum in vaccines, and toxic insults galore that did not exist 80 years ago.

    Thimerosal is not THE culprit in all cases, it is A culprit in most American cases.

    Again, I encourage you to go back and read the piece I mentioned. It takes a look at the Verstraeten and original Denmark epidemiological studies in context and examines how much power these studies have (even if they were done well, which they were not).

    Add to this the study I reference in the article that has found two different types of autism, which makes me think that there may be two completely different medical disorders both being called 'autism' because they display similar behavior patterns.

    Ok... at this point I am just beginning to reproduce the article. Just read that one, it would be more helpful than me just half-assing it in your comments section.

  16. Ginger,

    "Again, I am not taking issue with the fact that n=28 is a better measure than n=16, but rather I want to know who the extra 12 n's were. Were they 1 year olds that would not be diagnosed with anything?"

    The extra 12 would be autistic kids, wouldn't they? That must be the size of the group with the outcome in question. The total number of children would be much larger.

    "If you want the relative risk to move in a certian direction all you have to do is find a way to add in children who would were not diagnosed with anything"

    It doesn't really work that way. Let's say you find 5 unvaccinated children and 10 vaccinated children with autism in a sample of 1,000 children, split evenly between the two groups. The risk ratio is 2.0, right? Now let's say you add 1,000,000 non-autistic children to the sample, again split evenly. Does it change the risk ratio? No. It's still 2.0.
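    The arithmetic can be sketched directly (assuming, for concreteness, an even 500/500 split between vaccinated and unvaccinated):

```python
# Adding children without the outcome does not move the risk ratio,
# provided they are added evenly to both exposure groups.
# An even 500/500 split is assumed for concreteness.

def risk_ratio(cases_exp, n_exp, cases_unexp, n_unexp):
    """Crude risk ratio: (exposed risk) / (unexposed risk)."""
    return (cases_exp / n_exp) / (cases_unexp / n_unexp)

# 10 vaccinated and 5 unvaccinated autistic children among 1,000 total.
print(risk_ratio(10, 500, 5, 500))          # 2.0
# Add 1,000,000 non-autistic children, half to each group.
print(risk_ratio(10, 500_500, 5, 500_500))  # still 2.0
```

    The extra non-cases inflate both denominators by the same factor, so the ratio is unchanged.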

    But there's a way to inadvertently change the risk ratio by adding cohorts if there are coincidental trends.

    "And your last senario has problems. First you cannot assume that the number of autism cases is rising at the same rate after the beginning of the withdrawl of thimerosal. The broad strokes say (fromt he crappy California measure that we have) is that it is going up much more slowly. Add to that the fact that two years or so ago the CDC started pushing the flu vaccine on pregnant women and young children and it throws hitches in the purity of the post 2001 autism rates to thimerosal numbers."

    All I'm saying is that if the thimerosal dose increased between 1990 and 1994, and the prevalence by birth year cohort also increased during those years, I don't see any reason why that correlation would not show up in VSD for those years.

    I believe that if Verstraeten had only looked at 1990-1994, he would've found a huge risk ratio for autism. The cohorts he chose were just not the right ones. Thimerosal uptake was basically flat up to 1990 and declining a bit after 1994. Then there's also the fact that prevalence by birth year cohort always looks like a hook on the right side of the graph. That must have messed up the correlation.

    As to CDDS, the fact that the rate of increase is not as large as in the past is neither relevant nor surprising. The awareness curve has to level off sometime. I think the biggest push in awareness was in the late 90s, for reasons I've discussed. It's obvious that the rate of increase expressed as a percentage has to naturally decrease until it reaches about 1%. It's at 10% right now.

    The flu vaccine contains 25 micrograms of thimerosal and its uptake is very small. We're talking less thimerosal than we received as kids probably. If the increase from 70 to 180 micrograms supposedly caused the autism epidemic, what is sustaining said epidemic now? The flu vaccine? That's not reasonable.

  17. Hi Joseph,

    Sorry to take so long to get back to you on the source I used. You can find a PDF presentation on the SafeMinds site

    and in regard to your last note, again I encourage you to read the whole post I referred you to.

    Because they cherry-picked who would be included in the study so heavily, it matters little what numbers he came up with after the first generation (the one I linked to above).

    His calculations are no longer meaningful. They don't reflect the population at large, any subset of the population, the number of children who contracted autism, didn't contract autism, or any measure of anything in between.

    It is analogous to doing a study to see how many children given substance X during gestation go on to be born with blond hair, and then inexplicably dropping from the study any children that were born with blue eyes.

    It no longer measures the correlation between substance X and blond hair.

    Going on to talk about the numbers they found in such a study would be a waste of time, except to point out how screwed up the study was and to figure out why the hell anyone would spend 4 years and 25 million dollars on such a crap study, why anyone else would publish it when it is not replicable, and why any doctor would read it and think that it applied to any of the babies in his office.


    Link too long... here is a tiny one.

  19. Thanks Ginger. I've seen that. In fact, I tried to contact SafeMinds to see if they could give me a copy of the documentation they got through FOIA about this pre-draft work by Verstraeten et al., but they haven't replied yet. Maybe the email address there is outdated. This PDF is clearly not the original. And they do have other things from FOIA which look like copies of originals, but nothing to support this particular claim. I tried to contact Drs. Verstraeten and DeStefano to see if they could confirm these claims, but I don't believe Verstraeten works with the CDC anymore and Dr. DeStefano's email is bouncing. I might pursue that further.

    I don't doubt these risk ratios (7 and 11, noting that 7 came earlier) could be found in VSD given the obvious confound I believe is there. But I thought it was odd there are no copies of the originals for these claims, which leads me to believe there might be something interesting about those originals.

  20. It is my understanding that Verstraeten was transferred back to Europe by Glaxo years ago, and that he won't comment on the study. I think David Kirby wrote that attempts to contact him and interview him about the study were turned down. That was a couple of years ago though. I hope that you can get more information on the matter. Let me know if you do.

    As far as getting confirmation on the originals from SafeMinds, I don’t think there is any reason that you should not be able to get a good answer from them. I have called them with questions before and I was surprised at how accessible they were.

    All of this information, from all groups, public and private, should be open for us all to pick apart. Sadly that is not always the case.

  21. Maybe SafeMinds doesn't want to answer my emails. But maybe they'll reply to you. I'm curious as to what that pre-draft documentation consists of.

    Most of the claims in the document you linked to are unsourced and they appear to consist of presumptions of motivations and methodology.