As a result, it appears that an old talking point of mercury militants has acquired renewed importance. Roughly, they claim that an early draft of Verstraeten et al. (2003), which I'll refer to as Verstraeten et al. (2000), showed a link between thimerosal and autism, and that this link was later covered up by the CDC. It's worth noting that Verstraeten et al. (2000) is the same draft that was plagiarized by Geier & Geier. (No need to dance around the word plagiarism; it's patently obvious.)
In order to verify the claims about Verstraeten et al. (2000) I decided to, you know, go and read the draft. Let's analyze the various claims.
Is it true that Verstraeten et al. (2000) showed there was a link between autism and thimerosal?

Simply put, no. The abstract does not mention autism. To see why, the reader should look at the relative risks for various outcomes after exceeding EPA mercury exposure guidelines at 1 and 3 months of age in Table 4. The relative risks for autism are 1.01 (CI 0.71, 1.48) and 0.94 (CI 0.62, 1.42). In other words, the relative risks are very close to 1.0, and the confidence intervals are wide and comfortably straddle 1.0.
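The draft reports only the ratios and intervals, not the underlying counts, but the arithmetic behind a statement like "RR 1.01 (CI 0.71, 1.48)" is simple enough to sketch. Here's a minimal Python example with made-up counts, chosen only so the numbers land near the draft's autism figures; the point is that a confidence interval straddling 1.0 means the data are consistent with no association at all.

```python
import math

def relative_risk_ci(a, n1, b, n0, z=1.96):
    """Relative risk with a 95% Wald confidence interval on the log scale.

    a, n1: cases and total children in the exposed group
    b, n0: cases and total children in the unexposed group
    """
    rr = (a / n1) / (b / n0)
    se = math.sqrt(1/a - 1/n1 + 1/b - 1/n0)  # standard error of log(RR)
    return rr, math.exp(math.log(rr) - z * se), math.exp(math.log(rr) + z * se)

# Hypothetical counts, picked only to land near the draft's autism numbers:
rr, lo, hi = relative_risk_ci(a=50, n1=10_000, b=50, n0=10_100)
print(f"RR = {rr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")  # ~1.01 (0.68, 1.49)
```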
It's pretty astonishing that many people have claimed and assumed the draft demonstrated such a link. Let's take, for example, Ginger of the Adventures in Autism blog, who claimed the following:
Verstraten's first draft of the study finds a relative risk above 7 for children who receive the highest dose of thimerosal to develop autism.
What is she talking about? Graph 4 shows the relative risks at different exposure levels. At greater than 62.5 micrograms (by 3 months of age) the relative risk is 1.69, but note that 1.0 is well within the confidence interval. Given a sample of only 28 children, there is no way to conclude that this elevation is statistically significant. I will also discuss other reasons why these relative risks can't be taken at face value. Either way, it's not clear where Ginger got a relative risk of 7.
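To see how little a point estimate of 1.69 means at that sample size, consider a toy calculation. The counts below are entirely made up (the draft's actual cell counts aren't reproduced here); they're chosen only so that the point estimate lands near 1.69 with a few cases among 28 exposed children.

```python
import math

# Entirely hypothetical counts: 3 cases among 28 highly exposed children,
# versus a comparison group whose rate puts the point estimate near 1.69.
a, n1 = 3, 28
b, n0 = 634, 10_000

rr = (a / n1) / (b / n0)
se = math.sqrt(1/a - 1/n1 + 1/b - 1/n0)  # SE of log(RR), dominated by 1/a
lo = math.exp(math.log(rr) - 1.96 * se)
hi = math.exp(math.log(rr) + 1.96 * se)
print(f"RR = {rr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")  # ~1.69 (0.58, 4.94)
```

A ratio whose plausible range runs from roughly half the baseline risk to almost five times it tells you essentially nothing.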
Is it true that Verstraeten et al. (2003) covered up the findings of Verstraeten et al. (2000)?

From the abstract of Verstraeten et al. (2003):
RESULTS: In phase I at HMO A, cumulative exposure at 3 months resulted in a significant positive association with tics (relative risk [RR]: 1.89; 95% confidence interval [CI]: 1.05-3.38). At HMO B, increased risks of language delay were found for cumulative exposure at 3 months (RR: 1.13; 95% CI: 1.01-1.27) and 7 months (RR: 1.07; 95% CI: 1.01-1.13). In phase II at HMO C, no significant associations were found. In no analyses were significant increased risks found for autism or attention-deficit disorder.
These results appear to be similar to some of those in the early draft. The difference is that the 2003 study includes additional HMOs. But maybe Verstraeten et al. are pretty bad at covering up data.
There's a whole mythology of conspiracy surrounding Verstraeten, the VSD and, you guessed it, the Geiers. See Kev's post titled Mark Geier, David Geier and the VSD.
What about the risk ratios for outcomes other than autism?

That autism is not involved doesn't mean I will just ignore the rest of the findings. The early draft does find statistically significant risk ratios for several outcomes. But is there good reason to take these ratios at face value?
Those familiar with vaccine safety studies generally understand that the VSD is a much more solid database than VAERS. That is, you can't literally plant data in the VSD, and it is not likely to be affected by prevailing causation hype or the interests of litigants. This doesn't mean, however, that the VSD is impervious to confounds beyond random error. The VSD still records outcomes based on existing diagnoses, not on the results of whole-population screenings. In a cohort of children spanning many years, recent diagnoses are not necessarily equivalent to older diagnoses. Certainly, things like coincidental trends and left censoring could have an impact on any findings.
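To make the "coincidental trends" worry concrete, here's a toy simulation (all numbers invented) in which diagnosis rates rise across birth years, say because of better ascertainment, while average exposure also rises. Exposure has no effect within any birth year, yet the crude pooled risk ratio comes out elevated.

```python
import random

random.seed(0)

# Toy cohort: diagnosis rates and high-exposure rates both rise by birth year.
# Within any given year, exposure and diagnosis are completely independent.
cohort = []
for birth_year in range(1992, 2000):
    p_diagnosis = 0.005 + 0.002 * (birth_year - 1992)   # rising ascertainment
    p_high_exposure = 0.2 + 0.08 * (birth_year - 1992)  # rising exposure
    for _ in range(5_000):
        exposed = random.random() < p_high_exposure
        diagnosed = random.random() < p_diagnosis       # independent of exposure
        cohort.append((exposed, diagnosed))

def rate(group):
    return sum(diagnosed for _, diagnosed in group) / len(group)

exposed = [c for c in cohort if c[0]]
unexposed = [c for c in cohort if not c[0]]
print(f"Crude RR = {rate(exposed) / rate(unexposed):.2f}")  # ~1.3 despite no effect
```

Stratifying by birth year would make the spurious association disappear, which is exactly the kind of adjustment a careful analysis has to consider.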
After looking at the graphs in the 2000 draft, I came across a number of peculiar results. I will list some examples, so the reader can get a clear idea of whether the draft's risk ratios can be taken at face value; a small simulation after the examples shows why a few "significant" ratios are to be expected by chance alone.
In Graph 1 we find that if you inject children with more than 62.5 micrograms of mercury by 3 months of age, their risk of developing degenerative neurological disorders is only 40% of what it would be otherwise. And this is a statistically significant finding. In Graph 3 we find something similar about renal disorders. Apparently mercury is good for your kidneys. Graph 15 says much the same about infantile cerebral palsy. In other words, thimerosal prevents brain damage.
Graph 8 shows that the risk of sleeping disorders increases by a factor of 1.75 with an exposure of 50 micrograms. The solution? Inject more thimerosal. Apparently, with an exposure over 62.5 micrograms, the risk drops back down to 1.09.
Graph 13 tells us that the relative risk of developmental speech disorder is only 0.59 at 25 micrograms of exposure, even though it is between 1.25 and 1.47 at higher exposures. I guess you have to be careful to choose the right dose if you want to prevent almost half the cases.
Graph 20 is the most peculiar of all. It tells us that premature infants would have about a fifth of their normal risk of developing neurological disorders if only they are injected with 50 micrograms of thimerosal. And this finding is quite statistically significant, as the sample sizes in this graph are considerably larger than in other graphs.
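This scattershot pattern, a protective ratio here, a harmful one there, with no coherent dose-response, is exactly what multiple comparisons produce on null data. Here's a quick simulation (toy numbers throughout, not the draft's) that tests 80 outcome/exposure combinations where the true relative risk is 1.0:

```python
import math
import random

random.seed(1)

n_tests = 80        # e.g. ~20 outcomes x 4 exposure categories
n_per_group = 2_000 # children per comparison group
base_risk = 0.02    # true risk, identical in both groups (true RR = 1.0)

flagged = []
for _ in range(n_tests):
    a = sum(random.random() < base_risk for _ in range(n_per_group))
    b = sum(random.random() < base_risk for _ in range(n_per_group))
    if a == 0 or b == 0:
        continue
    rr = a / b  # equal group sizes, so the RR is just the ratio of case counts
    se = math.sqrt(1/a + 1/b - 2/n_per_group)  # SE of log(RR)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    if lo > 1.0 or hi < 1.0:  # "statistically significant" at the 5% level
        flagged.append(round(rr, 2))

print(f"{len(flagged)} of {n_tests} null comparisons flagged: {flagged}")
```

On average about 5% of such comparisons cross the significance threshold, split between spuriously "protective" and spuriously "harmful" ratios, much like the grab bag of graphs above.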
It is clear that it would've been irresponsible for Verstraeten et al. to just publish these findings without clearly outlining their limitations and potential confounds, and not only because of the obvious anti-vaccination scare that would have resulted. So maybe they did think about that, and that's why they decided to look at additional HMOs. Which is a good thing.