The study, first of all, is methodologically impressive in my view. For example, they didn't just look at existing diagnoses; they actually went and evaluated the children. This in itself takes care of some confounds that Verstraeten et al. likely suffered from. The new study is not perfect by any means, but anyone would be hard-pressed to do any better. The confounds that remain are non-obvious, and their significance is unclear. This is not at all like Generation Rescue's survey, for example, where it's trivial to identify some major and obvious confounds.
It would seem that the conclusions of the CDC study are counter-intuitive to some people. After all, the study did find some statistically significant effects, and statistical significance is statistical significance. David Kirby, for instance, seems to be having a lot of trouble figuring this out. The key point to understand is that this study is not your ordinary ecological survey where they give you one or two risk ratios (RRs) and their confidence intervals. If you look at a single RR in isolation, you can be sure (barring any flaws and confounds) that there's only a 5% chance that the actual RR is outside the confidence interval. But suppose you are presented with 100 studies, each with one RR. Obviously, you should expect that, by mere chance, around 5 of those studies will be wrong; i.e. they will have actual RRs that are outside their corresponding confidence intervals.
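The expectation that roughly 5 of 100 correct studies will "miss" by chance can be checked with a quick simulation. This is just an illustrative sketch of my own (the function name and parameters are made up for the example): each hypothetical study estimates a true log-RR of 0 with standard-normal error and reports a 95% confidence interval of estimate ± 1.96 standard errors.

```python
import random
import statistics

random.seed(42)

def simulate_miss_count(n_studies=100, n_sims=2000):
    """Simulate n_studies correct studies (true log-RR = 0), each
    reporting a 95% CI of estimate +/- 1.96 SE. Return the average
    number of CIs per batch that fail to cover the true value."""
    misses = []
    for _ in range(n_sims):
        miss = 0
        for _ in range(n_studies):
            est = random.gauss(0, 1)          # estimate in SE units
            lo, hi = est - 1.96, est + 1.96   # 95% confidence interval
            if not (lo <= 0 <= hi):
                miss += 1
        misses.append(miss)
    return statistics.mean(misses)

print(simulate_miss_count())  # close to 5, i.e. 5% of 100
```

In other words, even when every single study is flawless and the null hypothesis is true, about 5 in 100 confidence intervals will exclude the true value purely by chance.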
The CDC study looked at 42 different outcomes, and determined multiple confidence intervals in each case, since different levels of exposure were tested. In total, I understand there were over 300 confidence intervals. Consequently, assuming the null hypothesis is correct, you should expect that an RR of 1.0 will fall outside the 95% confidence interval in over 15 measures. What the study found was that in 12 measures there was an apparent protective factor, and in 8 measures there was an apparent risk factor. This is completely consistent with the null hypothesis. Therefore, the conclusion of the study, namely that its results do not support a causal association between thimerosal-containing vaccines and neurological outcomes, is absolutely the correct conclusion.
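The arithmetic here can be made concrete with an exact binomial calculation. The sketch below is my own back-of-the-envelope check, not anything from the paper: it treats the ~300 intervals as independent coin flips with a 5% false-positive rate each (they are actually correlated, since the outcomes overlap, so take this as a rough illustration only).

```python
from math import comb

def binom_tail(n, p, k):
    """P(X >= k) for X ~ Binomial(n, p), computed exactly."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n, p = 300, 0.05        # ~300 confidence intervals, 5% false-positive rate each
expected = n * p        # 15 intervals expected to exclude RR = 1.0 by chance
observed = 12 + 8       # 12 apparent protective + 8 apparent risk findings

print(f"expected by chance: {expected:.0f}")
print(f"P(X >= {observed}) = {binom_tail(n, p, observed):.3f}")
```

The tail probability of seeing 20 or more "significant" intervals comes out far above any conventional significance threshold, so 20 hits out of roughly 300 comparisons is exactly the kind of result the null hypothesis predicts.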
Let's now discuss Sallie Bernard. The CDC apparently went out of its way to make this study as transparent as possible and to, frankly, appease the mercury militia (which bills itself as the "autism community" even though there's absolutely no evidence they are actually representative of the autism community). Ms. Bernard was given a chance to participate in all stages of the study as a consultant, but when the results came in and they were not what she expected, Ms. Bernard decided to withdraw her support. That's not all. She (or SafeMinds) fired off an email/press release titled as follows.
VACCINE STUDY IN NEW ENGLAND JOURNAL OF MEDICINE WRONG IN CONCLUDING MERCURY EXPOSURES ARE HARMLESS, STATES SAFEMINDS
(EOHarm message #65356)
Is that what the paper said? That mercury exposures are harmless? That would be wrong and misleading. Let me check.
Our study does not support a causal association between early exposure to mercury from thimerosal-containing vaccines and immune globulins and deficits in neuropsychological functioning at the age of 7 to 10 years.
Wow. That's quite different from "mercury exposures are harmless." Who would say "mercury exposures are harmless" anyway? If you were to ingest, say, 1 gram of mercury, you would not become autistic, but you could easily end up in the hospital or dead. Certain doses of mercury are not harmless by any means.
Clearly, the CDC study has been misrepresented by SafeMinds. Is that SafeMinds statement an intellectually honest one? Ms. Bernard?