Friday, July 09, 2010

Comment Spam at Blogger Getting Out of Hand

I've changed my comment policy slightly. I'm enabling comment moderation in posts that are older than 365 days.

This morning I got spammed in a way I've never been spammed before. A spam-bot with the handle xiaoyu posted the same comment spam in what appears to be each and every one of my blog posts.

It's a shame that Blogger doesn't have a good way to deal with comment spam. Their captcha obviously isn't working very well. Unfortunately, I don't believe spam filtering is a breeze on other platforms either.

Wednesday, May 19, 2010

80% Divorce Stat is Complete Garbage

Of course, we already knew that. There's a prior survey by Easter Seals on the question of divorce among the parents of autistic people.

Readers might remember I also tackled another 80% stat having to do with the divorce rate of autistic adults.

Both 80% stats are clearly made up. They are impossible to track back to original sources. The question is: Who made them up, and for what purpose?

Either way, do check out the story:

80 Percent Autism Divorce Rate Debunked in First-of-Its-Kind Scientific Study.

Monday, April 19, 2010

The Anti-Vax Movement Still Peaked in 2002-2003 and MJ's Excuses are Trivial to Address

MJ has written yet another rebuttal of my post on the media's interest in the anti-vax movement.

MJ's primary argument is basically that the absolute article count for "autism vaccines" has grown. The count for "autism" has simply grown more. I say that the relative count is what matters, but let's look into this in more detail.

As I've noted in comments, MJ fails to take into account that the total number of articles indexed by Google News Archive has also grown from year to year. Presumably, this doesn't mean people are reading more newspapers, but simply that Google is adding sources to its index all the time.

Estimating the total number of articles indexed by Google News Archive in a given year is presumably an algorithmically tricky problem, even for Google. I have some ideas, which I mentioned in comments, but I'm not confident they would be unbiased.

Instead, let's check if searches for "autism genetic" relative to "autism" have the issue MJ thinks relative counts have.



They do not. There's no peak in 2002 here. There's no downward trend after 2002. There's no dilution of the word "genetic" in autism articles as more topics are covered. What we see instead is a remarkably stable trend.

We previously also looked at relative counts for "neurodiversity", which MJ had verified as well. This analysis also fails to support MJ's hypothesis. The same is true of several other trends I've checked, which I won't go into here.

It's also illustrative to look at the raw article counts for "vaccine injury" (in quotes). These counts are presumably also biased by increasing coverage of autism topics, but maybe less so.



What we see here is consistent with a peak in 2002-2005, and a brief recovery in 2008 due to substantial propaganda efforts in relation to Jenny McCarthy and Hannah Poling. This effect cannot be expected to last very long, though.

MJ has failed to explain why the pattern of media coverage for "autism vaccines" generally matches VAERS autism submission trends and the number of autism cases filed with the vaccine court. It's lazy and convenient to simply say "the data was not meant to track this." What is the explanation?

Additionally, Smith et al. (2007) reports that:
MMR vaccine remains the number one ‘top of mind’ vaccination issue for parents. The proportion of parents believing MMR to be a greater risk than the diseases it protects against has fallen from 24% in 2002 to 14% in 2006. The proportion of ‘hard-core rejectors’ of MMR vaccine remains stable at 6%. There has been a gradual and sustained increase in the proportion of parents across all social groups saying MMR was completely safe/slight risk rising from 60% in 2002 to a current level of 74%. There now appears to be a sustained move away from fears over MMR safety and belief in the unfounded link to autism towards a more positive perception of the vaccine.

(My emphasis.)

Finally, an immunization report by the British NHS shows that MMR coverage had a low in the 2003-2004 period.

Conclusion

I've perseverated on this a lot more than I probably should have, and I've gone out of my way to address ridiculous criticisms put forth by MJ.

Saturday, April 17, 2010

MJ Reproduces A Result of Mine


The anti-vax movement peaked in 2002, maybe in 2003. There are several different lines of evidence that point in this direction. I recently presented just two of them: Google News Archive articles matching "autism vaccines" relative to "autism" articles, and VAERS report submissions. Additionally, Sullivan over at LB/RB has put forth a graph of cases before the US vaccine court.

It's not surprising that such easy-to-confirm observations would hit a nerve with some people. Commenter MJ took issue with my methodology, at first claiming that as more autism articles are written, the word "vaccines" would tend to become rare in them, and later claiming that Google News Archive does not have the right bias for this type of analysis. None of this made any sense to me, and you can read the exchange in comments.

Then MJ wrote a post in response to my analysis where, evidently, MJ has come up with a reproduction (not repetition) of my prior result. You can see MJ's graph (which I copied with "fair use" in mind) on the right.

MJ takes comfort in the fact that newspaper articles matching the word "neurodiversity" are quite uncommon, relatively speaking. But – and I suspect MJ realizes this – that's an apples-to-oranges comparison. Articles matching "autism vaccines" are about a public health issue, one that is bound to interest all kinds of readers. They are articles about court cases, studies, etc. Articles on neurodiversity are about an ideology, which reporters might not cover simply because they don't see a payoff in covering it.

If you're going to compare them, it obviously only makes sense to compare trends, not absolute article counts. MJ's scaling obviously doesn't allow us to see a trend, so I've produced the following graph.
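To make the idea of comparing trends concrete, here's a minimal sketch of one way to do it: rescale each yearly series so its first year equals 100, which puts series of very different absolute sizes on a common footing. The numbers below are purely illustrative, not the actual Google News Archive counts.

```python
def indexed_to_base_year(series):
    """Rescale a yearly count series so its first year equals 100.

    Two series with wildly different absolute magnitudes (e.g. "autism
    vaccines" vs. "neurodiversity" article counts) can then be compared
    by trend shape rather than by raw size.
    """
    years = sorted(series)
    base = series[years[0]]
    return {year: 100.0 * series[year] / base for year in years}

# Illustrative numbers only -- not real article counts.
neurodiversity = {2004: 12, 2005: 18, 2006: 30}
print(indexed_to_base_year(neurodiversity))
```

Indexing to a base year is just one reasonable normalization; dividing by the yearly "autism" count, as in the graphs, is another.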



It's an entirely different pattern. (Note that since article counts are relatively small, there's bound to be more noise in these series.)

One More Thing

Regarding VAERS, MJ says:
As for VAERS, it wasn't meant to do this sort of tracking nor is it an accurate measure of all children who had a reaction to a vaccine - especially for controversial relationships like autism.

I didn't look at a measure of "reactions" – and it's not clear that reports listing "autism" as a symptom are even valid. I looked at submission counts. These are clearly a valid proxy for the number of new parents recruited into either an anti-vax mindset or vaccine litigation.

Friday, April 16, 2010

IDEA's Bass Diffusion Model

In the previous post I argued that a Bass Diffusion Model fits the administrative prevalence of autism in California remarkably well, and made specific predictions based on this observation. You can think of a Bass Diffusion Model as a word-of-mouth or an adoption-of-innovation type of model.

For the sake of completeness, I will now present a couple of Bass models I derived for the administrative prevalence of autism at the US level, based on data from the Department of Education, otherwise known as IDEA data. The following is a graph of the 6-17 IDEA prevalence along with Bass model hindcasting and forecasting all the way to 2030.



Model # 2 (the red line) is the one I prefer in this case. (I'll explain why shortly.) It predicts that prevalence will eventually level off at almost 1.1%. This is completely plausible, not only because that's roughly the new consensus prevalence of ASD, but also because Minnesota is already there.

I also find it to be a fascinating prediction of the model. If you recall, a Bass model predicts a maximum prevalence of about 0.65% (at most 0.7%) for children 6 to 9 in California DDS. This absolutely makes sense. California DDS is not like IDEA. DDS does not find every autistic person to be eligible for services, and not all developmentally disabled Californians pursue eligibility with DDS. So, in my view, a Bass model makes predictions that are remarkably consistent with our current reality.

If the models are correct, by 2013 IDEA prevalence should just have surpassed 80 in 10,000. Additionally, a leveling-off trend should not be completely evident yet. It may be slightly noticeable. Meanwhile, in the California report of Q4 2013 (and let's hope they produce data equivalent to that of reports currently available) a leveling-off trend should already be evident in the 6-9 cohort.

Technical Details

For formulas and variable names, see the California post. Parameters of both models are, again, estimated by means of genetic programming. For model # 1 I simply tried to fit the 1993-2007 prevalence series without any modifications. The resulting parameters were:

p = 4.808·10^-8
q = 0.22
t0 = 1938.809 (year)
m = 118.32 (per 10,000 population)


Model # 2 is based on the observation that IDEA practically did not have an autism category prior to 1993. However, once the category was introduced, many children would've been put in the category all at once. It's like introducing into the market a product that already has a number of owners. So I performed the calculation by subtracting 3.864 – the 1993 prevalence – from the prevalence in all report years. Hence, t0 should be equal to 1993. The parameters actually derived by the code I wrote were:

p = 0.0072
q = 0.222
t0 = 1993.03 (year)
m = 105.992 (per 10,000 population)


Note: In this case, 3.864 must be added to model results to obtain the projected prevalence.
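For readers who want to reproduce the projection, here's a minimal sketch of Model # 2, assuming the standard cumulative form of the Bass curve (the parameters and the 3.864 baseline are the ones given above; everything else is my own scaffolding, not the post's actual code):

```python
import math

BASELINE = 3.864  # the 1993 IDEA autism prevalence per 10,000

def bass(t, p, q, t0, m):
    """Standard cumulative Bass diffusion curve: 0 at t0, leveling off at m."""
    e = math.exp(-(p + q) * (t - t0))
    return m * (1.0 - e) / (1.0 + (q / p) * e)

def model2_prevalence(t):
    """Model #2 projection: Bass curve fit to the baseline-shifted
    series, with the 1993 baseline added back."""
    return bass(t, p=0.0072, q=0.222, t0=1993.03, m=105.992) + BASELINE

# The curve levels off near 105.992 + 3.864 = 109.856 per 10,000,
# i.e. almost 1.1%.
print(model2_prevalence(2013))
```

Evaluating this at 2013 gives a value a bit over 80 per 10,000, consistent with the prediction stated above.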

The rationale of the derivation of Model # 2 makes sense to me, and that's why I prefer it. However, there's not a huge difference between the models.

Addendum (4/16/2010)

I forgot to mention that the correlation coefficient R for both models was approximately the same: 0.99993. This is exceedingly good, and better than the fit for CalDDS.

Thursday, April 15, 2010

The Administrative Prevalence of Autism is a Bass Distribution

There's a new paper on the rise of autism diagnoses in California: Liu et al. (2010). Its findings are probably not surprising to my readers. It finds that children living in close proximity to a child already diagnosed with autism are more likely to be diagnosed with autism themselves. This reminded me of an observation I once made about administrative prevalence growth curves: they look like "word of mouth" growth curves, and they are devoid of abrupt "step" changes.

[Note: Also see Dr. Novella's take on Liu et al. (2010).]

It occurred to me to try to model this "word of mouth" type of process. The idea is that a model could be helpful in making predictions and understanding the reasons for the observed rise in prevalence ascertained from passive databases. I even wrote a simulation, and had some preliminary results. As much as I like to come up with my own models to explain things, however, I'd much rather use a proven model. So I kept trying to find an existing solution to this sort of problem.

Eventually I found something that looked very promising: The Bass Diffusion Model. This is a highly successful model that has been applied to the acquisition of durable goods, adoption of innovations, and more recently, the growth of social networks. Evidently, the model is unheard of in the autism world, and practically undiscovered in epidemiology in general. Interestingly, though, Liu et al. (2010) repeatedly uses the term "diffusion of information" to explain its findings.

Mathematically, the Bass Diffusion Model can be expressed using the following formula.

N(t) = m · (1 − e^(−(p+q)(t−t0))) / (1 + (q/p) · e^(−(p+q)(t−t0)))

Model variables and parameters – adapted for our purposes – are defined as follows:
  • N(t) is the administrative prevalence of autism at time t.
  • t is the time, typically represented by a year.
  • t0 is the initial time, when prevalence is zero.
  • The coefficient p is called "the coefficient of innovation, external influence or advertising effect" (Wikipedia).
  • The coefficient q is called "the coefficient of imitation, internal influence or word-of-mouth effect" (Wikipedia).
  • m is the maximum administrative prevalence of autism – i.e. the prevalence value reached when the prevalence curve finally levels off.

In order to apply the model to real-world data, we need to derive its parameters. This is fairly difficult because the model is non-linear. So I used genetic programming to estimate the parameters that produce the best fit between the model and observations. I did this with the 6 to 9 California DDS prevalence, and I "trained" the model with two different time ranges; I will explain the rationale later.
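To give a flavor of what parameter estimation looks like here, below is a toy evolutionary search that minimizes sum-of-squared error between the Bass curve and a (year, prevalence) series. This is a simplified stand-in I wrote for illustration, not the genetic programming code actually used; the search ranges and mutation scheme are my own assumptions.

```python
import math
import random

def bass(t, p, q, t0, m):
    """Cumulative Bass curve: 0 at t0, leveling off at m."""
    e = math.exp(-(p + q) * (t - t0))
    return m * (1.0 - e) / (1.0 + (q / p) * e)

def sse(params, data):
    """Sum of squared errors between the model and (year, value) pairs."""
    p, q, t0, m = params
    return sum((bass(t, p, q, t0, m) - y) ** 2 for t, y in data)

def fit_bass(data, seed=0, generations=300, pop_size=40):
    """Toy evolutionary search for (p, q, t0, m)."""
    rng = random.Random(seed)

    def random_candidate():
        return (rng.uniform(1e-8, 0.05),     # p: external influence
                rng.uniform(0.05, 0.5),      # q: word-of-mouth
                rng.uniform(1930.0, 1995.0), # t0: start year
                rng.uniform(30.0, 150.0))    # m: ceiling per 10,000

    pop = [random_candidate() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: sse(c, data))
        survivors = pop[: pop_size // 4]  # keep the best quarter
        children = [
            tuple(g * rng.uniform(0.9, 1.1) for g in rng.choice(survivors))
            for _ in range(pop_size - len(survivors))
        ]                                 # mutate survivors multiplicatively
        pop = survivors + children
    return min(pop, key=lambda c: sse(c, data))
```

A non-linear least-squares routine would also work; the evolutionary approach simply avoids having to supply good starting values by hand.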

For 6-9 prevalence data between 1993 and 2007, the correlation coefficient was 0.9994, and the parameters were:

p = 5.959·10^-8
q = 0.253
t0 = 1943.246 (year)
m = 65.395 (per 10,000 population)


When trained with prevalence between 1986 and 2007, the correlation coefficient R for the model fit was 0.9991. The resulting parameters were:

p = 1.415·10^-8
q = 0.237
t0 = 1934.1 (year)
m = 70.45 (per 10,000 population)


Anyone familiar with modeling and/or statistics will tell you that a correlation coefficient of 0.9994 is not only good, it's actually hard to believe. It might even be beyond law-of-physics good.

The following is a graph of the observed 6-9 prevalence in California DDS, along with the 2 derived Bass models, with forecasting all the way to 2020.



If the first model turns out to be correct, as early as Q4 2013 the 6-9 prevalence should be very close to 60 in 10,000, and a leveling-off pattern should already be evident. The first model predicts that prevalence will level off when it reaches 65.4 in 10,000. The second model predicts it will top at 70.5 in 10,000. I think these projections are reasonable, considering California DDS has eligibility restrictions. But we'll just have to see if they pan out.
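As a quick sanity check of the 2013 figure, the first model's parameters can be plugged into the standard cumulative Bass form (the parameters are the ones listed above; the code itself is my own sketch, not the original):

```python
import math

def bass(t, p, q, t0, m):
    """Standard cumulative Bass diffusion curve."""
    e = math.exp(-(p + q) * (t - t0))
    return m * (1.0 - e) / (1.0 + (q / p) * e)

# Parameters of the first model (1993-2007 fit).
P, Q, T0, M = 5.959e-8, 0.253, 1943.246, 65.395

print(bass(2013, P, Q, T0, M))  # close to 60 per 10,000
print(bass(2100, P, Q, T0, M))  # essentially the ceiling m = 65.395
```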

Limitations

The main limitation of the models derived in this post is that they assume m is constant. In reality m could change, not just because of possible environmental factors, but also because of changes in diagnostic criteria, and changes in eligibility policy. That's why I used a shorter time range to derive the model I actually prefer: the one based on the 1993-2007 prevalence series only.

Tuesday, April 13, 2010

The Media's Interest in the Anti-Vaccine Movement

Over at Science-Based Medicine, Dr. David Gorski has written a post about email exchanges he had with a reporter named Steven Higgs. I exchanged some emails with Dr. Gorski myself, prior to his post, about some rudimentary data analysis Mr. Higgs had done with special education counts. I sent Dr. Gorski a number of graphs in order to illustrate Mr. Higgs' interpretation errors. Do check out the post.

What I actually wanted to discuss here is Dr. Gorski's observation about the apparent lack of anti-vax activity in Autism Awareness Month.
The anti-vaccine movement’s usual suspects haven’t been all over the mainstream media, as they usually are this time every year, often as early as April 1 or even March 31.

Could the anti-vax movement be losing steam? Are they regrouping? I have no idea. But we can check how much interest the media has had in the anti-vax movement in the last 13 years.



This is a graph of Google News Archive "autism vaccines" articles per 100 "autism" articles. Google News Archive has its own graphs where you can sort of see the trend as well, but it's methodologically better to look at article counts relative to "autism" articles, for obvious reasons.
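The relative measure behind the graph is straightforward to compute. Here's a minimal sketch; the yearly counts below are purely hypothetical placeholders, not the actual Google News Archive data.

```python
# Hypothetical yearly hit counts -- illustrative numbers only.
autism = {2006: 12000, 2007: 15000, 2008: 21000}
autism_vaccines = {2006: 900, 2007: 1050, 2008: 1900}

def per_100(numerator, denominator):
    """Articles matching the narrower query per 100 articles matching
    the broader one, year by year. Normalizing this way controls for
    growth of the index and of autism coverage overall."""
    return {year: 100.0 * numerator[year] / denominator[year]
            for year in sorted(denominator)}

print(per_100(autism_vaccines, autism))
```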

I also added a VAERS "autism" submissions series to the graph. Clearly, media coverage of anti-vax speculation correlates well with VAERS submissions. See also how it compares to Sullivan's graph of the number of autism cases before the vaccine court.

2008 was a good year for anti-vaxers, given that it was the year when the Hannah Poling story broke, and Jenny McCarthy started to publicize her autism books on TV. But if you look at the graph, 2008 provided only a marginal boost. I doubt anti-vaxers will have another 2008 ever again.

That's the reality of the situation, even though in the blogosphere we sometimes seem to perceive things differently. Anti-vaxers often talk as though they are "winning the debate." Next time you find an anti-vaxer who says they are winning the debate, ask them what they are basing that opinion on, and send them over to this post.

Thursday, February 18, 2010

Blogger Shuts Down John Best?

Note: This post is about John Best Jr. from New Hampshire, owner of the Hating Autism blog, AKA Fore Sam. John Best is evidently a fairly common name.


[UPDATE 2/18/2010: No such luck. His 3 blogs are back. John Best claims he showed Google "who they are messing with." It's probable that people have flagged his blogs so often over time that a Google employee decided to shut them down, but I'm guessing it doesn't take much to get Google to unblock blogs.]

After years of blatantly violating Blogger's Content Policy, John Best's blog, Hating Autism, has apparently been shut down for good.



I guess it's all part of the Illuminati/BigPharma/Reptilian-Alien conspiracy to control the world's population through vaccines. First Wakefield, then John Best. Next thing you know, the FDA will be raiding the Geiers' house/clinic. One can only hope.

And John, don't even try to comment. I have not changed my comment policy, and as a matter of principle, I will delete your comments, simply because I've said I would. Managing the comments section of a blog with a stated comment policy is not censorship, despite what you'll no doubt contend. What AoA does – arbitrarily disapproving comments on a case-by-case basis – might be closer to censorship, but arguably even that isn't censorship. I'm sure you'll create a blog somewhere else, so you'll have a new platform for your views, questionable as many people think they are.

Friday, January 29, 2010

Wakefield is not Galileo

For those who keep trying to invoke the "Galileo gambit" in order to defend Andrew Wakefield, let me explain something real quick. Wakefield is not Galileo for two key reasons:

1. Galileo was right.

2. Galileo did not engage in scientific misconduct.

It's as simple as that.