The New Zealand Herald and Jimbo’s have provided us with an idealised “bad science” case study.

Today, the Herald published an article about a “trial” published by pet food manufacturer Jimbo’s: No bones about bones

The trial was intended to evaluate how eating bones affects the dental health of dogs. Thankfully the article makes it pretty clear why Jimbo’s would be looking into this, although it reads more like a quote from a press release than the declaration of a conflict of interest that it really is:

Jimbo’s sells over 300 tonnes of bones per year which help thousands of cats and dogs keep healthier teeth.

This trial seems rather special in that it’s a rare composite of just about every aspect of poor methodology all put together at once. I think it makes for an excellent “bad science” case study, which could hopefully be a good resource for journalists who might find themselves in danger of reproducing the Herald’s results.

And it’s not just journalists that can benefit from understanding this. Being aware of the potential shortcomings of research can make everyone more savvy when it comes to parsing science news. None of this is particularly hard to understand at a high level.

Pared way down, designing a study is about two things:

  1. Finding a way to test a hypothesis by attempting to disprove it.
  2. Taking measures to account for as many sources of bias as possible.

Jimbo’s failed the first of those objectives spectacularly, but at least they were up front about it:

The Jimbo’s Dental Trial was carried out because we wanted to prove what we already knew – that a species-appropriate diet including a bone a day can improve or maintain dental health in our furry friends.

Jimbo’s Dental Trial – 2015

Different aspects of good methodology can be roughly paired with the sources of bias they’re trying to account for. For example, having a large sample size is a way to diminish the effects of random variation within your sample population.

Here’s a list of the methodological problems with this Jimbo’s trial, and the corresponding sources of bias that they aren’t accounting for:

  • Source of bias
    Publication bias, where positive results are more likely to be published than negative results.

    How you should account for it
    Register your trial ahead of time, and ensure it gets published in a peer-reviewed scientific journal.

    What Jimbo’s did in their trial
    As far as I can find, the trial wasn’t pre-registered. Instead of being published in a peer-reviewed scientific journal, it was published as a PDF on the Jimbo’s website.

  • Source of bias
    Random variation within your sample population.

    How you should account for it
Have as large a sample size as possible. Of course, larger sample sizes make research more expensive, but if your sample is too small you won’t be able to reliably detect an effect.

    What Jimbo’s did in their trial
    The study used a sample of eight dogs. This was further reduced to seven after one dropped out for not following the diet.

  • Source of bias
    Regression to the mean, changes unrelated to the experiment, Hawthorne effect etc.

    How you should account for it
    Have an appropriate control group, for example a group of dogs not on the special diet.

    What Jimbo’s did in their trial
    The study did not include a control group.

  • Source of bias
    Bias, unconscious or otherwise, from researchers making measurements.

    How you should account for it
Blind the researchers making measurements, so they don’t know whether the participant they’re evaluating is in the control group or the experimental group.

    What Jimbo’s did in their trial
    There was only an experimental group, so blinding was not possible.

    2016/10/30 Edit: Thomas Lumley has made a good point about blinding over on StatsChat. That is, the researcher evaluating the photos could have been blinded to whether each one was a “before” photo or an “after” photo. The study doesn’t mention if this was done, however.

  • Source of bias
    Differences between the populations in the control and experimental groups.

    How you should account for it
    Randomise which group each study participant ends up in.

    What Jimbo’s did in their trial
    There was only an experimental group, so randomisation was not possible.
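To make the sample-size point above concrete, here’s a rough simulation. This is my own illustration, not anything from the trial: the effect size, test, and group sizes are all assumptions chosen for demonstration. The idea is that with only seven dogs per group, even a genuine, moderate improvement would often go undetected, whereas a much larger trial would catch it almost every time.

```python
import random
import statistics

def power(n: int, effect: float, sd: float = 1.0,
          trials: int = 5000) -> float:
    """Estimate, by simulation, how often a two-group comparison
    of n dogs per group detects a true improvement of the given
    size (in standard deviations). Uses a crude one-sided z-test
    on the difference of group means -- illustrative only."""
    detected = 0
    for _ in range(trials):
        control = [random.gauss(0, sd) for _ in range(n)]
        treated = [random.gauss(effect, sd) for _ in range(n)]
        # Standard error of the difference between the two means.
        se = (sd ** 2 / n + sd ** 2 / n) ** 0.5
        z = (statistics.mean(treated) - statistics.mean(control)) / se
        if z > 1.645:  # one-sided test at the 5% level
            detected += 1
    return detected / trials

random.seed(1)
# A hypothetical moderate effect: half a standard deviation.
print(power(7, 0.5))    # small groups: the effect is usually missed
print(power(100, 0.5))  # large groups: the effect is nearly always found
```

With seven dogs per group, the simulated detection rate comes out somewhere around a quarter; with a hundred per group it’s close to certain. A trial that can only detect its own effect a minority of the time isn’t much of a trial.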

The trial also lacked any sort of statistical analysis. Without a control group, there isn’t really a good way to do this, but it seems like Jimbo’s didn’t even try to figure out how likely it was that their result was a false positive.
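For illustration only: if per-dog before/after judgements had been reported (the trial doesn’t give them), about the simplest possible analysis would be a sign test. Under the null hypothesis that each dog is equally likely to look better or worse after the diet, the chance that all of them improve purely by luck is easy to compute. The per-dog numbers below are hypothetical.

```python
from math import comb

def sign_test_p(improved: int, n: int) -> float:
    """One-sided sign test: the probability that at least
    `improved` of `n` dogs improve by chance alone, if
    improvement and decline are equally likely under the null."""
    return sum(comb(n, k) for k in range(improved, n + 1)) / 2 ** n

# Hypothetical: suppose all 7 remaining dogs were scored "improved".
print(round(sign_test_p(7, 7), 4))  # 0.5**7 = 0.0078
```

Even a small p-value here wouldn’t rescue the trial, though: with no control group, regression to the mean or changes over time could plausibly improve every dog’s teeth regardless of the bones. The point is simply that Jimbo’s didn’t attempt even this much.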

I always find it amusing to see research that fails so spectacularly to be well designed, as this trial has, but there’s a downside as well. The New Zealand Herald picked it up completely uncritically. In fact, their story reads to me more like an advertisement or press release than the critical analysis I’d expect from a high-quality media outlet.

In the end, though, the Herald did get one thing right: they provided a link to the original research, so all of their readers can see for themselves just how spectacularly bad it is.
