The New Zealand Herald and Jimbo’s have provided us with an idealised “bad science” case study.
Today, the Herald published an article about a “trial” conducted and published by pet food manufacturer Jimbo’s: No bones about bones
The trial was intended to evaluate how eating bones affects the dental health of dogs. Thankfully the article makes it pretty clear why Jimbo’s would be looking into this, although it reads more like a quote from a press release than the declaration of a conflict of interest that it really is:
Jimbo’s sells over 300 tonnes of bones per year which help thousands of cats and dogs keep healthier teeth.
This trial seems rather special in that it’s a rare composite of just about every aspect of poor methodology all put together at once. I think it makes for an excellent “bad science” case study, which could hopefully be a good resource for journalists who might find themselves in danger of reproducing the Herald’s results.
And it’s not just journalists that can benefit from understanding this. Being aware of the potential shortcomings of research can make everyone more savvy when it comes to parsing science news. None of this is particularly hard to understand at a high level.
Pared way down, designing a study is about two things:
Finding a way to test a hypothesis by attempting to disprove it.
Taking measures to account for as many sources of bias as possible.
Jimbo’s failed the first of those objectives spectacularly, but at least they were up front about it:
The Jimbo’s Dental Trial was carried out because we wanted to prove what we already knew – that a species-appropriate diet including a bone a day can improve or maintain dental health in our furry friends.
It’s roughly possible to pair up different aspects of good methodology with the sources of bias they’re trying to account for. For example, having a large sample size is a way to diminish the effects of random variation within your sample population.
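To see why sample size matters so much, here’s a rough simulation. The effect size and noise level are invented purely for illustration (they have nothing to do with the Jimbo’s trial), and the “2 standard errors” criterion is just a crude stand-in for a proper significance test:

```python
# Monte Carlo sketch: how often does a study detect a real effect?
# The effect size (0.5 standard deviations) is invented for illustration.
import random
import statistics

def detected(n, effect=0.5, trials=2000, seed=1):
    """Fraction of simulated studies where the sample mean exceeds
    2 standard errors (a rough significance criterion)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        sample = [rng.gauss(effect, 1.0) for _ in range(n)]
        se = statistics.stdev(sample) / n ** 0.5
        if statistics.mean(sample) > 2 * se:
            hits += 1
    return hits / trials

print(f"n =   7: effect detected {detected(7):.0%} of the time")
print(f"n = 100: effect detected {detected(100):.0%} of the time")
```

With only seven participants, even a genuinely effective treatment is missed most of the time; with a hundred, it’s detected almost always. Small studies mostly produce noise.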
Here’s a list of the methodological problems with this Jimbo’s trial, and the corresponding sources of bias that they aren’t accounting for:
Source of bias
Publication bias, where positive results are more likely to be published than negative results.
How you should account for it
Register your trial ahead of time, and ensure it gets published in a peer-reviewed scientific journal.
What Jimbo’s did in their trial
As far as I can find, the trial wasn’t pre-registered. Instead of being published in a peer-reviewed scientific journal, it was published as a PDF on the Jimbo’s website.
Source of bias
Random variation within your sample population.
How you should account for it
Have as large a sample size as possible. Of course, larger sample sizes make research more expensive, but if your sample is too small you won’t be able to reliably detect an effect.
What Jimbo’s did in their trial
The study used a sample of eight dogs. This was further reduced to seven after one dropped out for not following the diet.
Source of bias
Changes that would have happened anyway, regardless of the treatment.
How you should account for it
Have an appropriate control group, for example a group of dogs not on the special diet.
What Jimbo’s did in their trial
The study did not include a control group.
Source of bias
Bias, unconscious or otherwise, from researchers making measurements.
How you should account for it
Blind the researchers making measurements so they don’t know whether the participant they’re evaluating was in the control group or the experimental group.
What Jimbo’s did in their trial
There was only an experimental group, so blinding was not possible.
2016/10/30 Edit: Thomas Lumley has made a good point about blinding over on StatsChat. That is, the researcher evaluating the photos could have been blinded to whether each one was a “before” photo or an “after” photo. The study doesn’t mention if this was done, however.
Source of bias
Differences between the populations in the control and experimental groups.
How you should account for it
Randomise which group each study participant ends up in.
What Jimbo’s did in their trial
There was only an experimental group, so randomisation was not possible.
The trial also lacked any sort of statistical analysis. Without a control group, there isn’t really a good way to do this, but it seems like Jimbo’s didn’t even try to figure out how likely it was that their result was a false positive.
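For what it’s worth, even an uncontrolled before-and-after comparison can be analysed statistically. The sketch below uses entirely invented tartar scores for seven dogs (the Jimbo’s trial published no raw data, so none of these numbers come from it) and a paired t-test:

```python
# Hypothetical before/after analysis for a 7-dog trial.
# The scores below are invented for illustration; they are NOT from
# the Jimbo's trial, which published no raw data or statistics.
import math
import statistics

before = [3.1, 2.8, 3.5, 2.9, 3.3, 3.0, 2.7]  # hypothetical tartar scores
after  = [2.9, 2.9, 3.0, 2.8, 3.1, 2.6, 2.8]

diffs = [b - a for b, a in zip(before, after)]
n = len(diffs)
mean_diff = statistics.mean(diffs)
se = statistics.stdev(diffs) / math.sqrt(n)
t_stat = mean_diff / se

# Two-sided critical value for alpha = 0.05 with 6 degrees of freedom.
T_CRIT = 2.447
significant = abs(t_stat) > T_CRIT
print(f"t = {t_stat:.2f}, significant at the 5% level: {significant}")
```

Even if such a test did come out significant, without a control group it still couldn’t attribute the change to the diet rather than to, say, the dogs getting older or regression to the mean. But not even attempting it leaves no way to tell a real effect from noise.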
I always find it amusing to see research that is this spectacularly badly designed, but there’s a downside as well. The trial was picked up completely uncritically by the New Zealand Herald. In fact, their story reads to me more like an advertisement or press release than the critical analysis I’d expect to see from a high quality media outlet.
Although in the end, the Herald did one thing right. They provided a link to the original research so all of its readers could see for themselves how spectacularly bad it is.
This afternoon, the Advertising Standards Authority released their decision to uphold an interesting complaint regarding advertisements for a couple of cleaning products on a website. Here is the ASA’s description of how the products were described on the website:
The Wendyl’s website (http://wendyls.co.nz/) for “100% natural cleaning and beauty products” advertised their products as having “all their ingredients listed and contain no fillers, chemicals or synthetics.”
The webpage for Wendyl’s Oxygen Bleach 1KG (sodium Percarbonate) stated, in part:
This is powdered hydrogen peroxide which is a greener alternative to chlorine bleach because it breaks down to oxygen and water in the environment.
The webpage for Wendyl-San oxygen soaker 1KG stated, in part:
I’ve spent years testing this oxygen soaker and stain remover and I’m so glad to have something which is so free of chemicals and additives. Secret ingredient is sodium percarbonate, a powdered hydrogen peroxide bleach which breaks down in the environment to oxygen and water…
The advertiser uses words such as “100% natural”, and “contains no fillers, chemicals, or synthetics”.
However, the product in question is sodium percarbonate, which is not a naturally occurring product. The main active ingredient, hydrogen peroxide, is also not a naturally occurring product and it is not stable in nature.
Both are synthetic chemicals.
After hearing from the advertiser as well, the Advertising Standards Complaints Board sided with the complainant. Here is a summary from the headnote of their decision:
The Complaints Board said it accepted the Advertiser’s view that “sodium percarbonate is a much safer and more environmentally friendly alternative to chlorine bleach” but not that it was “chemical free” and “100% natural.” The Complaints Board said the advertisement was likely to mislead consumers into thinking the products were “100% natural” and “chemical free” when they actually contained naturally occurring chemicals, in breach of Principle 2 of the Code for Environmental Claims and had not been prepared with a due sense of social responsibility to consumers in breach of Principle 1 of the Code for Environmental Claims.
Accordingly, the Complaints Board ruled to Uphold the complaint.
The most interesting part of this complaint is, I think, who the advertiser is. As well as selling cleaning and beauty products online, Wendyl Nissen writes a weekly column for the New Zealand Herald called “Wendyl Wants To Know“. The Herald describes the column as:
Each week, Wendyl Nissen takes a packaged food item and decodes what the label tells you about its contents.
Have a look for yourself, but from the columns of hers that I’ve read it seems the main argument is typically along the lines of “natural is good, chemicals are bad”. So I find it very ironic that she’s now had a complaint upheld against her for misleadingly claiming that a product she sells is “100% natural” and “chemical free”.
For a counterexample to the attitude of “natural is good, chemicals are bad”, you need look no further than the “recipes” section of her website. There, she has some pet recipes which she makes available for free including one for De-Flea Powder for Cat Biscuits and another for Doggy De-Flea Treats. In both recipes, she claims the active ingredients are yeast and garlic:
The theory behind this powder is that fleas hate the taste of yeast and garlic so will hop off and look elsewhere.
Recently I’ve run across a couple of New Zealand companies that sell therapeutic products – one a weight loss pill, the other a jet lag drink – that seem to put marketing first and let science take the back seat. This is by no means new behaviour, but I want to use them as examples to illustrate this widespread problem and suggest what can be done to combat it.
Before promoting a therapeutic product, you should first have good reason to believe that it works. This, I hope, is common sense, but it’s also enshrined in the Fair Trading Act and the Therapeutic Products Advertising Code as they prohibit unsubstantiated claims. This means you have to test the product, and do so rigorously. Rigorous clinical trials are expensive to undertake though, so they’re quite a prohibitive first step.
Instead of jumping straight into the deep end, a useful first step can be to undertake a smaller, less rigorous (and therefore less expensive) preliminary experiment. Such an experiment can’t answer the question of whether a product actually works; only a more rigorous trial can do that. But if the results of a preliminary trial are optimistic, you have reason to expect a more rigorous trial might give similar results, so the expense might be worth it. You may even be able to get some funding towards a more rigorous trial on the basis that the preliminary results were positive.
As you can see on the trial registration page, this was an uncontrolled trial on overweight adults. The original plan was to recruit 100 volunteers with the hope that at least 60 would complete the trial. I have to say I’m a bit confused about how many people were in the trial, as apparently the recruitment was increased to 200 applicants after applications opened (a change that has not been reflected in the trial’s registration) yet apparently about 400 people applied. One article claims there were 200 participants, a later media release from Tuatara Natural Products seems to imply 100 were recruited, and the analysis of the trial says there were only 81 participants. In any case, 81 participants completed the full 8 weeks, and 52 of them took the recommended dose for the whole duration of the trial.
Whether there were 19 or 119 participants who didn’t complete the trial, the statistical analysis seems to ignore them, with no justification given. This is unusual; even a 19% drop-out rate is significant and shouldn’t be swept under the rug. Much of the time, Tuatara also seems to ignore the 29 participants who didn’t drop out but didn’t take the full dose for the whole duration either.
The Science Media Centre posted the responses of 2 experts, Associate Professor Andrew Jull and Professor Thomas Lumley, to a press release from Tuatara Natural Products in February. It’s a good analysis of some of the weaknesses with the study, and I recommend you read it: Kiwi diet pill claims – experts respond
This trial was uncontrolled, and therefore also unblinded and unrandomised. As Professor Lumley explains, this is a problem if you want to draw strong conclusions from its results. It is of low methodological quality, but that’s okay. There is no problem with doing less rigorous trials first if they’re done in order to determine if more rigorous trials are necessary. Dr Glenn Vile, Chief Technical Officer of Tuatara Natural Products and the principal investigator for this study, wrote the following in a comment on a post on the “Fat Mates” trial by Dr John Pickering:
The Fat Mates trial was designed by clinical trial specialists to generate information about the Satisfax® capsules that would help Tuatara Natural Products plan a larger and longer double blind, cross over, placebo controlled trial.
We will use this information to proceed with the next clinical trial, but in the meantime we were so excited the weight loss achieved by most of our Fat Mates was much greater than the placebo effect seen in other weight loss clinical trials that we decided to launch the product so that anybody who is overweight can try Satisfax® for themselves.
I think the first part of what I’ve quoted above describes exactly what Tuatara Natural Products should be doing. They’ve conducted their low quality trial, and intend to use its results to proceed with a larger, longer, and more rigorous clinical trial. This is the right way to proceed – they now have an indication that their product might be effective, so they should do the research to find out.
The problem is that that’s not all they’re doing. After performing only a small low-quality trial, they’ve released their product for sale online and have been making a lot of noise about it. In my opinion, they’ve been significantly overstepping the results of their clinical trial. For example, in his comment Dr Vile also said:
our initial trial has shown [Satisfax] to be extremely effective in some overweight people.
Dr Glenn Vile
In their media release on the 20th of February, they reported the average weight lost only by the 52 participants who took the full dose to completion (rounded up from 2.9 kg to “close to 3kg”) but not the average weight lost by all participants. They then reported in bold that the top 26 participants lost more weight, and the top two participants lost even more weight than that!
This cherry picking of the best results appears to have been part of Tuatara Natural Products’ marketing strategy for at least a few months now. In January, Stuff published an article on the trial highlighting the single person who lost the most while participating in it: Blenheim ‘fat mate’ loses 13.5kg in 8 weeks.
That article particularly highlights the person who lost the most weight out of all those in the trial, at 13.5 kg (confusingly, the maximum weight loss reported in the analysis of the trial’s results is 13.3 kg). However, she was one of only 2 participants who lost over 10 kg, and on average the 52 participants who took the recommended dose for the full eight weeks lost 2.9 kg. Losing 13.5 kg is very far from a representative example. I’m not surprised that they didn’t choose instead to focus on the participant who gained 1.2 kg despite taking the recommended dose for the whole duration, but that is actually much closer to the mean change in weight.
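The numbers below are invented (they’re not the Fat Mates data, which wasn’t published in full), but they illustrate how a distribution of results can contain a couple of dramatic outliers while the average stays modest:

```python
# Illustration of why reporting only the best results misleads.
# These weight changes (kg) are invented, NOT the Fat Mates data.
import statistics

weight_change = [-13.5, -10.0, -3.0, -2.0, -1.5, -1.0, -0.5, 0.0, 0.3, 1.2]

mean_all = statistics.mean(weight_change)
top_two = sorted(weight_change)[:2]  # most negative = most weight lost

print(f"mean change across everyone: {mean_all:.1f} kg")
print(f"the two 'star' participants: {top_two}")
```

A press release built around the two “stars” sounds spectacular; the mean tells a far less exciting story. This is exactly why representative summaries, not highlights, are what matter.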
The article is, for all intents and purposes, one big testimonial in favour of Satisfax. It was an article, not an advertisement, which is important because in New Zealand it’s illegal to publish any medical advertisement that:
directly or by implication claims, indicates, or suggests that a medicine of the description, or a medical device of the kind, or the method of treatment, advertised… has beneficially affected the health of a particular person or class of persons, whether named or unnamed, and whether real or fictitious, referred to in the advertisement
This effectively bans all health testimonials from advertisements. I think this is a good part of the law, as testimonials can be both very convincing and completely misleading; a quack’s dream. Banning them should force businesses to instead focus on the results of research on their products, but this hasn’t stopped Tuatara Natural Products from getting stories written about the most extreme testimonials they could find from people who have lost weight at the same time as they were taking Satisfax.
More recently, Tuatara Natural Products has put out a press release multiple times (at least on the 20th of February and again on the 4th of March) that I think rather oversteps the results of their small preliminary trial:
A NEW ZEALAND SOLUTION TO A GLOBAL PROBLEM
A little pill is providing an exciting answer to one of the world’s greatest and fastest growing problems: Obesity.
I simply don’t think they are at all justified in saying that their new product is “providing an exciting answer to… Obesity”. They are putting marketing ahead of science, and that’s not okay.
Another company that seems to put marketing before research is 1Above. They make a drink which they claim can help you recover faster from jet lag, and have recently been in the news for signing a sponsorship deal with the fantastically successful golfer Lydia Ko.
At the end of that article about their sponsorship deal the reporter, Richard Meadows, made some comments regarding the science behind jet lag relief products and asked some good questions of 1Above’s CEO, Stephen Smith (emphasis mine):
[1Above’s] product contains a mixture of vitamins B and C, electrolytes, and Pycnogenol, a pine bark extract.
The efficacy of flight drinks to combat the effects of jetlag is unproven.
Late last year pharmacists were warned after the Advertising Standards Authority upheld a complaint against an ad saying a homeopathic anti-jet lag pill really worked.
[1Above CEO Stephen] Smith said 1Above would not be doing clinical trials, which were highly expensive and not necessary.
“What we tend to use is testimonials from people who have used the product and swear by it.”
Smith said the key ingredient, Pycnogenol, had itself been tested in dozens of trials, including its effects on reducing jetlag.
Yes, you read that correctly. The CEO of 1Above literally said that they won’t be doing clinical trials because they are “not necessary” and that they use testimonials instead.
As I said before, using testimonials to promote a therapeutic product, like a drink to help you recover faster from jet lag, can be both very convincing and completely misleading. There’s a reason why testimonials implying health benefits are illegal in New Zealand, and I hope that 1Above’s marketing will not violate this regulation.
Not all testimonials are prohibited, of course. It’s entirely acceptable to provide a testimonial from someone who thinks their drink tastes great, or that they provide great service. Basically anything for which a single person’s experience can provide a useful insight. Therapeutic effects, almost without exception, do not fall into this category, which is a big part of why we need to do clinical trials in the first place. If they quote someone as saying that their product helped them recover faster from jet lag, they may be in danger of breaching the Medicines Act.
On their website, 1Above currently does refer to research on one of the ingredients in their product, “pycnogenol”. Professor Lumley recently wrote a post about this on his other blog, Biased and Inefficient, regarding these studies and how they are used by 1Above: Clinically Proven Ingredients
I recently contacted 1Above to ask about some discrepancies I found between the abstract of the study they cited for showing pycnogenol reduced the duration of jet lag and their description of it on their website:
I was interested to see the claim your company made that Pycnogenol® has been shown to support circulation and reduce the length and severity of jet lag.
The participants in the study took 50 mg Pycnogenol 3 times per day, but I haven’t been able to find out how much is contained in your products. Is this information available anywhere on your website? I notice the study also says the participants took this regimen for 7 days, starting 2 days prior to departure. Is this comparable with how your product is intended to be used?
I also noticed some differences between the description of the study and its results between the abstract and your website, I would be grateful if you could explain to me the source of these differences.
The abstract states the control group took, on average, 39.3 hours to recover and the experimental group took, on average, 18.2 hours to recover. However your website reports these as 40 and 17 hours respectively.
Also, your website states that the study involved 133 passengers (it’s not clear from the description on your website if they all took Pycnogenol or if some of them were in the control group) who reported the time it took them to recover from jetlag. However, the study’s abstract states that the first experiment, which is the only one that involved reporting the time taken to recover from jetlag, only involved 68 participants – 30 in the control group and 38 in the experimental group.
I would be grateful if you could explain these differences to me, and if you could send me any other relevant scientific information that supports this claim.
To their credit, since receiving my message they did update their website to fix the discrepancies in the reported number of participants and times taken to recover from jet lag, and their CEO replied to thank me for pointing these discrepancies out.
However, they didn’t respond to my other questions about the amount of pycnogenol in their products or the study involving the participants taking pycnogenol for 7 consecutive days, starting 2 days before their flight, which is inconsistent with how 1Above recommends their products be used.
This is just one more company basing their marketing on preliminary trials instead of using them as the basis for research that could actually answer the question of whether or not a product is useful. Worse than Tuatara Natural Products, they even go so far as to consider clinical trials “not necessary” and apparently intend to rely on testimonials instead. It would be much more appropriate for them to spend some of their $2.4 million annualised income on researching their product rather than paying for a sporting celebrity to endorse them.
I try to make my rants constructive, so I want to end this article with the question “What can we do about this?”. If you have any suggestions, I’d love to hear them in the comments section.
I think the most important thing that anyone can do to address this problem is to ask for evidence. If you see a claim made about a product that you think you might buy, then get in touch with the company selling it to let them know you’re considering buying it and to ask for evidence. If they don’t have a good enough answer, then let them know that’s why you won’t be buying their product. If they give you evidence to back up their claim, then great!
Asking for evidence doesn’t have to be a big deal, involving a formal letter or anything like that. When you see a weight loss product advertised on a one day deal site, a copper bracelet that apparently offers pain relief advertised on a store counter, or a jet lag cure promoted on Twitter, make your first response be to politely ask for evidence.
This isn’t a problem that’s going away any time soon. As consumers, we deserve to be able to make informed decisions about the products we buy, and when companies put marketing before research it becomes harder to make these informed choices. But if we work together then we can encourage companies like Tuatara Natural Products and 1Above to improve their behaviour and attitudes toward marketing and research.
Let’s turn “what’s the evidence?” into a frequently asked question for all companies that sell therapeutic products.
Biosecurity is a big issue for New Zealand. Being a group of islands fairly isolated from all other landmasses and having quite a unique native ecosystem (many native birds with no native mammalian predators and few native land mammals), we have a lot to lose from introduced species. There are also biological threats to industry that we have to try really hard to keep out of the country, such as Queensland fruit fly. There’s good reason why the Ministry for Primary Industries (MPI, formerly MAF) reacted so strongly when one of these flies was found in Whangarei in April 2014. If enough of these flies made it into New Zealand to self perpetuate, they could cause massive damage to New Zealand’s $5 billion horticulture industry.
In order to kill off any biosecurity risks, including disease-causing organisms and foodborne pests, various treatments (also known as “phytosanitary actions” when used on plant products) can be used when importing products into New Zealand. Different products that can be imported each have an Import Health Standard (IHS) that documents the process of importing them.
Imported fruit and vegetables need to come with a phytosanitary certificate from their country of origin, stating either that they have been inspected by someone from MPI and no pests were found, that they come from a certified pest-free area, or that they have been treated to kill any pests. A sample of the products is also inspected by MPI when arriving in New Zealand, and if any pests are found then the products will have to be treated if they are to enter New Zealand.
The treatment used depends on a few things, such as which pest was found and needs to be killed. For example, assuming I’m interpreting the IHS correctly, if Thrips palmi is found in a shipment of capsicum from Australia it would be fumigated with methyl bromide at 32 g/m3 for 2 hours. Whereas if Conogethes punctiferalis were found, then the capsicum would be irradiated with a minimum dose of 250 Gy (Grays; 1 Gray is equivalent to 1 Joule of energy absorbed per kg of food).
The previous paragraph is incorrect. Those treatments are the ones that should appear on the phytosanitary certificate, having been performed in the country of origin. The treatments done if a pest is found when they arrive in New Zealand are determined in the Approved Biosecurity Treatments Standard. So for fresh fruit and vegetables (page 37), if insects except for fruit flies (not slugs and spiders) are found then they have to be fumigated with methyl bromide at a particular rate and temperature for a particular duration (presumably depending on the pest and the produce). Looking at this standard, it seems human food doesn’t get irradiated if pests are found when it arrives in New Zealand. According to MPI’s list of treatment providers (direct PDF download), there is only one facility in New Zealand able to provide food irradiation, which is in Wellington.
Methyl bromide is an insecticide, and it’s also recognised as an ozone-depleting substance. Because of this, its use is tightly controlled. It’s only allowed to be used for a few specific purposes, one of which is quarantine, and New Zealand has to provide statistical data to the Ozone Secretariat on the annual amount of methyl bromide that we use. It’s nasty stuff – even skin contact with high enough concentration of the gas can cause severe blistering – but after being used to fumigate food it apparently dissipates fairly rapidly. There are some objects that MPI won’t fumigate with methyl bromide for various reasons, which are described in their info sheet I linked to above.
Irradiation is quite different. Using either Cobalt-60, x-rays, or an electron beam, food is blasted with a specific dose of ionising radiation. Cobalt-60 is a radioactive source of this radiation, but as it emits gamma rays instead of neutrons it doesn’t make anything around it radioactive. Both x-rays and electron beams are created by non-radioactive sources and can be switched on and off.
When food is irradiated, the process kills any organisms that are living in the food, including disease-causing organisms and pests. The food does not become radioactive; instead it will just be slightly warmed by the energy it absorbs. The radiation also triggers some chemical changes, but these occur only in amounts comparable to heat treatments. In this way it’s quite similar to the process of pasteurisation used to make milk safe to drink.
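The “slightly warmed” claim is easy to sanity-check, since 1 Gray is 1 Joule of energy absorbed per kilogram. Assuming produce has roughly the specific heat capacity of water (an assumption on my part, around 4,186 J per kg per °C):

```python
# Back-of-envelope temperature rise from food irradiation.
# Assumes produce has roughly the heat capacity of water.
SPECIFIC_HEAT_WATER = 4186.0  # J per kg per degree Celsius

def temp_rise(dose_gy):
    """Temperature rise (degC) for an absorbed dose in Grays (J/kg)."""
    return dose_gy / SPECIFIC_HEAT_WATER

print(f"250 Gy (quarantine dose for produce): {temp_rise(250):.3f} degC")
print(f"50 kGy (the cat food incident):       {temp_rise(50_000):.1f} degC")
```

A 250 Gy quarantine dose warms food by well under a tenth of a degree, and even the enormous doses used on the cat food amount to roughly a 12 °C rise. Whatever one thinks of irradiation, the energy involved is tiny.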
In 2010, following an extensive literature search, the European Food Safety Authority (EFSA) published their Scientific Opinion on the Chemical Safety of Irradiation of Food. They found that the new evidence published since their previous decision in 2003 wasn’t enough to change their opinion that “there is not an immediate cause for concern” regarding the safety of irradiated food.
The strongest negative evidence they found seemed to be a case in which cats ate a diet consisting largely or entirely of highly irradiated (25.7 to 53.6 kGy, i.e. 100 to 200 times as much as in the capsicum example from earlier) cat food and subsequently suffered from leukoencephalomyelopathy (LEM). This evidence doesn’t necessarily have any relevance to humans though; in another report dogs ate the same pet food and didn’t exhibit LEM. Also, as the incident was only linked to one specific lot of one specific brand of pet food it’s unclear if irradiation was the culprit at all.
MPI’s Food Smart website has an informative page on food irradiation. It’s quite clear on several important points (you can read their full answers on the page):
Does irradiation change food?
At the approved doses, changes to the nutritional value of the food caused by irradiation are insignificant and do not pose any public health and safety concerns.
Some treated foods may taste slightly different, just as pasteurized milk tastes slightly different from unpasteurized milk. There are no other significant changes to these foods.
Does irradiation make food radioactive?
Is it safe to eat irradiated food?
Yes. Irradiation of food does not make the food unsafe to eat.
The World Health Organisation, the Food and Drug Administration in the US and the American Medical Association all agree that irradiated food products are safe to eat.
The FDA’s page on food irradiation has an informative “Debunking Irradiation Myths” inset:
Irradiation does not make foods radioactive, compromise nutritional quality, or noticeably change the taste, texture, or appearance of food. In fact, any changes made by irradiation are so minimal that it is not easy to tell if a food has been irradiated.
FDA has evaluated the safety of irradiated food for more than thirty years and has found the process to be safe. The World Health Organization (WHO), the Centers for Disease Control and Prevention (CDC) and the U.S. Department of Agriculture (USDA) have also endorsed the safety of irradiated food.
Earlier this week, the Herald published an article by Sue Kedgley on irradiated food. In my opinion that article is a load of unscientific scaremongering. Here are a few excerpts that appear clearly intended to be more emotive than informative:
But irradiated food is anything but fresh. It’s been exposed to radiation doses that are between three and 15 million times the strength of x-rays. The Brisbane radiation facility uses Cobalt 60 to irradiate food, a radioactive material that is manufactured in Canadian nuclear reactors, and shipped to Australia in special unbreakable steel canisters.
I visited the Brisbane irradiation facility in 2004. Boxes of food travel by conveyor belt into an irradiation “chamber”. The irradiation process breaks down the molecular structure of food; destroys vitamins in food, and creates free radicals and other “radiolytic compounds” that have never been found in nature, and whose effect on human health is not known.
Also of concern is the fact that in 2008 the Australian Government was forced to ban irradiated cat food after more than 80 cats died or became seriously ill after eating irradiated cat food.
This begs the question – if cats can die, or become ill from eating irradiated cat food, what could be the cumulative effect on humans of eating significant quantities of irradiated food? There’s no benefit to New Zealand consumers, and only risks to our growers, from imported irradiated produce.
Her comment that irradiation “breaks down the molecular structure of food [and] destroys vitamins in food” is quite at odds with the evidence that the nutritional content of irradiated food is not significantly changed. The statement is blown entirely out of proportion; it’s like describing a papercut as having “ripped my flesh apart”.
She also doesn’t mention any of the details of the cat food incident: that the affected cats’ diet consisted largely or wholly of food irradiated at 100-200 times the dose generally used for human food, that the same food seemed to have no negative effects when eaten by dogs, or that the incident was linked to only one specific lot of one brand of cat food. How it relates to humans consuming irradiated food, if it has any implications for that at all, is not clear, but her reaction is just scaremongering.
Her article appears to have been prompted by a couple of changes to the regulations that are being considered:
FSANZ is currently assessing Application A1092 seeking permission to irradiate twelve specific fruits and vegetables. A call for submissions on our assessment is expected to be released in the second half of 2014.
Here’s a link to Application A1092. That page specifies the 12 fruits and vegetables involved as apple, apricot, cherry, nectarine, peach, plum, honeydew, rockmelon, strawberry, table grape, zucchini, and scallopini (squash).
Ms Kedgley describes these potential changes as:
the Government is about to approve the importation of irradiated apples, peaches, apricots and nine other fruit and vegetables from fruit fly-infested Queensland.
If they succeed, retailers will be able to sneak irradiated produce into the food chain, and it will be sold, unlabelled, as if it was “fresh”.
Surely consumers have a right to know whether the apples they are buying are fresh, or have been imported from Queensland and exposed to high doses of radiation to sterilise them and kill off potential fruit fly lava?
Looking at the IHS for fresh fruit and vegetables (direct PDF download), you can see that honeydew, rockmelon, strawberry, grape, zucchini, and scallopini are already included, they just aren’t yet allowed to be treated via irradiation. As far as I can tell the others – apple, apricot, cherry, nectarine, peach, and plum – can’t currently be imported from Australia.
Given that the entire function of irradiating food is to kill unwanted organisms such as Queensland fruit fly larvae, it seems disingenuous of Ms Kedgley to repeatedly refer to it as though allowing these products in will bring the Queensland fruit fly to New Zealand. The reason we can’t currently import these products is that fly; allowing them to be treated by irradiation would let us import them safely.
On the issue of labelling, this seems very similar to the compulsory labelling of genetically modified foods and foods containing genetically modified ingredients (currently mostly compulsory in New Zealand). In that case, as with food irradiation, opposition generally seems to be driven by ideological objections to the technology used, or by misinformed beliefs that it’s somehow unsafe, even though it’s entirely safe. It’s effectively a lose/lose situation – if labelling isn’t mandatory then “What are they trying to hide?”, but if it is mandatory then “They wouldn’t have to put it on the label if it wasn’t bad for you”.
If you want to oppose the addition of those 6 new fruits to the list of foods that can be imported from Australia on the basis of supporting New Zealand farmers, then okay – that’s a different argument altogether, and one that has nothing to do with irradiation. There doesn’t seem to be much reason to oppose this on the grounds that irradiated food may be unsafe to eat, though.
Foods are not allowed to be irradiated unless they have been through a pre-market safety assessment process conducted by FSANZ
Given that irradiated food doesn’t appear to be unsafe, is there really any reason to keep labelling of irradiated food compulsory? If anything, isn’t compulsory labelling most likely to make people think that means it’s bad or unsafe when it isn’t? If it’s all about allowing consumers to make informed decisions, that would be rather counterproductive.
I’m lucky enough to know someone who’s a food scientist. Claire Suen has an MSc in Food Science from the University of Auckland, and I contacted her to ask for her thoughts on the process of food irradiation. Here are some of the things she had to say in response to some of the common arguments opposing food irradiation:
[Irradiation] changes the nature of food: carcinogenic, loss of nutrients etc.
So does cooking, burning toast, deep frying, etc. Irradiation causes minute changes to the food and some loss of nutrients such as vitamins, but these have all been thoroughly researched and the results are readily available. In short, no significant changes to the food have been found.
Regarding the loss of nutrients, I usually point out to people that this is negligible considering the nature of the food.
FSANZ have published some comprehensive risk assessment reports in the past, and using the latest report on tomato as an example:
Nevertheless, even assuming an upper estimate of vitamin A and C loss of 15% following irradiation from all fresh tomatoes, capsicums and tropical fruits (with existing irradiation permissions), estimated mean dietary intakes of these vitamins would decrease by 2% or less and remain above Estimated Average Requirements following irradiation at doses up to 1 kGy, with dietary intake typically derived from a wide range of foods.
The impact of cooking and storage time on nutrients in food is far more severe than the effects of irradiation.
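The FSANZ numbers are easy to sanity-check with a back-of-the-envelope calculation. Note that the share of total vitamin intake coming from these particular foods is my own assumption for illustration, not a figure from the FSANZ report:

```python
# Rough check of the FSANZ figures: a 15% vitamin loss in a handful of
# foods translates to a much smaller change in total dietary intake.
# The 13% share of intake is an assumed value for illustration only.
vitamin_loss_in_irradiated_foods = 0.15   # FSANZ upper estimate
share_of_intake_from_these_foods = 0.13   # assumption, not from the report

overall_decrease = vitamin_loss_in_irradiated_foods * share_of_intake_from_these_foods
print(f"Overall dietary intake decrease: {overall_decrease:.1%}")  # just under 2%
```

On those (assumed) numbers, the overall decrease lands right around the “2% or less” that FSANZ reports, which is why a scary-sounding per-food loss can still be nutritionally negligible.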
Irradiated food saves cost for the manufacturers/importers/supermarkets because it eliminates otherwise costly alternatives.
Methyl bromide, for example, is not 100% effective against insect eggs and larvae, particularly if they are buried inside the fruit or seed. Storage pests such as beetles and weevils are extremely difficult to control and often need a combination of methods such as heat treatment and fumigation. For herbs and spices, irradiation can be used to control pathogens such as salmonella and E. coli. No other method is as effective. But because consumers in NZ are against it, we have to use methods such as steam sterilisation and heat treatment, which impact on the flavour and quality of the product. Consumers sometimes do not understand the amount of work MPI and the importers have to do to make sure foreign organisms do not get into the country. All it will take is a slack importer, a missed check, or an incomplete fumigation. What of the products that have to be destroyed due to microorganism contamination, or spoilage? If they had been irradiated, this wastage wouldn’t happen.
We don’t need irradiation since we can just buy local products
Unfortunately NZ is a small country and we have limited produce. I’m not saying we can’t get by without EVER importing anything, but it seems to me that these people don’t realise just what the consequences are. Sure, we don’t have to import apples, or nectarines, but what about the tropical fruits not grown locally? Or spices? Let’s not eat fresh mango again, or curries, since pepper used to be worth its weight in gold because it’s not grown in Europe. We can’t get away from importing, and by not using irradiation, NZ businesses have to use more costly, and less effective, alternatives, which means all these costs are ultimately passed on to the consumers. I understand people’s concern that this will hurt local producers, but that is a question of economy and has nothing to do with the safety of irradiated food.
Now coming to the question of labelling
Unfortunately, it’s a no-win situation. If we label then consumers will think something is wrong with it, if we don’t label it’s as if we are hiding something. There is simply no way to beat that logic. In my opinion, if we don’t label products which have been heat treated, or fumigated, then we shouldn’t need to label for irradiation. But because consumer backlash is so strong, I wouldn’t want to give haters a chance to play the “Ah ha you are hiding something” or “give me my freedom of choice” card.
I say let’s put irradiated fruits on the shelves and label it as such so I can choose to buy it because it will be cheaper and better!
I think that last point says it all really. As a food scientist, Claire is quite familiar with the topic of food irradiation, and she would choose to preferentially buy irradiated food because she understands the process to be safe, effective, and not detrimental to the food.
If you haven’t already read my post about Osmosis Skincare’s Drinkable Sunscreen, you might want to do that before you read this one. A brief overview is that Osmosis Skincare claims drinking their “harmonized water” prevents sunburn. I complained to the Advertising Standards Authority that this claim wasn’t backed up by evidence; the complaint was upheld, and those products have since been removed from their New Zealand website as a result.
In their response to this complaint (which you can read via the ASA’s website), Osmosis Skincare said that (emphasis mine):
This is a new type of technology being used in this way and Head office can reference the internal research they did showing the product to be effective, but their independent clinical trial isn’t until the 28th of June, whereby they will put 30 people outside for one hour in San Diego, CA at noon supervised by a plastic surgeon.
When reading this, I noticed that the study design seemed to lack a control group (which precludes randomisation and blinding) and that the sample size seemed very small. These are all properties of low-quality science, but with only this brief summary to go by I couldn’t draw any firm conclusions. In any case, as they were making therapeutic claims without any evidence to back them up, the complaint was upheld.
A few days ago, the head office of Osmosis Skincare issued a press release regarding this “independent clinical trial”, which has now been completed. I have to say I wasn’t particularly surprised to read that the trial was actually not independent. The press release introduces it by saying:
Osmosis Pür Medical Skincare executed the [Harmonized Water UV Neutralizer] line’s first clinical trial on June 28, 2014.
When a clinical trial is randomised, that means that participants are allocated into different groups in a way that is determined randomly. These different groups typically consist of an experimental group, which receives the treatment being tested, and a control group, which receives either a placebo or sham version of the treatment or the standard of care against which the experimental treatment is being compared. Randomly allocating participants into these groups helps avoid any systematic differences between the groups that could be a source of bias in the results.
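The allocation step is simple enough to sketch in a few lines of Python (the participant IDs, seed, and group sizes here are illustrative, not anything from the actual trial):

```python
import random

# A minimal sketch of randomised allocation: shuffle the participants,
# then split them evenly into a treatment group and a control group.
# All names and numbers here are made up for illustration.
participants = [f"participant_{i}" for i in range(1, 31)]

rng = random.Random(42)  # fixed seed so the allocation is reproducible
rng.shuffle(participants)

treatment_group = participants[:15]
control_group = participants[15:]

print(len(treatment_group), len(control_group))  # 15 15
```

Because the shuffle is random, any characteristic of the participants (age, skin type, sun tolerance) should end up roughly evenly spread between the two groups, which is exactly the bias-balancing property randomisation is meant to provide.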
Of course, this necessitates multiple groups for participants to be included in. With a sample size of 30, this means each group would only contain around 15 people. However, the press release isn’t done with its surprises yet:
24 patients ranging from 18 to 60 with various ethnic backgrounds and skin types were exposed to one hour of sun to one side of the body between noon and 1pm after ingesting 3ml Osmosis Harmonized Water UV Neutralizer
The press release doesn’t mention 6 other participants in a control group, so it’s not clear whether there was another (smaller) group that just isn’t mentioned here or whether, for some reason, they went with 24 participants instead of the 30 they’d planned on earlier.
The “Summary of Results” says that:
All 24 patients were evaluated before, and immediately after the exposure as well as the following morning. There was no evidence of sunburn on 16 patients, 5 had minor or partial sunburns and 3 had notable sunburns in the study.
This proves UV Neutralizer effectively limited the sun damage for a majority of the users that consumed it.
Hang on a minute – “proves”? I’m not sure how on Earth they’d expect to be able to honestly evaluate the effectiveness of their product without first establishing some baseline to use as a comparison. It reads as though they’ve assumed that, if their product didn’t work, all participants would have received notable sunburns. If they wanted to gather evidence regarding the effectiveness of their product in an intellectually honest way, they’d need to compare it either to a placebo or to the standard treatment, so they could actually see whether it causes different results than nothing at all, or whether it’s as good as the real deal.
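To make the missing-baseline point concrete, here is the kind of comparison a controlled trial would have allowed: a two-sided Fisher exact test on burned/not-burned counts in a treated group versus a control group. The control-group numbers below are wholly invented for illustration – no such group existed in the actual trial:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact test for the 2x2 table [[a, b], [c, d]].

    Sums the probabilities of all tables (with the same margins) that
    are no more likely than the observed one.
    """
    n = a + b + c + d
    row1, col1 = a + b, a + c

    def p_table(x):
        # Hypergeometric probability of x successes in row 1
        return comb(row1, x) * comb(n - row1, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo_x = max(0, col1 - (n - row1))
    hi_x = min(row1, col1)
    return sum(p_table(x) for x in range(lo_x, hi_x + 1)
               if p_table(x) <= p_obs + 1e-12)

# Treated group: 8 of 24 burned (the trial's actual figures).
# Hypothetical control group: 20 of 24 burned (invented for illustration).
p = fisher_exact_two_sided(8, 16, 20, 4)
print(f"p = {p:.4f}")
```

A Fisher exact test suits small samples like this one, where approximate tests can be unreliable. The point isn’t the particular p-value; it’s that without a second group there is no test to run at all, so 16/24 unburned participants is simply an uninterpretable number.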
Instead, though, we get quotes like this:
The definitive results from this trial prove that the scalar wave technology in Harmonized Water works.
If you click on the links in the press release, you’ll probably notice a couple of things right away. First, instead of being published in a peer-reviewed scientific journal, the link to the clinical trial takes you to a folder on “box.com”, a cloud storage website. Second, the PDF containing photos of participants only contains 16 participants, not the 24 we’re told participated in the trial. I’ve no idea why that is the case, or how they determined which 8 participants to exclude.
The paper, entitled “Evaluation of a Novel Form of Sun Protection”, seems laid out pretty much as one would expect from a real clinical trial, published in a peer-reviewed scientific journal. Its first author is Paul Ver Hoeve, the doctor that supervised the experiment, and the second author is Ben Johnson, the founder and CEO of Osmosis Skincare (although that conflict of interest isn’t stated in the paper itself). I’m really wondering why this was ever described as “independent”. Hopefully this was just a piece of honest confusion on the part of whoever was liaising with the ASA on behalf of Osmosis Skincare NZ.
Unsurprisingly, although the authors included 8 distinct references (out of 11 in total) for the claim that topical sunscreens can cause inflammation, this claim went unreferenced:
Upon ingestion of 2-3 ml of the [harmonized] water, the scalar waves reportedly work their way through the molecules of water in the body until they reach the water in the dermis. This process has been shown to take an hour on an empty stomach, 90 minutes if any food is present in the stomach.
Paul Ver Hoeve, Ben Johnson – Evaluation of a Novel Form of Sun Protection
The “Subjects and Methods” section starts with a very concerning sentence. As far as I can tell, this is the entire basis for describing the study as “randomized”:
In this study, 24 patients were randomly selected as test subjects with no consideration for their natural skin tone.
Paul Ver Hoeve, Ben Johnson – Evaluation of a Novel Form of Sun Protection
As I mentioned earlier, when a clinical trial is described as “randomised” that means participants are randomly allocated to different treatment groups. It absolutely does not mean “patients were randomly selected as test subjects”. I’m not sure what that even means. It sounds like it’s referring to their recruitment method, but no more detail is given so I can’t really tell what this is supposed to say about the study design.
In my opinion, using this as justification for describing this experiment as a “randomized clinical trial” is very misleading.
The report goes on to state that:
The decision was made to not do a double-blind test for this application because of the ethical implications of knowingly causing a sunburn in many people.
Paul Ver Hoeve, Ben Johnson – Evaluation of a Novel Form of Sun Protection
Despite this apparent ethical concern, I can see no indication of the trial being approved by an Institutional Review Board (IRB). I believe IRB approval is required of all human subject research in the USA that is publicly funded, but privately funded research like this doesn’t have the same requirement. I’m not sure how it would affect a trial’s chances of being published in a peer-reviewed journal, perhaps someone who knows more could weigh in via the comments.
Performing “a double-blind test” in this case would mean giving some participants plain water and then leaving them to burn in the sun for an hour, and they’re right to say that’s pretty obviously unethical. The fact that they realised this, even though exposing every participant to an hour of midday sun is exactly what they did in their trial, should have been a pretty big red flag that they could use some ethical oversight.
They also don’t mention what seems to me an obviously more ethical and rigorous way to conduct the trial. Even if they didn’t manage to blind the researchers, it would have been better for everyone if the experimental group had been compared to a control group that applied topical sunscreen instead. This would have allowed them to run a “randomized clinical trial” that is actually randomised, as well as giving them a baseline to compare their results against, so they might have had a chance to draw some useful conclusions.
The actual results of the experiment were that:
There was no evidence of a sunburn on 16 patients, 5 had minor or partial sunburns and 3 had significant sunburns in the study.
Paul Ver Hoeve, Ben Johnson – Evaluation of a Novel Form of Sun Protection
Now I don’t know about you, but 1/3 of participants getting sunburned doesn’t exactly sound like a rousing success to me. They went on to try to justify these failures by saying:
All of the patients who burned said they would not normally lay [sic] out in the sun for one hour. Many of them said they burn with the use of other sunscreens as well.
Paul Ver Hoeve, Ben Johnson – Evaluation of a Novel Form of Sun Protection
The report gives no indication that the other 16 participants were asked the same questions, so there’s no way of telling if this could have contributed to the results, let alone accounted for them. As far as I know, this hasn’t prompted Osmosis Skincare to put a warning label on their products that it’s not effective for people that don’t usually expose themselves to the sun very much. It also doesn’t stop the authors from putting the results down to these answers entirely:
While the results were not 100%, the authors believe this was due solely to the excessive amount of sun they received to their relatively virgin skin and their overall health.
Paul Ver Hoeve, Ben Johnson – Evaluation of a Novel Form of Sun Protection
This trial had a tiny sample size, and was uncontrolled (therefore also non-randomised and unblinded), as well as being industry-funded and co-authored by the founder of the company that makes and sells the product. And on top of all that it didn’t even seem to have had particularly positive results.
If there’s any conclusion that can be drawn from this study, it’s that Osmosis Skincare is willing to do bad science and use its mediocre results to promote their products. Considering they were already making the same therapeutic claims before this experiment, I can’t say I find that surprising.
If Osmosis Skincare NZ stands by this research, and considers it rigorous enough to justify the sort of claims they were previously making about these products without substantiation, then they should appeal the ASA’s decision to uphold a complaint against them. I’d certainly like to see them try.
2 days ago, the Advertising Standards Authority published 4 more decisions regarding complaints I’d made. Each of these decisions was focused on a different type of product, but they were all therapeutic products. The decisions released were:
Punga Tails is a New Zealand business that, as far as I can tell, is owned and operated by naturopath Lydia Dorotich. It sells products for infants and has a focus on therapeutic products, including Baltic amber teething necklaces.
Mrs Dorotich has made various public statements claiming that Baltic amber can relieve teething, and that she personally recommends them. For example, from one of her listings on Trade Me:
As a qualified Naturopath and Medical herbalist (see my profile), I highly recommend the use of Baltic Amber for teething babies. Many parents have found that by wearing a Baltic Amber teething necklace the symptoms of teething have reduced.
On a Grabone deal some months ago, a customer stated that she was “a little unsure what these are for”. Given that Grabone generally tries very hard to avoid therapeutic claims and had not mentioned any in this deal (although the listing was for “Authentic Baltic Amber Teething Beads”), that confusion can be understood. In response, Lydia posted this:
Amber teething necklaces are to be worn against the skin. When amber is worn against the skin the benefits from the amber are absorbed into the skin and help to soothe the pain and inflammation caused by teething. They reduce drooling, red cheeks, nappy rash, swollen gums, low-grade fevers and sleeping problems associated with teething.
I have already thoroughly dealt with claims such as these in my previous post on amber teething necklaces. If you’re interested in why claims such as these are completely implausible, have a read of that post. Of course, even if these claims were plausible, it would still not be reasonable to believe them given that they are entirely unsupported by evidence.
Although both of these advertisements were in blatant violation of the Advertising Standards Authority’s Therapeutic Products Advertising Code, in that they made unsubstantiated therapeutic claims, they are not the advertisements I have complained about. Instead, I complained about the advertisements on the Punga Tails website itself.
There are various advertisements on the Punga Tails website, and they all link back to the Baltic amber FAQs page. It is on this page that most of the therapeutic claims were made. Here are some of the claims from that page that I highlighted in my complaint:
“The therapeutic effects of Baltic amber come from the succinic acid contained in it.”
“Baltic amber warms against the skin, releasing it’s therapeutic properties safely and naturally.”
“The therapeutic properties of Baltic amber include analgesic, calmative, anti-inflammatory, antispasmodic, expectorant, and febrifuge (reduces fever).”
“[Baltic amber teething necklaces] can boost the immune system and ease many ailments such as eczema, fatigue, fibromyalgia, carpal tunnel syndrome, migraines, psoriasis, menstrual cramping, pain, all types of arthritis, reduces stress, anxiety and depression.”
“Baltic amber is a natural analgesic so is ideal for pain relief with no side effects!”
“[Hazelwood] will help with the teething just as well.”
A couple of other pages on the site also made therapeutic claims regarding these products, such as:
Baltic amber is a natural way to reduce pain & inflammation WITH NO SIDE EFFECTS!
In response to my complaint, Punga Tails changed a lot of the content on their advertisements for Baltic amber teething necklaces. All of the claims that were not on the FAQs page seem to have been removed, and the FAQs page itself had quite a content overhaul. In light of this, the chairman of the ASA decided that the complaint should be considered settled.
When a complaint is settled that means the chairman has decided that as a result of the advertiser’s self-regulatory action “it would serve no further purpose to place the matter before the Complaints Board.”
Disappointingly, the FAQs page is still misleading on the subject of amber teething necklaces. The therapeutic claims it still contains are no longer as explicit, but they are still clearly there. They have generally been changed to claims about what is commonly believed, and references to its “effectiveness” remain. For example:
Succinic acid is the component of amber that is believed to contribute to the beneficial effects on teething.
This could explain why some amber teething necklaces are less effective.
Looking on the bright side for once, in the remnants of the FAQs page there is still one small part that I am mostly happy to see. One small piece of grudging honesty:
Because Baltic amber teething necklaces have not been scientifically proven we cannot make any claims as to their effectiveness.
A friend on Facebook alerted me to this one. 6Shooter is a deals website, and they’d posted a deal for a “Health Nano Quantum Energy Bracelet/ Wristband”. Given the name of this product, you might not be surprised to hear that it can basically turn you into a superhero. Here are some of the unsubstantiated therapeutic claims I listed in my complaint:
“It will transmit nutrients and oxygen to cell and expel toxin in our body,people wont have feel cold with hand and feet any more”
“It will provide energy to blood corpuscle and lower viscosity,then reduce the chance to get cardiovascular disease,heart disease and wind-stroke”
“Strengthen human body BIO energy field to prevent harmful electromagnetic wave.”
“Protect us from electromagnetic waves from computer, mobile phone, electrical appliance, telecommunications and so on,recover our body with balance and coordination.”
“Stabilize oxygen supply in blood, activate blood corpuscle”
“Provide relief from allergies and respiratory related illnesses.”
“Normalize hormonal imbalances.”
I also brought up the fact that the advertisement misused a lot of scientific terms. This was relevant to my complaint as the Therapeutic Products Advertising Code requires that:
Scientific terminology must be appropriate, clearly communicated and able to be readily understood by the audience to whom it is directed.
I explained that the product name misused the terms “nano”, “quantum”, and “energy”. It seems quite clear that whoever is trying to sell these bracelets has simply put some sciencey-sounding words in the name to help convince their target audience: innocent people who simply don’t know any better.
The advertisement also referred to a “human body BIO energy field”, yet no such thing has ever been detected. Finally, I referred to the following as “an example of gratuitous use of pseudoscience”:
Negative ion is the basic element to maintain good health.It can neutralize oxidized substance,such as cells.So the cells are revived and improved the immunity of human body.
I also mentioned that the claim that the product would offer protection from “electromagnetic waves from computer, mobile phone, electrical appliance, telecommunications and so on” constituted playing on consumers’ fear without justifiable reason, since there is no evidence to suggest that such electromagnetic radiation is harmful to humans.
This complaint was settled, after the advertiser responded by permanently removing the product from their stock. They also stated that they never intended to mislead consumers.
I think this case is a good example of why it is important to support science education and to point out pseudoscience for what it is. There are many products like these which rely on people’s ignorance, convincing them by sounding like science without having any of the actual substance. If more people can be taught how to tell the difference between science and pseudoscience, scammers like the people who make and sell these bracelets will have less chance of success.
After submitting my complaint about their amber teething necklaces, I found this product as well on the Punga Tails website. It’s basically smelly playdough, which is claimed to provide certain specific health benefits via aromatherapy and colour therapy. I submitted a complaint about this not so much because I was worried about health fraud, since this product appears to be harmless in that aspect, but because I also feel committed to fighting pseudoscience.
Similar to their amber teething necklace advertisements, the majority of claims here were on an FAQs page. The misleading information mostly related to aromatherapy and colour therapy in general, rather than being specific to the products being advertised, although there were a few specific claims as well. Here are a couple of the most egregious pieces of pseudoscience that were on that page:
“Colour therapy is a holistic and non-intrusive form of healing, which introduces the optimum balance of colour energies into the human organism in order to promote harmony between the body, mind and spirit.”
“If our energy centres (Chakras) become blocked or depleted, then our body cannot function properly and this, in turn, can lead to a variety of problems.”
The products also used to have specific indications. Bizarrely, even though they appeared to be marketed toward infants (including being in the “Babies Natural Care” section), one of the products was given the following indication:
Helps you quit smoking
This complaint was settled, after the advertiser removed most of the pseudoscience and therapeutic claims from the page. I was happy to see the introduction of the following disclaimers:
While Aromatherapy is not scientifically proven…
Colour therapy is not a scientifically proven therapy.
Recently, I wrote about 3 complaints against Innate Health. One of these complaints, number 13/011, was about an advertisement in Coffee News for an infrared sauna that made various unsubstantiated therapeutic claims, and bizarrely stated that the product could:
Activate every cell in your body to increase your sense of well-being
The complaint was originally upheld on the 18th of March. On the 9th of May, I received a notice from the ASA saying that an appeal submitted against this decision had been accepted. This doesn’t mean the appeal had succeeded; it just means that the appeal would be heard by the complaints board, which would then decide whether or not its decision should be changed. I was contacted because, as the complainant, I was to be given an opportunity to respond to the appeal.
The appeal claimed to present new evidence, and was basically just one really long citation of a bad review. I got the impression that Barbara Good Hammond, owner of Innate Health, came across this article some time after the complaint was upheld and thought she saw an opportunity to have the decision changed.
The article in question is a review published in Alternative Medicine Review in 2011, entitled Sauna as a Valuable Clinical Tool for Cardiovascular, Autoimmune, Toxicant-induced and other Chronic Health Problems. It was written by a naturopath called Walter Crinnion.
The article was a (non-systematic) review of the evidence regarding saunas as a therapeutic intervention. It seemed to rely very heavily on pilot studies and unpublished research, and seemed to be a rather unreliable source.
In the original complaint, the complaints board decided that “Activate every cell in your body” could be considered puffery. This means that it is obviously meant to be a ridiculous exaggeration and would not be taken seriously by anyone, so does not require substantiation. As part of the appeal, the advertiser disagreed with this analysis, claiming that the statement is not puffery and could be substantiated. They then proceeded to attempt to substantiate the claim:
With the whole body being effected [sic] during therapy, circulation is enhanced to every cell. The modality of Far Infrared sauna therapy has been substantiated world wide [sic] for improving cardia-vascular [sic] function, and with the heart being directly responsible for pumping and circulating blood to every cell in the body (as reported in Anatomy & Physiology books and being general knowledge) this would have a direct impact to increase the bodies [sic] sense of well-being, as proven and shown in published clinical studies worldwide.
She then went on to quote a paragraph from Crinnion’s review that seemed to be pretty much irrelevant to the claim, then declared that the statement “is legitimate and should not be considered puffery”.
This appears to be the one thing on which both Ms Hammond and I can agree: that statement should not be considered puffery. Bizarrely, even though the advertiser specifically argued that the statement is not puffery, the complaints board reiterated their previous decision, saying that it should be considered puffery and therefore doesn’t need to be substantiated.
The rest of the text of the appeal is fairly benign and unconvincing, but there are 38 references listed at the end of it. At first, I thought that seemed pretty intimidating. However, after a minute or so I noticed that they were copied and pasted directly from the review, including spelling mistakes and formatting errors. Worse than that, many of them were duplicates, and there were actually only 25 unique references.
I got the distinct impression that Barbara Good Hammond didn’t even attempt to find these references to look at them herself. I’m not sure how else I can explain the fact that she cited the same piece of unpublished research 3 separate times. Nonetheless, with the help of a contact at the University of Auckland, I was able to get my hands on the full text of 12 of the references, and the abstracts of a further 6 references.
I went through each reference one by one, and found them all to be either irrelevant or inadequate substantiation for the claims made in Innate Health’s advertisement. In their decision, the complaints board seemed to agree with me, stating that:
Turning to the Advertiser’s evidence, the Complaints Board considered the Advertiser had not adequately substantiated the claims. It noted that one research authority was a naturopath, not a doctor or scientist, and the saunas about which the research was done were not infra red saunas but Finnish saunas. While the Complaints Board noted the research said there may be some merit in using Finnish saunas, this did not reach the threshold to validate the very strong claims in the advertisement, particular [sic] as that research discussed a different type of sauna.
About a month ago, I ran into an excellent example of the importance of scepticism. A business card for a local gym was dropped off at my workplace, and they had a “Vibration Training” section on their website. Of course, my interests being as they are, I found myself entirely unable to resist anything involving the word “vibration”, so I took a look.
This page made 2 referenced claims:
the benefits have been well proven to match and surpass conventional forms of training – especially when time is factored in.*
*Roelants, Delecluse, Goris, Veshueren, 2004
A German skin clinic noted that the appearance of cellulite reduced on average 25.68%** with vibration training alone. When combined with conventional cardio exercise this percentage rose to 32.3 % (with a considerable increase in the total time spent exercising).
** Sanderm, 2003
The second claim, in particular, seemed strange to me. I had no idea how cellulite could be measured in a quantifiable way, but even allowing for my lack of knowledge here, the figure of 25.68% seemed far too specific. Something smelled fishy.
The next step, of course, was to look for the references themselves. Unfortunately, no links were provided on that page. Frustratingly, both references were also misspelled. The name of the first reference’s last author is actually Verschueren, not Veshueren, and the German skin clinic is called Sanaderm, not Sanderm (as far as I can tell, its fully qualified English name is the SANADERM Professional Clinic for Skin Disease and Allergology).
The first study wasn’t too hard to find, being available on PubMed and having been published in the International Journal of Sports Medicine. Here’s a link to the paper – Roelants et al.
The Sanaderm study, however, didn’t seem to be published anywhere. Eventually I managed to find what appeared to be the study in its original German, on the website of another gym offering vibration training – Anti Cellulite Untersuchung.
Unfortunately, I don’t speak a lick of German, so the only way in which I’ve been able to read the study is via Google Translate (here’s a link to the translated study).
The first thing I looked for was how cellulite was measured. To my disbelief, I found that it was not measured in any quantifiable way. Instead, subjects were sorted into 1 of 3 qualitative categories. Here is the relevant passage from the machine translated study:
Defined for this study stages of cellulite
Cellulite is not normally visible. But if the skin Thigh, buttocks or belly with hands is pressed together, dents appear honeycomb.
Cellulite is not pushing together of the skin standing visible.
Cellulite is not pushing together of the skin lying visible.
The word order of the machine translation is unfortunately less than perfect, but I think the meaning is clear enough.
So now my question was: how on Earth could they possibly get from a qualitative measurement of cellulite to so precise a figure as a 25.68% decrease?
Unfortunately, although their results section seems to include all other measurements for each individual subject, the “Cellulite Degree” categorisation is only given for 3 individuals (selected by an unstated method) from each of the non-control groups.
Interestingly, one of these example subjects is given a “Cellulite Degree” categorisation of 2.5, despite this being an undefined value. I have not been able to find a justification for this datum anywhere in the study.
The appallingly bad method these researchers have used to compare their groups is to assign arbitrary values (1, 2, and 3) to each of the “Cellulite Degree” categories. It seems these values are associated with the category names of “Stage 1”, “Stage 2”, and “Stage 3”. To illustrate just how inappropriate these values are, I’d like to point out that the categories could easily have been named “A”, “B”, and “C”. There seems to be no justification anywhere in the study for the particular values that have been assigned to each category.
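To illustrate just how much hangs on that arbitrary coding, here’s a minimal sketch (with made-up stage data, not the study’s) showing that an equally defensible relabelling of the same categories produces a completely different apparent improvement:

```python
# Hypothetical "Cellulite Degree" stages for a small group at the first
# and final tests (made-up data, purely for illustration).
before = [3, 2, 2, 1]
after = [2, 2, 1, 1]

def pct_improvement(before, after, coding):
    """Percentage decrease in the summed scores under a given
    stage -> numeric value coding."""
    start = sum(coding[stage] for stage in before)
    end = sum(coding[stage] for stage in after)
    return (start - end) / start * 100

# The study's coding: Stage 1 -> 1, Stage 2 -> 2, Stage 3 -> 3
print(pct_improvement(before, after, {1: 1, 2: 2, 3: 3}))  # 25.0
# An equally defensible coding: Stage 1 -> 0, Stage 2 -> 1, Stage 3 -> 2
print(pct_improvement(before, after, {1: 0, 2: 1, 3: 2}))  # 50.0
```

The categories themselves didn’t change; only their labels did, yet the “improvement” doubled.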
The researchers then added up these values for an entire group, and compared the totals of the first and final tests (omitting the results of the intermediate test that took place halfway through the study).
The sum of these values for the first test of the group that only took part in exercise on the Power Plate vibration machine (Group 1) was 46.50 (why they felt the need to record the result to 2 d.p. is entirely beyond me, and it is not due to the machine translation) and the final result was 37.00. Using these values, which are both almost entirely arbitrary, they seem to have performed the following calculation: (46.50 − 37.00) ÷ 37.00 ≈ 25.68%.
Note that this is absolutely not the calculation that should have been done. By this same calculation, if I start with 1 apple and eat half of it, it has decreased by 100%.
The other calculation, done for Group 2 (the group that participated in a cardio workout routine after the same vibration exercise as Group 1), appears to be of the same form, with the change divided by the final total rather than the initial one.
The values they would have obtained from the correct calculations are not 25.68% and 32.30%, but 20.43% and 24.42% – each massively lower. Recall, too, that even these percentages rest on the unjustified values assigned to each qualitative category.
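The difference between the two calculations can be sketched with the Group 1 totals quoted above:

```python
def apparent_pct_change(start, end):
    # The calculation the study appears to have used:
    # the change divided by the FINAL value.
    return (start - end) / end * 100

def correct_pct_change(start, end):
    # The change divided by the INITIAL value.
    return (start - end) / start * 100

# Group 1's summed scores: 46.50 at the first test, 37.00 at the final test.
print(round(apparent_pct_change(46.50, 37.00), 2))  # 25.68 -- the advertised figure
print(round(correct_pct_change(46.50, 37.00), 2))   # 20.43

# The apple example: eat half an apple and it has "decreased by 100%".
print(apparent_pct_change(1, 0.5))  # 100.0
```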
Every single percentage change seems to have been calculated in the same way, in every case producing a result that is wrong and overstated. They also appear to have truncated results instead of rounding them, although the effect of this mistake is to slightly decrease results (at most by 0.01%, e.g. 1.22% instead of 1.23%).
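For illustration, using a made-up raw value, truncating rather than rounding looks like this:

```python
import math

def truncate_to(x, decimals=2):
    # Drop everything past the given decimal place (no rounding),
    # which is what the study's figures suggest was done.
    factor = 10 ** decimals
    return math.trunc(x * factor) / factor

raw = 1.2299  # hypothetical raw percentage
print(truncate_to(raw))  # 1.22
print(round(raw, 2))     # 1.23
```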
Aside from all this, it’s also worth noting that a control group of 5 is abysmally small. On top of that, the average initial cellulite categorisation (which I realise is arbitrary, but it’s the only indication available to me of each group’s cellulite) for the control group is lower than for the other 2 groups.
The average initial value for the control group is 1.5. My first calculations for the average initial values for Groups 1 and 2 resulted in 1.94 and 1.39, respectively, but then I realised that the results tables listed “Number of volunteers” as though no participants had dropped out.
From earlier in the results section I could find that 1 participant from Group 1 and 6 participants from Group 2 dropped out. Taking these data into account, the actual average initial values for Groups 1 and 2 were 2.02 and 1.72, respectively.
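As a sketch of that correction for Group 1 (the volunteer counts here are my inference: 24 is the only listed count consistent with the 46.50 sum and 1.94 average, less the one reported dropout):

```python
# Group 1's initial "Cellulite Degree" sum, as given in the study.
group1_sum = 46.50

n_listed = 24            # count as printed in the results tables (inferred)
n_actual = n_listed - 1  # after the one reported dropout

print(round(group1_sum / n_listed, 2))  # 1.94 -- average using the printed count
print(round(group1_sum / n_actual, 2))  # 2.02 -- average using the real count
```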
It’s hard to compare these values due to their arbitrary nature, but I feel it’s worth noting that they are both noticeably above the control group’s average of 1.5.
This study, in particular its value of 25.68%, seems to be quoted practically everywhere the product used (Power Plate) is advertised. Here are some examples I found (this list is far from exhaustive; you’ll find many more by searching for such things as “25.68% vibration”):
A PDF on the website of the Windsor Spine Centre (a chiropractic organisation), that appears to have been created by Power Plate, mentioning “a 25.7% reduction of cellulite” and “a 32.3% reduction of cellulite” – Defeating Cellulite
I’d also like to mention that the participants in this study were all females aged 25-45, yet none of the places where I saw it referenced recognised that its conclusions should not be applied to males or to females outside of that age range.
Also, as far as I can tell from the machine translation, the control group underwent no training, so the difference attributable to the Power Plate vibration equipment cannot be assessed from this study. Instead of comparing exercise to exercise+vibration, they have compared no exercise to exercise+vibration.
And, of course, I have not “debunked” or “disproven” the type of training done in this study. All I have done is criticise a particularly bad study.
Hopefully the title and contents of this post are enough to make the conclusions that you should draw quite clear. Just in case they’re not, though, let me state them explicitly:
Don’t Take Their Word for It!
You should not trust others to tell good science from bad. You should especially not trust people who are trying to sell you something.
Remember the importance of scepticism, and remember what it means to be sceptical. Ask questions. Investigate. Criticise. Don’t take their word for it.