Honest Universe

Superstition, pseudoscience, and scepticism



As I mentioned in an earlier post, after looking up at the night sky through binoculars for the first time over my summer holiday, I decided to buy a telescope this year. On the 27th of January, I went to see a show at the local observatory, Stardome, and ended up talking to one of the staff about the telescopes they had on sale. I came home the excited new owner of a “Celestron Powerseeker 114EQ”.

My new telescope


It’s a “Newtonian” telescope, also known as a “reflector”. It uses a mirror to gather light, as opposed to a “refractor”, which uses a lens. The light comes in the front of my telescope and hits a concave mirror 114 mm in diameter at the other end of the tube, where it is bounced back up to a flat mirror near the opening that reflects the light out the side into the eyepiece. I have 3 eyepieces that give 45x, 90x, and 100x magnification.
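Incidentally, those magnification figures come from a simple formula: the telescope’s focal length divided by the eyepiece’s focal length. Here’s a minimal sketch, assuming the 114EQ’s roughly 900 mm focal length and eyepiece focal lengths of 20 mm, 10 mm, and 9 mm (the values that would produce the magnifications above; check the markings on your own eyepieces):

```python
def magnification(scope_focal_mm, eyepiece_focal_mm):
    """Magnification = telescope focal length / eyepiece focal length."""
    return scope_focal_mm / eyepiece_focal_mm

SCOPE_FOCAL_MM = 900  # assumed focal length of the PowerSeeker 114EQ

# Assumed eyepiece focal lengths consistent with the magnifications above
for eyepiece_mm in (20, 10, 9):
    print(f"{eyepiece_mm} mm eyepiece: {magnification(SCOPE_FOCAL_MM, eyepiece_mm):.0f}x")
```

This is also why a shorter focal length eyepiece gives higher magnification, which can seem a little backwards at first.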

My first target was Jupiter, which I’d got a brief glimpse of from one of Stardome’s much more expensive telescopes after the show I saw. When viewing it from home, I was thrilled to be able to make out its 4 Galilean moons, and tried taking a photo. It turns out, as you might be able to guess, that holding an iPhone 4 camera up against the eyepiece of a telescope in the dark, then holding it steady and pressing the “take a photo” button without bumping the phone, is actually pretty hard. I got very lucky though, and the first photograph I took clearly showed an overexposed Jupiter and its 4 largest moons:

Jupiter and the 4 Galilean moons. In no particular order: Io, Europa, Ganymede, and Callisto


Some time later, I was also lucky enough to take a recognisable photo of Saturn using the same technique:

Saturn. It appears on its side because of the angle I was viewing it from. Click through to see the full image.


That picture was taken with the highest magnification eyepiece, which has a lens only about 6 mm in diameter. It was really difficult to hold my phone steady for this, even with the trick I’d learned of using the iPhone headphones’ volume buttons as a remote for the camera app. After this, I decided I should look to see if I could buy an adapter to fit my iPhone directly onto my telescope, but while searching for one I found an article about how to make your own adapter. I didn’t follow the steps in that article, but I did decide to give it a shot. I found a piece of plastic, an old cover for part of a swimming pool pump, that fit perfectly over my telescope’s eyepiece, and put it together with a bunch of foamboard and glue to get the final product. Here are a few pictures of the process:

The first layer


The plastic backing


The second layer


The 2 pieces combined


Trying it out. It was a bit heavy at this stage, and overbalanced my telescope


Cutting it down to size


All trimmed down. After this I sanded the edges and it was good to go


Using this new adapter and some astrophotography image processing software called Registax, which lets me combine multiple images or frames of a video into a single clean image, I’ve easily been able to take some clear images of Jupiter, the Moon, and Saturn:

Jupiter, with some of the cloud bands clearly visible. Click to see the full image.


The waxing crescent Moon. Click to see the full image.


Saturn, rings and all. Click to see the full image.

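The stacking that makes these images possible is conceptually simple, even though Registax itself does much more (it aligns frames and weights them by quality). Here’s a toy sketch, not Registax’s actual algorithm, just an illustration with NumPy that averaging many noisy frames of the same scene cuts the noise down:

```python
import numpy as np

rng = np.random.default_rng(0)

# A synthetic "true" scene: a bright planet-like patch on a dark sky
truth = np.zeros((64, 64))
truth[28:36, 28:36] = 1.0

def noisy_frame():
    """One simulated video frame: the true scene plus sensor noise."""
    return truth + rng.normal(0.0, 0.2, truth.shape)

frames = [noisy_frame() for _ in range(50)]
stacked = np.mean(frames, axis=0)  # average the frames pixel by pixel

err_single = np.abs(frames[0] - truth).mean()
err_stacked = np.abs(stacked - truth).mean()
print(err_stacked < err_single)  # True: stacking beats any single frame
```

Averaging N frames reduces random noise by roughly a factor of √N, which is why even a shaky phone video can yield a clean final image.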

Registax also allows for a bit of processing to remove noise and sharpen the image. I’m not sure what I think of this yet, as I’m pretty much flailing blindly and, to be honest, it feels a bit like cheating, but here’s what came out the other end when I applied some of its filters to that Saturn image:

Processed image of Saturn, with noise removed and sharpening applied. Click to see the full image.


I also took some photos of Mars, but they’re all horribly overexposed and not really worth looking at. I’ve been having trouble seeing anything aside from just a circle of light when it comes to Mars. It’s tough using an iPhone 4 as a camera. It’s not possible to manually change settings like focus or exposure, and in order to take photos and videos of Jupiter that weren’t overexposed I had to lock the camera’s settings on the brightest part of the Moon (done not by tapping to focus like usual but by holding my finger on the spot for a second or so). Luckily Jupiter and the Moon are quite close in the sky at the moment so that wasn’t too much effort, but moving the telescope back and forth between the Moon and Mars was quite annoying. I’m sure there’s a better way that I’m yet to find. It possibly involves buying a decent camera.

One other thing I was finally able to do last night was resolve the Alpha Centauri system (the outermost of the 2 “pointers” that show the way to the Southern Cross) as a binary star system. I wasn’t able to photograph it, though; the stars still appear very close together, and my phone overexposed them to look like a single star. I guess that’s a challenge for another night.

I’m also quite looking forward to the upcoming total lunar eclipse on the 15th of April. Although I’ve read that the Moon is meant to turn a dark red during the totality of the eclipse, I’m not really sure what to expect when it comes to viewing or photographing it, which I find pretty exciting.

Written by Mark Hanna

07/04/2014 at 9:45 am

Poor Science Reporting on the Paleo Diet


This morning I saw an article in the NZ Herald on the “paleo diet” that rather frustrated me. It seems like a great example of poor science reporting, trying its hardest to turn a study into a story instead of doing any actual science reporting. The role of a science reporter is not to sensationalise, it’s to accurately report on science, and that includes making the drawbacks of a study clear and not exaggerating the conclusions.

In this case though, it looks like the author chose to omit half the results of the study, presumably so as not to pollute the narrative they had chosen. The take-home message of the article can be found in the first paragraph:

the best way to lose weight is by copying our ancient ancestors, a study suggests.

I’m not even going to get into the problems with characterising the so-called “paleo diet” as “copying our ancient ancestors”; that’s been adequately covered elsewhere. The information used to support this weight-loss conclusion is the study’s finding that:

Women who adopted the so-called Palaeolithic diet lost twice as much weight within six months as those who followed a modern programme based on official health guidelines.

Wow, that sounds impressive. Case closed, right? Except, if you look at the actual study (not open access, unfortunately), which of course is not linked to from the online article, you’ll find another result that is curiously omitted from the Herald article:

Both groups significantly decreased total fat mass at 6 months (−6.5 and −2.6 kg) and 24 months (−4.6 and −2.9 kg), with a more pronounced fat loss in the PD [Paleolithic-type diet] group at 6 months (P<0.001) but not at 24 months (P=0.095).

So there was a statistically significant difference in fat loss after 6 months, as mentioned in the article, but after 24 months there was no statistically significant difference in fat loss between the groups. That is a negative result.

Although there was still an observed difference in fat loss between the groups at 24 months, it wasn’t big enough for the researchers to be reasonably confident that it wasn’t just due to random variation. That’s partly due to the size of the difference observed, and partly because the study was so small: 70 people split into 2 groups is tiny for this kind of study, where a good sample size would be hundreds or even thousands of participants, not just a few dozen. Of course, such large studies are much more difficult and expensive to undertake, so a lot of smaller studies like this do happen. Sample size matters, though – small studies like this are nowhere near as reliable as much larger ones – so it’s important to take it into account when evaluating a study’s conclusions.
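To make that concrete, here’s a rough sketch of how the same observed difference becomes more or less convincing as the groups grow. The 1.7 kg difference matches the 24-month figures above, but the 5 kg within-group standard deviation is purely my own illustrative assumption, not a number from the study:

```python
import math

def z_score(diff, sd, n_per_group):
    """Approximate z statistic for a difference in two group means,
    assuming equal standard deviations in both groups."""
    standard_error = math.sqrt(2 * sd ** 2 / n_per_group)
    return diff / standard_error

small = z_score(1.7, 5.0, 35)   # roughly the study's group size
large = z_score(1.7, 5.0, 350)  # ten times as many participants

print(round(small, 2))  # 1.42, below the usual 1.96 significance threshold
print(round(large, 2))  # 4.5, comfortably significant
```

With the small groups, a 1.7 kg difference is well within the range you’d expect from chance; with ten times the participants, exactly the same difference would be very unlikely to be a fluke.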

The Herald article does mention, way down near the bottom, that all of the participants in the study were obese postmenopausal women. Everywhere else, however, it avoids that caveat and seems to imply that the conclusions should be applicable to everyone, or at least to all women.

It’s also rather frustrating that the article says that the study “found [the "caveman diet"] more effective than some modern diets”, and that this study suggests it is “the best way to lose weight”, even though the study didn’t compare it with “some modern diets”. It compared it with a single other diet, one based on the Nordic Nutritional Recommendations.

If the Herald wants some tips on how to report on science, a great place to start would be to take another look at the science itself. The conclusion in the abstract of the study they’re writing about seems much more appropriate, even if it does seem a bit dismissive of the negative 24 month results:

A PD [Paleolithic-type diet] has greater beneficial effects vs an NNR [Nordic Nutritional Recommendations] diet regarding fat mass, abdominal obesity and triglyceride levels in obese postmenopausal women; effects not sustained for anthropometric measurements at 24 months. Adherence to protein intake was poor in the PD group. The long-term consequences of these changes remain to be studied.

Then again, perhaps I should be glad the Herald didn’t reprint the original headline from the Daily Telegraph:

Caveman diet twice as effective as modern diets

I’m not sure I could come up with a more misleading headline if I tried.

Despite the horrific headline, the original article does have a bit more information in its second half from the study’s primary author that was truncated from the Herald’s reprint.

Written by Mark Hanna

03/04/2014 at 12:50 pm

What Does “37% More Powerful” Really Mean?


Yesterday, the Advertising Standards Authority released a decision regarding a TV advertisement for Panadol Extra. The advertisement claims that the product is “37% more powerful than standard paracetamol tablets”. Although this is not the claim that was challenged in the complaint, the advertiser, GlaxoSmithKline (GSK), provided a citation in an attempt to substantiate it.

However, it seems to me that the citation they provided substantiates a different claim. The study in question, Laska et al. 1984, supports the claim that their product is 37% more potent than standard paracetamol tablets, not 37% more effective. As far as I’ve found, in pharmacology, potency refers to the dosage required to achieve a particular effect. In claiming that their product is “37% more powerful”, they didn’t mean that it is able to provide 37% more pain relief, but that you don’t have to take as much of it to get the same amount of pain relief.
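To make the dose-ratio reading concrete, here’s a quick sketch of what a 1.37 potency ratio would imply. This is my own illustrative arithmetic, not anything from GSK’s material:

```python
def equivalent_dose(regular_dose_mg, potency_ratio=1.37):
    """Dose of the more potent product needed to match the analgesic
    effect of a given dose of the regular product."""
    return regular_dose_mg / potency_ratio

# To match two regular 500 mg tablets (1000 mg of paracetamol):
print(round(equivalent_dose(1000)))  # 730, i.e. less drug for the same effect
```

So under this reading, “37% more powerful” is a statement about how much you’d need to take, not about how much relief you’d get.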

In order to convince the Advertising Standards Complaints Board that saying “more powerful” when they meant “more potent” was not misleading, GSK pointed to a 2009 ASA decision in their response to the complaint:

Importantly, the claim ‘37% more powerful than regular paracetamol tablets‘ and the associated graph in question relate to the potency of Panadol Extra compared with regular paracetamol tablets and NOT its efficacy. That is, the reference to potency refers to the ratio of doses required to achieve the same analgesic effect rather than any improved efficacy result.

In October 2009, a complaint was considered by the ASA in relation to the claim that Panadol Extra is ’37% more powerful than regular paracetamol tablets’. The ASA Panel was of the view that this was an accurate description of potency and that it did not communicate efficacy improvements. The Panel was also satisfied that the claim 37% more powerful had been substantiated by the Laska 1984 study (Attachment 2). Accordingly, the Panel determined that the advertised claim was not, directly or by implication, deceiving or misleading consumers (Attachment 3).

Given the historical consideration of this claim by the ASA it is GSK’s view that the claim accurately communicates the potency of Panadol Extra and not the efficacy of this product compared to regular paracetamol tablets.

The complaints board seems to have accepted this argument, as they state in their decision that:

Firstly, the Advertiser addressed the claim “37% more powerful than standard paracetamol tablets” and the Complaints Board noted the percentage was in relation to the potency not the efficacy. It also noted the Advertiser provided robust substantiation to support the factual claim.

Partly as a result of this, the complaints board ruled to Not Uphold the complaint.

However, things aren’t quite that simple. First, the Commercial Approvals Bureau also responded to the complaint, stating that:

The claim of 37% improved efficacy over standard paracetamol is verifiable fact, and the client has sufficient data to substantiate this claim.

(Emphasis mine)

Apparently the Commercial Approvals Bureau was misled by the advertisement, interpreting its claim that the product is more powerful as regarding efficacy, not potency. To tell the truth, when I read the claim I made the same assumption. I was very surprised when GSK defended the claim by essentially saying they meant something else so it was okay, and honestly felt as though I had been misled.

It seems the complaints board have likely been misled as well. When GSK referred to the 2009 decision (09/626), they missed a very important point. That advertisement appeared in a publication specifically for medical professionals, and the complaints board had considered the likely interpretation of “more powerful” in that context. Quoting from their decision:

The Panel was of the view that within this informed environment, there would be a greater awareness and familiarity with analgesics, the difference between analgesic effect and potency, and a level of comfort with references to scientific studies and the capacity and the ability to access these studies, if further clarification was required of the reference to them.

Having made these observations, the Panel was of the view that medical practitioners reading the advertisement would understand the word “STRONGER” in the advertisement to mean potency.

In their response to this recent complaint, it seems GSK may have misled the complaints board when they told them that previous precedent has determined that “more powerful” means “more potent”, as they omitted the important and relevant fact that it was only decided to be the case for advertisements aimed specifically at healthcare professionals, not advertisements aimed at the general public such as this one.

It’s also relevant that, in some of the advertisements complaint 09/626 was about, GSK was making these claims:

Because Panadol Extra is 37% more powerful than regular paracetamol it provides extra pain relief and helps you break through the pain barrier

Panadol Extra…combines paracetamol with caffeine for 37% extra pain relief

That complaint was Upheld (in part) because the ASCB ruled that these claims had not been substantiated and were therefore misleading. Given that GSK has been willing to make this claim explicitly in the past, despite the fact that it seems to have been misleading, it would not surprise me at all if they intend for uninformed consumers to take away the same message from their more recent advertisement.

I also can’t help but wonder if the complaints board actually went through the details of complaint 09/626 when considering complaint 13/585, or if they just took GSK’s word for its contents. Their decision seems to imply the latter, unfortunately.

What do you think about the claim “37% more powerful”? Would you have assumed it meant “37% more pain relief”, or that it means you can take 37% less of the active ingredient in Panadol Extra than regular paracetamol to achieve the same result? Would you have been misled by this advertisement like the Commercial Approvals Bureau seems to have been?

Update 2014/03/27

I hadn’t realised this when I first wrote this article, but it turns out that Panadol and Panadol Extra each contain 500 mg of paracetamol per tablet. So although the main selling point of Panadol Extra seems to be that, because it also contains caffeine, you can take 37% less paracetamol to get the same analgesic effect, the pills themselves don’t actually contain any less paracetamol.

Doesn’t that make the claim that it’s more potent entirely irrelevant? They’re not claiming that it can produce more pain relief at the same dose; they’re claiming that it can produce the same pain relief at a smaller dose. But then they’re not offering a smaller dose.

Maybe they expect you to cut off 37% from each Panadol Extra capsule before taking it. It seems more likely, in my opinion, that they’re just hoping people will misinterpret their claims in their favour, and expect Panadol Extra will provide 37% extra pain relief. You know, like they used to advertise before the ASA found those claims to be misleading.

Written by Mark Hanna

25/02/2014 at 1:16 pm

Telescopes Are for Everyone


UPDATE 2014/01/17 1:54pm NZDT:

The content of the page has now been updated for the better, although its title still says “Telescopes for astronomy or land viewing; a great gift for him”.

Over the summer holidays, for the first time ever I took some binoculars outside on a clear night and looked up at the sky. I had some idea of what I could expect to see, but still the number of stars I could see surprised me. It was marvellous and elating, and settled my resolve to (finally) take up amateur astronomy as a hobby instead of just an idle interest. As a first step, I plan to buy a telescope. Earlier today, I searched online to see what was available and was very disappointed by something that I found.

The New Zealand website telescopes.net.nz sells telescopes, which is all well and good, but they have decided to market them as a “gift for him”. The title of the page is “Telescopes for astronomy or land viewing; a great gift for him.”, and the content advertises them like this:

A telescope makes the perfect Christmas gift for him and we have picked the best brands available in the market today. Whether you want a gift for your Dad, the man in your life or you just want to indulge yourself, you will find it here.

I was thoroughly disappointed and, horribly, not particularly shocked to see such sexism. As a white straight male, I have tremendous privilege; my male privilege would let me simply ignore this if I wanted to, as it’s not an affront to me. However, I follow various fantastic (I cannot stress that enough) female astronomers, astrophysicists, and astronauts through Twitter and blogs, and have long been aware of the existence of sexism in STEM fields. In fields such as astronomy, there is a large gender gap – these fields are dominated (in terms of numbers) by men – and this is only made worse by commonplace sexist attitudes. Even if some of this behaviour is entirely innocent, it still does active harm by excluding women and girls from astronomy. Although my privilege gives me the option to ignore this, I consciously choose to be a feminist and fight these attitudes. Telescopes are for everyone.

I shared this page on Twitter, and the great Katie Mack (@AstroKatie) sent them an email. Seeing this, I also sent an email to telescopes.net.nz, which I include in full below, urging them to change the content of their website and the attitude from which it sprang. If you would like to do the same, you can contact them at staff@telescopes.net.nz. Here is the email I sent them:

Dear Telescopes New Zealand,

I’m interested in amateur astronomy, having for the first time viewed the night sky through binoculars over the summer holidays and marvelled at how many stars I could see. I decided that in the new year, I would like to buy a telescope and take up amateur astronomy as a hobby. I found your website when searching for a telescope online.

However, when I came upon your telescopes page I was very disappointed to see it advertised telescopes as a “gift for him”, casually excluding all the women and girls that are interested in astronomy. Unfortunately, women already face significant sexism in STEM fields, including astronomy. Attitudes such as the one shown on your website only encourage this hostile environment, and harm astronomy as a whole.

I hope that you will revise the content of your website. I’m sure you can see that its current content, however innocent your intentions may have been, contributes to a harmful atmosphere of exclusion. Taking a more positive attitude toward women in astronomy will only benefit yourselves, amateur astronomers, and the field of astronomy itself.

Mark Hanna

I’ve set up a change detection service monitoring the page, and hope to see it change for the better soon. If it doesn’t then, needless to say, I certainly won’t be buying my telescope from telescopes.net.nz.

Written by Mark Hanna

17/01/2014 at 12:35 pm

An Example of Some Advertising Tricks


By definition, when it comes to telling the truth, advertisers have a conflict of interest. They want you to buy whatever they’re advertising, so they’re going to try to show it in its best light. Often, this means using psychological tricks, and these tricks generally work best when consumers aren’t aware of them.

For example, most people are aware that the reason why products cost $39.90 instead of $40.00 is that they sound significantly less expensive, even though the difference is minuscule. Consumers tend to be aware that advertisers are prohibited from “false advertising” – they can’t tell us anything that isn’t true – and that advertising is regulated. This tends to give us a sense of security when it comes to taking advertisers at their word. Unfortunately, there are a lot of ways in which medical advertisers in particular can, and do, take advantage of this.

In today’s edition of my local newspaper, there was a full page ad placed by “The Natural Health Co”. This advertisement contains a lot of these tricks that can be used by medical advertisers to mislead consumers without technically breaking the rules, so I thought I’d use it as a case study to point out some of these tricks.

First things first, here’s the advertisement:


The most common trick is one that I’ve written about before. The industry regulators of medical advertisements in New Zealand draw a distinction between “therapeutic claims” and “health claims”. Although they sound very similar to the consumer, the important difference is that the advertiser is only required to substantiate therapeutic claims. Any health claims they make can be entirely unsubstantiated and, to my knowledge, if they’re false there is no penalty. To quote the guidelines:

Health Claims are defined as claims which support the normal physiological function.

An example of a health claim made in this advertisement is “Supports cardiovascular health”, which appears for a couple of products. As far as I can tell, the reason why advertisers are allowed to make these claims without oversight is that they are not well-defined; claims like that technically just mean the product won’t interfere with normal physiological function, i.e. what would happen anyway in a healthy person.

So, when this advertisement says a product “Supports muscular and nervous system health”, you should interpret it as saying “This product will not interfere with your muscular or nervous system health if you’re already healthy”, and as far as I know they’re not even required to have evidence to support that.

That’s why advertisers can get away with saying that glucosamine and chondroitin supplements work “for maintenance of healthy joints”, despite the fact that statements that these substances have beneficial effects on joint health do not seem to be strongly supported by quality evidence.

Another trick used by these advertisers is making a point of what is contained in the product. This allows the consumer to draw certain conclusions without the advertiser having to suggest them directly. It’s important to remember that advertisers will always put their best foot forward: if there were evidence that a product has a beneficial effect, they would say so instead of only saying “Potent antioxidant”. When all they can tell you is what’s in the bottle, that’s a good hint there’s no evidence it actually does anything.

Most people think they understand what that means (it’s good for you, right?) but, unfortunately, most people also lack the medical expertise required to make good health judgements, and are easily influenced by information like this.

I expect that’s the main reason why health-related testimonials in medical advertisements are prohibited by section 58(1)(c)(iii) of the Medicines Act 1981. Unfortunately, Medsafe seems extremely apathetic when it comes to enforcing this; I’ve contacted them about numerous violations but as far as I can tell they’ve never done anything about them. While it seems intuitive that more information is better, some types of information tend to lead to misinformed health decisions, and testimonials are foremost amongst these.

There are a lot of examples of marketing tricks used in this advertisement. I wouldn’t be surprised if not even a single claim about the therapeutic properties of any of these products is backed up by any evidence at all.

Feel free to point out more instances of marketing tricks in this ad, or mention others that you’ve seen in medical advertisements. Unfortunately, “health claims” are almost everywhere in medical ads in New Zealand. Keep an eye out for the word “supports”, as it’s usually a strong indicator that they’re making a health claim and therefore likely don’t have evidence to support it.

Written by Mark Hanna

11/12/2013 at 9:31 pm

ASA Complaints: Niagara Healthcare


In September, I found an A4 insert in the New Zealand Herald advertising Niagara Healthcare. A big red heading, “Arthritic Relief?”, caught my attention, and when I looked a little closer I found it accompanied by some big red flags. This advertisement for a “FREE TREATMENT” that seemed like it could relieve practically any type of pain, as well as several other ailments, looked a little too good to be true, and experience has taught me that when something looks too good to be true, it probably is.

My first response to this advertisement was to look for any research I could find corroborating its claims. This took me to the Niagara Healthcare website for New Zealand. They appear to be based in Australia, and have a separate but nearly identical website for their New Zealand branch. Their website’s key benefits page, which states that “Much research has been conducted on the physical benefits of Niagara’s Cycloid Vibration Therapy since 1954”, contained a convenient list of therapeutic claims for me to look at:

  • Increase local area blood flow
  • Assist in the reduction of musculoskeletal pain
  • Increase joint mobilisation
  • Reduce excess oedema (swelling) whether the cause is vascular or lymphatic
  • Assist in the treatment of wounds where an improvement in circulation is a factor
  • Assist in the treatment of pressure ulcers where and [sic] improvement in local circulation is a factor

The only study I was able to find (searching Google Scholar and PubMed) with the keywords “Cycloid Vibration Therapy” was a small uncontrolled preliminary study of 21 patients. That is nowhere near enough to substantiate a therapeutic claim. Luckily for me, there were also 4 other studies cited on the webpage.

I was able to find the full text of what I believed may be the first study mentioned. This study appeared to use a Niagara Healthcare product, Lymphease, but it was only a pilot study with a small sample size and no control group, not a clinical trial as claimed on the website, and therefore not rigorous enough to substantiate any therapeutic claims.

Interestingly, although this was not stated on Niagara Healthcare’s website, this study was funded by “Cyprossage Pty Ltd”, which holds the patent for the product used in the study. Both Cyprossage Pty Ltd and Niagara Healthcare are divisions of CT Healthcare Pty Ltd, and they share the same director, Anthony Thompson. Even if everything else in these advertisements checked out, this would violate the ASA’s Therapeutic Products Advertising Code Part B2 R4.3:

Publication of research results in an advertisement must identify the researcher and the financial sponsor of the research.

I was only able to find citations of the second and fourth studies, and only the abstract of the third study. As far as I was able to tell, the second and fourth studies were not clinical trials, and the third study did not adequately account for the placebo effect via its “no treatment” control group. These papers were also published in 1984, 1981, and 1961 respectively. Worryingly, the Australian version of this webpage describes those same studies as “recent”, despite the majority of them having been published years before I was born. If this was Niagara Healthcare putting their best foot forward, it wasn’t very impressive.

I was also able to find that the Advertising Standards Authority in the UK upheld a complaint against Niagara Healthcare in 2005, on the basis that the therapeutic claims they were making were not adequately substantiated. It looked like the evidence behind the advertisement didn’t live up to the claims, which was particularly worrying considering that the print advertisement claimed that the products had been “Medically proven for 60 years”, and had been approved by TAPS. The Therapeutic Advertising Pre-vetting System, TAPS, is a service provided by the Association of New Zealand Advertisers (ANZA) that is intended to help advertisers avoid publishing ads that violate the relevant codes and legislation.

The back of the print advertisement also contained a testimonial. I still don’t understand how a medical advertisement containing a testimonial could have been approved by TAPS, considering that the Medicines Act 1981 Section 58 subclause (1)(c)(iii) effectively prohibits all testimonials in medical advertisements:

no person shall publish, or cause or permit to be published, any medical advertisement that… directly or by implication claims, indicates, or suggests that… a medical device of the kind… advertised… has beneficially affected the health of a particular person or class of persons, whether named or unnamed, and whether real or fictitious, referred to in the advertisement

After finding how problematic these advertisements seemed to be, I laid a complaint with the Advertising Standards Authority. My complaint ended up being treated as two separate complaints: one for the print advertisement and a separate one for the website advertisement. On Friday, the ASA released their decision regarding both of these complaints. They were both upheld, meaning that the ASA has told Niagara Healthcare the advertisements must be removed. As I do with all my complaints, I have set up a monitoring service so I will be notified of any changes to the web advertisement. So far, the only change is that a note that the research they cite was funded by them has been added to their Key Benefits page.

I found the advertiser’s response to my complaints quite interesting and, I think, revealing. To start with, they claim that the printed material was published incorrectly and contained obsolete material. This seems odd to me, considering that the ad had been approved by TAPS, which requires a fee. Saying it contained obsolete material also implies the material was once correct, which certainly does not seem to be the case.

In attempting to substantiate their therapeutic claims, it seems the advertiser provided a clinical evaluation performed by CT Healthcare, which it called “an Australian based manufacturer”. CT Healthcare is the parent company of both Niagara Healthcare and Cyprossage (the company that funded the small trial mentioned on the Niagara website). Here’s what the ASA had to say about that:

The Complaints Board also noted the substantiation provided by the Advertiser which was a “Report Review” on “Vibration Therapy.” It said while the Advertiser provided references on the subject and the claims were of a low level, the Complaints Board were of the view that it did not provide adequate substantiation particularly because the review was not conducted independently.

The advertiser also tried to substantiate their therapeutic claims by providing the ASA with certificates from the Australian Register of Therapeutic Goods (ARTG).

[The Advertising Standards Complaints Board] was of the view that the certificates provided were not categorical evaluations of the product, but rather they confirmed registration of the products.

As well as finding that the therapeutic claims made in their advertisements were not substantiated, the complaints board said that…

The Complaints Board agreed with the Complainant that the lack of the research listed under the heading “Medical Research”, its quality and the fact that some of it had been paid for by the Advertiser was not robust enough to support the statement “much research had been conducted on physical benefits of Niagara’s Cycloid Vibration Therapy since 1954” as the overall consumer takeout of that statement would be this meant 60 years of independent peer-reviewed medical studies which was not the case.

The most interesting part of this whole affair, I think, is the way the advertiser tried to defend their statement that the products have been “Medically proven for 60 years”:

However, to provide clarification regarding the statement on the advertisement Niagara devices have been proven for 60 years, this originates from the basis that CT Healthcare has been involved in medical research relating to the product since 1952.

The complaints board responded to this by stating that the words used in the advertisement simply did not mean what the advertisers claim they meant, and therefore exploited consumers’ lack of knowledge. I think the board’s response was entirely appropriate, and consider such behaviour from a medical advertiser, whom consumers should be able to take at their word, to be utterly reprehensible.

In the end, the complaints board said that both advertisements were in breach of Principles 2 and 3, and Part B2 Requirements 4(a) and 4(b) of the Therapeutic Products Advertising Code. They also said that the website advertisement was in violation of Part B2 Requirement 4(c). Here’s a quick rundown of what those codes are (some paraphrased by me):

Principle 2: Must not be misleading, and claims must be substantiated
Principle 3: Must observe a high standard of social responsibility
Part B2: Refers to advertisements for medical devices targeting consumers
Requirement 4(a): Must not be misleading
Requirement 4(b): Must not abuse trust or exploit lack of knowledge
Requirement 4(c): Must not exploit the superstitious or, without justifiable reason, play on fear or cause distress

You can read the full decisions of the complaints board, including my original complaint and the advertiser’s response, on the ASA’s website.

I’ve also uploaded a scanned copy of the print advertisement that you can look at: Niagara Healthcare Herald Insert

Even though the ASA’s Advertising Code of Ethics Basic Principle 1 and its Therapeutic Products Code Principle 1 both require that “All advertisements must comply with the laws of New Zealand”, the complaints board had this to say about the testimonial in the print advertisement:

The Complaints Board noted that compliance with the laws of New Zealand under Basic Principle 1 under the Code of Ethics and Principle 1 of the Therapeutic Products Advertising Code were also raised in the complaint. While acknowledging they are part of Advertising Code, the Complaints Board agreed that whether or not the advertisements complied with the laws of New Zealand was a matter for the Courts.

I’m of two minds about this. On one hand, I agree that it’s appropriate for the ASA not to overstep their authority, and that the courts are the right place to determine whether the law has been breached. On the other hand, this precedent effectively renders the first principles of most of their codes useless, by placing them outside their own jurisdiction.

If the complaints board is not willing to consider whether or not an advertisement is in breach of New Zealand law, then the advertising codes should be modified to emulate the relevant laws. These include sections 57 and 58 of the Medicines Act 1981, particularly section 58 subclause (1)(c)(iii), which effectively prohibits the use of testimonials in medical advertisements.

This is a step that has been taken by at least one other New Zealand body that is involved in regulating advertising. The New Zealand Chiropractors Board’s Advertising Guideline section 3(f) prohibits the use of testimonials, in accordance with the Medicines Act.

In my opinion, perhaps the most important aspect of this complaint, given that it was upheld, is that the print advertisement had been approved by TAPS. Even though the complaints board found the advertisement full of misleading claims that weren’t backed up by the required evidence, the advertiser was able to convince TAPS to approve it for publication.

Another complaint (not one of mine) about an advertisement approved by TAPS was also recently upheld on the basis that it contained unsubstantiated therapeutic claims: Complaint 13/372 against BioMag.

Written by Mark Hanna

03/12/2013 at 12:15 pm

Why Testimonials Aren’t Enough Part 1


The business of “natural health” rests heavily on testimonials. They are used in advertisements by people selling therapeutic products and services, and you’ll hear them as anecdotes from people you know, telling you what worked for them. Intuitively, it makes sense to trust this sort of experience, but unfortunately testimonials and personal experience are not good ways of evaluating a treatment option.

I don’t expect you to take my word for this. Maybe you were told by a doctor that you’d need an operation, then you had reiki therapy and after that your doctor said the problem was no longer there. Perhaps your first child had terrible teething troubles, but on your second child you used a Baltic amber teething necklace and they didn’t have the same problems, but you swear if you forget to put it on them they become agitated. Or maybe you’ve been spraying a colloidal silver solution onto the back of your throat whenever you feel a cold coming on and you haven’t been sick in years. Who am I to doubt or deny your experience?

These are all testimonials that I have heard personally, not from advertisements but from individual people relating their own experiences to me. But still, I remain unconvinced that reiki is any more than an exotic twist on faith healing (that is just as ineffective), that Baltic amber teething necklaces are anything but expensive yet inert jewellery, and that colloidal silver is much good for anything other than causing argyria.

In this series of blog posts, I intend to explain why I don’t consider anecdotes like these useful in drawing conclusions about therapeutic interventions. But first, I’d like to point out that I am not trying to be dismissive of personal experience. I don’t think anecdotes are all lies, or anything of that nature, and personal experience can certainly be useful in drawing all sorts of conclusions in everyday life. The only conclusion I am arguing for here is that anecdotes are not useful for evaluating the efficacy of therapeutic interventions.

In searching for any truth, we have to be very careful not to jump to conclusions. There will always be a vast number of potential explanations for any observation, and if we really care about the truth then we can’t just pick the explanation we like the most, or even the one we think is most likely. Some possible explanations must be set aside from the start if they’re impossible to test; the explanations that can be tested are known as hypotheses. If we want to determine whether or not a particular hypothesis is correct, we should design and carry out a test that will rule out every other potential cause of our observation.

Note that this method of testing does not prove anything. Instead, it focuses on ruling out everything else, until only one idea is left standing. The key to designing a good test of an intervention is to make sure anything you observe is as unlikely as possible to be due to anything other than the intervention. This means that, in order to design a good test of an intervention, it is important to have a good understanding of what these other potential causes are.

Part 1: After This, Therefore Because of This

There’s a formal logical fallacy usually known by its Latin name, post hoc ergo propter hoc, which translates to “After this, therefore because of this”. The fallacy is of the form:

  1. A happened, then B happened
  2. Therefore A caused B

Of course, the reason why this is a logical fallacy is that it’s entirely possible that something other than A was the cause of B. This doesn’t mean that the conclusion is false, but it does mean that it is not necessarily true.

Anecdotes take the same form as the above example: “I tried treatment X and I got better”. Although experiences like this can result in strong beliefs, the fact that the improvement happened after the treatment does not mean the treatment necessarily helped at all. Instead, the improvement could have been due to a few different things.

Self-Limiting Conditions

Many common health conditions are self-limiting. This means that, left to their own devices, they will almost always go away in time. The common cold is one example: unless you are seriously immunocompromised, if you catch a cold you will be fine again after a few days. Other examples include the flu, teething, colic, and acne. Pretty much everything that isn’t a chronic illness and won’t kill you is self-limiting.

Regression to the Mean

Even when nothing external seems to be changing, your health is not constant. Instead, it fluctuates over time around a baseline level of health that itself changes over longer amounts of time. This baseline is basically your average health over a certain period of time; the mean. The tendency for your wellbeing to return to this mean after a fluctuation is known as regression to the mean.

This is a picture of 300 random data points generated in Microsoft Excel. Starting with 0, I added a random number between -0.5 and 0.5 to the running total 310 times, and then took a 10 point running average to smooth the resulting curve.

Regression to the mean

As you can see, even though the changes are all random, trends do form and the data oscillate around a particular mean. Especially over longer periods of time, the data will tend to return to that mean.

I’ve indicated the 2 most prominent downward trends with arrows. As you might imagine, such low points in a person’s health could motivate them to seek a therapeutic intervention to reverse the trend. After the intervention, they’ll likely start to feel better, but as the graph shows, such variations can happen randomly, and it can be very hard to say whether an improvement was caused by anything in particular or was just the result of regression to the mean.
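The procedure behind that graph can be sketched in a few lines of Python rather than Excel. This is a minimal reproduction of the steps described above, not the original spreadsheet:

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

# Start at 0 and add a random number between -0.5 and 0.5 to a
# running total 310 times.
total = 0.0
walk = []
for _ in range(310):
    total += random.uniform(-0.5, 0.5)
    walk.append(total)

# Smooth with a 10-point running average; 310 raw values leave
# 301 smoothed points, the roughly 300 shown in the graph.
smoothed = [sum(walk[i:i + 10]) / 10 for i in range(len(walk) - 9)]
```

Plotting `smoothed` produces the same kind of meandering curve: every step is pure chance, yet apparent trends emerge.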

For example, I get frequent headaches. However, the frequency and intensity of those headaches varies from day to day, just due to random chance. I’d be more likely to decide to seek a therapeutic intervention on a particularly bad day. However, considering that my wellbeing is fluctuating around a mean value I’d expect my headaches to return to their “normal” level, unless of course something has changed to make them worse on average. If I take an intervention and then the next day my headaches are better, how can I know whether it’s due to the intervention or regression to the mean?
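This is easy to demonstrate with a small simulation. The numbers below are entirely hypothetical (a made-up severity scale and baseline), and the simulated “intervention” does absolutely nothing, yet the day after every bad day still looks like an improvement:

```python
import numpy as np

rng = np.random.default_rng(42)  # fixed seed for reproducibility

# Hypothetical model: daily headache severity on a 0-10 scale,
# fluctuating randomly around a fixed baseline of 5 with no trend.
days = 5 + rng.normal(0, 1.5, size=10_000)

# Suppose I only try an intervention on particularly bad days
# (severity above 8), and the intervention does nothing at all.
bad_days = days[:-1] > 8

severity_when_treated = days[:-1][bad_days].mean()  # well above 8
severity_next_day = days[1:][bad_days].mean()       # back near the baseline

print(severity_when_treated, severity_next_day)
```

Because bad days are selected precisely for being unusually far from the baseline, the following day is almost guaranteed to be better on average, intervention or not.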

Spontaneous Remission

Even with illnesses that are not self-limiting, spontaneous remission, recovery with no obvious cause, does happen occasionally. I’m not familiar enough with the data to go into much depth, but it is worth knowing that even some serious illnesses can get better on their own, whether an intervention has recently been used or not.

As you may have noticed, these explanations all have a common theme: they describe ways in which health can improve on its own, making it difficult to tell whether a particular improvement was due to an intervention or would have happened anyway. Ideally, to tell the difference, we’d travel back in time, skip the intervention, and see what happened in that case, but unfortunately that’s not an option. The next best method is to have what is known as a control: someone with the same problem who doesn’t get the treatment.

However, as I discussed earlier, health fluctuates on its own. If the person receiving the intervention improves while the person acting as the control stays the same or gets worse, we still can’t be too sure the intervention helped. Variation between different people can make outcomes difficult to interpret as well. Just as random fluctuations tend to return to the mean over longer periods of time, testing more people will smooth over these individual variations. The more people we include in both the treatment group and the control group, the better: more observations help us tell whether any effect we observe is due to random variation or to the intervention itself.
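This averaging-out can be shown with another small simulation, again with made-up numbers and a “treatment” that has no real effect:

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility

def observed_difference(n):
    """Apparent treatment effect when the (hypothetical) treatment does
    nothing: both groups just fluctuate randomly around the same baseline."""
    treatment = rng.normal(0, 1, size=n)
    control = rng.normal(0, 1, size=n)
    return treatment.mean() - control.mean()

# Typical size of the purely random "effect" for different group sizes.
typical = {n: np.mean([abs(observed_difference(n)) for _ in range(1000)])
           for n in (2, 20, 2000)}

for n, d in typical.items():
    print(f"group size {n:4d}: typical spurious difference {d:.3f}")
```

With tiny groups, chance alone produces sizeable apparent “effects”; with large groups, the random variation averages away, so any difference that remains is much more likely to be real.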

Having a control group and a large sample size are 2 aspects of a good test of a therapeutic intervention, but that’s not all there is to it. In my next post, I’ll discuss some other potential confounding factors, and how we can modify our test in order to account for them.

