Movement of the Moon

If you were up late last Wednesday, you’d have had the chance to watch the second total lunar eclipse this year. A lunar eclipse occurs when the Earth moves directly between the Moon and the Sun, blocking sunlight from reaching the Moon and making it turn dark.

This is very cool to watch, especially if you get a good look at the Moon during totality, when it is in the darkest part of the Earth’s shadow, known as the umbra. At this point, the only sunlight reaching the Moon is that which is refracted through the Earth’s atmosphere. For the same reason that sunrises and sunsets appear a lovely reddish colour, this light is also quite red, and as the Moon reflects some of it back at us it appears a dim bronze colour.

This is one of my favourite space facts, and I find it quite poetic – you’re looking at the reflected light of all the sunrises and all the sunsets in the world, all at once. There are also many spectacular photos of this effect online, if you care to search for them. But this is not what I want to write about in this article.

If you were watching Wednesday’s eclipse from Auckland, as I was, you’d probably have been disappointed to find that it was quite cloudy for the duration of totality. However, you were probably able to get a good view of the first part of the eclipse, when the Moon was moving into the outer part of the Earth’s shadow, known as the penumbra.

Greg O’Beirne managed to put together a great compilation of this part of the eclipse:

Photo by Greg O’Beirne

As you can see, it looks rather like the Moon is having a bite taken out of it. In this compilation the position of the Moon is held roughly constant, but in reality it’s the Moon that is moving. During a lunar eclipse, we are given the rare opportunity to directly observe the Moon’s orbital motion.

Because the Earth is spinning, the Moon always appears to move across the sky from east to west (in the southern hemisphere, this effectively means it moves right to left). How long it spends above the horizon varies through the year, but a full circuit takes roughly 25 hours rather than 24, because the Moon’s own eastward orbital motion works against the Earth’s spin.

Because of this spin, generally the only way we can observe the Moon’s orbital motion is by looking for it at the same time every day. If you do this, then instead of watching it migrate east to west over a day, you’ll see it move slowly from west to east over a couple of weeks.

To watch the Moon’s orbital motion directly, you would need to watch it move relative to a stationary background object. The stars could serve this purpose while the Moon is up at night, although the Moon is generally bright enough that it’s very difficult to see any nearby stars. The occultation of Saturn earlier this year, which I watched from home through my telescope, gave me a chance to observe the Moon’s movement against the relatively stationary planet. As with the stars, though, you simply wouldn’t be able to observe this with the naked eye.

A solar eclipse is also an opportunity to directly watch the Moon’s orbital motion, as we can compare the Moon’s position against the Sun, whose own apparent movement is negligible for everyday purposes compared with that of the Moon. The problem there, of course, is that you can’t look at the Sun without damaging your eyes. A lunar eclipse gives you the same opportunity except that, unlike a solar eclipse, you can watch it directly.

If we ignore the Earth’s spin, then both the Sun and the Earth’s shadow (which, of course, must always be directly opposite one another) each take one full year to move the whole way around the sky. Because the 365 1/4 days in a year is very close to the 360 degrees in a circle, we can say that they move roughly 1 degree every 24 hours, or half a degree every 12 hours. This sounds pretty slow, but half a degree is roughly the size of the full Moon, so it isn’t entirely negligible.
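If you’d like to check that arithmetic yourself, here’s a minimal Python sketch (the half-degree figure for the Moon’s diameter is the usual round number, so everything here is approximate):

```python
# Apparent drift of the Sun (and of the Earth's shadow, which always
# sits directly opposite it) against the background stars.
DAYS_PER_YEAR = 365.25
MOON_DIAMETER_DEG = 0.5  # the full Moon spans roughly half a degree

deg_per_day = 360.0 / DAYS_PER_YEAR
print(f"Drift per 24 hours: {deg_per_day:.3f} degrees")      # ~0.986
print(f"Drift per 12 hours: {deg_per_day / 2:.3f} degrees")  # ~0.493, about one full Moon
print(f"Drift per hour, in Moon diameters: "
      f"{deg_per_day / 24 / MOON_DIAMETER_DEG:.3f}")         # ~0.082, about 1/12
```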

It takes about an hour for the Moon to move fully into the outer part of the Earth’s shadow, so in this time the shadow moves roughly 1/12th the diameter of the Moon. For the sake of simplicity, let’s ignore this motion as well. Below I’ve put together a (rather clumsy and very imprecise) animated gif, using the images from Greg O’Beirne’s compilation as its frames, to show this motion of the Moon, holding the position of the Earth’s shadow roughly constant:

Lunar Eclipse Oct 2014 Animation

If you want to view this for yourself, the next lunar eclipse visible from New Zealand isn’t too far away. You may have read that there won’t be another total eclipse visible from New Zealand until 2018, but on the 4th of April 2015 there will be a partial lunar eclipse in the evening during which you’ll be able to see this. In the meantime, have a look up at the sky occasionally and notice where the Moon is (using landmarks as a guide to remember its position will help). If you make a habit of doing this at a specific time (I do it every day when I leave for work, for example) then you’ll be able to watch the Moon’s slow backwards movement across the sky.

Poor Science Reporting on the Paleo Diet

This morning I saw an article in the NZ Herald on the “paleo diet” that rather frustrated me. It seems like a great example of poor science reporting, trying its hardest to turn a study into a story instead of accurately reporting on it. The role of a science reporter is not to sensationalise; it’s to accurately report on science, and that includes making the drawbacks of a study clear and not exaggerating its conclusions.

In this case though, it looks like the author chose to omit half the results of the study, presumably so as not to pollute the narrative they had chosen. The take-home message of the article can be found in the first paragraph:

the best way to lose weight is by copying our ancient ancestors, a study suggests.

I’m not even going to get into the problems with characterising the so-called “paleo diet” as “copying our ancient ancestors”; that’s been adequately covered elsewhere. The information used to support this weight loss conclusion is that the study in question found that:

Women who adopted the so-called Palaeolithic diet lost twice as much weight within six months as those who followed a modern programme based on official health guidelines.

Wow, that sounds impressive. Case closed, right? Except, if you look at the actual study (not open access, unfortunately), which of course is not linked to from the online article, you’ll find another result that is curiously omitted from the Herald article:

Both groups significantly decreased total fat mass at 6 months (−6.5 and −2.6 kg) and 24 months (−4.6 and −2.9 kg), with a more pronounced fat loss in the PD [Paleolithic-type diet] group at 6 months (P<0.001) but not at 24 months (P=0.095).

So there was a statistically significant difference in fat loss after 6 months, as mentioned in the article, but after 24 months there was no statistically significant difference in fat loss between the groups. That is a negative result.

Although there was still an observed difference in fat loss between the groups at 24 months, it wasn’t big enough for the researchers to be reasonably confident that it wasn’t just due to random variation. That’s partly due to the size of the difference observed, and partly because the study was so small. 70 people split into 2 groups is very small for this kind of study; a good sample size would be hundreds or even thousands of participants, not a few dozen. Of course, such large studies are much more difficult and expensive to undertake, so a lot of smaller studies like this do happen. Sample size matters, though – small studies like this are not nearly as reliable as much larger ones – so it’s important to take it into account when evaluating a study’s conclusions.
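To get a feel for why sample size matters so much, here’s a rough Python simulation. The effect size and variability below are invented for illustration (they are not figures from the study); the point is just how often a real difference of that size would reach statistical significance at each sample size:

```python
# How often does a real between-group difference reach p < 0.05?
# All numbers here are made up for illustration only.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
TRUE_DIFFERENCE = 2.0  # hypothetical real difference in fat loss (kg)
SPREAD = 6.0           # hypothetical person-to-person variability (kg)

for n_per_group in (35, 350):
    hits = 0
    for _ in range(2000):
        diet_group = rng.normal(TRUE_DIFFERENCE, SPREAD, n_per_group)
        control_group = rng.normal(0.0, SPREAD, n_per_group)
        if ttest_ind(diet_group, control_group).pvalue < 0.05:
            hits += 1
    print(f"n = {n_per_group} per group: "
          f"real effect detected {hits / 2000:.0%} of the time")
```

With numbers like these, the small trial misses a genuine effect most of the time, while the larger one almost never does.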

The Herald article does mention, way down near the bottom, that all of the participants in the study were obese postmenopausal women. Everywhere else, however, it avoids that caveat and seems to imply that the conclusions should be applicable to everyone, or at least to all women.

It’s also rather frustrating that the article says that the study “found [the “caveman diet”] more effective than some modern diets”, and that this study suggests it is “the best way to lose weight”, even though the study didn’t compare it with “some modern diets”. It compared it with a single other diet, one based on the Nordic Nutritional Recommendations.

If the Herald wants some tips on how to report on science, a great place to start would be to take another look at the science itself. The conclusion in the abstract of the study they’re writing about seems much more appropriate, even if it does seem a bit dismissive of the negative 24 month results:

A PD [Paleolithic-type diet] has greater beneficial effects vs an NNR [Nordic Nutritional Recommendations] diet regarding fat mass, abdominal obesity and triglyceride levels in obese postmenopausal women; effects not sustained for anthropometric measurements at 24 months. Adherence to protein intake was poor in the PD group. The long-term consequences of these changes remain to be studied.

Then again, perhaps I should be glad the Herald didn’t reprint the original headline from the Daily Telegraph:

Caveman diet twice as effective as modern diets

I’m not sure I could come up with a more misleading headline if I tried.

Despite the horrific headline, the original article does include, in its second half, a bit more information from the study’s primary author that was cut from the Herald’s reprint.

Telescopes Are for Everyone

UPDATE 2014/01/17 1:54pm NZDT:

The content of the page has now been updated for the better, although its title still says “Telescopes for astronomy or land viewing; a great gift for him”.


Over the summer holidays, for the first time ever, I took some binoculars outside on a clear night and looked up at the sky. I had some idea of what to expect, but the sheer number of stars I could see still surprised me. It was marvellous and elating, and settled my resolve to (finally) take up amateur astronomy as a hobby instead of just an idle interest. As a first step, I plan to buy a telescope. Earlier today, I searched online to see what was available and was very disappointed by something that I found.

The New Zealand website telescopes.net.nz sells telescopes, which is all well and good, but they have decided to market them as a “gift for him”. The title of the page is “Telescopes for astronomy or land viewing; a great gift for him.”, and the content advertises them like this:

A telescope makes the perfect Christmas gift for him and we have picked the best brands available in the market today. Whether you want a gift for your Dad, the man in your life or you just want to indulge yourself, you will find it here.

I was thoroughly disappointed, though sadly not particularly shocked, to see such sexism. As a white straight male, I have tremendous privilege; my male privilege would let me simply ignore this if I wanted to, as it’s not an affront to me. However, I follow various fantastic (I cannot stress that enough) female astronomers, astrophysicists, and astronauts through Twitter and blogs, and have long been aware of the existence of sexism in STEM fields. In fields such as astronomy, there is a large gender gap – these fields are dominated (in terms of numbers) by men – and this is only made worse by commonplace sexist attitudes. Even if some of this behaviour is entirely innocent, it still does active harm by excluding women and girls from astronomy. Although my privilege gives me the option to ignore this, I consciously choose to be a feminist and fight these attitudes. Telescopes are for everyone.

I shared this page on Twitter, and the great Katie Mack (@AstroKatie) sent them an email. Seeing this, I also sent an email to telescopes.net.nz, which I include in full below, urging them to change the content of their website and the attitude from which it sprang. If you would like to do the same, you can contact them at staff@telescopes.net.nz. Here is the email I sent them:

Dear Telescopes New Zealand,

I’m interested in amateur astronomy, having for the first time viewed the night sky through binoculars over the summer holidays and marvelled at how many stars I could see. I decided that in the new year, I would like to buy a telescope and take up amateur astronomy as a hobby. I found your website when searching for a telescope online.

However, when I came upon your telescopes page I was very disappointed to see it advertised telescopes as a “gift for him”, casually excluding all the women and girls that are interested in astronomy. Unfortunately, women already face significant sexism in STEM fields, including astronomy. Attitudes such as the one shown on your website only encourage this hostile environment, and harm astronomy as a whole.

I hope that you will revise the content of your website. I’m sure you can see that its current content, however innocent your intentions may have been, contributes to a harmful atmosphere of exclusion. Taking a more positive attitude toward women in astronomy will only benefit yourselves, amateur astronomers, and the field of astronomy itself.

Sincerely,
Mark Hanna

I’ve set up a change detection service monitoring the page, and hope to see it change for the better soon. If it doesn’t then, needless to say, I certainly won’t be buying my telescope from telescopes.net.nz.

Why Testimonials Aren’t Enough

The business of “natural health” rests heavily on the use of testimonials. They are used in advertisements by people selling therapeutic products and services, and you’ll hear them as anecdotes from people you know, telling you what worked for them. Intuitively, it makes sense to trust this sort of experience, but unfortunately testimonials and personal experience are not good ways of evaluating a treatment option.

I don’t expect you to take my word for this. Maybe you were told by a doctor that you’d need an operation, then you had reiki therapy, and after that your doctor said the problem was no longer there. Perhaps your first child had terrible teething troubles, but with your second child you used a Baltic amber teething necklace and they didn’t have the same problems – and you swear that if you forget to put it on them they become agitated. Or maybe you’ve been spraying a colloidal silver solution onto the back of your throat whenever you feel a cold coming on and you haven’t been sick in years. Who am I to doubt or deny your experience?

These are all testimonials that I have heard personally, not from advertisements but from individual people relating their own experiences to me. But still, I remain unconvinced that reiki is any more than an exotic twist on faith healing (and just as ineffective), that Baltic amber teething necklaces are anything but expensive yet inert jewellery, or that colloidal silver is much good for anything other than causing argyria.

In this series of blog posts, I intend to explain to you why I don’t consider anecdotes like these to be useful in drawing any conclusions about therapeutic interventions. But first, I’d like to point out that I am not trying to be dismissive of personal experience. I don’t think anecdotes are all lies, or anything of that nature, and personal experience can certainly be useful in drawing all sorts of conclusions in everyday life. The only conclusion I am arguing for here is that anecdotes are not useful for evaluating the efficacy of therapeutic interventions.


In searching for any truth, we have to be very careful not to jump to conclusions. There will always be a vast number of potential explanations for any observation, and if we really care about the truth then we can’t just pick the explanation that we like the most, or even the one that we think is most likely. Some possible explanations have to be set aside right from the start, if they’re impossible to test; the explanations that can be tested are known as hypotheses. If we want to determine whether or not one particular hypothesis is correct, we should design and carry out a test that will rule out every other potential cause of our observation.

Note that this method of testing does not prove anything. Instead, it focuses on ruling out everything else, until only one idea is left standing. The key to designing a good test of an intervention is to make sure anything you observe is as unlikely as possible to be due to anything other than the intervention. This means that, in order to design a good test of an intervention, it is important to have a good understanding of what these other potential causes are.


After This, Therefore Because of This

There’s a formal logical fallacy that’s usually known by its Latin name post hoc ergo propter hoc, which translates to “after this, therefore because of this”. The fallacy is of the form:

  1. A happened, then B happened
  2. Therefore A caused B

Of course, the reason why this is a logical fallacy is that it’s entirely possible that something other than A was the cause of B. This doesn’t mean that the conclusion is false, but it does mean that it is not necessarily true.

Anecdotes take the same form as the above example: “I tried treatment X and I got better”. Although experiences like this can result in strong beliefs, the fact that the improvement happened after the treatment does not mean the treatment necessarily helped at all. Instead, the improvement could have been due to a few different things.

Self-Limiting Conditions

Many common health conditions are self-limiting. This means that, left to their own devices, they will almost always go away in time. The common cold is an example of a self-limiting illness: unless you are seriously immunocompromised, if you catch a cold you will be fine again after a few days. Other examples include the flu, teething, colic, and acne. Pretty much everything that isn’t a chronic illness and won’t kill you is self-limiting.

Regression to the Mean

Even when nothing external seems to be changing, your health is not constant. Instead, it fluctuates over time around a baseline level of health that itself changes over longer periods of time. This baseline is basically your average health over a certain period: the mean. The tendency for your wellbeing to return to this mean after a fluctuation is known as regression to the mean.

Below is a picture of roughly 300 random data points generated in Microsoft Excel. Starting with 0, I added a random number between -0.5 and 0.5 to the running total 310 times, and then took a 10-point running average to smooth the resulting curve.

Regression to the mean

As you can see, even though the changes are all random, trends do form and the data oscillate around a particular mean. Especially over longer periods of time, the data will tend to return to that mean.

I’ve indicated the 2 most prominent downward trends with arrows. As you might imagine, such low points in a person’s health could motivate them to try a therapeutic intervention in order to reverse the trend. After the intervention, they’ll likely start to feel better, but as you can see from this graph such variations can happen randomly, and it can be very hard to say whether an improvement was caused by anything in particular or was just the result of regression to the mean.

For example, I get frequent headaches. However, the frequency and intensity of those headaches varies from day to day, just due to random chance. I’d be more likely to decide to seek a therapeutic intervention on a particularly bad day. However, considering that my wellbeing is fluctuating around a mean value I’d expect my headaches to return to their “normal” level, unless of course something has changed to make them worse on average. If I take an intervention and then the next day my headaches are better, how can I know whether it’s due to the intervention or regression to the mean?
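If you’d like to generate data like those in the graph above for yourself, here’s a minimal Python sketch of the same procedure (your curve will differ from my Excel one, since the steps are random):

```python
# A random walk like the Excel one: start at 0, add a random step
# between -0.5 and 0.5 a few hundred times, then smooth the result
# with a 10-point running average.
import random

walk = [0.0]
for _ in range(310):
    walk.append(walk[-1] + random.uniform(-0.5, 0.5))

window = 10
smoothed = [sum(walk[i:i + window]) / window
            for i in range(len(walk) - window + 1)]

# Even though every step is pure chance, the smoothed curve forms
# apparent trends and oscillates around a mean.
print(f"{len(smoothed)} points, ranging from "
      f"{min(smoothed):.2f} to {max(smoothed):.2f}")
```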

Spontaneous Remission

Even with illnesses that are not self-limiting, spontaneous remission with no obvious cause does happen occasionally. I’m not familiar with the data on this, so I won’t go into it in too much depth, but it is worth knowing that even some serious illnesses can get better on their own – so a sudden recovery from a serious illness can happen whether an intervention has recently been used or not.


As you may have noticed, these things all have a common theme. They describe ways in which health can improve on its own, which makes it difficult to tell whether a particular improvement is due to an intervention or whether it would have happened anyway. Ideally, in order to tell the difference, we’d travel back in time to try again without the intervention and see what would have happened in that case, but unfortunately that’s not an option. The next best method is to have what is known as a control: someone who has the same problem but doesn’t get the treatment.

However, as I discussed earlier, health fluctuates on its own. If the person receiving the intervention improves and the person acting as the control stays the same or gets worse, we still can’t be too sure that the intervention was helping. Variations between different people can make outcomes difficult to interpret as well. Just as random fluctuations tend to return to the mean over longer periods of time, testing more people will smooth over these random variations. The more people we include in both the treatment group and the control group, the better, as having more observations will help us to tell whether any effect we observe is due to random variation or due to the intervention itself.

Having a control group and a large sample size are 2 aspects of a good test of a therapeutic intervention, but that’s not all there is to it. In my next post, I’ll discuss some other potential confounding factors, and how we can modify our test in order to account for them.

Why is Replication so Important?

One of the most important principles of the scientific method is reproducibility. A valid result should be able to be replicated independently, whereas an invalid result (originally achieved due to some error or perhaps just chance) will not be able to be consistently reproduced.

This is a concept that I didn’t fully understand for a long time. I had reasoned that, say, doubling the sample size of an experiment should be just as good a way of confirming its results as performing the same experiment a second time with a different sample of the same size. This seemed intuitive to me, but eventually I came to understand why it is not the case.

The reason has to do with researcher degrees of freedom. In an original experiment, the experimenters have the freedom to make certain choices. Some choices may have been made beforehand, whereas others are made after the study has started. These choices may not all be made consciously, and even conscious choices can be shaped by unconscious bias, but the fact that choices are made at all after the experiment has begun affects the reliability of the results.

In contrast, when a well-designed experiment is being replicated, the choices have all been made beforehand, as the replication follows exactly the same protocols as the original study. This includes making all the same measurements and undergoing the same analysis. This reduces the researcher degrees of freedom for the replication experiment, so if the same results can be reproduced that’s a good indication that the original results were accurate, whereas if they can’t then it likely means they were due to some combination of bias and chance.


In some ways this can be pretty intuitive. For example, if the experimenters were to carry out a variety of statistical analyses on their data and select the one that was most favourable to their hypothesis, then replicating the experiment with that same method of analysis selected beforehand will reduce the bias from the initial decision.

Another great use for replication is in confirming the results of subgroup comparisons. For example, if I were studying the effect of a new drug on reducing blood pressure, I might perform comparisons between several subgroups and find it to be particularly effective in, say, people with type 1 diabetes. However, the more comparisons that are made, the more likely it is that some will appear significant purely by chance. If I need to be 95% confident that my result is not due to chance for it to be statistically significant, then I can expect 1 in 20 results to appear significant by chance alone. There’s a great xkcd strip that demonstrates how this can lead to unreliable results (remember to read the strip’s alt text while you’re there) – xkcd: Significant
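As a quick illustration of that 1-in-20 figure, here’s a Python sketch that runs 20 subgroup comparisons on pure noise – no real effect anywhere – and counts how many come out “significant” anyway:

```python
# 20 subgroup comparisons where the "drug" does nothing at all.
# With a p < 0.05 threshold we still expect about 1 false positive.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
false_positives = 0
for subgroup in range(1, 21):
    treated = rng.normal(0.0, 1.0, 50)  # no real effect in either group
    control = rng.normal(0.0, 1.0, 50)
    if ttest_ind(treated, control).pvalue < 0.05:
        false_positives += 1
        print(f"Subgroup {subgroup}: 'significant' by chance alone")
print(f"{false_positives} of 20 comparisons gave a false positive")
```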


I find a scenario known as the “Monty Hall problem” (or the “Three door problem”) to be an illustrative analogy of the importance of researcher degrees of freedom, especially in showing how unintuitive this importance can be. The problem goes something like this:

Imagine you’re a contestant in a game show. In front of you there are 3 doors, and you have to pick one of them. You have been told that behind 1 of these doors there is a car, but behind each of the other 2 doors there is a goat. You get to keep whatever is behind the door you open and, of course, you want to win the car.

After you have made your initial choice, the host of the game show opens one of the 2 doors that you didn’t choose and shows you that there is a goat behind it. Now, after seeing this you are given the opportunity to change your choice.

Intuitively it feels as though changing your choice would not affect your chances of winning. After all, you know that the door you picked has a 1 in 3 chance of having the car behind it, and if you’d picked the remaining door first you’d have the same chance.

However, changing your choice at this point will double your odds of picking the car.


I find the easiest way to understand this is to run through each possibility in order to show the outcomes.

When you first pick a door, there are 2 possibilities – either you’ve picked the door with the car, or you’ve picked one of the doors with a goat. In 1 of every 3 attempts you will pick the correct door first and, if you don’t change your decision, you’ll get the car. In the remaining 2 attempts you will pick an incorrect door first and get a goat. So, the chance of picking the car if you don’t change your decision is 1/3.

What if you do change your choice, though? 1/3 of the time you will have picked the car originally, so when you change your decision you will lose. However, what if the first door you picked had a goat behind it? In this case, the host will open the other door that has a goat behind it, so the only remaining door is the one with the car. This means that if you change your choice you are twice as likely to win, because your chance of winning becomes 2/3.
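If the enumeration doesn’t convince you, a simulation might. Here’s a minimal Python sketch of the game as described above:

```python
# Simulate the Monty Hall game to compare staying with switching.
import random

def play(switch, trials=100_000):
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        choice = random.randrange(3)
        # The host opens a door that is neither the contestant's choice
        # nor the one hiding the car (which goat door he opens when the
        # contestant happens to be right doesn't change the odds).
        opened = next(d for d in range(3) if d != choice and d != car)
        if switch:
            # Switch to the one remaining unopened door.
            choice = next(d for d in range(3) if d != choice and d != opened)
        wins += (choice == car)
    return wins / trials

print(f"Stay:   {play(switch=False):.3f}")  # ~0.333
print(f"Switch: {play(switch=True):.3f}")   # ~0.667
```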

This same effect applies to researcher degrees of freedom. Decisions made once some or all of the data are known can have an effect on the reliability of the result.


In order to show the power of replication, let’s re-imagine the Monty Hall problem. This time, you know there are 3 doors, and that behind each of them is either a car or a goat, and the same objects aren’t always behind the same doors. However, you don’t know if there are 2 goats and 1 car or if there are 2 cars and 1 goat.

Now, let’s imagine you want to test the hypothesis that there are 2 cars and 1 goat behind the doors. In order to test this, you pick a door that you think has a car behind it. Once you’ve made that choice, however, you somehow find out that one of the other doors has a goat behind it, and knowing this makes you (consciously or unconsciously, it doesn’t matter) change your decision to the remaining door.

In order to determine the probability that you’d pick the car, you’d need to repeat this many times. Using this approach, you’d pick the car in about 2 out of every 3 attempts. This could lead you to conclude that there must be 2 cars behind the doors, instead of just one. However, we know that this is not the case! We would be able to show this by replicating the experiment.

In this case, an experiment to replicate these results would make the same door choices that the first experiment eventually made, but would commit to them beforehand. Because this replication has reduced the researcher degrees of freedom by making those choices in advance, the result of the original experiment will be found not to be reproducible.
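Here’s a sketch of that re-imagined experiment in Python. The “original” run switches doors after learning where a goat is, while the “replication” commits to its final door before anything is revealed – even though there really are 2 goats and 1 car in both cases:

```python
# The re-imagined experiment: 2 goats and 1 car, but mid-experiment
# door switching makes it look as though cars turn up 2/3 of the time.
import random

def original_run(trials=100_000):
    """Pick a door, learn another door hides a goat, then switch."""
    cars = 0
    for _ in range(trials):
        car = random.randrange(3)
        choice = random.randrange(3)
        opened = next(d for d in range(3) if d != choice and d != car)
        final = next(d for d in range(3) if d != choice and d != opened)
        cars += (final == car)
    return cars / trials

def replication_run(trials=100_000):
    """Commit to a door entirely in advance."""
    cars = 0
    for _ in range(trials):
        car = random.randrange(3)
        final = random.randrange(3)  # decision made before any reveal
        cars += (final == car)
    return cars / trials

print(f"Original (choice changed mid-experiment): {original_run():.3f}")     # ~0.667
print(f"Replication (choice fixed beforehand):    {replication_run():.3f}")  # ~0.333
```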


Of course, this analogy is exaggerated so as to make my point very obvious. In reality, the biases involved are much more subtle, but they are still there. The important thing to realise is that it is possible, even with the absolute best of intentions, to come to an unreliable result due to unconscious bias. In fact, every single decision can be made honestly and seem justified at the time, yet the accumulated effect of many such choices can still distort the results. It’s because of this that replication is so important in science.

Common choices that can affect the reliability of results by being made after the experiment has started include when to stop the experiment, how to analyse the data, and which subgroup comparisons to carry out.

The explanation that really helped me realise this was by Steve Novella in episode 373 of the SGU podcast (the relevant segment starts at 12:04), discussing replications of the psi research done by Daryl Bem. He’s written a post about the original research on his own blog, Neurologica Blog: Bem’s Psi Research, and a post on the replications on Science-Based Medicine: The Power of Replication – Bem’s Psi Research.

In a nutshell, Bem’s experiments were well-designed (essentially they carried out some classic psychological experiments in reverse order) and the results were statistically significant and seemed to imply that the subjects exhibited precognition: the ability to predict supposedly random future events. However, when some of his experiments were replicated with all of the decisions made beforehand the results showed no ability better than what would be expected by chance.


Sometimes, good science gives strange and unexpected results. In some cases, such as in Bem’s psi research, the results could even be called extraordinary. However, false positives can and do occur for a multitude of reasons, so in cases like this it’s important to remember that “extraordinary claims require extraordinary evidence”. In the face of such claims the correct course of action is neither to jump on the bandwagon nor to discard the results as false but to be on the lookout for quality replication. We live in an honest universe, and with time truth will out.

Partial Solar Eclipse in Auckland

This morning there was a solar eclipse. From Auckland, I was able to see the Sun about 87% covered by the Moon just before 10:30. My girlfriend Eileen managed to get 4 pairs of solar viewing glasses from Stardome Observatory last night (apparently the last 4 they had at the time), and I took 3 with me to work to share around.

I managed to take a few mediocre photos of it by putting the solar filter in front of my iPhone’s camera lens. Most of the time it was all glare, but with just the right amount of cloud cover you could photograph the crescent shape. Here’s the best photo that I managed to take:

Partial solar eclipse

Meanwhile, Eileen was viewing it from Auckland Uni and sent me this great photo of a projection of the eclipse, which shows how you can safely view it if you lack the equipment to look directly at it:

A projection of the partial solar eclipse viewed from Auckland University

Like she said:

It is quite neat – holding the sun in the palm of your hand

On a related note, here’s a photo from earlier this year that she took of a similar projection showing the transit of Venus:

A projection of the transit of Venus