The New Zealand Herald has an article today about a cool and very popular image of the Moon positioned perfectly within a satellite dish, produced by astrophotographer Chris Pegman: Supermoon image goes into media orbit
The article talks about how there has been debate online about whether or not such a shot could be taken without resorting to Photoshop. It concludes that “the verdict was that it might be, but it would require an incredible amount of planning”, but this isn’t strictly correct.
The apparent rotation of the Moon changes as it travels through the sky. When it rises, it will appear to be “on its side” relative to when it is at its highest point, and when it sets it will have rotated further still.
This is most obvious with a crescent Moon. Depending on whether it’s waxing or waning, the Moon will rise with the crescent facing either down or up, then when it’s at its highest point the crescent will be facing sideways, and as it sets it will have rotated around further. Of course, the lit side of the Moon always faces the Sun. It’s the fact that the Earth rotates beneath us that makes it look like the Moon is rotating as it travels across the sky.
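This changing tilt is what astronomers call the parallactic angle, and you can put numbers on it. Here’s a minimal Python sketch using the standard formula; the latitude (roughly Wellington’s) and the hour angles are just illustrative numbers, and the Moon is simplified to a point on the celestial equator:

```python
import math

def parallactic_angle(hour_angle_deg, latitude_deg, declination_deg):
    """Parallactic angle in degrees: the rotation of 'celestial up'
    relative to 'local up' for an object at the given hour angle
    (standard formula: tan q = sin H / (tan(lat)cos(dec) - sin(dec)cos H))."""
    H = math.radians(hour_angle_deg)
    lat = math.radians(latitude_deg)
    dec = math.radians(declination_deg)
    return math.degrees(math.atan2(
        math.sin(H),
        math.tan(lat) * math.cos(dec) - math.sin(dec) * math.cos(H),
    ))

# A simplified Moon on the celestial equator, watched from around
# Wellington's latitude. The 180 degrees at transit just means the
# object passes to this southern observer's north; what matters is
# how much the angle swings between the rising and setting sides.
for hour_angle in (-75, 0, 75):
    q = parallactic_angle(hour_angle, -41.0, 0.0)
    print(f"hour angle {hour_angle:+4d} deg -> parallactic angle {q:+7.1f} deg")
```

The angle swings by tens of degrees between one side of the sky and the other, which is the rotation you can see in the maria.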
Here’s an example of this which I took with my phone in July, showing a waning crescent Moon shortly before sunset:
We can see from the lunar maria (the dark areas) that the Moon in Chris Pegman’s picture is rotated as it would be if (viewed from the southern hemisphere) it were near its highest point, not near the horizon, so his picture couldn’t have been produced without artificial manipulation.
Mark Gee is a fantastic astrophotographer from Wellington. In October he captured a time lapse of a full moon rising, in which you can clearly see that the angle of the Moon when it rises is not the same as in Chris Pegman’s image: Supermoon rises over New Zealand timelapse.
There’s a Twitter account called Fake Astropix, which tweets fake astronomical images with the reasons why they are recognised as fake (well, as much as can be given within a tweet).
I find these reasons can be very educational and thought provoking. For example, any photo taken from Earth must show the Sun and the Moon at roughly the same apparent size. Also, the full moon can’t appear next to the Sun in the sky (remember, the lit side faces the Sun). So “debunking” these fake astronomical images can be a good educational exercise that makes you think a bit more carefully about how things work in our solar system.
Do you have any fake astronomical images that you can share, along with the reason why you can tell it must have been faked?
Have you seen any astronomical images that you think might be fake but you’re not sure? Share them here and let’s investigate, and see if we can learn something.
I was born in June 1989. Just two months after that, the Voyager 2 spacecraft flew past Neptune on its way out of our solar system, and for the first time ever we saw real close-up pictures of the solar system’s outermost giant planet.
Of course, I was too young to remember this, but it means that as I’ve been growing up I’ve been able to see amazing and beautiful photos of the 8 planets of our solar system. Pluto is no longer classified as a planet, but nonetheless it has been conspicuously missing from this set for my whole life.
That is, until this week. After travelling for 9½ years, the New Horizons spacecraft finally flew past the only remaining “classic planet” that we had not yet explored. We are seeing real close-up pictures of a new world, one that we’ve never seen like this before. And it’s glorious:
This image was made by compositing the high resolution black and white image taken by LORRI (LOng Range Reconnaissance Imager) with colour imagery taken by Ralph (New Horizons’ colour imager). It was sent back to Earth by New Horizons prior to its exciting flyby of Pluto on Tuesday night (closest approach was 11:50 pm Tuesday 2015/07/14, New Zealand time), having been taken 16 hours beforehand.
It sent back data ahead of time because during the 24 hour flyby New Horizons was busy collecting data. All of its instruments are built into the body of the spacecraft, so in order to point them the whole spacecraft needs to rotate. This means that in order to talk to Earth it has to look away from Pluto, so we had to wait in patient agony while it was collecting data.
After the flyby, New Horizons sent a packet of telemetry data back to Earth, containing information about how its systems were doing. The flyby was the most dangerous part of the mission: travelling at about 14 kilometres per second, an impact with even a tiny piece of debris could destroy the spacecraft, and the region near Pluto is the most likely place to encounter such debris. Since New Horizons was travelling into the unknown, no one could guarantee this wouldn’t happen, although NASA was confident that the chance of such a collision was very low.
The “Phone Home” signal reached Earth at 12:52:37 pm on Wednesday 2015/07/15 (New Zealand time), telling us that everything went perfectly. New Horizons survived the flyby! The next stage of the mission is to send down all the data it collected, but getting data back from Pluto is hard. Even at the speed of light signals take over 4 hours to cross the distance, and the transfer rate varies from just 1 kb/s to a whopping 4 kb/s. It’ll take around 16 months to get all the data from the flyby down to Earth.
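Those numbers are easy to sanity-check. In this sketch the flyby distance is roughly right, but the total data volume and the average rate are illustrative assumptions, not mission specifications:

```python
# Rough numbers for the New Horizons downlink. The distance is about
# right for the 2015 flyby; the data volume and average rate below are
# illustrative assumptions, not mission specifications.
SPEED_OF_LIGHT_KM_S = 299_792.458
distance_km = 4.8e9            # Earth-Pluto distance at the flyby, roughly

light_time_hours = distance_km / SPEED_OF_LIGHT_KM_S / 3600
print(f"one-way light time: {light_time_hours:.1f} hours")

data_bits = 50e9               # ~50 gigabits of stored flyby data (assumed)
rate_bits_per_s = 2_000        # middle of the 1-4 kb/s range
transfer_days = data_bits / rate_bits_per_s / 86_400
print(f"continuous transfer would take about {transfer_days:.0f} days")
```

Even transmitting around the clock, that’s the better part of a year; given that the spacecraft only gets limited antenna time, a 16 month download isn’t surprising.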
This morning (7am Thursday 2015/07/16 New Zealand time) NASA held a press conference in which they released some new images, including a high resolution image of Pluto’s largest moon Charon:
There’s a lot of interesting stuff going on here. For example, that little notch you can see in the upper right is a canyon that’s 6-10 kilometres deep – so deep that you’re looking through it to the space behind Charon. The dark polar region at the top, which has been informally named “Mordor”, is also interesting. The fact that impact craters allow lighter material to peek through makes it seem as though the dark material on the surface is just a thin layer. Apparently one possible cause of this could be some form of atmospheric transfer from Pluto.
At the press conference, NASA also released a much higher resolution image of an area in the “heart” region of Pluto. The heart has been named Tombaugh Regio after Clyde Tombaugh, who discovered Pluto in 1930. Although he died in 1997, New Horizons carried some of his ashes on board to honour his request for his ashes to be taken to space. Here’s the image:
This is a very interesting image. The first thing you might notice is the mountains. These are about 3.5 kilometres tall, and almost certainly made of water ice. Also, there are no impact craters in this photo. That’s strange, really strange. It means that Pluto’s surface is new, probably less than 100 million years old.
Pluto is the first icy world we’ve seen that isn’t also the moon of a gas giant. Icy moons like Saturn’s moon Enceladus get stretched and heated by the tidal forces placed on them due to their close proximity to much more massive bodies. Tidal forces are caused by a gradient in the strength of gravity – parts of a body closer to a source of gravity experience a stronger force than those further away, and when this gradient is strong (which happens when you’re near a much larger body) the tidal forces are greater. The Moon’s tidal influence on Earth drives our oceanic tides, and the extreme tidal forces you’d experience crossing the event horizon of a black hole are what would turn you into a space noodle.
But Pluto isn’t near any other large bodies that could exert this sort of tidal force on it. Charon is around half its size, but Pluto and Charon are both tidally locked to one another, meaning the same parts always face each other, so the tidal forces never change. There must be some other process driving geological activity on Pluto, and we don’t know what that is yet.
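To put rough numbers on that gradient: the difference in pull between the near and far side of a body scales as 2GMr/d³. Here’s a back-of-the-envelope comparison using rounded published values for the masses, radii, and separations:

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def tidal_acceleration(m_primary_kg, body_radius_m, separation_m):
    """Difference in the primary's gravitational acceleration between
    the near and far side of a body: approximately 2*G*M*r / d^3."""
    return 2 * G * m_primary_kg * body_radius_m / separation_m**3

# Rounded textbook values for masses, radii, and orbital separations.
saturn_on_enceladus = tidal_acceleration(5.68e26, 2.52e5, 2.38e8)
charon_on_pluto = tidal_acceleration(1.59e21, 1.19e6, 1.96e7)

print(f"Saturn across Enceladus: {saturn_on_enceladus:.2e} m/s^2")
print(f"Charon across Pluto:     {charon_on_pluto:.2e} m/s^2")
```

The stretch across Pluto comes out dozens of times weaker than the stretch across Enceladus – and, crucially, what heats Enceladus is the variation of that stretch around its slightly eccentric orbit, which is exactly the variation the Pluto-Charon tidal lock prevents.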
This isn’t the end for New Horizons. Even after the 16 month period of sending all its data down, it’s going to continue into the outer region of the solar system known as the Kuiper belt, and hopefully visit one or two more icy worlds.
For now, though, we have finally completed our reconnaissance of the solar system. This is the end of the beginning.
After my earlier post on this topic, I talked to a few people about why they thought these stretching reflections happened. There were a few different ideas, and when I talked to my brother about it he pointed out something in one of the images on my last post that was inconsistent with my explanation.
My hypothesis would have predicted that reflections would stretch down, but not up. However, looking more carefully at this image, the reflection of the Sun is clearly both stretching down and stretching up to the horizon. So it can’t be explained just by the surface appearing to be rougher as it gets closer to the observer.
However, in that discussion we came up with a new hypothesis. As I said in my last post, if we imagine a rough surface as being made up of a lot of small flat mirrors at random angles, some of them will be at the correct angle to reflect light toward you so you’ll see a reflection in those places. The new hypothesis was that the angle required for this would be less extreme above and below the reflection than to the side of it.
In order to test this, I needed a few things:
A light source
A flat reflective surface
Something to use as a wedge
A flat surface to rest it all on
Luckily, these things were all readily at hand. For a light source, I used a nearby lamp. My phone’s screen made a good flat reflective surface. I used the alarm remote for my car as the wedge, and rested everything on the floor. I’m sure you could find similar objects to reproduce this experiment for yourself.
First, I lined up the lamp, my phone, and myself so that I could see the lamp’s reflection in the centre of my phone’s screen when it was sitting flat on the floor. Then, using my makeshift wedge, I tilted the screen of my phone away from me and moved the tilted reflective surface towards me until the lamp’s reflection was back in the middle of the screen.
I then repeated this for the other directions – moving the phone away from me, to the left, and to the right. Because my phone isn’t square, I also rotated it so it was landscape when I moved it towards me and away from me, but portrait when moving it left and right. That made it easier to judge when the reflection was in the centre of its screen.
What I found was that I had to move the phone a lot further toward me or away from me than I had to move it left or right in order to see the reflection again. I think this explains, at least in part, why reflections on rough surfaces appear to be stretched towards you.
We can get a rough approximation of the outline of a reflection on a rough surface by assuming it has a maximum roughness, i.e. the maximum angle at which one of those little mirrors that make up its rough surface could be tilted. Then, the approximate outline of the reflection would be along the curve where a mirror at that maximum angle, facing in the right direction, would reflect light toward you.
On a perfectly flat surface, this maximum angle is 0. So the shape of the reflection is exactly as you’d expect, undistorted.
However, as the maximum roughness of the surface increases, the outline moves out from the undistorted reflection. And the reflection doesn’t just get larger, it gets stretched towards you: within that outline, the angle required to reflect the light at you is smaller above and below the undistorted reflection than it is to either side, which is why reflections on rough surfaces appear stretched.
The simulation works by sending out rays from the observer to hit different parts of a horizontal reflective surface. When a ray hits the surface, the simulation calculates the angle that would be required at that point to cause the simulation’s light source (displayed as a red dot) to be reflected there. Places where there would be a reflection are shaded according to the required angle, with brighter yellow areas being flatter, and areas where there would be no reflection are black. The simulation also draws a reflected red dot to show where the reflection would be on a very flat surface.
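The interactive simulation itself isn’t reproduced here, but its core calculation can be sketched in a few lines of Python. This version scans a grid of surface points rather than casting rays from the observer, and just measures the extent of the region where a facet within the roughness limit could reflect the light at you; the distances and the 5 degree limit are all made-up illustrative numbers:

```python
import math

def required_tilt_deg(px, py, obs_h, light):
    """Tilt from vertical (degrees) of the tiny-facet normal needed at
    surface point (px, py, 0) to reflect the light toward an observer
    standing at (0, 0, obs_h)."""
    def unit(v):
        m = math.sqrt(v[0]**2 + v[1]**2 + v[2]**2)
        return (v[0] / m, v[1] / m, v[2] / m)
    to_light = unit((light[0] - px, light[1] - py, light[2]))
    to_obs = unit((-px, -py, obs_h))
    # The facet normal must bisect the direction to the light and the
    # direction to the observer.
    normal = unit((to_light[0] + to_obs[0],
                   to_light[1] + to_obs[1],
                   to_light[2] + to_obs[2]))
    return math.degrees(math.acos(normal[2]))

# Observer eye height 2 m, light 100 m away and 5 m up, facets tilted
# at most 5 degrees -- all illustrative numbers.
obs_h, light, max_tilt = 2.0, (0.0, 100.0, 5.0), 5.0
lit = [(x / 10, y / 2)
       for x in range(-50, 51)        # -5 m .. +5 m side to side
       for y in range(1, 201)         # 0.5 m .. 100 m away from you
       if required_tilt_deg(x / 10, y / 2, obs_h, light) <= max_tilt]
v_extent = max(p[1] for p in lit) - min(p[1] for p in lit)
h_extent = max(p[0] for p in lit) - min(p[0] for p in lit)
print(f"reflection extends {v_extent:.1f} m along the line of sight "
      f"but only {h_extent:.1f} m side to side")
```

With these numbers the lit region comes out tens of metres long along the line of sight but well under a metre wide – exactly the kind of stretching described above.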
There are a few numbers you can configure to see how the shape of the reflection changes under various scenarios:
Light source distance
The distance “into the screen” that the light source (the red dot) is from you.
Light source height
How much higher than you the light source is. You’ll want to make sure it’s higher than the reflector.
Reflector height
How much lower (using negative numbers) the reflective surface is than you. The simulation doesn’t look above horizontal for reflections, so this won’t work with positive numbers.
Maximum roughness
The maximum amount of roughness the reflective surface can have. Higher numbers are rougher, lower numbers are flatter.
Ray step size
How far apart the rays are, in degrees. The default setting is 0.1 degrees. Larger step sizes will make the simulation run faster, but it will be less precise.
The simulation shows how reflections can be stretched vertically in this way, depending on the roughness of the reflecting surface and the relative positions of the observer and the light source. If you make the light source very far away and near the horizon, you’ll see that the reflection can stretch all the way up to the horizon just like the Sun’s reflection in that picture.
However, there’s still a decent amount of horizontal spreading, so I don’t think this entirely explains the stretched reflections. Yesterday, I saw this beautiful photo on Twitter, taken by Ian Griffin of a sunset in Otago:
In this photo, there is pretty much no horizontal stretching. This can be seen in the black lines in the reflection caused by trees blocking the Sun’s light – if the reflection were stretching sideways then these would be blurred and wouldn’t have such a uniform thickness.
There could be a few things helping in this case. Because this particular example is taken with water being the reflective surface, and the observer was standing at the shore, the waves are mostly perpendicular to the line of sight. That would help minimise horizontal scattering.
It can’t be just that, though, because the same stretching is seen on rough surfaces where the roughness has no direction, such as wet roads:
I think the rest of this could possibly be explained by surfaces that reflect the light straight towards you from under the light source appearing larger, because they’re angled towards you. Surfaces to either side of the reflection could also reflect the light towards you, but perspective would cause them to be foreshortened and therefore contribute less to the overall picture.
On a rough reflective surface like the ocean or a dark wet road, reflections from bright lights like city lights, car brake lights, or the Moon appear stretched vertically. Why is this?
When a surface is perfectly flat, like a regular mirror, the image we see in the reflection isn’t distorted at all. Even if we put a mirror flat on the ground, we wouldn’t see a vertically stretched reflection like this.
Neither the road nor the ocean are perfectly flat though. Their surfaces are rough, and this rough surface scatters light when it’s reflected. If we imagine that each piece of the surface was a little flat mirror, with each piece facing in a random direction, some of these would be at the right angle to reflect light from a source (like the Sun) directly into our eyes, and most would not. We’d only see a reflection in those pieces that are at the correct angle to reflect the light into our eyes.
The further these little mirrors are from the area where we’d see the reflection in a flat mirror, the more extreme an angle they need in order to still reflect the light at us. If every one of these little mirrors was really, really tiny, we’d expect to see a smooth, blurry reflection rather than distinct glints – and the smaller the pieces appear, the less blurry the reflection would get.
We can actually see this in effect when we compare pictures of the Sun reflected off the ocean. When you’re quite near the ocean, all the different reflecting surfaces are relatively large so the reflection is quite blurry and broken (especially if there are lots of waves):
In comparison, if we look at a reflection of the Sun on the ocean that was taken from space, all the waves and ripples that distort the reflection are far too tiny to see, and as a result the reflection is quite clear and crisp:
Another difference that’s quite apparent between these photos is the vertical stretching that I’ve been wondering about. From up close, it’s very stretched. From a distance? Not so much. This gives me a thought, one that actually hadn’t occurred to me until I got to this point in writing this post and saw those images one after another:
What if it’s important that there’s a significant relative distance between the closest and furthest parts of the surface that are reflecting the light source?
From a long way away, these distances appear quite small. For example, if I’m 1 km away from a surface, then a 1 m distance between two points on that surface is really quite small. If I’m only a metre away myself though, then that’s a very significant distance.
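That intuition is just angular size: the apparent size of a feature shrinks in proportion to its distance. A quick sketch of the 1 m / 1 km example above:

```python
import math

def angular_size_deg(length_m, distance_m):
    """Apparent angular size, in degrees, of a feature of the given
    length seen face-on from the given distance."""
    return math.degrees(2 * math.atan(length_m / (2 * distance_m)))

print(angular_size_deg(1.0, 1.0))     # 1 m seen from 1 m: about 53 degrees
print(angular_size_deg(1.0, 1000.0))  # the same metre from 1 km: under 0.06 degrees
```

From a kilometre away, a metre of rough water spans a thousandth of the angle it does from arm's length, so the roughness effectively vanishes.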
As we just saw, reflections on non-flat surfaces are more blurry when they’re closer to you, so what if this vertical stretching is actually just the reflection getting more blurry towards the bottom, because that part of the road or ocean is closer? As it’s more blurry, this would let the edge of the reflection creep out further, and could look like stretching.
If I’m right, then I should be able to see the same type of stretching if I look at a reflection on a vertical surface, except the stretching would be horizontal in that case. I should also be able to replicate the same stretching effect if I can get a reflecting surface that is smooth on top and gets rougher towards the bottom, and look at a reflection of a light in it like I would a normal mirror (i.e. with the reflecting surface vertical and the light source behind me).
Let me know what you think of this idea in the comments, and if you have any ideas of your own for why we see these stretched reflections. Any ideas about how I could try to disprove my idea would be welcome too! In the meantime, I’ll try to do these experiments, and see if I can find an expert to talk to about this question.
On the second Wednesday of every month, there’s a great Twitter chat on science communication in New Zealand: #SciCommNZ
Unfortunately I’m always busy on Wednesday evenings while this is going on, but I’ve tried to participate as much as I can by joining in late and reading through each discussion. The questions that have been asked have made me think about the things I write about on this blog, and some of the things I’d like to write about.
After having these thoughts churn around in my head for a few weeks, I’ve come up with something I’d like to try.
There are a lot of “everyday science” questions that I see asked and answered fairly often. Common examples include “why is the sky blue?” (which is not quite as simple as you might think) and “how do rainbows work?”. I really like these questions, but I sometimes feel like they’ve all been done many times already.
Of course, they haven’t all been done many times already. But I do feel like I see the same “everyday science” questions over and over again. I think they’re great and really interesting the first time you encounter them, so I want more.
As a remedy to this, and as an attempt to do something different and (hopefully) interesting with my science communication, I’m going to start asking some of my own everyday science questions. This might be a bit grandiose, but I’m calling this little project of mine Natural Curiosity.
The format may change as I get into it, but the way I see this happening is to tackle each question in two parts:
First, I’ll write a post framing the question and some of my own thoughts (as a non-scientist) on what the potential answers might be, and what some problems with those potential answers might be. I want to try to do this without any Googling, but I might try a few homemade experiments. My hope would be that posts like these could get some interesting discussion going, but I guess we’ll see.
After that, I’d like to talk with someone who is an expert in a relevant topic and get their thoughts on the question, both on the potential answers brought up in the first post and on what they think the answer probably is and why. This isn’t something I’ve done before, so I hope I’ll be able to find some experts who’ll be happy to find some time to talk to me about this.
If you have any everyday science questions that you’ve been wondering about, let me know in the comments. I’ll update this post with links to posts using this format as I publish them.