Dear citizen scientists!
A while ago I promised to get HiRISE to image this scene again once the ice was gone. The image is now here, fresh from the production pipeline:
I framed this image exactly the same way as the original “topic-starter”, so everybody can see that we are talking about the same feature as 2 months ago. The ice cover is not completely gone yet: one can still see small persistent leftovers in some shadowed places, mostly in small channels. But the ice has cleared the feature in question. It turns out to be a crater with an unusual inner surface. As is often the case, it is hard to tell whether it is a cone or a depression, but we know that the sunlight comes from the bottom right-ish, and this tells us it is a crater. To see it better, here is the best zoom HiRISE has to offer (approx. 30 cm/pixel resolution):
Without the ice the crater floor looks smooth, and its sides show very pronounced polygons. The ice in the trenches between these polygons created the illusion of a bright branching “crown” in our original image. And the smooth blanketing on one side of the crater had a smooth ice cover that tricked people into seeing a Dalek or an octagonal-based chimney. So sorry that neither of those is real!
When I saw the polygons inside the crater, I decided to ask my more knowledgeable colleague about them. Mike Mellon (from Southwest Research Institute in Boulder, CO) has done a lot of research on polygons on Mars and Earth. He had a look at both the old and the new HiRISE images for us:
“I can see that there are loads of eroded polygons in the region. There are large ones with clear evidence of subdivision into smaller forms. Smaller polygons, roughly 3 meters in diameter, cover nearly all the surface. In some areas they are clear, and in others they are so small and flat that they are a little hard to see. They are completely invisible in the old image because of its poor resolution. The polygons inside the crater are on the order of 5 meters with more deeply incised troughs, so they are easier to see. The crater floor looks featureless. I see these same larger polygons in the neighboring pits, the pits that appear arrayed N-S. The larger polygon size in the crater may be caused by sunlight on the crater slopes.
The presence of polygons and their sizes are consistent with ice-cemented soils about 2-5 meters deep. But it is not easy to interpret the details, and it is especially hard to eliminate seasonal effects from this discussion, as this location is very close to the polar cap and spider-related processes happen here and there.”
With this we came back to where we started: fans and spiders. The image made it into this project because it had them all. Now plus polygons, minus the chimney.
Here at PlanetFour we are really happy to share with you color images taken with the HiRISE camera. But we should have written one disclaimer somewhere a long time ago:
HiRISE is a great camera, but unfortunately it does not show you the Martian surface in the colors that you would see with your own naked eyes.
If you found yourself on Mars (lucky you!), things would look different. Sorry about this, but actually no camera, even here on Earth, shows you exactly what your eyes see. You might be familiar with the situation when two different cameras take two images of the same scene and one image turns out reddish while the other one turns out totally greenish. Why is that? The answer lies in the way cameras, or more strictly speaking camera detectors, create color images.
Let’s start with creating a black and white image. This is fairly simple: each pixel of our CCD is sensitive to light, i.e. the more light it gets, the higher the digital number it records. When you arrange many pixels next to each other and place them all in front of a high-contrast scene, dark areas will get small numbers and bright areas large numbers, and you have created a digital image of the scene. This image contains only intensity information and no color information. In real life most standard CCD cameras are sensitive over the wide visible range, with enhanced sensitivity on the red side of the spectrum.
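The light-to-digital-number idea can be sketched in a few lines of code. This is a toy illustration: the scene values and the simple linear 8-bit response are assumptions for the sake of the example, not HiRISE specifics.

```python
import numpy as np

# Toy "scene": relative amount of light falling on each CCD pixel
# (values invented for illustration).
scene_light = np.array([
    [0.1, 0.2, 0.9],
    [0.1, 0.8, 0.9],
    [0.1, 0.2, 0.3],
])

# Each pixel converts light into a digital number; here we assume a
# simple linear response mapped onto 8-bit values (0-255).
digital_image = np.round(scene_light * 255).astype(np.uint8)

# Dark areas get small numbers, bright areas large ones.
print(digital_image)
```

The result contains only intensity information, exactly as described above: a single number per pixel, no color.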
By the way, our eyes have something like a CCD: the retina, an array of cells sensitive to light. For example, an array of rod cells would most closely resemble a black and white CCD. Rods are very sensitive and allow us to see in rather dim light, but only in shades of gray. Additionally, we have a set of cone cells of three different types. These are not as sensitive as rod cells and are highly specialized: each type is sensitive to a different part of the visual spectrum: red, green, or blue. Combining the signals from these gives us color vision.
As one can see in the plot below, the standard CCD’s spectral coverage is broader than that of the human eye, and it is more sensitive on the red side of the spectrum. Human vision coverage is shown here separately for scotopic vision (vision with rod cells, i.e. in low-light conditions) and photopic vision (with cone cells, i.e. color vision).
The figure is from http://www.microscopyu.com
To create a color image HiRISE combines images from three CCDs, each of which has a color filter placed in front of it. For a realistic color image we would ideally want to combine Red, Green and Blue images – or any other set of colors that, when mixed, can reproduce the full color spectrum available to the human eye.
In reality HiRISE has Red, Infra-Red and Blue-Green filters. The camera’s scientific team selected this set of colors to be able to distinguish different minerals in the top surface of Mars from each other, but also for some technical reasons. It is hard (and expensive) to make a CCD sensitive on the blue side of the spectrum. It is also hard to produce filters that let only a very narrow spectral band pass through. This is why the HiRISE Blue filter is slightly greenish.
If one simply merges the available HiRISE filters together, the result is similar to the left side of the image below. Here the Red channel carries the IR image from HiRISE, the Green channel the Red HiRISE image, and the Blue channel the Blue-Green HiRISE image. All three are scaled so that their minima and maxima are the same (what we call “normalized”). The small diagram in the top left corner shows the histograms for the three channels, each drawn in the color it represents; where they overlap, the color tells you the resulting color of the mix. Because the HiRISE IR and Red filters overlap, the red and green channels end up very similar, and our image is mostly yellow-green.
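The channel assignment and normalization described here can be sketched roughly as follows. Random arrays stand in for the real filter images, and the shapes, value ranges, and min-max normalization are assumptions for illustration, not the actual HiRISE processing.

```python
import numpy as np

def normalize(channel):
    """Scale a channel so its minimum maps to 0 and its maximum to 1."""
    c = channel.astype(float)
    return (c - c.min()) / (c.max() - c.min())

# Hypothetical raw filter images (random stand-ins for real HiRISE data).
rng = np.random.default_rng(0)
ir_img = rng.uniform(100, 900, size=(4, 4))        # Infra-Red filter
red_img = rng.uniform(120, 850, size=(4, 4))       # Red filter
bluegreen_img = rng.uniform(50, 400, size=(4, 4))  # Blue-Green filter

# Assign IR -> Red channel, Red -> Green channel,
# Blue-Green -> Blue channel, each normalized before stacking.
false_color = np.dstack([
    normalize(ir_img),
    normalize(red_img),
    normalize(bluegreen_img),
])

print(false_color.shape)
```

Because every channel is stretched to the same 0-to-1 range, none of them dominates simply by being brighter in the raw data; the color of the result depends only on how the filters differ from each other.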
On the right side you see a more realistic RGB image created by the HiRISE team to give us a better color impression. Computer code written by HiRISE engineers artificially creates a synthetic green channel from the information available in the three real HiRISE channels. It also slightly increases the “weight” of the blue and green channels when putting the three new channels together. In this image that helps to highlight the bluish fans. In other cases it might be useful to increase only blue (or maybe red?) to highlight other features, like minerals in the ground. We can see the effect of these changes in the histogram in the top right corner: the red channel stays mostly the same, while the blue and green ones are now narrow and tall, actually off the chart on this plot.
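The idea of a synthetic green channel can be sketched like this. To be clear, the actual HiRISE pipeline is more sophisticated; the weighted mix and the coefficients below are illustrative guesses, not the real algorithm.

```python
import numpy as np

def synthetic_green(red, bluegreen, w_red=0.4, w_bg=0.6):
    """Approximate a missing green channel as a weighted mix of the
    two channels that bracket it in wavelength.  The weights are
    illustrative assumptions, NOT the HiRISE pipeline coefficients."""
    return w_red * red + w_bg * bluegreen

# Tiny example channels (normalized 0..1 values, invented for the demo).
red = np.array([[0.8, 0.6],
                [0.4, 0.2]])
bluegreen = np.array([[0.3, 0.5],
                      [0.7, 0.9]])

green = synthetic_green(red, bluegreen)
print(green)
```

A pixel that is bright in both input channels comes out bright in the synthetic green as well, which is why such a channel can plausibly fill the spectral gap between Red and Blue-Green.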
The conclusion? The color information in the images you see in this project is real. The way this information is put together to create a color image varies from image to image. This makes it harder to compare colors between different images, but easier to see features inside each image. There is always a tradeoff, always. For now, let’s admire our pretty color images one at a time. With a pinch of colourful salt.
by Anya Portyankina and Klaus-Michael Aye
I stole the title of this post from our discussion forum. I hope the author – Paul Johnson – forgives me. It expresses well the feeling that many people had while arguing whether the lines they see are depressions or elevations. But this 1st rule is not 100% correct. Let me explain!
Like this example:
Do you see those squiggly lines as channels or as ridges? Chances are, you are seeing them as ridges. What about the next image?
Of course, you understand that this is the same image; in the second case we’ve simply rotated it 180 degrees.
So why do you see the same features once as positive topography and once as negative? And, by the way, how do we know which one is the real way?
Very often I get confused by this myself. But I know that the trick lies inside my own brain. The brain is an amazing machine for fast feature recognition. This is very useful in everyday life: we need to react fast to the objects around us, so to become efficient, the brain developed “fast tracks” that work well in most everyday situations and help us make microsecond-fast decisions.
One of them is finding faces in everything around us: leaves, clouds, wallpaper, the Martian landscape… It is helpful for a human to recognize another human! That’s why people keep finding faces everywhere, including on Mars. An interesting extension of this is our amazing ability to see a face the “correct way” in the negative-face optical illusion. Here you can see a video about it.
Another assumption hard-wired into our brain is that light comes from above. You must agree, most of the time it does. So, to decide whether a surface bends away from us or towards us, our brain assumes that light comes from the top (and somewhat from the top-left for most people, though not 100% of people).
Unfortunately, in our project most images have light coming from the lower right of the image – the opposite of what our brain prefers! This is why it is so hard to see spider troughs as channels, as in the first image above.
The fact that most of our images got that “unlucky” illumination is not a coincidence. Here is why:
Imagine Mars as a globe and a spacecraft that flies around it in a nearly polar orbit: it goes from north to south, passes almost over the south pole, and flies from south to north on the other side of Mars. Then it repeats it all again. We do not rotate the camera relative to the spacecraft; we can only rotate or tilt the spacecraft as a whole. An image of the southern polar areas taken on the descending branch of the orbit (flying from north to south) will have sunlight coming from the bottom, because the Sun shines from the direction of the equator, the same as on Earth. If we take an image on the ascending branch (moving from south to north), the sunlight will come from the top of the image.
Now, one side of Mars has day while the other is in the darkness of night, so we cannot image there. This means we mostly get one of those branches for imaging; which one depends on the exact orbit parameters.
As a result, most of the images you get to see have the Sun coming from the lower side. Not all of them: rarely, we do image on the other side of Mars, mostly in summer, when the polar areas get polar day.
It is unlucky that we got the most problematic Sun position for our project. When preparing the images, we first intended to map-project them onto the Martian surface, but that turned out to be more time-consuming than useful. We want you to mark dark fans and blotches (not spider channels!), and for that unprojected images are good enough.
So, the 1st corrected rule of Mars: when you see an image of a Martian landscape for the first time, do not believe your brain straight away. First check the direction of the Sun. You can see which parts are in shadow and which are illuminated, and then figure out the topography.
Ever fancied taking a trip to the planet Mars? On Planet Four (http://www.planetfour.org) we’re asking everyone to help us find ‘fans’ on the Martian surface. Your classifications on this site will enable researchers to better understand the climate and surface of Mars. The images you see here are taken using a high-resolution camera in orbit around the Red Planet. We collect together everybody’s markings of objects on these images and average the result. This is a task that computers are not reliable at, but which humans are really good at. By crowdsourcing your efforts, we can produce data that will show us what is happening in these images.
Every winter Mars’ polar region is blanketed by a layer of frozen carbon dioxide (dry ice). In the Martian spring, this temporary polar cap warms back up and rapidly changes directly from ice to gas (a process called sublimation). This produces geyser-like eruptions through weak spots in the polar cap. Dust is trapped in these eruptions and is carried away by the wind where it then falls in a fan shape across the surface nearby – you can see these fans in the images on this page.
Over the winter the seasonal ice layer is transformed to translucent slab ice, which allows sunlight to penetrate to the ground below in the spring. The ground warms up causing the ice to sublimate from the bottom. This results in gas becoming trapped below the ice layer, under increasing pressure. When a crack or a rupture develops the gas flows out the opening. The escaping gas carries along loose material eroded from the ground. The gas and fine material flow up to the top of the ice layer and out into the ambient wind, as shown in the diagram above. The material lands on top of the seasonal ice layer, downwind of the vent, in fan-shaped deposits. After the seasonal ice layer is gone the fans blend back into the surface material and are no longer visible.
This same activity occurs every spring, and slowly erodes channels in the ground. Often the channels are radially organized, thus earning the colloquial name “spiders”, and more formally “araneiform.” Channels are wide and shallow, generally less than 2 meters deep. In the winter the spiders are visible because the ice is draped over them; in the summer when the terrain is ice-free we see that the spiders are indeed channels carved into the surface, shown in the time-lapse sequence below.
The fans are markers for the wind direction and speed at the time that the gas was escaping and carrying its load of entrained material. Sometimes the vents close and re-open and sometimes the wind changes direction, with the result that we see multiple fans from a single source region. This is all raw data that can be compared to predictions of models for Mars’ atmospheric dynamics, known as global circulation models (GCMs). Sometimes when the conditions are just right the gas will condense into fresh frost particles and form bright fans.
We hope to share more about the background to this project via this blog. Meanwhile, why not go and find some fans? Visit http://www.planetfour.org to start classifying.