
Ho Ho Ho, Happy New Year! New paper! New data!

Hello beloved Planet Four Citizen Scientists!

This is Michael, reporting back after a long break.

What happened in 2022? Anya and I were very busy moving our family, toddler son, dog, and cat included, back to Germany. We have now settled into our new jobs here in Berlin, and I finally had some time to prepare a new data set for our original Planet Four project!

More good news: another paper based on Planet Four results was accepted this year: “Planet Four: A Neural Network’s search for polar spring-time fans on Mars”, led by our Australian colleagues Mark McDonnell and Eriita Jones, with Meg as the corresponding author. The paper investigates how a basic machine-learning approach to our fan and blotch identification problem compares to the results of our citizen-science approach. We can happily report that the citizen-science approach fares very well: the machine-learning results do not yet match its precision in getting the best out of the data.

However, the paper also shows that there is considerable potential to make the work less tedious and more efficient for citizen scientists, by using the machine-learning approach to filter out images that with high certainty do not contain anything of interest to mark. We look forward to implementing some of these approaches in our pipeline this year, so that, with your help, we can churn through datasets more quickly!

The paper is published at https://www.sciencedirect.com/science/article/pii/S0019103522004006 and is freely available at https://arxiv.org/abs/2210.09152

About the new data: this time we are extending the time-series coverage of some of our favourite regions of interest: Inca City, Giza, and Macclesfield. The data span mission years 5 to 8 for Macclesfield and 7 to 8 for Inca City and Giza.

We sincerely hope that many of you can find your way back to us after such a long break and we wish you lots of fun in exploring more beautiful Martian landscapes.

Wishing you a healthy and successful 2023!

Michael, for the Planet Four team!

Research Experience for Undergraduates

There is a nice program for undergraduates here in the US that is called REU: Research Experience for Undergraduates.

Within this framework the Planet Four team here in Boulder, CO was fortunate to receive funding support from the United Arab Emirates for one of these undergraduate positions this summer.

Working with us was Shahad Badri, and the project we came up with for her was to look at the blotch data from Planet Four. Our hypothesis was that this should be relatively straightforward and provide insights into the jet eruption physics alone, because no wind was involved in depositing these jet deposits.

As usual, reality turned out to be more complicated than our naive thinking, so we are still digesting the results of plotting blotch areas and eccentricities over the Martian year (measured in solar longitude, Ls), but we wanted to show you the nice poster that Shahad put together at the end of the summer project.
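For readers curious what such a seasonal plot involves, here is a minimal sketch, assuming a hypothetical blotch catalog with columns "l_s", "radius_1", and "radius_2" (the ellipse semi-axes); the file and column names are illustrative, not our actual catalog format.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

blotches = pd.read_csv("blotch_catalog.csv")  # hypothetical catalog file

# Derive area and eccentricity from the two ellipse semi-axes.
a = blotches[["radius_1", "radius_2"]].max(axis=1)
b = blotches[["radius_1", "radius_2"]].min(axis=1)
blotches["area"] = np.pi * a * b
blotches["eccentricity"] = np.sqrt(1 - (b / a) ** 2)

# Median area and eccentricity in 5-degree bins of solar longitude (Ls).
bins = np.arange(180, 361, 5)
blotches["ls_bin"] = pd.cut(blotches["l_s"], bins, labels=bins[:-1] + 2.5)
binned = blotches.groupby("ls_bin", observed=True)[["area", "eccentricity"]].median()

fig, axes = plt.subplots(nrows=2, sharex=True)
axes[0].plot(binned.index.astype(float), binned["area"], "o-")
axes[1].plot(binned.index.astype(float), binned["eccentricity"], "o-")
axes[0].set_ylabel("median blotch area")
axes[1].set_ylabel("median eccentricity")
axes[1].set_xlabel("solar longitude Ls [deg]")
plt.show()
```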

As in any exciting science project, the analysis created more questions than answers. We will need to compare these results with the geometrical parameters of the fans, to see where transitions between fans and blotches might occur when ground winds shift deposits around and make blotches look more fan-like.

Thanks to Shahad for her diligent work over this summer!

final poster.png

Full res PDF

Planet Four paper accepted!

Dear fellow Planet-Four-ians,

It is my great pleasure to announce that the Icarus journal has accepted our paper “Planet Four: Probing Springtime Winds on Mars by Mapping the Southern Polar CO2 Jet Deposits” for publication!

The edits requested by the reviewers were minor; we addressed what we thought was appropriate for the already huge scope this paper tries to cover, and the editors accepted our submitted revision. I have also updated the arXiv preprint with that revision, so it is now available in its final “content” form here: https://arxiv.org/abs/1803.10341. We publicly acknowledge everyone who contributed classifications to this paper and gave us permission to use their name on the page https://www.planetfour.org/authors.

We have now entered the typesetting phase, in which the article is formatted to the journal’s style and things like figure placement are decided.

Next in line for Planet Four is waiting for the selection of NASA’s Solar System Workings proposals. We submitted one in spring to receive funding for a deeper exploitation of the Planet Four results and to use them to guide the creation of a geophysical model of CO2 jets. According to recent information we have received, we expect the selections to be made in the first half of September.

Fingers crossed that we can continue further together on this exciting venture!

Tag an image or two at https://planetfour.org !

First publication submitted!

We have finally submitted our first paper for the original Planet Four project to the Icarus journal, where it is now officially “Under Review”!

(The figure above is one from the paper, in which we demonstrate one of the reduction steps that identifies noise and creates averaged clustered markings. I think it demonstrates the power of our chosen methodology well.)

Thank you to everyone for staying with us for so long without seeing any published results; I think when you see the work and care that went into it, you will understand why it took us so long. One of the reasons was, as we have possibly mentioned before on this blog, that our Zooniverse project is actually one of the more difficult ones, because we ask all of you to precisely mark objects in the data presented to you. This required a spatial clustering pipeline with a long evaluation and fine-tuning phase.

Which brings me to the point of “see[ing] the work”: we have now had the submitted preprint published on the well-known arxiv.org preprint server, and you can get your hands on a copy right now! Just click this link to go to the arXiv page for our preprint:

https://arxiv.org/abs/1803.10341

Enjoy the (long!) read, and don’t hesitate to post any questions you have in the comments section below!

Planet Four Clustering investigations

We want to share a quick update on the depth and detail of the investigations we are doing to close out the last issues in our analysis pipeline, which identifies fans and blotches from the classifications of the original Planet Four.

We recently realized that it might be a good idea to allow different distance limits for clustering depending on the overall marking size. Intuitively this makes sense, as one automatically takes a bit more care the smaller the object to be marked is.

However, more clustering versatility also means more parameters to set which need to be tested for their efficacy.

Below you can see two plots, one for “fan” markings, the other for “blotch” markings, that show different parameter settings for a clustering run and their effects on the final result.

This Planet Four image tile with the ID ’17a’ is one of the more problematic ones due to its very large but diffusely defined blotch, and the markings are, understandably, all over the place.

17a_fan_scalefalse_radiitrue

Each plot title calls out the values EPS and EPS_LARGE. These are the above-mentioned distance limits for clustering. Here I keep the EPS value, the one for smaller markings, constant over several tests, while I step the one for larger markings, EPS_LARGE, from 50 to 90 in steps of 20.

17a_fan_scalefalse_radiitrue

As one can see, the large blotch “survives” in all cases (which it did not before we introduced the split-by-size clustering approach), while in the fan case it only survives when the “MS” parameter, the minimum number of markings a surviving cluster needs to have, is set to 5. When requesting 7, there are just not enough markings for it to survive. But that is okay, because I am pretty sure this object will survive as a blotch rather than a fan, due to the higher number of markings that voted for that.
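To make the EPS / EPS_LARGE / MS parameters a bit more concrete, here is a minimal sketch of the split-by-size clustering idea, assuming DBSCAN from scikit-learn; the column names, the size cut, and the parameter values are illustrative assumptions, not the tuned values of our actual pipeline.

```python
import pandas as pd
from sklearn.cluster import DBSCAN

EPS = 20          # distance limit (pixels) for clustering small markings
EPS_LARGE = 70    # relaxed limit for large markings (stepped from 50 to 90 in the tests)
MS = 5            # min_samples: minimum number of markings a surviving cluster needs
SIZE_CUT = 100    # marking radius (pixels) separating "small" from "large"; illustrative

def cluster_markings(markings: pd.DataFrame) -> pd.DataFrame:
    """Cluster marking base points, using a larger eps for large markings."""
    averaged = []
    for is_large, group in markings.groupby(markings["radius"] > SIZE_CUT):
        eps = EPS_LARGE if is_large else EPS
        labels = DBSCAN(eps=eps, min_samples=MS).fit_predict(group[["x", "y"]])
        group = group.assign(cluster=labels)
        # Label -1 means noise; only clusters with at least MS members survive.
        survivors = group[group.cluster != -1]
        averaged.append(survivors.groupby("cluster")[["x", "y", "radius"]].mean())
    return pd.concat(averaged, ignore_index=True)
```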


Status of analysis pipeline

Dear Citizen Scientists!

Long time no hear from me, sorry guys! Last year I was struggling to manage four projects in parallel, but at least one of them is finally a funded PlanetFour activity (since last August), yeah!

I’m now down to three projects, with another one almost done, leaving me more time for PlanetFour. Things are progressing slowly, but steadily. To recap, here’s where we are:

We have identified five major software pipelines that are required for the full analysis of the PlanetFour data, taking your markings all the way to results that can be used in a publication or shown at a conference. Four of these pipelines are basically done and stable, while the fifth exists as a manual prototype but has not yet been turned into a stable chain of code that can run from beginning to end. Figure 1 shows the four finished pipelines.

Pipeline_shrunk

Figure 1: The current manifestation of the PlanetFour analysis pipeline.

The need for the fifth pipeline was only discovered recently, when we tried to create the first science plots from PlanetFour data: some of the HiRISE input data that we use is of such high resolution (almost a factor of 2 better than the next level down) that citizen scientists discover a lot more detail in it than in the other data. This led to an unnatural jump in the number of marked objects over time, making us wonder for a bit why a sudden increase in activity would occur so late in the polar summer, until I checked the binning mode of the HiRISE data used for those markings. All of the ‘funny data’ were taken at the highest resolution possible, while other images are binned down by a factor of 2 or 4 to save data-transport volume.

So, we now understand that we need to filter and/or sort by the imaging mode HiRISE was in when the data were taken. This is not a big deal; it just needs to be implemented in a stable fashion instead of as trial-and-error code in a Jupyter notebook.
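Conceptually the filter is simple; here is a minimal sketch, assuming the marking catalog has already been joined with HiRISE metadata into columns called "binning" and "obsid" (both names are illustrative assumptions, as is the file name).

```python
import pandas as pd

markings = pd.read_csv("p4_markings_with_metadata.csv")  # hypothetical catalog

# Treat each binning mode separately, so the unbinned (highest-resolution)
# images do not create an artificial jump in the number of marked objects.
for binning, group in markings.groupby("binning"):
    print(f"binning x{binning}: {len(group)} markings in "
          f"{group['obsid'].nunique()} HiRISE observations")

# Or simply restrict an analysis to one consistent imaging mode:
binned_only = markings.query("binning > 1")
```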

Okay, the other new thing: for months we were clustering your markings using only the x,y base coordinates of fans and the x,y center coordinates of blotches. This simplest approach already worked quite well, but a closer review of the acceptance and rejection rates revealed that some of the more ‘artistically’ motivated markings would survive this reduction scheme and create final averaged objects that, at a quick glance, seem to come from nowhere. Take Figure 2 for example:

artistic_marking_without_angle_clustering

Figure 2: Process chain for one PlanetFour image_id. Upper left: The HiRISE tile as presented to citizen scientists. Upper middle and right: The raw fan and blotch markings as created by YOU! 😉 Lower right and middle: The reduced cluster average markings. Lower left: After fnotching and cutting on 50% certainty, the resulting end products.

One can see that the lower left image, the end of the first three pipelines, contains some markings that seem to come out of nowhere. They are in fact created by an artistic set of fans visible in the upper middle plot, where three fan markings were placed where no visible ground features are. Because the base points of these three fans touch each other nicely, they survive the clustering reduction, as the algorithm thinks they are a group of valid markings. Or, better said, it *thought* so: I have now taught it better, and it includes the marking direction as a criterion for the clustering as well. As Figure 3 shows, this helps clean up the magical fans out of nowhere.

artistic_marking_with_angle_clustering

One can see that there are still some double blotches visible, but another loop over those remaining ones, checking for closeness to each other, will unify those as well.
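For those wondering how a direction can be folded into a spatial clustering, here is a minimal sketch of one way to do it; the eps, min_samples, and angle_scale values are illustrative assumptions, not the parameters of our actual pipeline.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_fans(x, y, angle_deg, eps=25, min_samples=5, angle_scale=30):
    """Cluster fan markings on position *and* pointing direction.

    The angle is mapped onto scaled sin/cos components so that directions
    like 359 and 1 degrees end up close together; angle_scale controls how
    much a direction difference counts compared to a pixel distance.
    """
    rad = np.radians(angle_deg)
    features = np.column_stack([
        np.asarray(x),
        np.asarray(y),
        angle_scale * np.cos(rad),
        angle_scale * np.sin(rad),
    ])
    # Returns a cluster label per marking; -1 marks rejected noise markings.
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(features)
```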

One last thing I want to mention is “fnotching”, as some of you might wonder what that actually means. In difficult-to-read terrain or lighting, or when the features on the ground are hard to classify as either fans or blotches, it happens that the same ground object is marked both as a fan and as a blotch, and both often enough to survive the clustering. We call these chimera objects “fnotches”, glued together from FaNs and blOTCHES. 😉 What we do is loop over the objects that survive the clustering, and if a fan and a blotch are close to each other, we store how many citizens voted for each, create a statistical weight out of that (the ‘fnotch’ value), and store it with the fnotch object. Then, at a later point, depending on the required certainty, we can ‘cut’ on that value and, for example, only consider something a fan if 75% of all citizens who marked this object marked it as a fan. That way we can create final object catalogs tailored to the science project the catalog is being used for.
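In code, the bookkeeping described above boils down to something like the following minimal sketch; the class and its fields are illustrative, not the actual objects of our pipeline.

```python
from dataclasses import dataclass

@dataclass
class Fnotch:
    fan: object           # the surviving clustered fan marking
    blotch: object        # the surviving clustered blotch marking
    n_fan_votes: int      # citizens who marked this object as a fan
    n_blotch_votes: int   # citizens who marked it as a blotch

    @property
    def value(self) -> float:
        """Fraction of votes calling this object a fan (the 'fnotch' value)."""
        return self.n_fan_votes / (self.n_fan_votes + self.n_blotch_votes)

    def cut(self, threshold: float = 0.5):
        """Return the fan if enough citizens voted for it, else the blotch."""
        return self.fan if self.value >= threshold else self.blotch

# Example: only accept the object as a fan if 75% of its markers called it a fan.
# final_object = fnotch.cut(threshold=0.75)
```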

We have just submitted another conference abstract with the most recent updates to the 47th Lunar and Planetary Science Conference, and I seriously, seriously want our paper to be submitted by then, so that you all can see what wonderful stuff we created from all your hard work!

Wish us luck and have a Happy 2016 everyone! Or, as the star of one of my favorite video blogs, HealthCare Triage, keeps saying: To the research!

Michael

Introducing Planet Four: Terrains

Dear Martian Citizen Scientists!

We are excited to introduce to you a new companion citizen science project to Planet Four called “Planet Four: Terrains”, built on the Zooniverse’s new platform. Here in Planet Four you have explored with us some of the most detailed surface observations ever made in our solar system, and many of you have noticed and wondered about all the other amazing features visible in these images that we did not ask you to study, like spiders, networks of channels, and weird-looking craters. (Some of you will remember that one of these even led to a re-observation of the same crater with the HiRISE camera.)

5143480cea305267e90010f4

HiRISE-imaged spiders – Image Credit: NASA/JPL/University of Arizona

It is an interesting fact that a camera that can resolve a lot of small detail cannot also scan a lot of area; one has to choose, as long as we do not have infinite data-transport capability and infinite mission time at other planets and moons in the solar system. That is why the Mars Reconnaissance Orbiter (MRO), the spacecraft that carries the HiRISE camera that produced all the images in the Planet Four project, has a complementary camera system onboard to provide context, appropriately called CTX for ConTeXt camera. It has a lower resolution than HiRISE (approximately 5-6 m per pixel compared to HiRISE’s 25 to 50 cm) but covers a far larger region per image than HiRISE.

APFT00000e28

CTX image – Image Credit: NASA/JPL-Caltech/Malin Space Science Systems

So here is our idea: we confirmed that many of the features you were asking about are still recognizable in the lower-resolution CTX images. We would therefore like your help in gathering spatial statistics on where around the south pole we can find the different kinds of ground patterns related to CO2 ice activity. Your classifications of CTX data into a set of ground patterns will help decide where the HiRISE camera will be pointed during the 2016 south polar spring observation campaign. In this way your contributions directly improve the scientific output of both CTX and the HiRISE camera, and we are very excited to give you a way to point the highest-resolution camera in the solar system at the most interesting areas of the Martian south pole!

You can find the new project, a more detailed science case description and an awesome spotter’s guide at this address: http://terrains.planetfour.org

Thanks as always for your time and your enthusiasm!

Michael

Clustering the PlanetFour results

Our beloved PlanetFour citizen scientists have created a wealth of data that we are currently digging through. Each PlanetFour image tile is retired after 30 randomly selected citizens have pressed the ‘Submit’ button on it. Now, we obviously have to create software to analyze the millions of responses we have collected from the citizen scientists, and sometimes objects in the image are close to each other, just like in the lower right corner of Figure 1.

APF0000zg7_raw_HiRISE_frame

Figure 1: Original HiRISE cutout tile that is being shown to 30 random PlanetFour citizen scientists.

And, naturally, everybody’s response to what can be seen in this HiRISE image is slightly different, but fret not: this is what we want! The “wisdom of the crowd” effect means that the mean of many answers is very close to the real answer. See Figure 2 below for an example of the markings we have received.

APF0000zg7_original_markings

Figure 2: Original markings of P4 Citizen scientists.

Note the number of markings in the lower right, covering both individual fans visible in Figure 1. The software analyzing these markings needs to be able to work out which visual object in the image each individual citizen scientist meant to mark. And I admit, looking at this kind of overwhelming data, I was a bit skeptical that it could be done. That would still have been fine, because one of our main goals is to determine wind directions, and as long as every subframe yields a wind direction, we have learned A LOT! But if we can disentangle these markings into individual fans, we can learn even more: we can count the amount of activity per image more precisely, to learn how ‘active’ an area is, and we can even detect changes in wind direction if two different wind directions can be distinguished at the same source of activity. For that, we need to separate these markings as well as possible.

And we are very glad to tell you that it indeed seems possible, using a modern data analysis technique called “clustering” that looks at relationships between data points and how they can be combined into more meaningful statements. Specifically, we are using the so-called “DBSCAN” clustering algorithm (LINK), which lets us choose the minimum number of markings required to form a cluster family and the maximum distance allowed between a marking and that family before it is ‘rejected’ from it. Once the cluster members have been determined, simple mean values of all marking parameters are taken to determine the resulting marking; Figure 3 shows the result.

APF0000zg7_clustered_markings

Figure 3: Clustered markings for P4 tile ZG7

Just look at how beautifully the clustering has combined all the markings into results that very precisely resemble what can be seen in the original data! The two fans in the lower right have been identified with stunning precision!
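For the technically curious, here is a minimal sketch of what such a DBSCAN-based reduction step can look like in Python; the column names and the eps/min_samples values are illustrative assumptions, not the tuned parameters of our pipeline.

```python
import pandas as pd
from sklearn.cluster import DBSCAN

def reduce_markings(markings: pd.DataFrame, eps=25, min_samples=3) -> pd.DataFrame:
    """Combine the many citizen markings of one tile into averaged markings."""
    # eps: maximum distance (pixels) between markings of the same cluster
    # min_samples: minimum number of markings required to form a cluster
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(
        markings[["x", "y"]]
    )
    markings = markings.assign(cluster=labels)
    # Label -1 means "noise": markings that did not join any cluster.
    clustered = markings[markings.cluster != -1]
    # The resulting marking is the simple mean of all parameters per cluster.
    return clustered.groupby("cluster").mean(numeric_only=True)
```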

For an even more impressive display of this, have a look at the animated GIF below, which lets you track the visible fans, how they are being marked, and how these markings are combined into a very precise representation of the object on the ground. It’s marvelous, and I’m simply blown away by the quality of the data we have received and how well this works!

APF0000zg7_animated_gif

Figure 4: Animated GIF for the clustering of the markings of APF0000zg7

This is not to say, though, that all is peachy and we can sit back and push some buttons to get these nice results. Sometimes they do not look as nice as these, and we need to carefully balance the amount of work we invest in fixing those cases, because we need to get the publication out into the world so that all the citizen scientists can see the fruits of their labor! And sometimes it is not even clear to us whether what we see is a fan or a blotch, but that distinction is of course only a mental aid for whether wind was blowing at the time of the CO2 gas eruption or not. We have some ideas for how to deal with those situations, and that is one of the final things we are working on before submitting the paper. We are very close, so please stay tuned and keep submitting these kinds of stunningly precise markings!

For your viewing pleasure I finish with another example of how nicely the clustering algorithm works to create final markings for a PlanetFour image:

APF0000zmj_animated_gif

Figure 5: Animation for the clustered markings process of P4 image ZMJ

The Sun is back!

I wrote a post on my new blog showing how I go about finding out what’s currently going on at the south pole of Mars!
Sorry for the cross-linking, but there’s no way to show the nice IPython notebooks (combining text and code) in a clear and pretty format here in WordPress:

Animated markings

After the first day of our currently ongoing Citizen Science workshop, I have reached my personal goal for the day and created a tool to display an animated version of all the marked blotches of a PlanetFour image.
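In case you are curious how such an animation can be put together, here is a minimal sketch using matplotlib’s animation module; the file names and catalog columns are illustrative assumptions, not the actual tool.

```python
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
from matplotlib.patches import Ellipse

blotches = pd.read_csv("APF0000zg7_blotches.csv")  # hypothetical blotch catalog
tile = plt.imread("APF0000zg7.jpg")                # hypothetical tile image

fig, ax = plt.subplots()
ax.imshow(tile, cmap="gray")
ax.set_axis_off()

def add_blotch(i):
    # Draw one blotch ellipse per animation frame.
    row = blotches.iloc[i]
    ax.add_patch(Ellipse((row.x, row.y), 2 * row.radius_1, 2 * row.radius_2,
                         angle=row.angle, fill=False, color="red"))

anim = FuncAnimation(fig, add_blotch, frames=len(blotches), interval=200)
anim.save("APF0000zg7_blotches.gif", writer="pillow")
```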

Here’s the result:

More to come in the upcoming days.