
Credits: NASA’s Goddard Space Flight Center and NASA Center for Climate Simulation.
Read the full story on NASA’s Visualization Explorer site.

I started this blog in 2012; in these 3 1/2 years it has been a wonderful way to channel some of my interests in image processing, geophysics, and visualization (in particular colour), and more recently Python.
During this time, among other things, I learned how to build and maintain a blog, I packaged a popular Matlab function, wrote an essay for Agile Geoscience’s first book on Geophysics, presented at the 2012 CSEG Geoconvention, and wrote two tutorials for The Leading Edge. Last, but not least, I made many new friends and professional connections.
Starting in 2016, I would like to concentrate my efforts on building useful (hopefully) and fun (for me at least) open source (this one is for sure) tools in Python. This is going to be my modus operandi:
Here are a couple of examples of ongoing projects:
The idea for this project was inspired by Matt Hall of Agile Geoscience. The plan is to eventually build a web app that will:
This is a project started at the 2015 Calgary Geoscience Hackathon organized by Agile Geoscience with Elwyn Galloway, Evan Saltman, and Ben Bougher. The original idea, proposed by Elwyn at the Hackathon, was to make an app that would turn an image of a geological sketch into a model, as in the figure below.
The implementation of the finished app involves using morphological filtering and other image processing methods to enhance the sketch image and convert it into a model with discrete bodies, then pass it on to Agile’s modelr.io to create a synthetic.
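The image-processing side of this can be sketched with standard morphological operations. Here is a toy example (my own construction, not the app's actual pipeline) that closes a small gap in a binary outline and fills it into a discrete body, using scipy.ndimage:

```python
import numpy as np
from scipy import ndimage

# Toy binary "sketch": the outline of a square body, with a
# one-pixel gap in its right edge.
sketch = np.zeros((7, 7), dtype=bool)
sketch[1, 1:6] = True   # top edge
sketch[5, 1:6] = True   # bottom edge
sketch[1:6, 1] = True   # left edge
sketch[1:6, 5] = True   # right edge...
sketch[3, 5] = False    # ...with a gap

# Morphological closing (dilation followed by erosion) bridges the
# gap; filling holes then turns the outline into a discrete body.
closed = ndimage.binary_closing(sketch, structure=np.ones((3, 3)))
body = ndimage.binary_fill_holes(closed)
```

In the real app the same idea would be applied to a photograph of a hand-drawn sketch, with additional enhancement steps beforehand.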
The remote sensing used to study the human body is very similar to the remote sensing used to study the subsurface. Apart from a scaling factor (due to the different frequencies of the signals used), the only major difference between the two methods of investigation is that radiologists and doctors looking at an x-ray, ultrasound, or CAT scan know what to look for in those images, as bones, tissues, and anomalies have known characteristics, whereas the subsurface is always, to a large extent, unknown.
In this short visual post I am going to use a CAT scan of King Tut’s skull to explore the effect on the image quality of progressive decimation of the data followed by upsampling it back to the initial size. I will also look at the effect of these manipulations on the results of edge detection.
With this I want to simulate the progressive reduction in imaging quality that happens when going from high density 3D seismic acquisition to medium density 3D seismic to high quality, but sparse 2D seismic lines.
Here’s the input image in Figure 1.
Figure 1. CAT scan of King Tut’s skull – Supreme Council of Antiquities. guardians.net/hawass/press_release_tutankhamun_ct_scan_results.htm
In Figure 2 I am showing the image after import into a Jupyter Notebook and conversion to grayscale, together with the result of edge detection using the Sobel filter. Notice the excellent quality of the edge detection result.
Figure 2. Original image, or ground truth for the experiment, and edge detection result.
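For readers who want to reproduce the edge detection step, here is a minimal sketch; a synthetic disk stands in for the CAT scan image, and scipy.ndimage's Sobel filter stands in for whichever implementation was used in the notebook:

```python
import numpy as np
from scipy import ndimage

# Synthetic grayscale stand-in for the CT image: a bright disk on a
# dark background.
yy, xx = np.mgrid[:128, :128]
gray = (((xx - 64)**2 + (yy - 64)**2) < 40**2).astype(float)

# Sobel gradient magnitude: strong response along the disk boundary,
# zero in the flat interior and background.
grad = np.hypot(ndimage.sobel(gray, axis=0), ndimage.sobel(gray, axis=1))
```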
To simulate a high-resolution 3D seismic acquisition I decimated the original image by a factor of 4 in both directions. The resulting image (no interpolation), shown in Figure 3, is of good quality, and so is the edge detection result.
Figure 3. Simulated high-resolution 3D survey and edge detection result.
The image in Figure 4 results from a further decimation by a factor of 2 of the image in Figure 3, followed by interpolation to upsample back to the same size as the image in Figure 3. The image and the edge detection are still of fair quality overall, but some of the smaller features have either disappeared, merged, or faded.
Figure 4. Simulated medium resolution 3D survey and edge detection result.
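The decimation and upsampling steps can be sketched like this (a synthetic image stands in for the scan; the array names and the use of scipy.ndimage.zoom are my own choices):

```python
import numpy as np
from scipy import ndimage

# Synthetic 128 x 128 "ground truth" standing in for the CT image.
yy, xx = np.mgrid[:128, :128]
truth = np.sin(xx / 6.0) * np.cos(yy / 9.0)

# Figure 3 analogue: decimate by 4 in both directions (no interpolation).
fig3 = truth[::4, ::4]             # 32 x 32

# Figure 4 analogue: decimate by a further factor of 2, then
# interpolate (spline upsampling) back to the Figure 3 size.
coarse = fig3[::2, ::2]            # 16 x 16
fig4 = ndimage.zoom(coarse, 2.0)   # back to 32 x 32
```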
Now look at Figure 5: this is the equivalent of a high quality (in one direction) 2D dataset. Although we can still guess at what this represents, I would argue this is a result of our a priori knowledge of what it is supposed to represent – a human skull; and yet I don’t think anybody would want their doctor to make a diagnosis based on this image.
Figure 5. Simulated set of very high-resolution 2D lines.
The image in Figure 6 results from 2D interpolation (my intention is to simulate the result we would get by gridding 2D data to get a continuous image). We can now definitely interpret this as a skull, but the edge detection result is very unsatisfactory.
Figure 6. Simulated interpolation of 2D lines.
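The 2D-line simulation of Figures 5 and 6 amounts to keeping full resolution along one direction while heavily decimating the other, then interpolating across the gaps. A sketch with a synthetic image (scipy.ndimage.zoom standing in for whatever gridding method was actually used):

```python
import numpy as np
from scipy import ndimage

# Synthetic 128 x 128 image standing in for the CT scan.
yy, xx = np.mgrid[:128, :128]
truth = np.sin(xx / 6.0) * np.cos(yy / 9.0)

# Figure 5 analogue: dense sampling along "lines" (rows), but only
# every 8th column is acquired.
lines = truth[:, ::8]                       # 128 x 16

# Figure 6 analogue: interpolate across the sparse direction to
# rebuild a continuous 128 x 128 image.
gridded = ndimage.zoom(lines, (1.0, 8.0))   # 128 x 128
```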
In a future post we will explore the effect of adding periodic noise (similar to seismic acquisition footprint) to these images, and its impact on the edge detection results. I will also show you how to remove it using 2D FFT filters, as promised (now more than a year ago) in my post Moiré Patterns.
While on a flight to Denmark a couple of years ago I happened to read this interview with Swedish astronaut Christer Fuglesang. Towards the end he talks about the feasibility (and proximity) of our future missions to the Moon. This subject always gets me excited. If there's one thing I'm dying to see, it's manned missions to the Moon and Mars in my lifetime (I was born in 1971, so I missed the first Moon landing by a hair).
I hope we do it soon Christer!
My personal take is that of the dreamer, the 12-year-old: why go back to the Moon? Because it’s there…. I mean, look at it (photo credits: my uncle, Andrea Niccoli)!!
Beautiful red Moon.
Beautiful Moon. In evidence the Sinus Iridum in the top left, the Copernicus crater in the centre of the image, and the Apenninus Montes just North of it with the Eratosthenes crater.
Beautiful Moon. In evidence the Gassendi crater in the centre of the image, and the Tycho crater to the right with one of the Rays.
On a more serious note, this is what Lunar scientist Paul Spudis has to say about why we should go back:
Rift valleys rewrite moon’s fiery history
NASA’s LRO Creating Unprecedented Topographic Map of Moon
Moon composition mosaic Galileo
Recent geological activity Lunar Reconnaissance Orbiter
NASA’s Beyond Earth Solar System exploration program
We choose the Moon – Wonderful, interactive recreation of Apollo 11 Lunar Landing – 40th Anniversary celebration
Raw Video: Restored Video of Apollo 11 Moonwalk
All Apollos’ Lunar Surface Journals
BBC – In video: When man went to the Moon
BBC – Aldrin: I was relieved to be second
Space 1999 (cheesy, but unforgotten ’70s TV show, full pilot) – Moon breaks away from Earth.
Dumb and Dumber – We landed on the moon – pure Jim Carrey genius!!!
There, I said it, and aloud: ‘EXCEL’!
I got this idea of making a modern (no, I am NOT kidding) educational tool to interactively construct and study Ricker wavelets after reading William Ashcroft’s A Petroleum Geologist’s Guide to Seismic Reflection. The book is a gem, in my opinion (I discovered it thanks to a blog post by Matt Hall a couple of years ago) and comes with some interesting tools on a DVD, which are unfortunately a bit outdated. So I went back to my old Geophysics Bible (Yilmaz’ book Seismic Data Analysis), and mashed a few ideas up; you can download the Excel file here. Please feel free to use it, peruse it, and abuse it, in your classes, labs, nightmares, etcetera…
The tool is fairly simple. In Sheet 1 the user enters the dominant frequency of the desired Ricker wavelet, as shown in the middle of Figure 1. From that information the wavelet is constructed using the equation A = g^2 * exp(-g^2), where g is the ratio between frequency f (in increments of 5 Hz up to an arbitrary 125 Hz – but this could be easily changed!) and the dominant frequency f1 just entered. The frequency spectrum of the wavelet is shown as a graph.
Figure 1
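The same construction is easy to reproduce outside Excel; here is a Python sketch of the spectrum formula, with f1 and the 5 Hz frequency grid mirroring the spreadsheet's defaults:

```python
import numpy as np

f1 = 25.0                        # dominant frequency entered by the user
f = np.arange(0.0, 130.0, 5.0)   # 5 Hz increments up to 125 Hz
g = f / f1                       # ratio of frequency to dominant frequency
A = g**2 * np.exp(-g**2)         # amplitude spectrum A = g^2 * exp(-g^2)

# Since dA/dg = 2g(1 - g^2)exp(-g^2), the spectrum peaks at g = 1,
# i.e. exactly at the dominant frequency.
print(f[np.argmax(A)])           # 25.0
```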
In Figure 2 I show the same Sheet, but for a wavelet of dominant frequency equal to 50 Hz.
Figure 2
A number of ancillary graphs, shown in Figure 3, display individual building blocks of the formula A = g^2 * exp(-g^2).
Figure 3
In Sheet 2 the user can view a plot of the wavelet in the time domain, add Constant phase shifts and Linear phase shifts, and experiment with different combinations of them, as shown in Figures 4-7, below.
Figure 4
Figure 5
Figure 6
Figure 7
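The phase experiments of Sheet 2 can also be sketched in Python; this is my own construction to illustrate the idea (via the analytic signal), not the spreadsheet's formulas:

```python
import numpy as np
from scipy.signal import hilbert

def ricker(f1, t):
    """Zero-phase Ricker wavelet with dominant frequency f1 (Hz)."""
    a = (np.pi * f1 * t)**2
    return (1.0 - 2.0 * a) * np.exp(-a)

t = np.arange(-0.1, 0.1, 0.001)   # time axis in seconds, 1 ms sampling
w = ricker(25.0, t)               # symmetric wavelet, peak at t = 0

# Constant phase shift: rotate the analytic signal by phi radians and
# keep the real part.
phi = np.pi / 2                   # 90-degree shift
w_rot = np.real(hilbert(w) * np.exp(1j * phi))

# A linear phase shift is simply a time delay:
w_delayed = np.roll(w, 10)        # 10 ms shift on this 1 ms grid

# The 90-degree-rotated wavelet is antisymmetric: its value at t = 0
# drops from the peak (1.0) to near zero.
```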
Finally, Sheet 3 displays a plot of the wavelet, and of the individual frequency components that make it up. Have fun!!!
Figure 8
A Petroleum Geologist’s Guide to Seismic Reflection, William Ashcroft, 2011, Wiley.
Seismic Data Analysis, Öz Yilmaz, 2001, Society of Exploration Geophysicists.
To plot a wavelet [in Python]. blog post by Evan Bianco.
I love meandering rivers. This is a short post with some (I think) great images of meandering rivers and oxbow lakes.
In my previous post Google Earth and a 5 minute book review: Geology illustrated I reviewed one of my favorite coffee table books: Geology Illustrated by John S. Shelton.
The last view in that post was as close a perspective replica of Figure 137 in the book as I could get using Google Earth, showing the meander belt of the Animas River a few miles from Durango, Colorado.
What a great opportunity to create a short time lapse: repeat snapshots of the same landscape nearly 50 years apart. This is priceless: 50 years are in many cases next to nothing in geological terms, and yet there are already some significant differences in the meanders between the two images, which I have included together below.
Meander belt and floodplain of the Animas River a few miles from Durango, Colorado. From Geology Illustrated, John S. Shelton, 1966. Copyright of the University of Washington Libraries, Special Collections (John Shelton Collection, Shelton 2679).
Meander belt and floodplain of the Animas River as above, as viewed on Google Earth in 2013.
For example, it looks like the meander cutoff in the lower left portion of the image had likely ‘just’ happened in the 60s, whereas at the time the imagery used by Google Earth was acquired, the remnant oxbow lake seems more clearly defined. Another oxbow lake in the center has nearly disappeared, perhaps in part due to human activity.
Below are two pictures of the Fraser River in the Robson Valley that I took in the summer of 2013 during a hike to McBride Peak. This is an awesome example of an oxbow lake. We can still see where the river used to run before the cutoff.
And this is a great illustration (from Infographic illustrations) showing how these oxbow lakes are created. I love the hands-on feel…
The last is an image of tight meander loop from a previous post: Some photos of Northern British Columbia wildlife and geology.
In Welcome to MyCarta, part II, I mentioned sketch2model a fun App project started at the 2015 Calgary Geoscience Hackathon organized by Agile Geoscience.
One of my team mates from the hackathon, Elwyn Galloway, just started writing at scibbatical a new science blog on WordPress.
In the next little while, Elwyn and I will co-author and cross-post a short series on the hackathon project.
For now, welcome Elwyn!
In Visualization tips for geoscientists: MATLAB, Part III I showed there’s a qualitative correlation between occurrences of antimony mineralization in the southern Tuscany mining district and the distance from lineaments derived from the total horizontal derivative (also called maximum horizontal gradient).
Let’s take a look at it below (distance from lineaments increases as the color goes from blue to green, yellow, and then red).
However, in a different map in the same post I showed that lineaments derived using the maxima of the hyperbolic tilt angle (Cooper and Cowan, 2006, Enhancing potential field data using filters based on the local phase) are offset systematically from those derived using the total horizontal derivative.
Let’s take a look at it below: in this case Bouguer gravity values increase as the color goes from blue to green, yellow, and then red; white polygons are basement outcrops.
The lineaments from the total horizontal derivative are in black, those from the maxima of hyperbolic tilt angle are in gray. Which lineaments should be used?
The ideal way to map the location of density contrast edges (as a proxy for geological contacts) would be to run gravity forward models, or even 3D gravity inversion, ideally constrained by all available independent data sources (magnetic or induced-polarization profiles, exploratory drilling data, reflection seismic interpretations, and so on).
The next best approach is to map edges using a number of independent gravity data enhancements, and then only use those that collocate.
Cooper and Cowan (same 2006 paper) demonstrate that no single-edge detector method is a perfect geologic-contact mapper. Citing Pilkington and Keating (2004, Contact mapping from gridded magnetic data – A comparison of techniques) they conclude that the best approach is to use “collocated solutions from different methods providing increased confidence in the reliability of a given contact location”.
I show an example of such a workflow in the image below. In the first column from the left is a map of the residual Bouguer gravity from a smaller area of interest in the southern Tuscany mining district (where measurements were made on a denser station grid). In the second column from the left are the lineaments extracted using three different (and independent) derivative-based data enhancements followed by skeletonization. The same lineaments are superimposed on the original data in the third column from the left. Finally, in the last column, the lineaments are combined into a single collocation map to increase confidence in the edge locations (I applied a mask so as to display edges only where at least two methods collocate).
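The collocation step itself is simple; here is a toy sketch (the arrays and the voting threshold are mine, standing in for the notebook's actual edge maps):

```python
import numpy as np

# Binary edge maps from three independent enhancements (toy data).
thd   = np.zeros((5, 5), dtype=int)   # total horizontal derivative
tilt  = np.zeros((5, 5), dtype=int)   # hyperbolic tilt angle maxima
third = np.zeros((5, 5), dtype=int)   # a third derivative-based method

thd[2, :] = 1                         # thd and tilt both see a horizontal edge
tilt[2, :] = 1
third[:, 2] = 1                       # the third method sees something else

# Keep only pixels where at least two methods agree.
votes = thd + tilt + third
collocated = (votes >= 2).astype(int)
```

In practice one would first dilate each edge map slightly, so that edges offset by a pixel or two still count as collocated.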
If you want to learn more about this method, please read my note in the Geophysical tutorial column of The Leading Edge, which is available with open access here.
To run the open source Python code, download the iPython/Jupyter Notebook from GitHub.
With this notebook you will be able to:
1) create a full suite of derivative-based enhanced gravity maps;
2) extract and refine lineaments to map edges;
3) create a collocation map.
These techniques can be easily adapted to collocate lineaments derived from seismic data, with which the same derivative-based enhancements are showing promising results (Russell and Ribordy, 2014, New edge detection methods for seismic interpretation).