Computer vision in geoscience: recover seismic data from images, introduction

In a recent post titled Unweaving the rainbow, Matt Hall described our joint attempt (partly successful) to create a Python tool to enable recovery of digital data from any pseudo-colour scientific image (and a seismic section in particular, like the one in Figure 1), without any prior knowledge of the colormap.


Figure 1. Test image: a photo of a distorted seismic section on my wall.

Please check our GitHub repository for the code and slides and watch Matt’s talk (very insightful and very entertaining) from the 2017 Calgary Geoconvention below:

In the next two posts, coming up shortly, I will describe in greater detail my contribution to the project, which focused on developing a computer vision pipeline to automatically detect where the seismic section is located in the image, rectify any distortions that might be present, and remove all sorts of annotations and trivia around and inside the section. The full workflow is included below (with sections I-VI developed to date), followed by a short code sketch of the core steps:

  • I – Image preparation, enhancement:
    1. Convert to gray scale
    2. Optional: smooth or blur to remove high frequency noise
    3. Enhance contrast
  • II – Find seismic section:
    1. Convert to binary with adaptive or other threshold method
    2. Find and retain only largest object in binary image
    3. Fill its holes
    4. Apply opening and dilation to remove minutiae (tick marks and labels)
  • III – Define rectification transformation:
    1. Detect the contour of the largest object found in (2). This should be the seismic section.
    2. Approximate the contour with a polygon, with enough tolerance to ensure it has 4 sides only
    3. Sort polygon corners using angle from centroid
    4. Define a new rectangular image using the lengths of the longest long side and longest short side of the initial contour
    5. Estimate and output transformation to warp polygon to rectangle
  • IV – Warp using transformation
  • V – Blank annotations inside the seismic section (if rectangular):
    1. Start with the output of (4)
    2. Pre-process and apply a Canny filter
    3. Find contours smaller than an input size in the Canny-filtered image
    4. Sort contours (by shape and angular relationships or diagonal lengths)
    5. Loop over contours:
      1. Approximate contour
      2. If the approximation has 4 points AND the 4 semi-diagonals are of the same length: fill contour and add to mask
  • VI – Use the mask to remove text inside the rectangle in the input, and blank (NaN) the whole rectangle.
  • VII – Optional: tools to remove arrows and circles/ellipses:
    1. For arrows – from the contours found in (4), keep those with 7 sides and low convexity (concave); alternatively, use Harris corner detection and count 7 corners, or template matching
    2. For ellipses – template matching or regionprops
  • VIII – Optional: FFT filters to remove timing lines and vertical lines
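The actual implementation is in the mycarta.py module on GitHub (see below); as a rough preview, here is a minimal sketch of the core of parts I-IV using OpenCV. The function name, parameter values, and the assumption that the section thresholds as the largest object in the inverted binary image are mine, for illustration only, and do not necessarily match what the module does.

# Minimal, hedged sketch of workflow parts I-IV with OpenCV (4.x return
# signatures); mycarta.py on GitHub is the actual implementation and differs.
import cv2
import numpy as np

def rectify_section(filename):
    # I - preparation: grayscale, blur, contrast enhancement
    img = cv2.imread(filename)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)
    gray = cv2.equalizeHist(gray)

    # II - adaptive threshold, then keep only the largest object
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY_INV, 51, 5)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    largest = max(contours, key=cv2.contourArea)

    # III - approximate the contour with a quadrilateral ...
    peri = cv2.arcLength(largest, True)
    quad = cv2.approxPolyDP(largest, 0.02 * peri, True).reshape(-1, 2)
    assert len(quad) == 4, "tolerance did not yield a 4-sided polygon"

    # ... and sort its corners: top-left, top-right, bottom-right, bottom-left
    s, d = quad.sum(axis=1), np.diff(quad, axis=1).ravel()
    src = np.float32([quad[np.argmin(s)], quad[np.argmin(d)],
                      quad[np.argmax(s)], quad[np.argmax(d)]])

    # output rectangle sized from the longest long and short sides
    w = int(max(np.linalg.norm(src[0] - src[1]), np.linalg.norm(src[3] - src[2])))
    h = int(max(np.linalg.norm(src[0] - src[3]), np.linalg.norm(src[1] - src[2])))
    dst = np.float32([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]])

    # IV - estimate the perspective transformation and warp
    M = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(img, M, (w, h))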

You can download from GitHub all the tools for the automated workflow (parts I-VI) in the module mycarta.py, as well as an example Jupyter Notebook showing how to run it.

The first post focuses on the image pre-processing and enhancement, and the detection of the seismic line (sections I and II, in green); the second one deals with the rectification of the seismic section (sections IV to V, in blue). They are not meant as full tutorials, but rather as a pictorial road map to (partial) success; key Python code snippets will be included and discussed.

Logarithmic spiral, nautilus, and rainbow

The other day I stumbled upon an interesting article on The Guardian online: The medieval bishop who helped to unweave the rainbow. In the article I learned for the first time of Robert Grosseteste, a 13th-century British scholar (with an interesting Italian last name: Grosse teste = big heads) who was also the Bishop of Lincoln.

The Bishop’s interests and investigations covered diverse topics, making him a pre-Renaissance polymath; however, it is his 1225 treatise on colour, the De Colore, that is receiving much attention.

In a recent commentary in Nature Physics (All the colours of the rainbow), and the reference therein (A three-dimensional color space from the 13th century), Smithson et al. (who also recently published a new critical edition/translation of the treatise with analysis and critical commentaries) analyze the 3D colorspace devised by Grosseteste, who claimed it allows the generation of all possible colours and describes the variations of colours among different rainbows.

As we learn from Smithson et al., Grosseteste’s colorspace had three dimensions, quantified by physical properties of the incident light and the medium: these are the scattering angle (which produces variation of hue within a rainbow), the purity of the scattering medium (which produces variation between different rainbows and is linked to the size of the water droplets in the rainbow), and the altitude of the sun (which produces variation in the light incident on a rainbow). The authors were able to model this colorspace and also to show that the locus of rainbow colours generated in that colorspace forms a spiral surface (a family of spiral curves, each for a specific rainbow) in the perceptual CIELab colorspace.

I found this not only fascinating – a three-dimensional, perceptual colorspace from the 13th century!! – but also a source of renewed interest in creating perfect perceptual colormaps by spiralling through CIELab.

My first attempt at colormap spiralling in CIELab, CubicYF, came to life by selecting hand-picked colours on CIELab colour charts at fixed lightness values (found in this document by Gernot Hoffmann). The process was described in this post, and you can see an animation of the spiral curve in CIELab space (created with the 3D color inspector plugin in ImageJ) in the video below:

Some time later, after reading this post by Rob Simmon (in particular the section on the NASA Ames Color Tool), and after an email exchange with Rob, I started tinkering with the idea of creating perceptual rainbow colormaps in CIELab programmatically, by using a helix curve or an Archimedean spiral, but reading Smithson et al. got me to try the logarithmic spiral.

So I started my experiments with a warm-up, and tried to replicate a Nautilus using a logarithmic spiral with a growth ratio equal to 0.1759. You may have read that the rate at which a Nautilus shell grows can be described by the golden ratio phi, but in fact the golden spiral constructed from a golden rectangle is not a Nautilus spiral (as an aside, as I was playing with the code I recalled reading some time ago Golden spiral, a nice blog post (with lots of code) by Cleve Moler, creator of the first version of Matlab, who simulated a golden spiral using a continuously expanding sequence of golden rectangles and inscribed quarter circles).

My nautilus-like spiral, plotted in Figure 1, has a growth ratio of 0.1759 instead of the golden ratio of 1.618.


Figure 1: nautilus-like spiral with growth ratio = 0.1759
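For anyone who wants to reproduce a similar curve, a minimal matplotlib sketch follows; only the growth ratio of 0.1759 comes from the figure above, while the scale factor and number of turns are arbitrary choices of mine.

# Minimal sketch: nautilus-like logarithmic spiral r = a * exp(b * theta),
# with growth ratio b = 0.1759 (the scale factor a is an arbitrary choice).
import numpy as np
import matplotlib.pyplot as plt

a, b = 1.0, 0.1759
theta = np.linspace(0, 6 * np.pi, 1000)   # three full turns
r = a * np.exp(b * theta)

ax = plt.subplot(111, projection='polar')
ax.plot(theta, r)
ax.set_title('Nautilus-like logarithmic spiral, growth ratio = 0.1759')
plt.show()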

And here’s the colormap (I called it logspiral) I came up with after a couple of hours of hacking: as hue cycles from 360 to 90 degrees, chroma spirals outwardly (I used a logarithmic spiral with polar equation c1*exp(c2*h) with a growth ratio c2 of 0.3 and a constant c1 of 20), and lightness increases linearly from 30 to 90.
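The Matlab code will follow in a later post; in the meantime, here is a minimal Python sketch of the same idea. The parametrization of the spiral (the angle travelled along the spiral taken in radians over the 270 degrees of hue swept) and the use of scikit-image for the CIELab-to-RGB conversion are my assumptions, not a copy of the original implementation.

# Minimal sketch of the logspiral idea (not the original Matlab code).
# Lightness increases linearly from 30 to 90, hue sweeps from 360 down to
# 90 degrees, and chroma grows along the spiral as c1 * exp(c2 * t).
import numpy as np
from skimage.color import lab2rgb

n = 256
L = np.linspace(30, 90, n)                     # lightness
hue = np.deg2rad(np.linspace(360, 90, n))      # hue, 360 -> 90 degrees
t = np.linspace(0, np.deg2rad(270), n)         # angle travelled along spiral
C = 20.0 * np.exp(0.3 * t)                     # chroma spirals outward

# LCh (cylindrical) to Lab (Cartesian), then to sRGB
a = C * np.cos(hue)
b = C * np.sin(hue)
lab = np.stack([L, a, b], axis=-1)[np.newaxis, :, :]
rgb = np.clip(lab2rgb(lab)[0], 0, 1)           # 256 x 3 colormap array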

Figure 2 shows the trajectory in the 2D CIELab a-b plane; the colours shown are the final RGB colours. In Figure 3 the trajectory is shown in 3D CIELab space. The coloured lightness profiles were made using the Colormapline submission from the Matlab File Exchange.


Figure 2: logspiral colormap trajectory in CIELab a-b plane


Figure 3: logspiral colormap in CIELab 3D space


N.B. In creating logspiral, I was inspired by Figure 2 in the Nature Physics paper, but there are important differences in terms of colorspace, lightness profile and perception: I am not certain their polar coordinates are equivalent to Lightness, Chroma, and Hue, although they could be; and, more importantly, the three-dimensional spirals based on Grosseteste’s colorspace go from low lightness at low scattering angles to much higher values at mid scattering angles, and then drop again at high scattering angles (remember that these spirals describe real-world rainbows), whereas lightness in logspiral is strictly monotonically increasing.

In my next post I will share the Matlab code to generate a full set of logspiral colormaps sweeping the hue circle from different starting colours (and end colours), and also the slower-growing logarithmic spirals to make a set of monochromatic colormaps (similar to those in Figure 2 in the Nature Physics paper).


Visualizing colormap artifacts

In Evaluate and compare colormaps, I have shown how to extract and display the lightness profile of a colormap using Python. I do this routinely with colormaps, but I realize it takes an effort, and not all users may feel comfortable using code to test whether a colormap is perceptual or not.
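For reference, the core of that check takes only a few lines; here is a minimal sketch (the colormap choice and variable names are mine, not the exact code from that post).

# Minimal sketch: extract and plot the L* (lightness) profile of a colormap.
import numpy as np
import matplotlib.pyplot as plt
from skimage.color import rgb2lab

rgb = plt.cm.jet(np.linspace(0, 1, 256))[:, :3]   # drop the alpha channel
L = rgb2lab(rgb[np.newaxis, :, :])[0, :, 0]       # L* channel, 0-100

plt.plot(L, color='k')
plt.xlabel('colormap sample')
plt.ylabel('L*')
plt.title('Lightness profile')
plt.show()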

This got me thinking that there is perhaps a need for a user-friendly, interactive tool to help identify colormap artifacts, and wondering what it would look like.

In a previous post, Comparing color palettes, I plotted the elevation for the South American continent from the Global Land One-km Base Elevation Project using four different color palettes. In Figure 1 below I plot again 3 of those: rainbow, linear lightness rainbow, and grayscale, respectively, from left to right. In maps like these some artifacts are very evident. For example, there’s a classic film negative effect in the map on the left, where the Guiana Highlands and the Brazilian Highlands, both in blue, seem to stand lower than the Amazon basin, in violet. This is due to the much lower lightness (or, alternatively, intensity) of the colour blue compared to violet.


Figure 1


However, other artifacts are more subtle, like the inversion of the highest peaks in the Andes, which are coloured in red, relative to their surroundings, in particular the Altiplano, an endorheic basin that includes Lake Titicaca.

My idea for this tool is simple, and consists of two windows. The first is a basemap window which can display either a demo dataset or user data loaded from an ASCII grid file. In this window the user would interactively select a profile by building a polyline with point-and-click, like the one in Figure 2 in white.


Figure 2

The second window would show the elevation profile with the colour fill assigned based on the colormap, like in Figure 3 at the bottom (with colormap to the right), and with a profile of the corresponding colour intensities (on a scale 1-255) at the top.

In this view it is immediately evident that, for example, the two highest peaks near the center, coloured in red, are relative intensity lows. Another anomaly is the absolute intensity low on the right side, corresponding to the colour blue, where the elevation profile varies smoothly.


Figure 3
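In other words, the second window boils down to a computation like the one sketched below; the synthetic elevation profile, the rainbow-like 'jet' colormap, and the Rec. 601 intensity formula are stand-ins I chose for illustration.

# Minimal sketch of the second window: map an elevation profile through a
# colormap, then plot the corresponding grayscale intensity (1-255) on top.
# The profile here is synthetic; in the tool it would come from the polyline.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 100, 500)
elevation = 2000 + 1500 * np.exp(-((x - 40) / 8) ** 2) \
                 + 1800 * np.exp(-((x - 55) / 5) ** 2)   # two "peaks"

norm = (elevation - elevation.min()) / np.ptp(elevation)
rgb = plt.cm.jet(norm)[:, :3]

# Rec. 601 luma, rescaled to 1-255, as a proxy for perceived intensity
intensity = 1 + 254 * (0.2989 * rgb[:, 0] + 0.5870 * rgb[:, 1] + 0.1140 * rgb[:, 2])

fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
ax1.plot(x, intensity, 'k')
ax1.set_ylabel('intensity (1-255)')
ax2.scatter(x, elevation, c=norm, cmap='jet', s=8)
ax2.set_ylabel('elevation (m)')
plt.show()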

I created this concept prototype using a combination of Matlab, Python, and Surfer. I welcome suggestions for possible additional features, and would like to hear from folks interested in collaborating on a web app (ideally in Python).

Color palettes for seismic structure maps and attributes

I created three color palettes for structure maps (seismic horizons, elevation maps, etcetera) and seismic attributes. To read about the palettes please check these previous blog posts:

The rainbow is dead…long live the rainbow! – Part 5 – CIE Lab linear L* rainbow
The rainbow is dead series – Part 7 – Perceptual rainbow palette – the method
The rainbow is dead series – Part 7 – Perceptual rainbow palette – the goodies

The palettes are available as plain ASCII files and also formatted for a number of platforms and software products:

Geosoft
Hampson-Russell
Kingdom
Madagascar
Matlab
OpendTect
Petrel
Seisware
Surfer
Voxelgeo

Please download them from my Color Palettes page and follow instructions therein.

Enjoy!


Image courtesy of Sergey Fomel of the Madagascar Development blog

The rainbow is dead…long live the rainbow! – Perceptual palettes, part 4 – CIE Lab heated body

In my last post I discussed the two main issues with the rainbow color palette from the point of view of human color vision, and concluded that one of these issues is insurmountable.

But before I move on to presenting alternative color palettes, let me give you one last example of how bad the rainbow is. It was sent to me by Antony Price, a member of the LinkedIn group Worldwide Geophysicists. Antony created a grayscale and a rainbow-colored version – using the same data range and number of intervals – of the satellite-altimeter-derived free-air gravity map of the world [1]. I am showing the two maps below.


The rainbow is dead…long live the rainbow! – Perceptual palettes, part 3

Introduction

Following the first post in this series, Steve commented:

Matteo, so would I be correct in assuming that the false structures that we see in the rainbow palette are caused by inflection points in the brightness? I always assumed that the lineations we pick out are caused by our flawed color perception but it looks from your examples that they are occurring where brightness changes slope. Interesting.

As I mention in my brief reply to the reader’s comment, I’ve done some reading and more experiments to try to understand better the reasons behind the artifacts in the rainbow, and I am happy to share my conclusions. This is also a perfect lead into the rest of the series.

Human vision vs. the rainbow – issue number 1

I think there are two issues that make us see the rainbow the way we see it; they are connected but more easily examined separately. The first one is that we humans perceive some colors as lighter (for example green) and some as darker (for example blue) at a given light level, which is because of the difference in the fundamental color response of the human eye for red, green, and blue (the curves describing the responses are called discrimination curves).

There is a well-written explanation for the phenomenon on this website (and you can find here color matching functions similar to those used there to create the diagram). The difference in the sensitivity of our cones explains why in the ROYGBIV color palette (from the second post in this series) the violet and blue appear to us darker than red, and red in turn darker than green and yellow. The principle … applies also to mixes involving the various cones (colours), hence the natural brightness of yellow, which stimulates the two most reactive sets of cones in the eye. We could call this a flaw in color perception (I am not certain what the evolutionary advantage might be), which is responsible for the erratic appearance of the lightness (L*) plot for the palette shown below (if you would like to know more about this plot, and get the code to make it to evaluate color palettes, please read the first post in this series).

So to answer Steve, I think yes, the lineations we pick in the rainbow are caused by inflection points in the lightness profile, but those in turn are caused by the differences in color responses of our cones. But there’s more!


The rainbow is dead…long live the rainbow! – series outline

The rainbow is dead…long live the rainbow! – Part 1

The rainbow is dead…long live the rainbow! – Part 2: a rainbow puzzle

The rainbow is dead…long live the rainbow! – Part 3

The rainbow is dead…long live the rainbow! – Part 4 – CIE Lab heated body

The rainbow is dead…long live the rainbow! – Part 5 – CIE Lab linear L* rainbow

The rainbow is dead series – Part 6 – Comparing color palettes

The rainbow is dead series – Part 7 – Perceptual rainbow palette – the method

The rainbow is dead series – Part 7 – Perceptual rainbow palette – the goodies

The rainbow is dead…long live the rainbow! – Perceptual palettes, part 2: a rainbow puzzle

ROYGBIV or YOGRVIB?

If you are interested in the topic of color palettes for scientific data, and the rainbow in particular, I would say you ought to read this 2007 IEEE Visualization paper by Borland and Taylor: Rainbow Color Map (Still) Considered Harmful. It clearly and elegantly illustrates why the rainbow palette should be avoided when displaying scientific data. I like Figure 1 in the paper in particular. The illustration shows how easy it is to perceptually order a set of 4 paint chips of different gray intensity, but not at all easy to order 4 paint chips colored red, green, yellow, and blue. The authors’ argument is that the rainbow colors are certainly ordered, from shorter to longer wavelengths, but they are not perceptually ordered. In this post I wanted to extend the chips example to all 7 colors in the rainbow and try to demonstrate the point in a quantitative way.

Here below is a 256-sample rainbow palette I created by interpolating between the RGB values for the seven colors of the rainbow: red, orange, yellow, green, blue, indigo, and violet (ROY G BIV):

On this palette I see a number of perceptual artifacts, the most notable ones being a sharp edge at the yellow and a flat zone at the green. I tried to explain the existence of these edges quantitatively in the first post of this series.

Now, to go back to the experiment: from the original RGB values for the non-interpolated colors I created the 7 color chips below. Question: can you order them based on their perceived intensity?

I think if you have full color vision (more on the topic of the rainbow and impaired color vision in the next section of this post) eventually you will be able to order them as I did. If not, try below. In this new image I converted the color chips to gray chips using the values obtained in Matlab with this formula:

INT = (0.2989 * RGB(:,1) + 0.5870* RGB(:,2) + 0.1140 * RGB(:,3))';
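For Python users, an equivalent computation is sketched below; the RGB triplets are my approximations of the seven chip colors, not the exact values used for the figures.

# Minimal sketch: Rec. 601 intensity of the seven ROYGBIV chips, then sort.
# These RGB triplets are approximations, not the exact values in the figures.
import numpy as np

colors = {'red':    (255,   0,   0), 'orange': (255, 127,   0),
          'yellow': (255, 255,   0), 'green':  (  0, 255,   0),
          'blue':   (  0,   0, 255), 'indigo': ( 75,   0, 130),
          'violet': (143,   0, 255)}

intensity = {name: 0.2989 * r + 0.5870 * g + 0.1140 * b
             for name, (r, g, b) in colors.items()}

for name in sorted(intensity, key=intensity.get):
    print(f'{name:7s} {intensity[name]:6.1f}')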

Give it a try, then hover with your mouse over the image to read the intensity values.


Not surprisingly, the values are not in any particular order. This reinforces the notion that although the rainbow colors are ordered by increasing wavelength (or decreasing, in this case), they are not perceptually ordered. (See this comment to my previous post). Below I rearranged the gray chips by increasing intensity.

And now I reconverted from gray to RGB colors and adjusted the distance between each pair of chips so that it is proportional to the intensity difference between the chips in the pair (I actually had to artificially change the values for green and orange so they would not overlap). That was an epiphany for me. And the name is funny too: BIV R GOY, or YOG R VIB…

I said that it was an epiphany because I realized the implications of trying to create a palette by interpolating through these colors with those distances. So I did it, and I am showing it below as the top color palette. We jumped out of the frying pan, into the fire! We went from perceptual artifacts that are inherent to the rainbow (reproduced as the bottom palette, in reverse order from blue to red, to facilitate comparison) to interpolation artifacts in the intensity-ordered rainbow. Hopeless!

ROYGBIV puzzle

As if what I have shown in the previous section wasn’t scary enough, I took 7 squares and colored them using the same RGB values for Red, Orange, Yellow, Green, Blue, Indigo, and Violet. Then I used the Dichromacy plug-in in ImageJ to simulate how these colors would be seen by a viewer with Deuteranopia (the more common form of color vision deficiency). I then shuffled the squares in random order on a square canvas, and numbered them 1-7 in clockwise order.

Puzzle: can you pair the squares numbered 1 through 7 with the colors R through V? I will give away the obvious one, which is the yellow:

1=Y
2=?
3=?
4=?
5=?
6=?
7=?

Cannot do it? For the solution, just hover over the image with your mouse. If you like the animation and would like to use it on your blog, Twitter, or Facebook, get the GIF file version here. Please be kind enough to link it back to this post.


Conclusion

When I tried myself I could not solve the puzzle, and that finally convinced me that trying to fix the rainbow was a hopeless cause. Even if we could, it would still confuse a good number of people (about 8% of males have one form or another of color vision deficiency). From the next post on I will show what I got when I tried to create a better, more perceptual rainbow from scratch.

Related posts (MyCarta)

The rainbow is dead…long live the rainbow! – the full series

What is a colour space? reblogged from Colour Chat

Color Use Guidelines for Mapping and Visualization

A rainbow for everyone

Is Indigo really a colour of the rainbow?

Why is the hue circle circular at all?

A good divergent color palette for Matlab

Related topics (external)

Color in scientific visualization

The dangers of default disdain

Color tools

How to avoid equidistant HSV colors

Non-uniform gradient creator

Colormap tool

Color Oracle – color vision deficiency simulation – standalone (Windows, Mac and Linux)

Dichromacy –  color vision deficiency simulation – open source plugin for ImageJ

Vischeck – color vision deficiency simulation – plugin for ImageJ and Photoshop (Windows and Linux)

For teachers

NASA’s teaching resources for grades 6-9: What’s the Frequency, Roy G. Biv?