Busting bad colormaps with Python and Panel

I have not done much work with, or written here on the blog about, colormaps and perception in quite some time.

Last spring, however, I decided to build a web-based app to show the effects of using a bad colormap. This stemmed from two needs: first, to further my understanding of Panel, after working through the awesome tutorial by James Bednar, Panel: Dashboards (at PyData Austin 2019); and second, to enable people to explore interactively the effects of bad colormaps on their perception, and consequently on their ability to interpret faults on a 3D seismic horizon.

I introduced the app at the Transform 2020 virtual subsurface conference, organized by Software Underground last June. Please watch the recording of my lightning talk as it explains in detail the machinery behind it.

I am writing this post in part to discuss some changes to the app. Here’s how it looks right now:

The most notable change is the switch from one drop-down selector to two drop-down selectors, in order to support both the Matplotlib collection and the Colorcet collection of colormaps. Additionally, the app has since been featured in the resource list on the Awesome Panel site, an achievement I am really proud of.
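
Going back to the two selectors: below is a minimal sketch of how they might be linked in Panel. The widget names and the callback are hypothetical, not the actual app code (which is in the repository linked further down):

import panel as pn
import matplotlib.pyplot as plt
import colorcet as cc

pn.extension()

# one selector for the collection, one for the colormap within it
collection = pn.widgets.Select(name='Collection', options=['Matplotlib', 'Colorcet'])
cmap = pn.widgets.Select(name='Colormap', options=plt.colormaps())

def update_cmaps(event):
    # swap the list of available colormaps when the collection changes
    cmap.options = list(cc.cm.keys()) if event.new == 'Colorcet' else plt.colormaps()

collection.param.watch(update_cmaps, 'value')
layout = pn.Column(collection, cmap)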


You can try the app yourself either by running the notebook interactively with Binder, clicking on the button below:
Binder

or, by copying and pasting this address into your browser:

https://mybinder.org/v2/gh/mycarta/Colormap-distorsions-Panel-app/master?urlpath=%2Fpanel%2FDemonstrate_colormap_distortions_interactive_Panel

Let’s look at a couple of examples of insights I gained from using the app. For those who jumped straight to this example, the top row shows:

  • the horizon, plotted using the benchmark grayscale colormap, on the left
  • the horizon intensity, derived using skimage.color.rgb2gray, in the middle
  • the Sobel edges detected on the intensity, on the right

and the bottom row shows:

  • the horizon, plotted using the Matplotlib gist_rainbow colormap, on the left
  • the intensity of the colormapped horizon, in the middle. This is possible thanks to a function that makes a figure (but does not display it), plots the horizon with the specified colormap, then saves the plot in the canvas to an RGB NumPy array (see the sketch right after this list)
  • the Sobel edges detected on the colormapped intensity, on the right
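
A minimal sketch of such a function, assuming Matplotlib's Agg canvas and scikit-image (the actual app code is in the repository):

import numpy as np
import matplotlib.pyplot as plt
from skimage.color import rgb2gray

def colormapped_intensity(horizon, cmap='gist_rainbow'):
    # make a figure, but do not display it
    fig, ax = plt.subplots()
    ax.imshow(horizon, cmap=cmap)
    ax.axis('off')
    fig.canvas.draw()                                    # render the plot in the canvas
    rgb = np.asarray(fig.canvas.buffer_rgba())[..., :3]  # canvas to RGB numpy array
    plt.close(fig)
    return rgb2gray(rgb)                                 # simulated perceptual intensity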

I think the effects of this colormap are already apparent when comparing the bottom left plot to the top left plot. However, simulating perception can be quite revealing for those who have not considered these effects before. The intensity in the bottom middle plot is very washed out in the areas corresponding to green in the bottom left, and as a result many of the faults are no longer visible, or are visible only with much difficulty, as demonstrated by the Sobel edges in the bottom right.

And if you are not quite convinced yet, I have created these hill-shaded maps, using Matt Hall's delightful function from this notebook (and check his blog post):

Below is another example, using the Colorcet cet_rainbow, which is one of Peter Kovesi’s perceptually uniform colormaps. I use many of Peter’s colormaps, but had never used this one, because I use my own perceptual rainbow, which has neither a fully saturated yellow nor a fully saturated red. I think the app demonstrates that, even though the artifacts are more subtle, this rainbow still introduces some. The yellow colour creates narrow flat bands, visible in the intensity and Sobel plots, and indicated by yellow arrows; the red colour is also bad as usual, causing an artificial decrease in intensity (magenta arrows).

Computer vision in geoscience: recover seismic data from images, introduction

In a recent post titled Unweaving the rainbow, Matt Hall described our joint attempt (partly successful) to create a Python tool to enable recovery of digital data from any pseudo-colour scientific image (and a seismic section in particular, like the one in Figure 1), without any prior knowledge of the colormap.


Figure 1. Test image: a photo of a distorted seismic section on my wall.

Please check our GitHub repository for the code and slides and watch Matt’s talk (very insightful and very entertaining) from the 2017 Calgary Geoconvention below:

In the next two posts, coming up shortly, I will describe in greater detail my contribution to the project, which focused on developing a computer vision pipeline to automatically detect where the seismic section is located in the image, rectify any distortions that might be present, and remove all sorts of annotations and trivia around and inside the section. The full workflow is included below, with sections I-VI developed to date (a minimal code sketch of sections I and II follows the list):

  • I – Image preparation, enhancement:
    1. Convert to gray scale
    2. Optional: smooth or blur to remove high frequency noise
    3. Enhance contrast
  • II – Find seismic section:
    1. Convert to binary with adaptive or other threshold method
    2. Find and retain only largest object in binary image
    3. Fill its holes
    4. Apply opening and dilation to remove minutiae (tick marks and labels)
  • III – Define rectification transformation
    1. Detect contour of the largest object found in (II). This should be the seismic section.
    2. Approximate contour with polygon with enough tolerance to ensure it has 4 sides only
    3. Sort polygon corners using angle from centroid
    4. Define new rectangular image using the lengths of the longest long side and longest short side of the initial contour
    5. Estimate and output transformation to warp polygon to rectangle
  • IV – Warp using transformation
  • V – Blanking annotations inside seismic section (if rectangular):
    1. Start with the output of (IV)
    2. Pre-process and apply Canny filter
    3. Find contours in the Canny edge image smaller than the input size
    4. Sort contours (by shape and angular relationships or diagonal lengths)
    5. Loop over contours:
      1. Approximate contour
      2. If approximation has 4 points AND the 4 semi-diagonals are of same length: fill contour and add to mask
  • VI – Use mask to remove text inside rectangle in the input and blank (NaN) the whole rectangle. 
  • VII – Optional: tools to remove arrows and circles/ellipses:
    1. For arrows – from the contours in (4), find the ones with 7 sides and low convexity (concave); alternatively, use Harris corner detection and count 7 corners, or template matching
    2. For ellipses – template matching or regionprops
  • VIII – Optional FFT filters to remove timing lines and vertical lines
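
As a taste of sections I and II, here is a minimal sketch using scikit-image; the Otsu threshold and the structuring-element size are illustrative choices, and the actual implementation is in mycarta.py:

from scipy import ndimage as ndi
from skimage import filters, measure, morphology

def find_seismic_section(gray):
    binary = gray > filters.threshold_otsu(gray)    # II.1: binarize (one possible threshold)
    labels = measure.label(binary)                  # label connected objects
    largest = max(measure.regionprops(labels), key=lambda r: r.area)
    mask = labels == largest.label                  # II.2: retain only the largest object
    mask = ndi.binary_fill_holes(mask)              # II.3: fill its holes
    mask = morphology.binary_opening(mask, morphology.disk(5))  # II.4: remove minutiae
    return mask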

You can download from GitHub all the tools for the automated workflow (parts I-VI) in the module mycarta.py, as well as an example Jupyter Notebook showing how to run it.

The first post focuses on the image pre-processing and enhancement, and the detection of the seismic line (sections I and II, in green); the second one deals with the rectification of the seismic section (sections IV to V, in blue). They are not meant as full tutorials, but rather as a pictorial road map to (partial) success; key Python code snippets will be included and discussed.

New Horizons truecolor Pluto recolored in Viridis and Inferno

Oh, the new, perceptual Matplotlib colormaps…

Here’s one stunning, recent Truecolor image of Pluto from the New Horizons mission:


Original image: The Rich Color Variations of Pluto. Credit: NASA/JHUAPL/SwRI. Click on the image to view the full feature on the New Horizons site.

Below, I recolored the image using two of the new colormaps:


Recolored images: I like Viridis, but it is Inferno that really brings this image to life, because of its wider hue and lightness range!


NASA’s beautiful ‘Planet On Fire’ images and video


Credits: NASA’s Goddard Space Flight Center and NASA Center for Climate Simulation.



Click on the image to watch the original video on NASA’s Visualization Explorer site.

Read the full story on NASA’s Visualization Explorer site.

Reinventing the color wheel – part 2

In the first post of this series I argued that we should not build colormaps for azimuth (or phase) data by interpolating linearly between fully saturated hues in RGB or HSL space.

A first step towards the ideal colormap for azimuth data would be to interpolate between isoluminant colours instead. Kindlmann et al. (2002) published isoluminant RGB values for red, yellow, green, cyan, blue, and magenta based on a user study. The code in the next block shows how to interpolate between those published colours to get 256-sample R, G, and B arrays (with magenta repeated at both ends), which can then be combined into an isoluminant colormap for azimuth data.

# assumes: import numpy as np
01 r = np.array([0.718, 0.847, 0.527, 0.000, 0.000, 0.316, 0.718])
02 g = np.array([0.000, 0.057, 0.527, 0.592, 0.559, 0.316, 0.000])
03 b = np.array([0.718, 0.057, 0.000, 0.000, 0.559, 0.991, 0.718])
04 x = np.linspace(0, 255, 7)
05 xnew = np.arange(256)
06 r256 = np.interp(xnew, x, r)
07 g256 = np.interp(xnew, x, g)
08 b256 = np.interp(xnew, x, b)

This is a good general example of how to interpolate one or more sequences of values to a finer sampling using the interp function from the NumPy library. In line 04 we define 7 evenly spaced points between 0 and 255; these will be the sample coordinates for the r, g, and b colours created in lines 01-03. In line 05 we create the new coordinates at which we will interpolate the r, g, and b values in lines 06-08 (all integers from 0 to 255). The full code will come in the Notebook accompanying the last post in this series.
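
In the meantime, as a quick sketch (using the arrays from the block above), the three interpolated channels might be stacked into a Matplotlib colormap like this:

from matplotlib.colors import ListedColormap
# stack the three 256-sample channels into an N x 3 array of RGB floats in [0, 1]
iso_azimuth = ListedColormap(np.column_stack([r256, g256, b256]))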

This new colormap is used in the bottom map of the figure below, whereas in the top map we used a conventional HSV azimuth colormap (both maps show the dip azimuth calculated on the Penobscot horizon). The differences are subtle, but with the isoluminant colormap we are guaranteed there are no perceptual artifacts due to the random variations in lightness of the fully saturated HSV colors.


Another possible strategy to create a perceptual colormap for azimuth data would be to set lightness and chroma to constant values in LCH space and interpolate between hues. This is the Matlab colormap I previously created and showed in Figure 4 of New Matlab isoluminant colormap for azimuth data. In the next post, I will show how to do this in Python.
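
A hedged sketch of that strategy, with illustrative constant values for lightness and chroma, and using the colorspacious library as one option for the colour conversion:

import numpy as np
from colorspacious import cspace_convert

h = np.radians(np.linspace(0, 360, 256))   # interpolate between hues
l = 65*np.ones(256)                        # constant lightness (illustrative value)
c = 35*np.ones(256)                        # constant chroma (illustrative value)
# LCH -> Lab (a = C cos h, b = C sin h), then Lab -> sRGB
lab = np.column_stack([l, c*np.cos(h), c*np.sin(h)])
rgb = np.clip(cspace_convert(lab, "CIELab", "sRGB1"), 0, 1)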

Read more on colors and seismic data

The last two posts on Agile show you how to corender seismic amplitude and continuity from a time slice using a 2D colormap, and then how to corender 3 attributes from a horizon slice.

Reference

Kindlmann, G., Reinhard, E., and Creem, S. (2002). Face-based Luminance Matching for Perceptual Colormap Generation. Proceedings of the IEEE Conference on Visualization.

Reinventing the color wheel – part 1

In New Matlab isoluminant colormap for azimuth data I showcased a Matlab colormap that I believe is perceptually superior to the conventional, HSV-based colormaps for azimuth data, in that it does not superimpose on the data the color artifacts that plague all rainbows. However, it still has a limitation: the main colours do not correspond exactly to the four compass directions N, E, W, and S.

My intention with this series is to go back to square one, deconstruct the conventional colormaps for azimuth, and build a new one that has all the desired properties of both perceptual linearity, and correct location of the main colors. All reproducible in Python.

If we wanted to build from scratch a colormap for azimuth (or phase) data the main tasks would be to generate a sequence of distinguishable colours at opposite quadrants, or compass directions (like 0 and 180 degrees, or N and S), and to wrap around the sequence with the same colour at the two ends.

But to do that, we should avoid interpolating linearly between fully saturated hues in RGB or HSL space.

To illustrate why, it is useful to look at the figure below. On the left is a hue circle with primary, secondary, and tertiary colours in a counter-clockwise sequence: red, rose, magenta, violet, blue, azure, cyan, aquamarine, electric green, chartreuse, yellow, and orange. The colour chips are placed at evenly spaced angular distances according to their hue (in radians).


Left, primary, secondary, and tertiary colour chips arranged using hue for angular distance; right, the same colour chips arranged using intensity for angular distance.

This looks familiar and seems like a natural ordering of colors, so in building a colormap we may be tempted to just take that sequence, wrap it around at the red (or the magenta), and linearly interpolate to 256 colours to get a continuous colormap [1], and use it for azimuth data; this is how conventional azimuth colormaps are usually built.

On the right side of the figure, the chips have been rearranged according to their intensity, in a counter-clockwise sequence from 0 to 255 with 0 at three o'clock; so, for example, blue, which is the darkest colour with an intensity of 29, is close to the beginning of the sequence, and yellow, the brightest with an intensity of 225, is close to the end. Notice that the chips are no longer equidistant.

The most striking difference is that the blue and the yellow chips are more separated than the others; for this reason, blue and yellow features stand out a lot more in a map when using this color sequence, which can be both distracting and confusing. A good example is Figure 3 in New Matlab isoluminant colormap for azimuth data.

Also, yellow and red, being two chips apart in the left circle in the figure above, are used to colour azimuths 60 degrees apart, and so are cyan and green. However, if we look at the right circle, we realize that the yellow and red chips are much further apart than the cyan and green chips [2] in the perceptual dimension of intensity; therefore, features colored in yellow and red could be perceived as much further apart (in azimuth) than those in cyan and green.

These differences may be subtle, but in my opinion they become important when dip azimuth is combined with other attributes, perhaps using a 3D colormap, and the resulting map is used for detailed structural interpretation. There is a really good example of this type of 3D colormap in Chopra and Marfurt (2007), where dip azimuth is rendered with hue modulation, dip magnitude with saturation modulation, and coherence with lightness modulation.

A code snippet with the main Python commands to generate the two polar scatterplots in the figure is listed and explained below. The full code can be found in this Jupyter Notebook.

# assumes: import numpy as np; import matplotlib.pyplot as plt
01 import matplotlib.colors as clr
02 keys = ['red', '#FF007F', 'magenta', '#7F00FF', 'blue', '#0080FF', 'cyan', '#00FF80',
   '#00FF00', '#7FFF00', 'yellow', '#FF7F00']
03 my_cmap = clr.ListedColormap(keys)
04 x = np.arange(12)
05 color = my_cmap(x)
06 n = 12
07 theta = 2*np.pi*np.linspace(0, 1, 13)[:-1]   # 12 angles; 0 and 2*pi would coincide
08 r = np.ones(12)*2.5
09 area = 200*r**2   # size of color chips
10 c = plt.scatter(theta, r, c=color, s=area)
11 theta_i = 2*np.pi*(sorted_intensity/255.0)   # sorted_intensity: see sketch below
12 colors = my_sorted_cmap(np.arange(12))       # my_sorted_cmap: see sketch below
13 c = plt.scatter(theta_i, r, c=colors, s=area)

In line 01 we import the Colors module from the Matplotlib library, then line 02 creates the desired sequence of colours (red, rose, magenta, violet, blue, azure, cyan, aquamarine, electric green, chartreuse, yellow, and orange) using either the name or the Hex code, and line 03 generates the colormap. Then we use lines 04 and 05 to assign colours to the chips in the first scatterplot (left), and lines 06, 07, and 09 to specify the number of chips, the angular distances between chips, and the area of the chips, respectively. Line 10 generates the plot. The modifications in lines 11-13 will result in the scatterplot on the right side of the figure (the sorted intensity is calculated in much the same way as in my Geophysical tutorial – How to evaluate and compare colormaps in Python).
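
For completeness, here is one hedged sketch of how sorted_intensity and my_sorted_cmap might be derived, using the common Rec. 601 luma weights as the intensity approximation (the linked tutorial shows the actual calculation):

rgb = color[:, :3]                        # the 12 colours from line 05, alpha dropped
intensity = np.rint(255*(0.299*rgb[:, 0] + 0.587*rgb[:, 1] + 0.114*rgb[:, 2]))
order = np.argsort(intensity)             # darkest to brightest
sorted_intensity = intensity[order]
my_sorted_cmap = clr.ListedColormap(rgb[order])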


[1] Or, perhaps, just create 12 discrete colour classes to group azimuth values in bins of pi/6 (30 degrees) each, and wrap around again at the magenta, to generate a discrete colormap.
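
A quick sketch of this discrete alternative (reusing my_cmap from the snippet above; azimuth would be a hypothetical array of values in degrees):

import numpy as np
import matplotlib.colors as clr

bounds = np.arange(0, 390, 30)        # 13 boundaries: 12 bins of 30 degrees each
norm = clr.BoundaryNorm(bounds, 12)   # map each azimuth value to one of 12 classes
# plt.imshow(azimuth, cmap=my_cmap, norm=norm) then yields the discrete azimuth map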

[2] The green chip is almost completely covered by the orange chip.

Logarithmic spiral, nautilus, and rainbow

The other day I stumbled into an interesting article on The Guardian online: The medieval bishop who helped to unweave the rainbow. In the article I learned for the first time of Robert Grosseteste, a 13th century British scholar (with an interesting Italian last name: Grosse teste = big heads) who was also the Bishop of Lincoln.

The Bishop's interests and investigations covered diverse topics, making him a pre-Renaissance polymath; however, it is his 1225 treatise on colour, the De Colore, that is receiving much attention.

In a recent commentary in Nature Physics (All the colours of the rainbow), and the paper referenced therein (A three-dimensional color space from the 13th century), Smithson et al. (who also recently published a new critical edition/translation of the treatise with analysis and critical commentaries) analyze the 3D colorspace devised by Grosseteste, who claimed it allows the generation of all possible colours and describes the variations of colours among different rainbows.

As we learn from Smithson et al., Grosseteste's colorspace had three dimensions, quantified by physical properties of the incident light and the medium: the scattering angle (which produces variation of hue within a rainbow), the purity of the scattering medium (which produces variation between different rainbows and is linked to the size of the water droplets in the rainbow), and the altitude of the sun (which produces variation in the light incident on a rainbow). The authors were able to model this colorspace and also to show that the locus of rainbow colours generated in it forms a spiral surface (a family of spiral curves, each for a specific rainbow) in the perceptual CIELab colorspace.

I found this not only fascinating – a three-dimensional, perceptual colorspace from the 13th century!! – but also a source of renewed interest in creating the perfect perceptual colormaps by spiralling through CIELab.

My first attempt at colormap spiralling in CIELab, CubicYF, came to life by selecting hand-picked colours on CIELab colour charts at fixed lightness values (found in this document by Gernot Hoffmann). The process was described in this post, and you can see an animation of the spiral curve in CIELab space (created with the 3D color inspector plugin in ImageJ) in the video below:

Some time later, after reading this post by Rob Simmon (in particular the section on the NASA Ames Color Tool), and after an email exchange with Rob, I started tinkering with the idea of creating perceptual rainbow colormaps in CIELab programmatically, by using a helix curve or an Archimedean spiral, but reading Smithson et al. got me to try the logarithmic spiral.

So I started my experiments with a warm-up and tried to replicate a nautilus using a logarithmic spiral with a growth ratio equal to 0.1759. You may have read that the rate at which a nautilus shell grows can be described by the golden ratio phi, but in fact the golden spiral constructed from a golden rectangle is not a nautilus spiral (as an aside, as I was playing with the code I recalled reading some time ago Golden spiral, a nice blog post (with lots of code) by Cleve Moler, creator of the first version of Matlab, who simulated a golden spiral using a continuously expanding sequence of golden rectangles and inscribed quarter circles).

My nautilus-like spiral, plotted in Figure 1, has a growth ratio of 0.1759 instead of the golden ratio of 1.618.


Figure 1: nautilus-like spiral with growth ratio = 0.1759
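
A minimal sketch of how such a spiral might be plotted in Python (the scale factor and the number of turns are arbitrary choices):

import numpy as np
import matplotlib.pyplot as plt

# logarithmic spiral r = a*exp(b*theta), with growth ratio b = 0.1759
theta = np.linspace(0, 6*np.pi, 1000)   # three full turns
r = np.exp(0.1759*theta)                # a = 1
ax = plt.subplot(projection='polar')
ax.plot(theta, r)
plt.show()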

And here’s the colormap (I called it logspiral) I came up with after a couple of hours of hacking: as hue cycles from 360 to 90 degrees, chroma spirals outwardly (I used a logarithmic spiral with polar equation c1*exp(c2*h) with a growth ratio c2 of 0.3 and a constant c1 of 20), and lightness increases linearly from 30 to 90.
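
Here is a hedged sketch of that trajectory in LCH coordinates; the exact parameterization in my code may differ, and the conversion to sRGB below uses the colorspacious library as one option:

import numpy as np
from colorspacious import cspace_convert

n = 256
t = np.linspace(0, np.radians(270), n)   # angle swept as hue cycles from 360 to 90
h = 360 - np.degrees(t)                  # hue, in degrees
c = 20*np.exp(0.3*t)                     # chroma spirals outward: c1*exp(c2*t)
l = np.linspace(30, 90, n)               # lightness increases linearly

# LCH -> Lab (a = C cos h, b = C sin h), then Lab -> sRGB
lab = np.column_stack([l, c*np.cos(np.radians(h)), c*np.sin(np.radians(h))])
rgb = np.clip(cspace_convert(lab, "CIELab", "sRGB1"), 0, 1)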

Figure 2 shows the trajectory in the 2D CIELab a-b plane; the colours shown are the final RGB colours. In Figure 3 the trajectory is shown in 3D CIELab space. The coloured lightness profiles were made using the Colormapline submission from the Matlab File Exchange.


Figure 2: logspiral colormap trajectory in CIELab a-b plane


Figure 3: logspiral colormap in CIELab 3D space


N.B. In creating logspiral, I was inspired by Figure 2 in the Nature Physics paper, but there are important differences in terms of colorspace, lightness profile, and perception: I am not certain their polar coordinates are equivalent to Lightness, Chroma, and Hue, although they could be; and, more importantly, the three-dimensional spirals based on Grosseteste's colorspace go from low lightness at low scattering angles to much higher values at mid scattering angles, and then drop again at high scattering angles (remember that these spirals describe real-world rainbows), whereas lightness in logspiral is strictly monotonically increasing.

In my next post I will share the Matlab code to generate a full set of logspiral colormaps sweeping the hue circle from different starting colours (and end colours), and also the slower-growing logarithmic spirals used to make a set of monochromatic colormaps (similar to those in Figure 2 in the Nature Physics paper).


Convert Matlab colormap to Surfer colormap

In the comment section of my last post, Steve asked if I had code to generate a Surfer .clr file from my Matlab colormaps.

Some time ago I did write a simple Matlab .m file to write a colormap to a variable with the correct Surfer format, but at the time I was content to have the variable output to a .txt file, which I would then open in a text editor to add a 1-line header and change the file extension to .clr.

I went back, cleaned up the script, and automated all the formatting. This is my revised code (you may need to change the target directory c:\My Documents\MATLAB):

%% Make a Matlab colormap
% one of the colormaps from my function, Perceptually improved colormaps
sawtooth = pmkmp(256,'swtth');

%% Initialize variable for Surfer colormap
% reduce to 101 samples
sawtooth_surfer = zeros(101,5);

%% Make Surfer colormap
% add R, G, B columns
for i = 1:3
    sawtooth_surfer(:,i+1) = round(interp1([1:1:256]', sawtooth(:,i), ...
        (linspace(1,256,101))')*255);
end
% add counter and alpha (opacity) columns
sawtooth_surfer(:,1) = linspace(0,100,101);
sawtooth_surfer(:,5) = ones(101,1)*255;

%% Create output file
filename = 'c:\My Documents\MATLAB\stth4surf.clr';
fileID = fopen(filename,'wt');

%% Write Surfer colormap header
fprintf(fileID,'ColorMap 2 1\n');
fclose(fileID);

%% Add the colormap:
dlmwrite('c:\My Documents\MATLAB\stth4surf.clr', ...
    sawtooth_surfer, 'precision', 5, 'delimiter', '\t', '-append');