What your brain does with colours when you are not “looking” – part 2

In What your brain does with colours when you are not “looking”, part 1, I displayed some audio spectrogram data (courtesy of Giuliano Bernardi at the University of Leuven) using 5 different colormaps to render the amplitude values: Jet (until recently Matlab’s standard colormap), grayscale, linear lightness rainbow, modified heated body, and cube lightness rainbow. I then asked readers to cast a vote for what they thought was the best colormap to visualize this dataset.

I was curious to see how all these colormaps fared, but my expectation was that Jet would sink to the bottom. I was really surprised to see it come out on top, one vote ahead of the linear lightness rainbow (21 and 20 votes out of 62, respectively). The modified heated body followed with 11 votes.

My surprise comes from the fact that Jet carries perceptual artifacts within the progression of colours (see for example this post). One way to demonstrate these artifacts is to convert the 2D map into a 3D surface where again we use Jet to colour amplitude values, but we use the intensities from the 2D map for the elevation. This can be done for example using the Interactive 3D Surface Plot plugin for ImageJ (as in my previous post). The resulting surface is shown in Figure 1. This is almost exactly what your brain would do when you look at the 2D map colored with Jet in the previous post.
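If you want to try this outside ImageJ, below is a minimal Matlab sketch of the same idea. This is my own rough reconstruction, not the plugin's code: the file name is hypothetical, and im2double/rgb2gray assume the Image Processing Toolbox.

% Elevation from the intensity of the Jet-coloured 2D map, face colours
% from the map itself (sketch; 'spectrogram_jet.png' is a hypothetical
% capture of the 2D display)
rgb = im2double(imread('spectrogram_jet.png'));
intensity = rgb2gray(rgb);   % what the eye reads as pseudo-relief
surf(flipud(intensity), flipud(rgb), 'EdgeColor', 'none')
axis tight; view(-35, 45)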

Surface_Plot_of_spectrogram_jet

Figure 1

In Figure 2 the same data is now displayed as a surface where amplitude values were used for the elevation, with a very light sun shading to help a bit with the perception of relief, but no colormap at all. When comparing Figure 1 with Figure 2, one of the artifacts is immediately recognized: the highest values in Figure 2, which honours the data, become a relative low in Figure 1. This is because red has lower intensity than yellow, and therefore data colored in red in 2D are plotted at a lower elevation than data colored in yellow, even though the amplitudes of the latter were lower.
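The red-versus-yellow claim is easy to check quantitatively. Here is a short Matlab sketch (assuming rgb2lab from the Image Processing Toolbox) that plots lightness along Jet:

% Lightness (L*) profile of the Jet colormap (sketch)
lab = rgb2lab(jet(256));
plot(lab(:,1), 'k', 'LineWidth', 2)
xlabel('Colormap sample'); ylabel('L*')
% L* peaks in the yellow region and drops again toward the red end, so
% the highest amplitudes (red) are rendered darker than yellow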

spectrogram_surf

Figure 2

For these reasons, I did not expect Jet to be the top pick. On the other hand, I think Jet is perhaps favoured because, with consistent use, our brain learns in part to compensate for these artifacts in 2D maps, and because it has at least two regions of higher contrast (higher magnitude lightness gradient) than other colormaps. Unfortunately, as I wrote in a recently published tutorial, these regions are randomly placed in the colormap, and the gradients are variable, so we gain on contrast but lose on faithfulness in representing the data structure.
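Those regions of higher contrast are easy to spot if you plot the magnitude of the sample-to-sample lightness change along Jet; a sketch, again assuming rgb2lab:

% Local contrast along Jet: magnitude of the L* gradient (sketch)
lab = rgb2lab(jet(256));
plot(abs(diff(lab(:,1))), 'k')
xlabel('Colormap sample'); ylabel('|dL*|')
% a few sharp bumps (roughly at the blue-to-cyan and yellow-to-red
% transitions) give Jet its extra local contrast; elsewhere the
% gradient is much weaker, and it also changes sign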

Matt Hall wrote a great comment following the previous post, making a real argument for switching between multiple colormaps at the interpretation stage to explore and highlight features in both the signal and the noise in the data, and suggesting that perhaps no single colormap is best overall. I agree 100% with almost everything Matt said, except perhaps on the 'best overall': looking at the 2D maps, at least with this dataset, I feel the heated body could be the best overall colormap, even if marginally. In Figure 3, Figure 4, Figure 5, and Figure 6 I show the 3D displays obtained by converting the 2D grayscale, linear lightness rainbow, modified heated body, and cube lightness rainbow, respectively. Looking at the 3D displays altogether confirms that feeling for me.

What do you think?

Surface_Plot_of_spectrogram_gray

Figure 3

Surface_Plot_of_spectrogram_lin_L_rainbow

Figure 4

Surface_Plot_of_spectrogram_mod_heated_body

Figure 5

Surface_Plot_of_spectrogram_CubicYF

Figure 6

Comparing color palettes

Introduction

In my last post I introduced a CIE Lab linear L* rainbow palette from a paper by Kindlmann et al. [1]. I used this palette with a map of South America created with data from the Global Land One-km Base Elevation Project at the National Geophysical Data Center. The map is the third one in the figure below.

South_America_maps_LinearL_rainbow

Based on visual inspection I argued that the linear L* colored map compares more favourably with the grayscale on the right – my perceptual benchmark – than the first and second maps, which use my ROYGBIV rainbow palette (from this post) and a classic rainbow palette, respectively. I noted that looking at the intensity of the colorbars may help in the assessment: the third and fourth colorbars are very similar and both look perceptually linear, whereas the first and second do not.

So it seems that among the three color palettes the third one is the best, but…

… prove it!

All the above is fine and reasonable, and yet it is still very much subjective. How can I prove it, and convince myself this is indeed the case?

Well, of course one way is to use my L* profile and Great Pyramid tests with the Matlab code from the first post of this series. Look at the two figures below: comparison of the lightness L* plots clearly shows the linear L* palette is far more perceptual than the ROYGBIV.
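For readers who want to try this themselves, the Great Pyramid test can be sketched in a few lines of Matlab. This is a simplified stand-in for my actual code; 'cmap' is assumed to be the 256×3 palette under scrutiny, loaded beforehand.

% The Great Pyramid test surface, coloured by elevation (sketch;
% proportions are rough: ~230 m base, ~139 m high)
[x, y] = meshgrid(-115:115);
z = max(139 - max(abs(x), abs(y)) * (139/115), 0);
surf(x, y, z, 'EdgeColor', 'none')
colormap(cmap); axis equal; colorbar
% on a perceptual palette the flat flanks should look perfectly smooth,
% with no false bands or edges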

L plot linear L

L plot ROYGBIV

One disadvantage of this method is that you have to use Matlab, which is neither free nor cheap, and you have to be comfortable with some coding and ASCII file manipulation.

Just recently I had an idea for an open source alternative using ImageJ and the 3D Color Inspector plugin. The only preparatory step required is to save a palette colorbar as a raster image. Then open the image in ImageJ, run the plugin, and display the colorbar in Lab space in a 3D view. There are many options to change the scale of the plot, the perspective, and how the colors are displayed (e.g. frequency weighted, median cut, etcetera). The view can be rotated manually, and also automatically. Below I am showing the rotating animations for the same two palettes.
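Incidentally, Matlab users can run an equivalent (if less polished) check by scattering the palette in Lab space and rotating the axes, which is essentially what the plugin does. A sketch, again assuming rgb2lab and a 256×3 palette matrix 'cmap':

% the palette as a point cloud in CIE Lab space (sketch)
lab = rgb2lab(cmap);
scatter3(lab(:,2), lab(:,3), lab(:,1), 36, cmap, 'filled')
xlabel('a*'); ylabel('b*'); zlabel('L*')
rotate3d on   % drag to rotate, much like the plugin's manual rotation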

Discussion

The whole process, including recording the animations with the QuickTime screencast feature, took me less than 10 minutes, and it leaves no doubt as to which is the best color palette. Let me know what you think.

A few observations: in 3D the ROYGBIV palette is even more strikingly and obviously non-monotonic. The lightness gradient varies in magnitude, resulting in non-uniform contrast. Compare, for example, the portion between blue and green to that between green and yellow: they span approximately the same number of samples, but with a very different change in lightness between their extremes. The gradient also changes sign, producing perceptual inversions, for example where the yellow-to-red section follows the blue-to-yellow one. If this palette were used to display elevation data, these inversions could be perceived as inversions in elevation. The linear L* palette, on the other hand, nicely spirals upwards, with L* changing monotonically from 0 to 100.

References

[1] Kindlmann, G., Reinhard, E., and Creem, S., 2002, Face-based Luminance Matching for Perceptual Colormap Generation: Proceedings of the IEEE Conference on Visualization ’02.

Related posts (MyCarta)

The rainbow is dead…long live the rainbow! – the full series

What is a colour space? reblogged from Colour Chat

Color Use Guidelines for Mapping and Visualization

A rainbow for everyone

Is Indigo really a colour of the rainbow?

Why is the hue circle circular at all?

A good divergent color palette for Matlab

Related topics (external)

Color in scientific visualization

The dangers of default disdain

Color tools

How to avoid equidistant HSV colors

Non-uniform gradient creator

Colormap tool

Color Oracle – color vision deficiency simulation – stand-alone (Windows, Mac and Linux)

Dichromacy – color vision deficiency simulation – open source plugin for ImageJ

Vischeck – color vision deficiency simulation – plugin for ImageJ and Photoshop (Windows and Linux)

For teachers

NASA’s teaching resources for grades 6-9: What’s the Frequency, Roy G. Biv?

ImageJ and 3D Color inspector plugin

http://rsbweb.nih.gov/ij/docs/concepts.html

http://rsb.info.nih.gov/ij/plugins/color-inspector.html

The rainbow is dead…long live the rainbow! – Perceptual palettes, part 5 – CIE Lab linear L* rainbow

Some great examples

After my previous post in this series there was a great discussion on perceptual color palettes with some members of the Worldwide Geophysicists group on LinkedIn. Ian MacLeod shared some really good examples, and uploaded them here.

HSL linear L rainbow palette

Today I’d like to share a color palette that I really like:

It is one of the palettes introduced in a paper by Kindlmann et al. [1]. The authors created their palettes with a technique they call luminance controlled interpolation, which they explain in this online presentation. However, the presentation uses different palettes (their isoluminant rainbow, and their heated body), so if you find it confusing I recommend you look at the paper first. Indeed, the paper is a good read if you are interested in colormap generation techniques; it is one of the papers that encouraged me to develop the methodology for my cube law rainbow, which I will introduce in an upcoming post.

This is how I understand their method for creating the palette: they mapped six pure-hue rainbow colors (magenta, blue, cyan, green, yellow, and red) in HSL space, and adjusted the luminance by changing the HSL Lightness value to ‘match’ that of six control points evenly spaced along the grayscale palette. After that, they interpolated linearly along the L axis between 0 and 1 using the equation presented in the paper.
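Purely to make that description concrete, here is a rough Matlab sketch of the matching step as I understand it. To be clear, this is my guess at the procedure, not the authors' code: the bisection search, the hsl2rgb helper, and the choice of control L* values are all my own assumptions.

% Luminance-controlled matching (sketch of my reading of the method):
% for each pure hue, find the HSL Lightness whose resulting L* matches
% an evenly spaced grayscale control point
hueDeg  = [300 240 180 120 60 0];      % magenta blue cyan green yellow red
Ltarget = linspace(100/7, 600/7, 6);   % six even control points (my choice)
ctrlRGB = zeros(6, 3);
for k = 1:6
    lo = 0; hi = 1;                    % bisection on HSL Lightness
    for it = 1:30
        mid = (lo + hi)/2;
        lab = rgb2lab(hsl2rgb(hueDeg(k)/360, 1, mid));
        if lab(1) < Ltarget(k), lo = mid; else, hi = mid; end
    end
    ctrlRGB(k,:) = hsl2rgb(hueDeg(k)/360, 1, (lo + hi)/2);
end
% a full palette would then interpolate linearly in L* between these
% control colors, per the equation in the paper

function rgb = hsl2rgb(h, s, l)
% standard HSL to RGB conversion (h, s, l all in [0, 1])
c = (1 - abs(2*l - 1)) * s;
x = c * (1 - abs(mod(h*6, 2) - 1));
m = l - c/2;
switch floor(h*6)
    case 0, rgb = [c x 0];
    case 1, rgb = [x c 0];
    case 2, rgb = [0 c x];
    case 3, rgb = [0 x c];
    case 4, rgb = [x 0 c];
    otherwise, rgb = [c 0 x];
end
rgb = rgb + m;
end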

CIE Lab linear L* rainbow palette

For this post I will try to create a similar palette. In fact, initially I was thinking of just replicating it, so I imported the palette as a screen capture image into Matlab, reduced it to a 256×3 RGB colormap matrix, and converted the RGB values to Lab to check its linearity in lightness. Below I am showing the lightness profile, colored by value of L*, and the Great Pyramid of Giza – my usual test surface – also colored by L* (notice I changed the X axis of both L* plots from sample number to pyramid elevation to facilitate comparison of the two figures).
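If you want to reproduce the import step, it can be sketched along these lines (the file name is hypothetical; rgb2lab assumes the Image Processing Toolbox):

% reduce a screen-captured colorbar to a 256x3 colormap and check its
% lightness profile (sketch)
img = im2double(imread('palette_capture.png'));
strip = squeeze(img(round(end/2), :, :));   % sample the middle row, N x 3
cmap = interp1(1:size(strip,1), strip, linspace(1, size(strip,1), 256));
lab = rgb2lab(cmap);
plot(lab(:,1), 'k'); xlabel('Sample number'); ylabel('L*')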

Clearly, although the original palette was constructed to be perceptually linear, it is not linear following my import. Notice in particular the notch in the profile in the blue area, at approximately 100 m elevation. This artifact is also visible as a flat-looking blue band in the pyramid.

I have to confess I am not too sure why the palette has this peculiar lightness profile. I suspect it is because their palette is by construction device dependent (see the paper), so that when I took the screen capture on my monitor I introduced the artifacts.

The only way to know for sure would be to use their software to create the palette, or alternatively to write the equation from the paper into Matlab code and create a palette calibrated on my monitor, then compare it to the screen-captured one. Perhaps one day I will find the time to do it, but having developed my own method to create a perceptual palette, my interest in this one became purely practical: I wanted to get on with it and use it.

Fixing and testing the palette

Regardless of the cause of this nonlinear L* profile, I decided to fix it, and I did so by simply replacing the original profile with a new one, changing linearly between 0.0 and 1.0. Below I am showing the L* plot for this adjusted palette, and the Great Pyramid of Giza, both again colored by value of L*.
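In Matlab the fix itself takes only a few lines. A sketch: 'cmap' is the imported 256×3 palette, lab2rgb requires the Image Processing Toolbox, and any out-of-gamut values are simply clipped.

% replace the palette's L* with a linear ramp, keeping a* and b*
lab = rgb2lab(cmap);
lab(:,1) = linspace(0, 100, 256)';    % linear lightness, black to white
adjusted = lab2rgb(lab);
adjusted = min(max(adjusted, 0), 1);  % clip to the valid RGB range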

The pyramid with the adjusted palette seems better: the blue band is gone, and it looks great. I am ready to try the palette on a more complex surface. For that I have chosen the digital elevation data for South America, available online through the Global Land One-km Base Elevation Project at the National Geophysical Data Center. To load and display the data in Matlab I used the first code snippet in Steve Eddins’ post on the US continental divide (modified for the South America data tiles). Below is the data mapped using the adjusted palette. I really like the result: it’s smooth and it looks right.
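Once the tiles are loaded, the display step itself is short (a sketch; 'elev' stands for the assembled South America grid and 'adjusted' for the palette fixed above):

% map the elevation data with the adjusted linear L* palette
imagesc(elev); axis image off
colormap(adjusted); colorbar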

South_America_LinearL_solo

But how do I know, really? I mean, once I move away from my perfectly flat pyramid surface, how do I know what to expect, or not expect? In other words, how would I know if an edge I see on the map above is an artifact or, worse, whether the palette is obscuring real edges?

In some cases the answer is simple. Let’s take a look at the four versions of the map in my last figure. The first on the left was generated using the ROYGBIV palette I described in this post. It would be obvious to me, even if I never looked at the L* profile, that the blue areas are darker than the purple areas, giving the map a sort of inverted-image look.

South_America_maps_LinearL_rainbow

But how about the second map from the left? For this one I used the default rainbow from a popular mapping program. It does not look too bad at first sight. Yes, the yellow is perceived as a bright, sharp edge, and we now know why that is, but other than that it would be hard to tell if there are artifacts. On a second look, though, the whole area away from the Andes appears a bit too uniform.

A good way to assess these maps is to use grayscale, which we know is a good perceptual option, as a benchmark. This is the last map on the right. The third map of South America was coloured using my adjusted linear L* palette. This map looks the most similar to our grayscale benchmark. Comparison of the colorbars also helps: the third and fourth are very similar and both look perceptually linear, whereas the second shows flatness in the blue and green areas.

Let me know what you think of these examples. And as usual, you are welcome to use the palette in your work. You can download it here.

UPDATE

With my following post, Comparing color palettes, I introduced my new method to compare palettes with ImageJ and the 3D Color Inspector plugin. Below are the recorded 3D animations of the initial and adjusted palettes, respectively. In 3D it is easier to see that there is an area of flat L* between the dark purple and dark blue in the initial color palette. The adjusted color palette, instead, monotonically spirals upwards.

References

[1] Kindlmann, G., Reinhard, E., and Creem, S., 2002, Face-based Luminance Matching for Perceptual Colormap Generation: Proceedings of the IEEE Conference on Visualization ’02.


The rainbow is dead…long live the rainbow! – Perceptual palettes, part 4 – CIE Lab heated body

In my last post I discussed the two main issues with the rainbow color palette from the point of view of human color vision, and concluded that one of these issues is insurmountable.

But before I move on to presenting alternative color palettes, let me give you one last example of how bad the rainbow is. It was sent to me by Antony Price, a member of the LinkedIn group Worldwide Geophysicists. Antony created a grayscale and a rainbow-colored version – using the same data range and number of intervals – of the satellite-altimeter-derived free-air gravity map of the world [1]. I am showing the two maps below.


colour maps

The perfect lead into my series on perceptual color palettes. Great post!

The original article on the Guardian is here. And here is the conversation that led to the improved map, as put together on Storify.

I thought it’d be interesting to run a simulation of what the map would actually look like to viewers with the three types of color deficient vision. Below are my results for the first map. It is obvious from this simulation that while the map is OK for Tritanope viewers, the green and red areas are very confusing for Protanope and Deuteranope viewers.


An example of Forensic Image Processing in ImageJ

In a previous post I introduced ImageJ, a very powerful open source image processing software. ImageJ allows users to display, edit, analyze, process, and filter images, and its capabilities are greatly increased by hundreds of plugins.

In a future post I will be showing how to use the watershed transform in ImageJ for medical image analysis and advanced geoscience map interpretation and terrain analysis.

Today I am posting a guest entry by Ron DeSpain, an image and signal analysis software developer. Ron’s note is about Feature Detection for Fingerprint Matching in ImageJ. I was thrilled to receive this submission, as I really have a soft spot for forensic science. Additionally, it is a nice way to introduce skeletonization, which I will be using in a future series on automatic detection of lineaments in geophysical maps. So, thanks Ron!

Please check this page for reference on fingerprint terminology. And if you are interested in the topic and would like to start a discussion, or make a suggestion, please use the comment section below. You can also contact Ron directly at ron_despain@hotmail.com if you want to enquire about the code.

==========================================================================================

Initial Feature Detection Steps for Fingerprint Matching – by Ron DeSpain

A common fingerprint pre-processing method, called the crossings algorithm, is used to extract features called minutiae from a fingerprint. Minutiae are located at the ends of fingerprint ridges and where ridges split (bifurcations), as shown in Figure 1. Once detected, minutiae are correlated with a database of known fingerprint minutiae sets. This article discusses the very first step in detecting these minutiae in a fingerprint.

Figure 1 Types of Fingerprint Minutiae

This fingerprint is available in a free database of fingerprint images at http://bias.csr.unibo.it/fvc2000/download.asp

I got the idea for this convolution-based minutiae extractor from a paper similar to Afsar et al. (see the reference at the end of this article), where a slightly different counting scheme is used to identify minutiae.

This algorithm depends on the fact that the end and bifurcation patterns have unique numbers of crossings in a 3×3 local region, as depicted in Figure 2. This means that by simply counting the crossings you can detect the minutiae.

Figure 2 Minutiae Patterns

The pseudocode for this algorithm is as follows:

1. Convert the image to binary, normalized to the 0 to 1 range, floating point data
2. Skeletonize the image
3. Convolve the skeleton with the unit 3×3 matrix to count the crossings
4. Multiply the skeletonized image by the convolved image to obtain the Features image
5. Threshold the Features image at 2 for ridge ends
6. Threshold the Features image at 4 for bifurcations

The following ImageJ macro will identify minutiae using this simple pattern recognition technique. You can download and install ImageJ for free from http://imagej.nih.gov/ij/download.html. Don’t forget to get the user’s manual and macro coding guide from the same site if you want to modify my macro.

//Minutiae Detection Macro
open();                                     // prompt for the fingerprint image
run("Duplicate...", "title=Skeleton");      // work on a copy, keep the original
starttime = getTime();
run("Make Binary");                         // step 1: convert to binary
run("Skeletonize");                         // step 2: thin ridges to a 1-pixel skeleton
run("32-bit");                              // floating point, needed for normalization
run("Divide...", "value=255.000");          // rescale 0-255 to the 0-1 range
run("Enhance Contrast", "saturated=0 normalize");
run("Duplicate...", "title=Convolution");
run("Convolve...", "text1=[1 1 1\n1 1 1\n1 1 1\n] stack");  // step 3: count crossings in each 3x3 neighbourhood
imageCalculator("Multiply create 32-bit", "Skeleton","Convolution");  // step 4: keep counts on skeleton pixels only
endtime = getTime();
selectWindow("Result of Skeleton");
rename("Features");                         // ridge ends = 2, bifurcations = 4
run("Tile");                                // lay out all open windows
run("Threshold...");                        // steps 5-6: threshold at 2 or 4
print("Processing Time (ms) = "+(endtime - starttime));
setTool(11);                                // switch the active tool
selectWindow("Features");
run("Sync Windows");                        // link cursors across the tiled windows

Copy this code to a text file (.txt), drop it into the ImageJ macros folder, then install and run it in ImageJ using the image at the end of this article.

The output of the above macro is shown in Figure 3 below:

Figure 3 ImageJ Macro Output

Setting the threshold control to show pixels with a value of 2 in red will highlight the ridge end detections, as shown in Figure 4. Note that noise in the image produces false detections, which have to be identified with further processing not addressed here.

Figure 4 Ridge End Detections

Bifurcations are similarly found by setting the threshold to 4 as shown in Figure 5:

Figure 5 Bifurcations Detected

For those of you who would like to dig deeper into this subject, there are two fingerprint processing macros on the Mathworks File Exchange for Matlab users, and a free fingerprint verification SDK at http://www.neurotechnology.com/free-fingerprint-verification-sdk.html.

You can copy and save the fingerprint image I used in this article directly from Figure 6 below to get started, either via screen capture or by right-clicking the image to download it.

Figure 6 Original Image

Reference

Afsar, F. A., Arif, M., and Hussain, M., 2004, Fingerprint identification and verification system using minutiae matching: National Conference on Emerging Technologies.

Lending you a hand with image processing – introduction to ImageJ

In a previous post I used an x-ray of my left hand to showcase some basic image visualization techniques in Matlab.

If you are interested in learning image processing and analysis on your own (just like I did), but are not too interested in the programming side of things, or would rather find a noncommercial alternative, I’d recommend ImageJ. I just stumbled upon it a few weeks ago and was immediately drawn to it.

ImageJ is a completely free, open source, Java-based image processing environment. It allows users to display, edit, analyze, process, and filter images, and its capabilities are greatly increased by hundreds of plugins on the official webpage and elsewhere.

It is used extensively by biomedical and medical image processing professionals (check this fantastic tutorial by the Montpellier RIO imaging lab), but it is popular in many fields, from A-stronomy (you can read a brief review here) to Z-oology (check this site).

I decided to give it a try right away. Within an hour of installing it on my iMac I had added the Interactive 3D Surface Plot plugin, loaded the hand x-ray image, displayed it, and adjusted the z scale, smoothing, lighting, and intensity thresholds to what (preliminarily) seemed optimal.

For each discrete adjustment I saved a screen capture, then reimported the captures as an image sequence in ImageJ and easily saved the sequence as an AVI movie, which is shown below. I’m hoping this will give you a sense of how I iteratively converged to a good result.
