I was curious to see how all these colormaps fared, but my expectation was that Jet would sink to the bottom. I was really surprised to see it came out on top, one vote ahead of the linear lightness rainbow (21 and 20 votes out of 62, respectively). The modified heated body followed with 11 votes.
My surprise comes from the fact that Jet carries perceptual artifacts within the progression of colours (see for example this post). One way to demonstrate these artifacts is to convert the 2D map into a 3D surface where again we use Jet to colour amplitude values, but we use the intensities from the 2D map for the elevation. This can be done for example using the Interactive 3D Surface Plot plugin for ImageJ (as in my previous post Lending you a hand with image processing – introduction to ImageJ). The resulting surface is shown in Figure 1. This is almost exactly what your brain would do when you look at the 2D map colored with Jet in the previous post.
Figure 1
In Figure 2 the same data is now displayed as a surface where amplitude values were used for the elevation, with a very light sun shading to help a bit with the perception of relief, but no colormap at all. When comparing Figure 1 with Figure 2, one of the artifacts is immediately recognized: the highest values in Figure 2, which honours the data, become a relative low in Figure 1. This is because red has lower intensity than yellow, and therefore data colored in red in 2D are plotted at a lower elevation than data colored in yellow, even though the amplitudes of the latter were lower.
Figure 2
For these reasons, I did not expect Jet to be the top pick. On the other hand, I think Jet is perhaps favoured because, with consistent use, our brain learns in part to compensate for these non-perceptual artifacts in 2D maps, and because it has at least two regions of higher contrast (higher magnitude gradient) than other colormaps. Unfortunately, as I wrote in a recently published tutorial, these regions are randomly placed in the colormap, and the gradients are variable, so we gain on contrast but lose on faithfulness in representing the data structure.
Matt Hall wrote a great comment following the previous post, making a real argument for switching between multiple colormaps in the interpretation stage to explore and highlight features in both the signal and the noise in the data, and suggesting that perhaps no single colormap is best overall. I agree 100% with almost everything Matt said, except perhaps on the best overall: looking at the 2D maps, at least with this dataset, I feel the heated body could be the best overall colormap, even if marginally. In Figure 3, Figure 4, Figure 5, and Figure 6 I show the 3D displays obtained by converting the 2D grayscale, linear lightness rainbow, modified heated body, and cube lightness rainbow, respectively. Looking at the 3D displays altogether confirms that feeling for me.
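For readers who would like to experiment with this intensity-as-elevation idea in Matlab rather than ImageJ, here is a minimal sketch; it uses the built-in peaks function in place of the spectrogram data, so it illustrates the technique rather than reproducing the figures above.
%% intensity-as-elevation sketch (peaks stands in for the spectrogram data)
data = peaks(256);
n = (data - min(data(:))) / (max(data(:)) - min(data(:))); % normalize to 0-1
cmap = jet(256);
idx = round(n*255) + 1;                     % map each sample to a colormap row
rgb = reshape(cmap(idx,:), [size(data) 3]); % the data as a Jet-colored image
% perceived intensity (luma) of each Jet colour:
luma = 0.2989*rgb(:,:,1) + 0.5870*rgb(:,:,2) + 0.1140*rgb(:,:,3);
figure;
surf(luma, rgb, 'EdgeColor', 'none');       % elevation = intensity, colour = Jet
axis tight; view(-35, 45);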
Since then Giuliano has been kind enough to provide me with the data for one of his spectrograms, so I am resuming the discussion. Below is a set of five figures generated in Matlab from the same data using different colormaps. With this post I’d like to get readers involved: please cast your vote for the colormap you prefer, and even drop a line in the comments section to tell us the reason for your preference.
In the second post I’ll show the data displayed with the same 5 colormaps but using a different type of visualization, which will reveal what our brain is doing with the colours (without our full knowledge and consent), and then I will ask you again to vote for your favourite.
In my last post I introduced a CIE Lab linear L* rainbow palette from a paper by Kindlmann et al. [1]. I used this palette with a map of South America created with data from the Global Land One-km Base Elevation Project at the National Geophysical Data Center. The map is the third one in the figure below.
Based on visual inspection I argued that the linear L* colored map compares more favourably with the grayscale on the right – my perceptual benchmark – than the first and second maps, which use my ROYGBIV rainbow palette (from this post) and a classic rainbow palette, respectively. I noted that looking at the intensity of the colorbars may help in the assessment: the third and fourth colorbars are very similar and both look perceptually linear, whereas the first and second do not.
So it seems that among the three color palettes the third one is the best, but…
… prove it!
All the above is fine and reasonable, and yet it is still very much subjective. How can I prove it and convince myself this is indeed the case?
Well, of course one way is to use my L* profile and Great Pyramid tests with Matlab code from the first post of this series. Look at the two figures below: comparison of the lightness L* plots clearly shows the linear L* palette is far more perceptual than the ROYGBIV.
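If you want to reproduce the test yourself, here is a minimal sketch of the L* profile plot; it assumes the Image Processing Toolbox (for rgb2lab) and uses the built-in jet as a stand-in for the palettes above.
%% L* profile sketch (assumes Image Processing Toolbox)
cmap = jet(256);     % palette under test; swap in any palette you like
lab = rgb2lab(cmap); % CIE Lab values of each palette sample
figure; plot(lab(:,1), 'k'); xlabel('palette sample'); ylabel('L*');
% for a perceptual palette this curve should rise monotonically,
% ideally as a straight line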
One disadvantage of this method is that you have to use Matlab, which is neither free nor cheap, and have to be comfortable with some code and ASCII file manipulation.
Just recently I had an idea for an open source alternative with ImageJ and the 3D color inspector plugin. The only preparatory step required is to save a palette colorbar as a raster image. Then open the image in ImageJ, run the plugin and display the colorbar in Lab space in a 3D view. There are many options to change the scale of the plot, the perspective, and how the colors are displayed (e.g. frequency weighted, median cut, etcetera). The view can be rotated manually, and also automatically. Below I am showing the rotating animations for the same two palettes.
Discussion
The whole process, including the recording of the animations using the Quicktime screencast feature, took me less than 10 minutes, and it leaves no doubt as to which one is the best color palette. Let me know what you think.
A few observations: in 3D the ROYGBIV palette is even more strikingly and obviously non-monotonic. The lightness gradient varies in magnitude, resulting in non-uniform contrast. Compare for example the portion between blue and green to that between green and yellow: these have approximately the same number of samples but a very different change in lightness between the extremes. The gradient sign also changes, producing perceptual inversions, for example in the yellow-to-red section that follows the blue-to-yellow one. These inversions may result in perceived elevation inversions if, for example, the palette is used to display elevation data. On the other hand, the linear L* palette nicely spirals upwards, with L* changing monotonically from 0 to 100.
Before starting my series on perceptual color palettes I thought it was worth mentioning an excellent function I found some time ago on the Matlab File Exchange. The function is called Light and Bartlein Color Maps. It was a Matlab Pick of the Week, and it can be used to create four color palettes discussed in the EOS paper by Light and Bartlein. Each of these palettes is suited for a specific task, and the authors claim they are non-confusing for viewers with color vision deficiencies.
In the remainder of this post I will showcase one of the palettes, called orange-white-purple, as it is a good divergent scheme [1]. With the code below I am going to load the World Topography Matlab demo data, create the palette, and use it to display the data.
%% load World Topography Matlab demo
load topo;
%% create Light Bartlein orange-white-purple diverging scheme
LB=flipud(lbmap(256,'BrownBlue')); % flip it so blue is for negative (ocean)
% and orange-brown for positive (land)
%% plot map
fig2 = figure;
imagesc(flipud(topo));
axis equal
axis tight
axis off
set(fig2,'Position',[720 400 980 580]);
title(' Non-symmetric divergent orange-white-purple palette','Color',...
'k','FontSize',12,'FontWeight','demi');
colormap(LB);
colorbar;
And here is the result below. I like this color scheme better than many others for divergent data. The only issue in the figure, although not inherently due to the palette itself [2], is that the centre of the palette does not fall at zero. This is a problem because zero is such an important element in ratio data, in this case representing sea level.
MAKING THE PALETTE SYMMETRIC AROUND ZERO
Fortunately, the problem can easily be fixed by clipping the data limits to a symmetric range. In Matlab this has to be done programmatically, and rather than going about it by trial and error I like to do it automatically with the code below:
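This is a minimal sketch of one way to do it (not necessarily the original code), assuming the topo data and LB palette from the snippet above.
%% clip color limits to a symmetric range around zero
m = max(abs(topo(:))); % largest absolute value in the data
caxis([-m m]);         % symmetric limits pin the palette centre (white) to zero
colorbar;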
Stephen Westland of Colour chat recently posted about a clever new LED traffic light developed in Japan. Here’s my tweet with the link to Westland’s original blog post:
I really like the idea of making a traffic light that works for everyone: for people with full color vision and people with color vision deficiencies. In fact, I think we should do the same with our color palettes. Why do I say that?
A rainbow for everyone
Take a look at Figure 1 below. This is a map of the Bouguer Gravity (terrain-corrected Bouguer Gravity, to be precise) in Southern Tuscany, colored using a rainbow palette. I intentionally left out the colorbar. For a moment ignore the sharp gradient changes at the yellow and cyan colors (that is one of the topics of my upcoming series “The rainbow is dead…long live the rainbow!”). Can you tell which color is representing high values and which low? If you have used a mnemonic like ROY G BIV and can tell that highs are towards the Southwest and lows towards the Northeast, then you are right and you also have full color vision, just like me. Great, because this post is exactly for us, the “normals”.
In a previous post I introduced ImageJ, a very powerful open source image processing software. ImageJ allows users to display, edit, analyze, process, and filter images, and its capabilities are greatly increased by hundreds of plugins.
In a future post I will be showing how to use the watershed transform in ImageJ for medical image analysis and advanced geoscience map interpretation and terrain analysis.
Today I am posting a submission entry by guest Ron DeSpain, an image and signal analysis software developer. Ron’s note is about Feature Detection for Fingerprint Matching in ImageJ. I was thrilled to receive this submission as I really have a soft spot for Forensic science. Additionally, it is a nice way to introduce skeletonization, which I will be using in a future series on automatic detection of lineaments in geophysical maps. So, thanks Ron!
Please check this page for reference on fingerprint terminology. And if you are interested in the topic and would like to start a discussion, or make a suggestion, please use the comment section below. You can also contact Ron directly at ron_despain@hotmail.com if you want to enquire about the code.
Initial Feature Detection Steps for Fingerprint Matching – by Ron DeSpain
A common fingerprint pre-processing method called the crossings algorithm is used to extract features called minutiae from a fingerprint. Minutiae are located at the ends of fingerprint ridges and where the ridges split (bifurcations), as shown in Figure 1. Once detected, minutiae are correlated with a database of known fingerprint minutiae sets. This article discusses the very first step in detecting these minutiae in a fingerprint.
I got the idea for this convolution-based minutiae extractor from a paper similar to that of Afsar et al. (see Reference below), where a slightly different counting scheme is used to identify minutiae.
This algorithm depends on the fact that the end and bifurcation patterns have unique numbers of crossings in a 3×3 local region, as depicted in Figure 2. This means that by simply counting the crossings you could detect the minutiae.
Figure 2 Minutiae Patterns
The pseudocode for this algorithm is as follows:
Convert the image to binary, floating-point data normalized to the 0 to 1 range
Skeletonize the image
Convolve the skeleton with the unit 3×3 matrix to count the crossings
Multiply the skeletonized image by the convolved image = Features Image
Threshold the Features image at 2 for ridge ends
Threshold the Features image at 4 for bifurcations
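Before getting to the ImageJ macro, here is a rough Matlab equivalent of the pseudocode for readers who prefer Matlab; it is a sketch only, assuming the Image Processing Toolbox and a hypothetical input file name.
%% minutiae detection sketch (assumes Image Processing Toolbox)
I = imread('fingerprint.png');               % hypothetical file name
BW = imbinarize(im2gray(I));                 % binary image; use ~BW if ridges come out dark
skel = bwmorph(BW, 'skel', Inf);             % skeletonize the image
xing = conv2(double(skel), ones(3), 'same'); % count the crossings in each 3x3 region
feat = double(skel) .* xing;                 % features image (skeleton pixels only)
ridgeEnds = (feat == 2);                     % threshold at 2 for ridge ends
bifurcations = (feat == 4);                  % threshold at 4 for bifurcations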
The following ImageJ macro will identify minutiae using this simple pattern recognition technique. You can download and install ImageJ free from http://imagej.nih.gov/ij/download.html. Don’t forget to get the user’s manual and macro coding guide from this site if you want to modify my macro.
Copy this code to a text file (.txt), drop it into the ImageJ macros folder, install and run it in ImageJ using the image at the end of this article.
The output of the above macro is shown in Figure 3 below:
Figure 3 ImageJ Macro Output
Setting the threshold control to show pixels with a value of 2 in red highlights will show the ridge end detections as shown in Figure 4. Note that the noise in the image produces false detections, which have to be identified with further processing not addressed here.
Figure 4 Ridge End Detections
Bifurcations are similarly found by setting the threshold to 4 as shown in Figure 5:
Figure 5 Bifurcations Detected
There are two fingerprint processing submissions on the MathWorks File Exchange for Matlab users, and a free fingerprint verification SDK at http://www.neurotechnology.com/free-fingerprint-verification-sdk.html for those of you who would like to dig deeper into this subject.
You can copy and save the fingerprint image I used in this article directly from Figure 6 below to get you started, either via screen capture or by right-clicking the image and downloading it.
Figure 6 Original Image
Reference
Afsar, F. A., M. Arif, and M. Hussain. “Fingerprint identification and verification system using minutiae matching.” National Conference on Emerging Technologies. 2004.
Today I would like to show a way to quickly create a pseudo-3D display from this map:
Original image
The map is a screen capture of a meandering river near Galena, Alaska, taken in Google Earth. I love this image; it is one of my favorite maps for several reasons. First of all, it is plainly and simply a stunningly beautiful image. Secondly, and more practically, the meanders look not too dissimilar to how they would appear on a 3D seismic time slice displayed in grayscale density, which is great because it is difficult to get good 3D seismic examples to work with. Finally, this is a good test image from the filtering standpoint, as it has a number of linear and curved features of different sizes, scales, and orientations.

The method I will use to enhance the display is the shift and subtract operation illustrated in The Scientist and Engineer’s Guide to Digital Signal Processing, along with other 3×3 edge modification methods. The idea is quite simple, and yet extremely effective: we convolve the input image with a filter like this one:
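The kernel figure is not reproduced here, but a common form of the shift-and-subtract kernel, together with a minimal Matlab sketch of the operation (the file name and mid-gray offset are illustrative), is below.
%% shift and subtract (emboss) sketch
I = im2double(imread('meander_gray.png')); % hypothetical grayscale screen capture
k = [0 0 0; 0 1 0; 0 0 -1];                % subtract a diagonally shifted copy
J = conv2(I, k, 'same') + 0.5;             % offset so flat areas map to mid-gray
figure; imshow(J);                         % values outside [0 1] are clipped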
I love a visualization well done, whether by me or someone else. In fact, I love visualization, period. I find there’s always something to learn by looking at an image or animation, and I am always on the lookout for something new and interesting.
Today I would like to share with readers some of the things I learned, saw, or admired over time.
Where to start? Visualization in Google Earth
Yes, we all know nowadays how to use Google Earth to plan our next vacation, check our old neighbor’s new house, etcetera. But visualization? Yes indeed. Just today I was at a roundtable meeting of Fig Tree members (Fig Tree is an NGO that supports international development projects) and I learned how Google Earth is used by many NGOs for project planning: for a start check the Mercy Corps‘ Rough Google Earth Guide, these map overlay tools, and the official gallery.
Specialize in one discipline if you can. See what the experts in that field do. For example, I am a geophysicist and do a lot of seismic visualization and interpretation, so I look at what folks like Steve Lynch or Art Barnes, to name a couple, are doing, and follow the Agile Geoscience blog. Again, keep abreast of the latest technology: Google Earth is increasingly being used in seismic exploration planning and visualization. You can find some examples here and even get some seismic overlays and display them yourself: if you have Google Earth just download this KMZ file and double-click.
I am also always curious about other fields and browse for examples incessantly. I am interested in music, and I was thrilled to find this great review of Music Visualization. I am also interested in Astronomy and Planetary Exploration, and over time I have found some amazing visualizations. This video, for instance, is a volume-rendered animation of the star-forming region L1448 created by Nick Holliman (Durham University) in VolView, an open source volume visualization program.
A while back I learned a lot from books like A Beginner’s Guide to Constructing the Universe, with its exploration of the relationships between nature, art, science, symbols, and numbers; I regularly go back to it.
Check answers on sites like Quora or Stack Overflow. Check regularly, or better yet subscribe to, visualization and specialist blogs: I mentioned already VizThink and Fell in Love with Data. I also like the excellent Datavisualization.ch and FlowingData, where you can actually find an extensive list of blogs.
Study what others do
Take a look at the groundbreaking work of Hans Rosling with his Gapminder:
Check this video on Designing for Visual Efficiency from Vizthink to learn how to declutter your visualizations:
Here’s an interesting visualization project from IBM: sign up on Many Eyes not only to browse several examples of visualizations but also to upload your own data and outsource the visualization project.
Read The Data Visualization Beginner’s Toolkit series from the Fell in Love with Data blog. This is the introduction to the series. In the first post he reviews books and other resources. In the second post he introduces some rules and, more importantly, the software tools. There’s also a feature interview with Moritz Stefaner on data visualization freelancing.