Just posted a tweet yesterday on this great iPhone and Android App to correct for Color Vision Deficiency:
And here’s a screen capture:
The perfect lead-in to my series on perceptual color palettes.
The original article on the Guardian is here. And here is the conversation that led to the improved map, as put together on Storify.
I thought it’d be interesting to run a simulation of what the map would actually look like to viewers with the 3 types of color-deficient vision. Below are my results for the first map. It is obvious from this simulation that while the map is OK for Tritanope viewers, the green and red areas are very confusing for Protanope and Deuteranope viewers.
In my previous post on this topic I left two loose ends: one in the main text about shading in 3D, and one in the comment section, to follow up on a couple of points in Evan’s feedback. I finally managed to go back and spend some time on those, and that is what I am posting about today.
I was trying to write some code to apply the shading with transparency and the surf command. In fact, I had been trying, and asking around in the Matlab community, for more than a year, to no avail. I think it is not possible to create the shading directly that way, but I did find a workaround. The breakthrough came when I asked myself this question: can I find a way to capture in a variable the color and the shading associated with each pixel in one of the final 2D maps from the previous post? If I could do that, then it would be possible to assign the colors and shading in that variable using this syntax for the surf command:
surf(data,c);
where data is the gravity matrix and c is the color and shading matrix. To do it in practice I started from a suggestion by Walter Roberson on the Matlab community in his answer to my question on this topic.
The full code to do that is below, followed by an explanation including 3 figures. As in the other post, since the data set I use is from my unpublished thesis in Geology, I am not able to share it, and you will have to use your own data, but the Matlab code is easily adapted.
%% cell 1
figure;
shadedpcolor(x,y,data,(1-normalise(slope)),[-5.9834 2.9969],[0 1],0.45,cube1,0);
axis equal; axis off; axis tight;
shadedcolorbar([-5.9834 2.9969],0.55,cube1);
In cell 1, using again shadedpcolor.m, normalise.m, and the cube1 color palette, I create the 2D shaded image, which I show here in Figure 1.
Figure 1
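To make the capture idea concrete, here is a minimal sketch of the remaining steps (an illustration of the approach described above, not the exact code from my Matlab community thread; getframe is built into Matlab, while imresize requires the Image Processing Toolbox):

%% capture and drape (sketch)
f = getframe(gca);              % capture the rendered 2D shaded map as an image
c = double(f.cdata) / 255;      % convert the uint8 capture to doubles in [0 1]
c = imresize(c, size(data));    % resize the color matrix to match the gravity grid
figure;
surf(data, c);                  % drape the captured color and shading onto the surface
shading interp;                 % interpolate to hide the mesh lines
axis off; axis tight;

The matrix c now carries both the cube1 colors and the slope-derived shading, so the 3D surface inherits them in a single surf call.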
This is a fantastic initiative.
http://www.nsf.gov/news/special_reports/scivis/challenge.jsp
In my last post I described how to create a powerful, nondirectional shading for a geophysical surface using the slope of the data to assign the shading intensity (i.e. areas of greater slope are assigned darker shading). Today I will show how to create a similar effect in Matlab.
Since the data set I use is from my unpublished thesis in Geology, I am not able to share it, and you will have to use your own data, but the Matlab code is easily adapted. The code snippets below assume you have a geophysical surface already imported in the workspace and stored in a variable called “data”, as well as the derivative in a variable called “data_slope”.
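If you do not have the derivative yet, one possible way to compute it (my own sketch, not code from the original post) is with Matlab’s built-in gradient function; this assumes a uniform grid spacing, so scale by your cell size if you need true slope values:

[dx, dy] = gradient(data);           % first derivatives along the two grid directions
data_slope = sqrt(dx.^2 + dy.^2);    % magnitude of the steepest gradient at each point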
Some time ago I read this interesting Image Processing blog post by Steve Eddins at Mathworks on overlaying images using transparency. I encourage readers to take a look at this and other posts by Steve, he’s great! That particular blog post gave me the idea to use transparency and the slope to create my favorite shading in Matlab.
In addition to the code below you will need normalise.m from Peter Kovesi‘s website, and to import the color palette cube1.
%% alpha transparency code snippet
black = cat(3, zeros(size(data)), zeros(size(data)), ...
    zeros(size(data)));              % make a truecolor all-black image
gray = black + 0.2;                  % make a truecolor all-gray image
alphaI = normalise(data_slope);      % create transparency weight matrix using data_slope
imagesc(data); colormap(cube1);      % display data
hold on
h = imagesc(gray);                   % overlay gray image on data
hold off
set(h, 'AlphaData', alphaI);         % set transparency of gray layer using weight matrix
axis equal;
axis tight;
axis off;
And here is the result in Figure 1 below – not bad!
Figure 1. Shaded using transparency
I have been thinking for a while about writing on visualization of geophysical data. I finally got to it, and I am now pleased to show you a technique I use often. This tutorial has shaped up into 2 independent posts: in the first post I will show how to implement the technique with Surfer, in the second one with Matlab (you will need access to a license of Surfer 8.08 or later, and Matlab 2007a or later to replicate the work done in the tutorial).
I will illustrate the technique using gravity data since it is the data I developed it for. In an upcoming series of gravity exploration tutorials I will discuss in depth the acquisition, processing, enhancement, and interpretation of gravity data (see [1] and [4]). For now, suffice it to say that gravity prospecting is useful in areas where rocks of different density are laterally in contact, whether stratigraphically or tectonically, producing a measurable local variation of the gravitational field. This was the case for the study area (in the Monti Romani of Southern Tuscany) from my thesis in Geology at the University of Rome [2].
In this part of the Apennine belt, a Paleozoic metamorphic basement (density ~2.7 g/cm3) is overlain by a thick sequence of clastic near-shore units of the Triassic-Oligocene Tuscan Nappe (density ~2.3 g/cm3). The Tuscan Nappe is in turn covered by the Cretaceous-Eocene flysch units of the Liguride Complex (density ~2.1 g/cm3).
During the deformation of the Apennines, NE-verging compressive thrusts caused doubling of the basement. The tectonic setting was later complicated by tensional block faulting, with the formation of horst and graben structures generally extending along NW-SE and N-S trends, which were further disrupted by later, and still active, NE-SW normal faulting (see [2] and references therein, for example [3]).
This complex tectonic history placed the basement in lateral contact with the less dense rocks of the younger formations, and this is reflected in the residual anomaly map [4] of Figure 1. Roughly speaking, there is a high in the SE quadrant of ~3.0 mGal corresponding to the location of the largest basement outcrop, a NW-SE elongated high of ~0.5 mGal in the centre bound by lows on both the SW and NE (~-6.0 and ~-5.0 mGal, respectively), and finally a local high in the NW quadrant of ~-0.5 mGal. From this we can infer that the systems of normal faults caused differential sinking of the top of the basement in different blocks, leaving an isolated high in the middle, which is consistent with the described tectonic history [2]. Notice that the grayscale representation is smoothly varying, reflecting (and honoring) the structure inherent in the data. It does not allow good visual discrimination and comparison of differences, but from the interpretation standpoint I recommend always starting out with it: once a first impression is formed it is difficult to go back. There is time later to start changing the display.
OK, now that we have formed a first impression, what can we try to improve in this display? The first thing we can do to increase the perceptual contrast is to add color, as I have done in Figure 2. This is an improvement: now we are able to appreciate smaller changes, quickly assess differences, or conversely identify areas of similar anomaly. Adding the third dimension and perspective is a further improvement, as seen in Figure 3. But there’s still something missing. Even though we’ve added color, relief, and perspective, the map looks a bit “flat”.
Figure 2 – Colored residual anomalies in milligals. This version of the map was generated using the IMAGE MAP option in Surfer.

Adding contours is a good option to further bring out details in the data, and I like the flexibility of contours in Surfer. For example, for Figure 4 I assigned (in Contour Properties, Levels tab) a dashed line style to negative residual contours and a solid line style to positive residual contours, with a thicker line for the zero contour. This can be done by modifying the style for each level individually, or by creating two separate contours, one for the positive data and one for the negative data, which is handy when several contour levels are present. The one drawback of using contours this way is that it is redundant. We used 3 weapons – color, relief, and contours – to display one dataset, and to characterize just one property, the shape of the gravity anomaly. In geoscience it is often necessary, and desirable, to show multiple (relevant) datasets in one view, so this is a bit of a waste. I would rather spare the contours, for example, to overlay and compare anomalous concentrations of gold pathfinder elements on this gravity anomaly map (one of the objectives of the study, the Monti Romani being an area of active gold exploration).
The alternative to contours is the use of illumination, or lighting, which I used in Figure 5. Lighting is doing a really good job: now we can recognize there is a high-frequency texture in the data, and we see some features both in the highs and lows. But there’s a catch: we are now introducing perceptual artifacts, in the form of a bright white highlight, which obscures some of the details where the surface is orthogonal to the point-source light.
There is a way to illuminate the surface without introducing artifacts – and that is really what I wanted to show you with this tutorial – which is to use a derivative of the data to assign the shading intensity (areas of greater gradient are assigned darker shading) [5]. In this case I chose the terrain slope, which is the slope in the direction of steepest gradient at any point in the data (calculated in a running window). The result is a very powerful shading. Here is how you can do it in Surfer:
1) CREATE TERRAIN SLOPE GRID (let’s call this A): go to GRID > CALCULUS > TERRAIN SLOPE
Result is shown in Figure 6 below:
2) CREATE COMPLEMENT OF TERRAIN SLOPE AND NORMALIZE TO [1 0] RANGE (to assign darker shading to areas of greater slope). This is done with 3 operations (a Matlab sketch of the same arithmetic follows the list):
i) GRID > MATH > B = A - min(A)

where min(A) is the minimum value, which you can read off the grid info (for example, double click on the map above to open the Map Properties; there’s an info button next to the Input File field).

ii) GRID > MATH > C = B / max(B)

iii) GRID > MATH > D = 1 - C
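As mentioned, here is the same three-step arithmetic as a Matlab sketch, assuming the terrain slope grid from step 1 has been imported into a matrix A (the variable names mirror the Surfer steps):

B = A - min(A(:));    % i)   shift the grid so its minimum is zero
C = B / max(B(:));    % ii)  normalize to the [0 1] range
D = 1 - C;            % iii) complement, so areas of greater slope get darker shading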
Result is shown in Figure 7 below. This looks really good – see how the data now seems almost 3D? It would work very well just as it is. However, I do like color, so I added it back in Figure 8. This is done by draping the grayscale terrain slope complement IMAGE MAP as an overlay over the colored residual anomaly SURFACE MAP, and setting the Color Modulation to BLEND in the 3D Surface Properties, Overlay tab. I really do like this display in Figure 8; I think it is terrific. Let me know if you like it too.
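Conceptually, BLEND modulates the surface colors with the grayscale overlay. As an illustration only (my assumption of how to reproduce the effect in Matlab, not a description of Surfer’s internals), you could multiply each RGB channel of the colored anomaly map by the slope complement D from the sketch above:

n = size(cube1, 1);                                            % number of colors in the palette
dn = (data - min(data(:))) / (max(data(:)) - min(data(:)));    % normalize the data to [0 1]
rgb = ind2rgb(round(dn * (n - 1)) + 1, cube1);                 % truecolor version of the colored map
blended = rgb .* repmat(D, [1 1 3]);                           % darken each channel where the slope is steep
image(blended); axis equal; axis tight; axis off;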
Finally, in Figure 9, I added a contour of the anomaly in the gold pathfinders, to reiterate the point I made above that contours are best spared for a second dataset.
In my next post I will show you how to do all of the above programmatically in Matlab (and share the code). Meanwhile, comments, suggestions, requests are welcome. Have fun mapping and visualizing!
Did you like the colormap? In a future series on perceptually balanced colormaps I will tell you how I created it. For now, if you’d like to try it on your data you can download it here:
cube1 – generic format with RGB triplets;
Cube1_Surfer – this is preformatted for Surfer with 100 RGB triplets and a header line. Download the .doc file, open and save as plain text, then change the extension to .clr;
Cube1_Surfer_inverse – the ability to flip a color palette is not implemented in Surfer (at least not in version 8) so I am including the flipped version of the above. Again, download the .doc file, open and save as plain text, then change the extension to .clr.
Visualization tips for geoscientists: Matlab
Visualization tips for geoscientists: Matlab, part II
Visualization tips for geoscientists: Matlab, part III
Image Processing Tips for Geoscientists – part 1
Compare lists from text files using Matlab – an application for resource exploration
Basement structure in central and southern Alberta: insights from gravity and magnetic maps
Making seismic data come alive
Mapping multiple attributes to three- and four-component color models — A tutorial
Enhancing Fault Visibility Using Bump Mapped Seismic Attributes
I would like to thank Michele di Filippo at the Department of Earth Science, University of Rome La Sapienza, to whom I owe a great deal. Michele, my first mentor and a friend, taught me everything I know about the planning and implementation of a geophysical field campaign. In the process I also learned from him a great deal about geology, mapping, Surfer, and problem solving. Michele will make a contribution to the gravity exploration series.
[1] If you would like to learn more about gravity data interpretation please check these excellent notes by Martin Unsworth, Professor of Physics at the Earth and Atmospheric Sciences department, University of Alberta.
[2] Niccoli, M., 2000: Gravity, magnetic, and geologic exploration in the Monti Romani of Southern Tuscany, unpublished field and research thesis, Department of Earth Science, University of Rome La Sapienza.
[3] Moretti A., Meletti C., Ottria G. (1990) – Studio stratigrafico e strutturale dei Monti Romani (GR-VT) – 1: dal Paleozoico all’Orogenesi Alpidica. Boll. Soc. Geol. It., 109, 557-581. In Italian.
[4] Typically, reduction of the raw data is necessary before any interpretation can be attempted. The result of this process of reduction is a Bouguer anomaly map, which is conceptually equivalent to what we would measure if we stripped away everything above sea level, therefore observing the distribution of rock densities below a regular surface. It is standard practice to also detrend the Bouguer anomaly to separate the influence of basin- or crustal-scale effects from local effects, as either one or the other is often the target of the survey. The result of this procedure is typically called a residual anomaly and often shows subtler details that were not apparent due to the regional gradients. Reduction to residuals makes it easier to qualitatively separate mass excesses from mass deficits. For a more detailed review of the gravity exploration method check again the notes in [1] and refer to this article on the CSEG Recorder and references therein.
[5] Speaking in general, 3D maps without lighting often have a flat appearance, which is why light sources are added. The traditional choice is to use single or multiple directional light sources, but the result is that only linear features orthogonal to those orientations will be highlighted. This is useful when interpreting for lineaments or faults (when present), but not in all circumstances, and requires a lot of experimenting. In other cases, like this one, directional lighting introduces a bright highlight, which obscures some detail. A more generalist, and in my view more effective, alternative is to use information derived from the data itself for the shading. One way to do that is to use a high-pass filtered version of the data. I will show you how to do that in Matlab in the next tutorial. Another solution, which I favored in this example, is to use a first derivative of the data.
I love a visualization well done, whether by me or someone else. In fact, I love visualization, period. I find there’s always something to learn by looking at an image or animation, and I am always on the lookout for something new and interesting to learn.
Today I would like to share with readers some of the things I learned, saw, or admired over time.
Yes, we all know nowadays how to use Google Earth to plan our next vacation, check our old neighbor’s new house, etcetera. But visualization? Yes indeed. Just today I was at a roundtable meeting of Fig Tree members (Fig Tree is an NGO that supports international development projects) and I learned how Google Earth is used by many NGOs for project planning: for a start check the Mercy Corps‘ Rough Google Earth Guide, these map overlay tools, and the official gallery.
Here’s a tutorial on annotating Google Earth:
and some beautiful visualizations created using Matlab in conjunction with Google Earth.
There are scores of great books on visualization. I introduce two I really like in my more recent post Two great visualization books. Reader Ron DeSpain mentioned these free online courses in a comment: Introduction to Infographics and Data Visualization, by Alberto Cairo, who is also the author of The Functional Art, a blog and a book, and Information Visualization MOOC, by Katy Börner and colleagues at Indiana University.
Specialize in one discipline if you can, and see what the experts in that field do. For example, I am a Geophysicist and do a lot of seismic visualization and interpretation, so I look at what folks like Steve Lynch or Art Barnes (to name a couple) do, and follow the Agile Geoscience blog. Again, keep abreast of the latest technology: Google Earth is increasingly being used in seismic exploration planning and visualization. You can find some examples here and even get some seismic overlays and display them yourself: if you have Google Earth just download this KMZ file and double-click.
I am also always curious about other fields and browse for examples incessantly. I am interested in music, and I was thrilled to find this great review of Music Visualization. I am also interested in Astronomy and Planetary Exploration, and over time I have found some amazing visualizations. This video, for instance, is a volume-rendered animation of the star-forming region L1448, created by Nick Holliman (Durham University) in VolView, an open source volume visualization program.
Credits: Harvard Astronomical Medicine Project
Just recently I found on Visurus a time-lapse (1950-2011) geocentric map of the visible universe.
A while back I learned a lot from books like A Beginner’s Guide to Constructing the Universe, and its exploration of the relationships between nature, art, science, symbols, and numbers; I regularly go back to it.
Look for synergies and collaborations. Here’s what can happen when you put together a geologist and a design expert:
Check answers on sites like Quora or Stack Overflow. Check regularly, or better yet subscribe to, visualization or specialist blogs: I mentioned already VizThink and Fell in Love with Data. I also like the excellent Datavisualization.ch and FlowingData, where you can actually find an extensive list of blogs.
Take a look at the groundbreaking work of Hans Rosling with his Gapminder:
Check this video on Designing for Visual Efficiency from VizThink to learn how to declutter your visualizations:
Ignite Toronto 2: Ryan Coleman – Designing for visual efficiency at Ignite Toronto on Vimeo.
and this one on Journalism in the age of data:
Journalism in the Age of Data from Geoff McGhee on Vimeo.
Here’s an interesting visualization project from IBM: sign up on Many Eyes not only to browse several examples of visualizations but also to upload your own data and outsource the visualization project.
A subject I think is particularly important is how to use color in your presentations and visualizations. Use color sparingly, sensibly, and ad hoc. Know about color vision deficiencies and confusing color schemes, and the difference a perceptually appropriate colormap can make. With Vischeck, Color Oracle, and Dichromacy you can simulate how people with different color vision deficiencies will see your images and decide if you need a different colormap; for two-color contrast for presentations, webpages, and diagrams, use the Accessibility Color Wheel. Select colormaps according to task, and avoid artifacts when using color blending. Reduce the hue range when possible and choose it based on the concept of color harmonization. After all, the hue circle isn’t really circular at all and contains non-spectral colors (purple). You can design harmonious schemes with Color Wheel Pro, and check a palette’s mood using Colour Monitor, a wonderful tool by Richard Wheeler. Avoid at all costs the rainbow and similar color palettes.
Finally, a fantastic resource on color: References and Resources for Visualization Professionals, by Robert Simmon at NASA’s Earth Observatory.
Thanks to this technology you can now build interactive applications with seamless dynamic zooming. Look at Steve Lynch’s seismic visualization built using MS Silverlight, and Well Visualization in Prezi by Evan Bianco.
The best tools are your brain and your hand. Draw and sketch a lot. Concept maps are a great tool for brainstorming and tinkering with ideas, whether on paper or your computer; check NASA’s Mars Exploration Concept Map. Finally, I strongly encourage you to read Experiences in Visual Thinking.
Read The Data Visualization Beginner’s Toolkit series from Fell in Love with Data blog. This is the introduction to the series. In the first post he reviews books and other resources. In the second post he introduces some rules and more importantly the software tools. There’s a feature interview with Moritz Stefaner on data visualization freelancing:
Interview: Moritz Stefaner on Data Visualization Freelancing from FILWD on Vimeo.
And if you are intimidated by having to pick up programming skills, he has a post that is just right for you.
It should go without saying, but unfortunately it does not: if data are sensitive, don’t forget about privacy.
visualisingdata’s essential collection of visualisation resources
Visualization Tools & Resources
How to avoid equidistant HSV colors
Color Oracle – color vision deficiency simulation – stand-alone (Windows, Mac, and Linux)
Dichromacy – color vision deficiency simulation – open source plugin for ImageJ
Vischeck – color vision deficiency simulation – plugin for ImageJ and Photoshop (Windows and Linux)