Mild or wild: robustness through morphological filtering

This guest post (first published here) is by Elwyn Galloway, author of Scibbatical on WordPress. It is the fourth in our series of collaborative articles about sketch2model, a project from the 2015 Calgary Geoscience Hackathon organized by Agile Geoscience. Happy reading.

 

We’re highlighting a key issue that came up in our project, and describing how we tackled it. Matteo’s post on Morphological Filtering does a great job of explaining what we implemented in sketch2model. I’ll build on his post to explain the why and how. In case you need a refresher on sketch2model, look back at sketch2model, Sketch Image Enhancement, Linking Edges with Geomorphological Filtering.

Morphological Filtering

As Matteo demonstrated by example, sketch2model’s ability to segment a sketch properly depends on the fidelity of the sketch.


An image of a whiteboard sketch (left) divides an area into three sections. Without morphological filtering, sketch2model segments the original image into two sections (identified as orange, purple) (centre). The algorithm correctly segments the area into three sections (orange, purple, green) when morphological filtering is applied (right).

To compensate for sketch imperfections, Matteo suggested morphological filtering on binarized images. Morphological filtering is a family of image-processing operations that modify the shape of elements in an image. He suggested using the closing tool for our purposes. Have a look at Matteo’s post for insight into this and other morphological filters.

One of the best aspects of this approach is that it is simple to apply. There is essentially one parameter to define: a structuring element. Since you’ve already read Matteo’s post, you recall his onion analogy explaining the morphological filtering processes of erosion and dilation – erosion is akin to removing an onion layer, dilation is adding a layer on. You’ll also recall that the size of the structuring element is the thickness of the layer added to, or removed from, the onion. Essentially, the parameterization of this process comes down to choosing the thickness of the onion layers.

Sketch2model uses dilation followed by erosion to fill gaps left between sketch lines (morphological dilation followed by erosion is closing). Matteo created this really great widget to illustrate closing using an interactive animation.


Matteo’s animation was created using this interactive Jupyter notebook. Closing connects the lines of the sketch.
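To make this concrete, here is a minimal sketch of closing a binarized sketch image. sketch2model itself is written in Python, so this is only a Matlab (Image Processing Toolbox) equivalent of the operation; the file name and the 5-pixel radius are illustrative assumptions, not the project’s actual values.

%% minimal closing sketch (illustrative, not sketch2model's actual code)
bw = imread('sketch_bw.png') > 128;   % hypothetical binarized sketch, lines = true
se = strel('disk', 5);                % disk-shaped structuring element, 5-pixel radius
closed = imclose(bw, se);             % dilation followed by erosion = closing
imshowpair(bw, closed, 'montage');    % compare before and after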

Some is Good, More is Better?

Matteo showed that closing fails if the structuring element used is too small. So just make it really big, right? Well, there can be too much of a good thing. Compare what happens when you use an appropriately sized structuring element (mild) to the results from an excessively large structuring element (wild).


Comparing the results of mild and wild structuring elements: if the structuring element is too large, the filter compromises the quality of the reproduction.

Using a morphological filter with a structuring element that is too small doesn’t fix the sketches, but using a structuring element that is too large compromises the sketch too. We’re left to find an element that’s just right. Since one of the priorities for sketch2model was to robustly handle a variety of sketches – marker on whiteboard, pencil on paper, ink on napkin – with as little user input as possible, we were motivated to find a way to do this without requiring the user to select the size of the structuring element.

Is there a universal solution? Consider this: a sketch captured in two images, each at a different resolution. In one image, the lines of the sketch appear to be approximately 16 pixels wide. The same lines appear to be 32 pixels wide in the other image. Since the size of the structuring element is defined in terms of pixels, it becomes apparent that the ideal structuring element cannot be “one size fits all”.


High-resolution (left) versus low-resolution (right) image of the same portion of a sketch. Closing the gap between the lines would require a different size structuring element for each image: about 5 pixels for high-resolution or 1 pixel for low-resolution.

Thinking Like a Human

Still motivated to avoid user parameterization for the structuring element, we explored ways to make the algorithm intelligent enough to select an appropriate structuring element on its own. Ultimately, we had to realize a few things before we came up with something that would work:

  1. When capturing an image of a sketch, users compose very similar images (compose in the photographic sense of the word): the sketch is centered and nearly fills the captured image.
  2. The image of a sketch is not the same as a user’s perception of a sketch: a camera may record imperfections (gaps) in a sketch that a user does not perceive.
  3. Camera resolution is insignificant: a sketched feature captured at two different resolutions would have two different lengths (in pixels), but identical lengths when defined as a percentage of image size.

With these insights, we deduced that the gaps we were trying to fill with morphological filtering would be those that escaped the notice of the sketch artist.

Recognizing the importance of accurate sketch reproduction, our solution applies the smallest structuring element possible that will still fill any unintentional gaps in a sketch. It does so in a way that is adaptable.

A discussion about the definition of “unintentional gap” allowed us to create a mandate for the closing portion of our algorithm: sketch2model should fill the gaps the user doesn’t notice, and detail below the limit of the user’s perception should not affect the output model. A quick “literature” (i.e. Google) search revealed that a person’s visual perception is affected by many factors beyond the eye’s optic limits. Without a simple formula to define a limit, we did what any hacker would do: we defined it empirically, using a bunch of test images to tweak the structuring element of the closing filter so that it leaves the perceptible gaps and fills in the imperceptible ones. In the sketch2model algorithm, the size of the structuring element is defined as a fraction of the image size, so it was the fraction that we tuned empirically, as sketched below.
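Here is a minimal sketch of that adaptive sizing, again in Matlab for illustration only; the fraction value is a placeholder, not the empirically tuned one.

%% adaptive structuring element: radius as a fraction of image size
bw = imread('sketch_bw.png') > 128;        % hypothetical binarized sketch
frac = 0.01;                               % illustrative fraction, tuned on test images
r = max(1, round(frac * min(size(bw))));   % radius scales with image resolution
closed = imclose(bw, strel('disk', r));    % same closing, now resolution independent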

Producing Usable Results

Implicit in the implementation is sketch2model’s expectation that the user’s sketch and their image of the sketch are crafted with some care. The expectations are reasonable: connect lines you’d like connected; get a clear image of your sketch. Like so much else in life, better input gives better results.


Input (left) and result (right) of sketch2model.

To produce an adaptable algorithm requiring as little user input as possible, the sketch2model team had to mix a little image processing wizardry with some non-technical insight.

Have you tried it? You can find it at sketch2model.com. Also on GitHub.


Previous posts in the sketch2model series: sketch2model, Sketch Image Enhancement, Linking Edges with Geomorphological Filtering.

Perceptual rainbow palette – the method

With this post I would like to introduce my new, perceptually balanced rainbow color palette. I used the palette for the first time in How to assess a colourmap, an essay I wrote for 52 Things You Should Know About Geophysics, edited by Matt Hall and Evan Bianco of Agile Geoscience.

In my essay I started with the analysis of the spectrum color palette, the default in some seismic interpretation software, using my Lightness L* profile plot and Great Pyramid of Giza test surface (see this post for background on the tests and to download the Matlab code). The profile and the pyramid are shown in the top left and top right images in Figure 1, from the essay.


Figure 1

In the plot the value of L* varies with the color of each sample in the spectrum, and the line is colored accordingly. This erratic profile highlights several issues with spectrum: firstly, the change in lightness is not monotonic. For example, it increases from black (L*=0) to magenta [M], then drops from magenta to blue [B], then increases again, and so on. This is troublesome if spectrum is used to map elevation because it will interfere with the correct perception of relief, particularly if shading is added. Additionally, the curve gradient changes many times, indicating a nonuniform perceptual distance between samples. There are also plateaus of nearly flat L*, creating bands of constant color (a small one at the blue, and a large one at the green [G]).

The Great Pyramid has monotonically increasing elevation (in feet – easier to code), so there should be no discontinuities in the surface if the color palette is perceptual. However, using the spectrum we have clearly introduced many artificial discontinuities that are not present in the data. For the bottom row in Figure 1 I used my new color palette, which has a nice, monotonic, compressive lightness profile (bottom left). Using this palette, the pyramid surface (bottom right) is smoothly colored, without any perceptual artifact.

This is how I created the palette. I started with RGB triplets for magenta, blue, cyan, green, and yellow (no red), which I converted to L*a*b* triplets using Colorspace transformations, a Matlab function available on the Matlab File Exchange. I modified the new L* values by fitting them to an approximately cube law L* function (this is consistent with Stevens’ power law of perception), and adjusted the a* and b* values using Lab charts like the one in Figure 2 (from CIELab Color Space by Gernot Hoffmann, Department of Mechanical Engineering, University of Emden) to get 5 colors moving up the L* axis along an imaginary spiral (I actually used tracing paper). Then I interpolated to 256 samples using the same ~cube law, and finally reconverted to RGB [1].


Figure 2
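To make the recipe concrete, here is a minimal sketch of the interpolation and conversion steps, using the same colorspace function. The five L*a*b* triplets below are hypothetical placeholders, not the actual key colors of cubeYF.

%% cube-law lightness profile, 256 samples (exponent 1/0.42 is ~cube)
l = linspace(1, power(100, 0.42), 256);
L = power(l, 1/0.42)';

%% hypothetical L*a*b* triplets for the 5 key colors, ordered by L*
Lab5 = [30 60 -80; 45 -20 -50; 60 -50 -10; 75 -60 60; 90 -10 80];

%% interpolate a* and b* onto the new L* profile, then reconvert to RGB
a = interp1(Lab5(:,1), Lab5(:,2), L, 'pchip');
b = interp1(Lab5(:,1), Lab5(:,3), L, 'pchip');
cmap = colorspace('RGB<-Lab', [L a b]);
cmap = min(max(cmap, 0), 1);               % clip to the valid RGB range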

There was quite a bit of trial and error involved, but I am very happy with the results. In the animations below I compare the spectrum and the new palette, which I call cubeYF, as seen in CIELab color space. I generated these animations with the method described in this post, using the 3D color inspector plugin in ImageJ.

I also added Matlab’s default Jet rainbow – a reminder that defaults may be a necessity, but in many instances not the ideal choice.

OK, the new palette looks promising, insofar as modelling is concerned. But how would it fare with some real data? To answer this question I used a residual gravity map from my unpublished thesis in Geology at the University of Rome. I introduced this map and discussed the geological context and objectives of the geophysical study in a previous post, so please refer to that if you are curious about it. In this post I will go straight to the comparison of the color palettes; if you are unfamiliar with gravity data, try to imagine negative residuals as elevation below sea level, and positive residuals as elevation above sea level – you won’t miss out on anything.

In Figures 3 to 6 I colored the data using the above three color palettes, and grayscale as benchmark. I generated these figures using Matlab code I shared in my post Visualization tips for geoscientists: Matlab, and I presented three of them (grayscale, Spectrum, and cubeYF) at the 2012 convention of the Canadian Society of Exploration Geophysicists in Calgary (the extended abstract, which I co-authored with Steve Lynch of 3rd Science, is available here).

In Figure 3, the benchmark for the following figures, I use grayscale to represent the data, assigning increasing intensity from the most negative gravity residuals in black to the most positive residuals in white (as labeled next to the colorbar). Then I used terrain slope to create shading: the higher the slope, the darker the shading assigned, which results in a very effective pseudo-3D display (please refer to Visualization tips for geoscientists: Surfer for an explanation of the method, and Visualization tips for geoscientists: Matlab for code).


Figure 3 – Grayscale benchmark
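In Matlab terms the shading step boils down to something like the sketch below; this is my paraphrase of the method, assuming Z holds the gridded residual values, not the actual code from those posts.

%% pseudo-3D shading from terrain slope (illustrative sketch)
[dzdx, dzdy] = gradient(Z);           % local gradient of the surface
slope = sqrt(dzdx.^2 + dzdy.^2);      % slope magnitude
shade = 1 - slope ./ max(slope(:));   % the steeper the slope, the darker
imagesc(mat2gray(Z) .* shade);        % intensity modulated by shading
colormap(gray); axis image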

In Figure 4 I color the pseudo-3D surface with the cubeYF rainbow. Using this color palette instead of grayscale allows viewers to appreciate smaller changes, more quickly assess differences, or conversely identify areas of similar anomaly, while at the same time preserving the pseudo-3D effect. Now compare Figure 4 with Figure 5, where we use the spectrum to color the surface: this palette introduces several artefacts (sharp edges and bands of constant hue) which confuse the display and interfere with the perception of pseudo-relief, all but eliminating the effect. For Figure 6 I used Matlab’s default Jet color palette, which is better than the spectrum, and yet the relief effect is somewhat lost (due mainly to a sharp yellow edge and a cyan band).


Figure 4 – cubeYF rainbow


Figure 5 – Industry spectrum


Figure 6 – Matlab Jet

It looks like both spectrum and jet are poor choices for color representation of a surface, with the new color palette a far superior alternative. In the CSEG convention paper mentioned above (available here), Steve and I went further by showing that the spectrum not only has these perceptual artifacts and edges, but is also very confusing for viewers with deficient color vision, a condition that occurs in about 8% of Caucasian males. We did that using computer software [2] to simulate how viewers with two types of deficient color vision, Deuteranopia and Tritanopia, would see the two colored surfaces, and we compared the results. In other words, we are now able to see the images as they would see them. Please refer to the paper for a full discussion of these simulations.

Here, in Figures 7 to 9, I show the Deuteranope simulations for cubeYF, spectrum, and jet, respectively. In all three simulations the hue discrimination has decreased, but while the spectrum and jet are now even more confusing, the cubeYF has preserved the relief effect.

Figure 7 – Deuteranope simulation of cubeYF

Figure 8 – Deuteranope simulation of Industry spectrum

Figure 9 – Deuteranope simulation of Matlab Jet

That’s it for today. In my next post, to be published very shortly, you will get the palette, and a lot more.

References

Niccoli, M., and Lynch, S., 2012, A more perceptual color palette for structure maps: CSEG/CSPG 2012 convention, Calgary

Niccoli, M., 2012, How to assess a colourmap, in 52 Things You Should Know About Geophysics, Agile Libre

Notes

[1] An alternative to the method I used would be to start directly in CIELab color space, and define some kind of spiral path with an L* lightness profile built programmatically. For example:

– Using 3D helical curves from: http://www.mathworks.com/matlabcentral/fileexchange/25177-3d-curves

– Using an Archimedes spiral

– Expanding on code by Steve Eddins at Mathworks (A path through L*a*b* color space) in this article, one could create a spiral with cube-law lightness with something like:

%% best-fit pure power law lightness function
%  inspired by Wikipedia - http://en.wikipedia.org/wiki/Lightness
l2 = linspace(1, power(100, 0.42), 256);
L2 = (power(l2, 1/0.42))';              % ~cube law L*, from 1 to 100

%% CIELab spiral with cube-law lightness
radius = 50;
theta = linspace(0.6*pi, 2*pi, 256).';
a = radius * sin(theta);                % a* and b* trace a circle,
b = radius * cos(theta);                % so [L*, a*, b*] is a spiral
Lab1 = [L2, a, b];
RGB_realcube = colorspace('RGB<-Lab', Lab1);

[2] The simulations are created using ImageJ, an open source image manipulation program, and the Vischeck plug-in. I later discovered Dichromacy, another ImageJ plug-in for these simulations, which has the advantage of being open source. They can also be performed on the fly (no upload needed) using the online tool Color Oracle.


The rainbow is dead…long live the rainbow! – Perceptual palettes, part 5 – CIE Lab linear L* rainbow

Some great examples

After my previous post in this series there was a great discussion on perceptual color palettes with some members of the Worldwide Geophysicists group on LinkedIn. Ian MacLeod shared some really good examples, and uploaded them here.

HSL linear L rainbow palette

Today I’d like to share a color palette that I really like:

It is one of the palettes introduced in a paper by Kindlmann et al. [1]. The authors created their palettes with a technique they call luminance controlled interpolation, which they explain in this online presentation. However, there they used different palettes (their isoluminant rainbow, and their heated body), so if you find it confusing I recommend you look at the paper first. Indeed, this is a good read if you are interested in colormap generation techniques; it is one of the papers that encouraged me to develop the methodology for my cube law rainbow, which I will introduce in an upcoming post.

This is how I understand their method to create the palette: they mapped six pure-hue rainbow colors (magenta, blue, cyan, green, yellow, and red) in HSL space, and adjusted the Luminance by changing the HSL Lightness value to ‘match’ that of six control points evenly spaced along the gray scale palette. After that, they interpolated linearly along the L axis between 0 and 1 using the equation presented in the paper.
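In code, my reading of that construction looks roughly like the sketch below, once again using the colorspace File Exchange function. This is a paraphrase of the description above, not the authors’ implementation; the hue angles are just the standard HSL values for the six colors.

%% six pure hues with lightness 'matched' to evenly spaced gray points
hues = [300 240 180 120 60 0]';       % magenta, blue, cyan, green, yellow, red
targets = linspace(0, 1, 6)';         % lightness of 6 gray control points
HSL = [hues, ones(6,1), targets];     % full saturation, matched lightness
key = colorspace('RGB<-HSL', HSL);    % the six key colors

%% linear interpolation along the L axis between 0 and 1
L = linspace(0, 1, 256)';
cmap = interp1(targets, key, L);      % 256x3 RGB colormap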

CIE Lab linear L* rainbow palette

For this post I will try to create a similar palette. In fact, initially I was thinking of just replicating it, so I imported the palette as a screen capture image into Matlab, reduced it to a 256×3 RGB colormap matrix, and converted the RGB values to Lab to check its linearity in lightness. Below I am showing the lightness profile, colored by value of L*, and the Great Pyramid of Giza – my usual test surface – also colored by L* (notice I changed the X axis of both L* plots from sample number to pyramid elevation to facilitate comparison of the two figures).
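The check itself is just a conversion and a plot; here is a minimal sketch, assuming the imported colormap is a 256×3 RGB matrix called cmap with values between 0 and 1.

%% lightness profile of an RGB colormap, colored by the palette itself
Lab = colorspace('Lab<-RGB', cmap);
scatter(1:256, Lab(:,1), 20, cmap, 'filled');
xlabel('sample'); ylabel('L*');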

Clearly, although the original palette was constructed to be perceptually linear, it is not linear following my import. Notice in particular the notch in the profile in the blue area, at approximately 100 m elevation. This artifact is also visible as a flat-looking blue band in the pyramid.

I have to confess I am not too sure why the palette has this peculiar lightness profile. I suspect this may be because their palette is by construction device dependent (see the paper), so that when I took the screen capture on my monitor I introduced the artifacts.

The only way to know for sure would be to use their software to create the palette, or alternatively to write the equation from the paper into Matlab code, create a palette calibrated on my monitor, and compare it to the screen-captured one. Perhaps one day I will find the time to do it, but having developed my own method to create a perceptual palette, my interest in this one became just practical: I wanted to get on with it and use it.

Fixing and testing the palette

Regardless of what the cause of this nonlinear L* profile might be, I decided to fix it, and I did so by simply replacing the original profile with a new one, changing linearly between 0.0 and 1.0. Below I am showing the L* plot for this adjusted palette, and the Great Pyramid of Giza, both again colored by value of L*.
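In essence, the fix is a one-liner on the L* channel; a minimal sketch, again assuming cmap is the imported 256×3 colormap (L* runs from 0 to 100 in CIELab units, so the ramp below is the 0.0–1.0 profile expressed in those units).

%% replace the L* profile with a linear ramp, keeping a* and b*
Lab = colorspace('Lab<-RGB', cmap);
Lab(:,1) = linspace(0, 100, 256)';    % linear lightness
fixed = colorspace('RGB<-Lab', Lab);
fixed = min(max(fixed, 0), 1);        % clip to the valid RGB range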

The pyramid with the adjusted palette seems better: the blue band is gone, and it looks great. I am ready to try it on a more complex surface. For that I have chosen the digital elevation data for South America available online through the Global Land One-km Base Elevation Project at the National Geophysical Data Center. To load and display the data in Matlab I used the first code snippet in Steve Eddins’ post on the US continental divide (modified for the South America data tiles). Below is the data mapped using the adjusted palette. I really like the result: it’s smooth and it looks right.

South America digital elevation data, mapped with the adjusted linear L* palette.

But how do I know, really? I mean, once I move away from my perfectly flat pyramid surface, how do I know what to expect, or not expect? In other words, how would I know if an edge I see on the map above is an artifact, or worse, that the palette is not obscuring real edges?

In some cases the answer is simple. Let’s take a look at the four versions of the map in my last figure. The first on the left was generated using the ROYGBIV palette I described in this post. It would be obvious to me, even if I never looked at the L* profile, that the blue areas are darker than the purple areas, giving the map a sort of inverted image look.

The four maps of South America, left to right: ROYGBIV, default rainbow, adjusted linear L*, and grayscale.

But how about the second map from the left? For this I used the default rainbow from a popular mapping program. This does not look too bad at first sight. Yes, the yellow is perceived as a bright, sharp edge, and we now know why that is, but other than that it would be hard to tell if there are artifacts. After a second look, though, the whole area away from the Andes is a bit too uniform.

A good way to assess these maps is to use grayscale, which we know is a good perceptual option, as a benchmark; this is the last map on the right. The third map of South America was coloured using my adjusted linear L* palette, and it looks the most similar to our grayscale benchmark. Comparing the colorbars also helps: the third and fourth are very similar and both look perceptually linear, whereas the second does show flatness in the blue and green areas.

Let me know what you think of these examples. And as usual, you are welcome to use the palette in your work. You can download it here.

UPDATE

With my following post, Comparing color palettes, I introduced my new method to compare palettes with ImageJ and the 3D color inspector plugin. Below are the recorded 3D animations of the initial and adjusted palettes, respectively. In 3D it is easier to see that there is an area of flat L* between the dark purple and dark blue in the initial color palette. The adjusted color palette, instead, spirals monotonically upwards.

References

[1] Kindlmann, G., Reinhard, E., and Creem, S., 2002, Face-based Luminance Matching for Perceptual Colormap Generation: IEEE Proceedings of the Conference on Visualization ’02

Related posts (MyCarta)

The rainbow is dead…long live the rainbow! – the full series

What is a colour space? Reblogged from Colour Chat

Color Use Guidelines for Mapping and Visualization

A rainbow for everyone

Is Indigo really a colour of the rainbow?

Why is the hue circle circular at all?

A good divergent color palette for Matlab

Related topics (external)

Color in scientific visualization

The dangers of default disdain

Color tools

How to avoid equidistant HSV colors

Non-uniform gradient creator

Colormap tool

Color Oracle – color vision deficiency simulation – standalone (Windows, Mac and Linux)

Dichromacy – color vision deficiency simulation – open source plugin for ImageJ

Vischeck – color vision deficiency simulation – plugin for ImageJ and Photoshop (Windows and Linux)

For teachers

NASA’s teaching resources for grades 6-9: What’s the Frequency, Roy G. Biv?

ImageJ and 3D Color inspector plugin

http://rsbweb.nih.gov/ij/docs/concepts.html

http://rsb.info.nih.gov/ij/plugins/color-inspector.html