sketch2model – linking edges with mathematical morphology

Introduction

As Elwyn wrote in the first post of this series, sketch2model was conceived at the 2015 Calgary Geoscience Hackathon as a web and mobile app that would turn an image of a geological sketch into a geological model, and then use Agile Geoscience's modelr.io to create a synthetic seismic model.

original project promo image

The sketch2model concept: modelling at the speed of imagination. Take a sketch (a), turn it into an earth model (b), create a forward seismic model (c). Our hack takes you from a to b.

One of the main tasks in sketch2model is to identify each and every geological body in a sketch as a closed polygon. As Elwyn wrote, "if the sketch were reproduced exactly as imagined, a segmentation function would do a good job. The trouble is that the sketch captured is rarely the same as the one intended – an artist may accidentally leave small gaps between sketch lines, or the sketch medium can cause unintentional effects (for example, whiteboard markers can erase a little when sketch lines cross, see example below). We applied some morphological filtering to compensate for the sketch imperfections."

Morphological filtering can compensate for imperfections in a sketch, as demonstrated in this example. The original sketch (left) was done with a marker on white board. Notice how the vertical stroke erased a small part of the horizontal one. The binarized version of the sketch (middle) shows an unintentional gap between the strokes, but morphological filtering successfully closes the small gap (right).

The cartoon below shows what would be the final output of sketch2model in the two cases in the example above (non-closed and closed gap).

My objective with this post is to explain visually how we correct for some of these imperfections within sketch2model. I will focus on the use of morphological closing, which consists of applying, in sequence, a dilation and an erosion, the two fundamental morphological operations.

Quick mathematical morphology review

All morphological operations result from the interaction of an image with a structuring element (a kernel) smaller than the image and typically in the shape of a square, disk, or diamond. In most cases the image is binary, that is, pixels take a value of either 1, for the foreground objects, or 0, for the background. The structuring element operates on the foreground objects.
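As a concrete illustration (mine, not from the original post), this is how a few common structuring elements can be generated and inspected with scikit-image, one of the libraries used by sketch2model:

from skimage.morphology import square, diamond, disk

# each array marks with 1s the neighbourhood the element covers around its
# origin (the central pixel)
print(square(3))   # 3x3 square
print(diamond(1))  # 3x3 'plus' (diamond of radius 1)
print(disk(2))     # discrete approximation of a disk of radius 2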

Morphological erosion is used to remove pixels on the foreground objects’ boundaries. How ‘deeply’ the boundaries are eroded depends on the size of the structuring element (and shape, but in this discussion I will ignore the effect of changing the shape). This operation is in my mind analogous to peeling off a layer from an onion; the thickness of the layer is related to the structuring element size.

Twan Maintz, in his book Digital and medical image processing, describes the interaction of image and structuring element during erosion this way: place the structuring element anywhere in the image; if it is fully contained in the foreground object (or in one of the objects), then the origin (central) pixel of the structuring element (and only that one) is part of the eroded output. The book has a great example on page 129.

Dilation does the opposite of erosion: it expands the object boundaries (adding pixels) by an amount that is again related to the size of the structuring element. This is analogous to me to adding back a layer to the onion.

Again, thanks to Maintz, the interaction of image and structuring element in dilation can be described intuitively: place the structuring element anywhere in the image; does it touch any of the foreground objects? If yes, then the origin of the structuring element is part of the dilated result. There is a great example on pages 127-128.

Closing is then, for me, akin to adding a layer to an onion (dilation) and then peeling it back off (erosion), but with a major caveat: some of the changes produced by the dilation are irreversible. Background holes smaller than the structuring element that are filled by the dilation are not restored by the erosion. Similarly, lines in the input image separated by fewer pixels than the size of the structuring element are linked by the dilation and not disconnected by the erosion, which is exactly what we wanted for sketch2model.
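To make the onion analogy concrete, here is a minimal toy example of the three operations in scikit-image (my own illustration, not code from the sketch2model notebook):

import numpy as np
from skimage.morphology import binary_erosion, binary_dilation, binary_closing, disk

# a small square object with a one-pixel hole in the middle
img = np.zeros((9, 9), dtype=bool)
img[2:7, 2:7] = True
img[4, 4] = False

selem = disk(1)
eroded = binary_erosion(img, selem)    # peels one layer off the object
dilated = binary_dilation(img, selem)  # adds a layer back on (and fills the hole)
closed = binary_closing(img, selem)    # dilation followed by erosion

print(closed.astype(int))  # same outer boundary as the original, but the hole is filled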

Closing demo

If you need further explanation of these morphological operations, I'd recommend reading the sections on erosion, dilation, and closing in the ImageMagick user guide, and the examples on the scikit-image website.

As discussed in the previous section, when applying closing to a binary image, the external points in any object in the input image will be left unchanged in the output, but holes will be filled, partially or completely, and disconnected objects like edges (or lines in sketches) can become connected.

We will now demonstrate it below with Python-made graphics but without code; however, you can grab the Jupyter notebook with the complete Python code on GitHub.

I will use this model binary image containing two 1-pixel wide lines. Think of them as lines in a sketch that should have been connected, but are not.

We will attempt to connect these lines using morphological closing with a disk-shaped structuring element of size 2. The result is plotted in the binary image below, showing that closing was successful.

But what would have happened with a smaller structuring element, or with a larger one? In the case of a disk of size 1, the closing magic did not happen:
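For readers who prefer code to pictures, here is a toy stand-in for this demo (my own configuration, inspired by the whiteboard example rather than the notebook's actual test image), showing the effect of the structuring element size:

import numpy as np
from skimage.morphology import binary_closing, disk

# an intact vertical stroke crossing a horizontal stroke that was partly erased
img = np.zeros((11, 13), dtype=bool)
img[:, 6] = True      # vertical stroke
img[5, 0:4] = True    # left part of the horizontal stroke
img[5, 9:13] = True   # right part; columns 4-8 were 'erased' at the crossing

closed1 = binary_closing(img, disk(1))
closed2 = binary_closing(img, disk(2))

print(img[5, 3:10].astype(int))      # [1 0 0 1 0 0 1]  the original gap
print(closed1[5, 3:10].astype(int))  # [1 0 1 1 1 0 1]  disk of size 1: still broken
print(closed2[5, 3:10].astype(int))  # [1 1 1 1 1 1 1]  disk of size 2: gap closed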

Observing this result, one would increase the size of the structuring element. However, as Elwyn will show in the next post, too big a structuring element would also have detrimental effects, causing subsequent operations to introduce significant artifacts in the final results. This has broader implications for our sketch2model app: how do we automatically select (i.e. without hard coding it into the program) the appropriate structuring element size? Again, Elwyn will answer that question; in the last section I want to concentrate on explaining how the closing machinery works in this case.

In the next figure I have broken down the closing operation into its component dilation and erosion, and plotted them step by step to show what happens:

So we see that the edges do get linked by the dilation, but by only one pixel, which the following erosion then removes.

And now let's break down the closing with a disk of size 2 into its components. This is equivalent to applying two consecutive passes of dilation with a disk of size 1, followed by two consecutive passes of erosion with a disk of size 1, as demonstrated in the next figure below. (By the way, looking carefully at the second panel above, we could have predicted that dilation with a disk of size 2 would result in a link 3 pixels wide instead of 1 pixel wide, which the subsequent erosion does not disconnect.)
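Continuing with the toy image from the snippet above, the same decomposition can be reproduced in code; with scikit-image's disk, two passes with disk(1) behave like a single pass with disk(2):

import numpy as np
from skimage.morphology import binary_dilation, binary_erosion, disk

# same toy image as in the previous snippet
img = np.zeros((11, 13), dtype=bool)
img[:, 6] = True
img[5, 0:4] = True
img[5, 9:13] = True

step1 = binary_dilation(img, disk(1))    # first dilation pass
step2 = binary_dilation(step1, disk(1))  # second pass
step3 = binary_erosion(step2, disk(1))   # first erosion pass
step4 = binary_erosion(step3, disk(1))   # second pass: this is the closing with disk(2)

# a vertical slice through one of the gap columns (column 4), rows 2-8
print(img[2:9, 4].astype(int))    # [0 0 0 0 0 0 0]  nothing there to start with
print(step2[2:9, 4].astype(int))  # [1 1 1 1 1 1 1]  the dilations merge the strokes
print(step4[2:9, 4].astype(int))  # [0 0 1 1 1 0 0]  the erosions thin the link but do not break it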

Below is a GIF animated version of this demo, cycling through the above steps; you can also run it yourself by downloading and running the Jupyter notebook on GitHub.

Additional resources

Closing Jupyter notebook with complete Python code on GitHub

sketch2model Jupyter notebook with complete Python code on GitHub 

More reading on Closing, with examples

Related Posts

sketch2model (2015 Geoscience Hackathon, Calgary)

sketch2model – sketch image enhancements

Mapping and validating geophysical lineaments with Python

sketch2model – sketch image enhancements

This is the second post in a series of collaborative articles about sketch2model, a project from the 2015 Calgary Geoscience Hackathon organized by Agile Geoscience.

The first post was written by Elwyn Galloway and published on both his Scibbatical blog and here on MyCarta. In that article Elwyn mentioned the need for an adaptive image conditioning workflow for binarization of photos with geological sketches in images. Binarization is the process of converting a natural image to a binary image (check this simple but awesome interactive demonstration of binarization), which in our case is necessary to separate the sketch from the background.

The following is a demonstration of the preliminary image processing operations applied to the input photo when sketch2model is run. The full code listing of this demonstration is available as a Jupyter notebook on GitHub. Also in GitHub you can find a Jupyter Notebook with the fully documented version of sketch2model.

First we import one of the photos with sketches and convert it to a grayscale image.

from skimage import io, color

im = io.imread('paper_breaks.png')
im = color.rgb2gray(im[0:-1:2, 0:-1:2])  # downsample by a factor of 2 and convert to grayscale

Next we enhance the grayscale image with a couple of cascaded processes. But before we do that, let's graph the intensity values across the image to understand the degree of contrast between sketch edges and background, which will ultimately determine our success in separating them. We show this in the figure below, on the left, for one column of pixels (in the y direction). The black line across the input image on the right shows the location of the selected column. It is fairly obvious from the plot on the left that the intensity of the background is not uniform, due to variable light conditions when the photo was taken, and towards the right (i.e. the bottom of the photo) it gets closer to that of the edges. In some images it might even drop below the intensity of the edges. This highlights the need for (preemptively) applying the enhancements illustrated in the remainder of the post.

The first enhancement is called compressor, or limiter. I read many years ago that it is used in electronics to find hard edges in data: the idea is to square each element in the data (image, or other type of data), smooth the result (enough to remove high frequency variations but not so much as to eliminate variability), take the square root, and finally divide each element in the input by the square root result.

I experimented with this method (at the time using Matlab and its Image Processing Toolbox) using the same gravity dataset from my 2015 geophysical tutorial in The Leading Edge (see the post Mapping and validating geophysical lineaments with Python). An example of one such experiment is shown in the figure below, where: the top left map is the Bouguer data; the top centre map is the squared data; the top right is the result of a Gaussian blur; the bottom left is the result of the square root; and the bottom centre is the final output, where the hardest edges in the original data have been enhanced.

The most important parameter in this process is the choice of the smoothing, or blur: using a Gaussian kernel of a different size, more subtle edges are enhanced, as seen in the bottom right map (these are perhaps acquisition-related gridding artifacts).

In our sketch2model implementation the size of the Gaussian kernel is hardcoded; it was chosen following trial and error on multiple photos of sketches and yielded optimal results in the great majority of them. We were planning to have the kernel size depend on the size of the input image, but left the implementation to our 'future work' list.

Here’s the compressor code from sketch2model:

# compressor or limiter (electronics): find hard edges in data with long
# wavelength variations in amplitude
import numpy as np
import scipy as sp
import scipy.ndimage  # make sp.ndimage available

# step 1: square each element in the image to obtain the power function
sqr = im**2
# step 2: gaussian of squared image
flt2 = sp.ndimage.gaussian_filter(sqr, 21)
# step 3: divide the intensity of each original pixel by the square root
# of the smoothed square
cmprs = im/(np.sqrt(flt2))

and a plot of the result (same column of pixels as in the previous one):

From the plot above we see that now the background intensity is uniform and the contrast has been improved. We can maximize it with contrast stretching, as below:

# contrast stretching
from skimage import exposure

p2, p98 = np.percentile(cmprs, (2, 98))
rescale = exposure.rescale_intensity(cmprs, in_range=(p2, p98))

We now have ideal contrast between edges and background, and can get a binary image with the desired sketch edges using a scalar threshold:

# binarize image with scalar threshold
# rescale is already grayscale, so threshold directly and invert to make the sketch the foreground
binary = ~(rescale > 0.5)

Bingo!

sketch2model

This guest post (first published here) is by Elwyn Galloway, author of Scibbatical on WordPress. It is the first in our series of collaborative articles about sketch2model, a project from the 2015 Calgary Geoscience Hackathon organized by Agile Geoscience. Happy reading.

Collaboration in action. Evan, Matteo, and Elwyn (foreground, L to R) work on sketch2model at the 2015 Calgary Geoscience Hackathon. Photo courtesy of Penny Colton.

Welcome to an epic blog crossover event. Two authors collaborating to tell a single story over the course of several articles.

We’ve each mentioned the sketch2model project on our respective blogs, MyCarta and scibbatical, without giving much detail about it. Apologies if you’ve been waiting anxiously for more. Through the next while, you’ll get to know sketch2model as well as we do.

The sketch2model team came together at the 2015 Geoscience Hackathon (Calgary), hosted by Agile Geoscience. Elwyn and Evan Saltman (epsalt on twitter and GitHub) knew each other from a previous employer, but neither had met Matteo before. All were intrigued by the project idea, and the individual skill sets were diverse enough to combine into a well-rounded group. Ben Bougher, part of the Agile Geoscience team, assisted with the original web interface at the hackathon. Agile’s take on this hackathon can be found on their blog.

Conception

The idea behind sketch2model is that a user should be able to easily create forward seismic models. Modelling at the speed of imagination, allowing seamless transition from idea to synthetic seismic section. It should happen quickly enough to be incorporated into a conversation. It should happen where collaboration happens.

The sketch2model concept: modelling at the speed of imagination. Take a sketch (a), turn it into an earth model (b), create a forward seismic model (c). Our hack takes you from a to b.

Geophysicists like to model wedges, and for good reasons. However, wedge logic can get lost on colleagues. It may not effectively demonstrate the capability of seismic data in a given situation. The idea is not to supplant that kind of modeling, but to enable a new, lighter kind of modeling. Modeling that can easily produce results for twelve different depositional scenarios as quickly as they can be sketched on a whiteboard.

The Hack

Building something mobile to turn a sketch into a synthetic seismic section is a pretty tall order for a weekend. We decided to take a shortcut by leveraging an existing project: Agile’s online seismic modelling package, modelr. The fact that modelr works through any web browser (including a smartphone) kept things mobile. In addition, modelr’s existing functionality allows a user to upload a png image and use it as a rock property model. We chose to use a web API to interface our code with the web application (as a bonus, our approach conveniently fit with the hackathon’s theme of Web). Using modelr’s capabilities, our hack was left with the task of turning a photo of a sketched geologic section into a png image where each geologic body is identified as a different color. An image processing project!

Agile is a strong proponent of Python in geophysics (for reasons nicely articulated in their blog post), and the team was familiar with the language to one extent or another. There was no question that it was the language of choice for this project. And no regrets!

We aimed to create an algorithm robust enough to handle any image of anything a user might sketch while accurately reproducing their intent. Marker on whiteboard presents different challenges than pencil on paper. Light conditions can be highly variable. Sketches can be simple or complex, tidy or messy. When a user leaves a small gap between two lines of the sketch, should the algorithm take the sketch as-is and interpret a single body? Or fill the small gap and interpret two separate bodies?

Our algorithm needs to be robust enough to handle a variety of source images: simple, complex, pencil, marker, paper, white board (check out the glare on the bottom left image). These are some of the test images we used.

Matteo has used image processing for geoscience before, so he landed on an approach for our hack almost instantly: binarize the image to distinguish sketch from background (turn color image into a binary image via thresholding); identify and segregate geobodies; create output image with each body colored uniquely.

Taking the image of the original sketch (left) and creating a binary image (right) is an integral part of the sketch2model process.
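In code, a bare-bones version of those three steps might look something like the snippet below (a simple illustration with scikit-image and a hypothetical file name; the actual sketch2model flow is more involved):

from skimage import io, color, filters, measure

# hypothetical input: a photo of a sketch
img = io.imread('sketch_photo.png', as_gray=True)

# 1. binarize: sketch lines become the foreground
binary = img < filters.threshold_otsu(img)

# 2. segment: label each region enclosed by the lines as a separate geobody
bodies = measure.label(~binary, connectivity=1)

# 3. colorize: give each geobody its own color for the output image
rgb = color.label2rgb(bodies, bg_label=0)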

Python has functions to binarize a color image, but for our applications, the results were very inconsistent. We needed a tool that would work for a variety of media in various lighting conditions. Fortunately, Matteo had some tricks up his sleeve to precondition the images before binarization. We landed on a robust flow that can binarize whatever we throw at it. Matteo will be crafting a blog post on this topic to explain what we’ve implemented.

Once the image is binarized, each geological body must be automatically identified as a closed polygon. If the sketch were reproduced exactly as imagined, a segmentation function would do a good job. The trouble is that the sketch captured is rarely the same as the one intended — an artist may accidentally leave small gaps between sketch lines, or the sketch medium can cause unintentional effects (for example, whiteboard markers can erase a little when sketch lines cross, see example below). We applied some morphological filtering to compensate for the sketch imperfections. If applied too liberally, this type of filtering causes unwanted side effects. Elwyn will explore how we struck a balance between filling unintentional gaps and accurate sketch reproduction in an upcoming blog post.

Morphological filtering can compensate for imperfections in a sketch, as demonstrated in this example. The original sketch (left) was done with a marker on white board. Notice how the vertical stroke erased a small part of the horizontal one. The binarized version of the sketch (middle) shows an unintentional gap between the strokes, but morphological filtering successfully closes the small gap (right).

Compared to the binarization and segmentation, generating the output is a snap. With this final step, we’ve transformed a sketch into a png image where each geologic body is a different color. It’s ready to become a synthetic seismic section in modelr.

Into the Wild

“This is so cool. Draw something on a whiteboard and have a synthetic seismogram right on your iPad five seconds later. I mean, that’s magical.”

Sketch2model was a working prototype by the end of the hackathon. It wasn’t the most robust algorithm, but it worked on a good proportion of our test images. The results were promising enough to continue development after the hackathon. Evidently, we weren’t the only ones interested in further development because sketch2model came up on the February 17th episode of Undersampled Radio. Host Matt Hall: “This is so cool. Draw something on a whiteboard and have a synthetic seismogram right on your iPad five seconds later. I mean, that’s magical.”

Since the hackathon, the algorithm and web interface have progressed to the point that you can use it on your own images at sketch2model.com. To integrate this functionality directly into the forward modelling process, sketch2model will become an option in modelr. The team has made this an open-source project, so you’ll also find it on GitHub. Check out the sketch2model repository if you’re interested in the nuts and bolts of the algorithm. Information posted on these sites is scant right now, but we are working to add more information and documentation.

Sketch2model is designed to enable a new kind of collaboration and creativity in subsurface modelling. By applying image processing techniques, our team built a path to an unconventional kind of forward seismic modelling. Development has progressed to the point that we’ve released it into the wild to see how you’ll use it.

Moiré Patterns

Moiré pattern

Some time ago I reblogged a post from El Ojo Inoportuno showing a Moiré pattern that resulted from taking a photo of a circular pattern of (beautiful) tiles. This phenomenon is caused by undersampling and is also called space aliasing. There's a very good explanation of space aliasing and another stunning Moiré example in Agile Geoscience's post N is for Nyquist.

Creating Moiré patterns

One way to get a Moiré pattern is to superimpose two identical, transparent line gratings and rotate one by an angle. You can see an animation of this on Wolfram MathWorld here; notice that the pattern varies with the angle. On the same page there's also an example of Moiré patterns generated by plotting series of curves on a computer screen, which is very similar to taking the photo of circular tiles shown in the Ojo Inoportuno photo. Again the interference is caused by representing circles with a finite-size pixel grid. If you are interested, you can experiment with these effects and many more by downloading templates from this site. Figure 1 shows my own Moiré from circular patterns.

Figure 1
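If you want to reproduce the line-grating experiment yourself, a quick Python snippet (mine, not from the sites linked above) could be:

import numpy as np
import matplotlib.pyplot as plt
from scipy.ndimage import rotate

# a vertical line grating: 4-pixel bars alternating black and white
x = np.arange(600)
grating = np.tile(((x // 4) % 2).astype(float), (600, 1))

# superimpose the grating with a copy of itself rotated by a small angle
rotated = rotate(grating, angle=5, reshape=False, order=0)
plt.imshow(grating + rotated, cmap='gray')  # the beat between the two gratings is the Moiré pattern
plt.axis('off')
plt.show()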

 

There is a program for interactive Moiré pattern experiments called iMoiré.

Another way to get a Moiré pattern is to scan a picture printed with halftone. There’s a simple explanation of this scanning-generated interference here. Again this is a matter of aliasing, or undersampling. Here’s a good example:

Figure 2

The original image is a lovely watercolor by Ettore Roesler Franz showing medieval houses along the Tiber river in Rome. The Moiré pattern results from scanning the watercolor from one of the book collections (the image was posted on Flickr here).

How to remove Moiré pattern from digital images

For a quick solution, there's a good article with detailed instructions on how to remove Moiré patterns in Photoshop, Paint Shop Pro, etcetera. For a more advanced workflow there's an excellent top hat filter for Photoshop included in Reindeer Graphics' FoveaPro plugin. In Figure 3, I created a sort of pictorial chart of this workflow using low resolution copies of examples in The Image Processing Cookbook, by John C. Russ.

Figure 3

 

In future posts I plan to show how to remove Moiré patterns with open source Python code, and then to extend the workflow to the removal (or attenuation) of acquisition footprint in seismic data, which has a very similar appearance in the 2D Fourier domain and can be filtered with very similar techniques.

 

 

Perceptual rainbow palette – the method

With this post I would like to introduce my new, perceptually balanced rainbow color palette. I used the palette for the first time in How to assess a colourmap, an essay I wrote for 52 Things You Should Know About Geophysics, edited by Matt Hall and Evan Bianco of Agile Geoscience.

In my essay I started with the analysis of the spectrum color palette, the default in some seismic interpretation software, using my Lightness L* profile plot and Great Pyramid of Giza test surface (see this post for background on the tests and to download the Matlab code). The profile and the pyramid are shown in the top left and top right images in Figure 1, from the essay.

Figure 1

In the plot the value of L* varies with the color of each sample in the spectrum, and the line is colored accordingly. This erratic profile highlights several issues with spectrum: firstly, the change in lightness is not monotonic. For example it increases from black (L*=0) to magenta [M] then drops from magenta to blue [B], then increases again and so on. This is troublesome if spectrum is used to map elevation because it will interfere with the correct perception of relief, particularly if shading is added. Additionally, the curve gradient changes many times, indicating a nonuniform perceptual distance between samples. There are also plateaus of nearly flat L*, creating bands of constant color (a small one at the blue, and a large one at the green [G]).

The Great Pyramid has monotonically increasing elevation (in feet – easier to code) so there should be no discontinuities in the surface if the color palette is perceptual. However, using the spectrum we have clearly introduced many artificial discontinuities that are not present in the data. For the bottom row in Figure 1 I used my new color palette, which has a nice, monotonic, compressive Lightness profile (bottom left). Using this palette the pyramid surface (bottom right) is smoothly colored, without any perceptual artifact.
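For readers who want to reproduce this kind of lightness profile plot, here is a minimal Python snippet (my own; the original figures were made in Matlab), using matplotlib's jet as a stand-in rainbow palette:

import numpy as np
import matplotlib.pyplot as plt
from skimage import color

# sample a rainbow colormap and convert its RGB samples to CIELab
rgb = plt.cm.jet(np.linspace(0, 1, 256))[:, :3]
lab = color.rgb2lab(rgb.reshape(1, -1, 3)).reshape(-1, 3)

# L* (lightness) profile: for a perceptual palette this curve should be
# smooth and monotonically increasing; for a rainbow like jet it is not
plt.plot(lab[:, 0])
plt.xlabel('palette sample')
plt.ylabel('L*')
plt.show()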

This is how I created the palette: I started with RGB triplets for magenta, blue, cyan, green, and yellow (no red), which I converted to L*a*b* triplets using Colorspace transformations, a Matlab function available on the Matlab File Exchange. I modified the new L* values by fitting them to an approximately cube-law L* function (this is consistent with Stevens' power law of perception), and adjusted the a* and b* values using Lab charts like the one in Figure 2 (from CIELab Color Space by Gernot Hoffmann, Department of Mechanical Engineering, University of Emden) to get 5 colors moving up the L* axis along an imaginary spiral (I actually used tracing paper). Then I interpolated to 256 samples using the same ~cube law, and finally reconverted to RGB [1].

Figure 2

There was quite a bit of trial and error involved, but I am very happy with the results. In the animations below I compare the spectrum and the new palette, which I call cubeYF, as seen in CIELab color space. I generated these animations with the method described in this post, using the 3D color inspector plugin in ImageJ:

I also added Matlab’s default Jet rainbow – a reminder that defaults may be a necessity, but in many instances not the ideal choice:

OK, the new palette looks promising, insofar as modelling is concerned. But how would it fare using some real data? To answer this question I used a residual gravity map from my unpublished thesis in Geology at the University of Rome. I introduced this map and discussed the geological context and objectives of the geophysical study in a previous post, so please refer to that if you are curious about it. In this post I will go straight to the comparison of the color palettes; if you are unfamiliar with gravity data, try to imagine negative residuals as elevation below sea level, and positive residuals as elevation above sea level – you won't miss out on anything.

In Figures 3 to 6 I colored the data using the above three color palettes, and grayscale as benchmark. I generated these figures using Matlab code I shared in my post Visualization tips for geoscientists: Matlab, and I presented three of them (grayscale, Spectrum, and cubeYF) at the 2012 convention of the Canadian Society of Exploration Geophysicists in Calgary (the extended abstract, which I co-authored with Steve Lynch of 3rd Science, is available here).

In Figure 3, the benchmark for the following figures, I use grayscale to represent the data, assigning increasing intensity from most negative gravity residuals in black to most positive residuals in white (as labeled next to the colorbar). Then, I used terrain slope to create shading: the higher the slope, the darker the shading that is assigned, which results in a pseudo-3D display that is very effective (please refer to Visualization tips for geoscientists: Surfer, for an explanation of the method, and Visualization tips for geoscientists: Matlab for code).

Figure 3 – Grayscale benchmark
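As an aside, the slope-shading idea can be sketched in a few lines of Python (my own minimal version, with a placeholder surface; the actual figures were produced with the Matlab code from the posts linked above):

import numpy as np
import matplotlib.pyplot as plt

# placeholder surface standing in for the gravity residuals
yy, xx = np.mgrid[0:200, 0:200]
data = np.sin(xx / 20.0) + 0.5 * np.cos(yy / 15.0)

# slope = magnitude of the local gradient; the steeper the slope, the darker the shading
gy, gx = np.gradient(data)
slope = np.hypot(gx, gy)
shade = 1.0 - slope / slope.max()

# grayscale intensity from the data values, multiplied by the slope shading
gray = (data - data.min()) / np.ptp(data)
plt.imshow(gray * shade, cmap='gray')
plt.axis('off')
plt.show()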

In Figure 4 I color the pseudo-3D surface with the cubeYF rainbow. Using this color palette instead of grayscale allows viewers to appreciate smaller changes, more quickly assess differences, or conversely identify areas of similar anomaly, while at the same time preserving the pseudo-3D effect. Now compare Figure 4 with Figure 5, where we use the spectrum to color the surface: this palette introduces several artifacts (sharp edges and bands of constant hue) which confuse the display and interfere with the perception of pseudo-relief, all but eliminating the effect. For Figure 6 I used Matlab's default Jet color palette, which is better than the spectrum, and yet the relief effect is somewhat lost (due mainly to a sharp yellow edge and a cyan band).

Figure 4 – cube YF rainbow

Figure 5 – Industry spectrum

Figure 6 – Matlab Jet

It looks like both spectrum and jet are poor choices when used for color representation of a surface, with the new color palette a far superior alternative. In the CSEG convention paper mentioned above (available here) Steve and I went further by showing that the spectrum not only has these perceptual artifacts and edges, but is also very confusing for viewers with deficient color vision, a condition that occurs in about 8% of Caucasian males. We did that using computer software [2] to simulate how viewers with two types of deficient color vision, Deuteranopia and Tritanopia, would see the two colored surfaces, and we compared the results. In other words, we are now able to see the images as they would see them. Please refer to the paper for a full discussion of these simulations.

Here, in Figures 7 to 9, I show the Deuteranope simulations for cubeYF, spectrum, and jet, respectively. In all three simulations the hue discrimination has decreased, but while the spectrum and jet are now even more confusing, the cubeYF has preserved the relief effect.

Deuteranope Simulation of cube YF

Deuteranope Simulation of Industry spectrum

Deuteranope Simulation of Matlab Jet

That’s it for today. In my next post, to be published very shortly, you will get the palette, and a lot more.

References

A more perceptual color palette for structure maps, CSEG/CSPG 2012 convention, Calgary

How to assess a colourmap, in 52 Things You Should Know About Geophysics

Notes

[1] An alternative to the method I used would be to start directly in CIELab color space, and use some kind of spiral L* lightness profile programmatically. For example:

– Using 3D helical curves from: http://www.mathworks.com/matlabcentral/fileexchange/25177-3d-curves

– Using Archimedes spiral

– Expanding on code by Steve Eddins at Mathworks (A path through L*a*b* color space) in this article, one could create a spiral with cube-law lightness with something like:

%% this creates best-fit pure power law function 
%  Inspired by wikipedia - http://en.wikipedia.org/wiki/Lightness
l2=linspace(1,power(100,0.42),256); 
L2=(power(l2,1/0.42))'; 

%% this makes cielab real cube function spiral 
radius = 50; 
theta = linspace(0.6*pi, 2*pi, 256).'; 
a = radius * sin(theta); b = radius * cos(theta); 
Lab1 = [L2, a, b]; RGB_realcube=colorspace('RGB<-Lab',(Lab1));
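For readers working in Python rather than Matlab, a rough equivalent of the snippet above (my own translation, with skimage.color.lab2rgb standing in for the colorspace function) would be:

import numpy as np
from skimage import color

# cube-law-ish lightness profile (exponent 0.42, as in the Matlab snippet above)
l2 = np.linspace(1, 100 ** 0.42, 256)
L = l2 ** (1 / 0.42)

# spiral through the a*-b* plane while lightness increases
radius = 50
theta = np.linspace(0.6 * np.pi, 2 * np.pi, 256)
a = radius * np.sin(theta)
b = radius * np.cos(theta)

lab = np.stack([L, a, b], axis=-1).reshape(1, -1, 3)
rgb = color.lab2rgb(lab).reshape(-1, 3)  # 256 RGB triplets; out-of-gamut values are clipped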

[2] The simulations are created using ImageJ, an open source image manipulation program, and the Vischeck plug-in. I later discovered Dichromacy, another ImageJ plug-in for these simulations, which has the advantage of being open source. They can also be performed on the fly (no upload needed) using the online tool Color Oracle.

Related posts

Edge detection as image fidelity test

This post is a quick follow-up to Dithering, a very interesting post by Cris Luengo, developer of DIPimage, a free Matlab image analysis toolbox.

Dithering is a graphic method that arranges black and white pixels in an image with certain patterns, to make it appear as though there are many intermediate gray levels. It is used when working with limited palettes. In his post Cris compares several algorithms that perform dithering.

As I commented in the post, after reading it I thought of a way to quantify the effectiveness of the various methods in replicating the original image: we can use Canny and Sobel filters to detect edges on the dithered results, and on the original. I show some of these in the image matrix below:

Looking at these results I argued that the structure-aware dithering did a much better job at preserving the edges of the original, and the Canny and Sobel filters picked up on this (as do our eyes when we look at the results in the top row).
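For what it's worth, here is a minimal Python snippet showing how such a comparison could be set up with scikit-image (the file names are hypothetical, and the overlap score is just one crude way to put a number on it):

import numpy as np
from skimage import io, feature, filters

# stand-ins for the original image and one of the dithered results
original = io.imread('original.png', as_gray=True)
dithered = io.imread('dithered.png', as_gray=True)

# edge maps of both images
canny_orig = feature.canny(original, sigma=2)
canny_dith = feature.canny(dithered, sigma=2)
sobel_orig = filters.sobel(original)
sobel_dith = filters.sobel(dithered)

# crude fidelity score: fraction of the original's Canny edge pixels recovered
overlap = np.logical_and(canny_orig, canny_dith).sum() / canny_orig.sum()
print(f'Canny edge overlap: {overlap:.2f}')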

Visualization tips for geoscientists – series outline

Visualization tips for geoscientists: Surfer

Visualization tips for geoscientists: Matlab

Visualization tips for geoscientists: Matlab, part II

Visualization tips for geoscientists: Matlab, part III

Visualization tips for geoscientists: Matlab, part III

Introduction

Last weekend I had a few hours to play with but needed a short break from writing about color palettes, so I decided to go back and finish up (for now) this series on geoscience visualization in Matlab. In the first post of the series I expanded on work by Steve Eddins at Mathworks on overlaying images using influence maps and demonstrated how it could be used to enhance the display of a single geophysical dataset.

Using transparency to display multiple data sets: an example

At the end of the second post I promised I would go back and show an example of using transparency and influence maps for other tasks, like overlaying different attributes. Here's my favorite example, in Figure 1. The image is a map, in pastel colors, of the Bouguer gravity anomaly for the Southern Tuscany region of Italy, with three other layers superimposed using the techniques mentioned above.

It is beyond the objectives of this post to discuss gravity exploration methods at length or to attempt a full interpretation of the map. I will go back to it at a later time, as I am planning a full series on gravity exploration using this data set, but if you are burning to read more about gravity interpretation please check these excellent notes by Martin Unsworth, Professor of Physics in the Earth and Atmospheric Sciences department, University of Alberta, and note 4 at the end of this post. Otherwise, and for now, suffice it to say that warm colors (green to yellow to red) in the Bouguer gravity map indicate, relatively speaking, excess mass in the subsurface, and blue and purple indicate a deficit of mass in the subsurface.

The black and grey lines are lineaments extracted from derivatives of the Bouguer gravity data using two different methods [1]. The semitransparent, white-filled polygons show the location of some of the basement outcrops (the densest rocks in this area).

Lineaments extracted from gravity data can correspond to contacts between geological bodies of different density, so a correlation can be expected between basement outcrops and some of the lineaments, as the outcrops are often in lateral contact with much less dense rocks. This is often exploited in mineral exploration in areas such as this, where mineralization occurs at or near these contacts. As an example, I show in Figure 2 the occurrences (AGIP – RIMIN, unpublished industry report, 1989) of silicization (circles) and antimony deposits (triangles), superimposed on the distance from one of the sets of lineaments (warm colors indicate higher distance) from Figure 1.

That different methods give systematically shifted results is well known, and is often due to the trade-off between resolution and stability, whereby the more stable methods are less affected by noise but often produce smoother edges over deeper contacts, and their maxima may not correspond. This is in addition to the inherent ambiguity of gravity data, which cannot, by themselves, be interpreted uniquely. To establish which method might be more correct in this case (none is a silver bullet) I tried to calibrate the results using the basement outcrops (i.e. does either method more closely match the outcrop edges?). Having done that, I would have more confidence in making inferences on possible other contacts in the subsurface suggested by the lineaments. I would say the black lines do a better overall job in the East, the gray perhaps in the West. So perhaps I'm stuck? I will get back to this during my gravity series.

Figure 1

Figure 2

Matlab code

As usual I am happy to share the code I used to make the combined map of Figure 1. Since the data I use is in part from my unpublished thesis in Geology and in part from Michele di Filippo at the University of Rome, I am not able to share it, and you will have to use your own data, but the Matlab code is easily adapted. The code snippet below assumes you have a geophysical surface already imported in the workspace and stored in a variable called "dataI", as well as the outcrops in a variable called "basement", and the lineaments in "lnmnt1" and "lnmnt2". It also uses my cube1 color palette.

 
% part 1 - map gravity data
figure; imagesc(XI,YI,dataI); colormap(cube1); hold on;
%
% part 2 - dealing with basement overlay
white=cat(3, ones(size(basement)), ones(size(basement)),...
 ones(size(basement)));
ttt=imagesc(Xb,Yb,white); % plots white layer for basement
%
% part 3 - dealing with lineaments overlays
black=cat(3, zeros(size(lnmnt1)), zeros(size(lnmnt1)),...
 zeros(size(lnmnt1)));
grey=black+0.4;
basement_msk=basement.*0.6;
kkk=imagesc(XI,YI,black); % plots black layer for lineament 1
sss=imagesc(XI,YI,grey); % plots grey layer for lineament 2
hold off
%
% part 4 - set influence maps
set(ttt, 'AlphaData', basement_msk); % influence map for basement
set(kkk, 'AlphaData', lnmnt1); % influence map for lineament 1
set(sss, 'AlphaData', lnmnt2); % influence map for lineament 2
%
% making it pretty
axis equal
axis tight
axis off
set(gca,'YDir','normal');
set(gcf,'Position',[180 150 950 708]);
set(gcf,'OuterPosition',[176 146 958 790]);

Matlab code, explained

OK, let’s break it down starting from scratch. I want first to create a figure and display the gravity data, then hold it so I can overlay the other layers on top of it. I do this with these two commands:

figure;imagesc(XI,YI,dataI);

hold on;

The layer I want to overlay first is the one showing the basement outcrops. I make a white basement layer covering the full extent of the map, which is shown in Figure 3, below.

Figure 3

I create it and plot it with the commands:

white=cat(3, ones(size(basement)), ones(size(basement)), ones(size(basement)));

ttt=imagesc(Xb,Yb,white);

The handle ttt is to be used in combination with the basement influence map to produce the partly transparent basement overlay: remember that I wanted to display the outcrops in white color, but only partially opaque, so the colored gravity map can still be (slightly) seen underneath. I make the influence map, shown in Figure 4, with the command:

basement_msk=basement.*0.6;

Since the original binary variable "basement" had values of 1 for the outcrops and 0 elsewhere, with the command above I assign an opacity of 0.6 to the outcrops, which will be applied when the next command, below, is run, achieving the desired result.

set(ttt, 'AlphaData', basement_msk); % uses basement influence map

Figure 4

For the lineaments I do things in a similar way, except that I want those plotted with full opacity since they are only 1 pixel wide.

As an example I am showing in Figure 5 the black layer lineament 1 and in Figure 6 the influence map, which has values of 1 (full opacity) for the lineament and 0 (full transparency) for everywhere else.

Figure 5

Figure 6

Now a few extra lines to make things pretty, and this is what I get, shown below in Figure 7: not what I expected!

Figure 7

The problem is in these two commands:

white=cat(3, ones(size(basement)), ones(size(basement)), ones(size(basement)));

ttt=imagesc(Xb,Yb,white);

I am calling the layer white but really all I am telling Matlab is to create a layer with maximum intensity (1). But the preceding colormap(cube1) command assigned a salmon-red color to the maximum intensity in the figure, and so that is what you get for the basement overlay.

Again, to get the result I wanted, I had to come up with a trick like in the second post examples. This is the trick:

I create a new color palette with this command:

cube1edit=cube1; cube1edit(256,:)=1;  

The new color palette has its last RGB triplet defined as white, not salmon-red.

Then I replace this line:

figure; imagesc(XI,YI,dataI); colormap(cube1); hold on;

with the new line:

figure; imagesc(XI,YI,dataI, [15 45]); colormap (cube1edit); hold on;

The highest value in dataI is around 43. By stretching the color range from [15 43] to [15 45], thereby exceeding max(dataI), I ensure that white is used for the basement overlay but not in any part of the map where gravity is highest and there is no basement outcrop. In other words, white is assigned in the palette but reserved for the overlay.

Please let me know if that was clear. If it isn’t I will try to describe it better.

Notes

[1] One method is the total horizontal derivative. The other method is the hyperbolic tilt angle, using Matlab code by Cooper and Cowan (see reference below). This is how I produced the two overlays: first I calculated the total horizontal derivative and the tilt angle, then I found the maxima to use as the overlay layers. This is similar to Figure 3e in Cooper and Cowan, but I refined my maxima result by reducing them to 1-pixel-wide lines (using a thinning algorithm).
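As a rough illustration of the first of the two methods (my own snippet in Python; the actual overlays were produced in Matlab following Cooper and Cowan), the total horizontal derivative of a gridded field can be computed as:

import numpy as np

def total_horizontal_derivative(grav, dx=1.0, dy=1.0):
    # THD = sqrt((dg/dx)^2 + (dg/dy)^2); its ridges track edges between
    # bodies of different density
    gy, gx = np.gradient(grav, dy, dx)
    return np.hypot(gx, gy)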

Reference

Cooper, G.R.J., and Cowan, D.R. (2006). Enhancing potential field data using filters based on the local phase. Computers & Geosciences, 32, 1585–1591.

Related posts (MyCarta)

Visualization tips for geoscientists: Surfer

Visualization tips for geoscientists: Matlab

Visualization tips for geoscientists: Matlab, part II

Image Processing Tips for Geoscientists – part 1