sketch2model – linking edges with mathematical morphology

Introduction

As written by Elwyn in the first post of this series, sketch2model was conceived at the 2015 Calgary Geoscience Hackathon as a web and mobile app that would turn an image of a geological sketch into a geological model, and then use Agile Geoscience’s modelr.io to create a synthetic seismic model.

original project promo image

The sketch2model concept: modelling at the speed of imagination. Take a sketch (a), turn it into an earth model (b), create a forward seismic model (c). Our hack takes you from a to b.

One of the main tasks in sketch2model is to identify each and every geological body in a sketch as a closed polygon. As Elwyn wrote, "if the sketch were reproduced exactly as imagined, a segmentation function would do a good job. The trouble is that the sketch captured is rarely the same as the one intended – an artist may accidentally leave small gaps between sketch lines, or the sketch medium can cause unintentional effects (for example, whiteboard markers can erase a little when sketch lines cross, see example below). We applied some morphological filtering to compensate for the sketch imperfections."

Morphological filtering can compensate for imperfections in a sketch, as demonstrated in this example. The original sketch (left) was done with a marker on white board. Notice how the vertical stroke erased a small part of the horizontal one. The binarized version of the sketch (middle) shows an unintentional gap between the strokes, but morphological filtering successfully closes the small gap (right).


The cartoon below shows what the final output of sketch2model would be in the two cases from the example above (gap left open versus gap closed).

fill_before_after_closing

My objective with this post is to explain visually how we correct for some of these imperfections within sketch2model. I will focus on the use of morphological closing, which consists of applying, in sequence, a dilation and an erosion, the two fundamental morphological operations.

Quick mathematical morphology review

All morphological operations result from the interaction of an image with a structuring element (a kernel) smaller than the image and typically in the shape of a square, disk, or diamond. In most cases the image is binary, that is, pixels take a value of either 1, for the foreground objects, or 0, for the background. The structuring element operates on the foreground objects.

Morphological erosion is used to remove pixels on the foreground objects’ boundaries. How ‘deeply’ the boundaries are eroded depends on the size of the structuring element (and its shape, but in this discussion I will ignore the effect of changing the shape). This operation is in my mind analogous to peeling off a layer from an onion; the thickness of the layer is related to the structuring element size.

Twan Maintz, in his book Digital and Medical Image Processing, describes the interaction of image and structuring element during erosion this way: place the structuring element anywhere in the image; if it is fully contained in the foreground object (or in one of the objects), then the origin (central) pixel of the structuring element (and only that one) is part of the eroded output. The book has a great example on page 129.

Dilation does the opposite of erosion: it expands the object boundaries (adding pixels) by an amount that is again related to the size of the structuring element. To me this is analogous to adding a layer back onto the onion.

Again thanks to Maintz, the interaction of image and structuring element in dilation can be described intuitively: place the structuring element anywhere in the image; does it touch any of the foreground objects? If yes, then the origin of the structuring element is part of the dilated result. There is a great example on pages 127-128.

Closing is then for me akin to adding a layer to an onion (dilation) and then peeling it back off (erosion), but with the major caveat that some of the changes produced by the dilation are irreversible: background holes smaller than the structuring element that are filled by the dilation are not restored by the erosion. Similarly, lines in the input image separated by a number of pixels smaller than the size of the structuring element are linked by the dilation and not disconnected by the erosion, which is exactly what we wanted for sketch2model.
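To make these definitions concrete, here is a minimal scikit-image sketch (my own, not taken from the sketch2model code) that applies all three operations with a 3×3 square structuring element:

import numpy as np
from skimage.morphology import binary_closing, binary_dilation, binary_erosion, square

k = square(3)  # 3x3 structuring element

# erosion and dilation on a solid 4x4 object
obj = np.zeros((8, 8), dtype=bool)
obj[2:6, 2:6] = True
print(binary_erosion(obj, k).sum())   # 4: only a 2x2 core survives (one layer peeled off)
print(binary_dilation(obj, k).sum())  # 36: a 6x6 block (one layer added)

# closing fills a small background hole without changing the outer boundary
holed = obj.copy()
holed[3, 3] = False
print(np.array_equal(binary_closing(holed, k), obj))  # True: hole filled, boundary intact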

Closing demo

If you still need further explanation of these morphological operations, I’d recommend reading the sections on erosion, dilation, and closing in the ImageMagick user guide, and the examples on the Scikit-image website.

As discussed in the previous section, when applying closing to a binary image, the external points in any object in the input image will be left unchanged in the output, but holes will be filled, partially or completely, and disconnected objects like edges (or lines in sketches) can become connected.

We will now demonstrate it below with Python-made graphics but without code; however, you can grab the Jupyter notebook with the complete Python code on GitHub.

I will use this model binary image containing two 1-pixel wide lines. Think of them as lines in a sketch that should have been connected, but are not.

We will attempt to connect these lines using morphological closing with a disk-shaped structuring element of size 2. The result is plotted in the binary image below, showing that closing was successful.

But what would have happened with a smaller structuring element, or with a larger one? In the case of a disk of size 1, the closing magic did not happen:
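Although the demo in this post is purely graphical, both cases take only a few lines of scikit-image to reproduce. Here is a minimal sketch; the two-line geometry below is my hypothetical stand-in, not necessarily identical to the model image in the notebook:

import numpy as np
from skimage.measure import label
from skimage.morphology import binary_closing, disk

# two 1-pixel-wide lines that should meet, separated by a 2-pixel gap
img = np.zeros((9, 11), dtype=bool)
img[4, 0:4] = True   # horizontal line, stopping short of...
img[:, 6] = True     # ...the vertical line

print(label(img, connectivity=2).max())                           # 2: separate objects
print(label(binary_closing(img, disk(1)), connectivity=2).max())  # 2: gap not closed
print(label(binary_closing(img, disk(2)), connectivity=2).max())  # 1: lines linked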

Observing this result, one would increase the size of the structuring element. However, as Elwyn will show in the next post, too big a structuring element would also have detrimental effects, causing subsequent operations to introduce significant artifacts in the final results. This has broader implications for our sketch2model app: how do we automatically select (i.e. without hardcoding it into the program) the appropriate structuring element size? Again, Elwyn will answer that question; in the last section I want to concentrate on explaining how the closing machinery works in this case.

In the next figure I have broken down the closing operation into its component dilation and erosion, and plotted them step by step to show what happens:

non_closed_break_red_two

So we see that the edges do get linked by the dilation, but by only one pixel, which the following erosion then removes.

And now let’s break down the closing with a disk of size 2 into its components. This is equivalent to applying two consecutive passes of dilation with a disk of size 1, followed by two consecutive passes of erosion with a disk of size 1, as demonstrated in the next figure (by the way, looking carefully at the second panel above, we could have predicted that the dilation with a disk of size 2 would result in a link 3 pixels wide instead of 1 pixel wide, which the subsequent erosion will not disconnect).

closed_break_red
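In code, the equivalence can be checked directly (again using the hypothetical two-line image from the earlier sketch); it holds because dilating a disk of size 1 by itself yields a disk of size 2:

import numpy as np
from skimage.morphology import binary_closing, binary_dilation, binary_erosion, disk

img = np.zeros((9, 11), dtype=bool)
img[4, 0:4] = True
img[:, 6] = True

# two passes of dilation, then two passes of erosion, all with disk(1)...
step = binary_dilation(binary_dilation(img, disk(1)), disk(1))
step = binary_erosion(binary_erosion(step, disk(1)), disk(1))

# ...give the same result as a single closing with disk(2)
print(np.array_equal(step, binary_closing(img, disk(2))))  # True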

Below is an animated GIF version of this demo, cycling through the above steps; you can also run it yourself by downloading and running the Jupyter notebook on GitHub.

closing_demo

Additional resources

Closing Jupyter notebook with complete Python code on GitHub

sketch2model Jupyter notebook with complete Python code on GitHub 

More reading on Closing, with examples

Related Posts

sketch2model (2015 Geoscience Hackathon, Calgary)

sketch2model – sketch image enhancements

Mapping and validating geophysical lineaments with Python

sketch2model – sketch image enhancements

This is the second post in a series of collaborative articles about sketch2model, a project from the 2015 Calgary Geoscience Hackathon organized by Agile Geoscience.

The first post was written by Elwyn Galloway and published both on his Scibbatical blog and here on MyCarta. In that article Elwyn mentioned the need for an adaptive image conditioning workflow for the binarization of photos of geological sketches. Binarization is the process of converting a natural image to a binary image (check this simple but awesome interactive demonstration of binarization), which in our case is necessary to separate the sketch from the background.

The following is a demonstration of the preliminary image processing operations applied to the input photo when sketch2model is run. The full code listing of this demonstration is available as a Jupyter notebook on GitHub. Also on GitHub you can find a Jupyter notebook with the fully documented version of sketch2model.

First we import one of the photos with sketches and convert it to a grayscale image.

from skimage import io, color

im = io.imread('paper_breaks.png')
im = color.rgb2gray(im[0:-1:2,0:-1:2])  # downsample by 2 in each direction, then grayscale

paper_breaks_post

Next we enhance the grayscale image with a couple of cascaded processes. But before we do that, let’s graph the intensity values across the image to understand the degree of contrast between sketch edges and background, which will ultimately determine our success in separating them. We show this in the figure below, on the left, for one column of pixels (y direction); the black line across the input image on the right shows the location of the selected column. It is fairly obvious from the plot on the left that the intensity of the background is not uniform, due to variable light conditions when the photo was taken, and that towards the right of the plot (i.e. the bottom of the photo) it gets closer to that of the edges. In some images it might even become lower than the intensity of the edges. This highlights the need for (preemptively) applying the enhancements illustrated in the remainder of the post.

section_raw
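As an aside, a profile plot like this takes only a couple of lines of Matplotlib, continuing from the im array defined above (the column index here is my own arbitrary choice; the post does not say which column was used):

import matplotlib.pyplot as plt

col = im.shape[1] // 2  # hypothetical column index
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(im[:, col])             # intensity down the column; edges show up as dips
ax1.set_xlabel('row (y direction)')
ax1.set_ylabel('intensity')
ax2.imshow(im, cmap='gray')
ax2.axvline(col, color='k')      # mark the selected column on the image
plt.show()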

The first enhancement is called a compressor, or limiter. I read many years ago that it is used in electronics to find hard edges in data: the idea is to square each element in the data (an image, or any other type of data), smooth the result (enough to remove high-frequency variations but not so much as to eliminate variability), take the square root, and finally divide each element in the input by the corresponding element of the square root result.

I experimented with this method (at the time using Matlab and its Image Processing Toolbox) on the same gravity dataset from my 2015 geophysical tutorial in The Leading Edge (see the post Mapping and validating geophysical lineaments with Python). An example of one such experiment is shown in the figure below, where: the top left map is the Bouguer data; the top centre map is the squared data; the top right is the result of a Gaussian blur; the bottom left is the result of the square root; and the bottom centre is the final output, where the hardest edges in the original data have been enhanced.

Sketch2model_compressor_campi

The most important parameter in this process is the choice of the smoothing, or blur: with a Gaussian kernel of a different size, more subtle edges are enhanced, as seen in the bottom right map (these are perhaps acquisition-related gridding artifacts).

In our sketch2model implementation the size of the Gaussian kernel is hardcoded; it was chosen by trial and error on multiple photos of sketches and yielded optimal results in the great majority of them. We were planning to make the kernel size depend on the size of the input image, but left the implementation on our ‘future work’ list.
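For the record, that future-work idea could be as simple as the hypothetical sketch below (not part of sketch2model; the constants would need testing against a suite of photos):

# scale the Gaussian kernel with the smaller image dimension instead of hardcoding it
sigma = max(5, min(im.shape) // 50)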

Here’s the compressor code from sketch2model:

import numpy as np
from scipy import ndimage

# compressor or limiter (electronics): find hard edges in data with long
# wavelength variations in amplitude
# step 1: square each element in the image to obtain the power function
sqr = im**2
# step 2: gaussian blur of the squared image (the kernel size is hardcoded)
flt2 = ndimage.gaussian_filter(sqr, 21)
# step 3: divide the intensity of each original pixel by the square root
# of the smoothed square
cmprs = im/(np.sqrt(flt2))

and a plot of the result (same column of pixels as in the previous one):

section_compressed

From the plot above we see that the background intensity is now uniform and the contrast has been improved. We can maximize it with contrast stretching, as below:

import numpy as np
from skimage import exposure

# contrast stretching between the 2nd and 98th intensity percentiles
p2, p98 = np.percentile(cmprs, (2, 98))
rescale = exposure.rescale_intensity(cmprs, in_range=(p2, p98))

section_stretched

We now have ideal contrast between edges and background, and can get a binary image with the desired sketch edges using a scalar threshold:

# binarize with a scalar threshold (rescale is already grayscale) and
# invert, so that the edges become the foreground
binary = ~(rescale > 0.5)

section_binary

Bingo!

sketch2model

This guest post (first published here) is by Elwyn Galloway, author of Scibbatical on WordPress. It is the first in our series of collaborative articles about sketch2model, a project from the 2015 Calgary Geoscience Hackathon organized by Agile Geoscience. Happy reading.

DSC_0568

Collaboration in action. Evan, Matteo, and Elwyn (foreground, L to R) work on sketch2model at the 2015 Calgary Geoscience Hackathon. Photo courtesy of Penny Colton.

Welcome to an epic blog crossover event. Two authors collaborating to tell a single story over the course of several articles.

We’ve each mentioned the sketch2model project on our respective blogs, MyCarta and scibbatical, without giving much detail about it. Apologies if you’ve been waiting anxiously for more. Through the next while, you’ll get to know sketch2model as well as we do.

The sketch2model team came together at the 2015 Geoscience Hackathon (Calgary), hosted by Agile Geoscience. Elwyn and Evan Saltman (epsalt on twitter and GitHub) knew each other from a previous employer, but neither had met Matteo before. All were intrigued by the project idea, and the individual skill sets were diverse enough to combine into a well-rounded group. Ben Bougher, part of the Agile Geoscience team, assisted with the original web interface at the hackathon. Agile’s take on this hackathon can be found on their blog.

Conception

The idea behind sketch2model is that a user should be able to easily create forward seismic models. Modelling at the speed of imagination, allowing a seamless transition from idea to synthetic seismic section. It should happen quickly enough to be incorporated into a conversation. It should happen where collaboration happens.

original project promo image

The sketch2model concept: modelling at the speed of imagination. Take a sketch (a), turn it into an earth model (b), create a forward seismic model (c). Our hack takes you from a to b.

Geophysicists like to model wedges, and for good reasons. However, wedge logic can get lost on colleagues. It may not effectively demonstrate the capability of seismic data in a given situation. The idea is not to supplant that kind of modeling, but to enable a new, lighter kind of modeling. Modeling that can easily produce results for twelve different depositional scenarios as quickly as they can be sketched on a whiteboard.

The Hack

Building something mobile to turn a sketch into a synthetic seismic section is a pretty tall order for a weekend. We decided to take a shortcut by leveraging an existing project: Agile’s online seismic modelling package, modelr. The fact that modelr works through any web browser (including a smartphone) kept things mobile. In addition, modelr’s existing functionality allows a user to upload a png image and use it as a rock property model. We chose to use a web API to interface our code with the web application (as a bonus, our approach conveniently fit with the hackathon’s theme of Web). Using modelr’s capabilities, our hack was left with the task of turning a photo of a sketched geologic section into a png image where each geologic body is identified as a different color. An image processing project!

Agile is a strong proponent of Python in geophysics (for reasons nicely articulated in their blog post), and the team was familiar with the language to one extent or another. There was no question that it was the language of choice for this project. And no regrets!

We aimed to create an algorithm robust enough to handle any image of anything a user might sketch while accurately reproducing their intent. Marker on whiteboard presents different challenges than pencil on paper. Light conditions can be highly variable. Sketches can be simple or complex, tidy or messy. When a user leaves a small gap between two lines of the sketch, should the algorithm take the sketch as-is and interpret a single body? Or fill the small gap and interpret two separate bodies?

composite_examples

Our algorithm needs to be robust enough to handle a variety of source images: simple, complex, pencil, marker, paper, white board (check out the glare on the bottom left image). These are some of the test images we used.

Matteo has used image processing for geoscience before, so he landed on an approach for our hack almost instantly: binarize the image to distinguish sketch from background (turn the color image into a binary image via thresholding); identify and segregate geobodies; create an output image with each body colored uniquely.

binary_example

Taking the image of the original sketch (left) and creating a binary image (right) is an integral part of the sketch2model process.

Python has functions to binarize a color image, but for our applications, the results were very inconsistent. We needed a tool that would work for a variety of media in various lighting conditions. Fortunately, Matteo had some tricks up his sleeve to precondition the images before binarization. We landed on a robust flow that can binarize whatever we throw at it. Matteo will be crafting a blog post on this topic to explain what we’ve implemented.

Once the image is binarized, each geological body must be automatically identified as a closed polygon. If the sketch were reproduced exactly as imagined, a segmentation function would do a good job. The trouble is that the sketch captured is rarely the same as the one intended — an artist may accidentally leave small gaps between sketch lines, or the sketch medium can cause unintentional effects (for example, whiteboard markers can erase a little when sketch lines cross, see example below). We applied some morphological filtering to compensate for the sketch imperfections. If applied too liberally, this type of filtering causes unwanted side effects. Elwyn will explore how we struck a balance between filling unintentional gaps and accurate sketch reproduction in an upcoming blog post.

example_of_morph_filtering

Morphological filtering can compensate for imperfections in a sketch, as demonstrated in this example. The original sketch (left) was done with a marker on white board. Notice how the vertical stroke erased a small part of the horizontal one. The binarized version of the sketch (middle) shows an unintentional gap between the strokes, but morphological filtering successfully closes the small gap (right).

Compared to the binarization and segmentation, generating the output is a snap. With this final step, we’ve transformed a sketch into a png image where each geologic body is a different color. It’s ready to become a synthetic seismic section in modelr.
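Putting the three steps together, a stripped-down sketch of the flow might look like the following. This is a simplification for illustration, not the actual code in the sketch2model repository (it skips the image conditioning and morphological filtering discussed above, uses a fixed threshold, and the filenames are placeholders):

import numpy as np
from skimage import color, io, measure

im = color.rgb2gray(io.imread('sketch.png'))  # placeholder input filename
binary = im < 0.5                  # 1. binarize: dark strokes become the foreground
bodies = measure.label(~binary)    # 2. identify geobodies as connected background regions
rgb = color.label2rgb(bodies)      # 3. color each body uniquely
io.imsave('model.png', (rgb * 255).astype(np.uint8))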

Into the Wild

“This is so cool. Draw something on a whiteboard and have a synthetic seismogram right on your iPad five seconds later. I mean, that’s magical.”

Sketch2model was a working prototype by the end of the hackathon. It wasn’t the most robust algorithm, but it worked on a good proportion of our test images. The results were promising enough to continue development after the hackathon. Evidently, we weren’t the only ones interested in further development because sketch2model came up on the February 17th episode of Undersampled Radio. Host Matt Hall: “This is so cool. Draw something on a whiteboard and have a synthetic seismogram right on your iPad five seconds later. I mean, that’s magical.”

Since the hackathon, the algorithm and web interface have progressed to the point that you can use it on your own images at sketch2model.com. To integrate this functionality directly into the forward modelling process, sketch2model will become an option in modelr. The team has made this an open-source project, so you’ll also find it on GitHub. Check out the sketch2model repository if you’re interested in the nuts and bolts of the algorithm. Information posted on these sites is scant right now, but we are working to add more information and documentation.

Sketch2model is designed to enable a new kind of collaboration and creativity in subsurface modelling. By applying image processing techniques, our team built a path to an unconventional kind of forward seismic modelling. Development has progressed to the point that we’ve released it into the wild to see how you’ll use it.

Mapping and validating geophysical lineaments with Python

In Visualization tips for geoscientists: MATLAB, Part III I showed there’s a qualitative correlation between occurrences of antimony mineralization in the southern Tuscany mining district and the distance from lineaments derived from the total horizontal derivative (also called maximum horizontal gradient).

Let’s take a look at it below (distance from lineaments increases as the color goes from blue to green, yellow, and then red).

regio_distance_mineral_occurrences

However, in a different map in the same post I showed that lineaments derived using the maxima of the hyperbolic tilt angle (Cooper and Cowan, 2006, Enhancing potential field data using filters based on the local phase) are offset systematically from those derived using the total horizontal derivative.

Let’s take a look at it below: in this case Bouguer gravity values increase as the color goes from blue to green, yellow, and then red; white polygons are basement outcrops.

The lineaments from the total horizontal derivative are in black, those from the maxima of hyperbolic tilt angle are in gray. Which lineaments should be used?

The ideal way to map the location of density contrast edges (as a proxy for geological contacts) would be to use gravity forward modelling, or even 3D gravity inversion, ideally constrained by all available independent data sources (magnetic or induced-polarization profiles, exploratory drilling data, reflection seismic interpretations, and so on).

The next best approach is to map edges using a number of independent gravity data enhancements, and then only use those that collocate.

Cooper and Cowan (in the same 2006 paper) demonstrate that no single edge-detector method is a perfect geologic-contact mapper. Citing Pilkington and Keating (2004, Contact mapping from gridded magnetic data – A comparison of techniques) they conclude that the best approach is to use "collocated solutions from different methods providing increased confidence in the reliability of a given contact location".

I show an example of such a workflow in the image below. In the first column from the left is a map of the residual Bouguer gravity from a smaller area of interest in the southern Tuscany mining district (where measurements were made on a denser station grid). In the second column from the left are the lineaments extracted using three different (and independent) derivative-based data enhancements followed by skeletonization. The same lineaments are superimposed on the original data in the third column from the left. Finally, in the last column, the lineaments are combined into a single collocation map to increase confidence in the edge locations (I applied a mask so as to display edges only where at least two methods collocate).

collocation_teaser

If you want to learn more about this method, please read my note in the Geophysical tutorial column of The Leading Edge, which is available with open access here.
To run the open source Python code, download the iPython/Jupyter Notebook from GitHub.

With this notebook you will be able to:

1) create a full suite of derivative-based enhanced gravity maps;

2) extract and refine lineaments to map edges;

3) create a collocation map.
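As an illustration of the last step, the collocation logic reduces to a few NumPy operations. This is a minimal sketch, assuming e1, e2, and e3 are binary lineament maps (one per enhancement method), not the notebook's actual code:

import numpy as np

# count, at each pixel, how many of the three methods detected an edge
counts = e1.astype(int) + e2.astype(int) + e3.astype(int)

# mask out edges detected by fewer than two methods
collocated = np.where(counts >= 2, counts, 0)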

These techniques can easily be adapted to collocate lineaments derived from seismic data, with which the same derivative-based enhancements are showing promising results (Russell and Ribordy, 2014, New edge detection methods for seismic interpretation).

Reinventing the color wheel – part 2

In the first post of this series I argued that we should not build colormaps for azimuth (or phase) data by interpolating linearly between fully saturated hues in RGB or HSL space.

A first step towards the ideal colormap for azimuth data would be to interpolate between isoluminant colours instead. Kindlmann et al. (2002) published isoluminant RGB values for red, yellow, green, cyan, blue, and magenta based on a user study. The code in the next block shows how to interpolate between those published colours to get 256-sample R, G, and B arrays (with magenta repeated at both ends), which can then be combined into an isoluminant colormap for azimuth data.

00 import numpy as np
01 r = np.array([0.718, 0.847, 0.527, 0.000, 0.000, 0.316, 0.718])
02 g = np.array([0.000, 0.057, 0.527, 0.592, 0.559, 0.316, 0.000])
03 b = np.array([0.718, 0.057, 0.000, 0.000, 0.559, 0.991, 0.718])
04 x = np.linspace(0,256,7)
05 xnew = np.arange(256)
06 r256 = np.interp(xnew, x, r)
07 g256 = np.interp(xnew, x, g)
08 b256 = np.interp(xnew, x, b)

This is a good example, in general, of how to interpolate one or more sequences of values to a finer sampling using the interp function from the NumPy library. In line 04 we define 7 evenly spaced points between 0 and 256; these will be the sample coordinates for the r, g, and b colours created in lines 01-03. In line 05 we create the new coordinates at which we will interpolate the r, g, and b values in lines 06-08 (all integers between 0 and 255). The full code will come in the notebook accompanying the last post in this series.
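For completeness, the interpolated channels can be stacked into a Matplotlib colormap object, for example like this (a minimal sketch; the notebook may do it differently):

from matplotlib.colors import ListedColormap

isolum = ListedColormap(np.column_stack((r256, g256, b256)), name='isoluminant_azimuth')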

This new colormap is used in the bottom map of the figure below, whereas in the top map we used a conventional HSV azimuth colormap (both maps show the dip azimuth calculated on the Penobscot horizon). The differences are subtle, but with the isoluminant colormap we are guaranteed there are no perceptual artifacts due to the random variations in lightness of the fully saturated HSV colors.

Azimuth_compare

Another possible strategy to create a perceptual colormap for azimuth data would be to set lightness and chroma to constant values in LCH space and interpolate between hues. This is the Matlab colormap I previously created, and shown in Figure 4 of New Matlab isoluminant colormap for azimuth data. In the next post, I will show how to do this in Python.

Read more on colors and seismic data

The last two posts on Agile show you how to corender seismic amplitude and continuity from a time slice using a 2D colormap, and then how to corender 3 attributes from a horizon slice.

Reference

Kindlmann, G., Reinhard, E., and Creem, S. (2002). Face-based Luminance Matching for Perceptual Colormap Generation. Proceedings of the IEEE Conference on Visualization.

Spectral lightness rainbow colormap

Spectral lightness rainbow

Quick post to share my replica of Ethan Montag’s Spectral lightness colormap from this paper. My version has a linear lightness profile (Figure 1) that increases monotonically from 0 (black) to 100 (white), while hue cycles from 180 to 360 degrees and then wraps around and continues from 0 to 270.

You can download the colormap files (comma separated in MS Word .doc format) with RGB values in 0-1 range and 0-255 range.

L-plot

Figure 1

Spectral-L

Figure 2

Matlab code

To run this code you will need Colorspace, a free function from Matlab File Exchange, for the color space transformations.

%% generate Chroma, Lightness, Hue components
% hue: 103 samples from 180 to 360 degrees, then wrap around and continue
% from 0 to 270 (linspace floors 60*2.56 to 153 samples; 256 samples total)
h2=linspace(0,270,60*2.56);
h1=linspace(180,360,103);
h=horzcat(h1,h2);
c=ones(1,256)*25;       % constant chroma of 25
l=linspace(0,100,256);  % lightness increases linearly from 0 to 100
%% merge together
lch = vertcat(l,c,h)';
%% convert to RGB
rgb = colorspace('LCH->rgb',lch);
%% Plot color-coded lightness profile
figure;
h=colormapline(1:1:256,lch(:,1),[],rgb);
set(h,'linewidth',2.8);
title('Montag Spectral L* lightness plot ','Color','k','FontSize',12,'FontWeight','demi');
%%  Pyramid test data
PY=zeros(241,241);
for i = 1:241
    temp=(0:1:i-1);
      PY(i,1:i)=temp(:);
end
test=PY.';
test=test-1;
test(test==-1)=0;
test1=test([2:1:end,1],:);
PY1=PY+test1;
PY2=fliplr(PY1);
PY3= flipud(PY1);
PY4=fliplr(PY3);
GIZA=[PY1,PY2;PY3,PY4].*2;
x=linspace(1,756,size(GIZA,1));
y=x;
[X,Y]=meshgrid(x,y);
clear i test test1 PY PY1 PY2 PY3 PY4 temp;

%% display Pyramid surface
fig1=figure;
surf(X,Y,GIZA,GIZA,'FaceColor','interp','EdgeColor','none');
view(-35,70);
colormap(rgb);
axis off;
grid off;
colorbar;
% set(fig1,'Position',[720 400 950 708]);
% set(fig1,'OuterPosition',[716 396 958 790]);
title('Montag Spectral L*','Color','k','FontSize',12,'FontWeight','demi');

Acknowledgements

The coloured lightness profiles were made using the Colormapline submission from the Matlab File Exchange.

Moiré Patterns

Moiré pattern

Some time ago I reblogged a post from El Ojo Inoportuno showing a Moiré pattern that resulted from taking a photo of a circular pattern of (beautiful) tiles. This phenomenon is caused by undersampling and is also called space aliasing. There’s a very good explanation of space aliasing, and another stunning Moiré example, in Agile Geoscience’s post N is for Nyquist.

Creating Moiré patterns

One way to get a Moiré pattern is to superimpose two identical, transparent line gratings and rotate one by an angle. You can see an animation of this on Wolfram MathWorld here; notice that the pattern varies with the angle. On the same page there’s also an example of Moiré patterns generated by plotting series of curves on a computer screen, which is very similar to taking the photo of circular tiles shown in the Ojo Inoportuno photo. Again, the interference is caused by representing circles with a finite-size pixel grid. If you are interested, you can experiment with these effects and many more by downloading templates from this site. Figure 1 shows my own Moiré from circular patterns.

circ2

Figure 1
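If you’d rather generate one programmatically, below is a minimal Python sketch of the rotated-gratings recipe (my own illustration, unrelated to the templates above):

import numpy as np
import matplotlib.pyplot as plt
from scipy.ndimage import rotate

# a vertical line grating with a 6-pixel period...
grating = np.tile((np.arange(600) % 6 < 3).astype(float), (600, 1))
# ...superimposed (multiplied) with a copy of itself rotated by 5 degrees
rotated = rotate(grating, angle=5, reshape=False, order=1)

plt.imshow(grating * rotated, cmap='gray')
plt.axis('off')
plt.show()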

 

There is a program for interactive Moiré pattern experiments called iMoiré.

Another way to get a Moiré pattern is to scan a picture printed with halftone. There’s a simple explanation of this scanning-generated interference here. Again this is a matter of aliasing, or undersampling. Here’s a good example:

Figure 2

The original image is a lovely watercolor by Ettore Roesler Franz showing medieval houses along the Tiber river in Rome. The Moiré pattern results from scanning the watercolor from one of the book collections (the image was posted on Flickr here).

How to remove Moiré pattern from digital images

For a quick solution, there’s a good article with detailed instructions on how to remove Moiré patterns in Photoshop, Paint Shop Pro, etcetera. For a more advanced workflow, there’s an excellent top hat filter for Photoshop included in Reindeer Graphic’s FoveaPro plugin. In Figure 3, I created a sort of pictorial chart of this workflow using low-resolution copies of examples in The Image Processing Cookbook, by John C. Russ.

top_hat1

Figure 3

 

In future posts I plan to show how to remove Moiré patterns from images with open-source Python code, and then to extend the workflow to the removal (or attenuation) of acquisition footprint in seismic data, which has a very similar appearance in the 2D Fourier domain and can be filtered with very similar techniques.
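As a teaser of the idea, here is a rough sketch (assuming img holds a grayscale image as a NumPy array): the interference shows up as isolated peaks away from the centre of the 2D amplitude spectrum, and that is where it can be attenuated before inverting the transform.

import numpy as np
import matplotlib.pyplot as plt

# centre the zero frequency and display the log amplitude spectrum; a Moiré
# pattern appears as bright, isolated, off-centre peaks
spectrum = np.fft.fftshift(np.fft.fft2(img))
plt.imshow(np.log1p(np.abs(spectrum)), cmap='gray')
plt.show()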

 

 

Colormap compromise

At the end of the series The rainbow is dead…long live the rainbow! I introduced CubicYF, a rainbow-like color palette with a slightly compressive, monotonically increasing lightness profile. The red color is missing from the palette because people with deuteranopia, the most common type of color vision deficiency, confuse red and green, so I took one of the two out. An alternative solution to taking the red out is to have a yellow-like color of lower lightness followed by a red-like color of higher lightness, as in the LinearL palette described in The rainbow is dead…long live the rainbow! – Part 5. A third solution, which is a compromise in perceptual terms, is CubicL (sometimes I have called it cube1), the color palette I used with the gravity data from my geology degree thesis in the series Visualization tips for geoscientists. An example map from that series, in which I used CubicL, is in Figure 1 below:

data cube1_final_shading_slope

Figure 1

To generate CubicL I used a compressive function for the increase in lightness very much like the one for CubicYF, except that in this case I allowed lightness to decrease from about 90 in the yellow portion of the colormap to about 80 to get to the red. As a result, there is an inversion in the lightness trend, indicated by the arrow in the bottom row of Figure 2. However, I believe this is an acceptable compromise, and one that is pleasing for people accustomed to the rainbow (it has red), because the inversion is smooth, with a gentle downward-facing concavity. I also believe this colormap is still a perceptual improvement over the rainbow or spectrum so commonly used in seismic visualization, which typically have at least 3 sharp lightness inversions (indicated by arrows in the top row of Figure 2 below).

spectrum_vs_cubeY

Figure 2
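One way to check any colormap for such inversions is to plot its CIELAB lightness directly, as in this minimal sketch (using a stock rainbow as a stand-in):

import numpy as np
import matplotlib.pyplot as plt
from skimage import color

rgb = plt.get_cmap('jet')(np.linspace(0, 1, 256))[:, :3]  # stand-in rainbow colormap
L = color.rgb2lab(rgb.reshape(1, -1, 3))[0, :, 0]         # L* for each colormap sample
plt.plot(L)
plt.xlabel('colormap sample')
plt.ylabel('L*')
plt.show()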

CubicL is one of the colormaps available in my Matlab File Exchange function Perceptually Improved Colormaps.