In part 1 of this short series I demonstrated how to detect the portion of an image occupied by the seismic section (Figure 1).
Figure 1
The result was a single binary image with the white object representing the pixels occupied by the seismic section (Figure 2).
Figure 2
You can download from GitHub all the tools for the automated workflow (including both part 1 and part 2, and some of the optional features outlined in the introduction) in the module mycarta.py, as well as an example Jupyter Notebook showing how to run it.
Next I want to use this binary object to derive a transformation function that rectifies the seismic section in the input image to a rectangle.
The first step is to detect the contour of the object. Notice that, because we used morphological operations, it is not a perfect quadrilateral: it has rounded corners and some of the sides are bent. The second step, therefore, is to approximate the contour with a polygon, with enough tolerance to ensure it has 4 sides only (this took some trial and error, but a tolerance of 25 turned out to be a good value across all the test images I tried).
In practice, the two steps are performed together using the functions find_contours (there is only one contour to find, really) and approximate_polygon from the skimage.measure module, as below:
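A minimal sketch of those two calls (assuming the binary object from part 1 is stored in a variable called binary_object; that variable name and the 0.5 contour level are my assumptions):

from skimage import measure

# find the contour of the (single) white object; 0.5 is the
# conventional level for a binary image
contour = measure.find_contours(binary_object, 0.5)[0]

# approximate the contour with a polygon; a tolerance of 25 reliably
# yielded a 4-sided polygon on the test images
coords = measure.approximate_polygon(contour, tolerance=25)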
The variable coords contains the coordinates of the corner points of the polygon (the first point is repeated last to close the polygon), which in Figure 3 I plotted superimposed on the input binary object.
Figure 3 – approximated polygon
A problem with the output of approximate_polygon is that the points are not ordered; to solve this I adapted a function from a Stack Overflow answer to sort them based on the angle from their centroid:
import numpy as np

def ordered(points):
    # sort corner points counter-clockwise by angle from the centroid
    x = points[:, 0]
    y = points[:, 1]
    cx = np.mean(x)
    cy = np.mean(y)
    a = np.arctan2(y - cy, x - cx)
    order = a.ravel().argsort()
    x = x[order]
    y = y[order]
    return np.vstack([x, y])
I call the function as below, dropping the last point in the contour (which is a repetition of the first):
sortedCoords = ordered(coords[:-1]).T
I can then plot them using colors in a predefined order to convince myself they indeed are sorted:
Figure 4 – corners sorted in counter-clockwise order
The next bit of code may seem complicated but it is not. With the coordinates of the corners known, and their order as well, I can calculate the largest width and height of the input seismic section, and use them to define the size of the registered output section, which is to be of rectangular shape:
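Along these lines (a sketch only: the corner-to-row mapping in sortedCoords is my assumption and should be checked against Figure 4):

import numpy as np

# unpack the four sorted corners; the counter-clockwise order
# (top-left, bottom-left, bottom-right, top-right) is an assumption
(tl, bl, br, tr) = sortedCoords

# largest width: the longer of the top and bottom sides
widthA = np.hypot(br[0] - bl[0], br[1] - bl[1])
widthB = np.hypot(tr[0] - tl[0], tr[1] - tl[1])
maxWidth = int(max(widthA, widthB))

# largest height: the longer of the left and right sides
heightA = np.hypot(tr[0] - br[0], tr[1] - br[1])
heightB = np.hypot(tl[0] - bl[0], tl[1] - bl[1])
maxHeight = int(max(heightA, heightB))

# corners of the rectified output rectangle, in the same order
dst = np.array([[0, 0], [0, maxHeight - 1],
                [maxWidth - 1, maxHeight - 1], [maxWidth - 1, 0]])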
Now I have everything I need to rectify the seismic section in the input image: it is warped using homologous points (the two sets of four corners) and a transformation function.
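With scikit-image, for example, this can be sketched as follows (using the variables from the previous snippet; image is assumed to hold the input image):

from skimage import transform

# note: skimage transforms work in (x, y) = (col, row) coordinates,
# so the corner arrays may need their columns swapped first
tform = transform.ProjectiveTransform()
tform.estimate(dst, sortedCoords)

# warp expects the inverse map, which is what tform provides here:
# it maps output (rectangle) coordinates to input (section) coordinates
rectified = transform.warp(image, tform, output_shape=(maxHeight, maxWidth))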
As anticipated in the introductory post of this short series I am going to demonstrate how to automatically detect where a seismic section is located in an image (be it a picture taken from your wall, or a screen capture from a research paper), rectify any distortions that might be present, and remove all sorts of annotations and trivia around and inside the section.
You can download from GitHub all the tools for the automated workflow (including both part 1 and part 2, and some of the optional features outlined in the introduction) in the module mycarta.py, as well as an example Jupyter Notebook showing how to run it.
In this part one I will be focusing on image preparation and enhancement, and on the automatic detection of the seismic section (all done using functions from numpy, scipy, and scikit-image). To do that, first I convert the input image (Figure 1) containing the seismic section to grayscale, and then enhance it by increasing the image contrast (Figure 2).
Figure 1 – input image
Figure 2 – grayscale image
All it takes to do that is three lines of code as follows:
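Those three lines might look like this (a sketch: the file name and the 2nd/98th percentile stretch limits are my assumptions):

import numpy as np
from skimage import io, color, exposure

# convert to grayscale, then stretch the contrast between the
# 2nd and 98th intensity percentiles
im = color.rgb2gray(io.imread('input_image.png'))
p2, p98 = np.percentile(im, (2, 98))
im = exposure.rescale_intensity(im, in_range=(p2, p98))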
For a good visual intuition of what actually is happening during the contrast stretching, check my post sketch2model – sketch image enhancements: in there I show intensity profiles taken across the same image before and after the process.
Finding the seismic section in this image involves four steps:
converting the grayscale image to binary with a threshold (in this example a global threshold with the Otsu method; see the sketch after this list)
finding and retaining only the largest object in the binary image (heuristically assumed to be the seismic section)
filling its holes
applying morphological operations to remove minutiae (tick marks and labels)
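For the first of these steps, a minimal sketch (variable names chosen to match the labeling code that follows; whether the section comes out as the white object may require flipping the comparison, depending on the input):

from skimage.filters import threshold_otsu

# global Otsu threshold; the comparison direction depends on whether
# the section is darker or lighter than the background
thresh = threshold_otsu(im)
binary_global = im < thresh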
import numpy as np
import scipy.ndimage
from skimage.morphology import remove_small_objects

# (1) label all white objects (the ones in the binary image).
# scipy.ndimage.label labels 0s (the background) as 0 and then
# every non-connected, nonzero object as 1, 2, ... n.
label_objects, nb_labels = scipy.ndimage.label(binary_global)
# (2) calculate every labeled object's binary size (including that
# of the background)
sizes = np.bincount(label_objects.ravel())
# (3) set the size of the background to 0 so that if it happened to be
# larger than the largest white object it would not matter
sizes[0] = 0
# (4) keep only the largest object
binary_objects = remove_small_objects(binary_global, max(sizes))
Figure 4 – isolated seismic section
# remove holes (black regions inside the white object)
binary_holes = scipy.ndimage.binary_fill_holes(binary_objects)
Figure 5 – holes removed
from skimage.morphology import opening, disk

# a morphological opening with a disk structuring element removes
# small features such as tick marks and labels
enhanced = opening(binary_holes, disk(7))
Figure 6 – removed residual tick marks and labels
That’s it!!!
In the next post, we will use this polygonal binary object both as a basis to capture the actual coloured seismic section from the input image and to derive a transformation to rectify it to a rectangle.
In Evaluate and compare colormaps I showed how to extract and display the lightness profile of a colormap using Python. I do this routinely with colormaps, but I realize it takes some effort, and not all users may feel comfortable using code to test whether a colormap is perceptual or not.
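For reference, the kind of check I mean can be sketched in a few lines (matplotlib's viridis is just a stand-in example here):

import numpy as np
import matplotlib.pyplot as plt
from skimage import color

# sample the colormap's RGB values and convert to CIE Lab;
# the first Lab channel is the lightness L*
rgb = plt.get_cmap('viridis')(np.linspace(0, 1, 256))[:, :3]
lightness = color.rgb2lab(rgb.reshape(1, -1, 3))[0, :, 0]

plt.plot(lightness)
plt.ylabel('L*')
plt.show()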
This got me thinking that there is perhaps a need for a user-friendly, interactive tool to help identify colormap artifacts, and wondering what it would look like.
In a previous post, Comparing color palettes, I plotted the elevation for the South American continent from the Global Land One-km Base Elevation Project using four different color palettes. In Figure 1 below I plot three of those again: rainbow, linear lightness rainbow, and grayscale, from left to right. In maps like these some artifacts are very evident. For example, there's a classic film negative effect in the map on the left, where the Guiana Highlands and the Brazilian Highlands, both in blue, seem to stand lower than the Amazon basin, in violet. This is due to the much lower lightness (or, alternatively, intensity) of the colour blue compared to violet.
Figure 1
However, other artifacts are more subtle, like the inversion of the highest peaks in the Andes, which are coloured in red, relative to their surroundings, in particular the Altiplano, an endorheic basin that includes Lake Titicaca.
My idea for this tool is simple, and consists of two windows. The first is a basemap window which can display either a demo dataset or user data loaded from an ASCII grid file. In this window the user would interactively select a profile by building a polyline with point-and-click, like the one in Figure 2 in white.
Figure 2
The second window would show the elevation profile with the colour fill assigned based on the colormap, like in Figure 3 at the bottom (with the colormap to the right), and with a profile of the corresponding colour intensities (on a scale of 1-255) at the top.
In this view it is immediately evident that, for example, the two highest peaks near the center, coloured in red, are relative intensity lows. Another anomaly is the absolute intensity low on the right side, corresponding to the colour blue, where the elevation profile varies smoothly.
Figure 3
I created this concept prototype using a combination of Matlab, Python, and Surfer. I welcome suggestions for possible additional features, and would like to hear from folks interested in collaborating on a web app (ideally in Python).
I recently added to my Matlab File Exchange function, Perceptually improved colormaps, a colormap for periodic data like azimuth or phase. I am going to briefly showcase it using data from my degree thesis in geology, which I used before, for example in Visualization tips for geoscientists – Matlab. Figure 1, from that post, shows residual gravity anomalies in milligals.
Figure 1
Often we’re interested in characterizing these anomalies by calculating the direction of maximum dip at each point on the surface and displaying the azimuth of that direction, or dip azimuth. I’ve done this for the surface of residual anomalies from Figure 1 and displayed the azimuth in Figure 2. Azimuths from 0 to 360 degrees are color-coded using Jet, Matlab’s standard colormap (until recently). Typically I do not trust azimuth values where the dip is close to zero, because they are often contaminated by noise, so I would use shading to de-saturate the colors where dip has the lowest values; for ease of discussion, however, I haven’t done so in this case.
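As an aside, for readers who want to compute dip azimuth themselves, one possible sketch in Python (the clockwise-from-north convention and unit grid spacing are my assumptions):

import numpy as np

# z is a gridded surface; gradient components along rows (y) and columns (x)
dzdy, dzdx = np.gradient(z)

# direction of maximum (downhill) dip, in degrees clockwise from north,
# wrapped to the 0-360 range
azimuth = np.degrees(np.arctan2(-dzdx, -dzdy)) % 360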
Figure 2. Azimuth values color-coded with Jet.
There are two problems with Figure 2. The first is the well-known set of problems with the jet colormap: for example, blue is too dark, and blue areas appear as bands of constant colour; yellow is much lighter than any other colour, so we see artificial yellow edges that are not really present in the data. But there is an additional issue in Figure 2, because azimuths close in value to 0 and 360 degrees are colored with blue and red, respectively, instead of a single color as they should be, causing an additional artificial edge.
In Figure 3 I recolored the map using a colormap that replicates those used in many geophysical software tools to display azimuth or phase data. This is better because it wraps around at 360 degrees but the perceptual issues are unresolved: in this case red, yellow and blue all appear as sharp perceptual edges.
Figure 3. Azimuth values color-coded with generic azimuth colormap.
Figure 4. Azimuth values color-coded with isoluminant azimuth colormap.
In Figure 4 I used my new colormap, called isoAZ (for isoluminant azimuth). This colormap is much better because not only does it wrap around at 360 degrees, but lightness is also held constant for all colors, which eliminates the perceptual anomalies. All the artificial yellow, red, and blue edges are gone; only real edges are left. This can be more easily appreciated in the figure below: if you hover with your mouse over it you are able to switch back and forth between Figure 3 and Figure 4.
From an interpretation point of view, azimuths 180 degrees apart are of opposing colours, which is ideal for dip azimuth data because it allows us to easily recognize folds where dips of opposite direction are juxtaposed at an edge. One example is the sharp edge in the northwest quadrant of Figure 4, where magenta is juxtaposed to green. If you look at Figure 1 you see that there’s a relative high in this area (the edge in Figure 4) with dips of opposite direction on either side (East and West, i.e. 90 and 270 degrees).
The colormap was created in the Lightness-Chroma-Hue color space, a polar transform of the Lab color space, where lightness is the vertical axis and, at each value of lightness, chroma is the radial coordinate and hue the polar angle. One limitation of this approach is that, due to the irregular shape of the color gamut section at each lightness value, we can never exceed chroma values of about 38-40 (at lightness = 65 in Matlab; in Python, with extensive trial and error, I have not been able to go past 36 using the scikit-image color module), which makes the resulting colors pale and pastel-like.
For those who want to experiment with it further, I used just a few lines of code similar to the ones below:
radius = 38; % chroma
theta = linspace(0, 2*pi, 256)'; % hue
a = radius * cos(theta);
b = radius * sin(theta);
L = (ones(1, 256)*65)'; % lightness
Lab = [L, a, b];
RGB=colorspace('RGB<-Lab',Lab(end:-1:1,:));
This code is a modification from an example by Steve Eddins on a post on his Matlab Central blog. In Steve’s example the colormap cycles through the hues as lightness increases monotonically (which by the way is an excellent way to generate a perceptual rainbow). In this case lightness is kept constant and hue cycles through the entire 360 degrees and wraps around. Also, instead of using the Image Processing Toolbox, I used Colorspace, a free function from Matlab File Exchange, for the color space transformations.
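For completeness, a possible Python analogue using scikit-image (a sketch following the Matlab snippet above, with chroma capped at 36 per the gamut limitation mentioned earlier):

import numpy as np
from skimage import color

# constant lightness, hue sweeping the full 360 degrees at fixed chroma
theta = np.linspace(0, 2 * np.pi, 256)   # hue
L = np.full(256, 65.0)                   # lightness
a = 36 * np.cos(theta)                   # chroma * cos(hue)
b = 36 * np.sin(theta)                   # chroma * sin(hue)
lab = np.dstack((L, a, b))               # shape (1, 256, 3)
rgb = color.lab2rgb(lab)[0]              # (256, 3) RGB values in [0, 1]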
For data like fracture orientation, where azimuths 180 degrees apart are equivalent, it is better to stack two of these isoluminant colormaps in a row. In this way we place opposing colors 90 degrees apart, whereas colors 180 degrees apart are the same. You can do it using the Matlab commands repmat or vertcat, as below:
radius = 38; % chroma
theta = linspace(0, 2*pi, 128)'; % hue
a = radius * cos(theta);
b = radius * sin(theta);
L = (ones(1, 128)*65)'; % lightness
Lab = [L, a, b];
rgb=colorspace('RGB<-Lab',Lab(end:-1:1,:));
RGB=vertcat(rgb,rgb);
In my last post I introduced a CIE Lab linear L* rainbow palette from a paper by Kindlmann et al. [1]. I used this palette with a map of South America created with data from the Global Land One-km Base Elevation Project at the National Geophysical Data Center. The map is the third one in the figure below.
Based on visual inspection I argued that the linear L* colored map compares more favourably with the grayscale (my perceptual benchmark, on the right) than the first and second maps, which use my ROYGBIV rainbow palette (from this post) and a classic rainbow palette, respectively. I noted that looking at the intensity of the colorbars may help in the assessment: the third and fourth colorbars are very similar and both look perceptually linear, whereas the first and second do not.
So it seems that among the three color palettes the third one is the best, but…
… prove it!
All the above is fine and reasonable, and yet it is still very much subjective. How can I prove it, convince myself this is indeed the case?
Well, of course one way is to use my L* profile and Great Pyramid tests with Matlab code from the first post of this series. Look at the two figures below: comparison of the lightness L* plots clearly shows the linear L* palette is far more perceptual than the ROYGBIV.
One disadvantage of this method is that you have to use Matlab, which is neither free nor cheap, and you have to be comfortable with some code and ASCII file manipulation.
Just recently I had an idea for an open-source alternative with ImageJ and the 3D Color Inspector plugin. The only preparatory step required is to save a palette colorbar as a raster image. Then open the image in ImageJ, run the plugin, and display the colorbar in Lab space in a 3D view. There are many options to change the scale of the plot, the perspective, and how the colors are displayed (e.g. frequency weighted, median cut, etcetera). The view can be rotated manually, and also automatically. Below I am showing the rotating animations for the same two palettes.
Discussion
The whole process, including the recording of the animations using the Quicktime screencast feature, took me less than 10 minutes, and it leaves no doubt as to which one is the best color palette. Let me know what you think.
A few observations: in 3D, the ROYGBIV palette is even more strikingly and obviously non-monotonic. The lightness gradient varies in magnitude, resulting in non-uniform contrast. Compare, for example, the portion between blue and green to that between green and yellow: they have approximately the same number of samples but a very different change in lightness between their extremes. The sign of the gradient also changes, producing perceptual inversions, for example where the yellow-to-red section follows the blue-to-yellow one. These inversions may result in perceived elevation inversions if, say, the palette is used to display elevation data. The linear L* palette, on the other hand, nicely spirals upwards, with L* changing monotonically from 0 to 100.
Last weekend I had a few hours to play with but needed a short break from writing about color palettes, so I decided to go back and finish up (for now) this series on geoscience visualization in Matlab. In the first post of the series I expanded on work by Steve Eddins at Mathworks on overlaying images using influence maps and demonstrated how it could be used to enhance the display of a single geophysical dataset.
Using transparency to display multiple data sets: an example
At the end of the second post I promised I would go back and show an example of using transparency and influence maps for other tasks, like overlaying different attributes. Here’s my favorite example, in Figure 1. The image is a map, in pastel colors, of the Bouguer gravity anomaly for the Southern Tuscany region of Italy, with three other layers superimposed using the techniques mentioned above.
It is beyond the objectives of this post to discuss gravity exploration methods at length, or to attempt a full interpretation of the map. I will come back to it at a later time, as I am planning a full series on gravity exploration using this data set; if you are burning to read more about gravity interpretation, please check these excellent notes by Martin Unsworth, Professor of Physics in the Earth and Atmospheric Sciences department, University of Alberta, and note 4 at the end of this post. Otherwise, and for now, suffice it to say that warm colors (green to yellow to red) in the Bouguer gravity map indicate, relatively speaking, excess mass in the subsurface, and blue and purple indicate a deficit of mass in the subsurface.
The black and grey lines are lineaments extracted from derivatives of the Bouguer gravity data using two different methods [1]. The semitransparent, white-filled polygons show the location of some of the basement outcrops (the densest rocks in this area).
Lineaments extracted from gravity data can correspond to contacts between geological bodies of different density, so a correlation can be expected between basement outcrops and some of the lineaments, as the outcrops are often in lateral contact with much less dense rocks. This is often exploited in mineral exploration in areas such as this, where mineralization occurs at, or in the vicinity of, these contacts. As an example, I show in Figure 2 the occurrences (AGIP – RIMIN, unpublished industry report, 1989) of silicization (circles) and antimony deposits (triangles), superimposed on the distance from one of the sets of lineaments (warm colors indicate greater distance) from Figure 1.
The fact that different methods give systematically shifted results is well known, and is often due to the trade-off between resolution and stability: the more stable methods are less affected by noise, but often produce smoother edges over deeper contacts, and their maxima may not correspond. This is in addition to the inherent ambiguity of gravity data, which cannot, by themselves, be interpreted uniquely. To establish which method might be more correct in this case (neither is a silver bullet) I tried to calibrate the results using the basement outcrops (i.e. does either method more closely match the outcrop edges?). Having done that, I would have more confidence in making inferences about other possible contacts in the subsurface suggested by the lineaments. I would say the black lines do a better overall job in the East, the gray ones perhaps in the West. So perhaps I’m stuck? I will get back to this during my gravity series.
Figure 1
Figure 2
Matlab code
As usual I am happy to share the code I used to make the combined map of Figure 1. Since the data I use is in part from my unpublished thesis in Geology and in part from Michele di Filippo at the University of Rome, I am not able to share it, and you will have to use your own data, but the Matlab code is easily adapted. The code snippet below assumes you have a geophysical surface already imported in the workspace and stored in a variable called “dataI”, as well as the outcrops in a variable called “basement”, and the lineaments in “lnmnt1” and “lnmnt2”. It also uses my cube1 color palette.
% part 1 - map gravity data
figure; imagesc(XI,YI,dataI); colormap(cube1); hold on;
%
% part 2 - dealing with basement overlay
white=cat(3, ones(size(basement)), ones(size(basement)),...
ones(size(basement)));
ttt=imagesc(Xb,Yb,white); % plots white layer for basement
%
% part 3 - dealing with lineaments overlays
black=cat(3, zeros(size(lnmnt1)), zeros(size(lnmnt1)),...
zeros(size(lnmnt1)));
grey=black+0.4;
basement_msk=basement.*0.6;
kkk=imagesc(XI,YI,black); % plots black layer for lineament 1
sss=imagesc(XI,YI,grey); % plots grey layer for lineament 2
hold off
%
% part 4 - set influence maps
set(ttt, 'AlphaData', basement_msk); % influence map for basement
set(kkk, 'AlphaData', lnmnt1); % influence map for lineament 1
set(sss, 'AlphaData', lnmnt2); % influence map for lineament 2
%
% making it pretty
axis equal
axis tight
axis off
set(gca,'YDir','normal');
set(gcf,'Position',[180 150 950 708]);
set(gcf,'OuterPosition',[176 146 958 790]);
Matlab code, explained
OK, let’s break it down starting from scratch. I want first to create a figure and display the gravity data, then hold it so I can overlay the other layers on top of it. I do this with these two commands:
figure;imagesc(XI,YI,dataI);
hold on;
The layer I want to overlay first is the one showing the basement outcrops. I make a white basement layer covering the full extent of the map, which is shown in Figure 3, below.
The handle ttt is to be used in combination with the basement influence map to produce the partly transparent basement overlay: remember that I wanted to display the outcrops in white color, but only partially opaque so the colored gravity map can still be (slightly) seen underneath. I make the influence map, shown in Figure 4, with the command:
basement_msk=basement.*0.6;
Since the original binary variable “basement” had values of 1 for the outcrops and 0 elsewhere, with the command above I assign an opacity of 0.6 to the outcrops, which is applied when the next command, below, is run, achieving the desired result.
For the lineaments I do things in a similar way, except that I want those plotted with full opacity since they are only 1 pixel wide.
As an example I am showing in Figure 5 the black layer lineament 1 and in Figure 6 the influence map, which has values of 1 (full opacity) for the lineament and 0 (full transparency) for everywhere else.
Figure 5
Figure 6
Now a few extra lines to make things pretty, and this is what I get, shown below in Figure 7: not what I expected!
I am calling the layer white, but really all I am telling Matlab is to create a layer with maximum intensity (1). The preceding colormap(cube1) command, however, assigned a salmon-red color to the maximum intensity in the figure, and so that is what you get for the basement overlay.
Again, to get the result I wanted, I had to come up with a trick like in the second post examples. This is the trick:
I create a new color palette with this command:
cube1edit=cube1; cube1edit(256,:)=1;
The new color palette has its last RGB triplet defined as white instead of salmon-red.
Then I replace this line:
figure; imagesc(XI,YI,dataI); colormap(cube1); hold on;
with the new line:
figure; imagesc(XI,YI,dataI, [15 45]); colormap (cube1edit); hold on;
The highest value in dataI is around 43. By spreading the color range from [15 43] to [15 45], thereby exceeding max(dataI), I ensure that white is used for the basement overlay but not in any part of the map where gravity is highest but there is no basement outcrop. In other words, white is assigned in the palette but reserved for the overlay.
Please let me know if that was clear. If it isn’t I will try to describe it better.
Notes
[1] One method is the total horizontal derivative. The other is the hyperbolic tilt angle, using Matlab code by Cooper and Cowan (see reference below). This is how I produced the two overlays: first I calculated the total horizontal derivative and the tilt angle, then I found the maxima to use as the overlay layers. This is similar to Figure 3e in Cooper and Cowan, but I refined my maxima result by reducing them to 1-pixel-wide lines (using a thinning algorithm, as sketched below).
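In Python terms, that last refinement might look something like this (a sketch only: the original processing was done in Matlab, and the variable name maxima_binary is my stand-in for the binary maxima map):

from skimage.morphology import skeletonize

# thin the binary maxima map down to 1-pixel-wide lines
lineament = skeletonize(maxima_binary)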
Reference
Cooper, G.R.J., and Cowan, D.R. (2006). Enhancing potential field data using filters based on the local phase. Computers & Geosciences, 32, 1585–1591.