Computer vision in geoscience: recover seismic data from images – part 2

In part 1 of this short series I demonstrated how to detect the portion occupied by the seismic section in an image (Figure 1).

Figure 1

The result was a single binary image with the white object representing the pixels occupied by the seismic section (Figure 2).

Figure 2

You can download from GitHub all the tools for the automated workflow (including both part 1 and part 2, and some of the optional features outlined in the introduction) in the module mycarta.py, as well as an example Jupyter Notebook showing how to run it.

Next I want to use this binary object to derive a transformation function that rectifies the seismic section in the input image to a rectangle.

The first step is to detect the contour of the object. Notice that because we used morphological operations it is not a perfect quadrilateral: it has rounded corners and some of the sides are bent. The second step, therefore, will be to approximate the contour with a polygon, with enough tolerance to ensure it has only four sides (this took some trial and error, but 25 turned out to be a good value for the tolerance parameter across all the test images I tried).

In practice, the two steps are performed together using the functions find_contours (there is only one contour to find, really) and approximate_polygon from the skimage.measure module, as below:

import numpy as np
from skimage.measure import find_contours, approximate_polygon

contour = np.squeeze(find_contours(enhanced, 0))
coords = approximate_polygon(contour, tolerance=25)

The variable coords contains the coordinates of the corner points of the polygon (the first point is repeated last to close the polygon), which in Figure 3 I plotted superimposed on the input binary object.
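For reference, the overlay in Figure 3 takes only a couple of matplotlib calls. This is just a sketch, assuming the variables enhanced and coords from the snippets above, and remembering that find_contours returns (row, column) pairs, so columns go on the x axis:

import matplotlib.pyplot as plt

plt.imshow(enhanced, cmap='gray')           # binary object as background
plt.plot(coords[:, 1], coords[:, 0], '-o')  # polygon: x = columns, y = rows
plt.show()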

Figure 3 – approximated polygon

A problem with the output of approximate_polygon is that the points are not ordered; to solve it I adapted a function from a Stack Overflow answer to sort them based on the angle from their centroid:

def ordered(points):
    # sort the corner points counter-clockwise by their angle
    # from the centroid of the polygon
    x = points[:, 0]
    y = points[:, 1]
    cx = np.mean(x)
    cy = np.mean(y)
    a = np.arctan2(y - cy, x - cx)
    order = a.ravel().argsort()
    x = x[order]
    y = y[order]
    return np.vstack([x, y])

I call the function as below, on the corners in the contour without the last one (a repetition of the first point):

sortedCoords = ordered(coords[:-1]).T

I can then plot them using colors in a predefined order to convince myself that they are indeed sorted:

plt.scatter(sortedCoords[:, 1], sortedCoords[:, 0], s=60,
            color=['magenta', 'cyan', 'orange', 'green'])

Figure 4 – corners sorted in counter-clockwise order

The next bit of code may seem complicated, but it is not. With the coordinates of the corners known, and their order as well, I can calculate the largest width and height of the input seismic section, and use them to define the size of the registered output section, which is to be rectangular:

w1 = np.sqrt(((sortedCoords[0, 1]-sortedCoords[3, 1])**2)
  +((sortedCoords[0, 0]-sortedCoords[3, 0])**2))
w2 = np.sqrt(((sortedCoords[1, 1]-sortedCoords[2, 1])**2)
  +((sortedCoords[1, 0]-sortedCoords[2, 0])**2))

h1 = np.sqrt(((sortedCoords[0, 1]-sortedCoords[1, 1])**2)
  +((sortedCoords[0, 0]-sortedCoords[1, 0])**2))
h2 = np.sqrt(((sortedCoords[3, 1]-sortedCoords[2, 1])**2)
  +((sortedCoords[3, 0]-sortedCoords[2, 0])**2))

w = max(int(w1), int(w2))
h = max(int(h1), int(h2))
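As a side note, the same corner-to-corner distances can be written a bit more compactly with np.hypot; this is just an equivalent sketch of the lines above, using a small helper function of my own:

def dist(p, q):
    # Euclidean distance between two corner points
    return np.hypot(p[0] - q[0], p[1] - q[1])

w = max(int(dist(sortedCoords[0], sortedCoords[3])),
        int(dist(sortedCoords[1], sortedCoords[2])))
h = max(int(dist(sortedCoords[0], sortedCoords[1])),
        int(dist(sortedCoords[3], sortedCoords[2])))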

With w and h I then define the coordinates of the output corners used to derive the transformation function:

dst = np.array([
    [0, 0],
    [h-1, 0],
    [h-1, w-1],
    [0, w-1]], dtype='float32')

Now I have everything I need to rectify the seismic section in the input image: it is warped using homologous points (the two sets of four corners) and a transformation function.

import skimage.transform

dst[:, [0, 1]] = dst[:, [1, 0]]
sortedCoords[:, [0, 1]] = sortedCoords[:, [1, 0]]
tform = skimage.transform.ProjectiveTransform()
tform.estimate(dst, sortedCoords)
warped = skimage.transform.warp(img, tform, output_shape=(h-1, w-1))

Notice that I had to swap the x and y coordinates before estimating the transformation function: find_contours returns points as (row, column) pairs, whereas ProjectiveTransform expects them as (x, y), that is (column, row). The result is shown in Figure 5: et voilà!

Figure 5 – rectified seismic section

You can download from GitHub the code to try this yourself (both part 1 and part 2, and some of the optional features outlined in the introduction, like removing the rectangle with label inside the section) as well as an example Jupyter Notebook showing how to run it.

Computer vision in geoscience: recover seismic data from images – part 1

As anticipated in the introductory post of this short series, I am going to demonstrate how to automatically detect where a seismic section is located in an image (be it a picture taken from your wall or a screen capture from a research paper), rectify any distortion that might be present, and remove all sorts of annotations and trivia around and inside the section.

You can download from GitHub all the tools for the automated workflow (including both part 1 and part 2, and some of the optional features outlined in the introduction) in the module mycarta.py, as well as an example Jupyter Notebook showing how to run it.

In this part one I will be focusing on the image preparation and enhancement, and on the automatic detection of the seismic section (all done using functions from numpy, scipy, and scikit-image). To do that, first I convert the input image (Figure 1) containing the seismic section to grayscale, and then enhance it by increasing the image contrast (Figure 2).

Figure 1 – input image

Figure 2 – grayscale image

All it takes to do that is a few lines of code:

import numpy as np
from skimage import color, exposure

gry = color.rgb2gray(img)
p2, p95 = np.percentile(gry, (2, 95))
rescale = exposure.rescale_intensity(gry, in_range=(p2, p95))

For a good visual intuition of what actually happens during the contrast stretching, check my post sketch2model – sketch image enhancements: in there I show intensity profiles taken across the same image before and after the process.
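To build the same intuition on this image, a quick sketch like the one below (assuming the variables gry and rescale from the snippet above) plots the intensity profile of the same row before and after the stretching:

import matplotlib.pyplot as plt

row = gry.shape[0] // 2                   # pick the middle row
plt.plot(gry[row, :], label='before')     # original grayscale profile
plt.plot(rescale[row, :], label='after')  # contrast-stretched profile
plt.legend()
plt.show()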

Finding the seismic section in this image involves four steps:

  1. converting the grayscale image to binary with a threshold (in this example a global threshold with the Otsu method)
  2. finding and retaining only the largest object in the binary image (heuristically assumed to be the seismic section)
  3. filling its holes
  4. applying morphological operations to remove minutiae (tick marks and labels)

Below I list the code, and show the results.

from skimage.filters import threshold_otsu

global_thresh = threshold_otsu(rescale)
binary_global = rescale < global_thresh

Figure 3 – binary image

import scipy.ndimage
from skimage.morphology import remove_small_objects

# (1) label all white objects (the ones in the binary image).
# scipy.ndimage.label labels the background (the 0s) as 0 and then
# every non-connected, nonzero object as 1, 2, ... n.
label_objects, nb_labels = scipy.ndimage.label(binary_global)

# (2) calculate every labeled object's binary size (including that
# of the background)
sizes = np.bincount(label_objects.ravel())

# (3) set the size of the background to 0 so that if it happened to be
# larger than the largest white object it would not matter
sizes[0] = 0

# (4) keep only the largest object
binary_objects = remove_small_objects(binary_global, max(sizes))

Figure 4 – isolated seismic section

# Remove holes (black regions inside white object)
binary_holes = scipy.ndimage.morphology.binary_fill_holes(binary_objects)

Figure 5 – holes removed

from skimage.morphology import opening, disk
enhanced = opening(binary_holes, disk(7))

Figure 6 – removed residual tick marks and labels

That’s it!!!

You can download from GitHub all the tools for the automated workflow (including both part 1 and part 2, and some of the optional features outlined in the introduction) in the module mycarta.py, as well as an example Jupyter Notebook showing how to run it.

In the next post, we will use this polygonal binary object both as a basis to capture the actual coloured seismic section from the input image and to derive a transformation to rectify it to a rectangle.
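As a small teaser of the capture step, a minimal sketch (assuming img is the original RGB image and enhanced is the binary object from above) could use the object as a mask to blank out everything outside the section:

captured = img.copy()
captured[~enhanced] = 0  # zero out all pixels outside the binary object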

Mapping and validating geophysical lineaments with Python

In Visualization tips for geoscientists: MATLAB, Part III I showed there’s a qualitative correlation between occurrences of antimony mineralization in the southern Tuscany mining district and the distance from lineaments derived from the total horizontal derivative (also called maximum horizontal gradient).

Let’s take a look at it below (distance from lineaments increases as the color goes from blue to green, yellow, and then red).

Figure – distance from lineaments and mineral occurrences

However, in a different map in the same post I showed that lineaments derived using the maxima of the hyperbolic tilt angle (Cooper and Cowan, 2006, Enhancing potential field data using filters based on the local phase) are offset systematically from those derived using the total horizontal derivative.

Let’s take a look at it below: in this case Bouguer gravity values increase as the color goes from blue to green, yellow, and then red; white polygons are basement outcrops.

The lineaments from the total horizontal derivative are in black, those from the maxima of hyperbolic tilt angle are in gray. Which lineaments should be used?

The ideal way to map the location of density contrast edges (as a proxy for geological contacts) would be to use gravity forward models, or even 3D gravity inversion, ideally constrained by all available independent data sources (magnetic or induced-polarization profiles, exploratory drilling data, reflection seismic interpretations, and so on).

The next best approach is to map edges using a number of independent gravity data enhancements, and then only use those that collocate.

Cooper and Cowan (in the same 2006 paper) demonstrate that no single edge-detector method is a perfect geologic-contact mapper. Citing Pilkington and Keating (2004, Contact mapping from gridded magnetic data – A comparison of techniques), they conclude that the best approach is to use “collocated solutions from different methods providing increased confidence in the reliability of a given contact location”.

I show an example of such a workflow in the image below. In the first column from the left is a map of the residual Bouguer gravity from a smaller area of interest in the southern Tuscany mining district (where measurements were made on a denser station grid). In the second column from the left are the lineaments extracted using three different (and independent) derivative-based data enhancements followed by skeletonization. The same lineaments are superimposed on the original data in the third column from the left. Finally, in the last column, the lineaments are combined into a single collocation map to increase confidence in the edge locations (I applied a mask so as to display edges only where at least two methods collocate).
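To give an idea of that last step, below is a minimal sketch of how such a collocation map could be built from binary lineament maps. The input names (edges_thd, edges_tilt, edges_theta, arrays with 1 on a lineament and 0 elsewhere) and the helper function are hypothetical, not the actual notebook code:

import numpy as np

def collocation_map(edge_maps, min_agreement=2):
    # count, at every pixel, how many methods detected an edge
    votes = np.sum([np.asarray(e, dtype=int) for e in edge_maps], axis=0)
    # keep only pixels where at least min_agreement methods collocate
    return np.where(votes >= min_agreement, votes, 0)

collocated = collocation_map([edges_thd, edges_tilt, edges_theta])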

Figure – collocation of lineaments from multiple enhancements

If you want to learn more about this method, please read my note in the Geophysical tutorial column of The Leading Edge, which is available with open access here.
To run the open source Python code, download the iPython/Jupyter Notebook from GitHub.

With this notebook you will be able to:

1) create a full suite of derivative-based enhanced gravity maps;

2) extract and refine lineaments to map edges;

3) create a collocation map.

These techniques can easily be adapted to collocate lineaments derived from seismic data, with which the same derivative-based enhancements are showing promising results (Russell and Ribordy, 2014, New edge detection methods for seismic interpretation).

An example of Forensic Image Processing in ImageJ

In a previous post I introduced ImageJ, a very powerful open source image processing software. ImageJ allows users to display, edit, analyze, process, and filter images, and its capabilities are greatly increased by hundreds of plugins.

In a future post I will be showing how to use the watershed transform in ImageJ for medical image analysis and advanced geoscience map interpretation and terrain analysis.

Today I am posting a submission entry by guest Ron DeSpain, an image and signal analysis software developer. Ron’s note is about Feature Detection for Fingerprint Matching in ImageJ. I was thrilled to receive this submission as I really have a soft spot for forensic science. Additionally, it is a nice way to introduce skeletonization, which I will be using in a future series on automatic detection of lineaments in geophysical maps. So, thanks Ron!

Please check this page for reference on fingerprint terminology. And if you are interested in the topic and would like to start a discussion, or make a suggestion, please use the comment section below. You can also contact Ron directly at ron_despain@hotmail.com if you want to enquire about the code.

==========================================================================================

Initial Feature Detection Steps for Fingerprint Matching – by Ron DeSpain

A common fingerprint pre-processing method, called the crossings algorithm, is used to extract features called minutiae from a fingerprint. Minutiae are located at the ends of fingerprint ridges and where the ridges split (bifurcations), as shown in Figure 1. Once detected, minutiae are correlated with a database of known fingerprint minutiae sets. This article discusses the very first step in detecting these minutiae in a fingerprint.

Figure 1 Types of Fingerprint Minutiae

This fingerprint is available in a free database of fingerprint images at http://bias.csr.unibo.it/fvc2000/download.asp

I got the idea for this convolution-based minutiae extractor from a paper similar to Afsar et al. (2004; see Reference below), where a slightly different counting scheme is used to identify minutiae.

This algorithm depends on the fact that the end and bifurcation patterns have unique numbers of crossings in a 3×3 local region, as depicted in Figure 2.  This means that by simply counting the crossings you could detect the minutiae.

Figure 2 Minutiae Patterns

The pseudocode for this algorithm is as follows (a minimal Python sketch of the same idea is shown after the list):

  1. Convert the image to binary, normalized to the 0 to 1 range, floating point data
  2. Skeletonize the image
  3. Convolve the skeleton with the unit 3×3 matrix to count the crossings
  4. Multiply the skeletonized image by the convolved image = Features image
  5. Threshold the Features image at 2 for ridge ends
  6. Threshold the Features image at 4 for bifurcations
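For readers who want to experiment outside ImageJ, here is a minimal Python sketch of the same counting scheme using scipy and scikit-image. The input variable img (the fingerprint image) is assumed, and this is my own sketch, not Ron's code:

import numpy as np
from scipy.ndimage import convolve
from skimage.morphology import skeletonize

skeleton = skeletonize(img > 0)                      # steps 1-2: binarize, then skeletonize
counts = convolve(skeleton.astype(float),
                  np.ones((3, 3)), mode='constant')  # step 3: count crossings in each 3x3 region
features = skeleton * counts                         # step 4: keep counts on the skeleton only
ridge_ends = features == 2                           # step 5: ridge ends
bifurcations = features == 4                         # step 6: bifurcations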

The following ImageJ macro will identify minutiae using this simple pattern recognition technique. You can download and install ImageJ free from http://imagej.nih.gov/ij/download.html. Don’t forget to get the user’s manual and macro coding guide from this site if you want to modify my macro.

//Minutiae Detection Macro
open();                                   // prompt for the fingerprint image
run("Duplicate...", "title=Skeleton");
starttime = getTime();
run("Make Binary");                       // step 1: convert to binary
run("Skeletonize");                       // step 2: skeletonize
run("32-bit");
run("Divide...", "value=255.000");        // normalize to the 0-1 range
run("Enhance Contrast", "saturated=0 normalize");
run("Duplicate...", "title=Convolution");
run("Convolve...", "text1=[1 1 1\n1 1 1\n1 1 1\n] stack"); // step 3: count crossings
imageCalculator("Multiply create 32-bit", "Skeleton","Convolution"); // step 4
endtime = getTime();
selectWindow("Result of Skeleton");
rename("Features");
run("Tile");
run("Threshold...");                      // steps 5-6: threshold at 2 or 4
print("Processing Time (ms) = "+(endtime - starttime));
setTool(11);
selectWindow("Features");
run("Sync Windows");

Copy this code to a text file (.txt), drop it into the ImageJ macros folder, install and run it in ImageJ using the image at the end of this article.

The output of the above macro is shown in Figure 3 below:

Figure 3 ImageJ Macro Output

Setting the threshold control to show pixels with a value of 2 in red highlights will show the ridge end detections, as in Figure 4. Note that the noise in the image produces false detections, which have to be identified with further processing not addressed here.

Figure 4 Ridge End Detections

Bifurcations are similarly found by setting the threshold to 4 as shown in Figure 5:

Figure 5 Bifurcations Detected

There are two fingerprint processing macros on the MathWorks user community File Exchange for Matlab users, and a free fingerprint verification SDK at http://www.neurotechnology.com/free-fingerprint-verification-sdk.html for those of you who would like to dig deeper into this subject.

You can copy and save the fingerprint image I used in this article directly from Figure 6 below to get you started, either via screen capture or by right-clicking the image and downloading it.

Figure 6 Original Image

Reference

Afsar, F. A., M. Arif, and M. Hussain. “Fingerprint identification and verification system using minutiae matching.” National Conference on Emerging Technologies. 2004.