Using Python to calculate northern hemisphere’s surface land coverage

Yesterday during my lunch break I was rather bored; it is unseasonably cold for the fall, even in Calgary, and a bit foggy too.
For something to do I browsed the Earth Science beta on Stack Exchange looking for interesting questions (as an aside, I encourage readers to look at the unanswered questions).
There was one that piqued my curiosity, “In the northern hemisphere only, what percentage of the surface is land?”.
It occurred to me that I could put together an answer using an equal-area projection map and a few lines of Python code; and indeed, in 15 minutes I whipped up this workflow:

  • Invert and import this B/W image of an equal-area (Peters) projection of the Northern hemisphere (land = white pixels).

Source of original image (full globe): Wikimedia Commons

  • Store the image as a Numpy array.
  • Calculate the total number of pixels in the image array (black + white).
  • Calculate the total number of white pixels (1s) by summing the entire array. Black pixels (0s) will not contribute.
  • Calculate percentage of white pixels.

The result I got is 40.44%. Here’s the code:

# import libraries
import numpy as np
from skimage import io
from matplotlib import pyplot as plt

# import image
url = 'https://mycartablog.com/wp-content/uploads/2018/09/peters_projection_black_north.png'
north_equal_area = io.imread(url, as_grey=True)

# check the image
fig = plt.figure(figsize=(20, 10))
ax = fig.add_subplot(1, 1, 1)
ax.set_xticks([])
ax.set_yticks([])
plt.imshow(north_equal_area, cmap = 'gray');

# Do the calculations
r, c = np.shape(north_equal_area)
sz =  r*c
s = np.sum(north_equal_area)
print(np.round(s/sz*100, decimals=2))
>>> 40.44

As suggested in a comment to my initial answer, I ran the same Python script for the entire globe and got the expected 30% land coverage:

# import image
url1 = 'https://mycartablog.com/wp-content/uploads/2018/09/peters_projection_black_full.png'
equal_area = io.imread(url1, as_grey=True)

# Do the calculations
r1, c1 = np.shape(equal_area)
sz1 = r1*c1
s1 = np.sum(equal_area)
print(np.round(s1/sz1*100, decimals=2))
>>> 30.08

 

Adding normal and reverse faults to an impedance model with Python

This is a short and sweet follow-up to yesterday’s post Making many synthetic seismic models in Python, in which I want to show how to add oblique faults to an impedance model, as opposed to a vertical one.

In the code below, I first select one of the impedance models from the many I created, then make two new versions of it: one shifted up 5 samples, and one shifted down 5 samples. The impedance values for the extra volume added – at the bottom in the first case, the top in the second – are derived from the unshifted impedance model.

Next, I make two copies of the original, call them normal and reverse, and then replace the upper triangle with the upper triangle of the shifted down and shifted up versions, respectively.

# imports needed for this snippet
import copy
import numpy as np
import scipy as sp
import scipy.ndimage  # newer SciPy exposes this directly as sp.ndimage.shift

# impedances is the list of models built in the previous post; n is the index of the chosen model
unshifted = impedances[n]
up = sp.ndimage.interpolation.shift(unshifted, (5, 0), cval=unshifted[0, 0] * 0.9)
down = sp.ndimage.interpolation.shift(unshifted, (-5, 0), cval=unshifted[0, -1] * 0.8)
normal = copy.deepcopy(unshifted)
reverse = copy.deepcopy(unshifted)
sz = len(unshifted) - 1
normal[np.triu_indices(sz)] = up[np.triu_indices(sz)]
reverse[np.triu_indices(sz)] = down[np.triu_indices(sz)]

Done!
The results are shown in the figure below.

Left: unfaulted impedance model; center, model with normal fault; right, model with reverse fault.

Making many synthetic seismic models in Python

In this short post I show how to adapt Agile Scientific’s Python tutorial x lines of code, Wedge model to make 100 synthetic models in one shot: 10 impedance models times 10 wavelets, each with a vertical fault and added random noise.

You can download the notebook with the full code from GitHub. N.B.: the code is optimized for Python 2.7.

I begin by making a 6-layer model, shown in Figure 1:

import numpy as np
import copy
import scipy as sp
import scipy.ndimage
import bruges  # Agile's library, used below for the Ricker wavelets

model = np.zeros((50, 49), dtype=np.int)
model[8:16, :] = 1
model[16:24, :] = 2
model[24:32, :] = 3
model[32:40, :] = 4
model[40:, :] = 5

Figure 1. Initial 6-layer model

next I make some Vp-rho pairs (rock 0, rock 1, … , rock5):

rocks = np.array([[2700, 2750],  # Vp, rho
                  [2400, 2450],
                  [2600, 2650],
                  [2400, 2450],
                  [2800, 3000],
                  [3100, 3200]])

and then create 10 slightly different variations of the Vp-rho pairs, one of which is shown in Figure 2:

rnd = np.random.rand(10, 6, 2) * 0.2
manyrocks = np.array([rocks + rocks*rn for rn in rnd], dtype=np.int)
earth = np.array([mr[model] for mr in manyrocks])  # index each rock table with the layer model: shape (10, 50, 49, 2)

Figure 2. A Vp-rho pair (earth model)

at which point I can combine Vp-rho pairs to make 10 impedance models, then insert a vertical fault with:

impedances = [np.apply_along_axis(np.product, -1, e).astype(float) for e in earth]  # Python 2
faulted = copy.deepcopy(impedances)
for i, r in enumerate(faulted):
    temp = np.array(r)
    # shift the left part of the model (first 24 traces) down by 4 samples
    rolled = np.roll(np.array(r[:, :24]), 4, axis=0)
    temp[:, :24] = rolled
    faulted[i] = temp

Figure 3. Four faulted impedance models.

next I calculate reflection coefficients (Figure 4) and convolve them with a list of 10 Ricker wavelets (generated using Agile’s Bruges) to make synthetic seismic models, shown in Figure 5.

rc = [(flt[1:, :] - flt[:-1, :]) / (flt[1:, :] + flt[:-1, :]) for flt in faulted]
ws = [bruges.filters.ricker(duration=0.098, dt=0.002, f=fr)
      for fr in [35, 40, 45, 50, 55, 60, 65, 70, 75, 80]]
synth = np.array([np.apply_along_axis(lambda t: np.convolve(t, w, mode='same'),
                                      axis=0, arr=r) for r in rc for w in ws])

Figure 4. Four reflection coefficients series.

Figure 5. Four synthetic seismic models with vertical fault.

The last bit is the addition of noise; the result is shown in Figure 6:

blurred = sp.ndimage.gaussian_filter(synth, sigma=1.1)
noisy = blurred + 0.5 * blurred.std() * np.random.random(blurred.shape)

Figure 6. Four synthetic seismic models with vertical fault and noise.

Done!

The notebook with the full code is on GitHub, let me know if you find this useful or if you come up with more modeling ideas.

 

UPDATE, April 22, 2019.

I received an email from a reader, Will, asking some clarification about this article and the making of many impedance models.  I’m re-posting here what I wrote back.

I think the key to understanding this is how we multiply velocity by density in each of the possible earth models.

Looking at the notebook, the earth model array has shape:
print (np.shape(earth))
>>> (10, 50, 49, 2)
with the last axis having dimension 2: one Vp and one Rho, so in summary 10 models of size 50×49, each with a Vp and a Rho.
So with this other block of code:
impedances = [np.apply_along_axis(np.product, 
              -1, e).astype(float) for e in earth]

you use numpy.apply_along_axis to multiply Vp by Rho along the last dimension, -1, using numpy.product, and the list comprehension [... for e in earth] to do it for all the models in the array earth.
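
As an aside, the same Vp times Rho product can be obtained with plain NumPy broadcasting; this is just an equivalent sketch, not the code in the notebook:

# equivalent, fully vectorized alternative (a sketch, not the notebook's code):
# multiply Vp (index 0) by Rho (index 1) along the last axis for all 10 models at once
impedances_alt = (earth[..., 0] * earth[..., 1]).astype(float)
print(np.shape(impedances_alt))
>>> (10, 50, 49)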

 

Computer vision in geoscience: recover seismic data from images – part 1

As anticipated in the introductory post of this short series I am going to demonstrate how to automatically detect where a seismic section is located in an image (be it a picture taken from your wall, or a screen capture from a research paper), rectify any distortions that might be present, and remove all sorts of annotations and trivia around and inside the section.

You can download from GitHub all the tools for the automated workflow (including both part 1 and part 2, and some of the optional features outlined in the introduction) in the module mycarta.py, as well as an example Jupyter Notebook showing how to run it.

In this part one I will be focusing on the image preparation and enhancement, and on the automatic detection of the seismic section (all done using functions from numpy, scipy, and scikit-image). In order to do that, first I convert the input image (Figure 1) containing the seismic section to grayscale, and then enhance it by increasing the image contrast (Figure 2).

Figure 1 – input image

 

Figure 2 – grayscale image

All it takes to do that is three lines of code (plus the imports), as follows:

import numpy
import skimage.color
from skimage import exposure

gry = skimage.color.rgb2gray(img)
p2, p95 = numpy.percentile(gry, (2, 95))
rescale = exposure.rescale_intensity(gry, in_range=(p2, p95))

For a good visual intuition of what actually is happening during the contrast stretching, check my post sketch2model – sketch image enhancements: in there  I show intensity profiles taken across the same image before and after the process.

Finding the seismic section in this image involves four steps:

  1. converting the grayscale image to binary with a threshold (in this example a global threshold with the Otsu method)
  2. finding and retaining only the largest object in the binary image (heuristically assumed to be the seismic section)
  3. filling its holes
  4. applying morphological operations to remove minutiae (tick marks and labels)

Below I list the code, and show the results.

from skimage.filters import threshold_otsu

global_thresh = threshold_otsu(rescale)
binary_global = rescale < global_thresh

Figure 3 – binary image

import scipy.ndimage
from skimage.morphology import remove_small_objects, opening, disk  # opening and disk are used further below

# (1) label all white objects (the ones in the binary image).
# scipy.ndimage.label actually labels 0s (the background) as 0 and then
# every non-connected, nonzero object as 1, 2, ... n.
label_objects, nb_labels = scipy.ndimage.label(binary_global)

# (2) calculate every labeled object's binary size (including that
# of the background)
sizes = numpy.bincount(label_objects.ravel())

# (3) set the size of the background to 0 so that if it happened to be
# larger than the largest white object it would not matter
sizes[0] = 0

# (4) keep only the largest object
binary_objects = remove_small_objects(binary_global, max(sizes))

Figure 4 – isolated seismic section

# Remove holes (black regions inside white object)
binary_holes = scipy.ndimage.morphology.binary_fill_holes(binary_objects)

Figure 5 – holes removed

enhanced = opening(binary_holes, disk(7))

Figure 6 – removed residual tick marks and labels

That’s it!!!


In the next post, we will use this polygonal binary object both as a basis to capture the actual coloured seismic section from the input image and to derive a transformation to rectify it to a rectangle.
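
For a flavour of that, here is a minimal sketch (not the actual part 2 code) of how the binary object could be used to keep only the section in the colour image and crop it to its bounding box:

# a minimal sketch, not the actual part 2 code: mask the colour image with the
# binary object and crop to its bounding box
import numpy

masked = img * binary_holes[..., numpy.newaxis]  # keep colour pixels inside the object only
ys, xs = numpy.nonzero(binary_holes)
cropped = masked[ys.min():ys.max() + 1, xs.min():xs.max() + 1]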

What is acquisition footprint noise in seismic data?

Acquisition footprint is a noise field that appears on 3D seismic amplitude slices or horizons as an interwoven linear crosshatching parallel to the source line and receiver line directions. It is for the most part an expression of inadequate acquisition geometry, resulting in insufficient sampling of the seismic wave field (aliasing) and irregularities in the offset and azimuth distribution, particularly in the cross line direction.

Sometimes source-generated noise and incorrect processing (for example residual NMO due to erroneous velocity picks, incomplete migration, or other systematic errors) can accentuate the footprint.

This noise can interfere with the mapping of stratigraphic features and fault patterns, posing a challenge to seismic interpreters working in both exploration and development settings.

To demonstrate the relevance of the phenomenon I show below a gallery of examples from the literature of severe footprint in land data: an amplitude time slice (Figure 1a) and a vertical section (Figure 1b) from a Saudi Arabian case study, some seismic attributes (Figures 2, 3, 4, and 5), and also some modeled streamer data (Figure 6).


Figure 1. Amplitude time slice (top, time = 0.44 s) showing footprint in both inline and crossline direction, and amplitude section (bottom) highlighting the effect in the vertical direction. From Al-Bannagi et al. Copyrighted material.


Figure 2. Edge detection (Sobel filter) on the Penobscot 3D horizon (average time ~= 0.98 s) displaying N-S footprint. From Hall.


Figure 3. Edge detection (Sobel filter) on a shallow horizon (average time ~= 0.44 s)  from the F3 Netherlands 3D survey displaying E-W footprint.


Figure 4. Similarity attribute (top, time = 0.6 s), and most positive curvature (bottom, time = 1.3 s), both showing footprint. From Davogustto and Marfurt. Copyrighted material.


Figure 5. Amplitude time slice (top, time = 1.32 s) and the corresponding coherence section (bottom), both showing footprint. From Chopra and Larsen. Copyrighted material.


Figure 6. Acquisition footprint in the form of low fold striation due to dip streamer acquisition. From Long et al. Copyrighted material.

In my next post I will review (with more examples from the literature) some strategies available to either prevent or minimize the footprint with better acquisition parameters and modeling of the stack response. I will also discuss some ways the footprint can be attenuated after the acquisition of the data (with bin regularization/interpolation, dip-steered median filters, and kx-ky filters, from simple low-pass to more sophisticated ones) when the above strategies are not available, due to time/cost constraints or because the interpreter is working with legacy data.

In subsequent posts I will illustrate a workflow to model synthetic acquisition footprint using Python, and how to automatically remove it in the Fourier domain with frequency filters, and then how to remove it from real data.
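
As a taste of the Fourier-domain idea, below is a minimal sketch of a kx-ky low-pass filter applied to a single time slice; the array name and the cutoff are illustrative assumptions, not the workflow of the upcoming posts.

# a minimal sketch of a kx-ky low-pass filter on a 2D time slice;
# `time_slice` and `cutoff` are illustrative assumptions
import numpy as np

def kxky_lowpass(time_slice, cutoff=0.5):
    """Keep wavenumbers below `cutoff` (a fraction of Nyquist) in a 2D slice."""
    F = np.fft.fftshift(np.fft.fft2(time_slice))
    ky = np.fft.fftshift(np.fft.fftfreq(time_slice.shape[0]))
    kx = np.fft.fftshift(np.fft.fftfreq(time_slice.shape[1]))
    KX, KY = np.meshgrid(kx, ky)
    mask = np.sqrt(KX**2 + KY**2) <= cutoff * 0.5  # 0.5 cycles/sample = Nyquist
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))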

References

Al-Bannagi et al. 2005 – Acquisition footprint suppression via the truncated SVD technique: Case studies from Saudi Arabia: The Leading Edge, SEG, 24, 832– 834.

Chopra and Larsen,  2000 – Acquisition Footprint, Its Detection and Removal: CSEG Recorder, 25 (8).

Davogustto and Marfurt, 2011 – Footprint Suppression Applied to Legacy Seismic Data Volumes: 31st Annual GCSSEPM Foundation Bob F Perkins Research Conference 2011.

F3 Netherlands open access 3D:  info on SEG Wiki

Hall, 2014 –  Sobel filtering horizons (open source Jupyter Notebook on GitHub).

Long et al., 2004 – On the issue of strike or dip streamer shooting for 3D multi-streamer acquisition: Exploration Geophysics, 35(2), 105-110.

Penobscot open access 3D:  info on SEG Wiki

Mild or wild: robustness through morphological filtering

This guest post (first published here) is by Elwyn Galloway, author of Scibbatical on WordPress. It is the fourth in our series of collaborative articles about sketch2model, a project from the 2015 Calgary Geoscience Hackathon organized by Agile Geoscience. Happy reading.

 

We’re highlighting a key issue that came up in our project, and describing how we tackled it. Matteo’s post on Morphological Filtering does a great job of explaining what we implemented in sketch2model. I’ll build on his post to explain the why and how. In case you need a refresher on sketch2model, look back at sketch2model, Sketch Image Enhancement, Linking Edges with Geomorphological Filtering.

Morphological Filtering

As Matteo demonstrated by example, sketch2model’s ability to segment a sketch properly depends on the fidelity of a sketch.


An image of a whiteboard sketch (left) divides an area into three sections. Without morphological filtering, sketch2model segments the original image into two sections (identified as orange, purple) (centre). The algorithm correctly segments the area into three sections (orange, purple, green) when morphological filtering is applied (right).

To compensate for sketch imperfections, Matteo suggested morphological filtering on binarized images. Morphological filtering is a set of image modification tools which modify the shape of elements in an image. He suggested using the closing tool for our purposes. Have a look at Matteo’s Post for insight into this and other morphological filters.

One of the best aspects of this approach is that it is simple to apply. There is essentially one parameter to define: a structuring element. Since you’ve already read Matteo’s post, you recall his onion analogy explaining the morphological filtering processes of erosion and dilation – erosion is akin to removing an onion layer, dilation is adding a layer on. You’ll also recall that the size of the structuring element is the thickness of the layer added to, or removed from, the onion. Essentially, the parameterization of this process comes down to choosing the thickness of the onion layers.

Sketch2model uses dilation followed by erosion to fill gaps left between sketch lines (morphological dilation followed by erosion is closing). Matteo created this really great widget to illustrate closing using an interactive animation.


Matteo’s animation was created using this interactive Jupyter notebook. Closing connects the lines of the sketch.
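
In scikit-image terms, closing is literally a dilation followed by an erosion. Here is a minimal sketch, assuming a boolean image of the sketch lines called binary_sketch (this is not sketch2model’s actual code):

# a minimal sketch, assuming `binary_sketch` is a boolean image of the sketch lines;
# not sketch2model's actual code
from skimage.morphology import binary_dilation, binary_erosion, binary_closing, disk

selem = disk(5)  # the structuring element; its size here is arbitrary
closed = binary_erosion(binary_dilation(binary_sketch, selem), selem)
# equivalent one-liner:
closed_also = binary_closing(binary_sketch, selem)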

Some is Good, More is Better?

Matteo showed that closing fails if the structural element used is too small. So just make it really big, right? Well, there can be too much of a good thing. Compare what happens when you use an appropriately sized structuring element (mild) to the results from an excessively large structuring element (wild).


Comparing the results of mild and wild structuring elements: if the structuring element is too large, the filter compromises the quality of the reproduction.

Using a morphological filter with a structural element that is too small doesn’t fix the sketches, but using a structural element that is too large compromises the sketch too. We’re left to find an element that’s just right. Since one of the priorities for sketch2model was to robustly handle a variety of sketches with as little user input as possible — marker on whiteboard, pencil on paper, ink on napkin — we were motivated to find a way to do this without requiring the user to select the size of the structuring element.

Is there a universal solution? Consider this: a sketch captured in two images, each with their own resolution. In one image, the lines of the sketch appear to be approximately 16 pixels wide. The same lines appear to be 32 pixels wide in the other image. Since the size of the structuring element is defined in terms of pixels, it becomes apparent the ideal structuring element cannot be “one size fits all”.


High-resolution (left) versus low-resolution (right) image of the same portion of a sketch. Closing the gap between the lines would require a different size structuring element for each image: about 5 pixels for high-resolution or 1 pixel for low-resolution.

Thinking Like a Human

Still motivated to avoid user parameterization for the structuring element, we explored ways to make the algorithm intelligent enough to select an appropriate structuring element on its own. Ultimately, we had to realize a few things before we came up with something that would work:

  1. When capturing an image of a sketch, users compose very similar images (compose in the photographic sense of the word): the sketch is centered and nearly fills the captured image.
  2. The image of a sketch is not the same as a user’s perception of a sketch: a camera may record imperfections (gaps) in a sketch that a user does not perceive.
  3. The insignificance of camera resolution: a sketched feature captured at two different resolutions would have two different lengths (in pixels), but identical lengths when defined as a percentage of image size.

With these insights, we deduced that the gaps we were trying to fill with morphological filtering would be those that escaped the notice of the sketch artist.

Recognizing the importance of accurate sketch reproduction, our solution applies the smallest structuring element possible that will still fill any unintentional gaps in a sketch. It does so in a way that is adaptable.

A discussion about the definition of “unintentional gap” allowed us to create a mandate for the closing portion of our algorithm. Sketch2model should fill gaps the user doesn’t notice. The detail below the limit of the user’s perception should not affect the output model. A quick “literature” (i.e. Google) search revealed that a person’s visual perception is affected by many factors beyond the eye’s optic limits. Without a simple formula to define a limit, we did what any hacker would do: define it empirically, using a bunch of test images to tweak the structuring element of the closing filter so that it leaves the perceptible gaps and fills in the imperceptible ones. In the sketch2model algorithm, the size of the structuring element is defined as a fraction of the image size, so it was the fraction that we tuned empirically (a sketch of this idea follows below).
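
To illustrate the idea, here is a sketch of resolution-adaptive closing; the function name and the fraction value are illustrative assumptions, not sketch2model’s actual implementation:

# a sketch of resolution-adaptive closing; `binary_sketch` and the 1% fraction
# are illustrative assumptions, not sketch2model's actual values
from skimage.morphology import binary_closing, disk

def adaptive_closing(binary_sketch, fraction=0.01):
    """Close gaps with a structuring element sized as a fraction of the image."""
    radius = max(1, int(round(fraction * max(binary_sketch.shape))))
    return binary_closing(binary_sketch, disk(radius))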

Producing Usable Results

Implicit in the implementation is sketch2model’s expectation that the user’s sketch, and their image of the sketch, are crafted with some care. The expectations are reasonable: connect lines you’d like connected; get a clear image of your sketch. Like so much else in life, better input gives better results.


Input (left) and result (right) of sketch2model.

To produce an adaptable algorithm requiring as little user input as possible, the sketch2model team had to mix a little image processing wizardry with some non-technical insight.

Have you tried it? You can find it at sketch2model.com. Also on GitHub.


Previous posts in the sketch2model series: sketch2model, Sketch Image Enhancement, Linking Edges with Geomorphological Filtering.

Machine Learning in Geoscience with Scikit-learn. Part 2: inferential statistics and domain knowledge to select features for oil prediction

In the first post of this series I showed how to use Pandas, Seaborn, and Matplotlib to:

  • load a dataset
  • test, clean up, and summarize the data
  • start looking for relationships between variables using scatterplots and correlation coefficients

In this second post, I will expand on the latter point by introducing some tests and visualizations that will help highlight the possible criteria for choosing some variables, and dropping others. All in Python.

I will use a different dataset than that in the previous post. This one is from the paper “Many correlation coefficients, null hypotheses, and high value” (Lee Hunt, CSEG Recorder, December 2013).

The target to be predicted is oil production from a marine barrier sand. We have measured production (in tens of barrels per day) and 7 initially unknown predictors at 21 wells.

Hang on tight, and read along, because it will be a wild ride!

I will show how to:

1) automatically flag linearly correlated predictors, so we can decide which might be dropped. In the example below (a matrix of pair-wise correlation coefficients between variables) we see that X2 and X7, the second and third best individual predictors of production (shown in the bottom row), are also highly correlated to X1, the best overall predictor. A code sketch of this kind of flagging (and of the critical r test in the next item) follows this list.

2) automatically flag predictors that fail a critical r test

3) create a table to assess the probability that a certain correlation is spurious, in other words the probability of getting a correlation coefficient at least as high as the one we got from our sample purely by chance.
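
Here is a minimal sketch of the first two ideas, assuming the predictors live in a Pandas DataFrame called df; the function names and the 0.8 threshold are illustrative, not the notebook’s code:

# a minimal sketch, assuming a Pandas DataFrame `df` holding the predictors;
# the threshold, alpha, and function names are illustrative, not the notebook's code
import numpy as np
from scipy import stats

def flag_correlated(df, threshold=0.8):
    """Return pairs of predictors whose absolute Pearson r exceeds the threshold."""
    corr = df.corr()
    cols = corr.columns
    return [(cols[i], cols[j], corr.iloc[i, j])
            for i in range(len(cols)) for j in range(i + 1, len(cols))
            if abs(corr.iloc[i, j]) > threshold]

def critical_r(n, alpha=0.05):
    """Two-tailed critical Pearson r for n samples; weaker correlations fail the test."""
    t_crit = stats.t.ppf(1 - alpha / 2, df=n - 2)
    return t_crit / np.sqrt(n - 2 + t_crit**2)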

I do not recommend running these tests and applying the criteria blindly. Rather, I will suggest how to use them to learn more about the data and, in conjunction with domain knowledge about the problem at hand (in this case oil production), make more informed choices about which variables should, and which should not, be used.

And, of course, I will show how to make the prediction.

Have fun reading: get the Jupyter notebook on GitHub.

Machine learning in Planetary Science: compressing Pluto images with scikit-learn and PCA

In a previous post I showed some of the beautiful new images of Pluto from the New Horizons mission, coloured using the new Matplotlib perceptual colormaps:


More recently I was experimenting with Principal Component Analysis in scikit-learn, and one of the things I used it for was compression of some of these Pluto images. Below is an example of the first two components from the False Color Pluto image:


You can take a look at the Python code available on this Jupyter Notebook. There are of course better ways of compressing images, but this was a fun way to play around with PCA.
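
For a rough idea of what that looks like, here is a minimal sketch of PCA compression of a single 2D (grayscale) image array with scikit-learn; the array name and the 20-component choice are illustrative, and this is not the notebook’s code:

# a minimal sketch of PCA image compression, assuming a 2D grayscale array `image`;
# the 20-component choice is illustrative, not the notebook's code
from sklearn.decomposition import PCA

pca = PCA(n_components=20)
scores = pca.fit_transform(image)              # each row of pixels is treated as a sample
reconstructed = pca.inverse_transform(scores)  # image rebuilt from 20 components
print(pca.explained_variance_ratio_.sum())     # fraction of variance retained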

In a follow-up post I will use image registration and image processing techniques to reproduce from the raw channels NASA’s Psychedelic Pluto Image.