This is a short and sweet follow-up to yesterday’s post Making many synthetic seismic models in Python, in which I want to show how to add oblique faults to an impedance model, as opposed to a vertical one.
In the pseudo-code below, with the first line of code I select one of the impedance models from the many I created, then in lines 2 and 3, respectively, make two new versions, one shifted up 5 samples, one shifted down 5 samples. The impedance values for the extra volume added – at the bottom in the first case, the top in the second – are derived from the unshifted impedance model.
Next, I make two copies of the original, call them normal and reverse, and then replace the upper triangle with the upper triangle of the shifted down and shifted up versions, respectively.
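The procedure described above can be sketched as follows. This is a minimal sketch with a toy 10×10 impedance array; padding the extra rows with edge values from the unshifted model is my reading of how the added volume is "derived from the unshifted impedance model":

```python
import numpy as np

# One toy 2D impedance model (depth x trace); values are stand-ins
imp = np.arange(20.0).reshape(10, 2).repeat(5, axis=1)  # shape (10, 10)

shift = 5
# Shift up and down by 5 samples, padding the new rows with edge values
# taken from the unshifted model
up = np.vstack([imp[shift:], np.tile(imp[-1], (shift, 1))])
down = np.vstack([np.tile(imp[0], (shift, 1)), imp[:-shift]])

# Replace the upper triangle with that of the shifted-down and shifted-up
# versions, respectively, to create the oblique fault
iu = np.triu_indices_from(imp)
normal = imp.copy()
normal[iu] = down[iu]
reverse = imp.copy()
reverse[iu] = up[iu]
```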
In this short post I show how to take Agile Scientific’s Python tutorial x lines of code, Wedge model and adapt it to make 100 synthetic models in one shot: X impedance models times X wavelets times X random noise fields (with 1 vertical fault).
and then create 10 slightly different variations of the Vp-rho pairs, one of which is shown in Figure 2:
rnd = np.random.rand(10, 6, 2) * 0.2
manyrocks = np.array([rocks + rocks * rn for rn in rnd], dtype=int)
earth = manyrocks[model]
Figure 2. A Vp-rho pair (earth model)
at which point I can combine Vp-rho pairs to make 10 impedance models, then insert a vertical fault with:
impedances = [np.apply_along_axis(np.prod, -1, e).astype(float) for e in earth]
faulted = copy.deepcopy(impedances)
for i, r in enumerate(faulted):
    temp = np.array(r)
    # shift the first 24 traces down by 4 samples to create the fault
    rolled = np.roll(np.array(r[:, :24]), 4, axis=0)
    temp[:, :24] = rolled
    faulted[i] = temp
Figure 3. Four faulted impedance models.
Next, I calculate reflection coefficients (Figure 4) and convolve them with a list of 10 Ricker wavelets (generated using Agile’s Bruges library) to make synthetic seismic models, shown in Figure 5.
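As a sketch of this step, for a single model (the Ricker function below is a hand-rolled stand-in for the one in Bruges; frequencies and array sizes are illustrative):

```python
import numpy as np

def ricker(f, dt=0.004, length=0.128):
    """Ricker wavelet of peak frequency f; a stand-in for bruges' ricker."""
    t = np.arange(-length / 2, length / 2, dt)
    a = (np.pi * f * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

# One toy impedance model (depth x trace); values are illustrative
imp = 5e6 + 1e6 * np.random.rand(50, 49)

# Reflection coefficients along the depth axis
rc = (imp[1:] - imp[:-1]) / (imp[1:] + imp[:-1])

# Convolve every trace with a 25 Hz Ricker wavelet
w = ricker(25)
synth = np.apply_along_axis(lambda tr: np.convolve(tr, w, mode='same'), 0, rc)
```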
I received an email from a reader, Will, asking some clarification about this article and the making of many impedance models. I’m re-posting here what I wrote back.
I think the key to understanding this is how we multiply velocity by density in each of the possible earth models.
Looking at the notebook, the earth model array has shape:
print(np.shape(earth))
>>> (10, 50, 49, 2)
with the last axis having dimension 2: one Vp and one Rho, so in summary 10 models of size 50×49, each with a Vp and a Rho.
You use numpy.apply_along_axis to multiply Vp by Rho along the last dimension, -1, using numpy.product, and the list comprehension [... for e in earth] to do it for all models in the array earth.
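To make this concrete, here is a small sketch (with random stand-in values) showing that the apply_along_axis call is equivalent to plain elementwise multiplication of the two last-axis slices, which is also much faster:

```python
import numpy as np

earth = np.random.rand(10, 50, 49, 2)  # 10 models, Vp and rho on the last axis

# multiply Vp by rho along the last axis, for every model in earth
impedances = [np.apply_along_axis(np.prod, -1, e) for e in earth]

# equivalent (and much faster): plain elementwise multiplication
impedances_fast = earth[..., 0] * earth[..., 1]
assert np.allclose(np.array(impedances), impedances_fast)
```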
As anticipated in the introductory post of this short series I am going to demonstrate how to automatically detect where a seismic section is located in an image (be it a picture taken from your wall, or a screen capture from a research paper), rectify any distortions that might be present, and remove all sorts of annotations and trivia around and inside the section.
You can download from GitHub all the tools for the automated workflow (including both part 1 and part 2, and some of the optional features outlined in the introduction) in the module mycarta.py, as well as an example Jupyter Notebook showing how to run it.
In this first part I focus on image preparation and enhancement, and on the automatic detection of the seismic section (all done using functions from numpy, scipy, and scikit-image). First, I convert the input image (Figure 1), which contains the seismic section, to grayscale, then enhance it by increasing the image contrast (Figure 2).
Figure 1 – input image
Figure 2 – grayscale image
All it takes to do that is three lines of code.
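In the module those three lines are presumably scikit-image calls (color.rgb2gray followed by a percentile-based exposure.rescale_intensity); here is an equivalent pure-NumPy sketch, with a random array standing in for the input photo:

```python
import numpy as np

img = np.random.rand(100, 120, 3)  # stand-in for the input photo

# (1) convert to grayscale (same luma weights as skimage's rgb2gray)
gray = img @ np.array([0.2125, 0.7154, 0.0721])
# (2)-(3) stretch the contrast between the 2nd and 98th percentiles
p2, p98 = np.percentile(gray, (2, 98))
enhanced = np.clip((gray - p2) / (p98 - p2), 0, 1)
```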
For a good visual intuition of what actually is happening during the contrast stretching, check my post sketch2model – sketch image enhancements: in there I show intensity profiles taken across the same image before and after the process.
Finding the seismic section in this image involves four steps:
converting the grayscale image to binary with a threshold (in this example a global threshold with the Otsu method)
finding and retaining only the largest object in the binary image (heuristically assumed to be the seismic section)
filling its holes
applying morphological operations to remove minutiae (tick marks and labels)
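The first step, the global Otsu threshold, can be sketched like this. The hand-rolled function below computes the same quantity as skimage.filters.threshold_otsu, and the bimodal test "image" is illustrative:

```python
import numpy as np

def otsu_threshold(image, nbins=256):
    """Global Otsu threshold; computes the same quantity as
    skimage.filters.threshold_otsu."""
    counts, bin_edges = np.histogram(image.ravel(), bins=nbins)
    centers = (bin_edges[:-1] + bin_edges[1:]) / 2
    w1 = np.cumsum(counts)                       # pixels at or below each bin
    w2 = np.cumsum(counts[::-1])[::-1]           # pixels at or above each bin
    m1 = np.cumsum(counts * centers) / w1        # running mean from the left
    m2 = (np.cumsum((counts * centers)[::-1]) /
          np.cumsum(counts[::-1]))[::-1]         # running mean from the right
    # between-class variance for every possible split point
    var_between = w1[:-1] * w2[1:] * (m1[:-1] - m2[1:]) ** 2
    return centers[np.argmax(var_between)]

# A toy bimodal "image": dark background, bright section
np.random.seed(0)
img = np.concatenate([np.random.normal(0.2, 0.05, 5000),
                      np.random.normal(0.8, 0.05, 5000)]).reshape(100, 100)
t = otsu_threshold(img)
binary_global = img > t
```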
from skimage.morphology import remove_small_objects

# (1) label all white objects (the ones in the binary image).
# scipy.ndimage.label labels the background (0s) as 0 and then
# every non-connected, nonzero object as 1, 2, ..., n.
label_objects, nb_labels = scipy.ndimage.label(binary_global)
# (2) calculate every labeled object's size in pixels (including that
# of the background)
sizes = np.bincount(label_objects.ravel())
# (3) set the size of the background to 0 so that, if it happened to be
# larger than the largest white object, it would not matter
sizes[0] = 0
# (4) keep only the largest object
binary_objects = remove_small_objects(binary_global, max(sizes))
Figure 4 – isolated seismic section
# Remove holes (black regions inside the white object)
binary_holes = scipy.ndimage.binary_fill_holes(binary_objects)
Figure 5 – holes removed
from skimage.morphology import disk, opening

# morphological opening with a disk of radius 7
enhanced = opening(binary_holes, disk(7))
Figure 6 – removed residual tick marks and labels
That’s it!
In the next post, we will use this polygonal binary object both as a basis to capture the actual coloured seismic section from the input image and to derive a transformation to rectify it to a rectangle.
Acquisition footprint is a noise field that appears on 3D seismic amplitude slices or horizons as an interwoven linear crosshatching parallel to the source line and receiver line directions. It is for the most part an expression of inadequate acquisition geometry, resulting in insufficient sampling of the seismic wave field (aliasing) and irregularities in the offset and azimuth distribution, particularly in the cross line direction.
Sometimes source-generated noise and incorrect processing (for example residual NMO due to erroneous velocity picks, incomplete migration, or other systematic errors) can accentuate the footprint.
This noise can interfere with the mapping of stratigraphic features and fault patterns, posing a challenge to seismic interpreters working in both exploration and development settings.
To demonstrate the relevance of the phenomenon I show below a gallery of examples from the literature of severe footprint in land data: an amplitude time slice (Figure 1a) and a vertical section (Figure 1b) from a Saudi Arabian case study, some seismic attributes (Figures 2, 3, 4, and 5), and also some modeled streamer data (Figure 6).
Figure 1. Amplitude time slice (top, time = 0.44 s) showing footprint in both inline and crossline direction, and amplitude section (bottom) highlighting the effect in the vertical direction. From Al-Bannagi et al. Copyrighted material.
Figure 2. Edge detection (Sobel filter) on the Penobscot 3D horizon (average time ~= 0.98 s) displaying N-S footprint. From Hall.
Figure 3. Edge detection (Sobel filter) on a shallow horizon (average time ~= 0.44 s) from the F3 Netherlands 3D survey displaying E-W footprint.
Figure 4. Similarity attribute (top , time = 0.6 s), and most positive curvature (bottom, time = 1.3 s), both showing footprint. From Davogustto and Marfurt. Copyrighted material.
Figure 5. Amplitude time slice (top, time = 1.32 s) the corresponding coherence section (bottom) both showing footprint. From Chopra and Larsen. Copyrighted material.
Figure 6. Acquisition footprint in the form of low fold striation due to dip streamer acquisition. From Long et al. Copyrighted material.
In my next post I will review (with more examples from the literature) some strategies available to either prevent or minimize the footprint with better acquisition parameters and modeling of the stack response. I will also discuss some ways the footprint can be attenuated after the acquisition of the data (with bin regularization/interpolation, dip-steered median filters, and kx-ky filters, from simple low-pass to more sophisticated ones) when the above-mentioned strategies are not available, due to time or cost constraints, or because the interpreter is working with legacy data.
In subsequent posts I will illustrate a workflow to model synthetic acquisition footprint using Python, and how to automatically remove it in the Fourier domain with frequency filters, and then how to remove it from real data.
References
Al-Bannagi et al. 2005 – Acquisition footprint suppression via the truncated SVD technique: Case studies from Saudi Arabia: The Leading Edge, SEG, 24, 832– 834.
Chopra and Larsen, 2000 – Acquisition Footprint, Its Detection and Removal: CSEG Recorder, 25 (8).
Davogustto and Marfurt, 2011 – Footprint Suppression Applied to Legacy Seismic Data Volumes: 31st Annual GCSSEPM Foundation Bob F. Perkins Research Conference 2011.
In a recent post titled Unweaving the rainbow, Matt Hall described our joint attempt (partly successful) to create a Python tool to enable recovery of digital data from any pseudo-colour scientific image (and a seismic section in particular, like the one in Figure 1), without any prior knowledge of the colormap.
Figure 1. Test image: a photo of a distorted seismic section on my wall.
Please check our GitHub repository for the code and slides, and watch Matt’s talk (very insightful and very entertaining) from the 2017 Calgary Geoconvention below:
In the next two posts, coming up shortly, I will describe in greater detail my contribution to the project, which focused on developing a computer vision pipeline to automatically detect where the seismic section is located in the image, rectify any distortions that might be present, and remove all sorts of annotations and trivia around and inside the section. The full workflow is included below (with sections I-VI developed to date):
I – Image preparation, enhancement:
Convert to gray scale
Optional: smooth or blur to remove high frequency noise
Enhance contrast
II – Find seismic section:
Convert to binary with adaptive or other threshold method
Find and retain only largest object in binary image
Fill its holes
Apply opening and dilation to remove minutiae (tick marks and labels)
III – Define rectification transformation
Detect contour of largest object found in (II). This should be the seismic section.
Approximate contour with polygon with enough tolerance to ensure it has 4 sides only
Sort polygon corners using angle from centroid
Define new rectangular image using length of largest long and largest short sides of initial contour
Estimate and output transformation to warp polygon to rectangle
IV – Warp using transformation
V – Blanking annotations inside seismic section (if rectangular):
Start with output of (IV)
Pre-process and apply canny filter
Find contours in the canny filter smaller than input size
Sort contours (by shape and angular relationships or diagonal lengths)
Loop over contours:
Approximate contour
If approximation has 4 points AND the 4 semi-diagonals are of same length: fill contour and add to mask
VI – Use mask to remove text inside rectangle in the input and blank (NaN) the whole rectangle.
VII – Optional: tools to remove arrows and circles/ellipses:
For arrows – from the contours in (IV), find ones with 7 sides and low convexity (concave); alternatively, use Harris corner detection and count 7 corners, or template matching
For ellipses – template matching or regionprops
VIII – Optional FFT filters to remove timing lines and vertical lines
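Sections III and IV can be sketched end-to-end like this, using scikit-image. A toy tilted parallelogram stands in for the detected section, and the corner selection and sorting are simplified relative to the full workflow:

```python
import numpy as np
from skimage import measure, transform

# A toy binary mask: a tilted parallelogram standing in for the section
rr, cc = np.meshgrid(np.arange(200), np.arange(200), indexing='ij')
mask = ((rr > 40) & (rr < 160) &
        (cc > 30 + 0.1 * rr) & (cc < 150 + 0.1 * rr)).astype(float)

# III: contour of the object, approximated by a polygon ((row, col) points)
contour = max(measure.find_contours(mask, 0.5), key=len)
poly = measure.approximate_polygon(contour, tolerance=10)[:-1]

# keep the 4 points farthest from the centroid (the corners), then
# sort them by angle from the centroid
c = poly.mean(axis=0)
corners = poly[np.argsort(np.linalg.norm(poly - c, axis=1))[-4:]]
corners = corners[np.argsort(np.arctan2(*(corners - corners.mean(axis=0)).T))]

# III/IV: projective transform mapping a target rectangle onto the corners
# ((col, row) order for skimage transforms), then warp to rectify
h, w = 120, 120
dst = np.array([[0, 0], [w, 0], [w, h], [0, h]], dtype=float)
tform = transform.ProjectiveTransform()
tform.estimate(dst, corners[:, ::-1])
rectified = transform.warp(mask, tform, output_shape=(h, w))
```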
You can download from GitHub all the tools for the automated workflow (parts I-VI) in the module mycarta.py, as well as an example Jupyter Notebook showing how to run it.
The first post focuses on the image pre-processing and enhancement, and the detection of the seismic line (sections I and II, in green); the second one deals with the rectification of the seismic section (sections IV to V, in blue). They are not meant as full tutorials, rather as a pictorial road map to (partial) success, but key Python code snippets will be included and discussed.
The idea behind this series of articles is to show how to predict P-wave velocity, as measured by a geophysical well log (the sonic), from a suite of other logs: density, gamma ray, and neutron, and also depth, using Machine Learning.
I will explore different Machine Learning methods from the scikit-learn Python library and compare their performances.
To whet your appetite, here’s an example of P-wave velocity, Vp, predicted using a cross-validated linear model, which will be the benchmark for the performance of other models, such as SVM and Random Forest:
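As a sketch of such a benchmark (with synthetic stand-in logs, since the real data lives in the notebooks; names and coefficients are made up):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n = 500
# Stand-in logs: density, gamma ray, neutron, depth (all illustrative)
X = rng.normal(size=(n, 4))
# Synthetic Vp loosely tied to the logs, plus noise
vp = (3000 + 800 * X[:, 0] - 200 * X[:, 1]
      - 300 * X[:, 2] + 50 * X[:, 3]
      + rng.normal(scale=150, size=n))

# Out-of-fold predictions from a 5-fold cross-validated linear model
vp_pred = cross_val_predict(LinearRegression(), X, vp, cv=5)
score = r2_score(vp, vp_pred)
```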
In the first notebook, which is already available on GitHub here, I show how to use the Pandas and Seaborn Python libraries to import the data, check it, clean it up, and visualize it to explore relationships between the variables. For example, shown below is a heatmap with the pairwise Spearman correlation coefficient between the variables (logs):
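The heatmap boils down to a couple of pandas/seaborn calls; here is a sketch with stand-in logs (column names and depth trends are made up):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
depth = np.arange(300.0, 600.0)
# Stand-in logs with made-up depth trends
df = pd.DataFrame({
    'DEPTH': depth,
    'GR': rng.normal(80, 15, depth.size),
    'RHOB': 2.2 + 0.001 * depth + rng.normal(0, 0.05, depth.size),
    'NPHI': 0.4 - 0.0003 * depth + rng.normal(0, 0.02, depth.size),
})

# Pairwise Spearman correlation between the variables (logs)
corr = df.corr(method='spearman')

# To display it as a heatmap with seaborn:
# import seaborn as sns
# sns.heatmap(corr, annot=True, vmin=-1, vmax=1, cmap='RdBu_r')
```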
Stay tuned for the next post / notebook!
PS: I am very excited by the kick-off of the Geophysical Tutorial (The Leading Edge) Machine Learning Contest 2016. Check it out here!
As of yesterday, I no longer have a full-time day job.
I am looking for opportunities.
I’d love to hear about projects in geophysics, computational geoscience, data science, machine learning. Feel free to get in touch with me at matteo@mycarta.ca.