Use Agile Scientific’s Welly to load two wells with several geophysical logs
Use Pandas, Welly, and NumPy to: remove all logs except for compressional wave velocity (Vp), shear wave velocity (Vs), and density (RHOB); store the wells in individual DataFrames; make the sampling rate common to both wells; check for null values; convert units from imperial to metric; convert slowness to velocity; add a well name column
Split the DataFrame by well using unique values in the well name column
For each group/well, use the Backus average from Agile Scientific’s Bruges library to upscale all curves individually
Add the upscaled curves back to the DataFrame
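As a quick illustration of the slowness-to-velocity and imperial-to-metric steps (a minimal sketch; the DT values and variable names are invented for illustration):

```python
import numpy as np

# Hypothetical sonic slowness readings in microseconds per foot (a DT log)
dt_us_per_ft = np.array([100.0, 80.0, 50.0])

# Slowness to velocity: invert, then scale to feet per second
vp_ft_per_s = 1e6 / dt_us_per_ft

# Imperial to metric: 1 ft = 0.3048 m
vp_m_per_s = vp_ft_per_s * 0.3048   # → [3048., 3810., 6096.]
```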
Matt Hall (the organizer) told me during a breakfast chat on the first day of the sprint that this tutorial would be a very good one to have, since it is one of the most requested examples by neophyte users of the Bruges library; I was happy to oblige.
The code for the most important bit, the last two items in the above list, is included below:
```python
# Define parameters for the Backus filter
lb = 40   # Backus length in meters
dz = 1.0  # Log sampling interval in meters

# Do the upscaling work
# Add the upscaled curves back to the input DataFrame
wells_final = np.concatenate((wells.values, wells_bk.values), axis=1)
```
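For readers without the notebook at hand, the groupby pattern can be sketched as below. This is a hedged stand-in, not the notebook's code: the DataFrame, column names, and `backus_simple` are invented for illustration (the notebook calls Bruges’ `backus`, whose return signature may differ by version). The stand-in performs the standard Backus operations: harmonic averaging of the P-wave and shear moduli, arithmetic averaging of density.

```python
import numpy as np
import pandas as pd

def backus_simple(vp, vs, rho, lb=40.0, dz=1.0):
    """Stand-in for Bruges' Backus average: harmonic average of the
    elastic moduli, arithmetic average of density, over an lb-metre window."""
    n = int(lb / dz)
    win = np.ones(n) / n
    avg = lambda a: np.convolve(a, win, mode='same')
    m = rho * vp**2                      # P-wave modulus (lambda + 2*mu)
    mu = rho * vs**2                     # shear modulus
    rho_bk = avg(rho)
    vp_bk = np.sqrt(1.0 / avg(1.0 / m) / rho_bk)
    vs_bk = np.sqrt(1.0 / avg(1.0 / mu) / rho_bk)
    return pd.DataFrame({'Vp_bk': vp_bk, 'Vs_bk': vs_bk, 'RHOB_bk': rho_bk})

def upscale(g):
    """Backus-average one well's logs, keeping the original row index."""
    out = backus_simple(g['Vp'].to_numpy(), g['Vs'].to_numpy(),
                        g['RHOB'].to_numpy())
    out.index = g.index
    return out

# Two synthetic wells in one DataFrame, tagged by a well name column
rng = np.random.default_rng(0)
wells = pd.DataFrame({'well': ['A'] * 200 + ['B'] * 200,
                      'Vp': rng.normal(3000, 100, 400),
                      'Vs': rng.normal(1500, 50, 400),
                      'RHOB': rng.normal(2500, 50, 400)})

# Upscale each well independently, then add the curves back
wells_bk = (wells.groupby('well', group_keys=False)[['Vp', 'Vs', 'RHOB']]
                 .apply(upscale))
wells_final = pd.concat([wells, wells_bk], axis=1)
```

Using `pd.concat` here keeps the result a labelled DataFrame; the `np.concatenate` version above returns a plain NumPy array instead.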
And here is a plot comparing the raw and upscaled Vp and Vs logs for one of the wells:
Please check the notebook if you want to try the full example.
In the last post I wrote about what Volodymyr and I worked on during a good portion of day two of the sprint in October, and continued to work on upon our return to Calgary.
In addition to that, I also continued to work on a notebook example, started on day one, demonstrating how to upscale sonic and density logs from more than one well at a time using Bruges’ backus and Pandas’ groupby. This will be the focus of a future post.
The final thing I did was to write, and test an error_flag function for Bruges. The function calculates the difference between a predicted and a real curve; it flags errors in prediction if the difference between the curves exceeds a user-defined distance (in standard deviation units) from the mean difference. Another option available is to check whether the curves have opposite slopes (for example one increasing, the other decreasing within a specific interval). The result is a binary error log that can then be used to generate QC plots, to evaluate the performance of the prediction processes in a more (it is my hope) insightful way.
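In rough outline, the logic can be sketched like this (a simplified sketch of the idea, not the actual Bruges code; the parameter names are mine):

```python
import numpy as np

def error_flag(pred, actual, dev=1.0, method='difference'):
    """Binary error log: 1 where the prediction is flagged, 0 elsewhere."""
    pred = np.asarray(pred, dtype=float)
    actual = np.asarray(actual, dtype=float)
    flag = np.zeros(pred.size, dtype=int)
    if method == 'difference':
        # Flag samples whose difference deviates from the mean difference
        # by more than `dev` standard deviations
        diff = pred - actual
        flag[np.abs(diff - diff.mean()) > dev * diff.std()] = 1
    elif method == 'slope':
        # Flag samples where the two curves have opposite local slopes
        flag[np.gradient(pred) * np.gradient(actual) < 0] = 1
    return flag
```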
The inspiration for this stems from a discussion over coffee I had 5 or 6 years ago with Glenn Larson, a Geophysicist at Devon Energy, about the limitations of (and alternatives to) using a single global score when evaluating the result of seismic inversion against wireline well logs (the ground truth). I’d been holding that in the back of my mind for years, then finally got to it last Fall.
Summary statistics can also be calculated by stratigraphic unit, as demonstrated in the accompanying Jupyter Notebook.
In a recent post titled Unweaving the rainbow, Matt Hall described our joint attempt (partly successful) to create a Python tool to enable recovery of digital data from any pseudo-colour scientific image (and a seismic section in particular, like the one in Figure 1), without any prior knowledge of the colormap.
Figure 1. Test image: a photo of a distorted seismic section on my wall.
Please check our GitHub repository for the code and slides, and watch Matt’s talk (very insightful and very entertaining) from the 2017 Calgary Geoconvention below:
In the next two posts, coming up shortly, I will describe in greater detail my contribution to the project, which focused on developing a computer vision pipeline to automatically detect where the seismic section is located in the image, rectify any distortions that might be present, and remove all sorts of annotations and trivia around and inside the section. The full workflow is included below (with sections I-VI developed to date):
I – Image preparation, enhancement:
Convert to gray scale
Optional: smooth or blur to remove high frequency noise
II – Find seismic section:
Convert to binary with adaptive or other threshold method
Find and retain only largest object in binary image
Fill its holes
Apply opening and dilation to remove minutiae (tick marks and labels)
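Section II can be sketched with SciPy’s ndimage (the notebook likely uses OpenCV or scikit-image instead; the global threshold and 3×3 structuring elements here are illustrative choices):

```python
import numpy as np
from scipy import ndimage as ndi

def find_seismic_section(gray, thresh=None):
    """Binarize, keep the largest object, fill its holes, then
    open and dilate to drop tick marks and labels."""
    if thresh is None:
        thresh = gray.mean()                  # crude global threshold stand-in
    binary = gray > thresh
    labels, n = ndi.label(binary)             # connected components
    if n == 0:
        return binary
    sizes = ndi.sum(binary, labels, range(1, n + 1))
    largest = labels == (np.argmax(sizes) + 1)    # retain biggest object only
    filled = ndi.binary_fill_holes(largest)
    opened = ndi.binary_opening(filled, structure=np.ones((3, 3)))
    return ndi.binary_dilation(opened, structure=np.ones((3, 3)))
```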
III – Define rectification transformation
Detect contour of largest object found in (II). This should be the seismic section.
Approximate contour with polygon with enough tolerance to ensure it has 4 sides only
Sort polygon corners using angle from centroid
Define new rectangular image using the lengths of the longest long side and longest short side of the initial contour
Estimate and output transformation to warp polygon to rectangle
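Steps 3 and 4 of section III can be sketched as below (a hedged sketch, not the notebook’s code; the function names are mine):

```python
import numpy as np

def sort_corners(pts):
    """Sort the 4 polygon corners by angle around their centroid."""
    pts = np.asarray(pts, dtype=float)
    c = pts.mean(axis=0)
    ang = np.arctan2(pts[:, 1] - c[1], pts[:, 0] - c[0])
    return pts[np.argsort(ang)]

def target_rectangle(corners):
    """Destination rectangle sized by the longest of each pair of
    opposite sides of the sorted corners."""
    sides = np.linalg.norm(np.roll(corners, -1, axis=0) - corners, axis=1)
    w = max(sides[0], sides[2])   # opposite "long" sides
    h = max(sides[1], sides[3])   # opposite "short" sides
    return np.array([[0, 0], [w, 0], [w, h], [0, h]])
```

The corner order and target rectangle are what a warp estimator (e.g. a perspective transform) needs as its source and destination point sets.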
IV – Warp using transformation
V – Blanking annotations inside seismic section (if rectangular):
Start with output of (IV)
Pre-process and apply Canny filter
Find contours in the Canny edge image smaller than input size
Sort contours (by shape and angular relationships or diagonal lengths)
Loop over contours:
If approximation has 4 points AND the 4 semi-diagonals are of same length: fill contour and add to mask
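The semi-diagonal check in that loop can be sketched as follows (the tolerance value is an illustrative choice, not the notebook’s):

```python
import numpy as np

def is_rectangle(approx, tol=0.1):
    """True if the approximation has 4 points whose semi-diagonals
    (corner-to-centroid distances) are nearly equal."""
    approx = np.asarray(approx, dtype=float)
    if len(approx) != 4:
        return False
    d = np.linalg.norm(approx - approx.mean(axis=0), axis=1)
    return d.std() / d.mean() < tol   # relative spread of semi-diagonals
```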
VI – Use mask to remove text inside rectangle in the input and blank (NaN) the whole rectangle.
VII – Optional: tools to remove arrows and circles/ellipses:
For arrows – from the contours found in (IV), keep ones with 7 sides and low convexity (concave), or alternatively use Harris corner detection and count 7 corners, or template matching
For ellipses – template matching or regionprops
VIII – Optional FFT filters to remove timing lines and vertical lines
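Section VIII can be sketched with NumPy’s FFT: a vertical line is constant along y and varies along x, so its spectral energy sits on the ky = 0 row of the 2-D FFT (a hedged sketch; horizontal timing lines are handled analogously by zeroing the kx = 0 column):

```python
import numpy as np

def remove_vertical_lines(img):
    """Zero the FFT energy that is constant along y (rows) but varies
    along x -- the signature of vertical stripes -- keeping the DC term."""
    F = np.fft.fft2(img)
    F[0, 1:] = 0          # ky = 0 row, excluding the image mean at (0, 0)
    return np.real(np.fft.ifft2(F))
```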
The first post focuses on the image pre-processing and enhancement, and the detection of the seismic line (sections I and II, in green); the second one deals with the rectification of the seismic (sections IV to V, in blue). They are not meant as full tutorials, rather as a pictorial road map to (partial) success, but key Python code snippets will be included and discussed.
I started this blog in 2012; in these 3 1/2 years it has been a wonderful way to channel some of my interests in image processing, geophysics, and visualization (in particular colour), and more recently Python.
Starting in 2016 I would like to concentrate my efforts on building useful (hopefully) and fun (for me at least) open source (this one is for sure) tools in Python. This is going to be my modus operandi:
do some work, get to some milestones
upload the relevant IPython/Jupyter Notebooks to GitHub
The implementation of the finished app involves using morphological filtering and other image processing methods to enhance the sketch image and convert it into a model with discrete bodies, then pass it on to Agile’s modelr.io to create a synthetic.
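The “discrete bodies” idea can be illustrated with connected-component labelling (a sketch using scipy.ndimage; the actual app may use different tooling):

```python
import numpy as np
from scipy import ndimage as ndi

# Hypothetical cleaned-up sketch image: two separate geological bodies
sketch = np.zeros((20, 20), dtype=bool)
sketch[2:8, 2:14] = True        # upper body
sketch[12:18, 4:16] = True      # lower body

# Each connected region gets its own integer id, ready to be assigned
# rock properties before being passed on to modelr.io
model, n_bodies = ndi.label(sketch)
```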