Working with ZMAP+ grid files in Python

I just added to the awesome Geoscience I/O library a notebook showing how to use Python to read an XYZ grid file in legacy ZMAP+ format into a numpy array, display it with matplotlib, and write it out to a plain ASCII XYZ grid text file. I will use the DEM data for Mount St. Helens BEFORE the 1980 eruption, taken from Evan Bianco's excellent notebook, which I converted to ZMAP+ (that part I will demonstrate in a future post / notebook).

You can get the full notebook on GitHub.

ZMAP+ files have a header section that looks something like this (this one comes from the DEM file I am using):

! -----------------------------------------
! ZMap ASCII grid file
!
! -----------------------------------------
@unnamed HEADER, GRID, 5
15, 1E+030, , 7, 1
468, 327, 0, 326, 0, 467
0. , 0. , 0.
@

where the rows starting with a ! symbol are comments, and the rows in between the two @ symbols store the information about the grid. In particular, the first two entries in the third line of this section are nrows and ncols, the number of rows and columns, respectively.

The header section is followed by a data section of ncols blocks, each containing nrows elements arranged in rows of 5 (the 5 is given by the last number in the first row with the @ symbol).

OK, let’s get to it.

I break the task of reading into two parts: the first part is to gather from the header the information I will be using. From the section between the two @s I want:

  • the second entry in its second line – the null value
  • the first two entries in its third line – the number of rows and columns, as mentioned
  • the last four entries in its third line – the xmin, xmax, ymin, ymax, respectively

I am making the assumption that the number of commented rows may not be the same in all ZMAP+ files, hence I abandoned my first approach which was based on using getline with specific line numbers. Instead, I used:

  • an if statement to check for the line beginning with @ (usually followed by a grid name) but not @\n; then grab the next two lines
  • another if statement to check whether a line starts with !
  • a counter is increased by one for each line meeting either of the above conditions. This is used later on to skip all lines that do not have data
import numpy as np

filename = '../data/Helens_before_XYZ_ZMAP_PLUS.dat'
count = 0   # counts the header lines, so they can be skipped later
hdr = []
with open(filename) as h:
    for line in h:
        if line.startswith('!'):       # commented lines
            count += 1
        if line.startswith('@') and not line.startswith('@\n'):  # opening @ line
            count += 1
            hdr.append(next(h)); count += 1   # grab the next two lines ...
            hdr.append(next(h)); count += 1   # ... with the info I am after
            count += 1                        # account for the remaining header line ('0. , 0. , 0.')

which gives:

hdr
>>> ['15, 1E+030, , 7, 1\n', '468, 327, 0, 326, 0, 467\n']

Next I use the line hdr = [x.strip('\n') for x in ','.join(hdr).split(',')] to remove the newlines and split the two strings into individual entries, and the next if / else statement to deal with the null value, if present:

if hdr[1] == ' ': null = np.nan
else: null = float(hdr[1])

Finally, I get the remainder of the information with:

xmin, xmax, ymin, ymax = [ float(x) for x in hdr[7:11]]
grid_rows, grid_cols = [int(y) for y in hdr[5:7]]

Which I can check with:

print(null, xmin, xmax, ymin, ymax, grid_rows, grid_cols)
>>> 1e+30 0.0 326.0 0.0 467.0 468 327

That’s it! I think there is room to come back and write something more efficient later on, but for now I am satisfied this is robust enough to handle headers.

The rest, reading in the data, is all downhill: a first list comprehension skips the header lines (using the counter from above), and a second one reads all the elements into a single array:

with open(filename) as f:
    [next(f) for _ in range(count)]   # skip the header lines counted earlier
    for line in f:                    # this consumes the closing @ line, then ...
        # ... the inner comprehension reads every remaining value into one array
        grid = np.asarray([word for line in f for word in line.split()]).astype(float)
        grid = grid.reshape(grid_cols, grid_rows).T
grid[grid == null] = np.nan
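
For completeness, here is a minimal sketch of the remaining two steps mentioned at the top, displaying the grid with matplotlib and writing it out as a plain ASCII XYZ text file. This is not the notebook's exact code: the output file name is a placeholder, and I am assuming the grid rows run from ymax down to ymin.

import matplotlib.pyplot as plt

# quick look at the DEM
plt.imshow(grid, cmap='terrain')
plt.colorbar()
plt.show()

# build the coordinate mesh from the header information, then stack x, y, z
x = np.linspace(xmin, xmax, grid_cols)
y = np.linspace(ymax, ymin, grid_rows)   # assumed row direction
X, Y = np.meshgrid(x, y)
xyz = np.column_stack((X.ravel(), Y.ravel(), grid.ravel()))
np.savetxt('Helens_before.xyz', xyz, fmt='%.2f', header='x y z', comments='')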

Done!!

Visual data exploration in Python – correlation, confidence, spuriousness

This weekend – it was long overdue –  I cleaned up a notebook prepared last year to accompany the talk “Data science tools for petroleum exploration and production“, which I gave with my friend Thomas Speidel at the Calgary 2018 CSEG/CSPG Geoconvention.

The Python notebook can be downloaded from GitHub as part of a full repository, which includes R code from Thomas, or run interactively with Binder.

In this post I want to highlight one aspect in particular: doing data exploration visually, but also quantitatively with inferential statistical tests.

Like all data scientists (professional, or in the making, and I ‘park’ myself for now in the latter bin), after data cleanup a scatter matrix is often the first thing I produce, to look for obvious pairwise relationships and trends between variables. And I really love the flexibility and looks of seaborn‘s PairGrid.

In the example below I use again data from Lee Hunt, Many correlation coefficients, null hypotheses, and high value, CSEG Recorder, December 2013; my followers will be familiar with this small toy dataset from reading Geoscience ML notebook 2 and Geoscience ML notebook 3.

The scatter matrix in Figure 1 includes bivariate scatter-plots in the upper triangle, contours in the lower triangle, and the shape of each variable's distribution on the diagonal.

import matplotlib.pyplot
import seaborn

matplotlib.pyplot.rcParams["axes.labelsize"] = 18
g = seaborn.PairGrid(data, diag_sharey=False)   # data is the DataFrame with the toy dataset
axes = g.axes
g.map_upper(matplotlib.pyplot.scatter, linewidths=1,
            edgecolor="w", s=90, alpha=0.5)
g.map_diag(seaborn.kdeplot, lw=4, legend=False)
g.map_lower(seaborn.kdeplot, cmap="Blues_d")
matplotlib.pyplot.show()

Figure 1. Scatter matrix from PairGrid.

It looks terrific. But, as I said, I’d like to show how to add some useful annotation to the individual scatterplots. These are the ones I typically focus on:

  • the Spearman correlation coefficient (CC)
  • the confidence interval (CI) around the correlation coefficient
  • the probability of spurious correlation (PS)

For the Spearman correlation coefficient I use scipy.stats.spearmanr, whereas for the confidence interval and the probability of spurious correlation I use my own functions, which I include below (following, respectively, Stan Brown’s  Stats without tears and Cynthia Kalkomey’s Potential risks when using seismic attributes as predictors of reservoir properties):

import numpy
import scipy.stats

def confInt(r, nwells):
    """95% confidence interval around r, via the Fisher z-transform."""
    z_crit = scipy.stats.norm.ppf(.975)
    std_Z = 1/numpy.sqrt(nwells-3)
    E = z_crit*std_Z
    Z_star = 0.5*(numpy.log((1+r)/(1.0000000000001-r)))   # tiny offset avoids a zero denominator at r=1
    ZCI_l = Z_star - E
    ZCI_u = Z_star + E
    RCI_l = (numpy.exp(2*ZCI_l)-1)/(numpy.exp(2*ZCI_l)+1)  # back-transform to r
    RCI_u = (numpy.exp(2*ZCI_u)-1)/(numpy.exp(2*ZCI_u)+1)
    return RCI_u, RCI_l

def P_spurious(r, nwells, nattributes):
    """Probability of at least one spurious correlation of this size, given nattributes tested."""
    t_of_r = r * numpy.sqrt((nwells-2)/(1-numpy.power(r,2)))   # t statistic for r
    p = scipy.stats.t.sf(numpy.abs(t_of_r), nwells-2)*2        # two-sided p-value
    ks = numpy.arange(1, nattributes+1, 1)
    return numpy.sum(p * numpy.power(1-p, ks-1))
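
As a quick sanity check, the two functions can be called directly; the numbers below are made up for illustration (21 wells, a correlation of 0.45, 9 attributes screened), not results from the dataset:

# hypothetical example values, just to exercise the two functions
upper, lower = confInt(0.45, 21)
print(upper, lower)                  # upper and lower bounds of the 95% CI
print(P_spurious(0.45, 21, 9))       # probability of spurious correlation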

I also need a function to calculate the critical r, that is, the value of the correlation coefficient above which one can rule out chance as an explanation for a relationship:

def r_crit(nwells, a):
    t = scipy.stats.t.isf(a, nwells-2) 
    r_crit = t/numpy.sqrt((nwells-2)+ numpy.power(t,2))
    return r_crit

where a is equal to alpha/2 (alpha being the level of significance, or the chance of being wrong that one accepts to live with), and nwells-2 is the number of degrees of freedom.

Finally, I need a utility function (adapted from this Stack Overflow answer) to calculate these quantities on the fly and annotate the individual scatterplots with the CC, CI, and PS.

# note: rc, the critical r computed further down, must exist before this
# function is defined, because it is used as a default argument
def corrfunc(x, y, rc=rc, **kws):
    r, p = scipy.stats.spearmanr(x, y)
    u, l = confInt(r, 21)        # 21 wells in this dataset
    if r > rc:
        rclr = 'g'
    else:
        rclr = 'm'
    if p > 0.05:
        pclr = 'm'
    else:
        pclr = 'g'
    ax = matplotlib.pyplot.gca()
    ax.annotate("CC = {:.2f}".format(r), xy=(.1, 1.25),
                xycoords=ax.transAxes, color=rclr, fontsize=14)
    ax.annotate("CI = [{:.2f} {:.2f}]".format(u, l), xy=(.1, 1.1),
                xycoords=ax.transAxes, color=rclr, fontsize=14)
    ax.annotate("PS = {:.3f}".format(p), xy=(.1, .95),
                xycoords=ax.transAxes, color=pclr, fontsize=14)

So, now I just calculate the critical r for the number of observations, 21 wells in this case:

rc = r_crit(21, 0.025)

and wrap it all up this way:

matplotlib.pyplot.rcParams["axes.labelsize"] = 18
g = seaborn.PairGrid(data, diag_sharey=False)
axes = g.axes
g.map_upper(matplotlib.pyplot.scatter,  linewidths=1, 
            edgecolor="w", s=90, alpha = 0.5)
g.map_upper(corrfunc)
g.map_diag(seaborn.kdeplot, lw = 4, legend=False)
g.map_lower(seaborn.kdeplot, cmap="Blues_d")
matplotlib.pyplot.show()

so that:

  • the correlation coefficient is coloured green if it is larger than the critical r, else coloured in purple
  • the confidence interval is coloured green if both lower and upper are larger than the critical r, else coloured in purple
  • the probability of spurious correlation is coloured in green when below 0.05 (or 5% chance)

Enhanced scatter matrix

Working with 3D seismic data in Python using segyio and numpy (mostly)

Between 2 and 3 years ago I started turning my long time passion for image processing, and particularly morphological image processing, to the task of fault segmentation.

At the time I shared my preliminary code, of which I was very happy, in a Jupyter notebook, which you can run interactively at this GitHub repository.

Two areas need improvement to get that initial workflow closer to a production one. The first is on the image processing and morphology side; I am thinking of including: a better way to clean up very short faults; pruning to eliminate spurious segments in the skeletonization result; the von Mises distribution instead of the standard distribution to filter low dip angles. I know how to improve all those aspects, and have some code snippets sitting around in various locations on my Mac, but I am not quite ready to push for it.

The second area, on the seismic side, is the ability to work with 3D data. This has been a sore spot for some time.

Enter segyio, a fast, open-source library, developed precisely to work with SEGY files. To be fair, segyio has been around for some time, as I very well know from being a member of the Software Underground community (swung), but it was only a month or so ago that I started tinkering with it.

This post is mostly to share back with the community what I’ve learned in my very first playground session (with some very helpful tips from Jørgen Kvalsvik, a fellow member of swung, and one of the creators of segyio), which allowed me to create a 3D fault segmentation volume (and have lots of fun in the process) from a similarity (or discontinuity) volume.

The workflow, which you can run interactively at this segyio-notebooks GitHub repository (look for the 01 – Basic tutorial), is summarized pictorially in Figure 1, and comprises the steps below:

  • use segyio to import two seismic volumes in SEGY file format from the F3 dataset, offshore Netherlands, licensed CC-BY-SA: a similarity volume, and an amplitude volume (with dip-steered median filter smoothing applied)
  • manipulate the similarity to create a discontinuity/fault volume
  • create a fault mask and display a couple of amplitude time slices with superimposed faults
  • write the fault volume to SEGY file using segyio, re-using the headers from the input file

Figure 1. Naive 3D seismic fault segmentation workflow in Python.
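
To give a feel for the first and last steps in the list above, here is a minimal sketch (not the tutorial's code) of reading a volume into a numpy cube with segyio, deriving a crude fault volume by thresholding, and writing the result to a new SEGY file that re-uses the input headers. The file names and the threshold value are placeholders:

import shutil
import numpy as np
import segyio

similarity_file = 'similarity.sgy'    # placeholder input file
fault_file = 'fault_volume.sgy'       # placeholder output file

# read the whole volume into a (ilines, xlines, samples) numpy array
with segyio.open(similarity_file) as f:
    similarity = segyio.tools.cube(f)

# naive fault volume: keep only the most discontinuous samples
faults = np.where(similarity > 0.8, similarity, 0.0)

# copy the input file, then overwrite its traces so all headers are preserved
shutil.copyfile(similarity_file, fault_file)
with segyio.open(fault_file, 'r+') as dst:
    for i, il in enumerate(dst.ilines):
        dst.iline[il] = faults[i]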

 

Get the notebooks on GitHub (look for the 01 – Basic tutorial)

Feedback is welcome.

 

DISCLAIMER: The steps outlined in the tutorial are not intended as a production-quality fault segmentation workflow. They work reasonably well on the small, clean similarity volume, artfully selected for the occasion, but it is just a simple example.

Upscaling geophysical logs with Python using Pandas and Bruges

With a few hours of work last weekend, I finished putting together a Jupyter notebook tutorial, started at the Geophysics Python sprint 2018, demonstrating how to:

    • Use Agile Scientific’s Welly to load two wells with several geophysical logs
    • Use Pandas, Welly, and NumPy to: remove all logs except for compressional wave velocity (Vp), shear wave velocity (Vs), and density (RHOB); store the wells in individual DataFrames; make the sampling rate common to both wells; check for null values; convert units from imperial to metric; convert slowness to velocity; add a well name column (a sketch of this part is included further below)
    • Split the DataFrame by well using unique values in the well name column
    • For each group/well use the Backus average from Agile Scientific’s Bruges to upscale all curves individually
    • Add the upscaled curves back to the DataFrame

Matt Hall (the organizer) told me during a breakfast chat on the first day of the sprint that this tutorial would be a very good one to have, since it is one of the examples most requested by neophyte users of the Bruges library; I was happy to oblige.
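
Before getting to that, here is a minimal sketch of how the first part of the list above might look with Welly and Pandas. This is not the notebook's code: the file names, curve mnemonics (DT, DTS, RHOB), and slowness units (us/ft) are assumptions for illustration:

import pandas as pd
from welly import Well

dfs = []
for fname, wname in [('well_1.las', 'well 1'), ('well_2.las', 'well 2')]:
    w = Well.from_las(fname)                     # load the well with Welly
    df = w.df()[['DT', 'DTS', 'RHOB']].dropna()  # keep sonic, shear sonic, density
    df['Vp'] = 304800.0 / df['DT']               # slowness in us/ft to velocity in m/s
    df['Vs'] = 304800.0 / df['DTS']
    df['well'] = wname                           # add a well name column
    dfs.append(df.drop(columns=['DT', 'DTS']))
wells = pd.concat(dfs)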

The code for the most important bit, the last two items in the above list, is included below:

import numpy as np
import pandas as pd
import bruges as br

# Define parameters for the Backus filter
lb = 40   # Backus length in meters
dz = 1.0  # Log sampling interval in meters

# Do the upscaling work
wells_bk = pd.DataFrame()
grouped = wells['well'].unique()   # the unique well names
for well in grouped:
    new_df = pd.DataFrame()
    Vp = np.array(wells.loc[wells['well'] == well, 'Vp'])
    Vs = np.array(wells.loc[wells['well'] == well, 'Vs'])
    rhob = np.array(wells.loc[wells['well'] == well, 'RHOB'])
    Vp_bks, Vs_bks, rhob_bks = br.rockphysics.backus(Vp, Vs, rhob, lb, dz)
    new_df['Vp_bk'] = Vp_bks
    new_df['Vs_bk'] = Vs_bks
    new_df['rhob_bk'] = rhob_bks
    wells_bk = pd.concat([wells_bk, new_df])

# Add to the input DataFrame
wells_final = (np.concatenate((wells.values, wells_bk.values), axis=1)) 
cols = list(wells) + list(wells_bk) 
wells_final_df = pd.DataFrame(wells_final, columns=cols)

And here is a plot comparing the raw and upscaled Vp and Vs logs for one of the wells:


Please check the notebook if you want to try the full example.

Geophysics Python sprint 2018 – day 2 and beyond, part II

In the last post I wrote about what Volodymyr and I worked on during a good portion of day two of the sprint in October, and continued to work on upon our return to Calgary.

In addition to that, I also continued to work on a notebook example, started on day one, demonstrating how to upscale sonic and density logs from more than one well at a time using Bruges’ backus and Pandas’ groupby. This will be the focus of a future post.

The final thing I did was to write and test an error_flag function for Bruges. The function calculates the difference between a predicted and a real curve; it flags errors in prediction if the difference between the curves exceeds a user-defined distance (in standard deviation units) from the mean difference. Another option is to check whether the curves have opposite slopes (for example one increasing and the other decreasing within a specific interval). The result is a binary error log that can then be used to generate QC plots and to evaluate the performance of the prediction process in a more (it is my hope) insightful way.
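
The code below is a rough sketch of that idea, not the function as it was submitted to Bruges (the name, arguments, and defaults here are mine):

import numpy as np

def error_flag_sketch(pred, actual, dist=1.0, flag_slope=False):
    """Binary flag: 1 where the prediction is considered in error."""
    error = pred - actual
    # flag samples whose error is more than dist standard deviations
    # away from the mean error
    flag = (np.abs(error - np.nanmean(error)) > dist * np.nanstd(error)).astype(int)
    if flag_slope:
        # also flag samples where the two curves have opposite local slopes
        opposite = np.gradient(pred) * np.gradient(actual) < 0
        flag = np.maximum(flag, opposite.astype(int))
    return flag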

The inspiration for this stems from a discussion over coffee I had 5 or 6 years ago with Glenn Larson, a Geophysicist at Devon Energy, about the limitations of (and alternatives to) using a single global score when evaluating the result of seismic inversion against wireline well logs (the ground truth). I’d been holding that in the back of my mind for years, then finally got to it last Fall.


Summary statistics can also be calculated by stratigraphic unit, as demonstrated in the accompanying Jupyter Notebook.
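
For example (a hypothetical one-liner; the DataFrame and column names are assumptions, not the notebook's):

# error-flag summary per stratigraphic unit
logs.groupby('unit')['error_flag'].agg(['count', 'mean'])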

Having fun learning Python with your kids

When I was a kid I loved video games. Now … we’re talking about 70s and 80s games: my first home computer was a Texas Instruments TI-99/4A, my second one a Commodore 64. I loved all the games of those days, no matter how primitive they might seem now; you can get a glimpse of WHAT we are talking about in this video. And funny enough, I still really do love them; below is one of my favorites, H.E.R.O., which I can play these days thanks to a C64 emulator.


Unfortunately for me, I did not catch the programming bug (not then, at least). What really put me off was having typed thousands of lines of BASIC for what looked like an awesome adventure game, from a listing in a magazine, only to never be able to play the game because the listing was full of typos.

So, I want to make it better for both my daughter AND myself, because I plan to make more games. First of all, we are working with Python. There are a lot of resources for making games in Python; a really good one is Al Sweigart‘s Invent Your Own Computer Games with Python (as an aside: I have been using a chapter here, a chapter there from many of his books; it is awesome that he made them available online, but it was time to give back, so I bought the 4th edition of Invent Your Own Computer Games with Python).

To set the stage for our first game I reviewed the first few chapters of the book, particularly Chapter 5: Dragon Realm, to get some ideas on how to use Python to make interactive games, but then decided to start with our own game: a calculator that would talk to the player, via the terminal, and ask for numbers and for their favorite algebraic operation. However, we added a twist because it sounded a lot more fun: IF her mom were to play, we’d want the results to be occasionally wrong. My daughter worked out some of the logic, after suggesting the twist, and I did the heavy lifting with Python; in the remainder of the post I will work through how we did it. Please note this game is written for Python 3.

In the first block, Jester, the calculator, introduces itself and then asks the player for their name; the name is stored to a variable via the input function and used for greeting and throughout the game. The capitalize() method turns the first letter of the name string to upper case, if needed.

print ('Hello, I am a talking calculator and my name is Jester.') 
name = input('What is your name? ') 
print ('Hello ' + name.capitalize() + ', nice to meet you!')

The real game starts in the big block below:

while True:
  operation = input(name.capitalize() + 
', would you like to add or subtract? (enter "a" or "s"; "q" to quit) ')
  if operation.lower() =='q':
    print('Goodbye ' + name.capitalize())
    break
  elif operation.lower() =='s' or operation.lower() =='a':
    print ('OK, ready.')
    input1 = input('Can I have the first number? ')
    try:
        num1=int(input1)
    except ValueError:
        print ("That is not a number!")
        input1 = input('Can I have the first number? ')
    input2 = input('Can I have the second number? ')
    try:
       num2=int(input2)
    except ValueError:
        print("That is not a number!")
        input2 = input('Can I have the second number? ')
    num1=int(input1)
    num2=int(input2)

Let’s break it up. First, the player chooses an operation (addition or subtraction) again via the input function. The while loop allows the player to continue playing after each operation.

while True:
  operation = input(name.capitalize() + 
', would you like to add or subtract? (enter "a" or "s"; "q" to quit) ')

The following if statement is used to exit the while loop (using break) if the player decides to quit.

  if operation.lower() =='q':
    print('Goodbye ' + name.capitalize())
    break

An elif statement comes next, allowing the player to choose the operation; at the same time, if neither ‘a’ nor ‘s’ is entered, the program will go back to the beginning of the while loop.

  elif operation.lower() =='s' or operation.lower() =='a':
    print ('OK, ready.')

Finally, Jester asks for the first number to be added or subtracted. We want to be sure that the string passed by the user is a number, so we use a try clause to handle a possible exception (for example, a letter entered instead of a number). Here is the logic: if, when prompted for the first number, we typed a letter by accident (for example the string ‘w’ instead of the string ‘3’), then when input1 is passed to int we’d get this fatal error and the program would be interrupted:

num1=int(input1)
>>> ValueError: invalid literal for int() with base 10: 'w'

With the try clause we instead handle the exception by printing the custom error message “That is not a number!“, followed by a new request for a number:

    input1 = input('Can I have the first number? ')
    try:
        num1=int(input1)
    except ValueError:
        print ("That is not a number!")
        input1 = input('Can I have the first number? ')
    num1=int(input1)

The above cycle is repeated for the second number. After Jester gets two legitimate integers to work with, the operation part begins, which includes Jester’s prank. Indeed, depending on the name of the player, the result will be either always correct, or sometimes correct and sometimes a bit off (which is done using random). We picked “Olivia” as the “lucky” object of the prank. This is the only hard-coded part of the program: it needs to be changed to the name of the intended “victim”.

There are two nested if statements below that allow for the following to happen: if the player’s name is Olivia, a random number between 1 and 3, rnd1, is drawn. Then the value of rnd1 is checked: if it is 3, then a non-zero number num3, with a value drawn between 1 and 5 (rnd2), will be added to the result of the operation to make it wrong; otherwise the operation will execute correctly.

  # randint is imported at the top of the program: from random import randint
  num3 = 0 # initializes the random error variable
  if (name.lower() == 'olivia'): # for this player ....
    rnd1 = randint(1,3)
    rnd2 = randint(1,5)
    if rnd1 == 3: # .... if a 3 is drawn the correct result 
      num3 = rnd2 # .... will be modified by this random amount

Finally, this is the addition and subtraction block: in both cases, unless “Olivia” is the player, num3 will be zero and the result will be correct.

  if operation.lower() == 'a':
    result = num1 + num2 + num3
    print (str(num1) + ' + '  + str(num2) + ' = ' + str(result))
  elif operation.lower() == 's':
    result = (max(num1, num2) - min(num1, num2)) - num3
    print (str(max(num1, num2)) + ' - ' 
         + str(min(num1, num2))  + ' = ' + str(result))

If you want to play the game, please copy the full program below and paste it in a .py file and give it a try. Enjoy!

# NB optimized for Python 3
from random import randint

print ('Hello, I am a talking calculator and my name is Jester.')
name = input('What is your name? ')
print ('Hello ' + name.capitalize() + ', nice to meet you!')

while True:
  operation = input(name.capitalize() +
  ', would you like to add or subtract? (enter "a" or "s"; "q" to quit) ')

  if operation.lower() =='q':
    print('Goodbye ' + name.capitalize())
    break

  elif operation.lower() =='s' or operation.lower() =='a':
    print ('OK, ready.')
    input1 = input('Can I have the first number? ')
    try:
        num1=int(input1)
    except ValueError:
       print ("That is not a number!")
       input1 = input('Can I have the first number? ')

    input2 = input('Can I have the second number? ')
    try:
       num2=int(input2)
    except ValueError:
        print("That is not a number!")
        input2 = input('Can I have the second number? ')

    num1=int(input1)
    num2=int(input2)

  num3 = 0
  if (name.lower() == 'olivia'):
    rnd1 = randint(1,3)
    rnd2 = randint(1,5)
    if rnd1 == 3:
      num3 = rnd2

  if operation.lower() == 'a':
    result = num1 + num2 + num3
    print (str(num1) + ' + '  + str(num2) + ' = ' + str(result))

  elif operation.lower() == 's':
    result = (max(num1, num2) - min(num1, num2)) - num3
    print (str(max(num1, num2)) + ' - '
           + str(min(num1, num2))  + ' = ' + str(result))

 

Geophysics Python sprint 2018 – day 2 and beyond, part I

In my last post I wrote about what I did on day one of the Geophysics sprint run by Agile Scientific in Santa Ana two weeks ago.

This post and the next one are about the project Volodymyr and I worked on during day two of the sprint, and continued to work on upon our return to Calgary.

We had read a great notebook by Alessandro Amato del Monte (I recommend browsing his Geophysical notes repo) showing how to reconstruct a velocity log from density with optimized alpha and beta parameters for the Inverse Gardner function, found via scipy.optimize.curve_fit.

Inspired by that, we set out with a dual goal:

  • First, we wanted to adapt Alessandro’s optimization idea so that it would work with Bruges‘ Inverse Gardner
  • Second, we wanted to adapt a function from some old work of mine to flag sections of the output velocity log with poor prediction; this would be useful to learn where alpha and beta may need to be tweaked because of changes in the rock lithology or fluid content

I’ll walk you through some of our work. Below are the two functions:

# Alessandro's simple inverse Gardner
def inv_gardner(rho, alpha, beta):
    return (rho/alpha)**(1/beta)

# Bruges' inverse Gardner
def inverse_gardner(rho, alpha=310, beta=0.25, fps=False):
    """
    Computes P-wave velocity from density using the inverse of Gardner's relation.
    Args:
        rho (ndarray): Density in kg/m^3.
        alpha (float): The factor, 310 for m/s and 230 for fps.
        beta (float): The exponent, usually 0.25.
        fps (bool): Set to true for FPS and the equation will use the typical
            value for alpha. Overrides value for alpha, so if you want to use
            your own alpha, regardless of units, set this to False.
        Returns:
            ndarray: Vp estimate in m/s.
    """
    alpha = 230 if fps else alpha
    exponent = 1 / beta
    factor = 1 / alpha**exponent
    return factor * rho**exponent

They look similarly structured, and take the same arguments. We can test them by passing a single density value and alpha/beta pair.

inv_gardner(2000, 0.39, 0.23)
>>> 1.349846231542594e+16

inverse_gardner(2000, 0.39, 0.23)
>>> 1.3498462315425942e+16

Good. So the next logical step would be to define some model density and velocity data (shamelessly taken from Alessandro’s notebook, except we now use Bruges’ Gardner with S.I. units) and pass the data, and Bruges’ inverse Gardner, to scipy.optimize.curve_fit to see if it just works; could it be that simple?

import numpy
from bruges.rockphysics import gardner   # Bruges' forward Gardner

# Make up random velocity and density with Bruges' direct Gardner
vp_test = numpy.linspace(1500, 5500)
rho_test = gardner(vp_test, 310, 0.25)
noise = numpy.random.uniform(0.1, 0.3, vp_test.shape)*1000
rho_test = rho_test + noise

The next block is only slightly different from Alessandro’s notebook. Instead of using all the data, we split both density and velocity into two pairs of arrays: rho2 and vp2 to optimize for alpha and beta, and rho1 for calculating the “unknown” velocities vp_calc1 further down; the last one, vp1, will be used just to show where the real data might have been had we not had to calculate it.

import numpy as np

idx = np.arange(len(vp_test))
np.random.seed(3)
spl1 = np.random.randint(0, len(vp_test), 15)
spl2 = np.setxor1d(idx, spl1)
rho1 = rho_test[spl1]
rho2 = rho_test[spl2]
vp1 = vp_test[spl1] # this we pretend we do not have
vp2 = vp_test[spl2]

Now, as in Alessandro’s notebook, we pass simple inverse Gardner function to scipy.curve_fit to find optimal alpha and beta parameters, and we printalpha and beta.

import scipy.optimize

popt_synt2, pcov2 = scipy.optimize.curve_fit(inv_gardner, rho2, vp2)
print (popt_synt2)
>>> [3.31376056e+02 2.51257203e-01]

Those values seem reasonable, but to be sure let’s calculate vp_calc1 from rho1 and plot everything.

import matplotlib.pyplot as plt

vp_calc1 = inv_gardner(rho1, *popt_synt2)

# this is to show the fit line
rho_synt_fit = np.linspace(1, 3000, 50)
vp_synt_fit = inv_gardner(rho_synt_fit, *popt_synt2)

plt.figure(figsize=(10, 10))
plt.plot(rho2, vp2, 'or', markersize=10, label="fitted points")
plt.plot(rho1, vp_calc1, 'ob', markersize=10, alpha=0.4, label="calculated points")
plt.plot(rho1, vp1, 'ok', markersize=10, label="withheld points")
plt.plot(rho_synt_fit, vp_synt_fit, '-r', lw=2,
         label='Fit' r'$ V_p=(%.2f / \rho)^{1/%.2f}$' % (popt_synt2[0],
                                                         popt_synt2[1]))
plt.xlabel('Density rho [kg/m^3]'), plt.xlim(1800, 3000)
plt.ylabel('Velocity Vp [m/s]'), plt.ylim(1000, 6000)
plt.grid()
plt.legend(loc='upper left')
plt.show()


That looks great. Let’s now try the same using Bruges’ Inverse Gardner.

popt_synt2, pcov2 = scipy.optimize.curve_fit(inverse_gardner, rho2, vp2)
print (popt_synt2)
>>> [1.         0.29525991 1.        ]

That is odd, we do not get the same parameters; additionally, there’s this warning:

../scipy/optimize/minpack.py:794:
OptimizeWarning: Covariance of the parameters could not be estimated 
category=OptimizeWarning)

One possible explanation is that, although both functions take alpha and beta, perhaps scipy.optimize.curve_fit does not know what to expect from inverse_gardner because in the latter alpha and beta are pre-assigned; it also has the extra fps argument with a default, which curve_fit then tries to fit as well (hence the three values returned above).

The workaround for this was to write a wrapper function to ‘map’ between the call signature of scipy.optimize.curve_fit and that of inverse_gardner, so that it would be ‘communicated’ to the former explicitly.

def optimize_inverse_gardner(rho, alpha, beta):
    return inverse_gardner(rho, alpha=alpha, beta=beta)

popt_synt2, pcov2 = scipy.optimize.curve_fit(optimize_inverse_gardner,
                                             rho2, vp2)
print (popt_synt2)
>>> [3.31376060e+02 2.51257202e-01]

Which is the result we wanted.

In the next post we will apply this to some real data and show how to flag areas of poorer results.

Geophysics Python sprint 2018 – day 1

Last weekend I went to California to attend my first ever Python sprint, which was organized at MAZ Café con leche (Santa Ana) by Agile Scientific.

For me this event was a success in many respects. First of all, I wanted to spend some dedicated time working on an open source project, rather than chipping away at it once in a while. Also, participating in a project that was not my own seemed like a good way to challenge myself, by pushing me out of my comfort zone. Finally, this was an opportunity to engage with other members of the Software Underground Slack team, some of whom (for example Jesper Dramsch and Brendon Hall) I’ve known for some time but had actually never met in person.

Please read about the Sprint in general on Matt Hall‘s blog post, Café con leche. My post is a short summary of what I did on the first day.

After a tasty breakfast, and at least a good hour of socializing, I sat at a table with three other people interested in working on Bruges (Agile’s Python library for Geophysics) : Jesper Dramsch, Adriana Gordon and Volodymyr Vragov.

As I tweeted that evening, we had a light-hearted start, but then we set to work.

While Adriana and Jesper tackled Bruges’ documentation, which was sorely needed, Volodymyr spent some hours on example notebooks from in-Bruges (a tour of Bruges), which needed fixing, and also on setting up our joint project for day 2 (more in the next post). For my part, I put together a tutorial notebook on how to use Bruges’ functions on wireline logs stored in a Pandas DataFrame. According to Matt, this is requested quite often, so it seemed like a good choice.

Let’s say that a number of wells are stored in a DataFrame with both a depth column, and a well name column, in addition to log curves.

The logic for operating on logs individually is this:

Split the wells in the DataFrame using groupby, then
    for each well
        for each of the logs of interest
            do something using one of Bruges’ functions (for example, apply a rolling mean)

The code to do that is surprisingly simple, once you’ve figured it out (I myself often struggle, and not a little, with Pandas at the outset of new projects).

One has to first create a list with the logs of interest, like so:

logs = ['GR', 'RHOB']

then define the length of the window for the rolling operation:

window = 9

finally, the logic above is applied as:

import numpy as np
import pandas as pd
import bruges as br

wells_sm = pd.DataFrame()            # temporary DataFrame for the filtered logs

grouped = wells['well'].unique()     # the unique well names

for well in grouped:
    new_df = pd.DataFrame()
    for log in logs:
        # rolling mean of this well's log, using Bruges' mean filter
        sm = br.filters.mean(np.array(wells[log][wells['well'] == well]),
                             window)
        new_df[str(log) + '_sm'] = sm
    wells_sm = pd.concat([wells_sm, new_df])

where wells_sm is a temporary DataFrame for the filtered logs, which can be added back to the original DataFrame with:

wells_filtered = (np.concatenate((wells.values, 
                  wells_sm.values), axis=1))
cols = list(wells) + list(wells_sm)
wells_filtered_df = pd.DataFrame(wells_filtered, columns=cols)

You can work through the full example in the notebook.