I will be honest: I have been very busy and unable to write much. On the latest advances in large language models and their applications, specifically, I had forced myself to stay on the fence: yes, read about them, stay on top of the developments, but leave the experimenting to others for the time being. Then two things changed my mind: first, I started working through David Mertz’ book Regular Expression Puzzles and AI Coding Assistants (very superficially, admittedly, as regular expressions are not my forte, but the book is awesome); second, I chanced on the article Why everyone should try GPT-4, even the CEO by Cassie Kozyrkov.
Here are the things I am actively playing with, right now. Soon I will start documenting these explorations with a new series of short blog posts:
Writing an essay in philosophy based on a specific, complex prompt
Acting as a baking assistant
Writing an imaginary dialogue between specific historic and fictional characters
Demonstrating a geometry theorem
Performing arithmetic operations
Acting as a song translation assistant
Writing a poem with a specific meter, and music in a specific composer’s style
Acting as a coding assistant
Solving puzzles
Playing a live tic-tac-toe and/or chess game
Detecting logical fallacies
Helping with complex numerical estimations given a set of rules
In the last post I wrote about what Volodymyr and I worked on during a good portion of day two of the sprint in October, and continued to work on upon our return to Calgary.
In addition to that, I also continued to work on a notebook example, started on day one, demonstrating how to upscale sonic and density logs from more than one well at a time using Bruges’ backus and Pandas’ groupby. This will be the focus of a future post.
The final thing I did was to write and test an error_flag function for Bruges. The function calculates the difference between a predicted and a real curve; it flags errors in prediction where the difference between the curves exceeds a user-defined distance (in standard deviation units) from the mean difference. Another available option is to check whether the curves have opposite slopes (for example, one increasing while the other decreases within a specific interval). The result is a binary error log that can then be used to generate QC plots, to evaluate the performance of the prediction process in a more (it is my hope) insightful way.
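A minimal sketch of the idea (the exact signature and defaults here are illustrative, not necessarily what ended up in Bruges):

import numpy as np

def error_flag(pred, actual, dist=1.0, apply_slope_check=False):
    """Flag samples where a predicted curve deviates from the real one.

    A sample is flagged when the difference between the two curves is more
    than `dist` standard deviations away from the mean difference. Optionally,
    samples where the two curves have opposite local slopes are also flagged.
    """
    diff = pred - actual
    flag = np.abs(diff - np.nanmean(diff)) > dist * np.nanstd(diff)
    if apply_slope_check:
        # opposite slopes: one curve locally increasing, the other decreasing
        opposite = np.sign(np.gradient(pred)) * np.sign(np.gradient(actual)) < 0
        flag = flag | opposite
    return flag.astype(int)  # binary error log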
The inspiration for this stems from a discussion over coffee I had 5 or 6 years ago with Glenn Larson, a Geophysicist at Devon Energy, about the limitations of (and alternatives to) using a single global score when evaluating the result of seismic inversion against wireline well logs (the ground truth). I’d been holding that in the back of my mind for years, then finally got to it last Fall.
Summary statistics can also be calculated by stratigraphic unit, as demonstrated in the accompanying Jupyter Notebook.
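For example, once the binary flag lives in a DataFrame alongside a stratigraphic unit column, the fraction of flagged samples per unit is one groupby away (all names and values below are hypothetical):

import pandas as pd

qc = pd.DataFrame({
    'UNIT': ['A', 'A', 'B', 'B'],
    'ERROR_FLAG': [0, 1, 0, 0],
})
print(qc.groupby('UNIT')['ERROR_FLAG'].mean())  # fraction flagged per unit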
This post and the next one are about the project Volodymyr and I worked on during day two of the sprint, and continued to work on upon our return to Calgary.
First, we wanted to adapt Alessandro’s optimization idea so that it would work with Bruges’ inverse Gardner.
Second, we wanted to adapt a function from some old work of mine to flag sections of the output velocity log with poor prediction; this would be useful to learn where alpha and beta may need to be tweaked because of changes in rock lithology or fluid content.
I’ll walk you through some of our work. Below are the two functions:
# Alessandro's simple inverse Gardner
def inv_gardner(rho, alpha, beta):
    return (rho / alpha)**(1 / beta)
# Bruges' inverse Gardner
def inverse_gardner(rho, alpha=310, beta=0.25, fps=False):
    """
    Computes a velocity prediction from density (the inverse of Gardner's relation).

    Args:
        rho (ndarray): Density in kg/m^3.
        alpha (float): The factor, 310 for m/s and 230 for fps.
        beta (float): The exponent, usually 0.25.
        fps (bool): Set to true for FPS and the equation will use the typical
            value for alpha. Overrides value for alpha, so if you want to use
            your own alpha, regardless of units, set this to False.

    Returns:
        ndarray: Vp estimate in m/s.
    """
    alpha = 230 if fps else alpha
    exponent = 1 / beta
    factor = 1 / alpha**exponent
    return factor * rho**exponent
They look similarly structured and take the same arguments. We can test them by passing a single density value and an alpha/beta pair.
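For example (the density value below is just illustrative):

rho = 2350.0  # a single density sample, in kg/m^3
print(inv_gardner(rho, 310, 0.25))      # Alessandro's version
print(inverse_gardner(rho, 310, 0.25))  # Bruges' version
# both print the same Vp estimate, about 3302 m/s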
Good. So the next logical step would be to define some model density and velocity data (shamelessly taken from Alessandro’s notebook, except we now use Bruges’ Gardner with S.I. units), and pass the data and Bruges’ inverse Gardner to scipy.curve_fit to see if it just works; could it be that simple?
# Make up random velocity and density with Bruges' direct Gardner
# (gardner is the forward Gardner function from the bruges library)
import numpy as np

vp_test = np.linspace(1500, 5500)
rho_test = gardner(vp_test, 310, 0.25)
noise = np.random.uniform(0.1, 0.3, vp_test.shape) * 1000
rho_test = rho_test + noise
The next block is only slightly different from Alessandro’s notebook. Instead of using all the data, we split both density and velocity into two pairs of arrays: rho2 and vp2 to optimize for alpha and beta; rho1 for calculating the “unknown” velocities vp_calc1 further down; and vp1, which will be used just to show where the real data would have been had we not had to calculate it.
idx = np.arange(len(vp_test))
np.random.seed(3)
spl1 = np.random.randint(0, len(vp_test), 15)
spl2 = np.setxor1d(idx, spl1)
rho1 = rho_test[spl1]
rho2 = rho_test[spl2]
vp1 = vp_test[spl1]  # this we pretend we do not have
vp2 = vp_test[spl2]
Now, as in Alessandro’s notebook, we pass our inverse Gardner function (in this case Bruges’ inverse_gardner) to scipy.curve_fit to find the optimal alpha and beta parameters, and we print alpha and beta.
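A sketch of this step, using the arrays defined above:

from scipy.optimize import curve_fit

popt, pcov = curve_fit(inverse_gardner, rho2, vp2)
print(popt)  # the optimized parameters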
That is odd: we do not get the same parameters; additionally, there is this error message:
../scipy/optimize/minpack.py:794:
OptimizeWarning: Covariance of the parameters could not be estimated
category=OptimizeWarning)
One possible explanation is that, although both inv_gardner and inverse_gardner take alpha and beta parameters, perhaps scipy.curve_fit does not know to expect them, because in the latter alpha and beta are pre-assigned (and there is an extra fps argument, also with a default).
The workaround was to write a wrapper function to ‘map’ between the call signature expected by scipy.curve_fit and that of inverse_gardner, so that the parameters to fit are communicated to the former explicitly.
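Something along these lines (the wrapper’s name is mine):

def inverse_gardner_wrap(rho, alpha, beta):
    # expose alpha and beta explicitly, and keep fps out of the signature,
    # so that scipy.curve_fit knows it has exactly two parameters to fit
    return inverse_gardner(rho, alpha=alpha, beta=beta)

popt, pcov = curve_fit(inverse_gardner_wrap, rho2, vp2)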
Last weekend I went to California to attend my first ever Python sprint, which was organized at MAZ Café con leche (Santa Ana) by Agile Scientific.
For me this event was a success in many respects. First of all, I wanted to spend some dedicated time working on an open source project, rather than chipping away at it once in a while. Also, participating in a project that was not my own seemed like a good way to challenge myself, by pushing me out of my comfort zone. Finally, this was an opportunity to engage with other members of the Software Underground Slack team, some of whom (for example Jesper Dramsch and Brendon Hall) I’ve known for some time but had never met in person.
Please read about the sprint in general in Matt Hall’s blog post, Café con leche. My post is a short summary of what I did on the first day.
After a tasty breakfast, and at least a good hour of socializing, I sat at a table with three other people interested in working on Bruges (Agile’s Python library for geophysics): Jesper Dramsch, Adriana Gordon and Volodymyr Vragov.
As I tweeted that evening, we had a light-hearted start, but then we set to work.
While Adriana and Jesper tackled Bruges’ documentation, which sorely needed it, Volodymyr spent some hours on example notebooks from in-Bruges (a tour of Bruges), which needed fixing, and also on setting up our joint project for day two (more in the next post). For my part, I put together a tutorial notebook on how to use Bruges’ functions on wireline logs stored in a Pandas DataFrame. According to Matt, this is requested quite often, so it seemed like a good choice.
Let’s say that a number of wells are stored in a DataFrame with both a depth column, and a well name column, in addition to log curves.
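For illustration, a toy DataFrame of that shape might look like this (the well names, column names, and values are all made up):

import pandas as pd

df = pd.DataFrame({
    'WELL': ['A', 'A', 'A', 'B', 'B', 'B'],                    # well name
    'DEPT': [1000.0, 1000.5, 1001.0, 1500.0, 1500.5, 1501.0],  # depth
    'GR':   [75.2, 80.1, 78.4, 60.3, 58.7, 61.0],              # gamma ray
    'RHOB': [2450.0, 2460.0, 2455.0, 2350.0, 2340.0, 2345.0],  # bulk density
})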
The logic for operating on logs individually is this:
Split the wells in the DataFrame using groupby, then
for each well
for each of the logs of interest
do something using one of Bruges’ functions (for example apply a rolling mean)
The code to do that is surprisingly simple, once you’ve figured it out (I myself often struggle, and not a little, with Pandas at the outset of new projects).
One has to first create a list with the logs of interest, like so:
logs = ['GR', 'RHOB']
then define the length of the window for the rolling operation:
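window = 3  # hypothetical window length, in samples

Then the nested loop itself might look like the sketch below, with a plain Pandas rolling mean standing in for whichever Bruges function you actually want to apply (df and logs as defined above):

for well, group in df.groupby('WELL'):
    for log in logs:
        # operate on one curve of one well at a time
        smoothed = group[log].rolling(window, center=True).mean()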