Confidence intervals and prediction intervals in OLS regression: a geoscience worked example


Introduction

I recently released an open source research bullshit detector, and ended up doing some house cleaning in the repo Data-science-tools-petroleum-exploration-and-production. The result is this new notebook, available on GitHub in a teaching-oriented version and a practitioner-oriented version, that walks through the distinction between the regression confidence interval (CI) and the prediction interval (PI), using a real petroleum geology dataset.

When you fit an OLS regression to well data and plot the result, the output typically includes an uncertainty band around the regression line. That band can represent two very different questions, depending on how it is computed. One question is: “Where does the average production lie, for wells with a given gross pay?” The other is: “What production should we expect from the next individual well we drill?” These are not the same question, and conflating the two can lead to significantly different conclusions in a drilling decision context.

The two intervals

The confidence interval (CI) captures uncertainty about where the true regression line lies. Because our sample is limited, the estimated line is just one of many possible lines we could have obtained. The CI narrows as sample size increases, and answers: “What is the average production for wells at this gross pay value?”

The prediction interval (PI) captures uncertainty about where a new individual observation will fall. Even if the true regression line were known exactly, individual wells would still scatter around it due to natural variability. The PI always includes that residual scatter on top of parameter uncertainty — so it is always wider than the CI, and retains an irreducible minimum width even with infinite data.

Mathematically, the only difference between the two formulas is a +1 under the square root in the PI expression. That extra 1 represents the variance of a single new observation around the mean — what the notebook calls the irreducible scatter.
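Written out for simple linear regression (these are the standard textbook forms, with $s$ the residual standard error and $t_{\alpha/2,\,n-2}$ the critical value on $n-2$ degrees of freedom), the two intervals at a new point $x_0$ are:

$$\hat{y}_0 \;\pm\; t_{\alpha/2,\,n-2}\; s \sqrt{\frac{1}{n} + \frac{(x_0 - \bar{x})^2}{\sum_i (x_i - \bar{x})^2}} \qquad \text{(CI)}$$

$$\hat{y}_0 \;\pm\; t_{\alpha/2,\,n-2}\; s \sqrt{1 + \frac{1}{n} + \frac{(x_0 - \bar{x})^2}{\sum_i (x_i - \bar{x})^2}} \qquad \text{(PI)}$$

Both the $1/n$ term and the $(x_0-\bar{x})^2$ term shrink as $n$ grows, but the lone $1$ under the PI's square root does not: it is the variance of a single new observation about the mean, and it sets the irreducible floor on the PI's width.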

In statsmodels, both intervals come out of a single call: results.get_prediction().summary_frame(alpha=0.05), with the CI in columns mean_ci_lower / mean_ci_upper and the PI in obs_ci_lower / obs_ci_upper.

The dataset

The data comes from Lee Hunt’s (2013) paper Many correlation coefficients, null hypotheses, and high value (CSEG Recorder, December 2013). It contains measurements from 21 wells producing from a marine barrier sand, with variables including gross pay (m), porosity-height, position within the reservoir, pressure draw-down, and production in tens of barrels per day. Gross pay is the strongest single predictor of production (r = 0.87), so that is the starting point.

Where the difference matters: economic risk

The practical value of the CI vs. PI distinction becomes concrete when an economic cutoff is added. In the notebook the minimum economic production is set at 20 (tens of bbl/d), and the question is: what minimum gross pay should be required before drilling?

Looking at the regression line alone, ~3.5 m of gross pay looks sufficient — the predicted mean production at that thickness crosses the threshold. But the PI lower bound tells a different story: to have 95% confidence that the next well drilled will exceed the economic cutoff, approximately 12 m of gross pay is needed. The difference between 3.5 m and 12 m is enormous in practical terms — it could determine whether a prospect gets drilled at all. The figure below shows this directly.

[Figure: Economic risk assessment using prediction intervals]
OLS regression of production (tens of bbl/d) vs. gross pay (m) for 21 wells from Hunt (2013). The darker inner band is the 95% confidence interval for the mean response; the lighter outer band is the 95% prediction interval for a new well. The dashed green line is the economic production cutoff at 20 (tens of bbl/d). At this cutoff, the regression line alone suggests ~3.5 m of gross pay is sufficient; the PI lower bound requires ~12 m.

Effect of sample size

The analysis is repeated with only 5 wells, representing an early appraisal scenario. The PI widens substantially, and the required minimum gross pay shifts upward again. As Hunt (2013) notes: the path forward is to either accept the uncertainty or work to reduce it — drill more wells, incorporate additional seismic data, and so on.

Adding predictors

In practice, production depends on more than gross pay. Adding Position and Pressure to the model — two physically meaningful predictors — improves R² and reduces the residual standard error. A partial-effect plot (holding Position and Pressure at their mean values, varying Gross pay) shows the multivariate PI is visibly narrower than the bivariate one. The side-by-side comparison carries the title “Adding Predictors Narrows the Prediction Interval.”

Closing

The key point is stated directly in the notebook: when assessing risk for the next well, reach for the PI, not the CI. The regression line and the CI answer a different question than the one a drilling decision requires.

Upcoming book: 52 things you should know about Geocomputing

I am very excited to report that, after a successful second attempt at collecting enough essays, the book 52 Things You Should Know About Geocomputing, by Agile Libre, is very likely to become a reality.

*** July 2020 UPDATE ***

This project got a much needed boost during the hackathon at the 2020 Transform virtual event. Watch the video recording on the group YouTube channel here.

*********************

In one of the three chapters I submitted for this book, Prototype colourmaps for fault interpretation, I talk about building a widget for interactive generation of grayscale colourmaps with sigmoid lightness. The whole process is well described in the chapter and showcased in the accompanying GitHub repo.

This post is also an opportunity for me to reflect on the importance of revisiting old projects. I consider it an essential part of how I approach scientific computing, and a practical way to incorporate new insights and improvements (hopefully betterments) in one's coding abilities.

In the first version of the Jupyter notebook, submitted in 2017, all the calculations and plotting commands were packed inside a single monster function that was passed to ipywidgets.interact. Quite frankly, as time passed this approach seemed less and less Pythonic (read: mature), and no longer representative of my programming skills and style, or of my improved understanding of widget objects.

After a significant hiatus (2 years) I restructured the whole project in several ways:
– Converted the Python 2 code to Python 3
– Created separate helper functions for each calculation and moved them to the top, improving both clarity and reusability of the code
– Improved and standardized function docstrings
– Optimized and reduced the number of parameters
– Switched from interact to interactive to enable access to the colormaparray in later cells (for printing, further plotting, and exporting)

The new notebook is here.

In a different chapter in the book I talk at more length about the need to Keep on improving your geocomputing projects.
