In the last post I wrote about what Volodymyr and I worked on during a good portion of day two of the sprint in October, and continued to work on upon our return to Calgary.
In addition to that, I continued to work on a notebook example, started on day one, demonstrating how to upscale sonic and density logs from more than one well at a time using Bruges' backus and Pandas' groupby. This will be the focus of a future post.
The final thing I did was to write and test an error_flag function for Bruges. The function calculates the difference between a predicted and a real curve; it flags errors in prediction if the difference between the curves exceeds a user-defined distance (in standard deviation units) from the mean difference. Another option available is to check whether the curves have opposite slopes (for example, one increasing while the other decreases within a specific interval). The result is a binary error log that can then be used to generate QC plots, to evaluate the performance of the prediction process in a (I hope) more insightful way.
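To make the idea concrete, here is a minimal sketch of that logic; it is not the exact function I wrote for Bruges, and the name, signature, and defaults are assumptions for illustration only.

import numpy as np

# sketch of the idea: flag samples where the prediction deviates too much from the real curve
def error_flag(pred, actual, dev=1.0, sign=False):
    diff = actual - pred
    # flag samples whose difference is more than `dev` standard deviations
    # away from the mean difference
    flag = (np.abs(diff - diff.mean()) > dev * diff.std()).astype(int)
    if sign:
        # optionally also flag samples where the two curves have opposite local slopes
        opposite = np.sign(np.gradient(pred)) != np.sign(np.gradient(actual))
        flag = np.maximum(flag, opposite.astype(int))
    return flag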
The inspiration for this stems from a discussion over coffee I had 5 or 6 years ago with Glenn Larson, a Geophysicist at Devon Energy, about the limitations of (and alternatives to) using a single global score when evaluating the result of seismic inversion against wireline well logs (the ground truth). I’d been holding that in the back of my mind for years, then finally got to it last Fall.
Summary statistics can also be calculated by stratigraphic unit, as demonstrated in the accompanying Jupyter Notebook.
This post and the next one are about the project Volodymyr and I worked on during day two of the sprint, and continued to work on upon our return to Calgary.
First, we wanted to adapt Alessandro's optimization idea so that it would work with Bruges' inverse Gardner.
Second, we wanted to adapt a function from some old work of mine to flag sections of the output velocity log with poor prediction; this would be useful to learn where alpha and beta may need to be tweaked because of changes in the rock lithology or fluid content.
I’ll walk you through some of our work. Below are the two functions:
# Alessandro's simple inverse Gardner
def inv_gardner(rho, alpha, beta):
    return (rho / alpha)**(1 / beta)
# Bruges' inverse Gardner
def inverse_gardner(rho, alpha=310, beta=0.25, fps=False):
    """
    Computes Gardner's density prediction from P-wave velocity.

    Args:
        rho (ndarray): Density in kg/m^3.
        alpha (float): The factor, 310 for m/s and 230 for fps.
        beta (float): The exponent, usually 0.25.
        fps (bool): Set to true for FPS and the equation will use the typical
            value for alpha. Overrides value for alpha, so if you want to use
            your own alpha, regardless of units, set this to False.

    Returns:
        ndarray: Vp estimate in m/s.
    """
    alpha = 230 if fps else alpha
    exponent = 1 / beta
    factor = 1 / alpha**exponent
    return factor * rho**exponent
They are similarly structured and take the same arguments. We can test them by passing a single density value and the same alpha/beta pair.
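For example, a quick sanity check along these lines (the density value is an arbitrary choice of mine) shows the two functions returning the same velocity:

# quick check: both functions should return the same Vp for the same inputs
rho = 2300  # density in kg/m^3, an arbitrary test value
print(inv_gardner(rho, 310, 0.25))      # Alessandro's simple version
print(inverse_gardner(rho, 310, 0.25))  # Bruges' version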
Good. So the next logical step would be to define some model density and velocity data (shamelessly taken from Alessandro's notebook, except we now use Bruges' Gardner with S.I. units) and pass the data, together with Bruges' inverse Gardner, to scipy.curve_fit to see if it just works; could it be that simple?
# Make up random velocity and density with Bruges' direct Gardner
import numpy as np
from bruges.rockphysics import gardner  # assumed import path; it may vary with the Bruges version

vp_test = np.linspace(1500, 5500)
rho_test = gardner(vp_test, 310, 0.25)
noise = np.random.uniform(0.1, 0.3, vp_test.shape) * 1000
rho_test = rho_test + noise
The next block is only slightly different from Alessandro's notebook. Instead of using all the data, we split both density and velocity into two pairs of arrays: rho2 and vp2 to optimize for alpha and beta; rho1 for calculating the "unknown" velocities vp_calc1 further down; the last one, vp1, will be used just to show where the real data might have been, had we not had to calculate it.
idx = np.arange(len(vp_test))
np.random.seed(3)
spl1 = np.random.randint(0, len(vp_test), 15)
spl2 = np.setxor1d(idx, spl1)
rho1 = rho_test[spl1]
rho2 = rho_test[spl2]
vp1 = vp_test[spl1]  # this we pretend we do not have
vp2 = vp_test[spl2]
Now, as in Alessandro's notebook, we pass the simple inverse Gardner function to scipy.curve_fit to find optimal alpha and beta parameters, and we print alpha and beta.
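The call looked roughly like this (the starting guess p0 is my own assumption, not necessarily the one used in the notebook):

from scipy.optimize import curve_fit

# fit alpha and beta using the (rho2, vp2) pair
popt, pcov = curve_fit(inv_gardner, rho2, vp2, p0=[300, 0.3])
alpha_opt, beta_opt = popt
print(alpha_opt, beta_opt)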
Doing the same with Bruges' inverse_gardner is where things got odd: we do not get the same parameters and, additionally, there's this warning:
../scipy/optimize/minpack.py:794:
OptimizeWarning: Covariance of the parameters could not be estimated
category=OptimizeWarning)
One possible explanation is that, although both inv_gardner and inverse_gardner take the same three main parameters (rho, alpha, beta), perhaps scipy.curve_fit does not know to expect them because in the latter alpha and beta are pre-assigned.
The workaround was to write a wrapper function to 'map' between the call signature of scipy.curve_fit and that of inverse_gardner, so that it would be 'communicated' to the former explicitly.
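A minimal sketch of such a wrapper (the function name is mine; the actual notebook may differ in the details):

# wrapper exposing alpha and beta explicitly in the call signature,
# so that curve_fit treats them as the free parameters
def inverse_gardner_wrap(rho, alpha, beta):
    return inverse_gardner(rho, alpha=alpha, beta=beta)

popt, pcov = curve_fit(inverse_gardner_wrap, rho2, vp2, p0=[300, 0.3])
print(popt)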
Last weekend I went to California to attend my first ever Python sprint, which was organized at MAZ Café con leche (Santa Ana) by Agile Scientific.
For me this event was a success in many respects. First of all, I wanted to spend some dedicated time working on an open source project, rather than chipping away at it once in a while. Also, participating in a project that was not my own seemed like a good way to challenge myself, by pushing me out of my comfort zone. Finally, this was an opportunity to engage with other members of the Software Underground Slack team, some of whom (for example Jesper Dramsch and Brendon Hall) I'd known for some time but had never actually met in person.
Please read about the sprint in general on Matt Hall's blog post, Café con leche. My post is a short summary of what I did on the first day.
After a tasty breakfast, and at least a good hour of socializing, I sat at a table with three other people interested in working on Bruges (Agile's Python library for geophysics): Jesper Dramsch, Adriana Gordon and Volodymyr Vragov.
As I tweeted that evening, we had a light-hearted start, but then we set to work.
While Adriana and Jesper tackled Bruges' documentation, which was sorely needed, Volodymyr spent some hours on the example notebooks from in-Bruges (a tour of Bruges), which needed fixing, and also on setting up our joint project for day 2 (more in the next post). For my part, I put together a tutorial notebook on how to use Bruges' functions on wireline logs stored in a Pandas DataFrame. According to Matt, this is requested quite often, so it seemed like a good choice.
Let’s say that a number of wells are stored in a DataFrame with both a depth column, and a well name column, in addition to log curves.
The logic for operating on logs individually is this:

Split the wells in the DataFrame using groupby, then
    for each well
        for each of the logs of interest
            do something using one of Bruges' functions (for example, apply a rolling mean)
The code to do that is surprisingly simple, once you've figured it out (I myself often struggle, and not a little, with Pandas at the outset of new projects).
One has to first create a list with the logs of interest, like so:
logs = ['GR', 'RHOB']
then define the length of the window for the rolling operation, and finally loop over wells and logs.
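A minimal sketch of that loop is below; it is not the notebook's exact code, and the DataFrame name df, the column names 'WELL' and 'DEPT', and the window length are all assumptions of mine:

import pandas as pd

window = 9  # length of the rolling window, in samples

smoothed = []
for well, group in df.groupby('WELL'):
    g = group.sort_values('DEPT').copy()
    for log in logs:
        # the 'do something' step: here, a centered rolling mean of each log
        g[log + '_sm'] = g[log].rolling(window, center=True).mean()
    smoothed.append(g)
result = pd.concat(smoothed)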
This guest post (first published here) is by Elwyn Galloway, author of Scibbatical on WordPress. It is the fourth in our series of collaborative articles about sketch2model, a project from the 2015 Calgary Geoscience Hackathon organized by Agile Geoscience. Happy reading.
As Matteo demonstrated by example, sketch2model’s ability to segment a sketch properly depends on the fidelity of a sketch.
An image of a whiteboard sketch (left) divides an area into three sections. Without morphological filtering, sketch2model segments the original image into two sections (identified as orange, purple) (centre). The algorithm correctly segments the area into three sections (orange, purple, green) when morphological filtering is applied (right).
To compensate for sketch imperfections, Matteo suggested morphological filtering on binarized images. Morphological filtering is a family of image processing operations that modify the shape of elements in an image. He suggested using the closing tool for our purposes. Have a look at Matteo's post for insight into this and other morphological filters.
One of the best aspects of this approach is that it is simple to apply. There is essentially one parameter to define: a structuring element. Since you’ve already read Matteo’s post, you recall his onion analogy explaining the morphological filtering processes of erosion and dilation – erosion is akin to removing an onion layer, dilation is adding a layer on. You’ll also recall that the size of the structuring element is the thickness of the layer added to, or removed from, the onion. Essentially, the parameterization of this process comes down to choosing the thickness of the onion layers.
Sketch2model uses dilation followed by erosion to fill gaps left between sketch lines (morphological dilation followed by erosion is closing). Matteo created this really great widget to illustrate closing using an interactive animation.
Matteo’s animation was created using this interactive Jupyter notebook. Closing connects the lines of the sketch.
Some is Good, More is Better?
Matteo showed that closing fails if the structuring element used is too small. So just make it really big, right? Well, there can be too much of a good thing. Compare what happens when you use an appropriately sized structuring element (mild) to the results from an excessively large structuring element (wild).
Comparing the results of mild and wild structuring elements: if the structuring element is too large, the filter compromises the quality of the reproduction.
Using a morphological filter with a structuring element that is too small doesn't fix the sketches, but using a structuring element that is too large compromises the sketch too. We're left to find an element that's just right. Since one of the priorities for sketch2model was to robustly handle a variety of sketches with as little user input as possible — marker on whiteboard, pencil on paper, ink on napkin — we were motivated to find a way to do this without requiring the user to select the size of the structuring element.
Is there a universal solution? Consider this: a sketch captured in two images, each with its own resolution. In one image, the lines of the sketch appear to be approximately 16 pixels wide. The same lines appear to be 32 pixels wide in the other image. Since the size of the structuring element is defined in terms of pixels, it becomes apparent that the ideal structuring element cannot be "one size fits all".
High-resolution (left) versus low-resolution (right) image of the same portion of a sketch. Closing the gap between the lines would require a different size structuring element for each image: about 5 pixels for high-resolution or 1 pixel for low-resolution.
Thinking Like a Human
Still motivated to avoid user parameterization for the structuring element, we explored ways to make the algorithm intelligent enough to select an appropriate structuring element on its own. Ultimately, we had to realize a few things before we came up with something that would work:
When capturing an image of a sketch, users compose very similar images (compose in the photographic sense of the word): sketch is centered and nearly fills the captured image.
The image of a sketch is not the same as a user’s perception of a sketch: a camera may record imperfections (gaps) in a sketch that a user does not perceive.
The insignificance of camera resolution: a sketched feature captured at two different resolutions would have two different lengths (in pixels), but identical lengths when defined as a percentage of image size.
With these insights, we deduced that the gaps we were trying to fill with morphological filtering would be those that escaped the notice of the sketch artist.
Recognizing the importance of accurate sketch reproduction, our solution applies the smallest structuring element possible that will still fill any unintentional gaps in a sketch. It does so in a way that is adaptable.
A discussion about the definition of "unintentional gap" allowed us to create a mandate for the closing portion of our algorithm. Sketch2model should fill gaps the user doesn't notice; detail below the limit of the user's perception should not affect the output model. A quick "literature" (i.e. Google) search revealed that a person's visual perception is affected by many factors beyond the eye's optic limits. Without a simple formula to define a limit, we did what any hacker would do: define it empirically, using a bunch of test images to tweak the structuring element of the closing filter so that it leaves the perceptible gaps and fills in the imperceptible ones. In the sketch2model algorithm, the size of the structuring element is defined as a fraction of the image size, so it was this fraction that we tuned empirically.
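The idea can be sketched in a few lines of Python; this is not sketch2model's actual code, and the fraction below is a placeholder rather than the value we settled on:

from skimage.morphology import binary_closing, disk

def adaptive_closing(binary_img, fraction=0.01):
    # structuring element radius defined as a fraction of the larger image dimension
    radius = max(1, int(round(fraction * max(binary_img.shape))))
    return binary_closing(binary_img, disk(radius))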
Producing Usable Results
Implicit in the implementation is sketch2model's expectation that the user's sketch, and their image of the sketch, are crafted with some care. The expectations are reasonable: connect lines you'd like connected; get a clear image of your sketch. Like so much else in life, better input gives better results.
Input (left) and result (right) of sketch2model.
To produce an adaptable algorithm requiring as little user input as possible, the sketch2model team had to mix a little image processing wizardry with some non-technical insight.
As written by Elwyn in the first post of this series, sketch2model was conceived at the 2015 Calgary Geoscience Hackathon as a web and mobile app that would turn an image of a geological sketch into a geological model, and then use Agile Geoscience's modelr.io to create a synthetic seismic model.
The sketch2model concept: modelling at the speed of imagination. Take a sketch (a), turn it into an earth model (b), create a forward seismic model (c). Our hack takes you from a to b.
One of the main tasks in sketch2model is to identify each and every geological body in a sketch as a closed polygon. As Elwyn wrote, "if the sketch were reproduced exactly as imagined, a segmentation function would do a good job. The trouble is that the sketch captured is rarely the same as the one intended – an artist may accidentally leave small gaps between sketch lines, or the sketch medium can cause unintentional effects (for example, whiteboard markers can erase a little when sketch lines cross, see example below). We applied some morphological filtering to compensate for the sketch imperfections."
Morphological filtering can compensate for imperfections in a sketch, as demonstrated in this example. The original sketch (left) was done with a marker on white board. Notice how the vertical stroke erased a small part of the horizontal one. The binarized version of the sketch (middle) shows an unintentional gap between the strokes, but morphological filtering successfully closes the small gap (right).
The cartoon below shows what the final output of sketch2model would be in the two cases in the example above (non-closed and closed gap).
My objective with this post is to explain visually how we correct for some of these imperfections within sketch2model. I will focus on the use of morphological closing, which consists of applying, in sequence, a dilation and an erosion, the two fundamental morphological operations.
Quick mathematical morphology review
All morphological operations result from the interaction of an image with a structuring element (a kernel) smaller than the image and typically in the shape of a square, disk, or diamond. In most cases the image is binary, that is, pixels take either a value of 1, for the foreground objects, or 0 for the background. The structuring element operates on the foreground objects.
Morphological erosion is used to remove pixels on the foreground objects’ boundaries. How ‘deeply’ the boundaries are eroded depends on the size of the structuring element (and shape, but in this discussion I will ignore the effect of changing the shape). This operation is in my mind analogous to peeling off a layer from an onion; the thickness of the layer is related to the structuring element size.
Twan Maintz, in his book Digital and Medical Image Processing, describes the interaction of image and structuring element during erosion this way: place the structuring element anywhere in the image; if it is fully contained in the foreground object (or in one of the objects), then the origin (central) pixel of the structuring element (and only that one) is part of the eroded output. The book has a great example on page 129.
Dilation does the opposite of erosion: it expands the object boundaries (adding pixels) by an amount that is again related to the size of the structuring element. This is analogous to me to adding back a layer to the onion.
Again, thanks to Maintz the interaction of image and structuring element in dilation can be intuitively described: place the structuring element anywhere in the image: does it touch any of the foreground objects? If yes then the origin of the structuring element is part of the dilated result. Great example on pages 127-128.
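To make the two operations concrete, here is a small, self-contained example (mine, not from Maintz or from sketch2model) eroding and dilating a square with a disk of size 1:

import numpy as np
from skimage.morphology import binary_erosion, binary_dilation, disk

# a 5x5 foreground square in a 9x9 binary image
img = np.zeros((9, 9), dtype=bool)
img[2:7, 2:7] = True

eroded = binary_erosion(img, disk(1))    # peels one layer off the boundary
dilated = binary_dilation(img, disk(1))  # adds one layer back on
print(img.sum(), eroded.sum(), dilated.sum())  # the foreground shrinks, then grows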
Closing is then for me akin to adding a layer to an onion (dilation) and then peeling it back off (erosion), but with the major caveat that some of the changes produced by the dilation are irreversible: background holes smaller than the structuring element that are filled by the dilation are not restored by the erosion. Similarly, lines in the input image separated by a number of pixels smaller than the size of the structuring element are linked by the dilation and not disconnected by the erosion, which is exactly what we wanted for sketch2model.
Closing demo
If you still need further explanation on these morphological operations, I'd recommend reading the sections on erosion, dilation, and closing in the ImageMagick user guide, and the examples on the Scikit-image website.
As discussed in the previous section, when applying closing to a binary image, the external points in any object in the input image will be left unchanged in the output, but holes will be filled, partially or completely, and disconnected objects like edges (or lines in sketches) can become connected.
I will use this model binary image containing two 1-pixel wide lines. Think of them as lines in a sketch that should have been connected, but are not.
We will attempt to connect these lines using morphological closing with a disk-shaped structuring element of size 2. The result is plotted in the binary image below, showing that closing was successful.
But what would have happened with a smaller structuring element, or with a larger one? In the case of a disk of size 1, the closing magic did not happen:
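The comparison can be reproduced with scikit-image along the following lines. The array below is a stand-in geometry of my own (two parallel 1-pixel wide lines separated by a three-pixel gap), not the exact model image used in the post; counting connected components shows that the lines remain separate after closing with a disk of size 1, and become a single object with a disk of size 2:

import numpy as np
from skimage.measure import label
from skimage.morphology import binary_closing, disk

img = np.zeros((12, 12), dtype=bool)
img[4, 1:11] = True   # first line
img[8, 1:11] = True   # second line, three rows away

for radius in (1, 2):
    closed = binary_closing(img, disk(radius))
    print(radius, label(closed).max())  # number of connected objects after closing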
Observing this result, one would increase the size of the structuring element. However, as Elwyn will show in the next post, too big a structuring element would also have detrimental effects, causing subsequent operations to introduce significant artifacts in the final results. This has broader implications for our sketch2model app: how do we automatically select (i.e. without hard-coding it into the program) the appropriate structuring element size? Again, Elwyn will answer that question; in the last section I want to concentrate on explaining how the closing machinery works in this case.
In the next figure I have broken down the closing operation into its component dilation and erosion, and plotted them step by step to show what happens:
So we see that the edges do get linked by the dilation, but by only one pixel, which the following erosion then removes.
And now let's break down the closing with a disk of size two into its components. This is equivalent to applying two consecutive passes of dilation with a disk of size 1, and then two consecutive passes of erosion with a disk of size 1, as demonstrated in the next figure (by the way, had we observed the second panel above carefully, we could have predicted that the dilation with a disk of size two would result in a link 3 pixels wide instead of 1 pixel wide, which the subsequent erosion will not disconnect).
Below is a GIF animated version of this demo, cycling through the above steps; you can also run it yourself by downloading and running the Jupyter notebook on GitHub.
The first post was written by Elwyn Galloway and published on both his Scibbatical blog and here on MyCarta. In that article Elwyn mentioned the need for an adaptive image conditioning workflow for the binarization of photos of geological sketches. Binarization is the process of converting a natural image to a binary image (check this simple but awesome interactive demonstration of binarization), which in our case is necessary to separate the sketch from the background.
First we import one of the photos with sketches and convert it to a grayscale image.
from skimage import io, color

im = io.imread('paper_breaks.png')
im = color.rgb2gray(im[0:-1:2, 0:-1:2])  # downsample by a factor of 2 and convert to grayscale
Next we enhance the grayscale image with a couple of cascaded processes. But before we do that, let's graph the intensity values across the image to understand the degree of contrast between sketch edges and background, which will ultimately determine our success in separating them. We show this in the figure below, on the left, for one column of pixels (y direction); the black line across the input image on the right shows the location of the column selected. It is fairly obvious from the plot on the left that the intensity of the background is not uniform, due to variable light conditions when the photo was taken, and that towards the right of the plot (i.e. towards the bottom of the photo) it gets closer to that of the edges. In some images it might even become lower than the intensity of the edges. This highlights the need for (preemptively) applying the enhancements illustrated in the remainder of the post.
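For reference, a profile like the one in the figure can be plotted along these lines (the column index here is arbitrary, not the one marked in the figure):

import matplotlib.pyplot as plt

col = im.shape[1] // 2  # an arbitrary column of pixels near the middle of the image
plt.plot(im[:, col])
plt.xlabel('row (pixel)')
plt.ylabel('intensity')
plt.show()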
The first enhancement is called compressor, or limiter. I read many years ago that it is used in electronics to find hard edges in data: the idea is to square each element in the data (image, or other type of data), smooth the result (enough to remove high frequency variations but not so much as to eliminate variability), take the square root, and finally divide each element in the input by the square root result.
I experimented with this method (at the time using Matlab and its Image Processing Toolbox) on the same gravity dataset from my 2015 geophysical tutorial in The Leading Edge (see the post Mapping and validating geophysical lineaments with Python). An example of one such experiment is shown in the figure below, where: the top left map is the Bouguer data; the centre top map is the squared data; the top right is the result of a Gaussian blur; the bottom left is the result of the square root; and the centre right is the final output, where the hardest edges in the original data have been enhanced.
The most important parameter in this process is the choice of the smoothing, or blur: with a Gaussian kernel of a different size, more subtle edges are enhanced, as seen in the bottom right map (these are perhaps acquisition-related gridding artifacts).
In our sketch2model implementation the size of the Gaussian kernel is hardcoded; it was chosen following trial and error on multiple photos of sketches, and yielded optimal results in the great majority of them. We were planning to have the kernel size depend on the size of the input image, but left the implementation on our 'future work' list.
Here’s the compressor code from sketch2model:
# compressor or limiter (electronics): find hard edges in data with long
# wavelength variations in amplitude
import numpy as np
from scipy import ndimage

# step 1: square each element in the image to obtain the power function
sqr = im**2
# step 2: gaussian of squared image
flt2 = ndimage.gaussian_filter(sqr, 21)
# step 3: divide the intensity of each original pixel by the square root
# of the smoothed square
cmprs = im / np.sqrt(flt2)
and a plot of the result (same column of pixels as in the previous one):
From the plot above we see that the background intensity is now uniform and the contrast has been improved. We can maximize it with contrast stretching, as below:
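A minimal sketch of that step with scikit-image (the percentile values here are illustrative, not necessarily the ones used in sketch2model):

import numpy as np
from skimage import exposure

# clip to the 2nd and 98th percentiles and stretch the rest to the full range
p2, p98 = np.percentile(cmprs, (2, 98))
stretched = exposure.rescale_intensity(cmprs, in_range=(p2, p98))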
This guest post (first published here) is by Elwyn Galloway, author of Scibbatical on WordPress. It is the first in our series of collaborative articles about sketch2model, a project from the 2015 Calgary Geoscience Hackathon organized by Agile Geoscience. Happy reading.
Collaboration in action. Evan, Matteo, and Elwyn (foreground, L to R) work on sketch2model at the 2015 Calgary Geoscience Hackathon. Photo courtesy of Penny Colton.
Welcome to an epic blog crossover event. Two authors collaborating to tell a single story over the course of several articles.
We’ve each mentioned the sketch2model project on our respective blogs, MyCarta and scibbatical, without giving much detail about it. Apologies if you’ve been waiting anxiously for more. Through the next while, you’ll get to know sketch2model as well as we do.
The sketch2model team came together at the 2015 Geoscience Hackathon (Calgary), hosted by Agile Geoscience. Elwyn and Evan Saltman (epsalt on twitter and GitHub) knew each other from a previous employer, but neither had met Matteo before. All were intrigued by the project idea, and the individual skill sets were diverse enough to combine into a well-rounded group. Ben Bougher, part of the Agile Geoscience team, assisted with the original web interface at the hackathon. Agile’s take on this hackathon can be found on their blog.
Conception
The idea behind sketch2model is that a user should be able to easily create forward seismic models. Modelling at the speed of imagination, allowing seamless transition from idea to synthetic seismic section. It should happen quickly enough to be incorporated into a conversation. It should happen where collaboration happens.
The sketch2model concept: modelling at the speed of imagination. Take a sketch (a), turn it into an earth model (b), create a forward seismic model (c). Our hack takes you from a to b.
Geophysicists like to model wedges, and for good reasons. However, wedge logic can get lost on colleagues. It may not effectively demonstrate the capability of seismic data in a given situation. The idea is not to supplant that kind of modeling, but to enable a new, lighter kind of modeling. Modeling that can easily produce results for twelve different depositional scenarios as quickly as they can be sketched on a whiteboard.
The Hack
Building something mobile to turn a sketch into a synthetic seismic section is a pretty tall order for a weekend. We decided to take a shortcut by leveraging an existing project: Agile’s online seismic modelling package, modelr. The fact that modelr works through any web browser (including a smartphone) kept things mobile. In addition, modelr’s existing functionality allows a user to upload a png image and use it as a rock property model. We chose to use a web API to interface our code with the web application (as a bonus, our approach conveniently fit with the hackathon’s theme of Web). Using modelr’s capabilities, our hack was left with the task of turning a photo of a sketched geologic section into a png image where each geologic body is identified as a different color. An image processing project!
Agile is a strong proponent of Python in geophysics (for reasons nicely articulated in their blog post), and the team was familiar with the language to one extent or another. There was no question that it was the language of choice for this project. And no regrets!
We aimed to create an algorithm robust enough to handle any image of anything a user might sketch while accurately reproducing their intent. Marker on whiteboard presents different challenges than pencil on paper. Light conditions can be highly variable. Sketches can be simple or complex, tidy or messy. When a user leaves a small gap between two lines of the sketch, should the algorithm take the sketch as-is and interpret a single body? Or fill the small gap and interpret two separate bodies?
Our algorithm needs to be robust enough to handle a variety of source images: simple, complex, pencil, marker, paper, white board (check out the glare on the bottom left image). These are some of the test images we used.
Matteo has used image processing for geoscience before, so he landed on an approach for our hack almost instantly: binarize the image to distinguish sketch from background (turn the color image into a binary image via thresholding); identify and segregate geobodies; create an output image with each body colored uniquely.
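In scikit-image, the skeleton of that approach might look like the snippet below. This is only a conceptual sketch of the three steps, not sketch2model's actual code (which, as explained next, needs extra preconditioning before binarization); the file name is hypothetical:

from skimage import io, color, filters, measure
from skimage.color import label2rgb

image = io.imread('sketch.png')
gray = color.rgb2gray(image)
lines = gray < filters.threshold_otsu(gray)  # 1. binarize: dark sketch lines become True
bodies = measure.label(~lines)               # 2. segment the regions bounded by the lines
output = label2rgb(bodies)                   # 3. color each geobody uniquely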
Taking the image of the original sketch (left) and creating a binary image (right) is an integral part of the sketch2model process.
Python has functions to binarize a color image, but for our applications, the results were very inconsistent. We needed a tool that would work for a variety of media in various lighting conditions. Fortunately, Matteo had some tricks up his sleeve to precondition the images before binarization. We landed on a robust flow that can binarize whatever we throw at it. Matteo will be crafting a blog post on this topic to explain what we’ve implemented.
Once the image is binarized, each geological body must be automatically identified as a closed polygon. If the sketch were reproduced exactly as imagined, a segmentation function would do a good job. The trouble is that the sketch captured is rarely the same as the one intended — an artist may accidentally leave small gaps between sketch lines, or the sketch medium can cause unintentional effects (for example, whiteboard markers can erase a little when sketch lines cross, see example below). We applied some morphological filtering to compensate for the sketch imperfections. If applied too liberally, this type of filtering causes unwanted side effects. Elwyn will explore how we struck a balance between filling unintentional gaps and accurate sketch reproduction in an upcoming blog post.
Morphological filtering can compensate for imperfections in a sketch, as demonstrated in this example. The original sketch (left) was done with a marker on white board. Notice how the vertical stroke erased a small part of the horizontal one. The binarized version of the sketch (middle) shows an unintentional gap between the strokes, but morphological filtering successfully closes the small gap (right).
Compared to the binarization and segmentation, generating the output is a snap. With this final step, we’ve transformed a sketch into a png image where each geologic body is a different color. It’s ready to become a synthetic seismic section in modelr.
Into the Wild
“This is so cool. Draw something on a whiteboard and have a synthetic seismogram right on your iPad five seconds later. I mean, that’s magical.”
Sketch2model was a working prototype by the end of the hackathon. It wasn’t the most robust algorithm, but it worked on a good proportion of our test images. The results were promising enough to continue development after the hackathon. Evidently, we weren’t the only ones interested in further development because sketch2model came up on the February 17th episode of Undersampled Radio. Host Matt Hall: “This is so cool. Draw something on a whiteboard and have a synthetic seismogram right on your iPad five seconds later. I mean, that’s magical.”
Since the hackathon, the algorithm and web interface have progressed to the point that you can use it on your own images at sketch2model.com. To integrate this functionality directly into the forward modelling process, sketch2model will become an option in modelr. The team has made this an open-source project, so you’ll also find it on GitHub. Check out the sketch2model repository if you’re interested in the nuts and bolts of the algorithm. Information posted on these sites is scant right now, but we are working to add more information and documentation.
Sketch2model is designed to enable a new kind of collaboration and creativity in subsurface modelling. By applying image processing techniques, our team built a path to an unconventional kind of forward seismic modelling. Development has progressed to the point that we’ve released it into the wild to see how you’ll use it.