# Computer vision in geoscience: recover seismic data from images, introduction

In a recent post titled Unweaving the rainbow, Matt Hall described our joint attempt (partly successful) to create a Python tool to enable recovery of digital data from any pseudo-colour scientific image (and a seismic section in particular, like the one in Figure 1), without any prior knowledge of the colormap.

Figure 1. Test image: a photo of a distorted seismic section on my wall.

Please check our GitHub repository for the code and slides and watch Matt’s talk (very insightful and very entertaining) from the 2017 Calgary Geoconvention below:

In the next two posts, coming up shortly, I will describe in greater detail my contribution to the project, which focused on developing a computer vision pipeline to automatically detect where the seismic section is located in the image, rectify any distortions that might be present, and remove all sorts of annotations and trivia around and inside the section. The full workflow is included below (with sections I–VI developed to date):

• I – Image preparation, enhancement:
1. Convert to grayscale
2. Optional: smooth or blur to remove high-frequency noise
3. Enhance contrast
• II – Find seismic section:
1. Convert to binary with adaptive or other threshold method
2. Find and retain only largest object in binary image
3. Fill its holes
4. Apply opening and dilation to remove minutiae (tick marks and labels)
• III – Define rectification transformation:
1. Detect contour of largest object found in (II). This should be the seismic section.
2. Approximate contour with polygon, with enough tolerance to ensure it has only 4 sides
3. Sort polygon corners using angle from centroid
4. Define new rectangular image using lengths of longest long and longest short sides of initial contour
5. Estimate and output transformation to warp polygon to rectangle
• IV – Warp using transformation
• V – Blank annotations inside seismic section (if rectangular):
1. Pre-process and apply Canny filter
2. Find contours in the Canny edge image smaller than input size
3. Sort contours (by shape and angular relationships, or diagonal lengths)
4. Loop over contours:
    1. Approximate contour
    2. If approximation has 4 points AND the 4 semi-diagonals are of equal length: fill contour and add to mask
• VI – Use mask to remove text inside rectangle in the input and blank (NaN) the whole rectangle.
• VII – Optional: tools to remove arrows and circles/ellipses:
1. For arrows – from the contours in (4), find ones with 7 sides and low convexity (concave); alternatively, use Harris corner detection and count 7 corners, or template matching
2. For ellipses – template matching or regionprops
• VIII – Optional FFT filters to remove timing lines and vertical lines
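As a taste of the key ideas in sections III and IV, here is a minimal, NumPy-only sketch of how a 4-corner polygon can be warped to a rectangle via a homography estimated with the Direct Linear Transform. The corner coordinates are made up for illustration, and this is not the code in mycarta.py; in practice OpenCV's `getPerspectiveTransform` and `warpPerspective` handle the estimation and the warping.

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct Linear Transform for the 4-point case.
    src, dst: (4, 2) arrays of corresponding corner coordinates."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.asarray(rows)
    # the homography is the null-space vector of A (smallest singular value)
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pts):
    """Map (N, 2) points through H using homogeneous coordinates."""
    pts_h = np.column_stack([pts, np.ones(len(pts))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

# made-up corners of a skewed quadrilateral, as a distorted section might appear
src = np.array([[12.0, 8.0], [95.0, 15.0], [90.0, 70.0], [5.0, 60.0]])
# target rectangle sized roughly from the longest sides of the quadrilateral
dst = np.array([[0.0, 0.0], [100.0, 0.0], [100.0, 60.0], [0.0, 60.0]])

H = estimate_homography(src, dst)
# the estimated H maps each source corner onto the rectangle corners
assert np.allclose(apply_homography(H, src), dst, atol=1e-6)
```

To warp the full image rather than just the corners, one would evaluate the inverse mapping at every output pixel and interpolate, which is exactly what `cv2.warpPerspective` does.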

You can download all the tools for the automated workflow (parts I–VI) from GitHub, in the module mycarta.py, together with an example Jupyter Notebook showing how to run it.
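For orientation, the rectangle test in step V (keep a contour when its 4-point approximation has equal semi-diagonals) can be sketched as below; the function name and tolerance are illustrative, not taken from mycarta.py.

```python
import numpy as np

def is_rectangle(corners, tol=0.05):
    """Step V heuristic: a 4-point contour approximation is (close to)
    a rectangle when its four semi-diagonals (the distances from each
    corner to the centroid) are all of the same length.
    tol is the allowed relative spread of those lengths."""
    corners = np.asarray(corners, dtype=float)
    centroid = corners.mean(axis=0)
    semi = np.linalg.norm(corners - centroid, axis=1)
    return (semi.max() - semi.min()) / semi.mean() < tol

rect = [[0, 0], [40, 0], [40, 20], [0, 20]]   # a label box, say
kite = [[0, 0], [30, 10], [0, 20], [-5, 10]]  # 4 points, but not a rectangle
print(is_rectangle(rect), is_rectangle(kite))  # → True False
```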

The first post focuses on the image pre-processing and enhancement, and on the detection of the seismic line (sections I and II, in green); the second one deals with the rectification of the seismic section (sections III and IV, in blue). They are not meant as full tutorials, but rather as a pictorial road map to (partial) success; key Python code snippets will be included and discussed.
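As a flavour of such snippets, here is a NumPy-only sketch of one "other threshold method" from section II, Otsu's method, run on a synthetic stand-in image; the actual notebook would more likely call an OpenCV or scikit-image equivalent.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: choose the threshold that maximizes the
    between-class variance of the intensity histogram.
    gray: 2-D uint8 array; returns an integer threshold in 0-255."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                  # intensity probabilities
    w0 = np.cumsum(p)                      # weight of the "dark" class
    mu = np.cumsum(p * np.arange(256))     # cumulative mean
    mu_t = mu[-1]                          # global mean
    # between-class variance; epsilon avoids 0/0 at the histogram ends
    sigma_b2 = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0) + 1e-12)
    return int(np.argmax(sigma_b2))

# synthetic stand-in: a dark "section" on a bright page, plus noise
rng = np.random.default_rng(0)
img = np.clip(rng.normal(200, 10, (120, 160)), 0, 255).astype(np.uint8)
img[30:90, 40:120] = np.clip(rng.normal(60, 10, (60, 80)), 0, 255).astype(np.uint8)

t = otsu_threshold(img)
binary = img <= t   # True on the dark seismic section
```

From `binary`, retaining the largest object and filling its holes (section II, steps 2–3) can be done with `scipy.ndimage.label` and `scipy.ndimage.binary_fill_holes`, or their OpenCV/scikit-image counterparts.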

# The rainbow is dead…long live the rainbow! – Perceptual palettes, part 1

#### Introduction

This is the first post in a series on the rainbow and similar color palettes. My goal is to demonstrate that it is not a good idea to use these palettes to display scientific data, and then to answer two questions: (1) is there anything we can do to “fix” the rainbow, and (2) if not, can we design a new one from scratch?

#### The rainbow is dead…some examples

In a previous post I showed a pseudo-3D rendering of my left hand X-ray using intensity (which is a measure of bone thickness) as the elevation. I mapped the rendering to both grayscale and rainbow color palettes, and here I reproduce the two images side by side:

I used this example to argue (briefly) that the rainbow obscures some details and confuses images by introducing artifacts. Notice that in this case it clearly reduces the effectiveness of the pseudo-3D rendering, and it introduces inversions in the perception of elevation: the thick part at the head of the radius bone, indicated by the arrow, looks like a depression, whereas it is clearly (and correctly) a high in the grayscale version.
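One quick way to see why such inversions happen is to look at the approximate perceived lightness along the rainbow. The sketch below uses Rec. 601 luma on a few hand-picked stops of a jet-like rainbow; a rigorous check would use CIE L* (for example via scikit-image), but the point survives the approximation: lightness is not monotonic along the palette.

```python
def luma(rgb):
    """Approximate perceived lightness (Rec. 601 luma) of an RGB triplet in [0, 1]."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

# hand-picked stops along a jet-like rainbow, low data value -> high
rainbow = [
    ("blue",   (0.0, 0.0, 1.0)),
    ("cyan",   (0.0, 1.0, 1.0)),
    ("green",  (0.0, 1.0, 0.0)),
    ("yellow", (1.0, 1.0, 0.0)),
    ("red",    (1.0, 0.0, 0.0)),
]
lightness = {name: round(luma(rgb), 3) for name, rgb in rainbow}
# lightness goes 0.114, 0.701, 0.587, 0.886, 0.299: it rises, dips at green,
# peaks at yellow, then drops sharply at red, so the highest data values can
# appear darker than mid-range ones
```

This non-monotonic lightness is exactly what makes a high (mapped to red) read as darker, and hence lower, than the mid-range values around it.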