Computer vision in geoscience: recover seismic data from images – part 2

In part 1 of this short series I demonstrated how to detect the portion occupied by the seismic section in an image (Figure 1).

Figure 1

The result was a single binary image with the white object representing the pixels occupied by the seismic section (Figure 2).

Figure 2

You can download from GitHub all the tools for the automated workflow (including both part 1 and part 2, and some of the optional features outlined in the introduction) in the module mycarta.py, as well as an example Jupyter Notebook showing how to run it.

Next I want to use this binary object to derive a transformation function to rectify the seismic section in the input image to a rectangle.

The first step is to detect the contour of the object. Notice that, because we used morphological operations, it is not a perfect quadrilateral: it has rounded corners and some of the sides are bent. The second step, therefore, will be to approximate the contour with a polygon with enough tolerance to ensure it has only 4 sides (this took some trial and error, but 25 turned out to be a good value for the parameter for a whole lot of test images I tried).

In reality, the two steps are performed together using the functions find_contours (there is only one contour to find, really) and approximate_polygon from the skimage.measure module, as below:

import numpy as np
from skimage.measure import find_contours, approximate_polygon
contour = np.squeeze(find_contours(enhanced, 0))
coords = approximate_polygon(contour, tolerance=25)

The variable coords contains the coordinates of the corner points of the polygon (the first point is repeated last to close the polygon), which in Figure 3 I plotted superimposed on the input binary object.

Figure 3 – approximated polygon

A problem with the output of approximate_polygon is that the points are not ordered; to solve it, I adapted a function from a Stack Overflow answer to sort them based on the angle from their centroid:

def ordered(points):
    x = points[:, 0]
    y = points[:, 1]
    # centroid of the corner points
    cx = np.mean(x)
    cy = np.mean(y)
    # angle of each corner about the centroid
    a = np.arctan2(y - cy, x - cx)
    # sort the corners by that angle
    order = a.ravel().argsort()
    x = x[order]
    y = y[order]
    return np.vstack([x, y])

I call the function as below to get the corners in the contour, without the last one (a repetition of the first point):

sortedCoords = ordered(coords[:-1]).T

I can then plot them using colors in a predefined order to convince myself that they are indeed sorted:

import matplotlib.pyplot as plt
plt.scatter(sortedCoords[:, 1], sortedCoords[:, 0], s=60,
            color=['magenta', 'cyan', 'orange', 'green'])

Figure 4 – corners sorted in counter-clockwise order

The next bit of code may seem a bit complicated but it is not. With coordinates of the corners known, and their order as well, I can calculate the largest width and height of the input seismic section, and I use them to define the size of the registered output section, which is to be of rectangular shape:

# largest width of the section: distances between corners 0 and 3, and 1 and 2
w1 = np.sqrt(((sortedCoords[0, 1]-sortedCoords[3, 1])**2)
  +((sortedCoords[0, 0]-sortedCoords[3, 0])**2))
w2 = np.sqrt(((sortedCoords[1, 1]-sortedCoords[2, 1])**2)
  +((sortedCoords[1, 0]-sortedCoords[2, 0])**2))

# largest height: distances between corners 0 and 1, and 3 and 2
h1 = np.sqrt(((sortedCoords[0, 1]-sortedCoords[1, 1])**2)
  +((sortedCoords[0, 0]-sortedCoords[1, 0])**2))
h2 = np.sqrt(((sortedCoords[3, 1]-sortedCoords[2, 1])**2)
  +((sortedCoords[3, 0]-sortedCoords[2, 0])**2))

# size of the output rectangle
w = max(int(w1), int(w2))
h = max(int(h1), int(h2))

and with those I define the coordinates of the output corners used to derive the transformation function:

dst = np.array([
    [0, 0],
    [h-1, 0],
    [h-1, w-1],
    [0, w-1]], dtype='float32')

Now I have everything I need to rectify the seismic section in the input image: it is warped using homologous points (the two sets of four corners) and a transformation function.

import skimage.transform

# swap (row, column) into (x, y) order for the transform (see note below)
dst[:, [0, 1]] = dst[:, [1, 0]]
sortedCoords[:, [0, 1]] = sortedCoords[:, [1, 0]]
tform = skimage.transform.ProjectiveTransform()
tform.estimate(dst, sortedCoords)
warped = skimage.transform.warp(img, tform, output_shape=(h-1, w-1))

Notice that I had to swap the x and y coordinates before estimating the transformation: find_contours returns points as (row, column) pairs, whereas the transforms in scikit-image expect (x, y), that is (column, row), coordinates. The result is shown in Figure 5: et voilà!

Figure 5 – rectified seismic section

You can download from GitHub the code to try this yourself (both part 1 and part 2, and some of the optional features outlined in the introduction, like removing the rectangle with label inside the section) as well as an example Jupyter Notebook showing how to run it.

Computer vision in geoscience: recover seismic data from images – part 1

As anticipated in the introductory post of this short series I am going to demonstrate how to automatically detect where a seismic section is located in an image (be it a picture taken from your wall, or a screen capture from a research paper), rectify any distortions that might be present, and remove all sorts of annotations and trivia around and inside the section.

You can download from GitHub all the tools for the automated workflow (including both part 1 and part 2, and some of the optional features outlined in the introduction) in the module mycarta.py, as well as an example Jupyter Notebook showing how to run it.

In this part one I will be focusing on the image preparation and enhancement, and on the automatic detection of the seismic section (all done using functions from numpy, scipy, and scikit-image). First I convert the input image (Figure 1) containing the seismic section to grayscale, and then enhance it by increasing the image contrast (Figure 2).

Figure 1 – input image


Figure 2 – grayscale image

All it takes to do that is a few lines of code, as follows:

import numpy as np
from skimage import color, exposure
gry = color.rgb2gray(img)
p2, p95 = np.percentile(gry, (2, 95))
rescale = exposure.rescale_intensity(gry, in_range=(p2, p95))

For a good visual intuition of what actually happens during contrast stretching, check my post sketch2model – sketch image enhancements: in there I show intensity profiles taken across the same image before and after the process.
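For a quick check on your own image, the same kind of profile can be graphed with matplotlib. A minimal sketch, assuming the gry and rescale arrays from above (the choice of row is arbitrary, purely for illustration):

import matplotlib.pyplot as plt

row = gry.shape[0] // 2  # an arbitrary row, for illustration
plt.plot(gry[row, :], label='grayscale')
plt.plot(rescale[row, :], label='contrast stretched')
plt.legend()
plt.show()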

Finding the seismic section in this image involves four steps:

  1. converting the grayscale image to binary with a threshold (in this example a global threshold with the Otsu method)
  2. finding and retaining only the largest object in the binary image (heuristically assumed to be the seismic section)
  3. filling its holes
  4. applying morphological operations to remove minutiae (tick marks and labels)

Below I list the code, and show the results.

from skimage.filters import threshold_otsu
global_thresh = threshold_otsu(rescale)
binary_global = rescale < global_thresh

Figure 3 – binary image

import scipy.ndimage
from skimage.morphology import remove_small_objects

# (1) label all white objects (the ones in the binary image).
# scipy.ndimage.label labels the background (0s) as 0 and then every
# non-connected, nonzero object as 1, 2, ... n.
label_objects, nb_labels = scipy.ndimage.label(binary_global)

# (2) calculate every labeled object's binary size (including that
# of the background)
sizes = np.bincount(label_objects.ravel())

# (3) set the size of the background to 0 so that if it happened to be
# larger than the largest white object it would not matter
sizes[0] = 0

# (4) keep only the largest object
binary_objects = remove_small_objects(binary_global, max(sizes))

Figure 4 – isolated seismic section

# Remove holes (black regions inside the white object)
binary_holes = scipy.ndimage.binary_fill_holes(binary_objects)

Figure 5 – holes removed

from skimage.morphology import opening, disk
enhanced = opening(binary_holes, disk(7))

Figure 6 – removed residual tick marks and labels

That’s it!!!


In the next post, we will use this polygonal binary object both as a basis to capture the actual coloured seismic section from the input image and to derive a transformation to rectify it to a rectangle.

Computer vision in geoscience: recover seismic data from images, introduction

In a recent post titled Unweaving the rainbow, Matt Hall described our joint attempt (partly successful) to create a Python tool to enable recovery of digital data from any pseudo-colour scientific image (and a seismic section in particular, like the one in Figure 1), without any prior knowledge of the colormap.


Figure 1. Test image: a photo of a distorted seismic section on my wall.

Please check our GitHub repository for the code and slides, and watch Matt’s talk (very insightful and very entertaining) from the 2017 Calgary Geoconvention.

In the next two posts, coming up shortly, I will describe in greater detail my contribution to the project, which focused on developing a computer vision pipeline to automatically detect where the seismic section is located in the image, rectify any distortions that might be present, and remove all sorts of annotations and trivia around and inside the section. The full workflow is included below (with sections I-VI developed to date):

  • I – Image preparation, enhancement:
    1. Convert to gray scale
    2. Optional: smooth or blur to remove high frequency noise
    3. Enhance contrast
  • II – Find seismic section:
    1. Convert to binary with adaptive or other threshold method
    2. Find and retain only largest object in binary image
    3. Fill its holes
    4. Apply opening and dilation to remove minutiae (tick marks and labels)
  • III – Define rectification transformation:
    1. Detect the contour of the largest object found in (2). This should be the seismic section.
    2. Approximate contour with polygon with enough tolerance to ensure it has 4 sides only
    3. Sort polygon corners using angle from centroid
    4. Define new rectangular image using the lengths of the largest long and largest short sides of the initial contour
    5. Estimate and output transformation to warp polygon to rectangle
  • IV – Warp using transformation
  • V – Blanking annotations inside seismic section (if rectangular):
    1. Start with output of (4)
    2. Pre-process and apply the Canny edge filter
    3. Find contours in the Canny edge image smaller than the input size
    4. Sort contours (by shape and angular relationships or diagonal lengths)
    5. Loop over contours:
      1. Approximate contour
      2. If the approximation has 4 points AND the 4 semi-diagonals are of the same length: fill the contour and add it to the mask (a sketch of this test follows the list)
  • VI – Use mask to remove text inside rectangle in the input and blank (NaN) the whole rectangle. 
  • VII – Optional: tools to remove arrows and circles/ellipses:
    1. For arrows – among the contours from (4), find ones with 7 sides and low convexity (concave); alternatively, use Harris corner detection and count 7 corners, or template matching
    2. For ellipses – template matching or regionprops
  • VIII – Optional: FFT filters to remove timing lines and vertical lines
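As an illustration of the rectangle test in section V (referenced in the list above), here is a minimal, stand-alone sketch using scikit-image. The function name and the two thresholds (the polygon tolerance and the 15% spread allowed between semi-diagonal lengths) are illustrative choices, not the project's tuned values:

import numpy as np
from skimage.feature import canny
from skimage.measure import find_contours, approximate_polygon
from skimage.draw import polygon

def label_box_mask(gray_img, tol=5, rel_tol=0.15):
    # sketch of section V: mask small rectangles (e.g. label boxes)
    edges = canny(gray_img)
    mask = np.zeros(gray_img.shape, dtype=bool)
    for contour in find_contours(edges, 0.5):
        approx = approximate_polygon(contour, tolerance=tol)
        if np.allclose(approx[0], approx[-1]):
            approx = approx[:-1]  # drop the repeated closing point
        if len(approx) != 4:
            continue  # not a quadrilateral
        centroid = approx.mean(axis=0)
        semi = np.linalg.norm(approx - centroid, axis=1)  # the 4 semi-diagonals
        if semi.std() / semi.mean() < rel_tol:  # near-equal lengths: a rectangle
            rr, cc = polygon(approx[:, 0], approx[:, 1], mask.shape)
            mask[rr, cc] = True
    return mask

The returned mask would then feed section VI, for example by setting the masked pixels of a float image to NaN.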

You can download from GitHub all the tools for the automated workflow (parts I-VI) in the module mycarta.py, as well as an example Jupyter Notebook showing how to run it.

The first post focuses on the image pre-processing and enhancement, and the detection of the seismic line (sections I and II); the second one deals with the rectification of the seismic section (sections III and IV). They are not meant as full tutorials, rather as a pictorial road map to (partial) success, but key Python code snippets will be included and discussed.

Mild or wild: robustness through morphological filtering

This guest post (first published here) is by Elwyn Galloway, author of Scibbatical on WordPress. It is the fourth in our series of collaborative articles about sketch2model, a project from the 2015 Calgary Geoscience Hackathon organized by Agile Geoscience. Happy reading.


We’re highlighting a key issue that came up in our project, and describing how we tackled it. Matteo’s post on Morphological Filtering does a great job of explaining what we implemented in sketch2model. I’ll build on his post to explain the why and how. In case you need a refresher on sketch2model, look back at sketch2model, Sketch Image Enhancement, and Linking Edges with Mathematical Morphology.

Morphological Filtering

As Matteo demonstrated by example, sketch2model’s ability to segment a sketch properly depends on the fidelity of a sketch.


An image of a whiteboard sketch (left) divides an area into three sections. Without morphological filtering, sketch2model segments the original image into two sections (identified as orange, purple) (centre). The algorithm correctly segments the area into three sections (orange, purple, green) when morphological filtering is applied (right).

To compensate for sketch imperfections, Matteo suggested morphological filtering on binarized images. Morphological filtering is a set of image modification tools which modify the shape of elements in an image. He suggested using the closing tool for our purposes. Have a look at Matteo’s Post for insight into this and other morphological filters.

One of the best aspects of this approach is that it is simple to apply. There is essentially one parameter to define: a structuring element. Since you’ve already read Matteo’s post, you recall his onion analogy explaining the morphological filtering processes of erosion and dilation – erosion is akin to removing an onion layer, dilation is adding a layer on. You’ll also recall that the size of the structuring element is the thickness of the layer added to, or removed from, the onion. Essentially, the parameterization of this process comes down to choosing the thickness of the onion layers.

Sketch2model uses dilation followed by erosion to fill gaps left between sketch lines (morphological dilation followed by erosion is closing). Matteo created this really great widget to illustrate closing using an interactive animation.


Matteo’s animation was created using this interactive Jupyter notebook. Closing connects the lines of the sketch.

Some is Good, More is Better?

Matteo showed that closing fails if the structural element used is too small. So just make it really big, right? Well, there can be too much of a good thing. Compare what happens when you use an appropriately sized structuring element (mild) to the results from an excessively large structuring element (wild).


Comparing the results of mild and wild structuring elements: if the structuring element is too large, the filter compromises the quality of the reproduction.

Using a morphological filter with a structuring element that is too small doesn’t fix the sketches, but using one that is too large compromises the sketch too. We’re left to find an element that’s just right. Since one of the priorities for sketch2model was to robustly handle a variety of sketches with as little user input as possible — marker on whiteboard, pencil on paper, ink on napkin — we were motivated to find a way to do this without requiring the user to select the size of the structuring element.

Is there a universal solution? Consider this: a sketch captured in two images, each with their own resolution. In one image, the lines of the sketch appear to be approximately 16 pixels wide. The same lines appear to be 32 pixels wide in the other image. Since the size of the structuring element is defined in terms of pixels, it becomes apparent the ideal structuring element cannot be “one size fits all”.


High-resolution (left) versus low-resolution (right) image of the same portion of a sketch. Closing the gap between the lines would require a different size structuring element for each image: about 5 pixels for high-resolution or 1 pixel for low-resolution.

Thinking Like a Human

Still motivated to avoid user parameterization for the structuring element, we explored ways to make the algorithm intelligent enough to select an appropriate structuring element on its own. Ultimately, we had to realize a few things before we came up with something that would work:

  1. When capturing an image of a sketch, users compose very similar images (compose in the photographic sense of the word): sketch is centered and nearly fills the captured image.
  2. The image of a sketch is not the same as a user’s perception of a sketch: a camera may record imperfections (gaps) in a sketch that a user does not perceive.
  3. The insignificance of camera resolution: a sketched feature captured at two different resolutions would have two different lengths (in pixels), but identical lengths when defined as a percentage of image size.

With these insights, we deduced that the gaps we were trying to fill with morphological filtering would be those that escaped the notice of the sketch artist.

Recognizing the importance of accurate sketch reproduction, our solution applies the smallest structuring element possible that will still fill any unintentional gaps in a sketch. It does so in a way that is adaptable.

A discussion about the definition of “unintentional gap” allowed us to create a mandate for the closing portion of our algorithm. Sketch2model should fill gaps the user doesn’t notice; detail below the limit of the user’s perception should not affect the output model. A quick “literature” (i.e. Google) search revealed that a person’s visual perception is affected by many factors beyond the eye’s optic limits. Without a simple formula to define a limit, we did what any hacker would do: define it empirically. We used a bunch of test images to tweak the structuring element of the closing filter until it left the perceptible gaps and filled in the imperceptible ones. In the sketch2model algorithm, the size of the structuring element is defined as a fraction of the image size, so it was this fraction that we tuned empirically.
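Here is a minimal sketch of that idea, assuming a binary input image and a disk-shaped element; the 1% fraction below is purely illustrative, not the empirically tuned sketch2model value:

from skimage.morphology import binary_closing, disk

def adaptive_closing(binary_img, fraction=0.01):
    # scale the structuring element to the image size,
    # not to a fixed number of pixels
    radius = max(1, int(round(fraction * max(binary_img.shape))))
    return binary_closing(binary_img, disk(radius))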

Producing Usable Results

Implicit in the implementation is sketch2model’s expectation that the user’s sketch, and their image of the sketch are crafted with some care. The expectations are reasonable: connect lines you’d like connected; get a clear image of your sketch. Like so much else in life, better input gives better results.


Input (left) and result (right) of sketch2model.

To produce an adaptable algorithm requiring as little user input as possible, the sketch2model team had to mix a little image processing wizardry with some non-technical insight.

Have you tried it? You can find it at sketch2model.com. Also on GitHub.


Previous posts in the sketch2model series: sketch2model, Sketch Image Enhancement, Linking Edges with Mathematical Morphology.

sketch2model – linking edges with mathematical morphology

Introduction

As written by Elwyn in the first post of this series, sketch2model was conceived at the 2015 Calgary Geoscience Hackathon as a web and mobile app that would turn an image of a geological sketch into a geological model, and then use Agile Geoscience’s modelr.io to create a synthetic seismic model.


The sketch2model concept: modelling at the speed of imagination. Take a sketch (a), turn it into an earth model (b), create a forward seismic model (c). Our hack takes you from a to b.

One of the main tasks in sketch2model is to identify each and every geological body in a sketch as a closed polygon. As Elwyn wrote, “if the sketch were reproduced exactly as imagined, a segmentation function would do a good job. The trouble is that the sketch captured is rarely the same as the one intended – an artist may accidentally leave small gaps between sketch lines, or the sketch medium can cause unintentional effects (for example, whiteboard markers can erase a little when sketch lines cross, see example below)”. We applied some morphological filtering to compensate for the sketch imperfections.

Morphological filtering can compensate for imperfections in a sketch, as demonstrated in this example. The original sketch (left) was done with a marker on white board. Notice how the vertical stroke erased a small part of the horizontal one. The binarized version of the sketch (middle) shows an unintentional gap between the strokes, but morphological filtering successfully closes the small gap (right).


The cartoon below shows what the final output of sketch2model would be in the two cases in the example above (gap left open, and gap closed).

Final sketch2model output in the two cases: gap left open, and gap closed.

My objective with this post is to explain visually how we correct for some of these imperfections within sketch2model. I will focus on the use of morphological closing, which consists of applying in sequence a dilation and an erosion, the two fundamental morphological operations.

Quick mathematical morphology review

All morphological operations result from the interaction of an image with a structuring element (a kernel) smaller than the image and typically in the shape of a square, disk, or diamond. In most cases the image is binary, that is, pixels take a value of either 1 for the foreground objects or 0 for the background. The structuring element operates on the foreground objects.
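For instance, scikit-image can generate these shapes directly; printing them shows the pixel footprint that will interact with the image:

from skimage.morphology import disk, square

print(square(3))  # a 3x3 block of ones
print(disk(2))    # a discrete Euclidean disk of radius 2, as a 5x5 array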

Morphological erosion is used to remove pixels on the foreground objects’ boundaries. How ‘deeply’ the boundaries are eroded depends on the size of the structuring element (and shape, but in this discussion I will ignore the effect of changing the shape). This operation is in my mind analogous to peeling off a layer from an onion; the thickness of the layer is related to the structuring element size.

Twan Maintz in his book Digital and medical image processing describes the interaction of image and structuring element during erosion this way: place the structuring element anywhere in the image: if it is fully contained in the foreground object (or in one of the objects) then the origin (central) pixel of the structuring element (and only that one) is part of the eroded output. The book has a great example on page 129.

Dilation does the opposite of erosion: it expands the object boundaries (adding pixels) by an amount that is again related to the size of the structuring element. This is analogous to me to adding back a layer to the onion.

Again, thanks to Maintz the interaction of image and structuring element in dilation can be intuitively described: place the structuring element anywhere in the image: does it touch any of the foreground objects? If yes then the origin of the structuring element is part of the dilated result. Great example on pages 127-128.

Closing is then for me akin to adding a layer to an onion (dilation) and then peeling it back off (erosion) but with the major caveat that some of the changes produced by the dilation are irreversible: background holes smaller than the structuring element that are filled by the dilation are not restored by the erosion. Similarly, lines in the input image separated by an amount of pixels smaller than the size of the structuring element are linked by the dilation and not disconnected by the erosion, which is exactly what we wanted for sketch2model.
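To make this concrete, here is a small sketch (using scikit-image, and a square structuring element for simplicity) showing the onion-peeling effect and verifying that closing is indeed a dilation followed by an erosion:

import numpy as np
from skimage.morphology import (binary_closing, binary_dilation,
                                binary_erosion, square)

# a 4x4 foreground square in the middle of an 8x8 image
img = np.zeros((8, 8), dtype=bool)
img[2:6, 2:6] = True

se = square(3)
eroded = binary_erosion(img, se)    # one layer peeled off: a 2x2 square remains
dilated = binary_dilation(img, se)  # one layer added: a 6x6 square

# closing = dilation followed by erosion
print(np.array_equal(binary_closing(img, se),
                     binary_erosion(binary_dilation(img, se), se)))  # True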

Closing demo

If you still need further explanation on these morphological operations, I’d recommend reading the sections on erosion, dilation, and closing in the ImageMagick user guide, and the examples on the scikit-image website.

As discussed in the previous section, when applying closing to a binary image, the external points in any object in the input image will be left unchanged in the output, but holes will be filled, partially or completely, and disconnected objects like edges (or lines in sketches) can become connected.

We will now demonstrate it below with Python-made graphics but without code; however, you can grab the Jupyter notebook with complete Python code on GitHub.

I will use this model binary image containing two 1-pixel wide lines. Think of them as lines in a sketch that should have been connected, but are not.

We will attempt to connect these lines using morphological closing with a disk-shaped structuring element of size 2. The result is plotted in the binary image below, showing that closing was successful.

But what would have happened with a smaller structuring element, or with a larger one? In the case of a disk of size 1, the closing magic did not happen.
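Here is a minimal, stand-alone sketch of the same experiment. It uses a made-up model image (two collinear 1-pixel-wide segments with a 3-pixel gap) and square structuring elements rather than the notebook’s disks, because their discrete geometry is easier to reason about in text: the size-5 square bridges the gap, while the size-3 one does not.

import numpy as np
from scipy.ndimage import label
from skimage.morphology import binary_closing, square

# two collinear 1-pixel-wide segments separated by a 3-pixel gap
img = np.zeros((9, 16), dtype=bool)
img[4, 1:6] = True    # left segment, columns 1-5
img[4, 9:15] = True   # right segment, columns 9-14

small = binary_closing(img, square(3))  # gap survives: still two objects
large = binary_closing(img, square(5))  # gap is bridged: one object
print(label(small)[1], label(large)[1])  # 2 1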

Observing this result, one would increase the size of the structuring element. However, as Elwyn will show in the next post, a structuring element that is too big would also have detrimental effects, causing subsequent operations to introduce significant artifacts in the final results. This has broader implications for our sketch2model app: how do we automatically select (i.e. without hard coding it into the program) the appropriate structuring element size? Again, Elwyn will answer that question; in the last section I want to concentrate on explaining how the closing machinery works in this case.

In the next figure I have broken down the closing operation into its component dilation and erosion, and plotted them step by step to show what happens:

The closing with a disk of size 1, broken down into its component dilation and erosion steps.

So we see that the edges do get linked by the dilation, but by only one pixel, which the following erosion then removes.

And now let’s break down the closing with the disk of size two into its components. This is equivalent to applying two consecutive passes of dilation with a disk of size 1, and then two consecutive passes of erosion with a disk of size 1, as demonstrated in the next figure below (by the way, had we observed the second panel above carefully, we could have predicted that the dilation with a disk of size two would result in a link 3 pixels wide instead of 1 pixel wide, which the subsequent erosion will not disconnect).

The closing with a disk of size 2, broken down into two passes of dilation with a disk of size 1, followed by two passes of erosion.

Below is a GIF animated version of this demo, cycling through the above steps; you can also run it yourself by downloading and running the Jupyter notebook on GitHub.


Additional resources

Closing Jupyter notebook with complete Python code on GitHub

sketch2model Jupyter notebook with complete Python code on GitHub 

More reading on Closing, with examples

Related Posts

sketch2model (2015 Geoscience Hackathon, Calgary)

sketch2model – sketch image enhancements

Mapping and validating geophysical lineaments with Python

sketch2model – sketch image enhancements

This is the second post in a series of collaborative articles about sketch2model, a project from the 2015 Calgary Geoscience Hackathon organized by Agile Geoscience.

The first post was written by Elwyn Galloway and published on both his Scibbatical blog and here on MyCarta. In that article Elwyn mentioned the need for an adaptive image conditioning workflow for the binarization of photos of geological sketches. Binarization is the process of converting a natural image to a binary image (check this simple but awesome interactive demonstration of binarization), which in our case is necessary to separate the sketch from the background.

The following is a demonstration of the preliminary image processing operations applied to the input photo when sketch2model is run. The full code listing of this demonstration is available as a Jupyter notebook on GitHub. Also in GitHub you can find a Jupyter Notebook with the fully documented version of sketch2model.

First we import one of the photos with sketches and convert it to a grayscale image.

from skimage import color, io
im = io.imread('paper_breaks.png')
# downsample by 2 in each direction, then convert to grayscale
im = color.rgb2gray(im[0:-1:2, 0:-1:2])

The input photo, downsampled and converted to grayscale.

Next we enhance the grayscale image with a couple of cascaded processes. But before we do that, let’s graph the intensity values across the image to understand the degree of contrast between sketch edges and background, which will ultimately determine our success in separating them. We show this in the figure below, on the left, for one column of pixels (y direction). The black line across the input image on the right shows the location of the column selected. It is fairly obvious from the plot on the left that the intensity of the background is not uniform, due to variable light conditions when the photo was taken, and that towards the right (i.e. the bottom of the photo) it gets closer to that of the edges. In some images it might even become less than the intensity of the edges. This highlights the need for (preemptively) applying the enhancements illustrated in the remainder of the post.

Intensity profile along one column of the grayscale image (left); the black line on the image (right) marks the location of the column.
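Such a profile can be graphed with matplotlib along any column of the grayscale image. A minimal sketch (the column choice here is arbitrary, not the marked one in the figure):

import matplotlib.pyplot as plt

col = im.shape[1] // 2  # an arbitrary column, for illustration
plt.plot(im[:, col])
plt.xlabel('row (y direction)')
plt.ylabel('intensity')
plt.show()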

The first enhancement is called compressor, or limiter. I read many years ago that it is used in electronics to find hard edges in data: the idea is to square each element in the data (image, or other type of data), smooth the result (enough to remove high frequency variations but not so much as to eliminate variability), take the square root, and finally divide each element in the input by the square root result.

I experimented with this method (at the time using Matlab and its Image Processing Toolbox) using the same gravity dataset from my 2015 geophysical tutorial in The Leading Edge (see the post Mapping and validating geophysical lineaments with Python). An example of one such experiment is shown in the figure below, where: the top left map is the Bouguer data; the top centre map is the squared data; the top right is the result of a Gaussian blur; the bottom left is the result of the square root; and the bottom centre is the final output, where the hardest edges in the original data have been enhanced.

The compressor applied to a gravity map: Bouguer data, squared data, Gaussian blur, square root, final output, and a version with a different kernel size.

The most important parameter in this process is the choice of the smoothing, or blur; using a Gaussian kernel of a different size, more subtle edges are enhanced, as seen in the bottom right map (these are perhaps acquisition-related gridding artifacts).

In our sketch2model implementation the size of the Gaussian kernel is hardcoded; it was chosen following trial and error on multiple photos of sketches, and yielded optimal results in the great majority of them. We were planning to have the kernel size depend on the size of the input image, but left the implementation on our ‘future work’ list.

Here’s the compressor code from sketch2model:

import numpy as np
from scipy.ndimage import gaussian_filter

# compressor or limiter (electronics): find hard edges in data with long
# wavelength variations in amplitude
# step 1: square each element in the image to obtain the power function
sqr = im**2
# step 2: gaussian of squared image
flt2 = gaussian_filter(sqr, 21)
# step 3: divide the intensity of each original pixel by the square root
# of the smoothed square
cmprs = im / np.sqrt(flt2)

and a plot of the result (same column of pixels as in the previous one):

Intensity profile along the same column after applying the compressor.

From the plot above we see that now the background intensity is uniform and the contrast has been improved. We can maximize it with contrast stretching, as below:

from skimage import exposure

# contrast stretching
p2, p98 = np.percentile(cmprs, (2, 98))
rescale = exposure.rescale_intensity(cmprs, in_range=(p2, p98))

Intensity profile after contrast stretching.

We now have ideal contrast between edges and background, and can get a binary image with the desired sketch edges using a scalar threshold:

# binarize with a scalar threshold, inverting so sketch lines become foreground
binary = ~(rescale > 0.5)

Profile of the final binary image.

Bingo!