Simulating seismic surveys using King Tut’s CAT scan

The remote sensing used to study the human body is very similar to the remote sensing used to study the subsurface. Apart from a scaling factor (due to the different frequencies of the signals used), the only major difference between the two methods of investigation is that radiologists and doctors looking at an x-ray, ultrasound, or CAT scan image know what to look for in those images, as bones, tissues, and anomalies have known characteristics, whereas the subsurface is always, to a large extent, unknown.

In this short visual post I am going to use a CAT scan of King Tut’s skull to explore the effect on image quality of progressively decimating the data and then upsampling it back to its initial size. I will also look at the effect of these manipulations on the results of edge detection.

With this I want to simulate the progressive reduction in imaging quality that happens when going from a high-density 3D seismic acquisition, to a medium-density 3D acquisition, to high-quality but sparse 2D seismic lines.

Here’s the input image in Figure 1.


Figure 1. CAT scan of King Tut’s skull – Supreme Council of Antiquities. guardians.net/hawass/press_release_tutankhamun_ct_scan_results.htm


In Figure 2 I am showing the image after import into a Jupyter Notebook and conversion to grayscale, and the result of edge detection using the Sobel filter. Notice the excellent quality of the edge detection result.


Figure 2. Original image, or ground truth for the experiment, and edge detection result.
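If you want to follow along, here is a minimal sketch of this step, assuming scikit-image and matplotlib (the file name below is a placeholder, not necessarily the one used in the notebook):

```python
import matplotlib.pyplot as plt
from skimage import io, filters

# Load the CAT scan and convert to grayscale on import
# ('tut_bone_frag.png' is a placeholder file name)
gray = io.imread('tut_bone_frag.png', as_gray=True)

# Sobel edge detection on the grayscale image
edges = filters.sobel(gray)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 5))
ax1.imshow(gray, cmap='gray')
ax1.set_title('Ground truth')
ax2.imshow(edges, cmap='gray')
ax2.set_title('Sobel edges')
for ax in (ax1, ax2):
    ax.set_axis_off()
plt.show()
```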

To simulate a high-resolution 3D seismic acquisition, I decimated the original image by a factor of 4 in both directions. The resulting image (no interpolation), shown in Figure 3, is of good quality, and so is the edge detection result.


Figure 3. Simulated high-resolution 3D survey and edge detection result.
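A sketch of the decimation step might look like this (it reuses the gray array from the previous snippet; simple slicing is my assumption of how the decimation could be done):

```python
from skimage import filters

# Keep every 4th pixel in both directions, with no interpolation,
# to stand in for a high-resolution 3D survey.
# gray is the grayscale image from the previous snippet.
hires_3d = gray[::4, ::4]
edges_3d = filters.sobel(hires_3d)
```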

The image in Figure 4 results from a further decimation of the image in Figure 3 by a factor of 2, followed by interpolation to upsample it back to the same size as the image in Figure 3. The image and the edge detection are still of fair quality overall, but some of the smaller features have either disappeared, merged, or faded.

Figure 4. Simulated medium resolution 3D survey and edge detection result.

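Continuing the sketch, the further decimation and upsampling could be done with skimage.transform.resize (bilinear interpolation here is an assumption; the notebook may use a different interpolation method):

```python
from skimage import filters
from skimage.transform import resize

# Decimate the Figure 3 image by a further factor of 2, then upsample
# (bilinear interpolation) back to the Figure 3 size.
medres = hires_3d[::2, ::2]
medres_up = resize(medres, hires_3d.shape, order=1, anti_aliasing=False)
edges_med = filters.sobel(medres_up)
```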

Now look at Figure 5: this is the equivalent of a high-quality (in one direction) 2D dataset. Although we can still guess at what this represents, I would argue this is a result of our a priori knowledge of what it is supposed to represent – a human skull; and yet I don’t think anybody would want their doctor to make a diagnosis based on this image.

Figure 5. Simulated set of very high-resolution 2D lines.

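One way to sketch the sparse 2D lines (my assumptions: vertical lines spaced every 8 columns, with everything in between blanked out) is:

```python
import numpy as np
from skimage import filters

# Keep every 8th column at full vertical resolution and blank the rest,
# so each retained column plays the role of a high-resolution 2D line.
step = 8                                   # line spacing (an assumption)
lines_2d = np.zeros_like(hires_3d)
lines_2d[:, ::step] = hires_3d[:, ::step]
edges_2d = filters.sobel(lines_2d)
```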

The image in Figure 6 results from 2D interpolation (my intention is to simulate the result we would get by gridding 2D data to obtain a continuous image). We can now definitely interpret this as a skull, but the edge detection result is very unsatisfactory.

Figure 6. Simulated interpolation of 2D lines.

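A simple way to sketch the gridding, continuing from the previous snippet, is row-by-row linear interpolation between the retained columns (the notebook may use a different gridding method):

```python
import numpy as np
from skimage import filters

# Linearly interpolate, row by row, between the retained columns to
# rebuild a continuous image, analogous to gridding sparse 2D lines.
nrows, ncols = hires_3d.shape
x_known = np.arange(0, ncols, step)        # column positions of the "2D lines"
x_all = np.arange(ncols)
gridded = np.array([np.interp(x_all, x_known, hires_3d[i, ::step])
                    for i in range(nrows)])
edges_gridded = filters.sobel(gridded)
```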

In a future post we will explore the effects of adding periodic noise (similar to seismic acquisition footprint) to these images and to the edge detection results. I will also show you how to remove it using 2D FFT filters, as promised (now more than a year ago) in my post Moiré Patterns.

If you would like to play with the code, get the Jupyter Notebook here.