The Card Analogy: AI, Originality, and the Art of the Steal


A shorter version of this post appeared on LinkedIn on March 25, 2026. This version includes additional prose, figures, and a postscript on a conversation it sparked.


In a previous LinkedIn article, I pointed my BS detector at AI news. This time I’m pointing it at my own AI.

I had been writing a blog post with Claude about intelligence and communication across species. In it, I mentioned that we share 98% of our DNA with gorillas. Reading it back, I had a doubt — if humans and gorillas share 98%, how come chimps are our closest relative at only 94%?

The numbers turned out not to be directly comparable — different measurement methods — but the question still stood. Whatever the exact percentages, they’re averages across the whole genome. And averages hide a lot.

That led me into incomplete lineage sorting — the fact that if you line up human, chimp, and gorilla DNA and compare it piece by piece, about 30% of the genome tells a different evolutionary story than the species tree (Scally et al. 2012). Well-established science. Notoriously hard to explain.

I asked Claude: “so the genome and speciation diagrams do not overlap?” It responded with an analogy I hadn’t asked for — dealing cards from a deck.

Seven gene variants in the ancestral population. Gorillas split off first — first deal. Some cards go to gorilla, the human-chimp ancestor keeps others, some go to both. Second deal splits human from chimp. Three players, overlapping hands. Compare: Card D went to human and chimp but not gorilla — species tree. Card C went to human and gorilla, skipping chimp — contradicts it. Card E went to chimp and gorilla, skipping human.

I said “make me a diagram.” No specs. Claude produced the figure below.

Two cosmetic tweaks from me afterward. Everything else — concept, layout, card naming — is Claude’s.

So: is this actually original?

That’s a claim worth auditing. I ran it through the same framework I built for the Pentagon/ChatGPT post.

Step 1: Search. I asked Claude to search for prior card-dealing analogies for ILS. It found analogies using M&Ms (coloured candies sorted into jars — Avian Hybrids, 2022) and Pachinko machines (marbles through pegs — The G-cat, 2021). No cards. Those analogies explain random sorting of identical items into bins. The card version does something different: distinct identities per variant, two sequential deals, and a colour-coded punchline mapping to the three topologies.

Step 2: Audit the search itself. An LLM claiming “I didn’t find it” is not the same as “it doesn’t exist.” So I ran a Fermi sanity check on the search coverage. ILS is a niche topic — maybe 50 published explainers total, of which maybe 6 use any analogy at all. The keywords “card,” “deck,” “dealt” are highly distinctive in evolutionary biology. If a card analogy existed in any indexed source, five independent searches would almost certainly surface it. Probability of missing it in searchable literature: ~3%.
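For transparency, here is the shape of that Fermi estimate. The ~50% per-search miss rate below is an assumption chosen for illustration; it is one way to arrive at the ~3% figure, not a measured quantity.

# Back-of-envelope check on search coverage. The per-search miss rate is an
# assumption for illustration, not a measured figure.
p_miss_one_search = 0.5        # chance a single search misses an indexed source
n_searches = 5                 # independent searches run during the audit
p_miss_all = p_miss_one_search ** n_searches
print(f"P(all {n_searches} searches miss an indexed card analogy) = {p_miss_all:.1%}")
# about 3%, consistent with the figure quoted above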

Step 3: Check the limits. Web search doesn’t reach textbook interiors, lecture slides, or classroom analogies. Someone may have used cards to explain ILS on a whiteboard in 2004. I can’t rule that out, and I shouldn’t claim to.

Step 4: Independent verification. I ran the claim through a separate Claude instance with extended thinking, using the full bullshit-detector framework — source verification, Fermi sanity check on search coverage, logical fallacy scan. Verdict: claim passes for searchable literature. The key flag: watch for equivocation on “published.” Web-indexed is not the same as “ever conceived.”

Verdict: “No published precedent found in searchable literature” is defensible. “First ever” is not. That distinction matters — it’s the same denominator hygiene from the Pentagon post. Know what your evidence covers and don’t claim more.

There’s a sharper version of the pattern-matching hypothesis worth naming. M&Ms were almost certainly in Claude’s training data. The move from “identical items sorted randomly into bins” to “distinct cards dealt sequentially to named players” is exactly the kind of transformation usually attributed to Picasso — “great artists steal” — though the line is almost certainly T.S. Eliot’s, who said it first, said it better, and meant something more precise: that the good poet welds the theft into something utterly different from the source. If that’s what happened here, the output is novel but the mechanism isn’t a jump — it’s a steal from a prior analogy in the same domain. I can’t rule it out. The audit covers what’s published, not what’s in the weights.

The direct precedent in the research literature is Figure 5 of Rivas-González et al. (2024) — topology posterior probabilities along a single chromosome. Rigorous work. Also very hard to read if you don’t already know population genetics.

Figure 5B from Rivas-González et al. 2024, PLOS Genetics 20(2):e1010836, CC BY 4.0. A segment of chromosome 1 showing ILS levels and coalescent depths across genomic windows. Three tracks, two colour scales, one chromosome. And this is the simplified view!

I also tried building a simplified whole-genome chromosome painting — same colour scheme as the card, synthetic data matched to published proportions, multiple sorting strategies — but it still required considerable mental effort to read. Far from the card analogy.

Illustrative chromosome painting of incomplete lineage sorting across all 23 human chromosomes (1–22 plus X). Each horizontal bar is one chromosome, divided into 100 kb windows and colour-coded by which phylogenetic topology wins in that region: gold for human–chimp (the species tree, ~63%), green for human–gorilla (~18.5%), purple for chimp–gorilla (~18.5%). Generated with synthetic data matched to published genome-wide proportions; spatial clustering is representative, not derived from specific genomic coordinates. Compare with the card figure above: the same information, but at chromosome scale the signal dissolves into noise. The card analogy works because it operates at the level of mechanism, not data.
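For the curious, a painting like this can be sketched in a few lines. The version below uses the caption's assumptions (100 kb windows, the published genome-wide proportions, approximate rounded chromosome lengths) but draws each window independently, so unlike the figure above it has no spatial clustering.

# Minimal sketch of a synthetic ILS chromosome painting.
# Assumptions: 100 kb windows, genome-wide topology proportions as in the
# caption, approximate chromosome lengths; windows drawn independently.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap

rng = np.random.default_rng(42)
proportions = [0.63, 0.185, 0.185]     # human-chimp, human-gorilla, chimp-gorilla
colors = ListedColormap(["gold", "green", "purple"])

# approximate chromosome lengths in Mb (1-22 plus X), rounded
chrom_mb = [249, 243, 198, 190, 182, 171, 159, 145, 138, 134, 135, 133,
            114, 107, 102, 90, 83, 80, 59, 64, 47, 51, 155]

fig, ax = plt.subplots(figsize=(10, 8))
for i, mb in enumerate(chrom_mb):
    n_windows = mb * 10                # number of 100 kb windows
    topo = rng.choice(3, size=n_windows, p=proportions)
    ax.imshow(topo[np.newaxis, :], cmap=colors, vmin=0, vmax=2,
              aspect="auto", extent=[0, mb, i + 0.6, i + 1.4])
ax.set_ylim(0.5, len(chrom_mb) + 0.5)
ax.set_xlim(0, max(chrom_mb))
ax.set_xlabel("Position (Mb)")
ax.set_ylabel("Chromosome")
plt.show()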

The first thing out of the conversation turned out to be the best thing. And the honest audit of the originality claim turned out to be more interesting than just asserting it.

What’s your take — does this count as genuine originality, or is it sophisticated pattern-matching that happens to land on something no one published before? There’s a broader debate running on this — whether LLMs are structurally capable of abduction, the kind of jump Einstein described from sensory experience to novel axioms, or whether they’re confined to induction and deduction no matter how fluent they look. I don’t have a settled answer. I’d be curious where you draw the line.


Postscript

On the same day I posted this on LinkedIn, I came across a post by Tom Chatfield — author and thinker on technology and language — about the “no true Scotsman” fallacy in discussions of AI creativity. The pattern he described: machines produce striking outputs, people dismiss them with “it can’t be genuinely creative because a machine made it.” The dismissal protects the category by rejecting inconvenient evidence rather than examining it.

The timing was coincidental. The overlap was too good to ignore, so I commented with the card analogy as a concrete test case. Chatfield’s reply singled out the phrase “dismissing the output to protect the category” and said his instinct was to investigate rather than gatekeep — to ask what kinds of creativity are at work and how they intersect with human learning.

That instinct is where the interesting conversation lives. The card analogy is a good test case because the output is concrete enough to examine: what did the model actually do, what’s novel about it, what isn’t? Much more productive than arguing about whether to award the word “creative.”

And does it matter?

The AlphaGo comparison is worth raising. Move 37 in the second game against Lee Sedol is widely accepted as a creative act — it violated every human prior and it won. The creativity claim has a ground truth. LLM outputs don’t. The card analogy can’t be verified against a scoreboard, which leaves room for dismissal that Move 37 never faced. But that’s a different argument from “it can’t be creative because a machine made it.” Worth keeping the two separate.


References: Scally et al. 2012, Nature 483:169; Rivas-González et al. 2023, Science 380:eabn4409; Rivas-González et al. 2024, PLOS Genetics 20(2):e1010836. M&M analogy: Avian Hybrids blog, 2022 (avianhybrids.wordpress.com). Pachinko analogy: The G-cat blog, 2021 (theg-cat.com).

Card analogy and figure: Claude (Opus 4.6), unprompted during conversation. Provenance verified against conversation transcript with line numbers. Originality audited with bullshit-detector framework (pip install bullshit-detector).

Upcoming book: 52 things you should know about Geocomputing

I am very excited to report that, after a successful second attempt at collecting enough essays, the book 52 Things You Should Know About Geocomputing, by Agile Libre, is very likely to become a reality.

*** July 2020 UPDATE ***

This project got a much needed boost during the hackathon at the 2020 Transform virtual event. Watch the video recording on the group YouTube channel here.

*********************

In one of the three chapters I submitted for this book, Prototype colourmaps for fault interpretation, I talk about building a widget for interactive generation of grayscale colourmaps with sigmoid lightness. The whole process is well described in the chapter and showcased in the accompanying GitHub repo.

This post is an opportunity for me to reflect on the importance of revisiting old projects. I consider it an essential part of how I approach scientific computing, and a practical way to incorporate new insights and improvements in my coding abilities.

In the first version of the Jupyter notebook, submitted in 2017, all the calculations and all the plotting commands were packed inside a single monster function that was passed to ipywidgets.interact. Quite frankly, as time passed this approach seemed less and less Pythonic (aka mature), no longer representative of my programming skills and style, or of my increased understanding of widget objects.

After a significant hiatus (2 years) I restructured the whole project in several ways:
– Converted Python 2 code to Python 3
– Created separate helper functions for each calculation and moved them to the top of the notebook, to improve both the clarity and the reusability of the code.
– Improved and standardized function docstrings
– Optimized and reduced the number of parameters
– Switched from interact to interactive to enable access to the colormap array in later cells (for printing, further plotting, and exporting); see the short sketch below.
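Here is a minimal sketch of that last change; make_colormap and its gamma parameter are placeholders for the example, not the notebook's actual function.

# Sketch: ipywidgets interact vs. interactive, with a placeholder function.
import numpy as np
from ipywidgets import interact, interactive

def make_colormap(gamma=1.0):
    """Return a 256-sample grayscale array (placeholder calculation)."""
    x = np.linspace(0, 1, 256)
    return x ** gamma

# interact displays the widget, but the returned array is awkward to reuse:
interact(make_colormap, gamma=(0.2, 5.0, 0.1))

# interactive keeps the latest result on the widget object instead:
w = interactive(make_colormap, gamma=(0.2, 5.0, 0.1))
w            # display the widget in the notebook

# in a later cell, the most recent colormap array is available as w.result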

The new notebook is here.

In a different chapter in the book I talk at more length about the need to Keep on improving your geocomputing projects.


How to fix rainbows and other bad colormaps using Python

Yep, colormaps again!

In my 2014 tutorial on The Leading Edge I showed how to Evaluate and compare colormaps (Jupyter notebook here). The article followed an extended series of posts (The rainbow is dead…long live the rainbow!) and then some more articles on rainbow-like colormap artifacts (for example here and here).

Last year, in a post titled Unweaving the rainbow, Matt Hall described our joint attempt to make a Python tool for recovering digital data from scientific images (and seismic sections in particular), without any prior knowledge of the colormap. Please check our GitHub repository for the code and slides, and watch Matt’s talk (very insightful and very entertaining) from the 2017 Calgary Geoconvention below:

One way to use the app is to get an image with unknown, possibly awful colormap, get the data, and re-plot it with a good one.

Matt followed up on colormaps with a more recent post titled No more rainbows! where he relentlessly demonstrates the superiority of perceptual colormaps for subsurface data. Check his wonderful Jupyter notebook.

So it might come as a surprise to some, but this post is a lifesaver for those who really do like rainbow-like colormaps. I discuss a Python method to equalize colormaps so as to render them perceptual. The method is based in part on ideas from Peter Kovesi’s must-read paper – Good Colour Maps: How to Design Them – and the Matlab function equalisecolormap, and in part on ideas from some old experiments of mine, described here, and a Matlab prototype (more details in the notebook for this post).

Let’s get started. Below is a time structure map for a horizon in the Penobscot 3D survey (offshore Nova Scotia, licensed CC-BY-SA by dGB Earth Sciences and The Government of Nova Scotia). Can you clearly identify the discontinuities in the southern portion of the map? No?

[Figure: time structure map of the Penobscot horizon, before equalization]

OK, let me help you. Below I am showing the map resulting from running a Sobel filter on the horizon.

[Figure: Sobel filter of the Penobscot horizon]

This is much better, right? But the truth is that the discontinuities are right there in the original data; some, however, are very hard to see because of the colormap used (nipy spectral, one of the many Matplotlib cmaps),  which introduces perceptual artifacts, most notably in the green-to-cyan portion.

In the figure below, in the first panel (from the top) I show a plot of the colormap’s Lightness value (obtained by converting a 256-sample nipy spectral colormap from RGB to Lab) for each sample; the line is coloured by the original RGB colour. This erratic Lightness profile highlights the issue with this colormap: the curve gradient changes magnitude several times, indicating a nonuniform perceptual distance between samples.

In the second panel, I show a plot of the cumulative sample-to-sample Lightness contrast differences, again coloured by the original RGB colours in the colormap. This is the best plot to look at because flat spots in the cumulative curve correspond to perceptual flat spots in the map, which is where the discontinuities become hard to see. Notice how the green-to-cyan portion of this curve is virtually horizontal!

That’s it, it is simply a matter of very low, artificially induced perceptual contrast.

Solutions to this problem: the obvious one is to NOT use this type of colormap (you can learn much about which colormaps are perceptually good, and which are not, here); a possible alternative is to fix them. This can be done by re-sampling the cumulative curve so as to give it constant slope (or constant perceptual contrast). The irregularly spaced dots at the bottom (in the same second panel) show the re-sampling locations, which are much farther apart in the perceptually flat areas and much closer together in the steeper areas.

The third panel shows the resulting constant (and regularly sampled) cumulative Lightness contrast differences, and the fourth and last shows the final Lightness profile, which is now composed of segments with equal Lightness gradient (in absolute value).

[Figure: pictorial summary of the equalization steps (the four panels described above)]

Here is the structure map for the Penobscot horizon using nipy spectral before and after equalization, one on top of the other to facilitate comparison. I think this method works rather well, and it will allow hard-core aficionados to keep using their favourite rainbow and rainbow-like colormaps.

 

[Figure: Penobscot structure map before and after colormap equalization]

If you want the code to try the equalization, get the notebook on GitHub.
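In the meantime, here is a condensed sketch of the equalization idea (a compressed version for illustration, not the notebook's code; it uses scikit-image for the RGB-to-Lab conversion):

# Sketch of colormap equalization: resample the colormap where the cumulative
# lightness-contrast curve is evenly spaced. Not the notebook's exact code.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from skimage import color

cmap = plt.get_cmap('nipy_spectral')
rgb = cmap(np.linspace(0, 1, 256))[:, :3]                # 256 RGB samples

lab = color.rgb2lab(rgb.reshape(1, -1, 3)).reshape(-1, 3)
L = lab[:, 0]                                            # Lightness of each sample

dL = np.abs(np.diff(L))                                  # sample-to-sample contrast
cum = np.concatenate([[0], np.cumsum(dL)])               # cumulative contrast curve

# resample where the cumulative curve is evenly spaced,
# i.e. constant perceptual contrast per step
target = np.linspace(0, cum[-1], 256)
new_positions = np.interp(target, cum, np.arange(256))
rgb_eq = rgb[np.round(new_positions).astype(int)]

nipy_equalized = ListedColormap(rgb_eq, name='nipy_spectral_equalized')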

Computer vision in geoscience: recover seismic data from images, introduction

In a recent post titled Unweaving the rainbow, Matt Hall described our joint attempt (partly successful) to create a Python tool to enable recovery of digital data from any pseudo-colour scientific image (and a seismic section in particular, like the one in Figure 1), without any prior knowledge of the colormap.


Figure 1. Test image: a photo of a distorted seismic section on my wall.

Please check our GitHub repository for the code and slides and watch Matt’s talk (very insightful and very entertaining) from the 2017 Calgary Geoconvention below:

In the next two posts, coming up shortly, I will describe in greater detail my contribution to the project, which focused on developing a computer vision pipeline to automatically detect where the seismic section is located in the image, rectify any distortions that might be present, and remove all sorts of annotations and trivia around and inside the section. The full workflow is included below (with sections I-VI developed to date):

  • I – Image preparation, enhancement:
    1. Convert to gray scale
    2. Optional: smooth or blur to remove high frequency noise
    3. Enhance contrast
  • II – Find seismic section:
    1. Convert to binary with adaptive or other threshold method
    2. Find and retain only largest object in binary image
    3. Fill its holes
    4. Apply opening and dilation to remove minutiae (tick marks and labels)
  • III – Define rectification transformation
    1. Detect the contour of the largest object found in (2). This should be the seismic section.
    2. Approximate contour with polygon with enough tolerance to ensure it has 4 sides only
    3. Sort polygon corners using angle from centroid
    4. Define new rectangular image using length of largest long and largest short sides of initial contour
    5. Estimate and output transformation to warp polygon to rectangle
  • IV – Warp using transformation
  • V – Blanking annotations inside seismic section (if rectangular):
    1. Start with output of (4)
    2. Pre-process and apply canny filter
    3. Find contours in the Canny output smaller than a given input size
    4. Sort contours (by shape and angular relationships or diagonal lengths)
    5. Loop over contours:
      1. Approximate contour
      2. If approximation has 4 points AND the 4 semi-diagonals are of same length: fill contour and add to mask
  • VI – Use mask to remove text inside rectangle in the input and blank (NaN) the whole rectangle. 
  • VII – Optional: tools to remove arrows and circles/ellipses:
    1. For arrows – from the contours in (4), find those with 7 sides and low convexity (concave); alternatively, use Harris corner detection and count 7 corners, or use template matching
    2. For ellipses – template matching or regionprops
  • VIII – Optional FFT filters to remove timing lines and vertical lines

You can download from GitHub all the tools for the automated workflow (parts I-VI) in the module mycarta.py, as well as an example Jupyter Notebook showing how to run it.
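For a flavour of what the first few sections look like in practice, here is a minimal OpenCV sketch of the core of sections I to III (illustrative parameter choices and a placeholder filename; the actual implementation is in mycarta.py):

# Minimal sketch of workflow sections I-III with OpenCV; parameters and the
# input filename are illustrative, not the choices made in mycarta.py.
import cv2

img = cv2.imread("seismic_photo.jpg")                    # placeholder filename
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)             # I.1 convert to grayscale
gray = cv2.GaussianBlur(gray, (5, 5), 0)                 # I.2 optional smoothing
gray = cv2.equalizeHist(gray)                            # I.3 enhance contrast

# II.1 convert to binary with an adaptive threshold
binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                               cv2.THRESH_BINARY_INV, 51, 10)

# II.2 find and keep only the largest object
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
largest = max(contours, key=cv2.contourArea)

# III.2 approximate its contour with a polygon; the tolerance is tuned so that,
# ideally, only the 4 sides of the seismic section survive
peri = cv2.arcLength(largest, True)
quad = cv2.approxPolyDP(largest, 0.02 * peri, True)
print(len(quad), "corners found")                        # ideally 4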

The first post focuses on the image pre-processing and enhancement, and the detection of the seismic line (sections I and II, in green); the second one deals with the rectification of the seismic section (sections IV to V, in blue). They are not meant as full tutorials, rather as a pictorial road map to (partial) success, but key Python code snippets will be included and discussed.

New Horizons truecolor Pluto recolored in Viridis and Inferno

Oh, the new, perceptual Matplotlib colormaps…

Here’s one stunning, recent Truecolor image of Pluto from the New Horizons mission:


Original image: The Rich Color Variations of Pluto. Credit: NASA/JHUAPL/SwRI. Click on the image to view the full feature on the New Horizons site.

Below, I recolored it using two of the new colormaps:

[Figure: Pluto image recolored with Viridis and Inferno]

Recolored images: I like Viridis, but it is Inferno that really brings this image to life, because of its wider hue and lightness range!
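If you want to try this at home, the steps are straightforward; the sketch below uses a placeholder filename and a simple channel average as the intensity conversion.

# Sketch of the recoloring: collapse the truecolor image to a single intensity
# channel, then re-display it with two perceptual Matplotlib colormaps.
import matplotlib.pyplot as plt
import matplotlib.image as mpimg

rgb = mpimg.imread("pluto_truecolor.jpg")        # hypothetical local copy
gray = rgb[..., :3].mean(axis=-1)                # simple intensity proxy

fig, axes = plt.subplots(1, 2, figsize=(12, 6))
for ax, name in zip(axes, ["viridis", "inferno"]):
    ax.imshow(gray, cmap=name)
    ax.set_title(name)
    ax.axis("off")
plt.show()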

 

Reinventing the color wheel – part 2

In the first post of this series I argued that we should not build colormaps for azimuth (or phase) data by interpolating linearly between fully saturated hues in RGB or HSL space.

A first step towards the ideal colormap for azimuth data would be to interpolate between isoluminant colours instead. Kindlmann et al. (2002) published isoluminant RGB values for red, yellow, green, cyan, blue, and magenta based on a user study. The code in the next block shows how to interpolate between those published colours to get 256-sample R, G, and B arrays (with magenta repeated at both ends), which can then be combined into an isoluminant colormap for azimuth data.

# assumes numpy has been imported as np
01 r = np.array([0.718, 0.847, 0.527, 0.000, 0.000, 0.316, 0.718])
02 g = np.array([0.000, 0.057, 0.527, 0.592, 0.559, 0.316, 0.000])
03 b = np.array([0.718, 0.057, 0.000, 0.000, 0.559, 0.991, 0.718])
04 x = np.linspace(0,255,7)
05 xnew = np.arange(256)
06 r256 = np.interp(xnew, x, r)
07 g256 = np.interp(xnew, x, g)
08 b256 = np.interp(xnew, x, b)

This is a good example, in general, of how to interpolate one or more sequences of values to a finer sampling using the interp function from the NumPy library. In line 04 we define 7 evenly spaced points between 0 and 255; these will be the sample coordinates for the r, g, and b colours created in lines 01-03. In line 05 we create the new coordinates at which we will interpolate the r, g, and b values in lines 06-08 (all integers between 0 and 255). The full code will come in the Notebook accompanying the last post in this series.
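As a teaser, here is what the combination step might look like, using the r256, g256, and b256 arrays from the block above (a quick sketch, not the code from the upcoming Notebook):

# Sketch: stack the interpolated channels into a Matplotlib colormap and
# preview it. This is an illustration, not the Notebook's exact code.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap

iso_azimuth = ListedColormap(np.column_stack([r256, g256, b256]),
                             name='isoluminant_azimuth')

gradient = np.tile(np.linspace(0, 1, 256), (16, 1))      # quick visual check
plt.imshow(gradient, aspect='auto', cmap=iso_azimuth)
plt.axis('off')
plt.show()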

This new colormap is used in the bottom map of the figure below, whereas in the top map we used a conventional HSV azimuth colormap (both maps show the dip azimuth calculated on the Penobscot horizon). The differences are subtle, but with the isoluminant colormap we are guaranteed there are no perceptual artifacts due to the random variations in lightness of the fully saturated HSV colors.

[Figure: dip azimuth of the Penobscot horizon with a conventional HSV colormap (top) and the isoluminant colormap (bottom)]

Another possible strategy to create a perceptual colormap for azimuth data would be to set lightness and chroma to constant values in LCH space and interpolate between hues. This is the Matlab colormap I previously created, and shown in Figure 4 of New Matlab isoluminant colormap for azimuth data. In the next post, I will show how to do this in Python.

Read more on colors and seismic data

The last two posts on Agile show you how to corender seismic amplitude and continuity from a time slice using a 2D colormap,  and then how to corender 3 attributes from a horizon slice.

Reference

Kindlmann, G. et al. (2002). Face-based Luminance Matching for Perceptual Colormap Generation. Proceedings of the IEEE Conference on Visualization.

Reinventing the color wheel – part 1

In New Matlab isoluminant colormap for azimuth data I showcased a Matlab colormap that I believe is perceptually superior to the conventional, HSV-based colormaps for azimuth data, in that it does not superimpose on the data the color artifacts that plague all rainbows. However, it still has a limitation, which is that the main colours do not correspond exactly to the four compass directions N, E, W, and S.

My intention with this series is to go back to square one, deconstruct the conventional colormaps for azimuth, and build a new one that has both of the desired properties: perceptual linearity and correct location of the main colors. All reproducible in Python.

If we wanted to build from scratch a colormap for azimuth (or phase) data the main tasks would be to generate a sequence of distinguishable colours at opposite quadrants, or compass directions (like 0 and 180 degrees, or N and S), and to wrap around the sequence with the same colour at the two ends.

But to do that, we should avoid interpolating linearly between fully saturated hues in RGB or HSL space.

To illustrate why, it is useful to look at the figure below. On the left is a hue circle with primary, secondary, and tertiary colours in a counter-clockwise sequence: red, rose, magenta, violet, blue, azure, cyan, aquamarine, electric green, chartreuse, yellow, and orange. The colour chips are placed at evenly spaced angular distances according to their hue (in radians).


Left, primary, secondary, and tertiary colour chips arranged using hue for angular distance; right, the same colour chips arranged using intensity for angular distance.

This looks familiar and seems like a natural ordering of colors, so, in building a colormap, we may be tempted to just take that sequence, wrap it around at the red (or the magenta), and linearly interpolate to 256 colours to get a continuous colormap [1], and use it for azimuth data; this is how the conventional azimuth colormaps are usually built.

On the right side of the figure the chips have been rearranged according to their intensity, in a counter-clockwise sequence from 0 to 255 with 0 at three o’clock; so, for example, blue, which is the darkest colour with an intensity of 29, is close to the beginning of the sequence, and yellow, the brightest with an intensity of 225, is close to the end. Notice that the chips are no longer equidistant.

The most striking difference is that the blue and the yellow chips are more separated than the other chips, and for this reason blue and yellow features seem to stand out a lot more in a map when using this color sequence, which can be both distracting and confusing. A good example is Figure 3 in New Matlab isoluminant colormap for azimuth data.

Also, yellow and red, being two chips apart in the left circle in the figure above, are used to colour azimuths 60 degrees apart, and so are cyan and green. However, if we look at the right circle, we realize that the yellow and red chips are much further apart than the cyan and green chips [2] in the perceptual dimension of intensity; therefore, features coloured in yellow and red could be perceived as much further apart (in azimuth) than those in cyan and green.

These differences may be subtle, but in my opinion they become important when dip azimuth is combined with other attributes, perhaps using a 3D colormap, and the resulting map is used for detailed structural interpretation. There is a really good example of this type of 3D colormap in Chopra and Marfurt (2007), where dip azimuth is rendered with hue modulation, dip magnitude with saturation modulation, and coherence with lightness modulation.

A code snippet with the main Python commands to generate the two polar scatterplots in the figure is listed and explained below. The full code can be found in this Jupyter Notebook.

# assumes numpy as np and matplotlib.pyplot as plt are already imported
01 import matplotlib.colors as clr
02 keys=['red', '#FF007F', 'magenta', '#7F00FF', 'blue', '#0080FF','cyan', '#00FF80',
'#00FF00', '#7FFF00', 'yellow', '#FF7F00']
03 my_cmap = clr.ListedColormap(keys)
04 x = np.arange(12)
05 color = my_cmap(x)
06 n = 12
07 theta = 2*np.pi*(np.linspace(0, 1, n, endpoint=False))
08 r = np.ones(n)*2.5
09 area = 200*r**2 # size of color chips
10 c = plt.scatter(theta, r, c=color, s=area)
# sorted_intensity and my_sorted_cmap are computed earlier in the notebook
11 theta_i = 2*np.pi*(sorted_intensity/255.0)
12 colors = my_sorted_cmap(np.arange(12))
13 c = plt.scatter(theta_i, r, c=colors, s=area)

In line 01 we import the colors module from the Matplotlib library; line 02 then creates the desired sequence of colours (red, rose, magenta, violet, blue, azure, cyan, aquamarine, electric green, chartreuse, yellow, and orange) using either the name or the hex code, and line 03 generates the colormap. We then use lines 04 and 05 to assign colours to the chips in the first scatterplot (left), and lines 06 to 09 to specify the number of chips, the angular distances between chips, the radial position, and the area of the chips, respectively. Line 10 generates the plot. The modifications in lines 11-13 will produce the scatterplot on the right side of the figure (the sorted intensity is calculated in much the same way as in my Geophysical tutorial – How to evaluate and compare colormaps in Python).
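For completeness, here is one way sorted_intensity and my_sorted_cmap could be computed (a sketch along the lines of the tutorial's approach, using Rec. 601 luma weights as the intensity proxy; not the Notebook's exact code):

# Sketch: compute the perceived intensity of each chip and sort the colormap
# accordingly. The luma weights are an illustrative choice, not verbatim code.
import numpy as np
import matplotlib.colors as clr

rgb = np.array([clr.to_rgb(k) for k in keys])          # keys from line 02 above
intensity = np.dot(rgb, [0.299, 0.587, 0.114]) * 255   # Rec. 601 luma weights

order = np.argsort(intensity)
sorted_intensity = intensity[order]
my_sorted_cmap = clr.ListedColormap(rgb[order])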

 

[1] Or, perhaps, just create 12 discrete colour classes to group azimuth values in bins of pi/6 (30 degrees) each, and wrap around again at the magenta, to generate a discrete colormap.

[2] The green chip is almost completely covered by the orange chip.

Spectral lightness rainbow colormap


Quick post to share my replica of Ethan Montag’s Spectral lightness colormap from this paper. My version has a linear Lightness profile (Figure 1) that increases monotonically from 0 (black) to 100 (white) while Hue cycles from 180 to 360 degrees and then wraps around and continues from 0 to 270.

You can download the colormap files (comma separated in MS Word .doc format) with RGB values in 0-1 range and 0-255 range.

Figure 1. Colour-coded lightness profile of the colormap (L-plot).

Figure 2. The Spectral lightness colormap (Spectral-L).

Matlab code

To run this code you will need Colorspace, a free function from Matlab File Exchange, for the color space transformations.

%% generate Chroma, Lightness, Hue components
% hue: 180 to 360 degrees over the first 103 samples, then wrapping around
% and continuing from 0 to 270 degrees over the remaining 153 samples
h1=linspace(180,360,103);
h2=linspace(0,270,153);
h=horzcat(h1,h2);
c=ones(1,256)*25;        % constant chroma
l=linspace(0,100,256);   % lightness increasing linearly from 0 to 100
%% merge together
lch = vertcat(l,c,h)';
%% convert to RGB
rgb = colorspace('LCH->rgb',lch);
%% Plot color-coded lightness profile
figure;
h=colormapline(1:1:256,lch(:,1),[],rgb);
set(h,'linewidth',2.8);
title('Montag Spectral L* lightness plot ','Color','k','FontSize',12,'FontWeight','demi');
%%  Pyramid test data
PY=zeros(241,241);
for i = 1:241
    temp=(0:1:i-1);
      PY(i,1:i)=temp(:);
end
test=PY.';
test=test-1;
test(test==-1)=0;
test1=test([2:1:end,1],:);
PY1=PY+test1;
PY2=fliplr(PY1);
PY3= flipud(PY1);
PY4=fliplr(PY3);
GIZA=[PY1,PY2;PY3,PY4].*2;
x=linspace(1,756,size(GIZA,1));
y=x;
[X,Y]=meshgrid(x,y);
clear i test test1 PY PY1 PY2 PY3 PY4 temp;

%% display Pyramid surface
fig1=figure;
surf(X,Y,GIZA,GIZA,'FaceColor','interp','EdgeColor','none');
view(-35,70);
colormap(rgb);
axis off;
grid off;
colorbar;
% set(fig1,'Position',[720 400 950 708]);
% set(fig1,'OuterPosition',[716 396 958 790]);
title('Montag Spectral L*','Color','k','FontSize',12,'FontWeight','demi');

Acknowledgements

The coloured lightness profiles were made using the Colormapline submission from the Matlab File Exchange.