Modernizing Python Code in the AI Era: A Different Kind of Learning


A few years ago I wrote about advancing my Python coding skills after working through a couple of chapters from Daniel Chen’s excellent book Pandas for Everyone. In that post I showed how I improved code I’d written in 2018 for the SEG Machine Learning contest. The original code used unique() to get lists of well names, then looped through with list comprehensions to calculate flagged samples and proportions. The 2020 version replaced all that with groupby() and apply(), making it much more compact and Pythonic. For example, where I’d written a list comprehension like [result_a.loc[result_a.zone==z,'flag'].sum() for z in zones_a], I could now write simply result_a.groupby('zone', sort=False).flag.sum().values. The runtime also improved – from 86ms down to 52ms. I remember being quite happy with how much cleaner and more readable the code turned out, and how the learning from those two chapters made an immediate practical difference.
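To make the pattern concrete, here is a schematic recreation with toy data (not the contest dataset); both approaches return the same flagged-sample counts per zone:

import pandas as pd

# Toy stand-in for the contest results: one QC flag per sample, grouped by zone
result_a = pd.DataFrame({
    'zone': ['A', 'A', 'B', 'B', 'B', 'C'],
    'flag': [1, 0, 1, 1, 0, 0],
})

# 2018 style: unique() plus a list comprehension
zones_a = result_a.zone.unique()
flagged_2018 = [result_a.loc[result_a.zone == z, 'flag'].sum() for z in zones_a]

# 2020 style: a single groupby, no explicit loop
flagged_2020 = result_a.groupby('zone', sort=False).flag.sum().values

assert list(flagged_2020) == flagged_2018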

Recently, I had to modernize the Busting bad colormaps Panel app, which I built back in 2020 to demonstrate colormap distortion artifacts (something that – as you know – I care a lot about). The app had been deliberately frozen in time – I’d pinned specific library versions in the environment file because I knew things would eventually become obsolete, and I wanted it to stay functional for as long as possible without having to constantly fix compatibility issues.

But some of those issues had finally caught up with me, and the app had been down for some time. Last fall, working with GitHub Copilot, I fixed some matplotlib 3.7+ compatibility problems: replacing the deprecated cm.register_cmap() with plt.colormaps.register(), fixing an rgb2gray error, and resolving a ValueError in the plotting functions.
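For anyone facing the same migration, here is a minimal sketch of the register_cmap() change; the colormap itself is just a placeholder:

import matplotlib
from matplotlib.colors import ListedColormap

my_cmap = ListedColormap(['#440154', '#21918c', '#fde725'], name='my_cmap')

# Old API, deprecated in matplotlib 3.7 and since removed:
# from matplotlib import cm
# cm.register_cmap(name='my_cmap', cmap=my_cmap)

# New API (matplotlib 3.7+), also reachable as plt.colormaps.register():
matplotlib.colormaps.register(my_cmap, name='my_cmap')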

But the deployment was also broken. In 2021, mybinder.org had switched to JupyterLab as the default interface, changing how apps needed to be deployed. Panel developers had to adapt their code to work with this new setup. The old Panel server URL pattern no longer worked. I tried to figure out the new URL pattern by browsing through the Binder documentation, but I couldn’t make sense of it and failed miserably. It was a short-lived effort that pushed me toward trying something different: full-on coding with Claude Opus 4.5 using Copilot in VSCode.

That’s what allowed me, this month, to complete the modernization process (though honestly, we still haven’t fully sorted out a Binder timeout issue).

A step back to 2020: Building the app from scratch

When I originally built the colormap app, I coded everything myself, experimenting with Panel features I’d never used before, figuring out the supporting functions and visualizations. I also got very good advice from the Panel Discourse channel when I got stuck.

One issue I worked on was getting the colormap collection switching to behave properly. After the first collection switch, the Colormaps dropdown would update correctly, but the Collections dropdown would become non-responsive. With help from experts on the Discourse channel, I figured out how to fix it using Panel’s param.Parameterized class structure.

2026: Working with Claude

The second, and hardest, part of the modernization was done almost entirely by Claude Opus. Here’s what that looked like in practice:

Binder deployment: Claude independently figured out the new JupyterLab URL pattern (?urlpath=lab/tree/NotebookName.ipynb instead of the old ?urlpath=%2Fpanel%2FNotebookName). Only later, when fact-checking for this post, did we discover the history of Binder’s 2021 switch to JupyterLab and how Panel had to adapt. This helped, though we’re still working through some timeout issues.

Environment upgrade: Claude upgraded to Python 3.12 and Panel 1.8.5, bringing everything up to modern versions. The key packages are now Panel 1.8.5, param 2.3.1, and bokeh 3.8.1.

Code modernization: Claude spotted and fixed deprecated API calls – the style parameter for Panel widgets became styles.
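As an illustration (with a made-up widget, not the app’s actual code), the rename looks like this:

import panel as pn

# Panel 0.x accepted a `style` dict:
# slider = pn.widgets.FloatSlider(name='Gamma', style={'font-size': '12px'})

# Panel 1.x renamed the keyword to `styles`:
slider = pn.widgets.FloatSlider(name='Gamma', styles={'font-size': '12px'})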

Collection switching – Claude’s breakthrough: This was Claude’s biggest solo contribution. The collection switching broke during the update, and Claude independently diagnosed that the class-based param.Parameterized approach that had worked in Panel 0.x wasn’t reliable in Panel 1.x. Without me having to guide the solution, Claude figured out how to rewrite it using explicit widgets with param.watch callbacks.

The comparison shows the change:

The new approach uses explicit widget objects with callback functions, which works more reliably in Panel 1.x than the class-based parameterized approach.
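Here is a simplified sketch of the two patterns, with a hypothetical COLLECTIONS dict standing in for the app’s real colormap collections:

import panel as pn
import param

pn.extension()

# Hypothetical stand-in for the app's colormap collections
COLLECTIONS = {
    'matplotlib': ['viridis', 'gist_rainbow'],
    'colorcet': ['cet_rainbow', 'cet_CET_L20'],
}

# Old pattern (Panel 0.x era): a Parameterized class with dependent selectors
class ColormapBrowser(param.Parameterized):
    collection = param.Selector(objects=list(COLLECTIONS))
    colormap = param.Selector(objects=COLLECTIONS['matplotlib'])

    @param.depends('collection', watch=True)
    def _update_colormaps(self):
        self.param.colormap.objects = COLLECTIONS[self.collection]

# New pattern (Panel 1.x): explicit widgets wired together with param.watch
collection = pn.widgets.Select(name='Collections', options=list(COLLECTIONS))
colormap = pn.widgets.Select(name='Colormaps', options=COLLECTIONS[collection.value])

def on_collection_change(event):
    colormap.options = COLLECTIONS[event.new]   # refresh the dependent dropdown
    colormap.value = COLLECTIONS[event.new][0]

collection.param.watch(on_collection_change, 'value')

In the app the callback also triggers a redraw of the plots; the sketch keeps just the dropdown logic.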

New features: Claude integrated two new colormap collections I’d been wanting to add for years – Fabio Crameri’s scientific colormaps (cmcrameri) and Kristen Thyng’s cmocean colormaps. That brought the total from 3 to 5 colormap collections.
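Both collections ship Matplotlib-compatible colormap objects, so plugging them in is straightforward. A minimal sketch, with random data standing in for the seismic horizon:

import numpy as np
import matplotlib.pyplot as plt
import cmocean
import cmcrameri.cm as cmc

data = np.random.rand(50, 50)  # stand-in for the seismic horizon

fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.imshow(data, cmap=cmocean.cm.deep)  # Thyng's cmocean 'deep'
ax2.imshow(data, cmap=cmc.batlow)       # Crameri's 'batlow'
plt.show()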

Here are examples of the app showing each of the new collections:

The app testing of cmocean deep colormap
The app testing of Crameri’s batlow colormap

Documentation: Claude updated the README with detailed step-by-step Binder instructions, added a troubleshooting section, and created a table documenting all five colormap collections.

I provided the requirements and guidance throughout, but I almost never looked at the implementation details – what I’ve taken to calling the “bits and bobs” of the code. I focused on what I needed to happen; Claude figured out how to make it happen.

What changed (and what didn’t)

I still understand what the code does conceptually. I can read it, review it, check that it’s correct. I know why we needed to move from Parameterized classes to explicit widgets, and I understand the reactive programming model. But I didn’t write those lines myself.

The work happens at a different level now. I bring the domain expertise (what makes a good colormap visualization), the requirements (needs to deploy on Binder, needs these specific colormap collections), and the quality judgment (that widget behavior isn’t quite right). Claude brings the implementation knowledge, awareness of modern best practices, and the ability to quickly adapt code patterns to new frameworks.

This is really different from my 2020 experience. Back then, working through those Pandas patterns taught me techniques I could apply to other projects. Now, I’m learning what becomes possible when you can clearly articulate requirements and delegate the implementation.

The honest trade-off

There’s a trade-off here, and I’m trying to be honest about it. In 2020, working through the Panel widget patterns taught me things that stuck. In 2026, I got working, modernized code in a fraction of the time, but with less hands-on knowledge of Panel 1.x internals.

For this particular project, that trade-off made sense. I needed a working app deployed and accessible, not deep expertise in Panel migration patterns. But I’m conscious that I’m optimizing for different outcomes now: shipping features fast versus building deep technical understanding through hands-on work.

What this means going forward

After years of writing code line by line, this new way of working feels both efficient and different. I got more done in a couple of hours than I might have accomplished in several weeks working solo. The app is modernized, deployed, working better than ever, and even has new features I’d been wanting to add for years.

This has been a game-changer for how I work. I still do the work that matters most to me: seeing the tool gap, coming up with the vision, iteratively prototyping to flesh out what I actually need. That’s substantial work, and it’s mine. But after that initial phase? A lot of the implementation will be done with Claude. The app is done and it’s great, and I know this is the path forward for me.

References

Chen, D.Y. (2018). Pandas for Everyone: Python Data Analysis. Addison-Wesley Professional.

Crameri, F. (2018). Geodynamic diagnostics, scientific visualisation and StagLab 3.0. Geoscientific Model Development, 11, 2541-2562. https://www.fabiocrameri.ch/colourmaps/

Niccoli, M. (2020). Keep advancing your Python coding skills. MyCarta Blog. https://mycartablog.com/2020/10/22/keep-advancing-your-python-coding-skills/

Thyng, K.M., Greene, C.A., Hetland, R.D., Zimmerle, H.M., and DiMarco, S.F. (2016). True colors of oceanography: Guidelines for effective and accurate colormap selection. Oceanography, 29(3), 9-13. https://matplotlib.org/cmocean/


Try the app yourself: The modernized colormap distortion app is available on GitHub and you can run it in Binder without installing anything.

Busting bad colormaps with Python and Panel

I have not done much work with, or written here on the blog about, colormaps and perception in quite some time.

Last spring, however, I decided to build a web-based app to show the effects of using a bad colormap. This stemmed from two needs: first, to further my understanding of Panel, after working through the awesome tutorial by James Bednar, Panel: Dashboards (at PyData Austin 2019); and second, to enable people to explore interactively the effects of bad colormaps on their perception, and consequently on their ability to interpret faults on a 3D seismic horizon.

I introduced the app at the Transform 2020 virtual subsurface conference, organized by Software Underground last June. Please watch the recording of my lightning talk as it explains in detail the machinery behind it.

I am writing this post in part to discuss some changes to the app. Here’s how it looks right now:

The most notable change is the switch from one drop-down selector to two drop-down selectors, in order to support both the Matplotlib collection and the Colorcet collection of colormaps. Additionally, the app has since been featured in the resource list on the Awesome Panel site, an achievement I am really proud of.

You can try the app yourself by running the notebook interactively with Binder, either by clicking on the button below:

or by copying and pasting this address into your browser:

https://mybinder.org/v2/gh/mycarta/Colormap-distorsions-Panel-app/master?urlpath=%2Fpanel%2FDemonstrate_colormap_distortions_interactive_Panel

Let’s look at a couple of examples of insights I gained from using the app. For those who jumped straight to this example, the top row shows:

  • the horizon, plotted using the benchmark grayscale colormap, on the left
  • the horizon intensity, derived using skimage.color.rgb2gray, in the middle
  • the Sobel edges detected on the intensity, on the right

and the bottom row shows:

  • the horizon, plotted using the Matplotlib gist_rainbow colormap, on the left
  • the intensity of the colormapped horizon, in the middle. This is possible thanks to a function that makes a figure (but does not display it), plots the horizon with the specified colormap, then saves the plot canvas to an RGB NumPy array (a sketch of the idea follows this list)
  • the Sobel edges detected on the colormapped intensity, on the right
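Here is a minimal sketch of that render-to-array trick, assuming the Agg backend; the app’s own function has more plumbing:

import numpy as np
import matplotlib
matplotlib.use('Agg')  # render off-screen, nothing is displayed
import matplotlib.pyplot as plt
from skimage.color import rgb2gray
from skimage.filters import sobel

def horizon_to_intensity(data, cmap='gist_rainbow'):
    fig, ax = plt.subplots()
    ax.imshow(data, cmap=cmap)
    ax.axis('off')
    fig.canvas.draw()                                    # rasterize the figure
    rgb = np.asarray(fig.canvas.buffer_rgba())[..., :3]  # canvas to RGB array
    plt.close(fig)
    return rgb2gray(rgb)                                 # simulate perceived intensity

intensity = horizon_to_intensity(np.random.rand(100, 100))
edges = sobel(intensity)                                 # edge detection on the intensity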

I think the effects of this colormap are already apparent when comparing the bottom left plot to the top left plot. However, simulating perception can be quite revealing for those who have not considered these effects before. The intensity in the bottom middle plot is very washed out in the areas corresponding to green in the bottom left; as a result, many of the faults are no longer visible, or visible only with difficulty, as demonstrated by the Sobel edges in the bottom right.

And if you are not quite convinced yet, I have created these hill-shaded maps, using Matt Hall’s delightful function from this notebook (and check his blog post):

Below is another example, using the Colorcet cet_rainbow, which is one of Peter Kovesi’s perceptually uniform colormaps. I use many of Peter’s colormaps, but I had never used this one, because I use my own perceptual rainbow, which does not have a fully saturated yellow or a fully saturated red. I think the app demonstrates that, even though they are more subtle, this rainbow still introduces some artifacts. The yellow colour creates narrow flat bands, visible in the intensity and Sobel plots, and indicated by yellow arrows; the red colour is also bad as usual, causing an artificial decrease in intensity (magenta arrows).

New Horizons truecolor Pluto recolored in Viridis and Inferno

Oh, the new, perceptual Matplotlib colormaps…

Here’s one stunning, recent Truecolor image of Pluto from the New Horizons mission:

Original image: The Rich Color Variations of Pluto. Credit: NASA/JHUAPL/SwRI. Click on the image to view the full feature on the New Horizons site.

Below, I recolored using two of the new colormaps:

Recolored images: I like Viridis, but it is Inferno that really brings this image to life, because of its wider hue and lightness range!
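One way to recolor an image like this is to collapse the truecolor version to grayscale intensity and then map that intensity through the new colormap; a quick sketch (the filename is a placeholder):

import matplotlib.pyplot as plt
from skimage import io
from skimage.color import rgb2gray

pluto = io.imread('pluto_truecolor.jpg')  # placeholder path
gray = rgb2gray(pluto)                    # collapse RGB to intensity

fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.imshow(gray, cmap='viridis')
ax2.imshow(gray, cmap='inferno')
plt.show()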

 

NASA’s beautiful ‘Planet On Fire’ images and video

Credits: NASA’s Goddard Space Flight Center and NASA Center for Climate Simulation.

 

Click on the image to watch the original video on NASA’s Visualization Explorer site.

Read the full story on NASA’s Visualization Explorer site.

Reinventing the color wheel – part 2

In the first post of this series I argued that we should not build colormaps for azimuth (or phase) data by interpolating linearly between fully saturated hues in RGB or HSL space.

A first step towards the ideal colormap for azimuth data would be to interpolate between isoluminant colours instead. Kindlmann et al. (2002) published isoluminant RGB values for red, yellow, green, cyan, blue, and magenta based on a user study. The code in the next block shows how to interpolate between those published colours to get 256-sample R, G, and B arrays (with magenta repeated at both ends), which can then be combined into an isoluminant colormap for azimuth data.

01 r = np.array([0.718, 0.847, 0.527, 0.000, 0.000, 0.316, 0.718])
02 g = np.array([0.000, 0.057, 0.527, 0.592, 0.559, 0.316, 0.000])
03 b = np.array([0.718, 0.057, 0.000, 0.000, 0.559, 0.991, 0.718])
04 x = np.linspace(0, 255, 7)
05 xnew = np.arange(256)
06 r256 = np.interp(xnew, x, r)
07 g256 = np.interp(xnew, x, g)
08 b256 = np.interp(xnew, x, b)

This is a good example in general of how to interpolate one or more sequences of values to a finer sampling using the interp function from the NumPy library (assumed imported as np). In line 04 we define 7 evenly spaced points between 0 and 255; these will be the sample coordinates for the r, g, and b colours created in lines 01-03. In line 05 we create the new coordinates we will be interpolating r, g, and b values at in lines 06-08 (all integers between 0 and 255). The full code will come in the Notebook accompanying the last post in this series.
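To finish the job, the interpolated channels can be stacked column-wise and passed to Matplotlib’s ListedColormap; a compact, self-contained version of the same steps:

import numpy as np
from matplotlib.colors import ListedColormap

# Kindlmann et al. (2002) isoluminant control colours, magenta repeated at both ends
r = np.array([0.718, 0.847, 0.527, 0.000, 0.000, 0.316, 0.718])
g = np.array([0.000, 0.057, 0.527, 0.592, 0.559, 0.316, 0.000])
b = np.array([0.718, 0.057, 0.000, 0.000, 0.559, 0.991, 0.718])

x = np.linspace(0, 255, 7)    # coordinates of the 7 control colours
xnew = np.arange(256)         # coordinates of the 256 output samples
rgb256 = np.column_stack([np.interp(xnew, x, c) for c in (r, g, b)])

iso_azimuth = ListedColormap(rgb256, name='iso_azimuth')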

This new colormap is used in the bottom map of the figure below, whereas in the top map we used a conventional HSV azimuth colormap (both maps show the dip azimuth calculated on the Penobscot horizon). The differences are subtle, but with the isoluminant colormap we are guaranteed there are no perceptual artifacts due to the random variations in lightness of the fully saturated HSV colors.

Another possible strategy to create a perceptual colormap for azimuth data would be to set lightness and chroma to constant values in LCH space and interpolate between hues. This is the Matlab colormap I previously created, shown in Figure 4 of New Matlab isoluminant colormap for azimuth data. In the next post, I will show how to do this in Python.

Read more on colors and seismic data

The last two posts on Agile show you how to corender seismic amplitude and continuity from a time slice using a 2D colormap, and then how to corender 3 attributes from a horizon slice.

Reference

Kindlmann, G., Reinhard, E., and Creem, S. (2002). Face-based Luminance Matching for Perceptual Colormap Generation. Proceedings of the IEEE Conference on Visualization '02.

Reinventing the color wheel – part 1

In New Matlab isoluminant colormap for azimuth data I showcased a Matlab colormap that I believe is perceptually superior to the conventional, HSV-based colormaps for azimuth data, in that it does not superimpose on the data the color artifacts that plague all rainbows. However, it still has a limitation, which is that the main colours do not correspond exactly to the four compass directions N, E, W, and S.

My intention with this series is to go back to square one, deconstruct the conventional colormaps for azimuth, and build a new one that has all the desired properties of both perceptual linearity, and correct location of the main colors. All reproducible in Python.

If we wanted to build from scratch a colormap for azimuth (or phase) data the main tasks would be to generate a sequence of distinguishable colours at opposite quadrants, or compass directions (like 0 and 180 degrees, or N and S), and to wrap around the sequence with the same colour at the two ends.

But to do that, we should avoid interpolating linearly between fully saturated hues in RGB or HSL space.

To illustrate why, it is useful to look at the figure below. On the left is a hue circle with primary, secondary, and tertiary colours in a counter-clockwise sequence: red, rose, magenta, violet, blue, azure, cyan, aquamarine, electric green, chartreuse, yellow, and orange. The colour chips are placed at evenly spaced angular distances according to their hue (in radians).

Left, primary, secondary, and tertiary colour chips arranged using hue for angular distance; right, the same colour chips arranged using intensity for angular distance.

This looks familiar and seems like a natural ordering of colors, so, in building a colormap, we may be tempted to just take that sequence, wrap it around at the red (or the magenta), and linearly interpolate to 256 colours to get a continuous colormap [1], and use it for azimuth data; this is how the conventional azimuth colormaps are usually built.

On the right side of the figure the chips have been rearranged according to their intensity, in a counter-clockwise sequence from 0 to 255 starting at the three o'clock position; so, for example, blue, which is the darkest colour with an intensity of 29, is close to the beginning of the sequence, and yellow, the brightest with an intensity of 225, is close to the end. Notice that the chips are no longer equidistant.

The most striking difference is that the blue and the yellow chips are more separated than the others, and for this reason blue and yellow features stand out a lot more in a map coloured with this sequence, which can be both distracting and confusing. A good example is Figure 3 in New Matlab isoluminant colormap for azimuth data.

Also, yellow and red, being two chips apart in the left circle in the figure above, are used to colour azimuths 60 degrees apart, and so are cyan and green. However, if we look at the right circle, we realize that the yellow and red chips are much further apart than the cyan and green chips [2] in the perceptual dimension of intensity; therefore, features colored in yellow and red could be perceived as much further apart (in azimuth) than cyan and green.

These differences may be subtle, but in my opinion they become important when dip azimuth is combined with other attributes, perhaps using a 3D colormap, and the resulting map is used for detailed structural interpretation. There is a really good example of this type of 3D colormap in Chopra and Marfurt (2007), where dip azimuth is rendered with hue modulation, dip magnitude with saturation modulation, and coherence with lightness modulation.

A code snippet with the main Python commands to generate the two polar scatterplots in the figure is listed and explained below. The full code can be found in this Jupyter Notebook.

01 import matplotlib.colors as clr
02 keys=['red', '#FF007F', 'magenta', '#7F00FF', 'blue', '#0080FF', 'cyan', '#00FF80', '#00FF00', '#7FFF00', 'yellow', '#FF7F00']
03 my_cmap = clr.ListedColormap(keys)
04 x = np.arange(12)
05 color = my_cmap(x)
06 n = 12
07 theta = 2*np.pi*(np.linspace(0,1,13)) 
08 r = np.ones(13)*2.5
09 area = 200*r**2 # size of color chips
10 c = plt.scatter(theta, r, c=color, s=area)
11 theta_i = 2*np.pi*(sorted_intensity/255.0)
12 colors = my_sorted_cmap(np.arange(12))
13 c = plt.scatter(theta_i, r, c=colors, s=area)

In line 01 we import the colors module from the Matplotlib library (NumPy and Matplotlib's pyplot are assumed imported as np and plt). Line 02 creates the desired sequence of colours (red, rose, magenta, violet, blue, azure, cyan, aquamarine, electric green, chartreuse, yellow, and orange) using either the name or the hex code, and line 03 generates the colormap. Then we use lines 04 and 05 to assign colours to the chips in the first scatterplot (left), and lines 06, 07, 08, and 09 to specify the number of chips, the angular distances between chips, the radial position of the chips, and the area of the chips, respectively. Line 10 generates the plot. The modifications in lines 11-13 result in the scatterplot on the right side of the figure (the sorted intensity is calculated in much the same way as in my Geophysical tutorial – How to evaluate and compare colormaps in Python).

 

[1] Or, perhaps, just create 12 discrete colour classes to group azimuth values in bins of pi/6 (30 degrees) each, and wrap around again at the magenta, to generate a discrete colormap.

[2] The green chip is almost completely covered by the orange chip.

The rainbow is dead…long live the rainbow! – Perceptual palettes, part 5 – CIE Lab linear L* rainbow

Some great examples

After my previous post in this series there was a great discussion on perceptual color palettes with some members of the Worldwide Geophysicists group on LinkedIn. Ian MacLeod shared some really good examples, and uploaded them here.

HSL linear L rainbow palette

Today I’d like to share a color palette that I really like:

It is one of the palettes introduced in a paper by Kindlmann et al. [1]. The authors created their palettes with a technique they call luminance controlled interpolation, which they explain in this online presentation. However, the presentation uses different palettes (their isoluminant rainbow, and their heated body), so if you find it confusing I recommend you look at the paper first. Indeed, this is a good read if you are interested in colormap generation techniques; it is one of the papers that encouraged me to develop the methodology for my cube law rainbow, which I will introduce in an upcoming post.

This is how I understand their method for creating the palette: they mapped six pure-hue rainbow colors (magenta, blue, cyan, green, yellow, and red) in HSL space, and adjusted the luminance by changing the HSL lightness value to ‘match’ that of six control points evenly spaced along the grayscale palette. After that, they interpolated linearly along the L axis between 0 and 1 using the equation presented in the paper.
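As a rough illustration of my reading of their method (not the authors’ code), one could bisect on HSL lightness until each hue’s approximate luminance hits an evenly spaced grayscale target:

import colorsys
import numpy as np

def luma(rgb):
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b  # Rec. 601 luma approximation

hues = [300, 240, 180, 120, 60, 0]  # magenta, blue, cyan, green, yellow, red (degrees)
targets = np.linspace(0.15, 0.85, len(hues))  # evenly spaced targets (endpoints are my assumption)

controls = []
for h, target in zip(hues, targets):
    lo, hi = 0.0, 1.0
    for _ in range(30):  # bisect on HSL lightness; luma grows monotonically with lightness
        mid = (lo + hi) / 2.0
        rgb = colorsys.hls_to_rgb(h / 360.0, mid, 1.0)  # full saturation
        if luma(rgb) < target:
            lo = mid
        else:
            hi = mid
    controls.append(rgb)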

CIE Lab linear L* rainbow palette

For this post I will try to create a similar palette. In fact, initially I was thinking of just replicating it, so I imported the palette as a screen capture image into Matlab, reduced it to a 256×3 RGB colormap matrix, and converted the RGB values to Lab to check its linearity in lightness. Below I am showing the lightness profile, colored by value of L*, and the Great Pyramid of Giza – my usual test surface – also colored by L* (notice I changed the X axis of both L* plots from sample number to Pyramid elevation to facilitate comparison of the two figures).

Clearly, although the original palette was constructed to be perceptually linear, it is not linear following my import. Notice in particular the notch in the profile in the blue area, at approximately 100 m elevation. This artifact is also visible as a flat-looking blue band in the pyramid.

I have to confess I am not too sure why the palette has this peculiar lightness profile. I suspect this may be because their palette is by construction device dependent (see the paper), so that when I took the screen capture on my monitor I introduced the artifacts.

The only way to know for sure would be to use their software to create the palette, or alternatively to write the equation from the paper into Matlab code and create a palette calibrated on my monitor, then compare it to the screen-captured one. Perhaps one day I will find the time to do it, but having developed my own method to create a perceptual palette, my interest in this one became purely practical: I wanted to get on with it and use it.

Fixing and testing the palette

Regardless of what the cause might be for this nonlinear L* profile, I decided to fix it, and I did so by simply replacing the original profile with a new one, changing linearly between 0.0 and 1.0. Below I am showing the L* plot for this adjusted palette, and the Great Pyramid of Giza, both again colored by value of L*.
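I did the fix in Matlab, but the same idea is a few lines in Python with scikit-image, assuming the screen-captured palette is a 256×3 RGB array in [0, 1] (a placeholder here), with the linear ramp expressed in Lab units (0 to 100):

import numpy as np
from skimage.color import rgb2lab, lab2rgb

cmap256 = np.random.rand(256, 3)  # placeholder for the screen-captured palette

lab = rgb2lab(cmap256[np.newaxis, ...])[0]   # RGB -> CIE Lab
lab[:, 0] = np.linspace(0.0, 100.0, 256)     # replace L* with a linear ramp
fixed = np.clip(lab2rgb(lab[np.newaxis, ...])[0], 0.0, 1.0)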

The pyramid with the adjusted palette seems better: the blue band is gone, and it looks great. I am ready to try it on a more complex surface. For that I have chosen the digital elevation data for South America available online through the Global Land One-km Base Elevation Project at the National Geophysical Data Center. To load and display the data in Matlab I used the first code snippet in Steve Eddins’ post on the US continental divide (modified for the South America data tiles). Below is the data mapped using the adjusted palette. I really like the result: it’s smooth and it looks right.

But how do I know, really? I mean, once I move away from my perfectly flat pyramid surface, how do I know what to expect, or not expect? In other words, how would I know if an edge I see on the map above is an artifact, or worse, that the palette is not obscuring real edges?

In some cases the answer is simple. Let’s take a look at the four versions of the map in my last figure. The first on the left was generated using the ROYGBIV palette I described in this post. It would be obvious to me, even if I had never looked at the L* profile, that the blue areas are darker than the purple areas, giving the map a sort of inverted-image look.

But how about the second map from the left? For this I used the default rainbow from a popular mapping program. It does not look too bad at first sight. Yes, the yellow is perceived as a bright, sharp edge, and we now know why that is, but other than that it would be hard to tell if there are artifacts. On a second look, though, the whole area away from the Andes is a bit too uniform.

A good way to assess these maps is to use grayscale, which we know is a good perceptual option, as a benchmark. This is the last map on the right. The third map of South America was coloured using my adjusted linear L* palette, and it looks the most similar to the grayscale benchmark. Comparing the colorbars also helps: the third and fourth are very similar and both look perceptually linear, whereas the second does show flatness in the blue and green areas.

Let me know what you think of these examples. And as usual, you are welcome to use the palette in your work. You can download it here.

UPDATE

With my following post, Comparing color palettes, I introduced my new method to compare palettes with ImageJ and the 3D color inspector plugin. Here below are the recorded 3D animations of the initial and adjusted palettes respectively. In 3D it is easier to see there is an area of flat L* between the dark purple and dark blue in the initial color palette. The adjusted color palette instead monotonically spirals upwards.

References

[1] Kindlmann, G., Reinhard, E., and Creem, S. (2002). Face-based Luminance Matching for Perceptual Colormap Generation. Proceedings of the IEEE Conference on Visualization '02.

Related posts (MyCarta)

The rainbow is dead…long live the rainbow! – the full series

What is a colour space? reblogged from Colour Chat

Color Use Guidelines for Mapping and Visualization

A rainbow for everyone

Is Indigo really a colour of the rainbow?

Why is the hue circle circular at all?

A good divergent color palette for Matlab

Related topics (external)

Color in scientific visualization

The dangers of default disdain

Color tools

How to avoid equidistant HSV colors

Non-uniform gradient creator

Colormap tool

Color Oracle – color vision deficiency simulation – standalone (Windows, Mac and Linux)

Dichromacy – color vision deficiency simulation – open source plugin for ImageJ

Vischeck – color vision deficiency simulation – plugin for ImageJ and Photoshop (Windows and Linux)

For teachers

NASA’s teaching resources for grades 6-9: What’s the Frequency, Roy G. Biv?

ImageJ and 3D Color inspector plugin

http://rsbweb.nih.gov/ij/docs/concepts.html

http://rsb.info.nih.gov/ij/plugins/color-inspector.html

The rainbow is dead…long live the rainbow! – series outline

The rainbow is dead…long live the rainbow! – Part 1

The rainbow is dead…long live the rainbow! – Part 2: a rainbow puzzle

The rainbow is dead…long live the rainbow! – Part 3

The rainbow is dead…long live the rainbow! – Part 4 – CIE Lab heated body

The rainbow is dead…long live the rainbow! – Part 5 – CIE Lab linear L* rainbow

The rainbow is dead series – Part 6 – Comparing color palettes

The rainbow is dead series – Part 7 – Perceptual rainbow palette – the method

The rainbow is dead series – Part 7 – Perceptual rainbow palette – the goodies