Picobot Revisited: Optimizing a Tiny Robot’s Rules, Ten Years Later

Featured

A decade-old folder, handwritten notes, and a deceptively simple robot.

Introduction

Wrapping up a third personal fun project in two months? Check!! And this is the longest-standing one, and possibly one of my favourites ever. It goes back to when I was barely past my first steps into exploring both Python and Computer Science. This project was fun because it had to do with solving puzzles. I am happy to share it with you, my readers, today.

If you’ve ever watched a Roomba bump into a wall, spin around, and trundle off in a seemingly random direction, you’ve witnessed a real-world version of the problem I’m about to describe. How does a robot that can only sense what’s immediately around it — no map, no memory of where it’s been, no grand plan — manage to cover every square inch of a room?

In January 2015, I was working through Harvey Mudd College’s “CS for All” materials on my own — no live instruction, no solutions to check against — and I encountered Picobot: a simulated robot even simpler than a Roomba. Picobot became one of my favourite puzzles. I scribbled diagrams, wrote copious notes, tested rules, and eventually optimized my solutions down to what I believed were the minimum number of rules needed to cover the whole room. I kept everything in a well-worn file folder. This was my very first serious foray into CS, and I loved it!

That folder has survived multiple reorganizations over the years – every once in a while I’d open it, think about writing it up properly, and close it again. But after the positive experience of wrapping up projects collaboratively with Claude — the colormap app, the Mill’s Methods post — Picobot was next in line.

With the help of Claude Opus (v 4.5) I verified those old solutions, built a Python simulator, and finally documented the work properly.

This post is about the optimization journey. The reasoning. The moments when things click.

What is Picobot?

Picobot is a pedagogical robot created for Harvey Mudd’s introductory computer science course. It lives in a grid world and has one job: visit every empty cell. The catch? Picobot is nearly blind.

The Constraints

Picobot can only sense its four immediate neighbours: North, East, West, and South. For each direction, it knows one thing: is there a wall, or is it empty? That’s it. No memory of where it’s been. No coordinates. No global view.

Here’s an example of what Picobot “sees”:

  N
W ● E ← Picobot sees: N=empty, E=wall, W=empty, S=empty
  S

We encode this as a 4-character string: xExx

  • x means empty (nothing there)
  • N, E, W, or S means wall in that direction
  • Position order is always: North, East, West, South

So xExx means “wall to the East, everything else empty.”

The Rules

Picobot follows rules that say: “If I’m in this state and I see this pattern, then move this direction and switch to this state.”

The format is:

STATE  SURROUNDINGS -> MOVE  NEW_STATE

For example:

0 Nx** -> E 1

This means: “In State 0, if there’s a wall to the North and East is empty, move East and switch to State 1.”

The wildcard * matches anything:

0 x*** -> N 0

“In State 0, if North is empty (don’t care about the rest), move North and stay in State 0.”

There’s also a special move: X (stay put). The robot doesn’t move but can change state. This seems useless at first. It’s not.
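
To make the rule format concrete, here is a minimal Python sketch of how a rule lookup might work. It is not the Harvey Mudd simulator, just an illustration; the matches and step helpers are names I made up.

def matches(pattern, surroundings):
    # A 4-character pattern like 'N*x*' matches surroundings like 'NxxS'
    # if every non-wildcard character agrees.
    return all(p == '*' or p == s for p, s in zip(pattern, surroundings))

def step(rules, state, surroundings):
    # Return (move, new_state) for the first rule that applies.
    for (rule_state, pattern), (move, new_state) in rules.items():
        if rule_state == state and matches(pattern, surroundings):
            return move, new_state
    raise ValueError(f"no rule for state {state}, surroundings {surroundings}")

rules = {
    (0, 'x***'): ('N', 0),   # North empty: keep going North
    (0, 'N***'): ('X', 1),   # North wall: stay put, hand over to state 1
}
print(step(rules, 0, 'Nxxx'))   # -> ('X', 1)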

The Goal

Write the smallest set of rules that makes Picobot visit every empty cell in a room, regardless of where it starts.

The Harvey Mudd Picobot lab posed two main challenges, below, and several optional ones.

  1. Empty Room: A rectangular room with walls only on the boundary
  2. Maze: A maze with single-cell-wide corridors

The lab simulator is actually still live at https://www.cs.hmc.edu/picobot/

Give it a shot, it’s fun!

Back to the story.

The Empty Room: From 7 to 6 Rules

The Strategy: Boustrophedon

The word comes from Greek: “ox-turning.” It’s how you plow a field — go one direction, turn around at the end, come back the other way. Mow a lawn. Write a line of text one way, then the next the other way (if you are Etruscan).

For Picobot, the boustrophedon pattern looks like this:

The robot sweeps East, drops down, sweeps West, drops down, repeats. But first, it needs to get to the top of the room — so it goes North until it hits the wall.

My Initial Solution: January 6, 2015

I have an email I sent to myself at 12:44 AM on January 6, 2015 — working late (on a Tuesday night!!!) on this puzzle. It shows my first experiments:

First experiment: go to origin:
# go to origin
0 **** -> X 3
3 ***x -> S 3
3 ***S -> W 2
2 **x* -> W 2
2 **W* -> X 0

And then my first complete solution:

Final solution program 1
0 x*** -> N 0 # (initial) state 0 with nothing N: go N
0 Nx** -> E 1 # state 0 with a wall N but none E: go E, AND

1 *x** -> E 1 # state 1 with nothing E: go E
# OR, instead of previous 2. This is if initially by E wall
0 NE** -> W 2 # state 0 with a wall N and one E: go W

# once it reaches east wall
1 *E** -> W 2 # state 1 with a wall E: go W
2 **x* -> W 2 # state 2 with nothing W: go W
2 **W* -> S 1 # state 2 with a wall W: go S

That’s 7 rules. The comments show my thinking — I was handling the case where Picobot starts by the East wall separately.

The Harvey Mudd lecture slides posed an extra challenge: “how FEW rules can you use? The current record is six rules.” The solution wasn’t shown — just the target. That became the question that hooked me: how do you get there? I was one rule away.

The Insight: “C and F Are the Same”

My handwritten notes show positions labelled A through F, representing different situations Picobot might encounter. The breakthrough came when I realized:

Position C (just finished going North, need to decide: East or West?) and Position F (at a wall during the sweep, need to decide direction) were being handled by separate rules — but they didn’t need to be.

The key insight: after going North and hitting the wall, I don’t need a separate rule to check East. I can use the X move (stay put) to transition to State 1, and let State 1’s existing rules handle it.

This is counter-intuitive. The X move looks like wasted time — the robot just sits there! But it’s not wasted. It’s a state transition without movement that lets me reuse existing rules instead of duplicating logic.

The Final Solution: January 24, 2015

Eighteen days later, I emailed myself the optimized solution — Saturday, January 24, 2015 at 5:05 PM (weekend fun work):

# Optimized EMPTY ROOM program:
0 x*** -> N 0
0 N*** -> X 1
1 *x** -> E 1
1 *E** -> W 2
2 **x* -> W 2
2 **W* -> S 1

Six rules. Let me walk through why this works:

State 0 handles “going North.” When Picobot hits the North wall, it executes X 1 — stays put but switches to State 1. Now State 1 takes over.

State 1 is dual-purpose:

  • If East is empty → go East (continuing the sweep)
  • If East is wall → start going West (end of row)

Because Picobot stays put when transitioning from State 0 to State 1, it’s in the exact same position, and State 1 correctly determines whether to go East or start heading West.

State 2 sweeps West. When it hits the West wall, it goes South and switches back to State 1. Again, State 1 determines: East or end of row?

The elegance is that State 1 does double duty. It handles both “continue going East” and “decide what to do at the end of a row.” The X move is what makes this possible.

Verified

I tested this against all 529 possible starting positions in a 25×25 room. Every single one reaches 100% coverage. Maximum steps: 1,013. The solution works.
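
Here is a self-contained sketch of that exhaustive check (not the repo’s code, which lives in test_solutions.py, just the idea): run the 6 rules from every interior cell of the 25×25 room (a 23×23 interior) and confirm full coverage.

RULES = {
    (0, 'x***'): ('N', 0), (0, 'N***'): ('X', 1),
    (1, '*x**'): ('E', 1), (1, '*E**'): ('W', 2),
    (2, '**x*'): ('W', 2), (2, '**W*'): ('S', 1),
}
MOVES = {'N': (-1, 0), 'E': (0, 1), 'W': (0, -1), 'S': (1, 0), 'X': (0, 0)}

def surroundings(r, c, rows, cols):
    # Build the NEWS string: the direction letter if that neighbour is a
    # wall (outside the interior), 'x' if it is an empty cell.
    out = []
    for d in 'NEWS':
        dr, dc = MOVES[d]
        inside = 0 <= r + dr < rows and 0 <= c + dc < cols
        out.append('x' if inside else d)
    return ''.join(out)

def covers(start, rows=23, cols=23, max_steps=5000):
    (r, c), state = start, 0
    visited = {(r, c)}
    for _ in range(max_steps):
        env = surroundings(r, c, rows, cols)
        move, state = next((m, ns) for (s, p), (m, ns) in RULES.items()
                           if s == state and all(pc in ('*', ec) for pc, ec in zip(p, env)))
        dr, dc = MOVES[move]
        r, c = r + dr, c + dc   # the 6 rules never command a move into a wall before full coverage
        visited.add((r, c))
        if len(visited) == rows * cols:
            return True
    return False

print(all(covers((r, c)) for r in range(23) for c in range(23)))   # expect True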

The Maze: From 16 to 12 Rules

The maze challenge is different. Corridors are one cell wide. There are dead ends, branches, and loops. The boustrophedon strategy won’t work here.

The Strategy: Right-Hand Wall Following

The classic maze-solving algorithm: keep your right hand on the wall and walk. You’ll eventually visit everywhere (in a simply-connected maze).

For Picobot, “right hand on wall” translates to:

  1. If you can turn right, turn right
  2. Otherwise, if you can go forward, go forward
  3. Otherwise, if you can turn left, turn left
  4. Otherwise, turn around (dead end)

With four directions (North, East, West, South) and the “right-hand” rule relative to each, we need four states — one for each direction Picobot is “facing.”

  • State 0: Going North (right hand on East wall)
  • State 1: Going East (right hand on South wall)
  • State 2: Going West (right hand on North wall)
  • State 3: Going South (right hand on West wall)

Initial Solution: 16 Rules

The straightforward implementation uses 4 rules per state:

# State 0: Facing North (right hand = East)
0 *x** -> E 1 # Can turn right → turn right (now facing East)
0 *Ex* -> N 0 # Can't turn right, but forward is open → go North
0 *EW* -> W 3 # Can't go forward → turn left (face West)
0 *EWS -> S 2 # Dead end → turn around (face South)

# ... and similarly for States 1, 2, 3

16 rules total. It works. But can we do better?

Two-Phase Optimization

My maze notes show two distinct approaches:

Phase 1: Working from principles. The small diagram in my notes shows me reasoning about the state transitions theoretically. What’s the minimum information needed at each decision point? Where is there redundancy?

Phase 2: Empirical debugging. The large diagram shows positions A through K — specific spots in a maze where I tested rules. When the principled approach hit edge cases, I sketched the situation, walked through it (“what would I do here?”), and translated my intuition into rules.

The note “Key is G” appears on the page. Position G was where the solution got validated — when it handled G correctly, the logic was proven.

The Iteration: A Failed Attempt

That same January 24 email shows me trying to adapt the empty room optimization for the maze — and failing:

This, optimized for maze, does not work. At dead ends it turns around but then it goes to the other end and enters an infinite loop...

The attempt that followed didn’t handle dead ends properly. The robot would turn around, walk to the other end, and loop forever.

The Final Solution

Then, in the same email:

This works!!
0 *x** -> E 1
0 xE** -> N 0
0 NE** -> X 2
1 ***x -> S 3
1 *x*S -> E 1
1 *E*S -> X 0
2 x*** -> N 0
2 N*x* -> W 2
2 N*W* -> X 3
3 **x* -> W 2
3 **Wx -> S 3
3 **WS -> X 1

12 rules: 3 per state instead of 4. A 25% reduction.

The key insight: each state now handles only three cases:

  1. Right is open → turn right
  2. Forward is open → go forward
  3. Both blocked → stay put, rotate to next state (which will check left/behind)

The X move chains states together. If right and forward are blocked, we stay put and try the next state. That state checks its right (our left). If that’s blocked too, it chains again. The sequence continues until we find a way forward.
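
As a concrete example of the chaining, here is a small trace (a sketch, not the lab simulator) of what happens when Picobot faces North into a dead end, with walls North, East, and West and only South open, i.e. surroundings 'NEWx':

MAZE_RULES = [
    (0, '*x**', 'E', 1), (0, 'xE**', 'N', 0), (0, 'NE**', 'X', 2),
    (1, '***x', 'S', 3), (1, '*x*S', 'E', 1), (1, '*E*S', 'X', 0),
    (2, 'x***', 'N', 0), (2, 'N*x*', 'W', 2), (2, 'N*W*', 'X', 3),
    (3, '**x*', 'W', 2), (3, '**Wx', 'S', 3), (3, '**WS', 'X', 1),
]

def first_match(state, env):
    # Return (move, new_state) for the first of the 12 rules that applies.
    for s, pattern, move, new_state in MAZE_RULES:
        if s == state and all(p in ('*', e) for p, e in zip(pattern, env)):
            return move, new_state

state = 0
for _ in range(3):
    move, state = first_match(state, 'NEWx')
    print(move, state)   # X 2, then X 3, then S 3: two pivots in place, then move South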

Verified

Tested against all 287 reachable positions in a 25×25 maze, and all 280 cells in the actual Harvey Mudd lab maze. 100% coverage every time. Here’s one simulation:

The right-hand rule doesn’t just guarantee coverage — it collapses the state space. The rules are ordered to check “right side open” first. In State 0 (facing North), rule 1 asks: is East open? If yes, go East — Picobot never evaluates what’s ahead. That’s how rule ordering implements “keep your hand on the wall.” Different physical positions with the same wall-relationship become equivalent, and that’s what makes 4 states and 12 rules possible. Take a look at the simulations below of the two equivalent positions sketched in my handwritten notes, shown earlier:

Making It Explicit: Starting State Matters

Here’s something worth highlighting — something that’s in the Harvey Mudd lab instructions but easy to overlook.

The 6-rule empty room solution requires Picobot to start in State 0.

The Harvey Mudd simulator always starts in State 0, and the lab materials mention this. Whether I consciously accounted for this in 2015, I don’t remember — I didn’t document it in my notes. But when I built my own simulator in 2025, I could test explicitly: what happens if Picobot starts in State 1 or State 2?

Start State | Initial Direction | Coverage
0           | North             | 100% ✓
1           | East              | ~50% ✗
2           | West              | ~45% ✗

Starting in State 1 or 2, Picobot gets stuck. It begins the East-West sweep from wherever it starts — never going North to reach the top first. The rows above its starting position never get visited.

This isn’t a bug in the solution. It’s a constraint: the boustrophedon pattern assumes you start by going North. The 6-rule minimum only works because State 0 guarantees that first trip to the top wall.

A truly state-agnostic solution — one that works regardless of starting state — would need more rules. The elegance of 6 rules comes from working within the standard initial conditions.


What I Learned

  1. The X move is not wasted time. It’s a state transition that enables rule reuse. This is the key to minimizing rule count.
  2. Different problems, different methods. The empty room yielded to analytical insight (“C and F are the same”). The maze required two phases: principled derivation, then empirical debugging.
  3. Implicit assumptions matter. The starting state requirement was in the lab materials all along, but easy to overlook. Building my own tools made it explicit.
  4. Old projects are worth revisiting. With fresh eyes — and some help — I found new ways to understand and share work I already knew.
  5. How I approached it. Looking back at my notes, I see a pattern that’s familiar from my day-to-day work: diagrams everywhere, positions A-K labeled, “me walking in the maze.” Try something → watch where it fails → sketch that spot → ask “what would I do here?” → translate to rules → repeat. “C and F are the same” collapsed the problem by seeing equivalence the formal notation obscured. The notes weren’t just records — they were how I thought. And 18 days between 7 rules and 6 rules: no rushing, no giving up. This is field scientist methodology applied to computer science. Maybe that’s why I loved it.
  6. There is no free lunch in AI collaboration. This project — both the technical verification and this blog post — would not have been possible without deep understanding of the subject matter. That understanding came from me (the 2015 work, the insights, the diagrams), from the extensive documentation I’d kept, and from all the iterative work we did together. This isn’t “vanilla coding” where you prompt an AI and get a finished product. It’s genuine collaboration: human insight plus AI execution. The AI didn’t optimize Picobot — I did, in 2015. The AI helped me verify, document, and communicate that work in 2025.

Try It Yourself

The full Python implementation is on GitHub: https://github.com/mycarta/picobot-optimizer

It includes:

  • picobot_simulator.py — The core engine
  • picobot_rooms.py — Empty room and maze generators
  • picobot_visualizer.py — GIF animation creator
  • optimized_solutions.py — The 6-rule and 12-rule solutions
  • test_solutions.py — Exhaustive verification

All documented and ready to explore.


What’s Next

Part 2: How I revisited this project with AI assistance — and what that collaboration actually looked like.

Part 3: Educational materials. Exercises, concept checks, and scaffolded challenges for those learning to code.


The Picobot simulator was created for Harvey Mudd College’s “CS for All” course. My optimization work is from January 2015. Verification, documentation, and visualization were completed in January 2025 with AI assistance.


AI/HI (Human Intelligence) Transparency Statement

Modified from Brewin

Has any text been generated using HI? Yes
Has any text been generated using AI? Yes
Has any text been improved or corrected using HI? Yes
Have any methods of analysis been suggested using HI? Yes
Have any methods of analysis been suggested using AI? Yes
Do any analyses utilize AI technologies, such as Large Language Models, for tasks like analyzing, summarizing, or retrieving information from data? Yes

Additional context:

The Picobot optimization work described in this post — the solutions, the insights, the handwritten diagrams, the reasoning behind “C and F are the same” and “Key is G” — was done entirely by me in January 2015, working alone through Harvey Mudd’s CS for All materials with no live instruction and no solutions to check against. The emails quoted in this post are timestamped records from that work.

In January 2025, I revisited this project with Claude AI (Anthropic). Claude built the Python simulator, ran exhaustive verification tests, created the GIF visualizations, and helped document the reasoning. The explicit testing of starting states emerged from our joint exploration — I asked the question, Claude ran the tests.

This post was drafted collaboratively. I provided the source materials (my 2015 notes, emails, the verified solutions, our session transcripts), direction, and editorial judgment throughout. Claude drafted based on these inputs and our discussion of structure and framing. I reviewed, revised, and made all final decisions about what went to publication.

A note on AI collaboration: This kind of work is not “vanilla coding” — prompting an AI and receiving a polished output. It required deep domain knowledge (mine), extensive primary documentation (my 2015 notes and emails), iterative correction (many rounds), and genuine intellectual engagement from both sides. The AI contributed too — not the original insights, but meta-insights: recognizing patterns in my notes, naming things I’d done but hadn’t articulated (like “C and F are the same” as a key moment), and seeing that I’d used different methodologies for the empty room versus the maze. The AI did not and could not have done this alone. Neither could I have done the verification, visualization, and documentation at this scale without AI assistance. That’s what real collaboration looks like.

The intellectual work is mine. The documentation, verification, and articulation is collaborative.

Modernizing Python Code in the AI Era: A Different Kind of Learning

Featured

A few years ago I wrote about advancing my Python coding skills after working through a couple of chapters from Daniel Chen’s excellent book Pandas for Everyone. In that post I showed how I improved code I’d written in 2018 for the SEG Machine Learning contest. The original code used unique() to get lists of well names, then looped through with list comprehensions to calculate flagged samples and proportions. The 2020 version replaced all that with groupby() and apply(), making it much more compact and Pythonic. For example, where I’d written a list comprehension like [result_a.loc[result_a.zone==z,'flag'].sum() for z in zones_a], I could now write simply result_a.groupby('zone', sort=False).flag.sum().values. The runtime also improved – from 86ms down to 52ms. I remember being quite happy with how much cleaner and more readable the code turned out, and how the learning from those two chapters made an immediate practical difference.
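
As a reminder of what that change looked like, here is a toy sketch of the two patterns, with made-up data rather than the contest dataset:

import pandas as pd

# A toy stand-in for the contest results: one row per sample, with the zone
# (well name) and a boolean flag for poorly predicted samples.
result_a = pd.DataFrame({
    'zone': ['WELL_A', 'WELL_A', 'WELL_B', 'WELL_B', 'WELL_B'],
    'flag': [True, False, True, True, False],
})

# 2018 version: unique() plus a list comprehension.
zones_a = result_a.zone.unique()
flagged_2018 = [result_a.loc[result_a.zone == z, 'flag'].sum() for z in zones_a]

# 2020 version: a single groupby.
flagged_2020 = result_a.groupby('zone', sort=False).flag.sum().values

print(flagged_2018, flagged_2020)   # same counts, much less code the second way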

Recently, I had to modernize the Busting bad colormaps Panel app, which I built back in 2020 to demonstrate colormap distortion artifacts (something that – as you know – I care a lot about). The app had been deliberately frozen in time – I’d pinned specific library versions in the environment file because I knew things would eventually become obsolete, and I wanted it to stay functional for as long as possible without having to constantly fix compatibility issues.

But some of those issues had finally caught up with me, and the app had been down for some time. Last fall, working with GitHub Copilot, I fixed some matplotlib 3.7+ compatibility problems – replacing the deprecated cm.register_cmap() with plt.colormaps.register(), fixing an rgb2gray error, and resolving a ValueError in the plotting functions.
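
For reference, the registration change looks roughly like this minimal sketch (not the app's actual code; I call the matplotlib.colormaps registry directly here):

import matplotlib
import matplotlib.pyplot as plt
from matplotlib.colors import LinearSegmentedColormap

my_cmap = LinearSegmentedColormap.from_list('my_cmap', ['black', 'white'])

# matplotlib < 3.7 (deprecated, later removed):
# from matplotlib import cm
# cm.register_cmap(name='my_cmap', cmap=my_cmap)

# matplotlib >= 3.7: register with the colormap registry
matplotlib.colormaps.register(my_cmap)   # the name is taken from the colormap itself
plt.imshow([[0, 1], [1, 0]], cmap='my_cmap')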

But the deployment was also broken. In 2021, mybinder.org had switched to JupyterLab as the default interface, changing how apps needed to be deployed. Panel developers had to adapt their code to work with this new setup. The old Panel server URL pattern no longer worked. I tried to figure out the new URL pattern by browsing through the Binder documentation, but I couldn’t make sense of it and failed miserably. It was a short-lived effort that pushed me toward trying something different: full-on coding with Claude Opus 4.5 using Copilot in VSCode.

That’s what allowed me, this month, to complete the modernization process (though honestly, we still haven’t fully sorted out a Binder timeout issue).

A step back to 2020: Building the app from scratch

When I originally built the colormap app, I coded everything myself, experimenting with Panel features I’d never used before, figuring out the supporting functions and visualizations. I also got very good advice from the Panel Discourse channel when I got stuck.

One issue I worked on was getting the colormap collection switching to behave properly. After the first collection switch, the Colormaps dropdown would update correctly, but the Collections dropdown would become non-responsive. With help from experts on the Discourse channel, I figured out how to fix it using Panel’s param.Parameterized class structure.

2026: Working with Claude

The second, and hardest, part of the modernization was done almost entirely by Claude Opus. Here’s what that looked like in practice:

Binder deployment: Claude independently figured out the new JupyterLab URL pattern (?urlpath=lab/tree/NotebookName.ipynb instead of the old ?urlpath=%2Fpanel%2FNotebookName). Only later, when fact-checking for this post, did we discover the history of Binder’s 2021 switch to JupyterLab and how Panel had to adapt. This helped, though we’re still working through some timeout issues.

Environment upgrade: Claude upgraded to Python 3.12 and Panel 1.8.5, bringing everything up to modern versions. The key packages are now Panel 1.8.5, param 2.3.1, and bokeh 3.8.1.

Code modernization: Claude spotted and fixed deprecated API calls – the style parameter for Panel widgets became styles.
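
Illustratively (not the app's actual widgets), the rename amounts to this:

import panel as pn

# Panel 0.x (no longer accepted):
# button = pn.widgets.Button(name='Update plots', style={'color': 'navy'})

# Panel 1.x:
button = pn.widgets.Button(name='Update plots', styles={'color': 'navy'})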

Collection switching – Claude’s breakthrough: This was Claude’s biggest solo contribution. The collection switching broke during the update, and Claude independently diagnosed that the class-based param.Parameterized approach that had worked in Panel 0.x wasn’t reliable in Panel 1.x. Without me having to guide the solution, Claude figured out how to rewrite it using explicit widgets with param.watch callbacks.

The comparison shows the change:

The new approach uses explicit widget objects with callback functions, which works more reliably in Panel 1.x than the class-based parameterized approach.
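
In spirit, the rewrite looks something like this minimal sketch, with illustrative collection names and widgets rather than the app's real code:

import panel as pn

# Two illustrative collections mapping names to colormap lists.
COLLECTIONS = {
    'matplotlib': ['viridis', 'plasma', 'cividis'],
    'cmocean': ['deep', 'thermal', 'haline'],
}

collection = pn.widgets.Select(name='Collections', options=list(COLLECTIONS))
colormap = pn.widgets.Select(name='Colormaps', options=COLLECTIONS[collection.value])

def _on_collection_change(event):
    # Repopulate the colormap dropdown whenever the collection changes.
    colormap.options = COLLECTIONS[event.new]
    colormap.value = COLLECTIONS[event.new][0]

collection.param.watch(_on_collection_change, 'value')

app = pn.Column(collection, colormap)
app.servable()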

New features: Claude integrated two new colormap collections I’d been wanting to add for years – Fabio Crameri’s scientific colormaps (cmcrameri) and Kristen Thyng’s cmocean colormaps. That brought the total from 3 to 5 colormap collections.

Here are examples of the app showing each of the new collections:

The app testing of cmocean deep colormap
The app testing of Crameri’s batlow colormap

Documentation: Claude updated the README with detailed step-by-step Binder instructions, added a troubleshooting section, and created a table documenting all five colormap collections.

I provided the requirements and guidance throughout, but I almost never looked at the implementation details – what I’ve taken to calling the “bits and bobs” of the code. I focused on what I needed to happen; Claude figured out how to make it happen.

What changed (and what didn’t)

I still understand what the code does conceptually. I can read it, review it, check that it’s correct. I know why we needed to move from Parameterized classes to explicit widgets, and I understand the reactive programming model. But I didn’t write those lines myself.

The work happens at a different level now. I bring the domain expertise (what makes a good colormap visualization), the requirements (needs to deploy on Binder, needs these specific colormap collections), and the quality judgment (that widget behavior isn’t quite right). Claude brings the implementation knowledge, awareness of modern best practices, and the ability to quickly adapt code patterns to new frameworks.

This is really different from my 2020 experience. Back then, working through those Pandas patterns taught me techniques I could apply to other projects. Now, I’m learning what becomes possible when you can clearly articulate requirements and delegate the implementation.

The honest trade-off

There’s a trade-off here, and I’m trying to be honest about it. In 2020, working through the Panel widget patterns taught me things that stuck. In 2026, I got working, modernized code in a fraction of the time, but with less hands-on knowledge of Panel 1.x internals.

For this particular project, that trade-off made sense. I needed a working app deployed and accessible, not deep expertise in Panel migration patterns. But I’m conscious that I’m optimizing for different outcomes now: shipping features fast versus building deep technical understanding through hands-on work.

What this means going forward

After years of writing code line by line, this new way of working feels both efficient and different. I got more done in a couple of hours than I might have accomplished in several weeks working solo. The app is modernized, deployed, working better than ever, and even has new features I’d been wanting to add for years.

This has been a game-changer for how I work. I still do the work that matters most to me: seeing the tool gap, coming up with the vision, iteratively prototyping to flesh out what I actually need. That’s substantial work, and it’s mine. But after that initial phase? A lot of the implementation will be done with Claude. The app is done and it’s great, and I know this is the path forward for me.

References

Chen, D.Y. (2018). Pandas for Everyone: Python Data Analysis. Addison-Wesley Professional.

Crameri, F. (2018). Geodynamic diagnostics, scientific visualisation and StagLab 3.0. Geoscientific Model Development, 11, 2541-2562. https://www.fabiocrameri.ch/colourmaps/

Niccoli, M. (2020). Keep advancing your Python coding skills. MyCarta Blog. https://mycartablog.com/2020/10/22/keep-advancing-your-python-coding-skills/

Thyng, K.M., Greene, C.A., Hetland, R.D., Zimmerle, H.M., and DiMarco, S.F. (2016). True colors of oceanography: Guidelines for effective and accurate colormap selection. Oceanography, 29(3), 9-13. https://matplotlib.org/cmocean/


Try the app yourself: The modernized colormap distortion app is available on GitHub and you can run it in Binder without installing anything.

The value of intellectual play: Mill, machine learning, and a drilling problem I couldn’t stop thinking about

Featured

A few years back, I watched a CSEG talk by Lee Hunt (then at Jupiter Resources) called Value thinking: from the classical to the hyper-modern. One case study in particular stuck with me—so much so that I ended up exploring it in a Jupyter Lab notebook, bringing it up in a job interview, and eventually testing whether an AI could reason through it on its own.

This post is about that journey. It’s also about what happens when you let yourself get genuinely curious about someone else’s problem. And—fair warning—it involves a 19th-century philosopher, a seven-well dataset, and a neural network that learned to distrust AVO attributes.

The problem

Jupiter Resources had a history of occasionally encountering drilling trouble in the Wilrich reservoir—specifically, loss of circulation when encountering large systems of open fractures. Mud loss. The kind of problem that can cost you a well.

They had done extensive geophysical work with multiple seismic attributes that, in theory, should correlate with fractures: Curvature, Coherence, AVAz (amplitude variation with azimuth), VVAZ (velocity variation with azimuth), and Diffraction imaging. But they lacked direct calibration data for the drilling problem, and some of the attributes were giving conflicting results.

Lee Hunt, who led the team and the geophysical work, suspected from the start that the AVO-based attributes might be compromised. He had seen evidence as far back as 2014 that AVAz and VVAZ responses in the Wilrich were dominated by an overlying coal, not the fractures themselves—the attributes were measuring a different geological signal entirely. Diffraction imaging was planned early as a complementary measure, precisely because it might not be affected by the coals in the same way (personal communication).

Seven wells. Five attributes. Four of the wells had experienced drilling problems; three had not. Here’s the data:

The question: which attribute—or combination—could reliably predict drilling problems, so that future wells could be flagged ahead of time?

Mill’s Methods: 19th-century philosophy meets drilling risk

Rather than accept uncertainty and provide no geophysical guidance at all, the team at Jupiter tried something different: Mill’s Methods of Induction. Their goal was to find a pattern that could help them advise the operations team—flag high-risk well locations ahead of time so contingency plans could be in place. Mill’s Methods are a set of logical procedures for identifying causal relationships, laid out by philosopher John Stuart Mill in 1843. They’re often illustrated with a food poisoning example (who ate what, who got sick), but they work just as well here.

This approach was characteristic of Lee Hunt’s attitude toward quantitative geophysics—an attitude I had come to admire through his other work. A few years earlier, he had published a CSEG Recorder column called “Many correlation coefficients, null hypotheses, and high value,” a tutorial on statistics for geophysicists that included synthetic production data and an explicit invitation: “You can do it, too. Write in to tell us how.”

I took him up on it. I worked through his examples in Jupyter notebooks, built visualizations, explored prediction intervals, learned a good deal of scientific computing along the way. I reached out to him about the work. I even wrote up some of that exploration in a blog post on distance correlation and variable clustering—the kind of technical deep-dive where you’re learning as much about the tools as about the data. That extended engagement gave me a feel for his way of thinking: understand the statistics, accept the uncertainty, improve your techniques if you can—but don’t just throw up your hands when the data is messy.

Method of Agreement: Look at all the problem wells (A, B, F, G). What do they have in common? Curvature is TRUE for all four. So is Diffraction imaging. The other attributes vary.

Method of Difference: Compare problem wells to non-problem wells (C, D, E). Neither Curvature nor Diffraction alone perfectly discriminates—Well E has Curvature TRUE but no problem; Well D has Diffraction TRUE but no problem.

Joint Method: But here’s the key insight—Curvature AND Diffraction together form a perfect discriminator. Every well where both are TRUE had problems. Every well where at least one is FALSE did not.

This wasn’t a claim about causation. It was a decision rule: when the next well location shows both high curvature and diffraction anomalies, flag it as elevated risk and ensure contingency protocols are in place.

The logic is sound because of asymmetric costs. Preparing for mud loss (having lost circulation material on site, adjusting mud weight plans) is a minor expense. Not preparing when you should have—that’s where you lose time, money, sometimes the well. You don’t need certainty to justify preparation. You need a defensible signal.

What a neural network learned

I wanted to see if a data-driven approach would arrive at the same answer. Looking at the table myself, and spending some time applying Mill’s Methods, I had already seen the pattern—Curvature and Diffraction together were the key predictors. But I was curious: what would a simple neural network learn on its own?

I trained a two-layer network (no hidden layer)—mathematically equivalent to logistic regression—on the same seven wells. (Yes, seven wells. I know. But stay with me.)
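
For the curious, here is a minimal sketch of the idea. It is not my original notebook, and the attribute values are made up, preserving only the Curvature-plus-Diffraction pattern described above, so the weights it learns will not match the numbers below; the signs are the interesting part.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: Curvature, Coherence, AVAz, VVAZ, Diffraction (1 = anomaly present).
# Rows: wells A-G with illustrative values; only the Curvature/Diffraction
# pattern matches the real table (both TRUE in every problem well).
X = np.array([
    [1, 1, 1, 0, 1],   # A - drilling problem
    [1, 0, 0, 1, 1],   # B - drilling problem
    [0, 1, 1, 1, 0],   # C - no problem
    [0, 0, 1, 0, 1],   # D - no problem (Diffraction TRUE, Curvature FALSE)
    [1, 1, 0, 1, 0],   # E - no problem (Curvature TRUE, Diffraction FALSE)
    [1, 0, 1, 0, 1],   # F - drilling problem
    [1, 1, 0, 0, 1],   # G - drilling problem
])
y = np.array([1, 1, 0, 0, 0, 1, 1])   # 1 = drilling problem

# No hidden layer = logistic regression; inspect the sign and relative size
# of each attribute's weight.
model = LogisticRegression().fit(X, y)
for name, w in zip(['Curvature', 'Coherence', 'AVAz', 'VVAZ', 'Diffraction'], model.coef_[0]):
    print(f'{name:12s} {w:+.2f}')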

The network classified all seven wells correctly. But the real insight came from the weights it learned:

Attribute   | Weight
Curvature   | +14.6
Diffraction | +9.7
Coherence   | ~0
AVAz        | −4.9
VVAZ        | −14.5

Curvature and Diffraction were strongly positive—predictive of problems. Coherence contributed almost nothing. But AVAz and VVAZ were negative—the network learned to suppress them.

A way to think about negative weights: imagine training a network to identify ducks from a set of photos that includes birds, ducks, and people in duck suits. The network will learn to weight “duck features” positively, but also to weight “human features” negatively—to avoid being fooled by the costumes. In the Wilrich case, the AVAz and VVAZ attributes were like duck suits: they looked like fracture indicators, but they were actually measuring something else.

This was interesting. All five attributes have theoretical justification for detecting fractures. Why would the network actively discount two of them?

When I mentioned this result to Lee Hunt, he confirmed what he had long suspected (personal communication): the AVAz and VVAZ responses in the Wilrich were dominated by an overlying coal, not the fractures themselves. He had measured this effect and documented it in a 2014 paper, where multiple attributes—including AVAz—showed statistically significant correlations to coal thickness rather than to reservoir properties. The neural network had learned, from just seven data points, to suppress exactly the attributes that Lee’s domain knowledge had already flagged as problematic.

This is Mill’s Method of Residues in action: if you know something else causes an observation, subtract it out. And it’s a reminder that domain knowledge and data-driven methods can converge on the same answer when both are applied honestly. I found this deeply satisfying.

What the AI got right—and what it missed

More recently, I revisited this problem using ChatGPT with the Wolfram plugin. I wanted to see if an AI, given just the table and a prompt about Mill’s Methods, could reason its way to the same conclusions.

It did—mechanically. It correctly identified Curvature and Diffraction as the consistent factors among problem wells. It noted that neither attribute alone was a perfect discriminator. It even offered to run logistic regression.

But it missed the interpretive leap. It hedged with phrases like “although there are exceptions” when in fact there were no exceptions to the conjunction rule. And it didn’t articulate the pragmatic framing: that the goal wasn’t to find the true cause, but to build a defensible decision rule under uncertainty.

That framing—the shift from epistemology to operations—required domain knowledge and judgment. The AI could apply Mill’s Methods. It couldn’t tell me why that application was useful here.

Drafting this post, I worked with a different AI—Claude—and found the collaboration more useful in a different way: not for solving the problem, but for reflection. Having to explain the context, the history, the why of my interest helped me articulate what I’d been carrying around in my head for years. Sometimes the value of a thinking partner isn’t in the answers, but in the questions that force you to be clearer.

Why this stuck with me

I’ll be honest: I kept thinking about this problem for years. It became part of a longer arc of engagement with Lee’s work—first the statistics tutorial, then the Wilrich case study, each building on the last.

When I interviewed for a geophysics position (Lee was retiring, and I was a candidate for his role), I mentioned this case study. I pulled out a pen and paper and wrote the entire seven-well table from memory. They seemed impressed—not because memorizing a table is hard, but because it signaled that I’d actually enjoyed thinking about it. That kind of retention only happens when curiosity is real.

I didn’t get the job. The other candidate had more operational experience, and that was the right call. But the process was energizing, and I’m sure that enthusiasm carried into my next opportunity, where I landed happily and stayed for over six years.

I tell this not to brag, but to make a point: intellectual play compounds. You don’t always see the payoff immediately. Sometimes you explore a problem just because it’s interesting—because someone like Lee writes “You can do it, too” and you decide to take him seriously—and it pays dividends in ways you didn’t expect.

The convergence

Three very different approaches—19th-century inductive logic, a simple neural network, and (later) an AI assistant—all pointed to the same answer: Curvature and Diffraction predict drilling problems in this dataset. The AVO attributes are noise, or worse, misleading.

When three methods converge, you can trust the signal. And you can make decisions accordingly.

That’s the real lesson here: rigorous reasoning under uncertainty isn’t about finding the One True Cause. It’s about building defensible heuristics, being honest about what you don’t know, and updating as new data comes in. Mill understood this in 1843. A neural network can learn it from seven wells. And sometimes, so can an AI—with a little help.

I hope you enjoyed this as much as I enjoyed putting it together.


The original case study was presented by Lee Hunt in his CSEG talk “Value thinking: from the classical to the hyper-modern.” The neural network analysis is in my Geoscience_ML_notebook_4. Lee documented the coal correlation issue in Hunt et al., “Precise 3D seismic steering and production rates in the Wilrich tight gas sands of West Central Alberta” (SEG Interpretation, May 2014), and later reflected on confirmation bias as an obstacle to recognizing such issues in “Useful Mistakes, Cognitive Biases and Seismic” (CSEG Recorder, April 2021). My thanks to Lee for the original inspiration, for confirming the geological context, and for sharing the original presentation materials.


  • Hunt, L., 2013, Many correlation coefficients, null hypotheses, and high value: CSEG Recorder, December 2013. Link
  • Hunt, L., S. Hadley, S. Reynolds, R. Gilbert, J. Rule, M. Kinzikeev, 2014, Precise 3D seismic steering and production rates in the Wilrich tight gas sands of West Central Alberta: SEG Interpretation, May 2014.
  • Hunt, L., 2021, Useful Mistakes, Cognitive Biases and Seismic: CSEG Recorder, April 2021.
  • My neural network analysis: Geoscience_ML_notebook_4
  • My earlier exploration of Lee’s production data: Data exploration in Python: distance correlation and variable clustering
  • ChatGPT + Wolfram session on Mill’s Methods: Gist

AI/HI Transparency Statement Modified from Brewin http://www.theguardian.com/books/2024/apr/04/why-i-wrote-an-ai-transparency-statement-for-my-book-and-think-other-authors-should-too

Has any text been generated using AI? Yes
Has any text been improved or corrected using HI? Yes

Additional context: This post emerged from a conversation with Claude AI (Anthropic). I provided the source materials (a ChatGPT + Wolfram session, a Jupyter notebook, personal history with the problem), direction, and editorial judgment throughout. Claude drafted the post based on these inputs and our discussion of structure, voice, and framing. I reviewed multiple drafts, revised as needed, rewrote some key sections, and made all final decisions about what went to publication. The core analysis—Mill’s Methods, the neural network, the interpretation—was done by me years before this collaboration; the AI’s role was in helping articulate and structure that work for a blog audience.

ChatGPT as an essay-writing assistant – Part III

Featured

The Challenge of a Satisfying Conclusion

When I published Part II of this series back in February 2025, I had a plan for Part III. Show the prompts I used, analyze the time investment, evaluate the result against Part I’s GPT-3.5 baseline, maybe try one more iteration with even newer tools. Straightforward. Methodical.

But I never finished it. To be honest, I lost interest. Another marginally better AI-generated essay wasn’t going to cut it—not for me, and probably not for you readers either. Another iteration showing GPT-4.5 writes slightly better than GPT-4? That’s predictable, uninspiring… so I dropped it.

But the unfinished series sat there in the back of my mind. I wasn’t actively working on it, but I also couldn’t quite let it go. It created a kind of block—I found myself not writing about anything at all, partly because this felt incomplete, partly because my interests had genuinely shifted elsewhere.

Recently though, I came back to this question. Not because I wanted to complete the series for completeness’ sake, but because I wanted to understand what would actually make Part III worthwhile.

So I asked for help. I brainstormed with Claude (Anthropic’s AI) about what Part III should actually be about—what would make it worth writing and worth reading. And something clicked.

What Was the Question Really Asking For?

Looking back now, with decades between me and that moment in Professoressa Carbone’s classroom, I think I understand what she was asking for. She wasn’t looking for recitation of Plato’s philosophy mechanically applied to medieval warfare. She wanted to see if I could reason using philosophical frameworks in unfamiliar territory. Synthesis, not facts. Thinking, not performing memorization.

At 15, I wasn’t ready for that. I had volunteered for the oral examination thinking I could rely on prepared material about Plato’s recent lessons. Instead, she cut through my preparation with a single question that required genuine philosophical thinking: “What would Plato have thought about the Hundred Years’ War?”

It was a brilliant pedagogical move. It required understanding Plato’s ideas deeply enough to apply them to a completely different context—a context Plato never encountered, in a historical period he never knew. It required the kind of intellectual flexibility and reasoning that, honestly, I didn’t have yet.

The humiliation I felt wasn’t really about not knowing facts. It was about being exposed as someone trying to get by on memorization rather than understanding. And I think she knew it. She saw through my bluff.

So What Would Satisfy?

This brings me back to the problem of Part III. Showing that AI can now generate a more sophisticated-sounding essay than my 15-year-old self could produce doesn’t prove anything interesting. AI is very good at generating sophisticated-sounding content. That’s almost the problem.

What would actually satisfy—both as closure for this series and as something worth your time reading—is demonstrating the kind of reasoning Professoressa Carbone was asking for. Can I, now, with the benefit of intellectual maturity and AI assistance, actually think through what Plato might have thought about prolonged warfare between nations? Not just string together plausible-sounding paragraphs with proper citations, but engage in genuine philosophical reasoning?

What Would That Actually Look Like?

If I were to actually write that essay—the one demonstrating real philosophical reasoning rather than AI-generated content—what would it need?

Looking back at the GPT-4 essay from Part II, it has proper citations and coherent structure, but it’s superficial. It lists Platonic concepts (philosopher-kings, guardians, ideal states) and applies them mechanically to medieval warfare. That’s exactly the kind of recitation Professoressa Carbone was testing me against.

Real reasoning would require:

  • Connecting Plato’s specific ideas to specific events or decisions during the Hundred Years’ War—not just general principles applied generally
  • Exploring how Plato’s concepts might actually illuminate something about prolonged conflict between nations that we wouldn’t see otherwise
  • Considering contemporary interpretations or modern applications—what do we learn about conflict, governance, or political philosophy from this exercise?
  • Drawing genuine insights about both Plato and warfare, not just restating both

That’s the essay I’d want to write someday. Not as an academic exercise, but as personal closure—proving to myself I can do the kind of thinking she was asking for.

Closure for Now

But that’s not this post. This post is about giving you, the readers, closure on this series. About acknowledging honestly what I learned about AI as a writing assistant, and why simple iteration wasn’t the answer.

Here’s what I’ve learned:

AI is excellent at generating plausible content. GPT-4 produced an essay that looks credible—proper structure, citations, coherent arguments. For many purposes, that’s enough.

But AI doesn’t reason, it recognizes patterns. The essay from Part II strings together familiar ideas in familiar ways. It’s sophisticated pattern matching, not thinking. It can’t do what Professoressa Carbone was asking for: genuine synthesis that produces new insight.

The real value of AI as a writing assistant isn’t in replacing thinking—it’s in supporting it. AI can help with research, organization, articulation. It can reduce cognitive load so you can focus on the hard part: the actual reasoning. But you still have to do the reasoning.

Writing with AI requires clarity about what you’re trying to accomplish. If you want content generation, AI does that well. If you want thinking support, you need to know what thinking you’re trying to do. The tool can’t figure that out for you.

This series started with a simple question: can AI help me write an essay? The answer turned out to be more nuanced than I expected. It depends entirely on what kind of essay, and what role you want AI to play. For the essay I’d need to write to truly answer Professoressa Carbone’s question—the one that demonstrates reasoning rather than recitation—AI could help, but it couldn’t do the essential work.

Maybe someday I’ll write that essay. For now, I’m moving on to other projects where I’m excited about what AI can do: document extraction in geoscience, agentic workflows, problems where AI’s strengths align better with what I’m trying to accomplish.

Thank you for following this journey with me. Even if it didn’t end where I originally planned, I learned something worth sharing.

A Final Thought: Rigor Without Brutality

I started this series partly because of concerns about AI in education—concerns rooted in my own experience.

ChatGPT has educators calling for more in-class writing and oral examinations. I agree we need assessment that can’t be faked by AI. But I’m deeply opposed to the brutality that often came with those older systems.

Here’s the thing: the brutality was never necessary for the educational value. Professoressa Carbone’s question was pedagogically brilliant. The public humiliation didn’t make it more effective; it just made it traumatic.

We need assessment methods that demand genuine reasoning, in environments that support both students and teachers. It’s possible to have rigorous evaluation without breaking people in the process.

AI forces us to confront what we actually value in education: not the appearance of learning, but the development of genuine understanding and reasoning. The question is whether we can build systems that nurture that without the cruelty.

AI/HI Transparency Statement Modified from Brewin http://www.theguardian.com/books/2024/apr/04/why-i-wrote-an-ai-transparency-statement-for-my-book-and-think-other-authors-should-too

Has any text been generated using AI? Yes
Has any text been improved or corrected using HI? Yes

Additional context: This post was collaboratively written through an iterative conversation with Claude (Anthropic). The human author provided the direction, constraints, personal context, and decisions about what to include/exclude. The AI assistant drafted text, which was then reviewed and revised based on feedback. Sections were rewritten multiple times to match the author’s voice and intentions. The final editorial decisions, including what content made it to publication, were made by the human author.

ChatGPT as an essay-writing assistant – Part II

The blog post below was produced entirely by GPT-4.0, following a series of iterative prompts I provided, from the Introduction through the References and Footnotes. Please refer to my AI (Artificial Intelligence) and HI (Human Intelligence) Table in the last section. In the next post, I will include the full listing of my prompts, a text evaluation, and a time-investment-versus-return analysis. I will also attempt prompting for some further improvements beyond this initial analysis.

Plato’s Perspective on the Hundred Years’ War

Introduction

The Hundred Years’ War, a prolonged and devastating conflict between England and France (1337-1453), was a historical event unknown to the ancient Greek philosopher Plato. Nonetheless, by examining his philosophical ideas, primarily in “The Republic” and “Laws,” we may speculate on his potential views regarding such a conflict.

The Philosopher King and the Ideal State

In his magnum opus, “The Republic,” Plato presents the concept of a philosopher-king, an individual embodying wisdom and justice, who governs the ideal state (Plato, Republic, Stephanus 473c-d, Book V)1. This enlightened ruler, possessing both knowledge and moral integrity, would guide society toward harmony and justice.

War as Necessary Evil

In the “Laws,” Plato acknowledges that war might be a necessary recourse, particularly when defending the state against injustice (Plato, Laws, Stephanus 625c, Book I)2. However, he also emphasizes the importance of striving for peace and pursuing diplomacy whenever possible.

Striving for Peace

Plato contends that an ideal state should always seek to cultivate peace with other nations, fostering a spirit of friendship and cooperation (Plato, Republic, Stephanus 470a-b, Book V)3. This suggests that, had he been aware of the Hundred Years’ War, Plato would likely have advocated for diplomatic efforts to resolve the conflict.

The Guardians as Defenders of the State

In “The Republic,” Plato introduces the guardians, a class of citizens specifically trained to defend the state and uphold its values (Plato, Republic, Stephanus 414a-b, Book III)4. These individuals, educated in both martial and intellectual pursuits, would protect the state from external threats, such as those posed during the Hundred Years’ War.

Governance in Times of War

Though Plato acknowledges the potential necessity of war, he cautions against allowing it to dominate the state’s agenda (Plato, Republic, Stephanus 468a-b, Book V)5. Instead, he argues that governance should remain focused on promoting the well-being of citizens and maintaining social harmony.

Fostering Social Harmony and Virtue

Plato maintains that an ideal state should strive to cultivate virtue among its citizens, promoting wisdom, courage, moderation, and justice (Plato, Republic, Stephanus 427e-428a, Book IV)6. In this context, the prolonged conflict of the Hundred Years’ War might be seen as a distraction from the state’s primary goal of fostering a harmonious and virtuous society.

Conclusion

Though Plato lived centuries before the Hundred Years’ War, his philosophical ideas can provide valuable insights into how he might have viewed such a conflict. By examining his writings, particularly “The Republic” and “Laws,” we can speculate that Plato would have recognized the potential necessity of war in certain circumstances but would have strongly advocated for peace, diplomacy, and the pursuit of justice and virtue in governance.

References

Title        | Author | Date Written | Chicago citation
The Republic | Plato  | 380 BCE      | Plato. The Republic. Translated by G.M.A. Grube. Indianapolis: Hackett Publishing Company, 1992.
Laws         | Plato  | 348 BCE      | Plato. Laws. Translated by A.E. Taylor. London: Dent, 1934.

Footnotes

  1. Plato, Republic, 473c-d. Perseus Digital Library | The Internet Encyclopedia of Philosophy 
  2. Plato, Laws, 628a-630d. Perseus Digital Library | The Internet Encyclopedia of Philosophy 
  3. Plato, Republic, 470a-471c. Perseus Digital Library | The Internet Encyclopedia of Philosophy 
  4. Plato, Republic, 433b-c. Perseus Digital Library | The Internet Encyclopedia of Philosophy 
  5. Plato, Republic, 414b-415d. Perseus Digital Library | The Internet Encyclopedia of Philosophy 
  6. Plato, Laws, 348, 663a-b. Perseus Digital Library | The Internet Encyclopedia of Philosophy 

AI (Artificial Intelligence) and HI (Human Intelligence) Statement

Modified from Brewin http://www.theguardian.com/books/2024/apr/04/why-i-wrote-an-ai-transparency-statement-for-my-book-and-think-other-authors-should-too

Has any text been generated using AI? Yes
Has any text been improved or corrected using HI? No
Have any methods of analysis been suggested using HI? Yes
Have any methods of analysis been suggested using AI? No
Do any analyses utilize AI technologies, such as Large Language Models, for tasks like analyzing, summarizing, or retrieving information from data? Yes

ChatGPT as an essay-writing assistant – Part I

Outline

Introduction and background

In this multi-part series of articles, I will document my efforts in writing a high school Philosophy paper using ChatGPT. The paper shall answer in writing a question I was asked at an impromptu oral assessment in Philosophy class in my 3rd year of high school.

Let me take a step back to give you some context. I was enrolled in a Classical High School in Rome, Italy. This type of school has a strong 5-year emphasis on History and Philosophy, as well as Latin, Greek, and Italian literature and grammar, with a bit of Science and Math. Students are typically evaluated on weekly written assignments, a minimum of 3 scheduled written tests each semester, and a minimum of 2 oral assessments per semester. During the latter, the teacher would (usually) pick two names at random from the attendance list once a week and gauge those students’ knowledge of the weekly lesson, in detail, but also of anything covered in the year up to that point; this could last between 30 and 60 minutes, and would happen in front of an audience of (often roaring) classmates. Seriously: brutal!

Only occasionally were students allowed to give a presentation on a prepared topic or volunteer for a full oral evaluation. This was the case for me during a block on Plato. And so it was that the teacher, the famously feared Professoressa Carbone (who taught both History and Philosophy), probed into my soul with her terrifying eyes and asked with a sinister smile, “Tell me, Niccoli, what would Plato have thought about the Hundred Years’ War?”

My first prompt

When I heard about OpenAI’s ChatGPT being available, it seemed very fitting to try this question in my first investigation. So I went ahead and asked “What would Plato have thought about the Hundred Years’ War?”, with only minor modifications to ask for citations and to break down the response into steps.

GPT-3.5 result

Below is the result from the default GPT-3.5 model:

Essay written by Chat GPT-3.5

Time investment and return

  • Time investment: the whole investigation took about 30 minutes, including a very light read on prompts from a reputable source, and reading the answer (because the citations really did not come as full references, there was really nothing else to do other than that).
  • Return: was this worth 30 minutes of my time? The answer is, as often, it depends! On the one hand, this is a really poor answer in my view (see next section for a more detailed evaluation). So, as an essay (I imagine myself as a student looking for a quick free lunch), this would not do it; not even close. For me… but then again, even at 13 in my first year of high school I would’ve known this was not going to cut it… or in Junior High, for that matter. On the other hand, I was deliberate in providing a very simple prompt, neither engineered nor iteratively refined. As such, the response will come in useful as a benchmark against which I will compare other efforts (for example using a better model, GPT-4, and improving the prompt).

Evaluation

Is the answer grammatically correct? Yes. Is it written in a uniform style, appropriate for a school essay? Yes. Does it contain any hallucination? I would say no, everything sounds reasonable. However, I must admit that without references to specific passages in Plato’s work, it is hard to say for sure. From what I remember of Plato’s philosophy it’s reasonable, and that’s all.

On the other hand, had I been handed this as a teacher, I would not be impressed. If this were the output of an in-class exam, I probably would have marked it as a D- (a D for the very mediocre text and a minus for the ridiculousness of the citations). Had it instead been handed in as an at-home written assignment (in which case the student would have had access to their notes and books), it would certainly be worth an F (Fail). Now I sound like Professoressa Carbone.

You may reasonably wonder at the end of this article: how did I fare in that evaluation in 1986? Fair question. And I am not ashamed to confess it was a disaster. At 15 I was still very immature, intelligent but unengaged in school work, and also unable to do real analysis; by that I mean interpreting information rather than just accumulating facts and connecting them. I honestly do not recall the mark I got (we had different levels of Failure that went progressively deeper, not too dissimilarly from the Circles in Dante’s Inferno), but I remember that it was definitely an F.

P.S. The model did a perfectly fine job of proofreading my first draft of this post. That was definitely worth a lot, given that I type very slowly and make lots of typos.

10-year anniversary


July 15th, 2022

Dear readers:

I started writing this blog 10 years ago and it is to date one of the things I am the most proud of.

This is a big milestone for me, so I would like to begin by thanking all the people who encouraged me at the start, and in particular, for their valiant support and feedback: Matt Hall, Evan Bianco, Oliver Kuhn, Steve Lynch, and last but not least my life partner Rebecca.

A lot of the work I did in the first couple of years was on furthering my understanding of the use of colours in scientific visualization, and on sharing that with others, showing how to use better colormaps, for example the Perceptual rainbow palette, part a and part b.

I am grateful I achieved those knowledge-sharing goals:

  • The material is referenced in the matplotlib documentation
  • The blog has often been used as a reference in talks and other publications on colormaps, beginning with this classic matplotlib talk given by Kristen Thyng at SciPy 2014
  • I was thrilled to have received positive feedback on my work from Bernice Rogovitz, someone I hold in very high esteem
  • Some of that work on the blog resulted in being invited by Matt Hall to write a tutorial for The Leading Edge (SEG). The tutorial came with a Jupyter notebook demonstrating how to evaluate default colour maps and favour more perceptual alternatives
  • I am particularly proud to see that the article is still ranking in the top 20 most downloaded papers from The Leading Edge between 2010 and 2020
  • Additionally, the two blog posts are to date top results for the Google searches “perceptual rainbow” and “perceptual palette images”

Ultimately, I am very happy to have created a space for sharing and exchanging ideas freely.

So, to celebrate these 10 years of MyCarta, I treated it to a new domain, mycartablog.com (the old domain and links still work), and a brand new look (it took me a while to get there, but I like it a lot), with a theme that should now be responsive on all devices (welcome to the new era, Matteo!).

I will also soon publish a series of short but sweet new posts on colormaps and visualization (and republish them on LinkedIn).

Thank you all for sharing this journey!

Keep advancing your Python coding skills

October 22, 2020

In my last post I touched on the topic of continuously improving your geo-computing projects (also take a look at my chapter from the upcoming Software Underground book, 52 things you should know about geocomputing).

However, one aspect that I intentionally left out was that of coding skills, as I was planning to get back to it with a dedicated post, which you are reading right now.

2018 vs 2020 comparison of flag percentage calculation

In the Jupyter notebook I compare the results of seismic inversion from two methods (with or without inversion-tailored noise attenuation), using a custom function to flag poor predictions of the target well log, with the median/median absolute deviation as a robust statistic for the error; the results are shown below.
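The flagging function itself is not reproduced in this post; as a rough illustration of the idea, a minimal MAD-based version might look like the sketch below (the 'error' column name and the threshold are my hypothetical choices, not necessarily what is in the notebook).

import pandas as pd

def flag_poor_predictions(result, error_col='error', n_mads=3):
    """Flag samples whose prediction error is more than n_mads median
    absolute deviations away from the median error (a robust outlier test).
    Column name and threshold are hypothetical, for illustration only."""
    err = result[error_col]
    med = err.median()
    mad = (err - med).abs().median()
    return ((err - med).abs() > n_mads * mad).astype(int)

# e.g. result_a['flag'] = flag_poor_predictions(result_a)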

One may just do this visual comparison, but I also included calculations to count the number and percentage of samples that have been flagged in each case. Below is a cell of code from the Jupyter notebook (let’s call it the 2020 code) that does just that.

zone_errors_a['flagged samples'] = result_a.groupby('zone', sort=False).flag.sum().values
zone_errors_b['flagged samples'] = result_b.groupby('zone', sort=False).flag.sum().values

def calc_proportion(dtf):
    """
    Calculate the proportion of flagged samples in a group.
    """
    x = dtf.flag
    return round(100 * x.sum() / len(x), 1)

zone_errors_a['proportion (%)'] = result_a.groupby('zone', sort=False).apply(calc_proportion).values
zone_errors_b['proportion (%)'] = result_b.groupby('zone', sort=False).apply(calc_proportion).values

I am a lot happier with this code than with the original code (circa 2018), which is in the cell below.

zones_a = list(result_a['zone'].unique())
zones_b = list(result_b['zone'].unique())

zone_errors_a['flagged samples'] = [result_a.loc[result_a.zone == z, 'flag'].sum() for z in zones_a]
zone_errors_b['flagged samples'] = [result_b.loc[result_b.zone == z, 'flag'].sum() for z in zones_b]

zone_errors_a['proportion (%)'] = [round(result_a.loc[result_a.zone == z, 'flag'].sum()
                                         / len(result_a.loc[result_a.zone == z, 'flag']) * 100, 1)
                                   for z in zones_a]
zone_errors_b['proportion (%)'] = [round(result_b.loc[result_b.zone == z, 'flag'].sum()
                                         / len(result_b.loc[result_b.zone == z, 'flag']) * 100, 1)
                                   for z in zones_b]

The major differences in the older code are:

  • I was using unique instead of Pandas’ groupby
  • I was using list comprehensions to work through the DataFrame, instead of Pandas’ apply with a custom function to calculate the percentages on the entire DataFrame at once.

I find the 2020 code much tidier and easier to read.
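To make the contrast concrete outside the context of my notebook, here is a small, self-contained toy example (made-up data and column names of my own choosing) showing the two approaches side by side:

import pandas as pd

df = pd.DataFrame({'zone': ['A', 'A', 'B', 'B', 'B'],
                   'flag': [1, 0, 1, 1, 0]})

# 2018-style: unique + list comprehension
zones = list(df['zone'].unique())
pct_old = [round(100 * df.loc[df.zone == z, 'flag'].sum()
                 / len(df.loc[df.zone == z, 'flag']), 1) for z in zones]

# 2020-style: groupby + a small custom function
def calc_proportion(dtf):
    """Percentage of flagged samples in one group."""
    x = dtf.flag
    return round(100 * x.sum() / len(x), 1)

pct_new = df.groupby('zone', sort=False).apply(calc_proportion).values

print(pct_old)        # [50.0, 66.7]
print(list(pct_new))  # [50.0, 66.7]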

Enter Pandas for Everyone

The above changes happened in but a few hours over two evenings, after having worked through chapters 9 and 10 of Pandas for Everyone by Daniel Chen, a very accessible read for all aspiring data scientists, which I highly recommend (also, watch Daniel’s fully-packed 2019 PyCon tutorial).

And before you ask: no, you do not get the Agile Scientific sticker with the book, I am sorry.

🙂

Comparison of 2016 vs 2020 code snippets from the 2016 SEG Machine Learning contest

A second example is of code used to calculate the first and second derivatives for all geophysical logs from the wells in the 2016 SEG Machine Learning contest.

The two cells of code below do exactly the same thing: loop through the wells and, for each one in turn, loop through the logs, calculate the derivatives, add them to a temporary Pandas DataFrame, then concatenate into a single output DataFrame. In this case, the only difference is the move away from unique to groupby.

I use the %%timeit cell magic to compare the runtimes for the two cells.

2016 code
%%timeit
# for training data
# calculate all 1st and 2nd derivative for all logs, for all wells
train_deriv_df = pd.DataFrame()             # final dataframe

for well in train_data['Well Name'].unique():        # for each well
    new_df = pd.DataFrame() # make a new temporary dataframe
   
    for log in ['GR', 'ILD_log10', 'DeltaPHI', 'PHIND' ,'PE']: # for each log
        # calculate and write to temporary dataframe
        new_df[str(log) + '_d1'] = np.array(np.gradient(train_feat_df[log][train_feat_df['Well Name'] == well]))
        new_df[str(log) + '_d2'] = np.array(np.gradient(np.gradient(train_feat_df[log][train_feat_df['Well Name'] == well])))
         
    # append all rows of temporary dataframe to final dataframe          
    train_deriv_df = pd.concat([train_deriv_df, new_df])

86 ms ± 1.47 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
2020 code
%%timeit
# for training data
# calculate all 1st and 2nd derivative for all logs, for all wells
train_deriv_df = pd.DataFrame() # final dataframe

for _, data in train_feat_df.groupby('Well Name'): # for each well        
    new_df = pd.DataFrame()                        # make a new temporary dataframe
   
    for log in ['GR', 'ILD_log10', 'DeltaPHI', 'PHIND' ,'PE']: # for each log
        # calculate and write to temporary dataframe 
        new_df[str(log) + '_d1'] = np.gradient(data[log])
        new_df[str(log) + '_d2'] = np.gradient(np.gradient(data[log]))

    # append all rows of temporary dataframe to final dataframe          
    train_deriv_df = pd.concat([train_deriv_df, new_df])

52.3 ms ± 353 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

We go down from 86 ms to 52.3 ms, a modest improvement, but the code is certainly more compact and a whole lot lighter to read (i.e. more pythonic, or pandaish if you prefer): I am happy!

As an aside, if you want to know more about timing code execution, see section 1.07 from Jake VanderPlas’ outstanding Python Data Science Handbook, which I also cannot recommend enough (and do yourself a favor: watch his series Reproducible Data Analysis in Jupyter).
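If you are working outside Jupyter, the standard-library timeit module gives the same kind of measurement as the %%timeit cell magic; here is a quick, made-up illustration of mine (not from the notebook):

import timeit

setup = "import numpy as np; x = np.random.rand(10_000)"
total = timeit.timeit("np.gradient(x)", setup=setup, number=10_000)
print(f"{1000 * total / 10_000:.3f} ms per call")  # average over 10,000 runs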

By the way, below I show the notebook code comparison generated using the nbdiff-web option from the awesome nbdime library, a recent discovery.