The Card Analogy: AI, Originality, and the Art of the Steal


A shorter version of this post appeared on LinkedIn on March 25, 2026. This version includes additional prose, figures, and a postscript on a conversation it sparked.


In a previous LinkedIn article, I pointed my BS detector at AI news. This time I’m pointing it at my own AI.

I had been writing a blog post with Claude about intelligence and communication across species. In it, I mentioned that we share 98% of our DNA with gorillas. Reading it back, a doubt surfaced: if humans and gorillas share 98%, how come chimps are our closest relatives at only 94%?

The numbers turned out not to be directly comparable — different measurement methods — but the question still stood. Whatever the exact percentages, they’re averages across the whole genome. And averages hide a lot.

That led me into incomplete lineage sorting — the fact that if you line up human, chimp, and gorilla DNA and compare it piece by piece, about 30% of the genome tells a different evolutionary story than the species tree (Scally et al. 2012). Well-established science. Notoriously hard to explain.

I asked Claude: “so the genome and speciation diagrams do not overlap?” It responded with an analogy I hadn’t asked for — dealing cards from a deck.

Seven gene variants in the ancestral population. Gorillas split off first — first deal. Some cards go to gorilla, the human-chimp ancestor keeps others, some go to both. Second deal splits human from chimp. Three players, overlapping hands. Compare: Card D went to human and chimp but not gorilla — species tree. Card C went to human and gorilla, skipping chimp — contradicts it. Card E went to chimp and gorilla, skipping human.

I said “make me a diagram.” No specs. Claude produced the figure below.

Two cosmetic tweaks from me afterward. Everything else — concept, layout, card naming — is Claude’s.

So: is this actually original?

That’s a claim worth auditing. I ran it through the same framework I built for the Pentagon/ChatGPT post.

Step 1: Search. I asked Claude to search for prior card-dealing analogies for ILS. It found analogies using M&Ms (coloured candies sorted into jars — Avian Hybrids, 2022) and Pachinko machines (marbles through pegs — The G-cat, 2021). No cards. Those analogies explain random sorting of identical items into bins. The card version does something different: distinct identities per variant, two sequential deals, and a colour-coded punchline mapping to the three topologies.

Step 2: Audit the search itself. An LLM claiming “I didn’t find it” is not the same as “it doesn’t exist.” So I ran a Fermi sanity check on the search coverage. ILS is a niche topic — maybe 50 published explainers total, of which maybe 6 use any analogy at all. The keywords “card,” “deck,” “dealt” are highly distinctive in evolutionary biology. If a card analogy existed in any indexed source, five independent searches would almost certainly surface it. Probability of missing it in searchable literature: ~3%.
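For transparency, here's that sanity check as a few lines of Python. The per-search miss rate is an assumption (a deliberately generous 50%); the point is that independent searches compound:

```python
# Fermi check on search coverage (assumed inputs, not measured data).
# If a card-analogy explainer existed in an indexed source, how likely
# is it that every search missed it?

per_search_miss = 0.5   # assumption: generous odds of one search missing it
searches = 5            # five independent searches were run

p_all_miss = per_search_miss ** searches
print(f"P(all {searches} searches miss an indexed source) ≈ {p_all_miss:.1%}")
# -> ≈ 3.1%, the ~3% quoted above
```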

Step 3: Check the limits. Web search doesn’t reach textbook interiors, lecture slides, or classroom analogies. Someone may have used cards to explain ILS on a whiteboard in 2004. I can’t rule that out, and I shouldn’t claim to.

Step 4: Independent verification. I ran the claim through a separate Claude instance with extended thinking, using the full bullshit-detector framework — source verification, Fermi sanity check on search coverage, logical fallacy scan. Verdict: claim passes for searchable literature. The key flag: watch for equivocation on “published.” Web-indexed is not the same as “ever conceived.”

Verdict: “No published precedent found in searchable literature” is defensible. “First ever” is not. That distinction matters — it’s the same denominator hygiene from the Pentagon post. Know what your evidence covers and don’t claim more.

There’s a sharper version of the pattern-matching hypothesis worth naming. M&Ms were almost certainly in Claude’s training data. The move from “identical items sorted randomly into bins” to “distinct cards dealt sequentially to named players” is exactly the kind of transformation usually attributed to Picasso — “great artists steal” — though the line is almost certainly T.S. Eliot’s, who said it first, said it better, and meant something more precise: that the good poet welds the theft into something utterly different from the source. If that’s what happened here, the output is novel but the mechanism isn’t a jump — it’s a steal from a prior analogy in the same domain. I can’t rule it out. The audit covers what’s published, not what’s in the weights.

The direct precedent in the research literature is Figure 5 of Rivas-González et al. (2024) — topology posterior probabilities along a single chromosome. Rigorous work. Also very hard to read if you don’t already know population genetics.

Figure 5B from Rivas-González et al. 2024, PLOS Genetics 20(2):e1010836, CC BY 4.0. A segment of chromosome 1 showing ILS levels and coalescent depths across genomic windows. Three tracks, two colour scales, one chromosome. And this is the simplified view!

I also tried building a simplified whole-genome chromosome painting — same colour scheme as the card, synthetic data matched to published proportions, multiple sorting strategies — but it still took considerable mental effort to read. Nowhere near the immediacy of the card analogy.

Illustrative chromosome painting of incomplete lineage sorting across all 23 human chromosomes (1–22 plus X). Each horizontal bar is one chromosome, divided into 100 kb windows and colour-coded by which phylogenetic topology wins in that region: gold for human–chimp (the species tree, ~63%), green for human–gorilla (~18.5%), purple for chimp–gorilla (~18.5%). Generated with synthetic data matched to published genome-wide proportions; spatial clustering is representative, not derived from specific genomic coordinates. Compare with the card figure above: the same information, but at chromosome scale the signal dissolves into noise. The card analogy works because it operates at the level of mechanism, not data.
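For the curious, here is a minimal sketch of how a painting like this can be generated. The window size and proportions come from the caption above; the Markov-style "stickiness" term is my illustration of representative clustering, not the exact code behind the figure:

```python
import random

TOPOLOGIES = ["human-chimp", "human-gorilla", "chimp-gorilla"]
PROPORTIONS = [0.63, 0.185, 0.185]   # published genome-wide proportions
STICKINESS = 0.9                     # chance a window extends its neighbour's tract

def paint_chromosome(n_windows: int) -> list[str]:
    """Assign a topology to each 100 kb window, preserving the proportions."""
    windows = [random.choices(TOPOLOGIES, PROPORTIONS)[0]]
    for _ in range(n_windows - 1):
        if random.random() < STICKINESS:
            windows.append(windows[-1])                # extend the current tract
        else:
            windows.append(random.choices(TOPOLOGIES, PROPORTIONS)[0])
    return windows

# Chromosome 1 is ~250 Mb, i.e. ~2,500 windows of 100 kb:
chrom1 = paint_chromosome(2500)
print({t: round(chrom1.count(t) / len(chrom1), 2) for t in TOPOLOGIES})
```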

The first thing out of the conversation turned out to be the best thing. And the honest audit of the originality claim turned out to be more interesting than just asserting it.

What’s your take — does this count as genuine originality, or is it sophisticated pattern-matching that happens to land on something no one published before? There’s a broader debate running on this — whether LLMs are structurally capable of abduction, the kind of jump Einstein described from sensory experience to novel axioms, or whether they’re confined to induction and deduction no matter how fluent they look. I don’t have a settled answer. I’d be curious where you draw the line.


Postscript

On the same day I posted this on LinkedIn, I came across a post by Tom Chatfield — author and thinker on technology and language — about the “no true Scotsman” fallacy in discussions of AI creativity. The pattern he described: machines produce striking outputs, people dismiss them with “it can’t be genuinely creative because a machine made it.” The dismissal protects the category by rejecting inconvenient evidence rather than examining it.

The timing was coincidental. The overlap was too good to ignore, so I commented with the card analogy as a concrete test case. Chatfield’s reply singled out the phrase “dismissing the output to protect the category” and said his instinct was to investigate rather than gatekeep — to ask what kinds of creativity are at work and how they intersect with human learning.

That instinct is where the interesting conversation lives. The card analogy is a good test case because the output is concrete enough to examine: what did the model actually do, what’s novel about it, what isn’t? Much more productive than arguing about whether to award the word “creative.”

And does it matter?

The AlphaGo comparison is worth raising. Move 37 in the second game against Lee Sedol is widely accepted as a creative act — it violated every human prior and it won. The creativity claim has a ground truth. LLM outputs don’t. The card analogy can’t be verified against a scoreboard, which leaves room for dismissal that Move 37 never faced. But that’s a different argument from “it can’t be creative because a machine made it.” Worth keeping the two separate.


References: Scally et al. 2012, Nature 483:169; Rivas-González et al. 2023, Science 380:eabn4409; Rivas-González et al. 2024, PLOS Genetics 20(2):e1010836. M&M analogy: Avian Hybrids blog, 2022 (avianhybrids.wordpress.com). Pachinko analogy: The G-cat blog, 2021 (theg-cat.com).

Card analogy and figure: Claude (Opus 4.6), unprompted during conversation. Provenance verified against conversation transcript with line numbers. Originality audited with bullshit-detector framework (pip install bullshit-detector).

Teaching an AI to Reason Like Fermi: Part 2 – Permission to Guess


Previously, on this blog

In Part 1 of this series, I told you the story: how I tried to teach AI to do Fermi estimation, hit a wall with GPT-3’s arithmetic, got past it with the Wolfram plugin, and discovered the real problem wasn’t calculation — it was judgment. I showed you the results: a framework that lifts scores modestly (7.9 to 8.5 out of 10) but transforms the reasoning from opaque assertions into auditable, bounded estimates.

I promised to show you the framework itself. Here it is.

The architecture

The framework has two layers. The first handles mechanics — how to do the math. The second handles judgment — how to think about what you know and don’t know. The mechanical layer matters less than you’d expect. The judgment layer matters more.

The mechanical laws (LAW0–LAW11)

These twelve laws govern arithmetic, notation, and reporting. They exist because early LLMs were unreliable at basic computation, and even modern models benefit from explicit procedural scaffolding.

LAW0 sets a five-step workflow adapted from Chapter 2 of Weinstein’s Guesstimation (“Dealing with Large Numbers”): interpret the problem, identify known facts, make assumptions, calculate explicitly, and report results. This is the backbone — every problem follows this sequence.

LAW1–LAW7 handle scientific notation: how to multiply (add exponents), divide (subtract exponents), take square roots (adjust odd exponents), and compute geometric means. These feel pedantic, but they ensure consistent notation and prevent cascading errors in multi-step calculations. They were drafted partly in response to GPT-4’s arithmetic failures — in one documented session, it multiplied 836 × 1923 eight times and got a different wrong answer on seven of them. As I noted in Part 1, modern models handle arithmetic reliably; the judgment problem is what persists.

LAW8–LAW9 handle the practical side of units: do the math in whatever units are convenient (inches, miles, eV), but convert to SI for the final answer; and keep exponents positive by choosing appropriate unit scales. LAW10 says “read the worked examples first” — learning from demonstrations before attempting problems. LAW11 is prosaic but necessary: “don’t waste tokens rephrasing my question back to me.”

If you’re an experienced estimator, you can skim these. They’re scaffolding for the model, not insight for you. The estimation laws are where it gets interesting.

The estimation laws: teaching judgment

These seven laws address the hard part — knowing when to guess, how to bound, and when to stop and ask for help.

LAW-INTERPRET says: before doing anything, state what you think the problem is asking. If it’s ambiguous, ask. This was added after a specific failure I’ll show you shortly.

LAW-ESTIMATE is the core: for every quantity you use, you must do one of three things. State it as known — a fact any technically literate person would accept (speed of light, human height, Earth’s population). Bound it with reasoning — construct explicit upper and lower limits from physical or logical arguments, then take the geometric mean. Or ask — if you can’t do either, stop and request human input. “I’ll estimate X as Y” without justification is explicitly forbidden.

LAW-BOUNDS governs the quality of those bounds. They should span 1–3 orders of magnitude. They should be physically motivated, not arbitrary. And — this is one of my favorite additions — behavioral observations make excellent bounds. “A gecko can walk on a ceiling but cannot run on one, so adhesive force is between 1× and 10× body weight.”

LAW-PROCEED gives permission to be imprecise: a factor-of-ten estimate beats no estimate. This is straight from Weinstein: “Dare to be imprecise!” It counteracts the model’s tendency to hedge or refuse when it doesn’t have exact data.

LAW-DECOMPOSE is the problem-solving engine: if stuck, ask “what would I need to know to answer this?” Each answer becomes a sub-problem. Then classify each sub-quantity as KNOWN, BOUND, or ASK before proceeding.

LAW-VALIDATE and LAW-REPORT handle the endgame: cross-check using independent approaches, express answers in SI units with scientific notation, and always compare to something meaningful.

The whole thing flows as a six-step cycle: INTERPRET → DECOMPOSE → ESTIMATE → CALCULATE → VALIDATE → REPORT.
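To make the classification concrete, here's the BOUND discipline reduced to a sketch. The helper function is illustrative only; the framework itself is prose, not code:

```python
import math

def bound(lower: float, upper: float, reasoning: str) -> float:
    """LAW-ESTIMATE's BOUND option: explicit limits, then the geometric mean."""
    span = math.log10(upper / lower)
    assert 1 <= span <= 3, "LAW-BOUNDS: bounds should span 1-3 orders of magnitude"
    estimate = math.sqrt(lower * upper)
    print(f"BOUND [{lower}, {upper}] -> {estimate:.2g}  ({reasoning})")
    return estimate

# LAW-BOUNDS' own example: a behavioural observation as a bound.
adhesion = bound(1, 10, "a gecko can walk on a ceiling but cannot run on one")
# -> BOUND [1, 10] -> 3.2  (adhesive force, in multiples of body weight)
```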

What good Fermi reasoning looks like

Before showing you how the model performs, let me show you what we’re aiming for. Here’s one of Weinstein’s worked examples — a problem that demonstrates nearly every technique the framework encodes.

The setup: Weinstein is at a Norfolk Tides baseball game when a foul ball showers his friends with beer. The question — what is the probability of a foul ball landing in a cup of beer during one game, and how many “splash downs” happen across a full MLB season? He starts by estimating that about forty foul balls per game land in the stands (more than one per inning, fewer than twenty — geometric mean of five, times nine innings). Then he decomposes the per-ball probability:*

Now we need to estimate the probability that a given foul ball will land directly in a cup of beer. (Note: only beer is sold in open-topped cups.) This means that we need to break the problem into even smaller pieces. Let’s assume that the cup of beer is sitting innocently in a cup holder. To hit a cup of beer, the foul ball needs to:

1. not be caught by a fan
2. land within the area of a seat
3. hit a seat whose owner has a cup of beer
4. land in the cup

Most fly balls are caught, but many are not. Let’s estimate that between one-quarter and one-half of fly balls are not caught. “Averaging” the two, we will use one-third.

Most of the stadium area is used for seating, so let’s ignore that factor.

At any given time, more than 1% and less than 100% of fans have a cup of beer in front of them. Using the geometric mean, we estimate that 10% of seats have beer cups.

A large beer cup is 4 inches (10 cm) across, so the baseball must land in an area defined by

Acup = πr² = 3(2 in)² = 10 in²

The area of the seat (from arm rest to arm rest and from row to row) is about 2 ft by 3 ft (60 cm by 90 cm), so

Aseat = (24 in) × (36 in) = 10³ in²

Thus, if the ball hits a seat that has a cup of beer, the probability that it lands in the cup is

Pcup = Acup/Aseat = 10 in²/10³ in² = 10⁻²

or 1%. The metric probability is the same.

(Extra credit question: Which is more likely, that the ball lands in the cup in the cup holder, splashing the beer, or that the fan is holding the cup of beer when the foul ball arrives and splashes it in his or her excitement?)

This means that the probability that any single foul ball lands in a cup of beer is

P = (1/3) × (1/10) × (10⁻²) = 3 × 10⁻⁴

With forty foul balls per game, this means that the probability of a foul landing in a cup of beer during any one game is 10⁻². This is not very likely. The probability that we will be directly below the splash is even less likely.

He then scales to the full MLB season — 30 teams, 160 games each, about 2,000 games total — and gets about 20 beer landings per season. His closing observation: because baseball analysts keep meticulous statistics, he’s very surprised they don’t appear to record beer landings.

Notice what happened. An impossible-sounding question became four independent sub-conditions, each bounded with reasoning anyone can follow. The geometric mean of 1% and 100% gives 10% — you don’t need to survey stadiums. The area ratio gives 1% — you don’t need probability theory. And the answer emerges from combining many small, defensible estimates. No single number needs to be precise. The decomposition does the work.
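The entire chain fits in a dozen lines of Python; every number below is Weinstein's, simply transcribed:

```python
from math import sqrt

# Foul balls per game landing in the stands:
per_inning = sqrt(1 * 20)        # more than 1, fewer than 20 -> ~5
foul_balls = per_inning * 9      # nine innings -> ~40

# Per-ball probability of landing in a cup of beer:
p_not_caught = 1 / 3             # between 1/4 and 1/2 of flies aren't caught
p_has_beer = sqrt(0.01 * 1.0)    # 1% to 100% of seats -> 10%
p_in_cup = 10 / 1e3              # cup area 10 in^2 over seat area 10^3 in^2

p_per_game = foul_balls * p_not_caught * p_has_beer * p_in_cup
splashes_per_season = 2_000 * p_per_game   # ~2,000 MLB games per season
print(f"P(splash per game) ≈ {p_per_game:.0e}; per season ≈ {splashes_per_season:.0f}")
# -> ≈ 1e-02 per game, roughly 20-30 splash-downs per season
```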

This is what we’re teaching the model to do.

The worked examples: teaching by demonstration

The framework includes 14 worked examples from Weinstein’s Guesstimation books. Each demonstrates specific reasoning patterns the laws encode. Here are two that show how different techniques work in practice.

Bounding by physical comparison

In Spider-Man 2, Spider-Man stops a runaway subway train by attaching his webs to nearby buildings and pulling for ten or twenty city blocks. Weinstein asks: how much force does he need? In the book, this follows a crash test dummy problem using the same work-energy approach — Weinstein builds technique through repetition with variation. Here’s how he tackles it:

In the movie Spider-Man 2, Spider-Man stops a runaway New York City six-car subway train by attaching his webs to nearby buildings and pulling really hard for 10 or 20 city blocks. How much force does he have to exert to stop the subway train? Give your answer in newtons and in tons (1 ton = 10⁴ N). How does this compare to the force that you can exert?

Since the work done by Spider-Man to stop the train is equal to the train’s initial kinetic energy, we need to estimate the mass and velocity of the train. We will then need to estimate the stopping distance in order to calculate the force exerted.

A subway car is about the same size and weight as a semi-trailer (18-wheeler) truck. This is between 10 and 40 tons. We’ll use 20 tons (or 2 × 10⁴ kg). There are six cars on a train so that the mass of the train is 6 × 2 × 10⁴ kg = 10⁵ kg.

They certainly go faster than 20 mph and slower than 100 mph. Since it is not that far between subway stops, subways travel at only about 40 mph (20 m/s).

Thus, the kinetic energy of a subway train is:

KE = ½mv² = 0.5 × 10⁵ kg × (20 m/s)² = 2 × 10⁷ J

There are 20 blocks per mile in Manhattan. Thus, 10 or 20 blocks is about 1 km or 10³ m. (It’s certainly more than 100 m and less than 10 km.)

Thus, Spider-Man needs to exert a force:

F = KE/d = (2 × 10⁷ J)/(10³ m) = 2 × 10⁴ N

F = (2 × 10⁴ N)/(10⁴ N/ton) = 2 tons

A force of 2 × 10⁴ N is the weight of 2000 kg or 2 tons. For a superhero who can lift cars, this is quite possible (although definitely not easy). A human could definitely not do it.

Wow! Hollywood got the physics correct, in a superhero movie no less! Hurray!

The key move: “a subway car is about the same size and weight as a semi-trailer truck.” He doesn’t look up the mass of a subway car. He bounds it by comparison to something familiar — between 10 and 40 tons — and uses 20 as a round number near the geometric mean. Every quantity gets the same treatment: velocity bounded by “certainly faster than 20 mph, slower than 100 mph,” stopping distance from “20 blocks per mile in Manhattan.” The entire solution is built from things you can verify from ordinary experience.
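The whole estimate is a single application of the work-energy theorem. Transcribed into Python with Weinstein's inputs:

```python
# Spider-Man's train stop, via work-energy (all inputs are Weinstein's):
mass = 6 * 2e4       # kg: six subway cars at ~20 tons each -> ~10^5 kg
v = 20               # m/s: faster than 20 mph, slower than 100 mph -> ~40 mph
d = 1e3              # m: 10-20 blocks at 20 Manhattan blocks per mile

kinetic_energy = 0.5 * mass * v**2   # ~2 x 10^7 J
force = kinetic_energy / d           # ~2 x 10^4 N
print(f"F ≈ {force:.0e} N ≈ {force / 1e4:.0f} tons")   # ~2 tons
```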

The obvious answer is wrong

This next example is my personal favorite. It was the first Weinstein problem I solved entirely on my own — and the moment I realized Fermi estimation was something I could actually do. Weinstein gives four decomposition hints nudging you toward the fibers, but the real lesson is that the “obvious” answer is spectacularly wrong:

What is the surface area of a typical bath towel (include the fibers!)? Compare this to the area of a room, a house, a football field.

That’s obvious, surely! A large rectangular towel 1 m by 2 m has a total surface area of 4 m² (including both sides), right? (In US units, a big towel may be as large as 3 ft by 6 ft.)

Wrong, actually, unless it is a very worn-out towel. New towels have many little fibers that can absorb a lot of moisture (recall the old puzzle—what gets wetter the more it dries?). Unless you’re a fan of the Hitchhiker’s Guide to the Galaxy, you won’t have brought your own towel, so nip off to the bathroom and examine one; quickly now, we’re dripping all over the floor.

You don’t need to actually go and count the number of fibers per square inch or per square centimeter; in the latter case there must be more than 10 and fewer than 1000, so we take the geometric mean of 10¹ and 10³, which is 10². In a square inch, being about 6 cm², we should expect about six times as many. This will of course vary, depending on where you buy your towels; we are assuming that we are describing one of those very nice towels found in one of those very nice hotels.

Back already? Right-oh. Now we need to estimate the surface area of each fiber. We can approximate the fiber as a cylinder or a box. Cylinders are complicated so we’ll use boxes. Each fiber is about 0.5 cm (1/4 in.) long and 1 mm (0.1 cm) wide. Each “boxy” fiber then has four flat surfaces, each 0.5 cm by 0.1 cm. Thus, the surface area of one fiber is:

Afiber = 4 × 0.5 cm × (1 m/10² cm) × 0.1 cm × (1 m/10² cm) = 2 × 10⁻⁵ m²

Atotal = towel area × fibers per area × area per fiber
= 4 m² × (10² fibers/cm²) × (10⁴ cm²/1 m²) × (2 × 10⁻⁵ m²/fiber)
= 80 m²

That is about 800 square feet: the size of a large apartment or a small house.

Weinstein draws an analogy to the coastline problem — just as the length of coast from New York to Boston is much more than the 200-mile driving distance, the surface area of a towel is much more than its simple area. The answer, 80 m², is twenty times larger than the “obvious” 4 m². The geometric mean does the heavy lifting: more than 10 fibers per cm², fewer than 1000, so about 100. No counting required.
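The arithmetic, transcribed:

```python
from math import sqrt

# Towel surface area, fibers included (Weinstein's bounds throughout):
towel_area = 4                    # m^2: a 1 m x 2 m towel, both sides
fibers_per_cm2 = sqrt(10 * 1000)  # more than 10, fewer than 1000 -> 100
fiber_area = 4 * 0.5e-2 * 0.1e-2  # m^2: four faces of a 0.5 cm x 0.1 cm "box"

total = towel_area * (fibers_per_cm2 * 1e4) * fiber_area   # 1e4 cm^2 per m^2
print(f"{total:.0f} m^2, versus the 'obvious' {towel_area} m^2")   # 80 vs 4
```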

LAW10 exists because of these examples. When I was learning Fermi estimation myself, I noticed something: I got measurably better after working through more of Weinstein’s examples. His reasoning patterns — the way he finds comparable objects, the way he uses “certainly more than X, certainly less than Y” — became internalized heuristics. I wanted the model to have the same experience.

Does it work? Partially. The model absorbs the decomposition patterns perfectly. Every test problem was broken down cleanly. But the bounding technique — the heart of Weinstein’s method — transfers inconsistently. Sometimes the model produces beautiful, physically-grounded bounds. Sometimes it just asserts a number.

Which brings us to the evidence.

What the framework actually changes: side by side

The quantitative lift is modest — 0.6 points on our 8-problem test. But the qualitative difference is real. Here’s what it looks like in practice.

Gecko adhesive force

Without framework (Project C — basic Fermi prompting):

Gecko mass: 50-100 g → use 75 g

That’s it. Two numbers, no reasoning. Why 50? Why 100? We have no idea.

With framework (Project A):

Gecko mass (BOUND):
Lower bound: more than 10 g (smaller than a mouse)
Upper bound: less than 500 g (smaller than a rat)
Geometric mean: √(10 × 500) ≈ 70 g

Now we can see the thinking. We know the bounds are anchored to familiar animals. We can ask: is a gecko really smaller than a rat? (Yes, most species.) Is it heavier than 10 grams? (Some species barely are.) The answer is similar — 70 g versus 75 g — but the reasoning is transparent and auditable.

Shark attack deaths

Without framework:

Annual shark attacks in US: ~30-50 → use 40
Fatal shark attacks: ~1-2 per year in US → use 1.5

Asserted. Where do these numbers come from? Training data, presumably. But we can’t verify the reasoning because there isn’t any.

With framework:

Annual shark deaths (BOUND):
Shark deaths are newsworthy when they occur, suggesting they’re rare:
Lower bound: more than 0.1 per year (would average 1 per decade)
Upper bound: less than 10 per year (would be weekly news)
Geometric mean: √(0.1 × 10) = 1 death per year in the US

Same order of magnitude. But Project A uses experiential reasoning — things you can verify from your own experience of watching the news. You don’t need to be a marine biologist to evaluate “if there were 10 shark deaths a year, that would be weekly news.” That’s the framework’s value: it forces the model to show reasoning you can engage with.

The drag coefficient — the signature example

This one appeared in Part 1, but it’s worth showing in full context.

Without framework:

Drag coefficient: Cd ≈ 1.0 (upright running human)
Bounds: [0.7, 1.3]

The model states a value, then wraps bounds around it that are barely wider than rounding error. The “bounds” are cosmetic — they don’t come from independent physical reasoning.

With framework:

Lower bound: more than 0.5 (very streamlined)
Upper bound: less than 2 (very blunt)
Geometric mean ≈ 1

Wider bounds, but anchored to the physical extremes of the drag coefficient spectrum. You can evaluate this: is a running human more streamlined than a flat plate? Yes. More blunt than a teardrop? Yes. The bounds make physical sense.

The honest assessment

The gap is modest, not dramatic. Project A shows the best reasoning. Project C is weakest. But Project C still gets reasonable answers — modern Sonnet is good at Fermi estimation even without the framework.

The framework’s value isn’t making the model smarter. It’s making the model’s thinking visible. And once thinking is visible, you can critique it, improve it, and learn from it. That’s the difference between a calculator and a collaborator.

LAW-INTERPRET: the fix that worked

The Flash problem broke something. Both our framework-trained models (Projects A and B) received this problem:

“The Flash can run at (or near) the speed of light. What is the minimum force needed to reach a speed of 10% the speed of sound in 10 ms?”

Both interpreted “10% the speed of sound” literally: 34 m/s. Neither noticed the tension between “near the speed of light” in the setup and a target velocity of 34 m/s — roughly highway speed. The problem as worded by Weinstein is genuinely ambiguous, and the models barreled through without noticing.

This bothered me. The framework told them to ask when uncertain, but there was nothing specifically about checking whether the problem statement itself makes sense. So I added LAW-INTERPRET:

Before solving, state your interpretation of the problem. If the problem is ambiguous or could be read multiple ways, ASK for clarification before proceeding.

With specific triggers: vague terms, multiple reasonable interpretations, missing context, physically implausible scenarios.

To test it, I gave the model an absurd problem: “A Formula 1 car can reach speeds over 200 mph. What is the drag force on the car when traveling at 5% of walking speed?”

The response was immediate and correct:

I need to clarify this problem before proceeding. The problem mentions that “A Formula 1 car can reach speeds over 200 mph” but then asks about drag force at “5% of walking speed.”

This creates an ambiguity:

Interpretation 1 (literal reading): Calculate drag force at 5% of walking speed — 0.075 m/s (7.5 cm/s). This is incredibly slow — slower than a crawl.

Interpretation 2 (possible typo): The problem might have meant 5% of the speed of sound, 5% of 200 mph, or 50% of walking speed.

Could you clarify?

The model recognized the absurdity, stated the literal interpretation, offered alternatives, and asked before proceeding. After clarification (“use 50% of top speed”), it solved the corrected problem cleanly, including well-bounded drag coefficient estimates.

LAW-INTERPRET works reliably. It’s also the simplest of our additions — just a rule saying “check the problem statement before diving in.” Sometimes the most effective interventions are the most obvious ones.

LAW-FLAG: the fix that didn’t

The bigger problem — the one I flagged in Part 1 as “it doesn’t know that it doesn’t know” — proved much harder.

The model never spontaneously asks for help when it encounters a quantity it can’t justify. During the initial eleven-problem diagnostic, this happened zero times. The model would confidently assert specialized physics values as if they were common knowledge.

We tried to fix this with what I called LAW-FLAG: requiring every numerical estimate to be classified as KNOWN (common knowledge), BOUNDED (derived from explicit reasoning), or GUESSED (can’t justify → stop and ask). The idea was that the classification step would force self-assessment — the model would have to ask itself “do I really know this, or am I pulling it from training data?”

To show you what’s at stake, here’s what expert Fermi reasoning looks like on a hard problem — one that chains together quantities from multiple specialized domains.

Journey to the Center of the Galaxy

Weinstein asks: how much radiation damage would we receive traveling to the center of the galaxy? He first establishes the distance (~3 × 10⁴ light-years = 3 × 10²⁰ meters), the density of interstellar space (~1 hydrogen atom per cm³), and the number of atoms our spacecraft would encounter (3 × 10²² per cm² of cross-section). Then:*

In order to travel this distance within a subjective human lifetime, our spacecraft must achieve a speed very close to the speed of light. Even traveling at the speed of light, if there is no relativistic time dilation, our trip will take 3 × 10⁴ years, which is slightly longer than our expected lifetime. Fortunately, at speeds close to the speed of light, subjective time passes more slowly than it does for an observer watching from the center of the galaxy. We need time to pass about 10³ times more slowly. In technical terms, this means we need a relativistic gamma factor of about γ = 10³.

This means that, from our point of view (i.e., in our reference frame) each interstellar atom will hit us traveling at almost the speed of light with the same gamma factor and will thus have a total energy of

E = γmc² = 10³ × (1 GeV) = 1 TeV.

At these energies, when the hydrogen atom strikes the spacecraft, it will lose its electron very quickly. The bare proton will then pass easily through the spacecraft and our bodies, depositing energy in our bodies at a rate of 2 MeV per centimeter.*

Now we can consider the damage done by all these protons. Consider a volume of 1 cm³ within our bodies. At the density of water it has a mass of 1 g. It will be hit by 3 × 10²² protons, with each proton depositing 2 MeV of energy. This means that the total energy deposited per gram will be

Edep = (3 × 10²² protons/cm²)(2 × 10⁶ eV/cm)
× (1 cm³/g)
= (6 × 10²⁸ eV/g)(1.6 × 10⁻¹⁹ J/eV)
= 10¹⁰ J/g = 10¹³ J/kg.

This seems like a lot, but let’s compare it with a few things before jumping to conclusions.

Unfortunately, we would not even survive that long. Radiation dose is measured in grays, where 1 Gy corresponds to an energy deposition of 1 J/kg. A lethal dose of radiation is about 10 Gy or about 10 J/kg. We will accumulate that dose in 10⁻¹² of our journey, or in the first millisecond.

As it only takes 2 × 10⁶ J/kg to boil water, we would be vaporized more than 10⁶ times over. In a thirty-year journey lasting 10⁹ s, we would be turned to steam within the first 10³ s, or 20 minutes.

Weinstein notes that these relativistic protons could pass through about a kilometer of metal — mass shielding can’t save you. His conclusion: who knew vacuum could be so dangerous?
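The arithmetic itself is mechanical once you grant the inputs. Transcribed (tissue taken at the density of water, 1 g/cm³, as in the excerpt):

```python
# Radiation dose on the trip to the galactic center (excerpt's numbers):
protons_per_cm2 = 3e22    # interstellar atoms swept up per cm^2 of cross-section
dE_dx = 2e6               # eV deposited per cm of tissue by each ~1 TeV proton
eV_to_J = 1.6e-19

dose = protons_per_cm2 * dE_dx * eV_to_J * 1e3   # J/kg (= Gy), at 1 g/cm^3
lethal_fraction = 10 / dose                      # lethal dose is ~10 Gy
trip_s = 1e9                                     # ~30 subjective years

print(f"Total dose ≈ {dose:.0e} Gy; "
      f"lethal after ≈ {lethal_fraction * trip_s * 1e3:.0f} ms of travel")
# -> ~1e13 Gy in total; a lethal dose within the first millisecond
```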

Every quantity in that chain is specialized knowledge. A non-physicist can’t justify “proton energy deposition is 2 MeV per centimeter” or “interstellar density is 1 atom per cm³.” But a human expert knows what they know and where it comes from. That self-awareness — knowing which numbers are common knowledge and which require domain expertise — is precisely what the model can’t do. Now watch what happens when the model faces similar territory.

Test 1: Neutron star — FAIL

Problem: “How closely could we safely orbit a neutron star, considering only gravitational effects?”

The model’s classification:

Neutron star mass and radius:
KNOWN: Typical neutron star mass M ≈ 1.4 M☉ ≈ 3×10³⁰ kg
KNOWN: Typical neutron star radius Rns ≈ 10 km = 10⁴ m

No hesitation. No uncertainty. “KNOWN.” But could you justify “neutron star mass is 1.4 solar masses” to someone who isn’t an astrophysicist? I couldn’t. The model could — because it had seen it thousands of times in training data. That’s not the same as knowing it.

What’s telling is the contrast: in the same solution, the model bounded the safety threshold beautifully, using fighter pilot g-forces as physical anchors. It can bound when it recognizes uncertainty. It just doesn’t recognize uncertainty about quantities that feel “textbook.”

Tightening the language

After this failure, I rewrote LAW-FLAG with stronger language:

KNOWN: Quantities you could justify to a skeptical non-specialist without domain expertise. If a skeptic could reasonably ask “how do you know that?” and you’d need specialized knowledge to answer, it is NOT KNOWN.

The “skeptical non-specialist” framing was the key addition — not “do you know it?” but “could you defend it to someone who doesn’t?”

Test 2: Yellowstone supervolcano — PASS

Problem: “If the Yellowstone supervolcano exploded and the mass of the ejecta fell back down in a ring around the caldera, how fast would the Earth be spinning afterward?”

This time:

However, I need clarification on two critical parameters:

1. Mass of ejecta: How much material are we talking about? Yellowstone eruptions vary enormously. Without specialized volcanology knowledge, I cannot justify a specific mass to a non-specialist.

2. Radius of the ring: How far from the caldera does this ejecta land?

Could you specify: Total mass (or volume) of ejecta to redistribute? Typical radius where the ejecta falls?

The model explicitly applied the “skeptical non-specialist” test and correctly concluded it couldn’t pass. It stopped and asked. Exactly what we wanted.

Test 3: Sun supernova — FAIL

Problem: “If the Sun went supernova, what would kill us first?”

Same tightened LAW-FLAG language. But the model plowed ahead:

Supernova total energy output: Etotal ≈ 3×10⁴⁶ J — KNOWN from supernova theory
Neutrino interaction cross-section: σν ≈ 10⁻⁴³ m² per nucleon — KNOWN from particle physics
Lethal radiation dose: ~10 Gy kills a human — KNOWN from radiation biology

“KNOWN from supernova theory.” “KNOWN from particle physics.” The model appended the name of a specialized field and treated that as justification. A non-specialist cannot defend “supernova neutrino energy is 10⁴⁶ joules” to a skeptic. The tightened language was supposed to prevent exactly this.

The pattern

Three tests, one success. Why did Yellowstone work but the astrophysics problems didn’t?

My hypothesis: the model distinguishes between “I’ve seen this number frequently in training data” and “this is clearly domain-specific.” Volcanic ejecta mass feels obscure — it’s not the kind of thing that appears in introductory textbooks. Neutron star mass (1.4 solar masses) and supernova energy (10⁴⁶ joules) feel canonical — they appear constantly in physics education materials. The model equates “I’ve encountered this many times” with “this is common knowledge.” But frequency in training data is not the same as common knowledge.

The “skeptical non-specialist” test should catch this — a non-specialist has never encountered 10⁴⁶ joules regardless of how often it appears in physics texts. But the model doesn’t genuinely simulate a non-specialist’s perspective. It simulates its own experience of the number, which is one of familiarity.

Compare this to how Weinstein handles similar territory in the galaxy problem. He uses quantities like “proton energy deposition: 2 MeV per centimeter” — but he’s a physicist, he knows it’s specialized, and he walks the reader through the reasoning. The model uses “neutrino cross-section: 10⁻⁴³ m²” with no more self-awareness than it uses “speed of light: 3×10⁸ m/s.” It treats both as equally obvious. That’s the metacognition gap.

Prompt engineering cannot fully close this gap. We pushed the language as far as we could, and it works sometimes, on some domains. That’s not enough.

The reflection agent hiding in plain sight

Here’s what I eventually realized.

Throughout this entire project — every test, every scoring session, every iteration of the framework — there was already a second agent catching these failures. When the model classified “neutron star mass” as KNOWN, someone pushed back: “Could you really justify that to a skeptic?” When bounds were cosmetic rather than physical, someone asked: “Where did that number actually come from?” When the model should have asked for help but didn’t, someone noticed.

That someone was me. And increasingly, it was Claude helping me do it — analyzing outputs, identifying patterns, spotting unjustified assertions across multiple solutions.

I was the reflection agent before I knew the term.

This isn’t just a cute observation. It’s a design insight. The estimation task and the reflection task require different things. Estimation requires confidence — proceeding with imperfect information, committing to bounds, generating answers. Reflection requires skepticism — questioning classifications, challenging justifications, catching overconfidence. Asking one model to do both simultaneously is asking it to be confident and skeptical about its own work at the same time. No wonder LAW-FLAG only works intermittently.

What if we separated the tasks?

What’s next

In Part 3, I’ll describe what happened when I tried to build that separation: one agent that estimates, another that reflects. The estimator-reflector architecture — automating the role I’d been playing all along.

The framework you’ve seen in this post gives the estimator its methodology. The three LAW-FLAG tests you’ve seen give the reflector its target: catch the assertions the estimator can’t catch itself. The question is whether a second model can do what prompt engineering couldn’t — reliably distinguish between “I know this” and “I’ve merely seen this.” From what I’ve seen so far — both doing it myself and watching a different model catch the same failures independently — I’m optimistic. The open question isn’t whether reflection works, but how much of it we can automate.
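To preview the shape of that separation (and only the shape), here's a sketch. The call_model() function is a hypothetical stand-in for any LLM API, and the prompts are crude abbreviations, not anything tested:

```python
def call_model(role: str, content: str) -> str:
    """Hypothetical stand-in for an LLM API call."""
    raise NotImplementedError  # plug in your client of choice

def solve(problem: str, max_rounds: int = 3) -> str:
    # The estimator proceeds confidently, classifying as it goes.
    solution = call_model(
        "Estimator: solve using the Laws; classify every quantity "
        "as KNOWN, BOUNDED, or ASK.", problem)
    for _ in range(max_rounds):
        # The reflector is skeptical by construction.
        critique = call_model(
            "Reflector: you are a skeptical non-specialist. Flag every "
            "KNOWN that actually requires domain expertise.", solution)
        if "NO ISSUES" in critique:
            break   # the reflector is satisfied
        solution = call_model(
            "Estimator: revise, downgrading flagged KNOWNs to BOUNDED "
            "or ASK.", f"{solution}\n\nCritique:\n{critique}")
    return solution
```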

Stay tuned.

The Framework

The complete Laws of Fermi Problem Solving (v4) are available as a GitHub gist. It includes the 17 laws, reference tables, and four worked examples from Weinstein’s books (reproduced with permission).


AI/HI (Human Intelligence) Transparency Statement

Modified from Brewin

Has any text been generated using HI? Yes
Has any text been generated using AI? Yes
Has any text been improved or corrected using HI? Yes
Have any methods of analysis been suggested using HI? Yes
Have any methods of analysis been suggested using AI? Yes
Do any analyses utilize AI technologies, such as Large Language Models, for tasks like analyzing, summarizing, or retrieving information from data? Yes

Additional context:

The Fermi estimation framework described in this post — the Laws, their design rationale, and the iterative refinement over three years — is entirely my work, developed through hands-on problem-solving with Weinstein’s books and iterative testing with GPT-3.0 and later Claude. The testing methodology — designing the controlled comparison (Projects A, B, C), crafting and iteratively refining the project prompts, selecting test problems, and analyzing results across conditions — was developed collaboratively with Claude. Scoring and editorial decisions are mine.

This post was drafted collaboratively with Claude (Anthropic). I provided the source materials (framework documents, test transcripts, scoring records, session notes), structure, and editorial judgment throughout. Claude drafted prose from these inputs, verified Weinstein excerpts against source PDFs for copyright compliance, and helped with HTML formatting. I reviewed, revised, and made all final decisions about what went to publication.

The Weinstein excerpts are verbatim from the published books, used under license from Princeton University Press via the Copyright Clearance Center.


Notes

* From Lawrence Weinstein, Guesstimation 2.0: Solving Today’s Problems on the Back of a Napkin. © 2012 Princeton University Press. Reprinted by permission.

Used with permission of Princeton University Press, from Guesstimation: Solving the World’s Problems on the Back of a Cocktail Napkin, Weinstein, Lawrence; Adam, John A., © 2008; permission conveyed through Copyright Clearance Center, Inc.

References

Weinstein, L. (2012). Guesstimation 2.0: Solving Today’s Problems on the Back of a Napkin. Princeton University Press.

Weinstein, L., & Adam, J. A. (2008). Guesstimation: Solving the World’s Problems on the Back of a Cocktail Napkin. Princeton University Press.

Epstein, E. L., Winnicki, J., Sornwanee, T., & Dwaraknath, R. (2025). LLMs are Overconfident: Evaluating Confidence Interval Calibration with FermiEval. arXiv preprint arXiv:2510.26995.

Kalyan, A., Kumar, A., Chandrasekaran, A., Sabharwal, A., & Clark, P. (2021). How Much Coffee Was Consumed During EMNLP 2019? Fermi Problems: A New Reasoning Challenge for AI. arXiv preprint arXiv:2110.14207.

Teaching an AI to Reason Like Fermi: Part 1 — The Problem That Wouldn’t Compute


How many piano tuners are in Chicago?

That’s one of the famous questions Enrico Fermi posed to his physics students. Not because he cared about piano tuners, but because he wanted to teach them how to reason through problems with limited information. You don’t need to know the exact answer – you need to know how to think about it.

Let’s break it down: Chicago has maybe 3 million people. Perhaps 1 person in 20 has a piano at home – certainly not as common as TVs (nearly everyone), but more common than harps (quite rare). That’s roughly 150,000 pianos. Each piano needs tuning once a year. A tuner can probably do 4 pianos a day – more than 1 (they’d barely make a living), fewer than 10 (each tuning takes a couple hours) – so maybe 1,000 per year. That means you need about 150 tuners.
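In code, the whole chain is five lines:

```python
# The piano tuner chain, one line per assumption:
people = 3e6                 # Chicago
pianos = people / 20         # ~1 piano per 20 people -> 150,000
tunings_per_year = pianos    # each piano tuned about once a year
per_tuner = 4 * 250          # 4 a day over ~250 working days -> ~1,000/year
print(f"Tuners needed ≈ {tunings_per_year / per_tuner:.0f}")   # ~150
```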

You got a reasonable answer using nothing but logical decomposition and bounded guesses. NASA even uses this problem to teach estimation.

This is Fermi estimation. And I tried to teach an AI to do it.

Why this matters

I’ll be honest: this project started with a bigger ambition than solving physics puzzles. I wanted to know if AI could become a partner in thinking critically about numbers – not just as a calculator, but as a collaborator in quantitative reasoning. One inspiration was Carl Bergstrom and Jevin West’s book Calling Bullshit. They describe what they call the “new bullshit”: using data, statistics, and visualizations to construct the appearance of scientific rigor – giving false authority to misleading claims. It’s sophisticated deception dressed up as analysis.

The book teaches a skill: detecting and dismantling this kind of numerical misdirection. Fermi estimation is a different but complementary skill – it exercises similar muscles: decomposing claims, questioning assumptions, sanity-checking results. The dream was an AI partner for this kind of thinking. Not an oracle that gives you answers, but a collaborator that helps you interrogate whether answers are reasonable.

It was Bergstrom and West’s YouTube lectures that first introduced me to Weinstein’s Guesstimation books – which became the methodological backbone of this project.

Full disclosure: my grandmother worked alongside Enrico Fermi at the Enciclopedia Italiana Treccani. Some things run in families.

The frustration

Let me take you back to 2023. I was excited – maybe naively so – about using LLMs for Fermi estimation. These models had access to vast amounts of physics knowledge, and they were supposedly good at reasoning.

But here’s the thing: they were terrible at arithmetic. Not just GPT-3.0 – the problem persisted through 3.5 and even 4.0. I documented one session where GPT-4 attempted to multiply 836 × 1923 eight times. It got the correct answer exactly once – then regressed to different wrong answers on subsequent attempts. It could describe the long multiplication algorithm perfectly while executing it incorrectly. Each attempt produced a different error. There was no stable learning, just occasional lucky generations.

The arithmetic failures made it impossible to even evaluate whether the models could do Fermi estimation. Was the model struggling with decomposition? With bounding? With proceeding under uncertainty? I couldn’t tell – every attempt would derail on basic multiplication or unit conversion long before I could assess the reasoning quality.

The model would freeze when faced with uncertainty. It would apologize profusely for not having exact values. When it did proceed, it would present numbers with no reasoning, no bounds, no indication that this was a guess rather than a fact. Where did that number come from? Is it reasonable? You have no way to know. But I couldn’t systematically address any of this while arithmetic kept breaking.

I started drafting what would become the framework – ideas about decomposition, bounding, proceeding with uncertainty. But I couldn’t finalize anything. The arithmetic problem had to be solved first.

Note: Modern models with carefully designed custom instructions handle arithmetic more reliably. I’ve verified this myself: I tested ChatGPT 4.5 with instructions adapted from Jeremy Howard’s recommendations in A Hackers’ Guide to Language Models, and got the same results on the tricky questions he works through. But the judgment problem remains unsolved.

The Wolfram plugin: the turning point

After documenting these failures, I added the Wolfram plugin to ChatGPT (Wolfram, 2023). That’s when things changed enough for me to really push on the Fermi idea.

Wolfram fixed the arithmetic: unit consistency improved, scientific notation became reliable, basic math errors disappeared. The model could finally compute accurately and consistently.

And that’s when I could finally see what remained.

The model could compute, but it couldn’t estimate. It still wouldn’t commit to bounded guesses. It still hedged or refused when faced with uncertainty. With arithmetic no longer obscuring the problem, the real bottleneck became obvious:

The model could calculate, but it couldn’t reason like Fermi.

This led to a crucial realization. There were two different problems requiring two different solutions:
1. Arithmetic reliability → Wolfram plugin solved this
2. Estimation judgment → Required something else entirely

The bottleneck wasn’t calculation. It was the model’s inability to proceed confidently with explicit uncertainty – to say “I don’t know this exactly, but I can bound it” and then move forward.

Only then could I finalize the framework.

Drafting the Laws: permission to estimate

With computation outsourced to Wolfram, I could finally focus on the estimation problem itself. I drafted what I called “the Laws” – giving the model permission and methodology to reason under uncertainty.

The framework I drafted wasn’t pulled from thin air. The methodology itself comes from Enrico Fermi – this is how he trained his physics students to reason and experiment in their heads. Lawrence Weinstein took that foundational approach and built something substantial: a rich, modern pedagogical method with brilliant and often humorous worked examples in his Guesstimation books (Weinstein & Adam, 2008; Weinstein, 2012). My Laws were an attempt to translate Weinstein’s approach into something an AI could follow – to adapt it from human pedagogy to human-AI partnership.

The twelve Laws I drafted addressed both the arithmetic mechanics and the estimation judgment:

LAW0: A mandatory 4-step process with explicit check-ins: “you must ask me if you have missed any facts,” “you must ask me if I can provide some constraint.” The collaborative DNA was there from the start – I knew the model needed human input. It just wouldn’t ask for it, even when told.

LAW1-LAW9: Scientific notation rules, arithmetic procedures, and unit handling – establishing a procedure the model could reliably follow, making the reasoning transparent and auditable. With Wolfram handling the computation, these laws ensured consistency and transparency in how calculations were performed and reported. LAW8-LAW9 specifically addressed unit conversions and ensuring final answers were always in SI units.

LAW10: “Read all examples before solving.” I was observing my own learning process – how I improved as I worked through more of Weinstein’s examples, how I absorbed his thinking patterns, how my decomposition and bounding strategies refined with practice. I replicated that learning process for the AI by providing worked examples.

LAW11: “Don’t reprint my prompt” – fighting the model’s tendency to waste tokens repeating instructions back to me.

The framework combined reliable computation (Wolfram) with structured methodology (the Laws). It helped – but the results were inconsistent. The model would follow some Laws, ignore others, and systematic improvement remained elusive.

The real breakthrough would have to wait for better models and a rebuilt framework.

Building something better

Fast forward to late 2025. Better models, more experience, and lessons learned from that 2023 framework. I rebuilt it from scratch – not to compensate for failure, but to enable even greater success.

The new framework has 17 Laws: 12 mechanical (arithmetic, units, notation) and 5 focused specifically on estimation judgment. It includes 14 worked examples demonstrating complete reasoning patterns. The core shift was from defensive rules to empowering principles (“bound uncertain quantities with explicit reasoning,” “classify every estimate as KNOWN, BOUNDED, or ASK”).

The 2023 framework worked – it just wasn’t refined enough. The late 2025 version is more systematic, more explicit about judgment, and designed from the ground up for modern models that can reliably compute but still struggle with metacognition.

I tested it with Claude Sonnet 4.5. Here’s a before/after on that drag coefficient:

Before framework:
Model writes: “The drag coefficient for an F1 car is 0.9”

Where did 0.9 come from? Is that reasonable? We have no way to know.

After framework:
Model writes: “Lower bound: more than 0.5 (very streamlined). Upper bound: less than 2 (very blunt). Geometric mean ~ 1.”

Now we see the thinking. We know what the bounds are. Even if you don’t know fluid dynamics, you can think: “is an F1 car closer to streamlined or blunt?” You can disagree intelligently.

The framework doesn’t make the model know more. It makes the model show its uncertainty instead of hiding it behind assertions that sound confident but can’t be audited.
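Bounded inputs also propagate: drop them into the drag equation and you get a bounded answer whose uncertainty is visible. The speed and frontal area here are my own assumptions, for illustration only:

```python
from math import sqrt

rho = 1.2     # kg/m^3, air at sea level
area = 1.5    # m^2, assumed F1 frontal area (illustration only)
v = 45        # m/s, ~100 mph (assumed speed, illustration only)

for cd in (0.5, sqrt(0.5 * 2), 2.0):   # lower bound, geometric mean, upper bound
    drag = 0.5 * rho * cd * area * v**2   # standard drag equation
    print(f"Cd = {cd:.1f}: drag ≈ {drag:,.0f} N")
# The answer spans only a factor of 4 across the full Cd range, and you
# can see exactly which assumption to attack if you disagree.
```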

The results: does the framework actually help?

To find out, we ran a controlled comparison. Same eight problems, same model (Sonnet 4.5), same system-wide preferences. The only difference: one condition got the full framework (17 Laws, 14 worked examples, explicit KNOWN/BOUND/ASK classification requirements), while the other got only basic Fermi methodology – “decompose problems, bound uncertain quantities with upper and lower limits, take geometric means, show your work.”

The results:

  • Minimal prompting: 7.9/10
  • Full framework: 8.5/10

That’s a modest lift – 0.6 points – and with only eight problems and a single evaluator, it’s a directional finding, not a statistically significant one. The model can do reasonable Fermi estimation with just basic methodology concepts. Modern LLMs have improved dramatically since 2023.

But the numbers don’t capture everything. Reading the solutions side by side, the framework-trained answers show their reasoning differently. Instead of “drag coefficient ~ 1,” you get explicit bounds with justification. The answers are more auditable, more teachable, more useful for collaboration.

We also tested whether providing reference tables (order-of-magnitude values for common quantities) would help. Score: identical. But the model cited its sources explicitly – “air density: 1.2 kg/m³ (from reference tables)” – instead of just labeling values as “KNOWN.” Same accuracy, better transparency.

I also ran an earlier exploratory test on eleven different problems from Weinstein’s books. That diagnostic helped shape the framework, revealing where the model needed more guidance. The patterns were consistent: decomposition excellent, bounding inconsistent, asking for help essentially never.

The framework’s value isn’t dramatic capability lift. It’s structure that enables auditing, teaching, and targeted improvement.

What works excellently:

  • Decomposition: Perfect. Every single problem was broken down cleanly.
  • Forward progress: The model proceeds confidently with bounded estimates.
  • Numerical accuracy: Usually within an order of magnitude of Weinstein’s solutions.
  • Comparisons: Good contextualization of answers.

What works when tested explicitly:

  • Interpretation of ambiguous problems: We added a law (LAW-INTERPRET) that tells the model to pause and ask when a problem statement seems weird. We tested it with an absurd prompt – an F1 car at “5% of walking speed.” The model correctly recognized the absurdity, stated its interpretation, and asked for clarification. This works.

What’s inconsistent:

  • Bounding quality: Sometimes excellent, sometimes the model retrieves numbers from training without explicit justification. The model doesn’t know when it’s doing bounding well versus when it’s pulling data without reasoning. The numbers are often correct, but the reasoning isn’t transparent – you can’t audit the thinking or learn from the process.

What almost never triggers:

  • Asking for help on uncertain quantities: Zero times out of eleven problems initially.

As I noted in my evaluation:

“Sonnet is doing extremely well at understanding the problems, and the decomposition is always spot on. The answers are very reasonable, often close to Weinstein’s. However, its bounding is weak. Sometimes it gets it excellently, but more often than not, it gets maybe one bound but completely misses another. And it does not know it does, so it just pulls data out of nowhere, rather than asking.”

That last part is crucial: it doesn’t know that it doesn’t know.

The attempted fix

We tried to solve this. We added a law (LAW-FLAG) requiring the model to classify every estimate:

  • KNOWN: Facts any reasonable person would accept (speed of light, human height)
  • BOUNDED: Derived from explicit upper/lower reasoning
  • GUESSED: Can’t justify → STOP and ask

Our first version failed. On a neutron star problem, the model classified “neutron star mass ~ 1.4 solar masses” as KNOWN. Technically it’s in the training data, but could you justify that to a skeptic who isn’t an astrophysicist?

We tightened the language: “KNOWN applies only to quantities you could justify to a skeptical non-specialist.”

This worked… once. On a Yellowstone supervolcano problem, the model stopped and asked for the mass of the ejecta. Progress! But on an astrophysics problem (what would kill us first if the Sun went supernova), it plowed ahead with “KNOWN from supernova theory” and “KNOWN from particle physics.”

Three tests, one success. The fix isn’t reliable.

The evolution summarized

From GPT-3.0 (2023) to Sonnet 4.5 + Framework (late 2025), capability by capability:

  • Decomposition: Inconsistent → Excellent
  • Arithmetic: Poor (plugin helped) → Solid
  • Bounding: Weak → Sometimes excellent, can’t self-assess
  • Proceeds confidently: Often refused → Yes
  • Asks when problem is ambiguous: Never → Yes (LAW-INTERPRET works)
  • Asks when stuck on values: Never → Rarely (1/3 tests)
  • Numerical accuracy: Poor → Good
  • Comparisons: Rare → Good

The gap has narrowed significantly. What remains is the hardest part: metacognition about its own uncertainty. And I’ll be honest: bounding is at least as important as decomposition in Fermi’s approach. Without reliable bounding and self-awareness, the model gets good answers but can’t teach you how it got them or give you confidence about why they’re reasonable.

Where this fits in the research

More recently, I learned that researchers at Allen AI found fine-tuned LLMs were off by two orders of magnitude on average when solving Fermi problems (Kalyan et al., 2021). Our framework shows similar patterns: decomposition is now essentially solved (perfect scores), and numerical accuracy is much improved (most answers within 10x).

But the deeper problem remains unsolved. The model doesn’t know what it doesn’t know. It produces bounds when it has familiar anchors (everyday quantities, mathematical relationships), but pulls numbers from nowhere when facing exotic physics – and doesn’t recognize the difference. It never asks for help, even when it should. Recent work on LLM calibration confirms this: models are systematically overconfident, with nominal 99% confidence intervals covering truth only 65% of the time (Epstein et al., 2025).

We’ve made progress on the methodology of Fermi estimation. The metacognition gap – knowing when you’re guessing – remains open.

Is the framework worth it?

That depends on what you want. If you just need a rough estimate, modern Sonnet can produce decent answers with minimal prompting (7.9/10). But if you want Fermi-style reasoning – methodical, transparent, bounded estimation – then yes, the structure matters.

Yet the framework alone isn’t the full solution. Throughout this project, reflection was the critical ingredient: challenging unjustified assertions, asking “could you defend that bound to a skeptic?”, and catching when the model should have asked for help but didn’t.

The framework makes reasoning visible. Reflection makes it rigorous.

And that reflection layer – currently human, potentially multi-agent in the future – is what Part 3 will explore.

What’s coming next

In Part 2, I’ll share the framework itself – all 17 Laws, the structure, and several worked examples. I’ll show you what works reliably and what we tried to fix.

But I’ll also share what I eventually realized. Throughout this whole project, there was already a “reflection agent” catching assertions without reasoning and challenging unjustified claims. Asking “could you really justify that to a skeptic?” Flagging when the model should have asked for help.

It was me. The human in the loop, with enlisted help from Claude.

In Part 3, we’ll explore what that means. If the model can’t reliably judge its own uncertainty, maybe the solution isn’t better prompting – it’s separating the tasks. One agent estimates. Another agent reflects. That’s the architecture I was already using before I realized it was a “thing”.

Agent vs. Agentic reasoning

A confession

While drafting this post, I asked Sonnet (different instance) to help write an opening example – a bullet fired from Los Angeles, could it reach you in San Diego? Sonnet wrote “the bullet stops in about 2 kilometers” without showing any derivation.

When I pushed back, it admitted: “I just did what we’re critiquing – presented a number without showing the work.”

Then we tried to verify it together. Constant deceleration gives roughly 600 meters – I verified this with ballistic calculators. Where did that 2 km come from? We still don’t know.

That’s why I replaced the bullet problem with the piano tuner question – something we could verify. And that’s why this work matters.

Stay tuned.


References

Bergstrom, C. T., & West, J. D. (2020). Calling Bullshit: The Art of Skepticism in a Data-Driven World. Random House.

Epstein, E. L., Winnicki, J., Sornwanee, T., & Dwaraknath, R. (2025). LLMs are Overconfident: Evaluating Confidence Interval Calibration with FermiEval. arXiv preprint arXiv:2510.26995.

Kalyan, A., Kumar, A., Chandrasekaran, A., Sabharwal, A., & Clark, P. (2021). How Much Coffee Was Consumed During EMNLP 2019? Fermi Problems: A New Reasoning Challenge for AI. arXiv preprint arXiv:2110.14207.

Weinstein, L. (2012). Guesstimation 2.0: Solving Today’s Problems on the Back of a Napkin. Princeton University Press.

Weinstein, L., & Adam, J. A. (2008). Guesstimation: Solving the World’s Problems on the Back of a Cocktail Napkin. Princeton University Press.

Wolfram, S. (2023). ChatGPT Gets Its “Wolfram Superpowers”! Stephen Wolfram Writings. Retrieved from https://writings.stephenwolfram.com/2023/03/chatgpt-gets-its-wolfram-superpowers/

Picobot Revisited: Optimizing a Tiny Robot’s Rules, Ten Years Later

Featured

A decade-old folder, handwritten notes, and a deceptively simple robot.

Introduction

Wrapping up a third personal fun project in two months? Check!! And this is the longest-standing one, and possibly one of my favourites ever. It goes back to when I was barely past my first steps into exploring both Python and Computer Science. This project was fun because it had to do with solving puzzles. I am happy to share it with you, my readers, today.

If you’ve ever watched a Roomba bump into a wall, spin around, and trundle off in a seemingly random direction, you’ve witnessed a real-world version of the problem I’m about to describe. How does a robot that can only sense what’s immediately around it — no map, no memory of where it’s been, no grand plan — manage to cover every square inch of a room?

In January 2015, I was working through Harvey Mudd College’s “CS for All” materials on my own — no live instruction, no solutions to check against — and I encountered Picobot: a simulated robot even simpler than a Roomba. Picobot became one of my favourite puzzles. I scribbled diagrams, wrote copious amounts of notes, tested rules, and eventually optimized my solutions down to what I believed were the minimum number of rules needed to cover the whole room. I kept everything in a well-worn file folder. This was my very first serious foray into CS, and I loved it!

That folder has survived multiple reorganizations over the years – every once in a while I’d open it, think about writing it up properly, and close it again. But after a positive experience wrapping up projects collaboratively with Claude — the colormap app, the Mill’s Methods post — Picobot was next in line.

With the help of Claude Opus (v 4.5) I verified those old solutions, built a Python simulator, and finally documented the work properly.

This post is about the optimization journey. The reasoning. The moments when things click.

What is Picobot?

Picobot is a pedagogical robot created for Harvey Mudd’s introductory computer science course. It lives in a grid world and has one job: visit every empty cell. The catch? Picobot is nearly blind.

The Constraints

Picobot can only sense its four immediate neighbours: North, East, West, and South. For each direction, it knows one thing: is there a wall, or is it empty? That’s it. No memory of where it’s been. No coordinates. No global view.

Here’s an example of what Picobot “sees”:

  N
W ● E   ← Picobot sees: N=empty, E=wall, W=empty, S=empty
  S

We encode this as a 4-character string: xExx

  • x means empty (nothing there)
  • N, E, W, or S means wall in that direction
  • Position order is always: North, East, West, South

So xExx means “wall to the East, everything else empty.”

The Rules

Picobot follows rules that say: “If I’m in this state and I see this pattern, then move this direction and switch to this state.”

The format is:

STATE  SURROUNDINGS -> MOVE  NEW_STATE

For example:

0 Nx** -> E 1

This means: “In State 0, if there’s a wall to the North and East is empty, move East and switch to State 1.”

The wildcard * matches anything:

0 x*** -> N 0

“In State 0, if North is empty (don’t care about the rest), move North and stay in State 0.”

There’s also a special move: X (stay put). The robot doesn’t move but can change state. This seems useless at first. It’s not.
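
To make the matching semantics concrete, here is how a simulator might test a surroundings string against a rule pattern. This is a minimal sketch of my own, not the Harvey Mudd implementation:

def matches(pattern: str, surroundings: str) -> bool:
    # A rule pattern matches if every character is either '*' (wildcard)
    # or equal to the corresponding surroundings character
    return all(p == '*' or p == s for p, s in zip(pattern, surroundings))

print(matches('Nx**', 'NxxS'))  # True: wall North, East empty
print(matches('Nx**', 'NExS'))  # False: a wall to the East blocks this rule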

The Goal

Write the smallest set of rules that makes Picobot visit every empty cell in a room, regardless of where it starts.

The Harvey Mudd Picobot lab posed two main challenges, below, and several optional ones.

  1. Empty Room: A rectangular room with walls only on the boundary
  2. Maze: A maze with single-cell-wide corridors

The lab simulator is actually still live at https://www.cs.hmc.edu/picobot/

Give it a shot, it’s fun!

Back to the story.

The Empty Room: From 7 to 6 Rules

The Strategy: Boustrophedon

The word comes from Greek: “ox-turning.” It’s how you plow a field — go one direction, turn around at the end, come back the other way. It’s how you mow a lawn, or how you read one line of text and then the next (if you are Etruscan).

For Picobot, the boustrophedon pattern looks like this:

The robot sweeps East, drops down, sweeps West, drops down, repeats. But first, it needs to get to the top of the room — so it goes North until it hits the wall.

My Initial Solution: January 6, 2015

I have an email I sent to myself at 12:44 AM on January 6, 2015 — working late (on a Tuesday night!!!) on this puzzle. It shows my first experiments:

First experiment: go to origin:
# go to origin
0 **** -> X 3
3 ***x -> S 3
3 ***S -> W 2
2 **x* -> W 2
2 **W* -> X 0

And then my first complete solution:

Final solution program 1
0 x*** -> N 0 # (initial) state 0 with nothing N: go N
0 Nx** -> E 1 # state 0 with a wall N but none E: go E, AND

1 *x** -> E 1 # state 1 with nothing E: go E
# OR, instead of previous 2. This is if initially by E wall
0 NE** -> W 2 # state 0 with a wall N and one E: go W

# once it reaches east wall
1 *E** -> W 2 # state 1 with a wall E: go W
2 **x* -> W 2 # state 2 with nothing W: go W
2 **W* -> S 1 # state 2 with a wall W: go S

That’s 7 rules. The comments show my thinking — I was handling the case where Picobot starts by the East wall separately.

The Harvey Mudd lecture slides posed an extra challenge: “how FEW rules can you use? The current record is six rules.” The solution wasn’t shown — just the target. That became the question that hooked me: how do you get there? I was one rule away.

The Insight: “C and F Are the Same”

My handwritten notes show positions labelled A through F, representing different situations Picobot might encounter. The breakthrough came when I realized:

Position C (just finished going North, need to decide: East or West?) and Position F (at a wall during the sweep, need to decide direction) were being handled by separate rules — but they didn’t need to be.

The key insight: after going North and hitting the wall, I don’t need a separate rule to check East. I can use the X move (stay put) to transition to State 1, and let State 1’s existing rules handle it.

This is counter-intuitive. The X move looks like wasted time — the robot just sits there! But it’s not wasted. It’s a state transition without movement that lets me reuse existing rules instead of duplicating logic.

The Final Solution: January 24, 2015

Eighteen days later, I emailed myself the optimized solution — Saturday, January 24, 2015 at 5:05 PM (weekend fun work):

# Optimized EMPTY ROOM program:
0 x*** -> N 0
0 N*** -> X 1
1 *x** -> E 1
1 *E** -> W 2
2 **x* -> W 2
2 **W* -> S 1

Six rules. Let me walk through why this works:

State 0 handles “going North.” When Picobot hits the North wall, it executes X 1 — stays put but switches to State 1. Now State 1 takes over.

State 1 is dual-purpose:

  • If East is empty → go East (continuing the sweep)
  • If East is wall → start going West (end of row)

Because Picobot stays put when transitioning from State 0 to State 1, it’s in the exact same position, and State 1 correctly determines whether to go East or start heading West.

State 2 sweeps West. When it hits the West wall, it goes South and switches back to State 1. Again, State 1 determines: East or end of row?

The elegance is that State 1 does double duty. It handles both “continue going East” and “decide what to do at the end of a row.” The X move is what makes this possible.

Verified

I tested this against all 529 possible starting positions in a 25×25 room. Every single one reaches 100% coverage. Maximum steps: 1,013. The solution works.
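
For the curious, the exhaustive check is compact enough to sketch in full. This is an illustrative re-implementation, not the code in my repository; the grid encoding ('#' for wall, '.' for empty) and the function names are mine:

RULES = [  # (state, pattern, move, new_state): the 6-rule solution
    ('0', 'x***', 'N', '0'), ('0', 'N***', 'X', '1'),
    ('1', '*x**', 'E', '1'), ('1', '*E**', 'W', '2'),
    ('2', '**x*', 'W', '2'), ('2', '**W*', 'S', '1'),
]
MOVES = {'N': (-1, 0), 'E': (0, 1), 'W': (0, -1), 'S': (1, 0), 'X': (0, 0)}

def surroundings(grid, r, c):
    # Build the NEWS string: direction letter if wall, 'x' if empty
    s = ''
    for d in 'NEWS':
        dr, dc = MOVES[d]
        s += d if grid[r + dr][c + dc] == '#' else 'x'
    return s

def covers(grid, r, c, state='0', max_steps=5000):
    empty = {(i, j) for i, row in enumerate(grid)
             for j, ch in enumerate(row) if ch == '.'}
    visited = {(r, c)}
    for _ in range(max_steps):
        if visited == empty:
            return True
        s = surroundings(grid, r, c)
        for st, pat, move, new in RULES:
            if st == state and all(p == '*' or p == ch for p, ch in zip(pat, s)):
                dr, dc = MOVES[move]
                r, c, state = r + dr, c + dc, new
                visited.add((r, c))
                break
        else:
            return False  # no rule matched: Picobot is stuck
    return False

# 25x25 room: boundary walls, 23x23 = 529 empty interior cells
grid = ['#' * 25] + ['#' + '.' * 23 + '#'] * 23 + ['#' * 25]
print(all(covers(grid, r, c) for r in range(1, 24) for c in range(1, 24)))  # True

The same harness also makes the starting-state experiment described later trivial: call covers(grid, r, c, state='1') and watch coverage fail.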

The Maze: From 16 to 12 Rules

The maze challenge is different. Corridors are one cell wide. There are dead ends, branches, and loops. The boustrophedon strategy won’t work here.

The Strategy: Right-Hand Wall Following

The classic maze-solving algorithm: keep your right hand on the wall and walk. You’ll eventually visit everywhere (in a simply-connected maze).

For Picobot, “right hand on wall” translates to:

  1. If you can turn right, turn right
  2. Otherwise, if you can go forward, go forward
  3. Otherwise, if you can turn left, turn left
  4. Otherwise, turn around (dead end)

With four directions (North, East, West, South) and the “right-hand” rule relative to each, we need four states — one for each direction Picobot is “facing.”

  • State 0: Going North (right hand on East wall)
  • State 1: Going East (right hand on South wall)
  • State 2: Going West (right hand on North wall)
  • State 3: Going South (right hand on West wall)

Initial Solution: 16 Rules

The straightforward implementation uses 4 rules per state:

# State 0: Facing North (right hand = East)
0 *x** -> E 1 # Can turn right → turn right (now facing East)
0 *Ex* -> N 0 # Can't turn right, but forward is open → go North
0 *EW* -> W 3 # Can't go forward → turn left (face West)
0 *EWS -> S 2 # Dead end → turn around (face South)

# ... and similarly for States 1, 2, 3

16 rules total. It works. But can we do better?

Two-Phase Optimization

My maze notes show two distinct approaches:

Phase 1: Working from principles. The small diagram in my notes shows me reasoning about the state transitions theoretically. What’s the minimum information needed at each decision point? Where is there redundancy?

Phase 2: Empirical debugging. The large diagram shows positions A through K — specific spots in a maze where I tested rules. When the principled approach hit edge cases, I sketched the situation, walked through it (“what would I do here?”), and translated my intuition into rules.

The note “Key is G” appears on the page. Position G was where the solution got validated — when it handled G correctly, the logic was proven.

The Iteration: A Failed Attempt

That same January 24 email shows me trying to adapt the empty room optimization for the maze — and failing:

This, optimized for maze, does not work. At dead ends it turns around but then it goes to the other end and enters an infinite loop...

The attempt that followed didn’t handle dead ends properly. The robot would turn around, walk to the other end, and loop forever.

The Final Solution

Then, in the same email:

This works!!
0 *x** -> E 1
0 xE** -> N 0
0 NE** -> X 2
1 ***x -> S 3
1 *x*S -> E 1
1 *E*S -> X 0
2 x*** -> N 0
2 N*x* -> W 2
2 N*W* -> X 3
3 **x* -> W 2
3 **Wx -> S 3
3 **WS -> X 1

12 rules: 3 per state instead of 4. A 25% reduction.

The key insight: each state now handles only three cases:

  1. Right is open → turn right
  2. Forward is open → go forward
  3. Both blocked → stay put, rotate to next state (which will check left/behind)

The X move chains states together. If right and forward are blocked, we stay put and try the next state. That state checks its right (our left). If that’s blocked too, it chains again. The sequence continues until we find a way forward.

Verified

Tested against all 287 reachable positions in a 25×25 maze, and all 280 cells in the actual Harvey Mudd lab maze. 100% coverage every time. Here’s one simulation:

The right-hand rule doesn’t just guarantee coverage — it collapses the state space. The rules are ordered to check “right side open” first. In State 0 (facing North), rule 1 asks: is East open? If yes, go East — Picobot never evaluates what’s ahead. That’s how rule ordering implements “keep your hand on the wall.” Different physical positions with the same wall-relationship become equivalent, and that’s what makes 4 states and 12 rules possible. Take a look at the simulations below of the two equivalent positions sketched in my handwritten notes, shown earlier:

Making It Explicit: Starting State Matters

Here’s something worth highlighting — something that’s in the Harvey Mudd lab instructions but easy to overlook.

The 6-rule empty room solution requires Picobot to start in State 0.

The Harvey Mudd simulator always starts in State 0, and the lab materials mention this. Whether I consciously accounted for this in 2015, I don’t remember — I didn’t document it in my notes. But when I built my own simulator in 2025, I could test explicitly: what happens if Picobot starts in State 1 or State 2?

Start State | Initial Direction | Coverage
0           | North             | 100% ✓
1           | East              | ~50% ✗
2           | West              | ~45% ✗

Starting in State 1 or 2, Picobot gets stuck. It begins the East-West sweep from wherever it starts — never going North to reach the top first. The rows above its starting position never get visited.

This isn’t a bug in the solution. It’s a constraint: the boustrophedon pattern assumes you start by going North. The 6-rule minimum only works because State 0 guarantees that first trip to the top wall.

A truly state-agnostic solution — one that works regardless of starting state — would need more rules. The elegance of 6 rules comes from working within the standard initial conditions.


What I Learned

  1. The X move is not wasted time. It’s a state transition that enables rule reuse. This is the key to minimizing rule count.
  2. Different problems, different methods. The empty room yielded to analytical insight (“C and F are the same”). The maze required two phases: principled derivation, then empirical debugging.
  3. Implicit assumptions matter. The starting state requirement was in the lab materials all along, but easy to overlook. Building my own tools made it explicit.
  4. Old projects are worth revisiting. With fresh eyes — and some help — I found new ways to understand and share work I already knew.
  5. How I approached it. Looking back at my notes, I see a pattern that’s familiar from my day-to-day work: diagrams everywhere, positions A-K labeled, “me walking in the maze.” Try something → watch where it fails → sketch that spot → ask “what would I do here?” → translate to rules → repeat. “C and F are the same” collapsed the problem by seeing equivalence the formal notation obscured. The notes weren’t just records — they were how I thought. And 18 days between 7 rules and 6 rules: no rushing, no giving up. This is field scientist methodology applied to computer science. Maybe that’s why I loved it.
  6. There is no free lunch in AI collaboration. This project — both the technical verification and this blog post — would not have been possible without deep understanding of the subject matter. That understanding came from me (the 2015 work, the insights, the diagrams), from the extensive documentation I’d kept, and from all the iterative work we did together. This isn’t “vanilla coding” where you prompt an AI and get a finished product. It’s genuine collaboration: human insight plus AI execution. The AI didn’t optimize Picobot — I did, in 2015. The AI helped me verify, document, and communicate that work in 2025.

Try It Yourself

The full Python implementation is on GitHub: https://github.com/mycarta/picobot-optimizer

It includes:

  • picobot_simulator.py — The core engine
  • picobot_rooms.py — Empty room and maze generators
  • picobot_visualizer.py — GIF animation creator
  • optimized_solutions.py — The 6-rule and 12-rule solutions
  • test_solutions.py — Exhaustive verification

All documented and ready to explore.


What’s Next

Part 2: How I revisited this project with AI assistance — and what that collaboration actually looked like.

Part 3: Educational materials. Exercises, concept checks, and scaffolded challenges for those learning to code.


The Picobot simulator was created for Harvey Mudd College’s “CS for All” course. My optimization work is from January 2015. Verification, documentation, and visualization were completed in January 2025 with AI assistance.


AI/HI (Human Intelligence) Transparency Statement

Modified from Brewin

Has any text been generated using HI? Yes
Has any text been generated using AI? Yes
Has any text been improved or corrected using HI? Yes
Have any methods of analysis been suggested using HI? Yes
Have any methods of analysis been suggested using AI? Yes
Do any analyses utilize AI technologies, such as Large Language Models, for tasks like analyzing, summarizing, or retrieving information from data? Yes

Additional context:

The Picobot optimization work described in this post — the solutions, the insights, the handwritten diagrams, the reasoning behind “C and F are the same” and “Key is G” — was done entirely by me in January 2015, working alone through Harvey Mudd’s CS for All materials with no live instruction and no solutions to check against. The emails quoted in this post are timestamped records from that work.

In January 2025, I revisited this project with Claude AI (Anthropic). Claude built the Python simulator, ran exhaustive verification tests, created the GIF visualizations, and helped document the reasoning. The explicit testing of starting states emerged from our joint exploration — I asked the question, Claude ran the tests.

This post was drafted collaboratively. I provided the source materials (my 2015 notes, emails, the verified solutions, our session transcripts), direction, and editorial judgment throughout. Claude drafted based on these inputs and our discussion of structure and framing. I reviewed, revised, and made all final decisions about what went to publication.

A note on AI collaboration: This kind of work is not “vanilla coding” — prompting an AI and receiving a polished output. It required deep domain knowledge (mine), extensive primary documentation (my 2015 notes and emails), iterative correction (many rounds), and genuine intellectual engagement from both sides. The AI contributed too — not the original insights, but meta-insights: recognizing patterns in my notes, naming things I’d done but hadn’t articulated (like “C and F are the same” as a key moment), and seeing that I’d used different methodologies for the empty room versus the maze. The AI did not and could not have done this alone. Neither could I have done the verification, visualization, and documentation at this scale without AI assistance. That’s what real collaboration looks like.

The intellectual work is mine. The documentation, verification, and articulation is collaborative.

Modernizing Python Code in the AI Era: A Different Kind of Learning

Featured

A few years ago I wrote about advancing my Python coding skills after working through a couple of chapters from Daniel Chen’s excellent book Pandas for Everyone. In that post I showed how I improved code I’d written in 2018 for the SEG Machine Learning contest. The original code used unique() to get lists of well names, then looped through with list comprehensions to calculate flagged samples and proportions. The 2020 version replaced all that with groupby() and apply(), making it much more compact and Pythonic. For example, where I’d written a list comprehension like [result_a.loc[result_a.zone==z,'flag'].sum() for z in zones_a], I could now write simply result_a.groupby('zone', sort=False).flag.sum().values. The runtime also improved – from 86ms down to 52ms. I remember being quite happy with how much cleaner and more readable the code turned out, and how the learning from those two chapters made an immediate practical difference.
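
To make the before/after concrete, here is a toy reconstruction. The DataFrame below is a made-up stand-in (the contest data isn't reproduced here), but the column names match the ones in my code:

import pandas as pd

# Made-up stand-in for the contest results: a zone label and a flag per sample
result_a = pd.DataFrame({
    'zone': ['A', 'A', 'B', 'B', 'B', 'C'],
    'flag': [1, 0, 1, 1, 0, 0],
})

# 2018 version: unique() plus a list comprehension
zones_a = result_a.zone.unique()
flagged_2018 = [result_a.loc[result_a.zone == z, 'flag'].sum() for z in zones_a]

# 2020 version: one groupby call does the same work
flagged_2020 = result_a.groupby('zone', sort=False).flag.sum().values

print(flagged_2018)        # [1, 2, 0]
print(list(flagged_2020))  # [1, 2, 0]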

Recently, I had to modernize the Busting bad colormaps Panel app, which I built back in 2020 to demonstrate colormap distortion artifacts (something that – as you know – I care a lot about). The app had been deliberately frozen in time – I’d pinned specific library versions in the environment file because I knew things would eventually become obsolete, and I wanted it to stay functional for as long as possible without having to constantly fix compatibility issues.

But some of those issues had finally caught up with me, and the app had been down for some time. Last fall, working with GitHub Copilot, I fixed some matplotlib 3.7+ compatibility problems – replacing the deprecated cm.register_cmap() with plt.colormaps.register(), fixing an rgb2gray error, and resolving a ValueError in the plotting functions.
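
For reference, the registration change looks roughly like this. A minimal sketch, not the app's actual code; I'm showing the matplotlib.colormaps registry form of the new call:

import matplotlib
from matplotlib.colors import LinearSegmentedColormap

my_cmap = LinearSegmentedColormap.from_list('my_cmap', ['black', 'white'])

# Old API, deprecated in matplotlib 3.7 and later removed:
# from matplotlib import cm
# cm.register_cmap(name='my_cmap', cmap=my_cmap)

# New API (matplotlib 3.5+): register on the colormap registry
matplotlib.colormaps.register(my_cmap, name='my_cmap')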

But the deployment was also broken. In 2021, mybinder.org had switched to JupyterLab as the default interface, changing how apps needed to be deployed. Panel developers had to adapt their code to work with this new setup. The old Panel server URL pattern no longer worked. I tried to figure out the new URL pattern by browsing through the Binder documentation, but I couldn’t make sense of it and failed miserably. It was a short-lived effort that pushed me toward trying something different: full-on coding with Claude Opus 4.5 using Copilot in VSCode.

That’s what allowed me, this month, to complete the modernization process (though honestly, we still haven’t fully sorted out a Binder timeout issue).

A step back to 2020: Building the app from scratch

When I originally built the colormap app, I coded everything myself, experimenting with Panel features I’d never used before, figuring out the supporting functions and visualizations. I also got very good advice from the Panel Discourse channel when I got stuck.

One issue I worked on was getting the colormap collection switching to behave properly. After the first collection switch, the Colormaps dropdown would update correctly, but the Collections dropdown would become non-responsive. With help from experts on the Discourse channel, I figured out how to fix it using Panel’s param.Parameterized class structure.

2026: Working with Claude

The second, and hardest, part of the modernization was done almost entirely by Claude Opus. Here’s what that looked like in practice:

Binder deployment: Claude independently figured out the new JupyterLab URL pattern (?urlpath=lab/tree/NotebookName.ipynb instead of the old ?urlpath=%2Fpanel%2FNotebookName). Only later, when fact-checking for this post, did we discover the history of Binder’s 2021 switch to JupyterLab and how Panel had to adapt. This helped, though we’re still working through some timeout issues.

Environment upgrade: Claude upgraded to Python 3.12 and Panel 1.8.5, bringing everything up to modern versions. The key packages are now Panel 1.8.5, param 2.3.1, and bokeh 3.8.1.

Code modernization: Claude spotted and fixed deprecated API calls – the style parameter for Panel widgets became styles.

Collection switching – Claude’s breakthrough: This was Claude’s biggest solo contribution. The collection switching broke during the update, and Claude independently diagnosed that the class-based param.Parameterized approach that had worked in Panel 0.x wasn’t reliable in Panel 1.x. Without me having to guide the solution, Claude figured out how to rewrite it using explicit widgets with param.watch callbacks.

The comparison shows the change:

The new approach uses explicit widget objects with callback functions, which works more reliably in Panel 1.x than the class-based parameterized approach.
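
In outline, the new pattern looks like this. A schematic sketch with made-up collection contents, not the app's actual widget code:

import panel as pn

collections = {
    'matplotlib': ['viridis', 'plasma', 'inferno'],
    'cmocean':    ['deep', 'thermal', 'haline'],
}

collection_widget = pn.widgets.Select(name='Collections',
                                      options=list(collections))
colormap_widget = pn.widgets.Select(name='Colormaps',
                                    options=collections['matplotlib'])

def update_colormaps(event):
    # Repopulate the colormap dropdown whenever the collection changes
    colormap_widget.options = collections[event.new]
    colormap_widget.value = colormap_widget.options[0]

collection_widget.param.watch(update_colormaps, 'value')

app = pn.Column(collection_widget, colormap_widget)

Because the callback is wired explicitly to the Collections widget's value, there is no class-level parameter machinery to go stale: each change event repopulates the dependent dropdown directly.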

New features: Claude integrated two new colormap collections I’d been wanting to add for years – Fabio Crameri’s scientific colormaps (cmcrameri) and Kristen Thyng’s cmocean colormaps. That brought the total from 3 to 5 colormap collections.

Here are examples of the app showing each of the new collections:

The app testing of cmocean deep colormap
The app testing of Crameri’s batlow colormap

Documentation: Claude updated the README with detailed step-by-step Binder instructions, added a troubleshooting section, and created a table documenting all five colormap collections.

I provided the requirements and guidance throughout, but I almost never looked at the implementation details – what I’ve taken to calling the “bits and bobs” of the code. I focused on what needed to happen; Claude figured out how to make it happen.

What changed (and what didn’t)

I still understand what the code does conceptually. I can read it, review it, check that it’s correct. I know why we needed to move from Parameterized classes to explicit widgets, and I understand the reactive programming model. But I didn’t write those lines myself.

The work happens at a different level now. I bring the domain expertise (what makes a good colormap visualization), the requirements (needs to deploy on Binder, needs these specific colormap collections), and the quality judgment (that widget behavior isn’t quite right). Claude brings the implementation knowledge, awareness of modern best practices, and the ability to quickly adapt code patterns to new frameworks.

This is really different from my 2020 experience. Back then, working through those Pandas patterns taught me techniques I could apply to other projects. Now, I’m learning what becomes possible when you can clearly articulate requirements and delegate the implementation.

The honest trade-off

There’s a trade-off here, and I’m trying to be honest about it. In 2020, working through the Panel widget patterns taught me things that stuck. In 2026, I got working, modernized code in a fraction of the time, but with less hands-on knowledge of Panel 1.x internals.

For this particular project, that trade-off made sense. I needed a working app deployed and accessible, not deep expertise in Panel migration patterns. But I’m conscious that I’m optimizing for different outcomes now: shipping features fast versus building deep technical understanding through hands-on work.

What this means going forward

After years of writing code line by line, this new way of working feels both efficient and different. I got more done in a couple of hours than I might have accomplished in several weeks working solo. The app is modernized, deployed, working better than ever, and even has new features I’d been wanting to add for years.

This has been a gamechanger for how I work. I still do the work that matters most to me: seeing the tool gap, coming up with the vision, iteratively prototyping to flesh out what I actually need. That’s substantial work, and it’s mine. But after that initial phase? A lot of the implementation will be done with Claude. The app is done and it’s great, and I know this is the path forward for me.

References

Chen, D.Y. (2018). Pandas for Everyone: Python Data Analysis. Addison-Wesley Professional.

Crameri, F. (2018). Geodynamic diagnostics, scientific visualisation and StagLab 3.0. Geoscientific Model Development, 11, 2541-2562. https://www.fabiocrameri.ch/colourmaps/

Niccoli, M. (2020). Keep advancing your Python coding skills. MyCarta Blog. https://mycartablog.com/2020/10/22/keep-advancing-your-python-coding-skills/

Thyng, K.M., Greene, C.A., Hetland, R.D., Zimmerle, H.M., and DiMarco, S.F. (2016). True colors of oceanography: Guidelines for effective and accurate colormap selection. Oceanography, 29(3), 9-13. https://matplotlib.org/cmocean/


Try the app yourself: The modernized colormap distortion app is available on GitHub and you can run it in Binder without installing anything.

The value of intellectual play: Mill, machine learning, and a drilling problem I couldn’t stop thinking about

Featured

A few years back, I watched a CSEG talk by Lee Hunt (then at Jupiter Resources) called Value thinking: from the classical to the hyper-modern. One case study in particular stuck with me—so much so that I ended up exploring it in a Jupyter Lab notebook, bringing it up in a job interview, and eventually testing whether an AI could reason through it on its own.

This post is about that journey. It’s also about what happens when you let yourself get genuinely curious about someone else’s problem. And—fair warning—it involves a 19th-century philosopher, a seven-well dataset, and a neural network that learned to distrust AVO attributes.

The problem

Jupiter Resources had a history of occasionally encountering drilling trouble in the Wilrich reservoir—specifically, loss of circulation when encountering large systems of open fractures. Mud loss. The kind of problem that can cost you a well.

They had done extensive geophysical work with multiple seismic attributes that, in theory, should correlate with fractures: Curvature, Coherence, AVAz (amplitude variation with azimuth), VVAZ (velocity variation with azimuth), and Diffraction imaging. But they lacked direct calibration data for the drilling problem, and some of the attributes were giving conflicting results.

Lee Hunt, who led the team and the geophysical work, suspected from the start that the AVO-based attributes might be compromised. He had seen evidence as far back as 2014 that AVAz and VVAZ responses in the Wilrich were dominated by an overlying coal, not the fractures themselves—the attributes were measuring a different geological signal entirely. Diffraction imaging was planned early as a complementary measure, precisely because it might not be affected by the coals in the same way (personal communication).

Seven wells. Five attributes. Four of the wells had experienced drilling problems; three had not. Here’s the data:

The question: which attribute—or combination—could reliably predict drilling problems, so that future wells could be flagged ahead of time?

Mill’s Methods: 19th-century philosophy meets drilling risk

Rather than accept uncertainty and provide no geophysical guidance at all, the team at Jupiter tried something different: Mill’s Methods of Induction. Their goal was to find a pattern that could help them advise the operations team—flag high-risk well locations ahead of time so contingency plans could be in place. Mill’s Methods are a set of logical procedures for identifying causal relationships, laid out by philosopher John Stuart Mill in 1843. They’re often illustrated with a food poisoning example (who ate what, who got sick), but they work just as well here.

This approach was characteristic of Lee Hunt’s attitude toward quantitative geophysics—an attitude I had come to admire through his other work. A few years earlier, he had published a CSEG Recorder column called “Many correlation coefficients, null hypotheses, and high value,” a tutorial on statistics for geophysicists that included synthetic production data and an explicit invitation: “You can do it, too. Write in to tell us how.”

I took him up on it. I worked through his examples in Jupyter notebooks, built visualizations, explored prediction intervals, learned a good deal of scientific computing along the way. I reached out to him about the work. I even wrote up some of that exploration in a blog post on distance correlation and variable clustering—the kind of technical deep-dive where you’re learning as much about the tools as about the data. That extended engagement gave me a feel for his way of thinking: understand the statistics, accept the uncertainty, improve your techniques if you can—but don’t just throw up your hands when the data is messy.

Method of Agreement: Look at all the problem wells (A, B, F, G). What do they have in common? Curvature is TRUE for all four. So is Diffraction imaging. The other attributes vary.

Method of Difference: Compare problem wells to non-problem wells (C, D, E). Neither Curvature nor Diffraction alone perfectly discriminates—Well E has Curvature TRUE but no problem; Well D has Diffraction TRUE but no problem.

Joint Method: But here’s the key insight—Curvature AND Diffraction together form a perfect discriminator. Every well where both are TRUE had problems. Every well where at least one is FALSE did not.

This wasn’t a claim about causation. It was a decision rule: when the next well location shows both high curvature and diffraction anomalies, flag it as elevated risk and ensure contingency protocols are in place.

The logic is sound because of asymmetric costs. Preparing for mud loss (having lost circulation material on site, adjusting mud weight plans) is a minor expense. Not preparing when you should have—that’s where you lose time, money, sometimes the well. You don’t need certainty to justify preparation. You need a defensible signal.
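
The decision rule itself fits in a few lines of code. The sketch below is mine; the TRUE/FALSE values are filled in to be consistent with the relationships described above, and the one cell the post doesn't pin down (Well C's attributes) is an assumption for illustration only:

# (curvature, diffraction, had_problem) per well; Well C's attribute
# values are assumed here -- the real ones are in the table above
wells = {
    'A': (True,  True,  True),
    'B': (True,  True,  True),
    'C': (False, False, False),  # assumed for illustration
    'D': (False, True,  False),
    'E': (True,  False, False),
    'F': (True,  True,  True),
    'G': (True,  True,  True),
}

def flag_high_risk(curvature: bool, diffraction: bool) -> bool:
    # Joint Method rule: both indicators TRUE -> flag elevated risk
    return curvature and diffraction

# The conjunction separates problem wells from the rest perfectly
print(all(flag_high_risk(c, d) == problem
          for c, d, problem in wells.values()))  # True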

What a neural network learned

I wanted to see if a data-driven approach would arrive at the same answer. Looking at the table myself, and spending some time applying Mill’s Methods, I had already seen the pattern—Curvature and Diffraction together were the key predictors. But I was curious: what would a simple neural network learn on its own?

I trained a two-layer network (no hidden layer)—mathematically equivalent to logistic regression—on the same seven wells. (Yes, seven wells. I know. But stay with me.)

The network classified all seven wells correctly. But the real insight came from the weights it learned:

Attribute   | Weight
Curvature   | +14.6
Diffraction | +9.7
Coherence   | ~0
AVAz        | −4.9
VVAZ        | −14.5

Curvature and Diffraction were strongly positive—predictive of problems. Coherence contributed almost nothing. But AVAz and VVAZ were negative—the network learned to suppress them.

A way to think about negative weights: imagine training a network to identify ducks from a set of photos that includes birds, ducks, and people in duck suits. The network will learn to weight “duck features” positively, but also to weight “human features” negatively—to avoid being fooled by the costumes. In the Wilrich case, the AVAz and VVAZ attributes were like duck suits: they looked like fracture indicators, but they were actually measuring something else.

This was interesting. All five attributes have theoretical justification for detecting fractures. Why would the network actively discount two of them?

When I mentioned this result to Lee Hunt, he confirmed what he had long suspected (personal communication): the AVAz and VVAZ responses in the Wilrich were dominated by an overlying coal, not the fractures themselves. He had measured this effect and documented it in a 2014 paper, where multiple attributes—including AVAz—showed statistically significant correlations to coal thickness rather than to reservoir properties. The neural network had learned, from just seven data points, to suppress exactly the attributes that Lee’s domain knowledge had already flagged as problematic.

This is Mill’s Method of Residues in action: if you know something else causes an observation, subtract it out. And it’s a reminder that domain knowledge and data-driven methods can converge on the same answer when both are applied honestly. I found this deeply satisfying.

What the AI got right—and what it missed

More recently, I revisited this problem using ChatGPT with the Wolfram plugin. I wanted to see if an AI, given just the table and a prompt about Mill’s Methods, could reason its way to the same conclusions.

It did—mechanically. It correctly identified Curvature and Diffraction as the consistent factors among problem wells. It noted that neither attribute alone was a perfect discriminator. It even offered to run logistic regression.

But it missed the interpretive leap. It hedged with phrases like “although there are exceptions” when in fact there were no exceptions to the conjunction rule. And it didn’t articulate the pragmatic framing: that the goal wasn’t to find the true cause, but to build a defensible decision rule under uncertainty.

That framing—the shift from epistemology to operations—required domain knowledge and judgment. The AI could apply Mill’s Methods. It couldn’t tell me why that application was useful here.

Drafting this post, I worked with a different AI—Claude—and found the collaboration more useful in a different way: not for solving the problem, but for reflection. Having to explain the context, the history, the why of my interest helped me articulate what I’d been carrying around in my head for years. Sometimes the value of a thinking partner isn’t in the answers, but in the questions that force you to be clearer.

Why this stuck with me

I’ll be honest: I kept thinking about this problem for years. It became part of a longer arc of engagement with Lee’s work—first the statistics tutorial, then the Wilrich case study, each building on the last.

When I interviewed for a geophysics position (Lee was retiring, and I was a candidate for his role), I mentioned this case study. I pulled out a pen and paper and wrote the entire seven-well table from memory. They seemed impressed—not because memorizing a table is hard, but because it signaled that I’d actually enjoyed thinking about it. That kind of retention only happens when curiosity is real.

I didn’t get the job. The other candidate had more operational experience, and that was the right call. But the process was energizing, and I’m sure that enthusiasm carried into my next opportunity, where I landed happily and stayed for over six years.

I tell this not to brag, but to make a point: intellectual play compounds. You don’t always see the payoff immediately. Sometimes you explore a problem just because it’s interesting—because someone like Lee writes “You can do it, too” and you decide to take him seriously—and it pays dividends in ways you didn’t expect.

The convergence

Three very different approaches—19th-century inductive logic, a simple neural network, and (later) an AI assistant—all pointed to the same answer: Curvature and Diffraction predict drilling problems in this dataset. The AVO attributes are noise, or worse, misleading.

When three methods converge, you can trust the signal. And you can make decisions accordingly.

That’s the real lesson here: rigorous reasoning under uncertainty isn’t about finding the One True Cause. It’s about building defensible heuristics, being honest about what you don’t know, and updating as new data comes in. Mill understood this in 1843. A neural network can learn it from seven wells. And sometimes, so can an AI—with a little help.

I hope you enjoyed this as much as I enjoyed putting it together.


The original case study was presented by Lee Hunt in his CSEG talk “Value thinking: from the classical to the hyper-modern.” The neural network analysis is in my Geoscience_ML_notebook_4. Lee documented the coal correlation issue in Hunt et al., “Precise 3D seismic steering and production rates in the Wilrich tight gas sands of West Central Alberta” (SEG Interpretation, May 2014), and later reflected on confirmation bias as an obstacle to recognizing such issues in “Useful Mistakes, Cognitive Biases and Seismic” (CSEG Recorder, April 2021). My thanks to Lee for the original inspiration, for confirming the geological context, and for sharing the original presentation materials.


  • Hunt, L., 2013, Many correlation coefficients, null hypotheses, and high value: CSEG Recorder, December 2013. Link
  • Hunt, L., S. Hadley, S. Reynolds, R. Gilbert, J. Rule, M. Kinzikeev, 2014, Precise 3D seismic steering and production rates in the Wilrich tight gas sands of West Central Alberta: SEG Interpretation, May 2014.
  • Hunt, L., 2021, Useful Mistakes, Cognitive Biases and Seismic: CSEG Recorder, April 2021.
  • My neural network analysis: Geoscience_ML_notebook_4
  • My earlier exploration of Lee’s production data: Data exploration in Python: distance correlation and variable clustering
  • ChatGPT + Wolfram session on Mill’s Methods: Gist

AI/HI Transparency Statement Modified from Brewin http://www.theguardian.com/books/2024/apr/04/why-i-wrote-an-ai-transparency-statement-for-my-book-and-think-other-authors-should-too

Has any text been generated using AI? Yes
Has any text been improved or corrected using HI? Yes

Additional context: This post emerged from a conversation with Claude AI (Anthropic). I provided the source materials (a ChatGPT + Wolfram session, a Jupyter notebook, personal history with the problem), direction, and editorial judgment throughout. Claude drafted the post based on these inputs and our discussion of structure, voice, and framing. I reviewed multiple drafts, revised as needed, rewrote some key sections, and made all final decisions about what went to publication. The core analysis—Mill’s Methods, the neural network, the interpretation—was done by me years before this collaboration; the AI’s role was in helping articulate and structure that work for a blog audience.

ChatGPT as an essay-writing assistant – Part III

Featured

The Challenge of a Satisfying Conclusion

When I published Part II of this series back in February 2025, I had a plan for Part III. Show the prompts I used, analyze the time investment, evaluate the result against Part I’s GPT-3.5 baseline, maybe try one more iteration with even newer tools. Straightforward. Methodical.

But I never finished it. To be honest, I lost interest. Another marginally better AI-generated essay wasn’t going to cut it—not for me, and probably not for you readers either. Another iteration showing GPT-4.5 writes slightly better than GPT-4? That’s predictable, uninspiring… so I dropped it.

But the unfinished series sat there in the back of my mind. I wasn’t actively working on it, but I also couldn’t quite let it go. It created a kind of block—I found myself not writing about anything at all, partly because this felt incomplete, partly because my interests had genuinely shifted elsewhere.

Recently though, I came back to this question. Not because I wanted to complete the series for completeness’ sake, but because I wanted to understand what would actually make Part III worthwhile.

So I asked for help. I brainstormed with Claude (Anthropic’s AI) about what Part III should actually be about—what would make it worth writing and worth reading. And something clicked.

What Was the Question Really Asking For?

Looking back now, with decades between me and that moment in Professoressa Carbone’s classroom, I think I understand what she was asking for. She wasn’t looking for recitation of Plato’s philosophy mechanically applied to medieval warfare. She wanted to see if I could reason using philosophical frameworks in unfamiliar territory. Synthesis, not facts. Thinking, not performing memorization.

At 15, I wasn’t ready for that. I had volunteered for the oral examination thinking I could rely on prepared material about Plato’s recent lessons. Instead, she cut through my preparation with a single question that required genuine philosophical thinking: “What would Plato have thought about the Hundred Years’ War?”

It was a brilliant pedagogical move. It required understanding Plato’s ideas deeply enough to apply them to a completely different context—a context Plato never encountered, in a historical period he never knew. It required the kind of intellectual flexibility and reasoning that, honestly, I didn’t have yet.

The humiliation I felt wasn’t really about not knowing facts. It was about being exposed as someone trying to get by on memorization rather than understanding. And I think she knew it. She saw through my bluff.

So What Would Satisfy?

This brings me back to the problem of Part III. Showing that AI can now generate a more sophisticated-sounding essay than my 15-year-old self could produce doesn’t prove anything interesting. AI is very good at generating sophisticated-sounding content. That’s almost the problem.

What would actually satisfy—both as closure for this series and as something worth your time reading—is demonstrating the kind of reasoning Professoressa Carbone was asking for. Can I, now, with the benefit of intellectual maturity and AI assistance, actually think through what Plato might have thought about prolonged warfare between nations? Not just string together plausible-sounding paragraphs with proper citations, but engage in genuine philosophical reasoning?

What Would That Actually Look Like?

If I were to actually write that essay—the one demonstrating real philosophical reasoning rather than AI-generated content—what would it need?

Looking back at the GPT-4 essay from Part II, it has proper citations and coherent structure, but it’s superficial. It lists Platonic concepts (philosopher-kings, guardians, ideal states) and applies them mechanically to medieval warfare. That’s exactly the kind of recitation Professoressa Carbone was testing me against.

Real reasoning would require:

  • Connecting Plato’s specific ideas to specific events or decisions during the Hundred Years’ War—not just general principles applied generally
  • Exploring how Plato’s concepts might actually illuminate something about prolonged conflict between nations that we wouldn’t see otherwise
  • Considering contemporary interpretations or modern applications—what do we learn about conflict, governance, or political philosophy from this exercise?
  • Drawing genuine insights about both Plato and warfare, not just restating both

That’s the essay I’d want to write someday. Not as an academic exercise, but as personal closure—proving to myself I can do the kind of thinking she was asking for.

Closure for Now

But that’s not this post. This post is about giving you, the readers, closure on this series. About acknowledging honestly what I learned about AI as a writing assistant, and why simple iteration wasn’t the answer.

Here’s what I’ve learned:

AI is excellent at generating plausible content. GPT-4 produced an essay that looks credible—proper structure, citations, coherent arguments. For many purposes, that’s enough.

But AI doesn’t reason, it recognizes patterns. The essay from Part II strings together familiar ideas in familiar ways. It’s sophisticated pattern matching, not thinking. It can’t do what Professoressa Carbone was asking for: genuine synthesis that produces new insight.

The real value of AI as a writing assistant isn’t in replacing thinking—it’s in supporting it. AI can help with research, organization, articulation. It can reduce cognitive load so you can focus on the hard part: the actual reasoning. But you still have to do the reasoning.

Writing with AI requires clarity about what you’re trying to accomplish. If you want content generation, AI does that well. If you want thinking support, you need to know what thinking you’re trying to do. The tool can’t figure that out for you.

This series started with a simple question: can AI help me write an essay? The answer turned out to be more nuanced than I expected. It depends entirely on what kind of essay, and what role you want AI to play. For the essay I’d need to write to truly answer Professoressa Carbone’s question—the one that demonstrates reasoning rather than recitation—AI could help, but it couldn’t do the essential work.

Maybe someday I’ll write that essay. For now, I’m moving on to other projects where I’m excited about what AI can do: document extraction in geoscience, agentic workflows, problems where AI’s strengths align better with what I’m trying to accomplish.

Thank you for following this journey with me. Even if it didn’t end where I originally planned, I learned something worth sharing.

A Final Thought: Rigor Without Brutality

I started this series partly because of concerns about AI in education—concerns rooted in my own experience.

ChatGPT has educators calling for more in-class writing and oral examinations. I agree we need assessment that can’t be faked by AI. But I’m deeply opposed to the brutality that often came with those older systems.

Here’s the thing: the brutality was never necessary for the educational value. Professoressa Carbone’s question was pedagogically brilliant. The public humiliation didn’t make it more effective; it just made it traumatic.

We need assessment methods that demand genuine reasoning, in environments that support both students and teachers. It’s possible to have rigorous evaluation without breaking people in the process.

AI forces us to confront what we actually value in education: not the appearance of learning, but the development of genuine understanding and reasoning. The question is whether we can build systems that nurture that without the cruelty.

AI/HI Transparency Statement Modified from Brewin http://www.theguardian.com/books/2024/apr/04/why-i-wrote-an-ai-transparency-statement-for-my-book-and-think-other-authors-should-too

Has any text been generated using AI? Yes
Has any text been improved or corrected using HI? Yes

Additional context: This post was collaboratively written through an iterative conversation with Claude (Anthropic). The human author provided the direction, constraints, personal context, and decisions about what to include/exclude. The AI assistant drafted text, which was then reviewed and revised based on feedback. Sections were rewritten multiple times to match the author’s voice and intentions. The final editorial decisions, including what content made it to publication, were made by the human author.

ChatGPT as an essay-writing assistant – Part II

The blog post below was produced entirely by GPT-4.0, following a series of iterative prompts I provided, from the Introduction through the References and Footnotes. Please refer to my AI (Artificial Intelligence) and HI (Human Intelligence) Table in the last section. In the next post, I will include the full listing of my prompts, text evaluation, and time investment versus return analysis. I will also attempt prompting for some further improvements beyond this initial analysis.

Plato’s Perspective on the Hundred Years’ War

Introduction

The Hundred Years’ War, a prolonged and devastating conflict between England and France (1337-1453), was a historical event unknown to the ancient Greek philosopher Plato. Nonetheless, by examining his philosophical ideas, primarily in “The Republic” and “Laws,” we may speculate on his potential views regarding such a conflict.

The Philosopher King and the Ideal State

In his magnum opus, “The Republic,” Plato presents the concept of a philosopher-king, an individual embodying wisdom and justice, who governs the ideal state (Plato, Republic, Stephanus 473c-d, Book V)1. This enlightened ruler, possessing both knowledge and moral integrity, would guide society toward harmony and justice.

War as Necessary Evil

In the “Laws,” Plato acknowledges that war might be a necessary recourse, particularly when defending the state against injustice (Plato, Laws, Stephanus 625c, Book I)2. However, he also emphasizes the importance of striving for peace and pursuing diplomacy whenever possible.

Striving for Peace

Plato contends that an ideal state should always seek to cultivate peace with other nations, fostering a spirit of friendship and cooperation (Plato, Republic, Stephanus 470a-b, Book V)3. This suggests that, had he been aware of the Hundred Years’ War, Plato would likely have advocated for diplomatic efforts to resolve the conflict.

The Guardians as Defenders of the State

In “The Republic,” Plato introduces the guardians, a class of citizens specifically trained to defend the state and uphold its values (Plato, Republic, Stephanus 414a-b, Book III)4. These individuals, educated in both martial and intellectual pursuits, would protect the state from external threats, such as those posed during the Hundred Years’ War.

Governance in Times of War

Though Plato acknowledges the potential necessity of war, he cautions against allowing it to dominate the state’s agenda (Plato, Republic, Stephanus 468a-b, Book V)5. Instead, he argues that governance should remain focused on promoting the well-being of citizens and maintaining social harmony.

Fostering Social Harmony and Virtue

Plato maintains that an ideal state should strive to cultivate virtue among its citizens, promoting wisdom, courage, moderation, and justice (Plato, Republic, Stephanus 427e-428a, Book IV)6. In this context, the prolonged conflict of the Hundred Years’ War might be seen as a distraction from the state’s primary goal of fostering a harmonious and virtuous society.

Conclusion

Though Plato lived centuries before the Hundred Years’ War, his philosophical ideas can provide valuable insights into how he might have viewed such a conflict. By examining his writings, particularly “The Republic” and “Laws,” we can speculate that Plato would have recognized the potential necessity of war in certain circumstances but would have strongly advocated for peace, diplomacy, and the pursuit of justice and virtue in governance.

References

Title        | Author | Date Written | Chicago citation
The Republic | Plato  | 380 BCE      | Plato. The Republic. Translated by G.M.A. Grube. Indianapolis: Hackett Publishing Company, 1992.
Laws         | Plato  | 348 BCE      | Plato. Laws. Translated by A.E. Taylor. London: Dent, 1934.

Footnotes

  1. Plato, Republic, 473c-d. Perseus Digital Library | The Internet Encyclopedia of Philosophy 
  2. Plato, Laws, 628a-630d. Perseus Digital Library | The Internet Encyclopedia of Philosophy 
  3. Plato, Republic, 470a-471c. Perseus Digital Library | The Internet Encyclopedia of Philosophy 
  4. Plato, Republic, 433b-c. Perseus Digital Library | The Internet Encyclopedia of Philosophy 
  5. Plato, Republic, 414b-415d. Perseus Digital Library | The Internet Encyclopedia of Philosophy 
  6. Plato, Laws, 348, 663a-b. Perseus Digital Library | The Internet Encyclopedia of Philosophy 

AI (Artificial Intelligence) and HI (Human Intelligence) Statement

Modified from Brewin http://www.theguardian.com/books/2024/apr/04/why-i-wrote-an-ai-transparency-statement-for-my-book-and-think-other-authors-should-too

Has any text been generated using AI? Yes
Has any text been improved or corrected using HI? No
Have any methods of analysis been suggested using HI? Yes
Have any methods of analysis been suggested using AI? No
Do any analyses utilize AI technologies, such as Large Language Models, for tasks like analyzing, summarizing, or retrieving information from data? Yes