Why Wind Power Scales as v³: An Intuition Built from First Principles

and a regulator’s motivation for caring

The Three Methods: A Regulator’s Ladder for Evaluating Energy Claims

Suppose a developer submits a proposal for the Middle Bank area on the Scotian Shelf: 926 turbines, each rated at 15 MW, at 4.2D spacing (where D is the rotor diameter—240 m for this turbine, so 4.2D ≈ 1,000 m between towers), claiming annual energy production (AEP) of 60 TWh. Is that plausible?

You have three increasingly sophisticated ways to check.

METHOD 1 Nameplate (30 seconds, back of envelope)

The simplest possible estimate:

AEP = N × Prated × 8760 hours × CF

Where N is the number of turbines, Prated is each turbine’s maximum output (15 MW here), 8760 is the hours in a year, and CF is the capacity factor—the fraction of rated output the turbine actually produces over a year. For offshore wind, CF typically falls between 0.40 and 0.55.

Check the units: N is dimensionless (a count), Prated is in MW, 8760 is in hours, and CF is dimensionless (a fraction). So AEP comes out in MW × hours = MWh—or equivalently, dividing by 10⁶, TWh. It’s just (number of turbines) × (power per turbine) × (hours per year) × (fraction of time at full output).

For our developer’s claim:

Assumption                      AEP
CF = 0.40 (conservative)        926 × 15 MW × 8760 × 0.40 = 48.7 TWh
CF = 0.50 (typical offshore)    926 × 15 MW × 8760 × 0.50 = 60.8 TWh
CF = 0.55 (optimistic)          926 × 15 MW × 8760 × 0.55 = 66.9 TWh

The developer’s 60 TWh falls in range—right at a typical offshore CF. Not obviously wrong. But this tells you nothing about whether CF = 0.40 or 0.55 is appropriate for this site. The capacity factor is doing all the work, and you borrowed it from industry averages rather than deriving it from the actual wind resource.
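
The nameplate check is simple enough to script. A minimal sketch (the function name and structure are mine, not from any cited tool):

```python
def nameplate_aep_twh(n_turbines, rated_mw, capacity_factor):
    """Nameplate AEP: turbine count × rated power × hours/year × capacity factor."""
    HOURS_PER_YEAR = 8760
    aep_mwh = n_turbines * rated_mw * HOURS_PER_YEAR * capacity_factor
    return aep_mwh / 1e6   # MWh -> TWh

# Bracket the developer's 60 TWh claim with low/typical/high capacity factors
for cf in (0.40, 0.50, 0.55):
    print(f"CF = {cf:.2f}: {nameplate_aep_twh(926, 15, cf):.1f} TWh")
```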

What the nameplate method hides

It treats CF as an input. But CF is an output—it’s determined by the wind speed distribution, the turbine’s power curve, wake interactions, and availability. It’s the answer, not the question.

METHOD 2 Ginsberg Swept Area (5 minutes, needs mean wind speed)

If you know the site’s mean wind speed, you can estimate power from first principles:

Pavailable = ½ × ρ × A × vavg³

Where ρ is air density (~1.225 kg/m³ at sea level), A is the rotor’s swept area (π × D² / 4), and vavg is the mean wind speed at hub height.

The derivation. Consider a cylinder of air passing through the rotor in time t. Its length is v × t, so its volume is A × v × t, and its mass is ρ × A × v × t. The kinetic energy of that air is:

KE = ½ × m × v² = ½ × (ρ × A × v × t) × v² = ½ × ρ × A × v³ × t

Divide both sides by t to get power (energy per unit time):

P = KE / t = ½ × ρ × A × v³

That’s where the v³ comes from: v once from the mass flow rate (how fast air arrives), v² from the kinetic energy per unit mass (how much energy it carries). Ginsberg (2019) walks through this same derivation; the full physical reasoning for why this matters is developed in The Starting Point below.

But there’s a catch. Wind speed varies, and because power scales as v³, the average of the cubes is not the cube of the average. A site with vavg = 9 m/s but gusty conditions produces more energy than a site with a steady 9 m/s, because the high-wind moments contribute disproportionately (v³ is convex).

Ginsberg handles this with the Energy Pattern Factor (EPF)—a multiplier that corrects the mean-cubed estimate for the actual shape of the wind speed distribution:

Mean Power Density = ½ × ρ × EPF × vavg³

For Rayleigh-distributed winds (shape factor k = 2), EPF ≈ 1.91. This corrects for the distribution without requiring the full wind record. Then to get AEP:

AEP = Mean Power Density × A × 8760 × ηturbine × ηavailability

Where ηturbine accounts for the turbine’s conversion efficiency (Cp, the power coefficient—capped at 59.3% by the Betz limit, which is the theoretical maximum any turbine can extract from the wind) and ηavailability for downtime.

This is more physical—you’re deriving CF from the wind resource rather than assuming it. For the Scotian Shelf, with mean winter wind of 9.3 m/s and summer 7.1 m/s at hub height, the swept area method produces a site-specific estimate rather than borrowing a generic CF from global averages.
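
Method 2 also fits in a few lines. A sketch — the Cp and availability values below are assumed round numbers for illustration, not figures from the Ma et al. study:

```python
import math

def swept_area_power_w(d_rotor_m, v_avg_ms, epf=1.91, rho=1.225):
    """Mean power in the wind through one rotor, EPF-corrected for gustiness."""
    area = math.pi * d_rotor_m**2 / 4          # swept area, m^2
    return 0.5 * rho * epf * area * v_avg_ms**3

# Illustrative: a 240 m rotor in the 9.3 m/s winter mean wind
p_wind_mw = swept_area_power_w(240, 9.3) / 1e6     # mean power in the wind, MW
cp, availability = 0.45, 0.95                      # assumed round numbers
p_captured_mw = p_wind_mw * cp * availability
# Note: this exceeds the 15 MW rating, because the swept-area estimate ignores
# the power curve, which clips output at rated power above rated wind speed.
```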

What the swept area method hides

It treats each turbine as if it sees the undisturbed wind. In reality, downstream turbines sit in the wakes of upstream ones. A 926-turbine farm at 4.2D spacing will have interior turbines seeing 70–80% of the freestream velocity. Since power scales as v³, that 20–30% velocity deficit translates to 50–65% power loss for those turbines.

METHOD 3 Wake Modeling (hours to days, needs wind distribution + layout)

This is PyWake territory—PyWake is an open-source wind farm simulation tool (developed by DTU Wind Energy) that models how upstream turbines reduce wind speed for downstream ones. You specify the turbine layout, the wind climatology (direction + speed distribution), and a wake deficit model. The simulation propagates wakes through the farm, computing the actual wind speed each turbine sees, and integrates over all wind conditions to produce AEP.

Here’s where v³ bites hardest. Consider a turbine sitting 5D downstream of another in a 9.3 m/s winter wind. The Bastankhah–Porté-Agel Gaussian (bell-curve-shaped) deficit model, used in Ma et al. (2025), predicts the centerline velocity deficit from two inputs: the wake expansion rate (k* = 0.04, typical for offshore low-turbulence conditions) and the upstream turbine’s thrust coefficient CT, a measure of how hard the rotor pushes back against the wind. For the IEA 15 MW reference turbine—the benchmark design used in the Ma et al. study—CT ≈ 0.78 at 9.3 m/s, which is below rated speed. At 5D downstream, the model gives a 28% velocity deficit.

Your first instinct might be: 28% less wind, 28% less power. But the cubic says otherwise:

  • Freestream turbine sees 9.3 m/s → P ∝ (9.3)³ = 804
  • Wake-affected turbine sees 6.7 m/s → P ∝ (6.7)³ = 301

That’s 63% less power, not 28%. The cubic more than doubles the impact of the velocity deficit. And in a dense 926-turbine farm, most interior turbines are wake-affected.
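
Both numbers can be reproduced from the model’s centerline formula. A sketch of my reading of the 2014 Gaussian wake model (not PyWake's implementation):

```python
import math

def bpa_centerline_deficit(x_over_d, ct, k_star):
    """Fractional centerline velocity deficit of the Gaussian wake model
    (Bastankhah & Porté-Agel 2014) at x_over_d rotor diameters downstream."""
    beta = 0.5 * (1 + math.sqrt(1 - ct)) / math.sqrt(1 - ct)
    sigma_over_d = k_star * x_over_d + 0.2 * math.sqrt(beta)   # wake width
    return 1 - math.sqrt(1 - ct / (8 * sigma_over_d**2))

deficit = bpa_centerline_deficit(x_over_d=5, ct=0.78, k_star=0.04)   # ~0.28
v_free = 9.3
v_wake = v_free * (1 - deficit)                # ~6.7 m/s behind the rotor
power_loss = 1 - (v_wake / v_free)**3          # ~0.63: the cube amplifies it
```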

Wake losses for the Scotian Shelf scenarios range from 19% (sparse layout, winter) to 46% (dense layout, summer), according to the Ma et al. (2025) simulations. For Middle Bank specifically, the losses are 22% in winter and 41% in summer. At the high end, nearly half the energy you’d expect from nameplate calculations never materializes—a correction too large for any regulator to wave through on trust.

This is what PyWake computes: the v³-amplified impact of every upstream turbine on every downstream one, integrated over all wind directions and speeds across the full year.

The Ladder

Method      Input                                                What it captures                   What it misses
Nameplate   N, Prated, assumed CF                                Quick plausibility check           Everything about the site
Ginsberg    vavg, A, EPF                                         Wind resource physics, v³          Wake interactions, layout effects
PyWake      v(t,θ) (speed × direction), layout, turbine curves   Wake losses, spacing trade-offs    (This is the target capability)

Each method reveals a limitation that motivates the next. And the single thread connecting all three is why power scales as v³—because understanding the cubic relationship tells you why the nameplate method hides so much, why the EPF correction exists, and why wake-induced velocity deficits are so devastating.

That’s what this document builds.


The Starting Point

The power available in wind passing through a turbine’s swept area is:

P = ½ × ρ × A × v³

Where:

  • ρ = air density (kg/m³)

  • A = swept area (m²)

  • v = wind velocity (m/s)

The formula is easy to derive—v appears in the mass flow rate (ρ × A × v) and v² appears in kinetic energy (½ × m × v²), so power scales as v³. The math is straightforward.

What’s less obvious is why we work with power at all. Why not go directly from energy density (½ × ρ × v²) to annual energy production? Why the detour through instantaneous power?

This document develops an intuition for that question.


The Cylinder Mental Model

Imagine standing at a wind turbine and watching air flow through the rotor over an entire year. You could visualize this as an impossibly long cylinder:

  • Cross-section = the swept area (π × D² / 4)

  • Length = the total distance air has traveled past the rotor over the year

    If the wind blew at a constant 10 m/s for a year, your cylinder would be about 315 million meters long (10 m/s × 31.5 million seconds).

To find the total energy, you might try:

Energy = (energy density) × (volume)

The energy density of moving air is ½ × ρ × v² (joules per cubic meter). The volume is A × L, where L is the cylinder length. Multiply and done?

Not quite. Here’s where it gets awkward.


The Awkwardness: A Cylinder That Won’t Cooperate

The wind doesn’t blow at a constant speed. Your cylinder is made of “slices”—some added during high-wind moments, some during calm. Each slice has its own energy density depending on what v was when that slice passed through.

You might still try to salvage the simple approach:

Energy = (average energy density) × (total volume)

But you can’t cleanly separate these terms.

When v is high:

  • The cylinder extends faster (more meters of air arriving per second)

  • Those slices are energy-rich (½ × ρ × v² is large)


When v is low:

  • The cylinder extends slowly

  • Those slices are energy-poor

The high-v slices are both thicker (more length added per unit time) and richer (more joules per cubic meter). The low-v slices are both thinner and poorer.

This coupling wrecks any attempt at simple averaging. If you average energy density across time, you underweight the thick, juicy slices. If you try to average across volume, you need v for both terms—energy density (½ × ρ × v²) AND slice thickness (v × dt). Both depend on v, and v is different for every slice. You’re back to needing the full wind record anyway.

Total energy ≟ ½ρ · v̄² · A · v̄ · t = ½ρA · v̄³ · t    ← WRONG

Why wrong? Because the cube amplifies differences. A gust at 12 m/s contributes (12)³ = 1,728 to the energy integral, while a lull at 6 m/s contributes only (6)³ = 216. The gust is worth 8× the lull, not 2×. Averaging the wind speed before cubing it buries this asymmetry.

Energy = Σ  ½ρA · v(t)³ · Δt    ← sum over each hour

The regulator’s takeaway

When a developer reports “mean wind speed 9.3 m/s,” that single number is not enough to evaluate their AEP claim. Two sites with identical means but different variability will produce different amounts of energy—and the gustier site wins, thanks to the v³ amplification.
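
A two-site toy calculation makes this concrete (the wind records are invented for illustration):

```python
# Two synthetic one-year hourly wind records with the same 9 m/s mean
steady = [9.0] * 8760          # constant 9 m/s every hour
gusty = [6.0, 12.0] * 4380     # alternates lull and gust; mean is still 9 m/s

def mean_cubed(speeds):
    """Average of v^3 — the quantity energy actually depends on."""
    return sum(v**3 for v in speeds) / len(speeds)

assert sum(steady) / len(steady) == sum(gusty) / len(gusty) == 9.0
ratio = mean_cubed(gusty) / mean_cubed(steady)   # 972 / 729 ≈ 1.33
# Same mean wind speed, but the gusty site delivers ~33% more energy.
```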

A Geophysics Parallel: Degrees of Entanglement

To see why this is so stubborn, consider a spectrum of cases from reservoir geophysics:

Core data (you can measure each property independently):

In a layered reservoir, each bed has a permeability (k) and a thickness (h). From core samples, you measure them separately—ruler for thickness, core plug for permeability. A thick layer can have low permeability; a thin layer can have high permeability. They’re independent. Averaging works (arithmetic, harmonic, or geometric depending on flow geometry).

Seismic inversion (the properties are independent, but the measurement tangles them):

Now try to estimate k and h from seismic reflection data. You don’t see them separately anymore. The seismic response convolves them—a thick low-k layer might look like a thin high-k layer. They’re physically independent, but entangled in the measurement. You can try to untangle them, but it’s hard.

Wind (the two properties are the same variable):

Energy density is ½ × ρ × v². Slice thickness is v × dt. Both ARE v. There’s no underlying separation to recover. It’s not that the measurement convolves them—they’re the same variable wearing two hats.

Case                 Property vs. Weight                              Separable?
Core data            k and h independent, measured separately         Yes
Seismic inversion    k and h independent, convolved in measurement    Hard
Wind                 ½ρv² and v×dt are both v                         Impossible—nothing to untangle

Wind sits at the extreme end: the entanglement isn’t observational, it’s definitional.


The Root Cause: The Carrier IS the Cargo

Most energy delivery systems have a carrier and a cargo that are independent.

The Truck and Coal Analogy

Imagine you’re receiving coal deliveries by truck. Two things determine how much energy arrives per hour:

  1. How fast the trucks arrive (delivery rate)

  2. How much energy is in each truckload (energy content)

These are independent. You could:

  • Speed up the trucks without changing the coal quality

  • Switch to higher-grade coal without changing the delivery schedule

  • Double one while halving the other

The truck’s velocity has nothing to do with the coal’s BTU content. Two separate knobs, two separate decisions.

Concrete examples of this independence:

  • Slow trucks, high-grade coal: One delivery per week, but it’s anthracite. Few arrivals, lots of BTUs per ton.

  • Fast trucks, low-grade coal: Ten deliveries per day, but it’s lignite. Frequent arrivals, few BTUs per ton.

Both are perfectly coherent. You could even tune them to deliver the same total energy per month. The truck schedule and the coal grade are set by different people making different decisions—the dispatcher and the mine, say.

This independence is typical of energy delivery systems:

System          Carrier                        Cargo
Coal truck      Truck (speed adjustable)       Coal (energy content independent of truck speed)
Power line      Wire (current adjustable)      Electrons (voltage adjustable independently)
Gas pipeline    Pipe flow (rate adjustable)    Gas (BTU content independent of flow rate)

You can speed up delivery without changing what’s being delivered. Two knobs.

Wind Breaks This Independence

Wind is different. There are no trucks. The air’s motion delivers it to you, and the air’s motion is the energy. There is no “air truck” bringing “energy cargo.” The velocity that transports air to your rotor is the same velocity that determines how much kinetic energy that air contains.

Think about what would need to be true for wind to behave like coal trucks: you’d need slow-moving air that somehow contained lots of kinetic energy, or fast-moving air with little energy. That’s a contradiction. The air’s kinetic energy is ½ × m × v², where v is the same velocity that’s bringing it to you.

The impossible wind analogues would be:

  • Slow breeze carrying “anthracite air” (high energy density)

  • Fast wind carrying “lignite air” (low energy density)

These don’t exist. There’s no mine selecting the air’s energy grade independently of the velocity that delivers it. The energy grade is v². The dispatcher and the mine are the same person, turning the same knob.

Coal trucks have two degrees of freedom. Wind has one.

One phenomenon, two consequences. One knob.

A Bridge Analogy: The Bullet Conveyor Belt

Imagine a conveyor belt covered with bullets, all pointing at a target. The bullets are arranged in rows across the belt. When they reach the end, they fly off and hit the target.

You have two ways to increase the damage:

Add more bullets per row (wider rows):

Each meter of belt carries more bullets. More bullets hit the target per second. But each bullet hits just as hard as before. Double the bullets per row, double the damage. Simple.

Speed up the belt:

Here’s where it gets strange. Speeding up the belt does two things at once:

  • Bullets arrive faster (more hits per second)

  • Each bullet is moving faster when it flies off, so it hits harder (damage per bullet goes up)

You can’t get one without the other. There’s no way to make bullets arrive faster while keeping them gentle, or make them hit harder while keeping arrivals slow. One dial, two consequences.

That’s wind.

Air density and rotor size are like bullets per row—you can adjust them separately. But wind speed is like belt speed. When v goes up:

  • More air arrives per second (delivery rate, proportional to v)

  • Each parcel of air carries more punch (energy density, proportional to v²)

Multiply them together: v × v² = v³.

The belt speed controls both how often bullets arrive and how hard they hit. Wind speed controls both how fast air arrives and how much energy it carries. One knob. Two consequences. That’s where the cubic comes from.

This is why v appears twice in the power equation:

  • Delivery rate (volume flow): A × v

  • Energy content (energy density): ½ × ρ × v²

Multiply them: ½ × ρ × A × v³

The v² and the v aren’t two separate variables that happen to move together. They’re two aspects of a single physical reality — one velocity, showing up twice in the equation for two different physical reasons. You cannot crank up the delivery rate while holding energy content fixed. The air delivers itself.


The Firehose Intuition

You’re standing in front of a firehose. Someone doubles the water velocity.

You don’t get hit by faster water AND more water as if those were two separate decisions. There’s one dial: velocity. Turning it up necessarily does both:

  • Each drop hits harder (v²)—because it’s moving faster

  • More drops arrive per second (v)—because they’re moving faster

Same cause, two consequences.

Total punishment from doubling: 4× per drop × 2× the drops = 8×

That’s the v³. Not two correlated effects, but one effect with two faces.


Why Integration Solves the Problem

Given the coupling, how do we actually calculate annual energy production?

Integration refuses to average.

Instead of trying to summarize the whole year with bulk quantities, integration says:

“Fine. I’ll go moment by moment. At this instant, v = 7 m/s. What’s the power? Good. Now the next instant, v = 7.2 m/s. What’s the power? Good. Next…”

At each infinitesimal moment, v is just one number. The coupling is trivially resolved—the same v goes into both the “how fast is the cylinder growing” calculation and the “how rich is this slice” calculation.

Power right now = ½ × ρ × A × v³ right now

No averaging. No untangling. Just one v, doing its two jobs, at this instant.

Then add up all the instants:

Energy = integral of P dt = integral of ½ × ρ × A × v³ dt

The Insight

Integration doesn’t untangle the coupling. It shrinks to a scale where the coupling doesn’t matter—because at an instant, there’s nothing to correlate. There’s just one v, with its two consequences, right now.

The sum of countless “right nows” is your answer.


When Would Averaging Work? A Thought Experiment

To sharpen the intuition, ask: what would need to be true for simple averaging to work?

The Bubble Cylinder

Return to the cylinder mental model, but change one thing. Imagine the cylinder always advances at constant speed—say, 10 m/s, all year. The energy isn’t carried by the air’s motion anymore. Instead, imagine energy as “bubbles” suspended in the air, and what varies moment to moment is the bubble density.

Now you can average:

Energy = (average bubble density) × (fixed volume)

The cylinder grows at a constant rate. Some hours have dense bubbles, some have sparse bubbles, but each hour contributes the same thickness of cylinder. The two terms—total volume and average energy density—are decoupled. Multiply at the end, done.

This is mathematically identical to the coal truck. The carrier (cylinder advancing at constant speed) is independent of the cargo (bubble density). Two knobs.

A Physical Example: Hot Water in a Pipe

What’s a real system with varying carrier speed but constant cargo density?

A pipe delivering hot water. The pump speed varies—sometimes fast, sometimes slow. But the thermal energy per liter is set by the water temperature, say 60 °C. That’s independent of flow rate.

  • Flow fast → more liters per second, each at 60 °C

  • Flow slow → fewer liters per second, each still at 60 °C

The energy density (joules per liter, set by temperature) is decoupled from the delivery rate (liters per second, set by pump speed). Two knobs.

You can work with averages:

Energy delivered = (energy per liter) × (total liters delivered)

Or: (constant energy density) × (average flow rate) × (time)

The varying pump speed affects how much volume arrives, but each parcel’s richness is the same regardless of how fast it traveled.

Why Wind Doesn’t Give You This Escape

For wind to behave like hot water, you’d need the air to carry something whose concentration doesn’t depend on wind speed—say, a constant pollen count per cubic meter. Wind speed varies, but pollen density stays fixed. Now the cylinder’s “cargo” is independent of how fast it’s growing. Average pollen density, multiply by total volume, done.

But wind’s kinetic energy doesn’t work this way. The “temperature” of the air—its energy density, ½ × ρ × v²—is its velocity. There’s no separate thermostat. The air’s motion is both the carrier and the cargo.

This is why integration isn’t optional. The coupling between delivery rate and energy content is fundamental to what kinetic energy is. You can’t engineer around it. You can only shrink to instants where there’s nothing to decouple.


Two Paths to the Integral: Measurement vs. Prediction

The integration solution demands that we know v at each instant. In practice, there are two ways to get this:

Path 1: Measure the Wind Record Directly

Deploy instruments and record v(t) over time. For offshore wind, this typically means floating LIDAR (Flidar)—a buoy-mounted remote sensing system that measures wind speed at hub height. A 1-3 year measurement campaign gives you a detailed wind speed record.

With this record, you can:

  • Bin the data by wind speed (how many hours at 4 m/s, 5 m/s, 6 m/s…)

  • Calculate power for each bin

  • Sum to get annual energy production

This is the integral computed directly from measurements.

Path 2: Predict from a Probability Distribution

The Ladder’s Method 2 already used the EPF shortcut. Here we see where it comes from — why the correction factor exists at all. What if you only have the average wind speed at a site? You might know v_avg = 9 m/s from regional data or a short measurement campaign, but not the full distribution.

Here’s the problem: you can’t just compute P = ½ × ρ × A × (v_avg)³.

Because of the v³ nonlinearity, mean(v³) ≠ mean(v)³ — the average of the cubes always exceeds the cube of the average.

The solution: assume a probability distribution for wind speeds. The most common choice is the Rayleigh distribution (a special case of Weibull with shape parameter k=2), which fits many sites reasonably well.

For a Rayleigh distribution, the ratio mean(v³) / mean(v)³ works out to approximately 1.91. This is the Energy Pattern Factor (EPF)—the same EPF we used in the Ladder’s Method 2, now derived from the distribution.
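
The 1.91 value can be checked numerically by sampling a Rayleigh distribution — a quick Monte Carlo sketch (the exact value is 6/π ≈ 1.910):

```python
import math
import random

random.seed(0)
sigma = 5.0   # Rayleigh scale parameter; the EPF is independent of this choice

# Inverse-transform sampling: if U ~ Uniform(0, 1], then
# v = sigma * sqrt(-2 ln U) is Rayleigh-distributed.
speeds = [sigma * math.sqrt(-2 * math.log(1 - random.random()))
          for _ in range(200_000)]

mean_v = sum(speeds) / len(speeds)
mean_v3 = sum(v**3 for v in speeds) / len(speeds)
epf = mean_v3 / mean_v**3   # ≈ 6/pi ≈ 1.91
```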

The tradeoff:

  • Flidar measurement → accurate, site-specific, expensive, time-consuming

  • EPF prediction → quick, cheap, approximate, assumes Rayleigh distribution holds

For preliminary screening (“Is this site worth investigating?”), the EPF approach is often sufficient. For detailed project assessment and financing, you need the full wind speed distribution — either from a measurement campaign or from validated reanalysis data. The next section shows how that distribution is used.


From Power to Annual Energy Production

In practice, this integral is evaluated using wind speed statistics:

  1. Measure (or model) the distribution of wind speeds at a site—how many hours per year at 4 m/s, at 5 m/s, at 6 m/s, etc.

  2. For each wind speed bin, calculate power using P = ½ × Cp × ρ × A × v³ (where Cp is the turbine’s efficiency, limited by the Betz limit of 59.3%)

  3. Multiply each power by the hours at that wind speed

  4. Sum across all bins

The result is Annual Energy Production (AEP), typically in MWh or GWh per year.

This is the integral in discrete form: breaking the year into bins where v is approximately constant, computing power for each bin, multiplying by time, summing.
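
The four steps above can be sketched directly. The bin counts and simplified power curve below are invented for illustration; a real assessment uses the site’s measured distribution and the manufacturer’s power curve:

```python
import math

RHO, DIAMETER, CP, RATED_W = 1.225, 240.0, 0.45, 15e6   # assumed round numbers
AREA = math.pi * DIAMETER**2 / 4

def turbine_power_w(v):
    """Toy power curve: cubic below rated, clipped at rated output,
    zero below cut-in (3 m/s) and above cut-out (25 m/s)."""
    if v < 3 or v > 25:
        return 0.0
    return min(0.5 * RHO * CP * AREA * v**3, RATED_W)

# Step 1: hours per year in each wind-speed bin (invented numbers)
hours_by_speed = {4: 900, 6: 1400, 8: 1700, 10: 1600, 12: 1200, 14: 800, 16: 400}

# Steps 2-4: power per bin × hours at that speed, summed, reported in GWh/year
aep_gwh = sum(turbine_power_w(v) * h for v, h in hours_by_speed.items()) / 1e9
```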


The Scaling Relationships (Summary)

Change            Power scales as    Doubling gives you
Wind speed        v³                 8x power
Rotor diameter    D²                 4x power
Swept area        A                  2x power

Why These Matter

The v³ dominates everything. A mediocre turbine at a windy site beats an excellent turbine at a calm site.

Error propagation is brutal. A 10% error in wind speed estimates becomes a ~33% error in power predictions (1.1³ ~ 1.33). This is why wind resource assessment demands years of careful measurement.
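
That propagation is a one-line consequence of the cube (and it is asymmetric):

```python
def power_error(speed_error):
    """Relative power error implied by a relative wind-speed error, via v^3."""
    return (1 + speed_error) ** 3 - 1

overestimate = power_error(+0.10)    # +10% speed -> +33.1% power
underestimate = power_error(-0.10)   # -10% speed -> -27.1% power
```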

Power vs. Energy: Power (watts) is the instantaneous rate—what the physics gives you. Energy (watt-hours) is the accumulated total—what you sell. The bridge between them is integration over time.


The Swept Area Method: The Engineer’s Lever

So v³ dominates the physics. Why do wind energy textbooks make such a fuss about the “swept area method”?

Because you can’t control the wind. You can control the rotor.

The Knobs You Actually Have

When designing or selecting a turbine, you don’t get to dial up v. The wind is what it is at your site. What you can choose is rotor diameter—and through it, swept area.

This makes the D² relationship the engineer’s primary lever:

Rotor diameter    Swept area     Relative power
50 m              ~2,000 m²      1x
100 m             ~7,900 m²      4x
150 m             ~17,700 m²     9x
200 m             ~31,400 m²     16x

Going from a 50m rotor to a 200m rotor—a 4x increase in diameter—gives you 16x the power. That’s a big deal.

Why Turbines Keep Getting Bigger

In the 1980s, rotor diameter was about 15 meters. Today’s largest offshore rotors exceed 230 meters. That’s roughly a 15x increase in diameter, which means:

  • (15)² ~ 225x more swept area

  • 225x more power per turbine (at the same wind speed)

This is why the industry relentlessly pursues larger rotors despite the engineering challenges. The scaling reward is enormous—even though it’s “only” quadratic.

The Terminology Trap

Ginsberg (2019) writes:

“Power increases exponentially with swept area”

This is wrong — the relationship is quadratic, not exponential. But the impulse is understandable: Ginsberg is trying to emphasize that doubling the diameter does far more than double the output.

Better ways to convey the same idea:

  • “Power scales with the square of rotor diameter—double the diameter, quadruple the output”

  • “Going from an 80m to a 160m rotor doesn’t double production—it quadruples it”

  • “The swept area method matters because area is the one variable you actually control”

  • “Larger rotors capture dramatically more energy” (vague but not wrong)

What to avoid:

  • “Exponential” (mathematically incorrect—different growth class entirely)

  • “Increases rapidly” without quantifying (invites misinterpretation)

The Full Picture

The v³ relationship tells you what physics allows. The D² relationship tells you what engineering can capture. Together:

P = ½ × ρ × A × v³ = ½ × ρ × (π × D² / 4) × v³

You can’t change ρ (air density is what it is). You can’t change v (the wind blows as it will). You can change D—and every doubling of diameter buys you a factor of four.

That’s why swept area deserves its own “method” in the textbooks. Not because the scaling is exponential—it isn’t. But because it’s the lever you actually get to pull.


Terminology Note

These relationships are:

  • Linear in area (P ~ A)

  • Quadratic in diameter (P ~ D²)

  • Cubic in velocity (P ~ v³)

None of them are exponential. True exponential growth (P ~ eˣ or P ~ 2ˣ) means the exponent contains the variable. These are polynomial relationships—the variable is in the base, not the exponent.

The distinction matters: exponential functions eventually outgrow any polynomial. Saying “exponential” when you mean “cubic” or “quadratic” isn’t just imprecise—it’s a different class of mathematical behavior.


Key Takeaways

  1. Wind power scales as v³ because velocity does double duty: it determines both how fast air arrives and how much energy that air contains.

  2. The carrier is the cargo. Unlike most energy systems, you can’t decouple delivery rate from energy content. One knob, two consequences.

  3. The cylinder model helps visualize annual energy as a long tube of variable-density air—but the coupling between slice thickness and slice richness prevents simple averaging.

  4. Integration solves this by shrinking to moments where there’s only one v, then summing. It doesn’t untangle the coupling; it sidesteps it.

  5. Power is the physics; energy is the economics. The cubic relationship governs instantaneous extraction. Integration over real wind distributions gives you what the turbine actually produces—and what investors actually care about.

  6. The methods ladder follows from v³. The nameplate method hides the cubic sensitivity inside an assumed capacity factor. The Ginsberg method exposes it through the EPF correction. Wake modeling confronts it directly: a 25% velocity deficit in a wake means (0.75)³ = 42% of undisturbed power. Each method up the ladder gives you more honest engagement with the cubic.



Closing the Loop: Why This Path?

A natural question: why do we go through energy density and power at all? Why not calculate energy directly?

Here’s the logic chain:

Step 1: Energy Density is the Fundamental Physics

The kinetic energy per cubic meter of moving air is:

Energy density = ½ × ρ × v²

This is bedrock—it falls straight out of KE = ½ × m × v².

Step 2: But Energy Density Alone is Stuck

You might want to say:

Total energy = (energy density) × (volume)

But what volume? The air isn’t sitting still. It’s a flow, not a parcel. And worse: when v changes, the energy density changes AND the rate at which volume passes through changes. The carrier-is-the-cargo coupling makes any direct calculation treacherous.

Step 3: Multiply by Flow Rate to Get Power

Introduce the volume flow rate (A × v) and multiply:

Power = (energy density) × (volume flow rate) = ½ × ρ × v² × A × v = ½ × ρ × A × v³

Power is the natural quantity for a continuous flow. It answers: “Right now, at this instant, how much energy per second is passing through?”

Step 4: Power Lets You Work Instant by Instant

This is the key move. At each instant, v is just one number. The coupling that wrecked the cylinder averaging is trivially resolved—there’s nothing to correlate. One v, doing its two jobs (setting energy density AND delivery rate), right now.

No averaging required. No untangling. Just: what’s v? Compute power. Done.

Step 5: Integrate Power Over Time to Get Energy

Sum up the instants:

Energy = integral of P dt = integral of ½ × ρ × A × v³ dt

Each moment contributes its power × its duration. The integral handles the fact that v changes from moment to moment. The result is total energy—MWh, GWh, what you actually sell.

The Path

Energy density (½ × ρ × v²)
|
v
× flow rate (A × v)
|
v
Power (½ × ρ × A × v³) <-- work instant by instant here
|
v
× time (integrate)
|
v
Energy (MWh, GWh/year)

We don’t go through power because it’s convenient. We go through power because it’s the only clean waypoint when the carrier is the cargo and v won’t hold still.

This is exactly what PyWake does at industrial scale: for each turbine in a 926-unit farm, at each hourly wind condition, it computes the local wind speed (accounting for upstream wakes), evaluates v³, and sums the result. The physics in this document is the physics inside that software.


References

Bastankhah, M. and Porté-Agel, F. (2014). A new analytical model for wind-turbine wakes. Renewable Energy, 70, 116–123. doi:10.1016/j.renene.2014.01.002

Gaertner, E., Rinker, J., Sethuraman, L., Zahle, F., Anderson, B., Barter, G., Abbas, N., Meng, F., Bortolotti, P., Skrzypinski, W., Scott, G., Feil, R., Ber, H., Dykes, K., Shields, M., Allen, C., and Viselli, A. (2020). Definition of the IEA 15-Megawatt Offshore Reference Wind Turbine. NREL/TP-5000-75698.

Ginsberg, M. (2019). Harness It: Renewable Energy Technologies and Project Development Models Transforming the Grid. Business Expert Press. ISBN: 978-1-63157-931-8.

Ma, Y., Zhai, L., Nickerson, E. C., Bhatt, U. S., Bhatt, M. P., and Lin, H. (2025). Wind data assessment and energy estimation on the Scotian Shelf. Wind Energy Science, 10, 2965–2999. doi:10.5194/wes-10-2965-2025

Pedersen, M. M., van der Laan, P., Friis-Møller, M., Rinker, J., and Réthoré, P.-E. (2019). DTUWindEnergy/PyWake. Zenodo. doi:10.5281/zenodo.2562662

Teaching an AI to Reason Like Fermi: Part 2 – Permission to Guess

Featured

Previously, on this blog

In Part 1 of this series, I told you the story: how I tried to teach AI to do Fermi estimation, hit a wall with GPT-3’s arithmetic, got past it with the Wolfram plugin, and discovered the real problem wasn’t calculation — it was judgment. I showed you the results: a framework that lifts scores modestly (7.9 to 8.5 out of 10) but transforms the reasoning from opaque assertions into auditable, bounded estimates.

I promised to show you the framework itself. Here it is.

The architecture

The framework has two layers. The first handles mechanics — how to do the math. The second handles judgment — how to think about what you know and don’t know. The mechanical layer matters less than you’d expect. The judgment layer matters more.

The mechanical laws (LAW0–LAW11)

These twelve laws govern arithmetic, notation, and reporting. They exist because early LLMs were unreliable at basic computation, and even modern models benefit from explicit procedural scaffolding.

LAW0 sets a five-step workflow adapted from Chapter 2 of Weinstein’s Guesstimation (“Dealing with Large Numbers”): interpret the problem, identify known facts, make assumptions, calculate explicitly, and report results. This is the backbone — every problem follows this sequence.

LAW1–LAW7 handle scientific notation: how to multiply (add exponents), divide (subtract exponents), take square roots (adjust odd exponents), and compute geometric means. These feel pedantic, but they ensure consistent notation and prevent cascading errors in multi-step calculations. They were drafted partly in response to GPT-4’s arithmetic failures — in one documented session, it multiplied 836 × 1923 eight times and got a different wrong answer on seven of them. As I noted in Part 1, modern models handle arithmetic reliably; the judgment problem is what persists.

LAW8–LAW9 handle the practical side of units: do the math in whatever units are convenient (inches, miles, eV), but convert to SI for the final answer; and keep exponents positive by choosing appropriate unit scales. LAW10 says “read the worked examples first” — learning from demonstrations before attempting problems. LAW11 is prosaic but necessary: “don’t waste tokens rephrasing my question back to me.”

If you’re an experienced estimator, you can skim these. They’re scaffolding for the model, not insight for you. The estimation laws are where it gets interesting.

The estimation laws: teaching judgment

These seven laws address the hard part — knowing when to guess, how to bound, and when to stop and ask for help.

LAW-INTERPRET says: before doing anything, state what you think the problem is asking. If it’s ambiguous, ask. This was added after a specific failure I’ll show you shortly.

LAW-ESTIMATE is the core: for every quantity you use, you must do one of three things. State it as known — a fact any technically literate person would accept (speed of light, human height, Earth’s population). Bound it with reasoning — construct explicit upper and lower limits from physical or logical arguments, then take the geometric mean. Or ask — if you can’t do either, stop and request human input. “I’ll estimate X as Y” without justification is explicitly forbidden.

LAW-BOUNDS governs the quality of those bounds. They should span 1–3 orders of magnitude. They should be physically motivated, not arbitrary. And — this is one of my favorite additions — behavioral observations make excellent bounds. “A gecko can walk on a ceiling but cannot run on one, so adhesive force is between 1× and 10× body weight.”
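Mechanically, LAW-ESTIMATE's "bound" branch and LAW-BOUNDS reduce to a few lines once the physical argument is made. A sketch (the gecko bounds are the framework's own example; the order-of-magnitude check is my reading of LAW-BOUNDS, not code from the actual framework):

```python
import math

def bound_estimate(lower, upper):
    """Geometric mean of physically argued bounds (LAW-ESTIMATE / LAW-BOUNDS)."""
    if upper / lower > 1000:  # LAW-BOUNDS: bounds should span 1-3 orders of magnitude
        raise ValueError("bounds span more than 3 orders of magnitude; decompose further")
    return math.sqrt(lower * upper)

# "A gecko can walk on a ceiling but cannot run on one" ->
# adhesive force between 1x and 10x body weight
adhesion_multiple = bound_estimate(1, 10)   # ~3.2x body weight
```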

LAW-PROCEED gives permission to be imprecise: a factor-of-ten estimate beats no estimate. This is straight from Weinstein: “Dare to be imprecise!” It counteracts the model’s tendency to hedge or refuse when it doesn’t have exact data.

LAW-DECOMPOSE is the problem-solving engine: if stuck, ask “what would I need to know to answer this?” Each answer becomes a sub-problem. Then classify each sub-quantity as KNOWN, BOUND, or ASK before proceeding.

LAW-VALIDATE and LAW-REPORT handle the endgame: cross-check using independent approaches, express answers in SI units with scientific notation, and always compare to something meaningful.

The whole thing flows as a six-step cycle: INTERPRET → DECOMPOSE → ESTIMATE → CALCULATE → VALIDATE → REPORT.

What good Fermi reasoning looks like

Before showing you how the model performs, let me show you what we’re aiming for. Here’s one of Weinstein’s worked examples — a problem that demonstrates nearly every technique the framework encodes.

The setup: Weinstein is at a Norfolk Tides baseball game when a foul ball showers his friends with beer. The question — what is the probability of a foul ball landing in a cup of beer during one game, and how many “splash downs” happen across a full MLB season? He starts by estimating about forty foul balls per game land in the stands (more than one per inning, fewer than twenty — geometric mean of five, times nine innings). Then he decomposes the per-ball probability:*

Now we need to estimate the probability that a given foul ball will land directly in a cup of beer. (Note: only beer is sold in open-topped cups.) This means that we need to break the problem into even smaller pieces. Let’s assume that the cup of beer is sitting innocently in a cup holder. To hit a cup of beer, the foul ball needs to:

1. not be caught by a fan
2. land within the area of a seat
3. hit a seat whose owner has a cup of beer
4. land in the cup

Most fly balls are caught, but many are not. Let’s estimate that between one-quarter and one-half of fly balls are not caught. “Averaging” the two, we will use one-third.

Most of the stadium area is used for seating, so let’s ignore that factor.

At any given time, more than 1% and less than 100% of fans have a cup of beer in front of them. Using the geometric mean, we estimate that 10% of seats have beer cups.

A large beer cup is 4 inches (10 cm) across, so the baseball must land in an area defined by

Acup = πr² = 3(2 in)² = 10 in²

The area of the seat (from arm rest to arm rest and from row to row) is about 2 ft by 3 ft (60 cm by 90 cm), so

Aseat = (24 in) × (36 in) = 10³ in²

Thus, if the ball hits a seat that has a cup of beer, the probability that it lands in the cup is

Pcup = Acup/Aseat = 10 in²/10³ in² = 10⁻²

or 1%. The metric probability is the same.

(Extra credit question: Which is more likely, that the ball lands in the cup in the cup holder, splashing the beer, or that the fan is holding the cup of beer when the foul ball arrives and splashes it in his or her excitement?)

This means that the probability that any single foul ball lands in a cup of beer is

P = (1/3) × (1/10) × (10⁻²) = 3 × 10⁻⁴

With forty foul balls per game, this means that the probability of a foul landing in a cup of beer during any one game is 10⁻². This is not very likely. The probability that we will be directly below the splash is even less likely.

He then scales to the full MLB season — 30 teams, 160 games each, about 2,000 games total — and gets about 20 beer landings per season. His closing observation: because baseball analysts keep meticulous statistics, he’s very surprised they don’t appear to record beer landings.

Notice what happened. An impossible-sounding question became four independent sub-conditions, each bounded with reasoning anyone can follow. The geometric mean of 1% and 100% gives 10% — you don’t need to survey stadiums. The area ratio gives 1% — you don’t need probability theory. And the answer emerges from combining many small, defensible estimates. No single number needs to be precise. The decomposition does the work.
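The whole chain fits in a few lines of arithmetic, which is part of the point: the work is in the bounds, not the math. These are Weinstein's numbers from the excerpt above:

```python
import math

def gmean(lo, hi):
    """Geometric mean of a lower and upper bound."""
    return math.sqrt(lo * hi)

fouls_in_stands = gmean(1, 20) * 9      # per-inning bound x 9 innings, ~40
p_not_caught = 1 / 3                    # between 1/4 and 1/2, "averaged"
p_seat_has_beer = gmean(0.01, 1.0)      # 1%-100% of fans -> 10%
p_lands_in_cup = 10 / 1000              # cup area / seat area = 1%

p_per_ball = p_not_caught * p_seat_has_beer * p_lands_in_cup  # ~3e-4
p_per_game = fouls_in_stands * p_per_ball                     # ~1e-2
splashes_per_season = p_per_game * 2000  # ~27 unrounded; Weinstein's rounding gives ~20
```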

This is what we’re teaching the model to do.

The worked examples: teaching by demonstration

The framework includes 14 worked examples from Weinstein’s Guesstimation books. Each demonstrates specific reasoning patterns the laws encode. Here are two that show how different techniques work in practice.

Bounding by physical comparison

In Spider-Man 2, Spider-Man stops a runaway subway train by attaching his webs to nearby buildings and pulling for ten or twenty city blocks. Weinstein asks: how much force does he need? In the book, this follows a crash test dummy problem using the same work-energy approach — Weinstein builds technique through repetition with variation. Here’s how he tackles it:

In the movie Spider-Man 2, Spider-Man stops a runaway New York City six-car subway train by attaching his webs to nearby buildings and pulling really hard for 10 or 20 city blocks. How much force does he have to exert to stop the subway train? Give your answer in newtons and in tons (1 ton = 10⁴ N). How does this compare to the force that you can exert?

Since the work done by Spider-Man to stop the train is equal to the train’s initial kinetic energy, we need to estimate the mass and velocity of the train. We will then need to estimate the stopping distance in order to calculate the force exerted.

A subway car is about the same size and weight as a semi-trailer (18-wheeler) truck. This is between 10 and 40 tons. We’ll use 20 tons (or 2 × 10⁴ kg). There are six cars on a train so that the mass of the train is 6 × 2 × 10⁴ kg = 10⁵ kg.

They certainly go faster than 20 mph and slower than 100 mph. Since it is not that far between subway stops, subways travel at only about 40 mph (20 m/s).

Thus, the kinetic energy of a subway train is:

KE = ½mv² = 0.5 × 10⁵ kg × (20 m/s)² = 2 × 10⁷ J

There are 20 blocks per mile in Manhattan. Thus, 10 or 20 blocks is about 1 km or 10³ m. (It’s certainly more than 100 m and less than 10 km.)

Thus, Spider-Man needs to exert a force:

F = KE/d = (2 × 10⁷ J)/(10³ m) = 2 × 10⁴ N

F = (2 × 10⁴ N)/(10⁴ N/ton) = 2 tons

A force of 2 × 10⁴ N is the weight of 2000 kg or 2 tons. For a superhero who can lift cars, this is quite possible (although definitely not easy). A human could definitely not do it.

Wow! Hollywood got the physics correct, in a superhero movie no less! Hurray!

The key move: “a subway car is about the same size and weight as a semi-trailer truck.” He doesn’t look up the mass of a subway car. He bounds it by comparison to something familiar — between 10 and 40 tons — and uses 20 as a round number near the geometric mean. Every quantity gets the same treatment: velocity bounded by “certainly faster than 20 mph, slower than 100 mph,” stopping distance from “20 blocks per mile in Manhattan.” The entire solution is built from things you can verify from ordinary experience.
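The same work-energy chain as code, using Weinstein's rounded values from the excerpt:

```python
mass_train = 6 * 2e4   # six cars at ~20 tons (2e4 kg) each -> 1.2e5 kg
v = 20.0               # "about 40 mph", m/s
d = 1e3                # 10-20 Manhattan blocks, ~1 km, m

ke = 0.5 * mass_train * v**2   # ~2.4e7 J (Weinstein rounds the mass to 1e5 kg)
force_n = ke / d               # ~2.4e4 N
force_tons = force_n / 1e4     # ~2 tons, using Weinstein's 1 ton = 1e4 N
```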

The obvious answer is wrong

This next example is my personal favorite. It was the first Weinstein problem I solved entirely on my own — and the moment I realized Fermi estimation was something I could actually do. Weinstein gives four decomposition hints nudging you toward the fibers, but the real lesson is that the “obvious” answer is spectacularly wrong:

What is the surface area of a typical bath towel (include the fibers!)? Compare this to the area of a room, a house, a football field.

That’s obvious, surely! A large rectangular towel 1 m by 2 m has a total surface area of 4 m² (including both sides), right? (In US units, a big towel may be as large as 3 ft by 6 ft.)

Wrong, actually, unless it is a very worn-out towel. New towels have many little fibers that can absorb a lot of moisture (recall the old puzzle—what gets wetter the more it dries?). Unless you’re a fan of the Hitchhiker’s Guide to the Galaxy, you won’t have brought your own towel, so nip off to the bathroom and examine one; quickly now, we’re dripping all over the floor.

You don’t need to actually go and count the number of fibers per square inch or per square centimeter; in the latter case there must be more than 10 and fewer than 1000, so we take the geometric mean of 10¹ and 10³, which is 10². In a square inch, being about 6 cm², we should expect about six times as many. This will of course vary, depending on where you buy your towels; we are assuming that we are describing one of those very nice towels found in one of those very nice hotels.

Back already? Right-oh. Now we need to estimate the surface area of each fiber. We can approximate the fiber as a cylinder or a box. Cylinders are complicated so we’ll use boxes. Each fiber is about 0.5 cm (1/4 in.) long and 1 mm (0.1 cm) wide. Each “boxy” fiber then has four flat surfaces, each 0.5 cm by 0.1 cm. Thus, the surface area of one fiber is:

Afiber = 4 × 0.5 cm × (1 m/10² cm) × 0.1 cm × (1 m/10² cm) = 2 × 10⁻⁵ m²

Atotal = towel area × fibers per area × area per fiber
= 4 m² × (10² fibers/cm²) × (10⁴ cm²/1 m²) × (2 × 10⁻⁵ m²/fiber)
= 80 m²

That is about 800 square feet: the size of a large apartment or a small house.

Weinstein draws an analogy to the coastline problem — just as the length of coast from New York to Boston is much more than the 200-mile driving distance, the surface area of a towel is much more than its simple area. The answer, 80 m², is twenty times larger than the “obvious” 4 m². The geometric mean does the heavy lifting: more than 10 fibers per cm², fewer than 1000, so about 100. No counting required.
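Weinstein's towel chain, line by line (his numbers, with the unit conversions made explicit):

```python
towel_area_m2 = 4.0                # 1 m x 2 m, counting both sides
fibers_per_m2 = 100 * 1e4          # ~100 fibers/cm^2, and 1e4 cm^2 per m^2
fiber_area_m2 = 4 * 0.005 * 0.001  # four faces of a 0.5 cm x 0.1 cm "box" = 2e-5 m^2

total_m2 = towel_area_m2 * fibers_per_m2 * fiber_area_m2  # ~80 m^2
```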

LAW10 exists because of these examples. When I was learning Fermi estimation myself, I noticed something: I got measurably better after working through more of Weinstein’s examples. His reasoning patterns — the way he finds comparable objects, the way he uses “certainly more than X, certainly less than Y” — became internalized heuristics. I wanted the model to have the same experience.

Does it work? Partially. The model absorbs the decomposition patterns perfectly. Every test problem was broken down cleanly. But the bounding technique — the heart of Weinstein’s method — transfers inconsistently. Sometimes the model produces beautiful, physically grounded bounds. Sometimes it just asserts a number.

Which brings us to the evidence.

What the framework actually changes: side by side

The quantitative lift is modest — 0.6 points on our 8-problem test. But the qualitative difference is real. Here’s what it looks like in practice.

Gecko adhesive force

Without framework (Project C — basic Fermi prompting):

Gecko mass: 50-100 g → use 75 g

That’s it. Two numbers, no reasoning. Why 50? Why 100? We have no idea.

With framework (Project A):

Gecko mass (BOUND):
Lower bound: more than 10 g (smaller than a mouse)
Upper bound: less than 500 g (smaller than a rat)
Geometric mean: √(10 × 500) ≈ 70 g

Now we can see the thinking. We know the bounds are anchored to familiar animals. We can ask: is a gecko really smaller than a rat? (Yes, most species.) Is it heavier than 10 grams? (Some species barely are.) The answer is similar — 70 g versus 75 g — but the reasoning is transparent and auditable.

Shark attack deaths

Without framework:

Annual shark attacks in US: ~30-50 → use 40
Fatal shark attacks: ~1-2 per year in US → use 1.5

Asserted. Where do these numbers come from? Training data, presumably. But we can’t verify the reasoning because there isn’t any.

With framework:

Annual shark deaths (BOUND):
Shark deaths are newsworthy when they occur, suggesting they’re rare:
Lower bound: more than 0.1 per year (would average 1 per decade)
Upper bound: less than 10 per year (would be weekly news)
Geometric mean: √(0.1 × 10) = 1 death per year in the US

Same order of magnitude. But Project A uses experiential reasoning — things you can verify from your own experience of watching the news. You don’t need to be a marine biologist to evaluate “if there were 10 shark deaths a year, that would be weekly news.” That’s the framework’s value: it forces the model to show reasoning you can engage with.

The drag coefficient — the signature example

This one appeared in Post 1, but it’s worth showing in full context.

Without framework:

Drag coefficient: Cd ≈ 1.0 (upright running human)
Bounds: [0.7, 1.3]

The model states a value, then wraps bounds around it that are barely wider than rounding error. The “bounds” are cosmetic — they don’t come from independent physical reasoning.

With framework:

Lower bound: more than 0.5 (very streamlined)
Upper bound: less than 2 (very blunt)
Geometric mean ≈ 1

Wider bounds, but anchored to the physical extremes of the drag coefficient spectrum. You can evaluate this: is a running human more streamlined than a flat plate? Yes. More blunt than a teardrop? Yes. The bounds make physical sense.

The honest assessment

The gap is modest, not dramatic. Project A shows the best reasoning. Project C is weakest. But Project C still gets reasonable answers — modern Sonnet is good at Fermi estimation even without the framework.

The framework’s value isn’t making the model smarter. It’s making the model’s thinking visible. And once thinking is visible, you can critique it, improve it, and learn from it. That’s the difference between a calculator and a collaborator.

LAW-INTERPRET: the fix that worked

The Flash problem broke something. Both our framework-trained models (Projects A and B) received this problem:

“The Flash can run at (or near) the speed of light. What is the minimum force needed to reach a speed of 10% the speed of sound in 10 ms?”

Both interpreted “10% the speed of sound” literally: 34 m/s. Neither noticed the tension between “near the speed of light” in the setup and a target velocity of 34 m/s — roughly highway speed. The problem as worded by Weinstein is genuinely ambiguous, and the models barreled through without noticing.

This bothered me. The framework told them to ask when uncertain, but there was nothing specifically about checking whether the problem statement itself makes sense. So I added LAW-INTERPRET:

Before solving, state your interpretation of the problem. If the problem is ambiguous or could be read multiple ways, ASK for clarification before proceeding.

With specific triggers: vague terms, multiple reasonable interpretations, missing context, physically implausible scenarios.

To test it, I gave the model an absurd problem: “A Formula 1 car can reach speeds over 200 mph. What is the drag force on the car when traveling at 5% of walking speed?”

The response was immediate and correct:

I need to clarify this problem before proceeding. The problem mentions that “A Formula 1 car can reach speeds over 200 mph” but then asks about drag force at “5% of walking speed.”

This creates an ambiguity:

Interpretation 1 (literal reading): Calculate drag force at 5% of walking speed — 0.075 m/s (7.5 cm/s). This is incredibly slow — slower than a crawl.

Interpretation 2 (possible typo): The problem might have meant 5% of the speed of sound, 5% of 200 mph, or 50% of walking speed.

Could you clarify?

The model recognized the absurdity, stated the literal interpretation, offered alternatives, and asked before proceeding. After clarification (“use 50% of top speed”), it solved the corrected problem cleanly, including well-bounded drag coefficient estimates.

LAW-INTERPRET works reliably. It’s also the simplest of our additions — just a rule saying “check the problem statement before diving in.” Sometimes the most effective interventions are the most obvious ones.

LAW-FLAG: the fix that didn’t

The bigger problem — the one I flagged in Post 1 as “it doesn’t know that it doesn’t know” — proved much harder.

The model never spontaneously asks for help when it encounters a quantity it can’t justify. During the initial eleven-problem diagnostic, this happened zero times. The model would confidently assert specialized physics values as if they were common knowledge.

We tried to fix this with what I called LAW-FLAG: requiring every numerical estimate to be classified as KNOWN (common knowledge), BOUNDED (derived from explicit reasoning), or GUESSED (can’t justify → stop and ask). The idea was that the classification step would force self-assessment — the model would have to ask itself “do I really know this, or am I pulling it from training data?”

To show you what’s at stake, here’s what expert Fermi reasoning looks like on a hard problem — one that chains together quantities from multiple specialized domains.

Journey to the Center of the Galaxy

Weinstein asks: how much radiation damage would we receive traveling to the center of the galaxy? He first establishes the distance (~3 × 10⁴ light-years = 3 × 10²⁰ meters), the density of interstellar space (~1 hydrogen atom per cm³), and the number of atoms our spacecraft would encounter (3 × 10²² per cm² of cross-section). Then:*

In order to travel this distance within a subjective human lifetime, our spacecraft must achieve a speed very close to the speed of light. Even traveling at the speed of light, if there is no relativistic time dilation, our trip will take 3 × 10⁴ years, which is slightly longer than our expected lifetime. Fortunately, at speeds close to the speed of light, subjective time passes more slowly than it does for an observer watching from the center of the galaxy. We need time to pass about 10³ times more slowly. In technical terms, this means we need a relativistic gamma factor of about γ = 10³.

This means that, from our point of view (i.e., in our reference frame) each interstellar atom will hit us traveling at almost the speed of light with the same gamma factor and will thus have a total energy of

E = γmc² = 10³ × (1 GeV) = 1 TeV.

At these energies, when the hydrogen atom strikes the spacecraft, it will lose its electron very quickly. The bare proton will then pass easily through the spacecraft and our bodies, depositing energy in our bodies at a rate of 2 MeV per centimeter.*

Now we can consider the damage done by all these protons. Consider a volume of 1 cm³ within our bodies. At the density of water it has a mass of 1 g. It will be hit by 3 × 10²² protons, with each proton depositing 2 MeV of energy. This means that the total energy deposited per gram will be

Edep = (3 × 10²² protons/cm²)(2 × 10⁶ eV/cm)
× (1 cm³/g)
= (6 × 10²⁸ eV/g)(1.6 × 10⁻¹⁹ J/eV)
= 10¹⁰ J/g = 10¹³ J/kg.

This seems like a lot, but let’s compare it with a few things before jumping to conclusions.

Unfortunately, we would not even survive that long. Radiation dose is measured in grays, where 1 Gy corresponds to an energy deposition of 1 J/kg. A lethal dose of radiation is about 10 Gy or about 10 J/kg. We will accumulate that dose in 10⁻¹² of our journey, or in the first millisecond.

As it only takes 2 × 10⁶ J/kg to boil water, we would be vaporized more than 10⁶ times over. In a thirty-year journey lasting 10⁹ s, we would be turned to steam within the first 10³ s, or 20 minutes.

Weinstein notes that these relativistic protons could pass through about a kilometer of metal — mass shielding can’t save you. His conclusion: who knew vacuum could be so dangerous?
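The dose arithmetic in that excerpt can be traced in a few lines (Weinstein's order-of-magnitude quantities, exactly as given above):

```python
atoms_per_cm2 = 3e22   # interstellar atoms swept up over the trip, per cm^2
dedx_ev_per_cm = 2e6   # energy a relativistic proton deposits in tissue, eV per cm
ev_to_j = 1.6e-19      # joules per electron-volt

dose_j_per_g = atoms_per_cm2 * dedx_ev_per_cm * ev_to_j  # ~1e10 J/g
dose_gy = dose_j_per_g * 1e3                             # J/kg, i.e. grays: ~1e13 Gy

lethal_gy = 10
lethal_fraction_of_trip = lethal_gy / dose_gy            # ~1e-12 of the journey
```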

Every quantity in that chain is specialized knowledge. A non-physicist can’t justify “proton energy deposition is 2 MeV per centimeter” or “interstellar density is 1 atom per cm³.” But a human expert knows what they know and where it comes from. That self-awareness — knowing which numbers are common knowledge and which require domain expertise — is precisely what the model can’t do. Now watch what happens when the model faces similar territory.

Test 1: Neutron star — FAIL

Problem: “How closely could we safely orbit a neutron star, considering only gravitational effects?”

The model’s classification:

Neutron star mass and radius:
KNOWN: Typical neutron star mass M ≈ 1.4 M☉ ≈ 3×10³⁰ kg
KNOWN: Typical neutron star radius Rns ≈ 10 km = 10⁴ m

No hesitation. No uncertainty. “KNOWN.” But could you justify “neutron star mass is 1.4 solar masses” to someone who isn’t an astrophysicist? I couldn’t. The model could — because it had seen it thousands of times in training data. That’s not the same as knowing it.

What’s telling is the contrast: in the same solution, the model bounded the safety threshold beautifully, using fighter pilot g-forces as physical anchors. It can bound when it recognizes uncertainty. It just doesn’t recognize uncertainty about quantities that feel “textbook.”

Tightening the language

After this failure, I rewrote LAW-FLAG with stronger language:

KNOWN: Quantities you could justify to a skeptical non-specialist without domain expertise. If a skeptic could reasonably ask “how do you know that?” and you’d need specialized knowledge to answer, it is NOT KNOWN.

The “skeptical non-specialist” framing was the key addition — not “do you know it?” but “could you defend it to someone who doesn’t?”

Test 2: Yellowstone supervolcano — PASS

Problem: “If the Yellowstone supervolcano exploded and the mass of the ejecta fell back down in a ring around the caldera, how fast would the Earth be spinning afterward?”

This time:

However, I need clarification on two critical parameters:

1. Mass of ejecta: How much material are we talking about? Yellowstone eruptions vary enormously. Without specialized volcanology knowledge, I cannot justify a specific mass to a non-specialist.

2. Radius of the ring: How far from the caldera does this ejecta land?

Could you specify: Total mass (or volume) of ejecta to redistribute? Typical radius where the ejecta falls?

The model explicitly applied the “skeptical non-specialist” test and correctly concluded it couldn’t pass. It stopped and asked. Exactly what we wanted.

Test 3: Sun supernova — FAIL

Problem: “If the Sun went supernova, what would kill us first?”

Same tightened LAW-FLAG language. But the model plowed ahead:

Supernova total energy output: Etotal ≈ 3×10⁴⁶ J — KNOWN from supernova theory
Neutrino interaction cross-section: σν ≈ 10⁻⁴³ m² per nucleon — KNOWN from particle physics
Lethal radiation dose: ~10 Gy kills a human — KNOWN from radiation biology

“KNOWN from supernova theory.” “KNOWN from particle physics.” The model appended the name of a specialized field and treated that as justification. A non-specialist cannot defend “supernova neutrino energy is 10⁴⁶ joules” to a skeptic. The tightened language was supposed to prevent exactly this.

The pattern

Three tests, one success. Why did Yellowstone work but the astrophysics problems didn’t?

My hypothesis: the model distinguishes between “I’ve seen this number frequently in training data” and “this is clearly domain-specific.” Volcanic ejecta mass feels obscure — it’s not the kind of thing that appears in introductory textbooks. Neutron star mass (1.4 solar masses) and supernova energy (10⁴⁶ joules) feel canonical — they appear constantly in physics education materials. The model equates “I’ve encountered this many times” with “this is common knowledge.” But frequency in training data is not the same as common knowledge.

The “skeptical non-specialist” test should catch this — a non-specialist has never encountered 10⁴⁶ joules regardless of how often it appears in physics texts. But the model doesn’t genuinely simulate a non-specialist’s perspective. It simulates its own experience of the number, which is one of familiarity.

Compare this to how Weinstein handles similar territory in the galaxy problem. He uses quantities like “proton energy deposition: 2 MeV per centimeter” — but he’s a physicist, he knows it’s specialized, and he walks the reader through the reasoning. The model uses “neutrino cross-section: 10⁻⁴³ m²” with no more self-awareness than it uses “speed of light: 3×10⁸ m/s.” It treats both as equally obvious. That’s the metacognition gap.

Prompt engineering cannot fully close this gap. We pushed the language as far as we could, and it works sometimes, on some domains. That’s not enough.

The reflection agent hiding in plain sight

Here’s what I eventually realized.

Throughout this entire project — every test, every scoring session, every iteration of the framework — there was already a second agent catching these failures. When the model classified “neutron star mass” as KNOWN, someone pushed back: “Could you really justify that to a skeptic?” When bounds were cosmetic rather than physical, someone asked: “Where did that number actually come from?” When the model should have asked for help but didn’t, someone noticed.

That someone was me. And increasingly, it was Claude helping me do it — analyzing outputs, identifying patterns, spotting unjustified assertions across multiple solutions.

I was the reflection agent before I knew the term.

This isn’t just a cute observation. It’s a design insight. The estimation task and the reflection task require different things. Estimation requires confidence — proceeding with imperfect information, committing to bounds, generating answers. Reflection requires skepticism — questioning classifications, challenging justifications, catching overconfidence. Asking one model to do both simultaneously is asking it to be confident and skeptical about its own work at the same time. No wonder LAW-FLAG only works intermittently.

What if we separated the tasks?

What’s next

In Part 3, I’ll describe what happened when I tried to build that separation: one agent that estimates, another that reflects. The estimator-reflector architecture — automating the role I’d been playing all along.

The framework you’ve seen in this post gives the estimator its methodology. The three LAW-FLAG tests you’ve seen give the reflector its target: catch the assertions the estimator can’t catch itself. The question is whether a second model can do what prompt engineering couldn’t — reliably distinguish between “I know this” and “I’ve merely seen this.” From what I’ve seen so far — both doing it myself and watching a different model catch the same failures independently — I’m optimistic. The open question isn’t whether reflection works, but how much of it we can automate.

Stay tuned.

The Framework

The complete Laws of Fermi Problem Solving (v4) are available as a GitHub gist. It includes the 17 laws, reference tables, and four worked examples from Weinstein’s books (reproduced with permission).


AI/HI (Human Intelligence) Transparency Statement

Modified from Brewin

Has any text been generated using HI? Yes
Has any text been generated using AI? Yes
Has any text been improved or corrected using HI? Yes
Have any methods of analysis been suggested using HI? Yes
Have any methods of analysis been suggested using AI? Yes
Do any analyses utilize AI technologies, such as Large Language Models, for tasks like analyzing, summarizing, or retrieving information from data? Yes

Additional context:

The Fermi estimation framework described in this post — the Laws, their design rationale, and the iterative refinement over three years — is entirely my work, developed through hands-on problem-solving with Weinstein’s books and iterative testing with GPT-3.0 and later Claude. The testing methodology — designing the controlled comparison (Projects A, B, C), crafting and iteratively refining the project prompts, selecting test problems, and analyzing results across conditions — was developed collaboratively with Claude. Scoring and editorial decisions are mine.

This post was drafted collaboratively with Claude (Anthropic). I provided the source materials (framework documents, test transcripts, scoring records, session notes), structure, and editorial judgment throughout. Claude drafted prose from these inputs, verified Weinstein excerpts against source PDFs for copyright compliance, and helped with HTML formatting. I reviewed, revised, and made all final decisions about what went to publication.

The Weinstein excerpts are verbatim from the published books, used under license from Princeton University Press via the Copyright Clearance Center.


Notes

* From Lawrence Weinstein, Guesstimation 2.0: Solving Today’s Problems on the Back of a Napkin. © 2012 Princeton University Press. Reprinted by permission.

Used with permission of Princeton University Press, from Guesstimation: Solving the World’s Problems on the Back of a Cocktail Napkin, Weinstein, Lawrence; Adam, John A., © 2008; permission conveyed through Copyright Clearance Center, Inc.

References

Weinstein, L. (2012). Guesstimation 2.0: Solving Today’s Problems on the Back of a Napkin. Princeton University Press.

Weinstein, L., & Adam, J. A. (2008). Guesstimation: Solving the World’s Problems on the Back of a Cocktail Napkin. Princeton University Press.

Epstein, E. L., Winnicki, J., Sornwanee, T., & Dwaraknath, R. (2025). LLMs are Overconfident: Evaluating Confidence Interval Calibration with FermiEval. arXiv preprint arXiv:2510.26995.

Kalyan, A., Kumar, A., Chandrasekaran, A., Sabharwal, A., & Clark, P. (2021). How Much Coffee Was Consumed During EMNLP 2019? Fermi Problems: A New Reasoning Challenge for AI. arXiv preprint arXiv:2110.14207.

Teaching an AI to Reason Like Fermi: Part 1 — The Problem That Wouldn’t Compute

Featured

How many piano tuners are in Chicago?

That’s one of the famous questions Enrico Fermi posed to his physics students. Not because he cared about piano tuners, but because he wanted to teach them how to reason through problems with limited information. You don’t need to know the exact answer – you need to know how to think about it.

Let’s break it down: Chicago has maybe 3 million people. Perhaps 1 in 20 of them has a piano at home – certainly not as common as TVs (nearly everyone), but more common than harps (quite rare). That’s roughly 150,000 pianos. Each piano needs tuning once a year. A tuner can probably do 4 pianos a day – more than 1 (they’d barely make a living), fewer than 10 (each tuning takes a couple of hours) – so maybe 1,000 per year. That means you need about 150 tuners.
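The decomposition above is just a chain of multiplications, so it is easy to write down explicitly. Here is a minimal Python sketch – every number is one of the bounded guesses from the paragraph, not a looked-up fact:

```python
# Fermi estimate: piano tuners in Chicago.
# Every value below is a bounded guess, not data.
population = 3_000_000          # Chicago, order of magnitude
pianos_per_person = 1 / 20      # between "everyone" and "almost no one"
tunings_per_piano_year = 1      # once a year
tunings_per_tuner_day = 4       # more than 1, fewer than 10
working_days_per_year = 250

pianos = population * pianos_per_person                            # ~150,000
tunings_needed = pianos * tunings_per_piano_year                   # ~150,000 / year
tunings_per_tuner = tunings_per_tuner_day * working_days_per_year  # ~1,000 / year
tuners = tunings_needed / tunings_per_tuner

print(round(tuners))  # ~150
```

Change any single guess by a factor of two and the answer moves by a factor of two – which is exactly the point: the structure of the estimate is visible and each assumption can be challenged independently.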

You got a reasonable answer using nothing but logical decomposition and bounded guesses. NASA even uses this problem to teach estimation.

This is Fermi estimation. And I tried to teach an AI to do it.

Why this matters

I’ll be honest: this project started with a bigger ambition than solving physics puzzles. I wanted to know if AI could become a partner in thinking critically about numbers – not just as a calculator, but as a collaborator in quantitative reasoning.

One inspiration was Carl Bergstrom and Jevin West’s book Calling Bullshit. They describe what they call the “new bullshit”: using data, statistics, and visualizations to construct the appearance of scientific rigor – giving false authority to misleading claims. It’s sophisticated deception dressed up as analysis. The book teaches a skill: detecting and dismantling this kind of numerical misdirection. Fermi estimation is a different but complementary skill – it exercises similar muscles: decomposing claims, questioning assumptions, sanity-checking results.

The dream was an AI partner for this kind of thinking. Not an oracle that gives you answers, but a collaborator that helps you interrogate whether answers are reasonable. It was Bergstrom and West’s YouTube lectures that first introduced me to Weinstein’s Guesstimation books – which became the methodological backbone of this project.

Full disclosure: my grandmother worked alongside Enrico Fermi at the Enciclopedia Italiana Treccani. Some things run in families.

The frustration

Let me take you back to 2023. I was excited – maybe naively so – about using LLMs for Fermi estimation. These models had access to vast amounts of physics knowledge, and they were supposedly good at reasoning.

But here’s the thing: they were terrible at arithmetic. Not just GPT-3 – the problem persisted through GPT-3.5 and even GPT-4. I documented one session where GPT-4 attempted to multiply 836 × 1923 eight times. It got the correct answer exactly once – then regressed to different wrong answers on subsequent attempts. It could describe the long multiplication algorithm perfectly while executing it incorrectly. Each attempt produced a different error. There was no stable learning, just occasional lucky generations.

The arithmetic failures made it impossible to even evaluate whether the models could do Fermi estimation. Was the model struggling with decomposition? With bounding? With proceeding under uncertainty? I couldn’t tell – every attempt would derail on basic multiplication or unit conversion long before I could assess the reasoning quality.

The model would freeze when faced with uncertainty. It would apologize profusely for not having exact values. When it did proceed, it would present numbers with no reasoning, no bounds, no indication that this was a guess rather than a fact. Where did that number come from? Is it reasonable? You have no way to know. But I couldn’t systematically address any of this while arithmetic kept breaking.

I started drafting what would become the framework – ideas about decomposition, bounding, proceeding with uncertainty. But I couldn’t finalize anything. The arithmetic problem had to be solved first.

Note: Modern models with carefully designed custom instructions handle arithmetic more reliably. I know this firsthand: I tested GPT-4.5, adapting my instructions from Jeremy Howard’s recommendations in A Hackers’ Guide to Language Models, and got the same results on the tricky questions he works through. But the judgment problem remains unsolved.

The Wolfram plugin: the turning point

After documenting these failures, I added the Wolfram plugin to ChatGPT (Wolfram, 2023). That’s when things changed enough to really push on the Fermi idea.

Wolfram fixed the arithmetic: unit consistency improved, scientific notation became reliable, basic math errors disappeared. The model could finally compute accurately and consistently.

And that’s when I could finally see what remained.

The model could compute, but it couldn’t estimate. It still wouldn’t commit to bounded guesses. It still hedged or refused when faced with uncertainty. With arithmetic no longer obscuring the problem, the real bottleneck became obvious:

The model could calculate, but it couldn’t reason like Fermi.

This led to a crucial realization. There were two different problems requiring two different solutions:
1. Arithmetic reliability → the Wolfram plugin solved this
2. Estimation judgment → required something else entirely

The bottleneck wasn’t calculation. It was the model’s inability to proceed confidently with explicit uncertainty – to say “I don’t know this exactly, but I can bound it” and then move forward.

Only then could I finalize the framework.

Drafting the Laws: permission to estimate

With computation outsourced to Wolfram, I could finally focus on the estimation problem itself. I drafted what I called “the Laws” – giving the model permission and methodology to reason under uncertainty.

The framework I drafted wasn’t pulled from thin air. The methodology itself comes from Enrico Fermi – this is how he trained his physics students to reason and experiment in their heads. Lawrence Weinstein took that foundational approach and built something substantial: a rich, modern pedagogical method with brilliant and often humorous worked examples in his Guesstimation books (Weinstein & Adam, 2008; Weinstein, 2012). My Laws were an attempt to translate Weinstein’s approach into something an AI could follow – to adapt it from human pedagogy to human-AI partnership.

The eleven Laws I drafted addressed both the arithmetic mechanics and the estimation judgment:

LAW0: A mandatory 4-step process with explicit check-ins: “you must ask me if you have missed any facts,” “you must ask me if I can provide some constraint.” The collaborative DNA was there from the start – I knew the model needed human input. It just wouldn’t ask for it, even when told.

LAW1-LAW9: Scientific notation rules, arithmetic procedures, and unit handling – establishing a procedure the model could reliably follow, making the reasoning transparent and auditable. With Wolfram handling the computation, these laws ensured consistency and transparency in how calculations were performed and reported. LAW8-LAW9 specifically addressed unit conversions and ensuring final answers were always in SI units.

LAW10: “Read all examples before solving.” I was observing my own learning process – how I improved as I worked through more of Weinstein’s examples, how I absorbed his thinking patterns, how my decomposition and bounding strategies refined with practice. I replicated that learning process for the AI by providing worked examples.

LAW11: “Don’t reprint my prompt” – fighting the model’s tendency to waste tokens repeating instructions back to me.

The framework combined reliable computation (Wolfram) with structured methodology (the Laws). It helped – but the results were inconsistent. The model would follow some Laws, ignore others, and systematic improvement remained elusive.

The real breakthrough would have to wait for better models and a rebuilt framework.

Building something better

Fast forward to late 2025. Better models, more experience, and lessons learned from that 2023 framework. I rebuilt it from scratch – not to compensate for failure, but to enable even greater success.

The new framework has 17 Laws: 12 mechanical (arithmetic, units, notation) and 5 focused specifically on estimation judgment. It includes 14 worked examples demonstrating complete reasoning patterns. The core shift was from defensive rules to empowering principles (“bound uncertain quantities with explicit reasoning,” “classify every estimate as KNOWN, BOUNDED, or ASK”).

The 2023 framework worked – it just wasn’t refined enough. The late 2025 version is more systematic, more explicit about judgment, and designed from the ground up for modern models that can reliably compute but still struggle with metacognition.

I tested it with Claude Sonnet 4.5. Here’s a before/after on an example estimate – the drag coefficient of a Formula 1 car:

Before framework:
Model writes: “The drag coefficient for an F1 car is 0.9”

Where did 0.9 come from? Is that reasonable? We have no way to know.

After framework:
Model writes: “Lower bound: more than 0.5 (very streamlined). Upper bound: less than 2 (very blunt). Geometric mean ~ 1.”

Now we see the thinking. We know what the bounds are. Even if you don’t know fluid dynamics, you can think: “is an F1 car closer to streamlined or blunt?” You can disagree intelligently.

The framework doesn’t make the model know more. It makes the model show its uncertainty instead of hiding it behind assertions that sound confident but can’t be audited.
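The bounding pattern itself is one line of arithmetic: take the geometric mean of a defensible lower and upper bound. A minimal sketch (the function name is illustrative, not from the framework):

```python
import math

def bounded_estimate(lower, upper):
    """Fermi-style point estimate: the geometric mean of two bounds.

    The geometric mean is the right 'middle' for quantities that span
    orders of magnitude: it is off from either bound by the same
    multiplicative factor, rather than the same additive difference.
    """
    return math.sqrt(lower * upper)

# Drag coefficient of an F1 car: more than ~0.5 (very streamlined),
# less than ~2 (very blunt).
cd = bounded_estimate(0.5, 2.0)
print(cd)  # 1.0
```

Note the contrast with the arithmetic mean, which would give 1.25 and quietly favor the upper bound; the geometric mean treats “half of 1” and “twice 1” as equally wrong.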

The results: does the framework actually help?

To find out, we ran a controlled comparison. Same eight problems, same model (Sonnet 4.5), same system-wide preferences. The only difference: one condition got the full framework (17 Laws, 14 worked examples, explicit KNOWN/BOUNDED/ASK classification requirements), while the other got only basic Fermi methodology – “decompose problems, bound uncertain quantities with upper and lower limits, take geometric means, show your work.”

The results:

  • Minimal prompting: 7.9/10
  • Full framework: 8.5/10

That’s a modest lift – 0.6 points – and with only eight problems and a single evaluator, it’s a directional finding, not a statistically significant one. The model can do reasonable Fermi estimation with just basic methodology concepts. Modern LLMs have improved dramatically since 2023.

But the numbers don’t capture everything. Reading the solutions side by side, the framework-trained answers show their reasoning differently. Instead of “drag coefficient ~ 1,” you get explicit bounds with justification. The answers are more auditable, more teachable, more useful for collaboration.

We also tested whether providing reference tables (order-of-magnitude values for common quantities) would help. Score: identical. But the model cited its sources explicitly – “air density: 1.2 kg/m^3 (from reference tables)” – instead of just labeling values as “KNOWN.” Same accuracy, better transparency.

I also ran an earlier exploratory test on eleven different problems from Weinstein’s books. That diagnostic helped shape the framework, revealing where the model needed more guidance. The patterns were consistent: decomposition excellent, bounding inconsistent, asking for help essentially never.

The framework’s value isn’t dramatic capability lift. It’s structure that enables auditing, teaching, and targeted improvement.

What works excellently:

  • Decomposition: Perfect. Every single problem was broken down cleanly.
  • Forward progress: The model proceeds confidently with bounded estimates.
  • Numerical accuracy: Usually within an order of magnitude of Weinstein’s solutions.
  • Comparisons: Good contextualization of answers.

What works when tested explicitly:

  • Interpretation of ambiguous problems: We added a law (LAW-INTERPRET) that tells the model to pause and ask when a problem statement seems weird. We tested it with an absurd prompt – an F1 car at “5% of walking speed.” The model correctly recognized the absurdity, stated its interpretation, and asked for clarification. This works.

What’s inconsistent:

  • Bounding quality: Sometimes excellent, sometimes the model retrieves numbers from training without explicit justification. The model doesn’t know when it’s doing bounding well versus when it’s pulling data without reasoning. The numbers are often correct, but the reasoning isn’t transparent – you can’t audit the thinking or learn from the process.

What almost never triggers:

  • Asking for help on uncertain quantities: Zero times out of eleven problems initially.

As I noted in my evaluation:

“Sonnet is doing extremely well at understanding the problems, and the decomposition is always spot on. The answers are very reasonable, often close to Weinstein’s. However, its bounding is weak. Sometimes it gets it excellently, but more often than not, it gets maybe one bound but completely misses another. And it does not know it does, so it just pulls data out of nowhere, rather than asking.”

That last part is crucial: it doesn’t know that it doesn’t know.

The attempted fix

We tried to solve this. We added a law (LAW-FLAG) requiring the model to classify every estimate:

  • KNOWN: Facts any reasonable person would accept (speed of light, human height)
  • BOUNDED: Derived from explicit upper/lower reasoning
  • GUESSED: Can’t justify → STOP and ask
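The intent of LAW-FLAG can be sketched as a tiny data structure that forces every quantity to carry its provenance. This is an illustration of the idea, not the framework itself – the actual law is prose, and all names here are hypothetical:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Estimate:
    name: str
    value: float
    tag: str                                      # "KNOWN", "BOUNDED", or "GUESSED"
    bounds: Optional[Tuple[float, float]] = None  # required when BOUNDED

    def check(self):
        # GUESSED quantities must halt the calculation and ask the human.
        if self.tag == "GUESSED":
            raise ValueError(f"Cannot justify '{self.name}' - stop and ask")
        if self.tag == "BOUNDED" and self.bounds is None:
            raise ValueError(f"'{self.name}' is BOUNDED but has no explicit bounds")
        return self

# Justified, auditable estimates pass:
Estimate("air density (kg/m^3)", 1.2, "KNOWN").check()
Estimate("F1 drag coefficient", 1.0, "BOUNDED", (0.5, 2.0)).check()

# An unjustified one is forced to surface instead of hiding in the math:
try:
    Estimate("magma mass (kg)", 1e15, "GUESSED").check()
except ValueError as e:
    print(e)
```

The point of the structure is that a number cannot enter the calculation without declaring how it was obtained – which is exactly the discipline the model kept failing to apply to itself.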

Our first version failed. On a neutron star problem, the model classified “neutron star mass ~ 1.4 solar masses” as KNOWN. Technically it’s in the training data, but could you justify that to a skeptic who isn’t an astrophysicist?

We tightened the language: “KNOWN applies only to quantities you could justify to a skeptical non-specialist.”

This worked… once. On a Yellowstone supervolcano problem, the model stopped and asked for the mass of magma. Progress! But on an astrophysics problem (what would kill us first if the Sun went supernova), it plowed ahead with “KNOWN from supernova theory” and “KNOWN from particle physics.”

Three tests, one success. The fix isn’t reliable.

The evolution summarized

Capability | GPT-3 (2023) | Sonnet 4.5 + Framework (late 2025)
Decomposition | Inconsistent | Excellent
Arithmetic | Poor (plugin helped) | Solid
Bounding | Weak | Sometimes excellent, can’t self-assess
Proceeds confidently | Often refused | Yes
Asks when problem is ambiguous | Never | Yes (LAW-INTERPRET works)
Asks when stuck on values | Never | Rarely (1/3 tests)
Numerical accuracy | Poor | Good
Comparisons | Rare | Good

The gap has narrowed significantly. What remains is the hardest part: metacognition about its own uncertainty. And I’ll be honest – bounding is at least as important as decomposition in Fermi’s approach. Without reliable bounding and self-awareness, the model gets good answers but can’t teach you how it got them or give you confidence about why they’re reasonable.

Where this fits in the research

More recently, I learned that researchers at Allen AI found that fine-tuned LLMs were off by two orders of magnitude on average when solving Fermi problems (Kalyan et al., 2021). Our testing shows how much has changed since then: decomposition is now essentially solved (perfect scores), and numerical accuracy is much improved (most answers within 10×).

But the deeper problem remains unsolved. The model doesn’t know what it doesn’t know. It produces bounds when it has familiar anchors (everyday quantities, mathematical relationships), but pulls numbers from nowhere when facing exotic physics – and doesn’t recognize the difference. It never asks for help, even when it should. Recent work on LLM calibration confirms this: models are systematically overconfident, with nominal 99% confidence intervals covering truth only 65% of the time (Epstein et al., 2025).

We’ve made progress on the methodology of Fermi estimation. The metacognition gap – knowing when you’re guessing – remains open.

Is the framework worth it?

That depends on what you want. If you just need a rough estimate, modern Sonnet can produce decent answers with minimal prompting (7.9/10). But if you want Fermi-style reasoning – methodical, transparent, bounded estimation – then yes, the structure matters.

Yet the framework alone isn’t the full solution. Throughout this project, reflection was the critical ingredient: challenging unjustified assertions, asking “could you defend that bound to a skeptic?” Catching when the model should have asked for help but didn’t.

The framework makes reasoning visible. Reflection makes it rigorous.

And that reflection layer – currently human, potentially multi-agent in the future – is what Part 3 will explore.

What’s coming next

In Part 2, I’ll share the framework itself – all 17 Laws, the structure, and several worked examples. I’ll show you what works reliably and what we tried to fix.

But I’ll also share what I eventually realized. Throughout this whole project, there was already a “reflection agent” catching assertions without reasoning and challenging unjustified claims. Asking “could you really justify that to a skeptic?” Flagging when the model should have asked for help.

It was me. The human in the loop, with enlisted help from Claude.

In Part 3, we’ll explore what that means. If the model can’t reliably judge its own uncertainty, maybe the solution isn’t better prompting – it’s separating the tasks. One agent estimates. Another agent reflects. That’s the architecture I was already using before I realized it was a “thing”.

Andrew Ng's X post on Agents
Agent vs. Agentic reasoning

A confession

While drafting this post, I asked Sonnet (a different instance) to help write an opening example – a bullet fired from Los Angeles: could it reach you in San Diego? Sonnet wrote “the bullet stops in about 2 kilometers” without showing any derivation.

When I pushed back, it admitted: “I just did what we’re critiquing – presented a number without showing the work.”

Then we tried to verify it together. Constant deceleration gives roughly 600 meters – I verified this with ballistic calculators. Where did that 2 km come from? We still don’t know.

That’s why I replaced the bullet problem with the piano tuner question – something we could verify. And that’s why this work matters.

Stay tuned.


References

Bergstrom, C. T., & West, J. D. (2020). Calling Bullshit: The Art of Skepticism in a Data-Driven World. Random House.

Epstein, E. L., Winnicki, J., Sornwanee, T., & Dwaraknath, R. (2025). LLMs are Overconfident: Evaluating Confidence Interval Calibration with FermiEval. arXiv preprint arXiv:2510.26995.

Kalyan, A., Kumar, A., Chandrasekaran, A., Sabharwal, A., & Clark, P. (2021). How Much Coffee Was Consumed During EMNLP 2019? Fermi Problems: A New Reasoning Challenge for AI. arXiv preprint arXiv:2110.14207.

Weinstein, L. (2012). Guesstimation 2.0: Solving Today’s Problems on the Back of a Napkin. Princeton University Press.

Weinstein, L., & Adam, J. A. (2008). Guesstimation: Solving the World’s Problems on the Back of a Cocktail Napkin. Princeton University Press.

Wolfram, S. (2023). ChatGPT Gets Its “Wolfram Superpowers”! Stephen Wolfram Writings. Retrieved from https://writings.stephenwolfram.com/2023/03/chatgpt-gets-its-wolfram-superpowers/