Why Wind Power Scales as v³: An Intuition Built from First Principles

and a regulator’s motivation for caring

The Three Methods: A Regulator’s Ladder for Evaluating Energy Claims

Suppose a developer submits a proposal for the Middle Bank area on the Scotian Shelf: 926 turbines, each rated at 15 MW, at 4.2D spacing (where D is the rotor diameter—240 m for this turbine, so 4.2D ≈ 1,000 m between towers), claiming annual energy production (AEP) of 60 TWh. Is that plausible?

You have three increasingly sophisticated ways to check.

METHOD 1 Nameplate (30 seconds, back of envelope)

The simplest possible estimate:

AEP = N × Prated × 8760 hours × CF

Where N is the number of turbines, Prated is each turbine’s maximum output (15 MW here), 8760 is the hours in a year, and CF is the capacity factor—the fraction of rated output the turbine actually produces over a year. For offshore wind, CF typically falls between 0.40 and 0.55.

Check the units: N is dimensionless (a count), Prated is in MW, 8760 is in hours, and CF is dimensionless (a fraction). So AEP comes out in MW × hours = MWh—or equivalently, dividing by 10⁶, TWh. It’s just (number of turbines) × (power per turbine) × (hours per year) × (fraction of time at full output).

For our developer’s claim:

| Assumption | AEP |
| --- | --- |
| CF = 0.40 (conservative) | 926 × 15 MW × 8760 × 0.40 = 48.7 TWh |
| CF = 0.50 (typical offshore) | 926 × 15 MW × 8760 × 0.50 = 60.8 TWh |
| CF = 0.55 (optimistic) | 926 × 15 MW × 8760 × 0.55 = 66.9 TWh |

The developer’s 60 TWh falls in range—right at a typical offshore CF. Not obviously wrong. But this tells you nothing about whether CF = 0.40 or 0.55 is appropriate for this site. The capacity factor is doing all the work, and you borrowed it from industry averages rather than deriving it from the actual wind resource.
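
The nameplate arithmetic fits in a few lines of Python (a minimal sketch; the function name is ours):

```python
# Method 1: nameplate AEP. Every input comes straight from the proposal;
# the capacity factor (CF) is assumed, not derived from the site.
def nameplate_aep_twh(n_turbines, p_rated_mw, cf):
    """AEP in TWh: N x P_rated x 8760 h x CF, converted from MWh (/ 1e6)."""
    return n_turbines * p_rated_mw * 8760 * cf / 1e6

for cf in (0.40, 0.50, 0.55):
    print(f"CF = {cf:.2f}: {nameplate_aep_twh(926, 15, cf):.1f} TWh")
```

Running this reproduces the table above: 48.7, 60.8, and 66.9 TWh.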

What the nameplate method hides

It treats CF as an input. But CF is an output—it’s determined by the wind speed distribution, the turbine’s power curve, wake interactions, and availability. It’s the answer, not the question.

METHOD 2 Ginsberg Swept Area (5 minutes, needs mean wind speed)

If you know the site’s mean wind speed, you can estimate power from first principles:

Pavailable = ½ × ρ × A × vavg³

Where ρ is air density (~1.225 kg/m³ at sea level), A is the rotor’s swept area (π × D² / 4), and vavg is the mean wind speed at hub height.

The derivation. Consider a cylinder of air passing through the rotor in time t. Its length is v × t, so its volume is A × v × t, and its mass is ρ × A × v × t. The kinetic energy of that air is:

KE = ½ × m × v² = ½ × (ρ × A × v × t) × v² = ½ × ρ × A × v³ × t

Divide both sides by t to get power (energy per unit time):

P = KE / t = ½ × ρ × A × v³

That’s where the v³ comes from: v once from the mass flow rate (how fast air arrives), v² from the kinetic energy per unit mass (how much energy it carries). Ginsberg (2019) walks through this same derivation; the full physical reasoning for why this matters is developed in The Starting Point below.

But there’s a catch. Wind speed varies, and because power scales as v³, the average of the cubes is not the cube of the average. A site with vavg = 9 m/s but gusty conditions produces more energy than a site with a steady 9 m/s, because the high-wind moments contribute disproportionately (v³ is convex).

Ginsberg handles this with the Energy Pattern Factor (EPF)—a multiplier that corrects the mean-cubed estimate for the actual shape of the wind speed distribution:

Mean Power Density = ½ × ρ × EPF × vavg³

For Rayleigh-distributed winds (shape factor k = 2), EPF ≈ 1.91. This corrects for the distribution without requiring the full wind record. Then to get AEP:

AEP = Mean Power Density × A × 8760 × ηturbine × ηavailability

Where ηturbine accounts for the turbine’s conversion efficiency (Cp, the power coefficient—capped at 59.3% by the Betz limit, which is the theoretical maximum any turbine can extract from the wind) and ηavailability for downtime.

This is more physical—you’re deriving CF from the wind resource rather than assuming it. For the Scotian Shelf, with mean winter wind of 9.3 m/s and summer 7.1 m/s at hub height, the swept area method produces a site-specific estimate rather than borrowing a generic CF from global averages.
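
The seasonal figures can be turned into power densities directly. A sketch, using the Rayleigh EPF of 1.91 from above (function names are ours; this stops at power per square metre of rotor, before turbine efficiency and availability):

```python
RHO = 1.225   # kg/m3, sea-level air density
EPF = 1.91    # Rayleigh energy pattern factor

def power_density_naive(v_avg):
    """W/m2 if you (wrongly) just cube the mean wind speed."""
    return 0.5 * RHO * v_avg**3

def power_density_epf(v_avg):
    """W/m2 corrected for a Rayleigh-distributed wind."""
    return EPF * power_density_naive(v_avg)

for season, v in (("winter", 9.3), ("summer", 7.1)):
    print(f"{season}: naive {power_density_naive(v):.0f} W/m2, "
          f"EPF-corrected {power_density_epf(v):.0f} W/m2")
```

Winter at 9.3 m/s gives roughly 493 W/m² naive versus 941 W/m² EPF-corrected—the distribution shape nearly doubles the estimate.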

What the swept area method hides

It treats each turbine as if it sees the undisturbed wind. In reality, downstream turbines sit in the wakes of upstream ones. A 926-turbine farm at 4.2D spacing will have interior turbines seeing 70–80% of the freestream velocity. Since power scales as v³, that 20–30% velocity deficit translates to 50–65% power loss for those turbines.

METHOD 3 Wake Modeling (hours to days, needs wind distribution + layout)

This is PyWake territory—PyWake is an open-source wind farm simulation tool (developed by DTU Wind Energy) that models how upstream turbines reduce wind speed for downstream ones. You specify the turbine layout, the wind climatology (direction + speed distribution), and a wake deficit model. The simulation propagates wakes through the farm, computing the actual wind speed each turbine sees, and integrates over all wind conditions to produce AEP.

Here’s where v³ bites hardest. Consider a turbine sitting 5D downstream of another in a 9.3 m/s winter wind. The Bastankhah–Porté-Agel Gaussian (bell-curve shaped) deficit model—used in Ma et al. (2025)—predicts the centerline velocity deficit from the wake expansion rate (k* = 0.04, typical for offshore low-turbulence conditions) and the upstream turbine’s thrust coefficient (CT—a measure of how hard the rotor pushes back against the wind; CT ≈ 0.78 for the IEA 15 MW reference turbine—the benchmark design used in the Ma et al. study—at 9.3 m/s, which is below rated speed). At 5D downstream, the model gives a 28% velocity deficit.

Your first instinct might be: 28% less wind, 28% less power. But the cubic says otherwise:

  • Freestream turbine sees 9.3 m/s → P ∝ (9.3)³ = 804
  • Wake-affected turbine sees 6.7 m/s → P ∝ (6.7)³ = 301

That’s 63% less power, not 28%. The cubic more than doubles the impact of the velocity deficit. And in a dense 926-turbine farm, most interior turbines are wake-affected.
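
The centerline arithmetic is short enough to check. A sketch using the Bastankhah–Porté-Agel (2014) Gaussian wake-width parameterization (the function name is ours; ε and β follow the paper’s expressions):

```python
import math

def bpa_centerline_deficit(ct, k_star, x_over_d):
    """Fractional centerline velocity deficit at x_over_d diameters downstream."""
    beta = 0.5 * (1 + math.sqrt(1 - ct)) / math.sqrt(1 - ct)
    eps = 0.2 * math.sqrt(beta)              # initial wake half-width
    sigma_over_d = k_star * x_over_d + eps   # Gaussian wake width at x
    return 1 - math.sqrt(1 - ct / (8 * sigma_over_d**2))

deficit = bpa_centerline_deficit(ct=0.78, k_star=0.04, x_over_d=5)
v_wake = 9.3 * (1 - deficit)                 # wind speed seen downstream
power_loss = 1 - (v_wake / 9.3) ** 3         # v3-amplified loss
print(f"deficit {deficit:.0%}, power loss {power_loss:.0%}")
```

With the text’s inputs (CT = 0.78, k* = 0.04, 5D spacing) this returns a 28% velocity deficit and a 63% power loss, matching the numbers above.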

Wake losses for the Scotian Shelf scenarios range from 19% (sparse layout, winter) to 46% (dense layout, summer), according to the Ma et al. (2025) simulations. For Middle Bank specifically, the losses are 22% in winter and 41% in summer. At the high end, nearly half the energy you’d expect from nameplate calculations never materializes—a correction too large for any regulator to wave through on trust.

This is what PyWake computes: the v³-amplified impact of every upstream turbine on every downstream one, integrated over all wind directions and speeds across the full year.

The Ladder

| Method | Input | What it captures | What it misses |
| --- | --- | --- | --- |
| Nameplate | N, Prated, assumed CF | Quick plausibility check | Everything about the site |
| Ginsberg | vavg, A, EPF | Wind resource physics, v³ | Wake interactions, layout effects |
| PyWake | v(t,θ) (speed × direction), layout, turbine curves | Wake losses, spacing trade-offs | (This is the target capability) |

Each method reveals a limitation that motivates the next. And the single thread connecting all three is why power scales as v³—because understanding the cubic relationship tells you why the nameplate method hides so much, why the EPF correction exists, and why wake-induced velocity deficits are so devastating.

That’s what this document builds.


The Starting Point

The power available in wind passing through a turbine’s swept area is:

P = ½ × ρ × A × v³

Where:

  • ρ = air density (kg/m³)

  • A = swept area (m²)

  • v = wind velocity (m/s)

The formula is easy to derive—v appears in the mass flow rate (ρ × A × v) and v² appears in kinetic energy (½ × m × v²), so power scales as v³. The math is straightforward.

What’s less obvious is why we work with power at all. Why not go directly from energy density (½ × ρ × v²) to annual energy production? Why the detour through instantaneous power?

This document develops an intuition for that question.


The Cylinder Mental Model

Imagine standing at a wind turbine and watching air flow through the rotor over an entire year. You could visualize this as an impossibly long cylinder:

  • Cross-section = the swept area (π × D² / 4)

  • Length = the total distance air has traveled past the rotor over the year

    If the wind blew at a constant 10 m/s for a year, your cylinder would be about 315 million meters long (10 m/s × 31.5 million seconds).

To find the total energy, you might try:

Energy = (energy density) × (volume)

The energy density of moving air is ½ × ρ × v² (joules per cubic meter). The volume is A × L, where L is the cylinder length. Multiply and done?

Not quite. Here’s where it gets awkward.


The Awkwardness: A Cylinder That Won’t Cooperate

The wind doesn’t blow at a constant speed. Your cylinder is made of “slices”—some added during high-wind moments, some during calm. Each slice has its own energy density depending on what v was when that slice passed through.

You might still try to salvage the simple approach:

Energy = (average energy density) × (total volume)

But you can’t cleanly separate these terms.

When v is high:

  • The cylinder extends faster (more meters of air arriving per second)

  • Those slices are energy-rich (½ × ρ × v² is large)


When v is low:

  • The cylinder extends slowly

  • Those slices are energy-poor

The high-v slices are both thicker (more length added per unit time) and richer (more joules per cubic meter). The low-v slices are both thinner and poorer.

This coupling wrecks any attempt at simple averaging. If you average energy density across time, you underweight the thick, juicy slices. If you try to average across volume, you need v for both terms—energy density (½ × ρ × v²) AND slice thickness (v × dt). Both depend on v, and v is different for every slice. You’re back to needing the full wind record anyway.

Total energy ≟ ½ρ · v̄² · A · v̄ · t = ½ρA · v̄³ · t    ← WRONG

Why wrong? Because the cube amplifies differences. A gust at 12 m/s contributes (12)³ = 1,728 to the energy integral, while a lull at 6 m/s contributes only (6)³ = 216. The gust is worth 8× the lull, not 2×. Averaging the wind speed before cubing it buries this asymmetry.

Energy = Σ  ½ρA · v(t)³ · Δt    ← sum over each hour

The regulator’s takeaway

When a developer reports “mean wind speed 9.3 m/s,” that single number is not enough to evaluate their AEP claim. Two sites with identical means but different variability will produce different amounts of energy—and the gustier site wins, thanks to the v³ amplification.
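
A toy illustration of why the mean alone is not enough (the hourly series are fabricated, but both have exactly a 9 m/s mean):

```python
import numpy as np

steady = np.full(8760, 9.0)                             # constant 9 m/s all year
gusty = np.where(np.arange(8760) % 2 == 0, 5.0, 13.0)   # alternating 5 and 13 m/s

assert steady.mean() == gusty.mean() == 9.0   # identical mean wind speeds
# but energy is proportional to mean(v^3), not mean(v)^3:
print(np.mean(steady**3))   # 729.0
print(np.mean(gusty**3))    # (5**3 + 13**3) / 2 = 1161.0
```

Same mean, yet the gusty site delivers about 59% more energy (1161 / 729 ≈ 1.59)—pure v³ convexity.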

A Geophysics Parallel: Degrees of Entanglement

To see why this is so stubborn, consider a spectrum of cases from reservoir geophysics:

Core data (you can measure each property independently):

In a layered reservoir, each bed has a permeability (k) and a thickness (h). From core samples, you measure them separately—ruler for thickness, core plug for permeability. A thick layer can have low permeability; a thin layer can have high permeability. They’re independent. Averaging works (arithmetic, harmonic, or geometric depending on flow geometry).

Seismic inversion (the properties are independent, but the measurement tangles them):

Now try to estimate k and h from seismic reflection data. You don’t see them separately anymore. The seismic response convolves them—a thick low-k layer might look like a thin high-k layer. They’re physically independent, but entangled in the measurement. You can try to untangle them, but it’s hard.

Wind (the two properties are the same variable):

Energy density is ½ × ρ × v². Slice thickness is v × dt. Both ARE v. There’s no underlying separation to recover. It’s not that the measurement convolves them—they’re the same variable wearing two hats.

| Case | Property vs. Weight | Separable? |
| --- | --- | --- |
| Core data | k and h independent, measured separately | Yes |
| Seismic inversion | k and h independent, convolved in measurement | Hard |
| Wind | ½ρv² and v×dt are both v | Impossible—nothing to untangle |

Wind sits at the extreme end: the entanglement isn’t observational, it’s definitional.


The Root Cause: The Carrier IS the Cargo

Most energy delivery systems have a carrier and a cargo that are independent.

The Truck and Coal Analogy

Imagine you’re receiving coal deliveries by truck. Two things determine how much energy arrives per hour:

  1. How fast the trucks arrive (delivery rate)

  2. How much energy is in each truckload (energy content)

These are independent. You could:

  • Speed up the trucks without changing the coal quality

  • Switch to higher-grade coal without changing the delivery schedule

  • Double one while halving the other

The truck’s velocity has nothing to do with the coal’s BTU content. Two separate knobs, two separate decisions.

Concrete examples of this independence:

  • Slow trucks, high-grade coal: One delivery per week, but it’s anthracite. Few arrivals, lots of BTUs per ton.

  • Fast trucks, low-grade coal: Ten deliveries per day, but it’s lignite. Frequent arrivals, few BTUs per ton.

Both are perfectly coherent. You could even tune them to deliver the same total energy per month. The truck schedule and the coal grade are set by different people making different decisions—the dispatcher and the mine, say.

This independence is typical of energy delivery systems:

| System | Carrier | Cargo |
| --- | --- | --- |
| Coal truck | Truck (speed adjustable) | Coal (energy content independent of truck speed) |
| Power line | Wire (current adjustable) | Electrons (voltage adjustable independently) |
| Gas pipeline | Pipe flow (rate adjustable) | Gas (BTU content independent of flow rate) |

You can speed up delivery without changing what’s being delivered. Two knobs.

Wind Breaks This Independence

Wind is different. There are no trucks. The air’s motion delivers it to you, and the air’s motion is the energy. There is no “air truck” bringing “energy cargo.” The velocity that transports air to your rotor is the same velocity that determines how much kinetic energy that air contains.

Think about what would need to be true for wind to behave like coal trucks: you’d need slow-moving air that somehow contained lots of kinetic energy, or fast-moving air with little energy. That’s a contradiction. The air’s kinetic energy is ½ × m × v², where v is the same velocity that’s bringing it to you.

The impossible wind analogues would be:

  • Slow breeze carrying “anthracite air” (high energy density)

  • Fast wind carrying “lignite air” (low energy density)

These don’t exist. There’s no mine selecting the air’s energy grade independently of the velocity that delivers it. The energy grade is v². The dispatcher and the mine are the same person, turning the same knob.

Coal trucks have two degrees of freedom. Wind has one.

One phenomenon, two consequences. One knob.

A Bridge Analogy: The Bullet Conveyor Belt

Imagine a conveyor belt covered with bullets, all pointing at a target. The bullets are arranged in rows across the belt. When they reach the end, they fly off and hit the target.

You have two ways to increase the damage:

Add more bullets per row (wider rows):

Each meter of belt carries more bullets. More bullets hit the target per second. But each bullet hits just as hard as before. Double the bullets per row, double the damage. Simple.

Speed up the belt:

Here’s where it gets strange. Speeding up the belt does two things at once:

  • Bullets arrive faster (more hits per second)

  • Each bullet is moving faster when it flies off, so it hits harder (damage per bullet goes up)

You can’t get one without the other. There’s no way to make bullets arrive faster while keeping them gentle, or make them hit harder while keeping arrivals slow. One dial, two consequences.

That’s wind.

Air density and rotor size are like bullets per row—you can adjust them separately. But wind speed is like belt speed. When v goes up:

  • More air arrives per second (delivery rate, proportional to v)

  • Each parcel of air carries more punch (energy density, proportional to v²)

Multiply them together: v × v² = v³.

The belt speed controls both how often bullets arrive and how hard they hit. Wind speed controls both how fast air arrives and how much energy it carries. One knob. Two consequences. That’s where the cubic comes from.

This is why v appears twice in the power equation:

  • Delivery rate (volume flow): A × v

  • Energy content (energy density): ½ × ρ × v²

Multiply them: ½ × ρ × A × v³

The v² and the v aren’t two separate variables that happen to move together. They’re two aspects of a single physical reality — one velocity, showing up twice in the equation for two different physical reasons. You cannot crank up the delivery rate while holding energy content fixed. The air delivers itself.


The Firehose Intuition

You’re standing in front of a firehose. Someone doubles the water velocity.

You don’t get hit by faster water AND more water as if those were two separate decisions. There’s one dial: velocity. Turning it up necessarily does both:

  • Each drop hits harder (v²)—because it’s moving faster

  • More drops arrive per second (v)—because they’re moving faster

Same cause, two consequences.

Total punishment: each drop hits 2² = 4× harder, and 2× as many drops arrive per second, so 4 × 2 = 8×

That’s the v³. Not two correlated effects, but one effect with two faces.


Why Integration Solves the Problem

Given the coupling, how do we actually calculate annual energy production?

Integration refuses to average.

Instead of trying to summarize the whole year with bulk quantities, integration says:

“Fine. I’ll go moment by moment. At this instant, v = 7 m/s. What’s the power? Good. Now the next instant, v = 7.2 m/s. What’s the power? Good. Next…”

At each infinitesimal moment, v is just one number. The coupling is trivially resolved—the same v goes into both the “how fast is the cylinder growing” calculation and the “how rich is this slice” calculation.

Power right now = ½ × ρ × A × v³ right now

No averaging. No untangling. Just one v, doing its two jobs, at this instant.

Then add up all the instants:

Energy = integral of P dt = integral of ½ × ρ × A × v³ dt

The Insight

Integration doesn’t untangle the coupling. It shrinks to a scale where the coupling doesn’t matter—because at an instant, there’s nothing to correlate. There’s just one v, with its two consequences, right now.

The sum of countless “right nows” is your answer.


When Would Averaging Work? A Thought Experiment

To sharpen the intuition, ask: what would need to be true for simple averaging to work?

The Bubble Cylinder

Return to the cylinder mental model, but change one thing. Imagine the cylinder always advances at constant speed—say, 10 m/s, all year. The energy isn’t carried by the air’s motion anymore. Instead, imagine energy as “bubbles” suspended in the air, and what varies moment to moment is the bubble density.

Now you can average:

Energy = (average bubble density) × (fixed volume)

The cylinder grows at a constant rate. Some hours have dense bubbles, some have sparse bubbles, but each hour contributes the same thickness of cylinder. The two terms—total volume and average energy density—are decoupled. Multiply at the end, done.

This is mathematically identical to the coal truck. The carrier (cylinder advancing at constant speed) is independent of the cargo (bubble density). Two knobs.

A Physical Example: Hot Water in a Pipe

What’s a real system with varying carrier speed but constant cargo density?

A pipe delivering hot water. The pump speed varies—sometimes fast, sometimes slow. But the thermal energy per liter is set by the water temperature, say 60 deg C. That’s independent of flow rate.

  • Flow fast → more liters per second, each at 60 deg C

  • Flow slow → fewer liters per second, each still at 60 deg C

The energy density (joules per liter, set by temperature) is decoupled from the delivery rate (liters per second, set by pump speed). Two knobs.

You can work with averages:

Energy delivered = (energy per liter) × (total liters delivered)

Or: (constant energy density) × (average flow rate) × (time)

The varying pump speed affects how much volume arrives, but each parcel’s richness is the same regardless of how fast it traveled.
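
This decoupling can be made concrete with a few lines (fabricated pump data; the 167 kJ/L figure assumes water roughly 40 °C above ambient):

```python
import numpy as np

rng = np.random.default_rng(7)
flow = rng.uniform(500.0, 2000.0, size=8760)   # liters delivered each hour, varying
e_per_liter = 167_000.0                        # J/L, constant: set by temperature

# Cargo density is constant, so averaging is exact (not an approximation):
total_j = np.sum(e_per_liter * flow)
via_average = e_per_liter * flow.mean() * flow.size
print(np.isclose(total_j, via_average))        # True
```

The two computations agree by linearity. For wind there is no such shortcut, because the "e_per_liter" analogue is ½ρv² and varies with the same v that sets the flow.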

Why Wind Doesn’t Give You This Escape

For wind to behave like hot water, you’d need the air to carry something whose concentration doesn’t depend on wind speed—say, a constant pollen count per cubic meter. Wind speed varies, but pollen density stays fixed. Now the cylinder’s “cargo” is independent of how fast it’s growing. Average pollen density, multiply by total volume, done.

But wind’s kinetic energy doesn’t work this way. The “temperature” of the air—its energy density, ½ × ρ × v²—is its velocity. There’s no separate thermostat. The air’s motion is both the carrier and the cargo.

This is why integration isn’t optional. The coupling between delivery rate and energy content is fundamental to what kinetic energy is. You can’t engineer around it. You can only shrink to instants where there’s nothing to decouple.


Two Paths to the Integral: Measurement vs. Prediction

The integration solution demands that we know v at each instant. In practice, there are two ways to get this:

Path 1: Measure the Wind Record Directly

Deploy instruments and record v(t) over time. For offshore wind, this typically means floating LIDAR (Flidar)—a buoy-mounted remote sensing system that measures wind speed at hub height. A 1-3 year measurement campaign gives you a detailed wind speed record.

With this record, you can:

  • Bin the data by wind speed (how many hours at 4 m/s, 5 m/s, 6 m/s…)

  • Calculate power for each bin

  • Sum to get annual energy production

This is the integral computed directly from measurements.

Path 2: Predict from a Probability Distribution

The Ladder’s Method 2 already used the EPF shortcut. Here we see where it comes from — why the correction factor exists at all. What if you only have the average wind speed at a site? You might know v_avg = 9 m/s from regional data or a short measurement campaign, but not the full distribution.

Here’s the problem: you can’t just compute P = ½ × ρ × A × (v_avg)³.

Because of the v³ nonlinearity, mean(v³) ≠ mean(v)³ — the average of the cubes always exceeds the cube of the average.

The solution: assume a probability distribution for wind speeds. The most common choice is the Rayleigh distribution (a special case of Weibull with shape parameter k=2), which fits many sites reasonably well.

For a Rayleigh distribution, the ratio mean(v³) / mean(v)³ works out to approximately 1.91. This is the Energy Pattern Factor (EPF)—the same EPF we used in the Ladder’s Method 2, now derived from the distribution.
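
The 1.91 figure can be verified both from the exact Rayleigh moments and by simulation (a sketch; the scale parameter is arbitrary because EPF is scale-free):

```python
import math
import numpy as np

# Rayleigh moments: E[v^k] = sigma^k * 2**(k/2) * Gamma(1 + k/2),
# so EPF = E[v^3] / E[v]^3 is independent of sigma.
epf_exact = (2**1.5 * math.gamma(2.5)) / (math.pi / 2)**1.5

# Monte Carlo cross-check on a million Rayleigh draws
v = np.random.default_rng(0).rayleigh(scale=1.0, size=1_000_000)
epf_mc = np.mean(v**3) / np.mean(v)**3

print(f"exact {epf_exact:.3f}, Monte Carlo {epf_mc:.3f}")   # exact: 1.910
```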

The tradeoff:

  • Flidar measurement → accurate, site-specific, expensive, time-consuming

  • EPF prediction → quick, cheap, approximate, assumes Rayleigh distribution holds

For preliminary screening (“Is this site worth investigating?”), the EPF approach is often sufficient. For detailed project assessment and financing, you need the full wind speed distribution — either from a measurement campaign or from validated reanalysis data. The next section shows how that distribution is used.


From Power to Annual Energy Production

In practice, this integral is evaluated using wind speed statistics:

  1. Measure (or model) the distribution of wind speeds at a site—how many hours per year at 4 m/s, at 5 m/s, at 6 m/s, etc.

  2. For each wind speed bin, calculate power using P = ½ × Cp × ρ × A × v³ (where Cp is the turbine’s efficiency, limited by the Betz limit of 59.3%)

  3. Multiply each power by the hours at that wind speed

  4. Sum across all bins

The result is Annual Energy Production (AEP), typically in MWh or GWh per year.

This is the integral in discrete form: breaking the year into bins where v is approximately constant, computing power for each bin, multiplying by time, summing.
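
The binned procedure can be sketched for a hypothetical Rayleigh site (Cp = 0.45, 9 m/s mean, and the 240 m rotor are illustrative; a real assessment would use the turbine's measured power curve, which also caps output at rated power):

```python
import math

RHO, D, CP = 1.225, 240.0, 0.45            # air density, rotor diameter, assumed Cp
A = math.pi * D**2 / 4                      # swept area, m2
V_MEAN = 9.0
SIGMA = V_MEAN / math.sqrt(math.pi / 2)     # Rayleigh scale from the mean

def rayleigh_cdf(v):
    return 1 - math.exp(-v**2 / (2 * SIGMA**2))

aep_mwh = 0.0
for v in range(0, 30):                      # 1 m/s bins, midpoint wind speed
    hours = 8760 * (rayleigh_cdf(v + 1) - rayleigh_cdf(v))   # step 1
    power_mw = 0.5 * CP * RHO * A * (v + 0.5)**3 / 1e6       # step 2
    aep_mwh += power_mw * hours                              # steps 3-4
print(round(aep_mwh), "MWh/yr per turbine (idealized, no rated-power cap)")
```

This lands around 150 GWh/yr for one turbine—unrealistically high precisely because no cap at rated power is applied, which is the power curve's job.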


The Scaling Relationships (Summary)

| Change | Power scales as | Doubling gives you |
| --- | --- | --- |
| Wind speed | v³ | 8× power |
| Rotor diameter | D² | 4× power |
| Swept area | A | 2× power |

Why These Matter

The v³ dominates everything. A mediocre turbine at a windy site beats an excellent turbine at a calm site.

Error propagation is brutal. A 10% error in wind speed estimates becomes a ~33% error in power predictions (1.1³ ~ 1.33). This is why wind resource assessment demands years of careful measurement.

Power vs. Energy: Power (watts) is the instantaneous rate—what the physics gives you. Energy (watt-hours) is the accumulated total—what you sell. The bridge between them is integration over time.


The Swept Area Method: The Engineer’s Lever

So v³ dominates the physics. Why do wind energy textbooks make such a fuss about the “swept area method”?

Because you can’t control the wind. You can control the rotor.

The Knobs You Actually Have

When designing or selecting a turbine, you don’t get to dial up v. The wind is what it is at your site. What you can choose is rotor diameter—and through it, swept area.

This makes the D² relationship the engineer’s primary lever:

| Rotor diameter | Swept area | Relative power |
| --- | --- | --- |
| 50 m | ~2,000 m² | 1× |
| 100 m | ~7,900 m² | 4× |
| 150 m | ~17,700 m² | 9× |
| 200 m | ~31,400 m² | 16× |

Going from a 50m rotor to a 200m rotor—a 4x increase in diameter—gives you 16x the power. That’s a big deal.

Why Turbines Keep Getting Bigger

In the 1980s, rotor diameter was about 15 meters. Today’s largest offshore rotors exceed 230 meters. That’s roughly a 15x increase in diameter, which means:

  • (15)² ~ 225x more swept area

  • 225x more power per turbine (at the same wind speed)

This is why the industry relentlessly pursues larger rotors despite the engineering challenges. The scaling reward is enormous—even though it’s “only” quadratic.

The Terminology Trap

Ginsberg (2019) writes:

“Power increases exponentially with swept area”

This is wrong — the relationship is quadratic, not exponential. But the impulse is understandable: Ginsberg is trying to emphasize that doubling the diameter does far more than double the output.

Better ways to convey the same idea:

  • “Power scales with the square of rotor diameter—double the diameter, quadruple the output”

  • “Going from an 80m to a 160m rotor doesn’t double production—it quadruples it”

  • “The swept area method matters because area is the one variable you actually control”

  • “Larger rotors capture dramatically more energy” (vague but not wrong)

What to avoid:

  • “Exponential” (mathematically incorrect—different growth class entirely)

  • “Increases rapidly” without quantifying (invites misinterpretation)

The Full Picture

The v³ relationship tells you what physics allows. The D² relationship tells you what engineering can capture. Together:

P = ½ × ρ × A × v³ = ½ × ρ × (π × D² / 4) × v³

You can’t change ρ (air density is what it is). You can’t change v (the wind blows as it will). You can change D—and every doubling of diameter buys you a factor of four.

That’s why swept area deserves its own “method” in the textbooks. Not because the scaling is exponential—it isn’t. But because it’s the lever you actually get to pull.


Terminology Note

These relationships are:

  • Linear in area (P ~ A)

  • Quadratic in diameter (P ~ D²)

  • Cubic in velocity (P ~ v³)

None of them are exponential. True exponential growth (P ~ eˣ or P ~ 2ˣ) means the exponent contains the variable. These are polynomial relationships—the variable is in the base, not the exponent.

The distinction matters: exponential functions eventually outgrow any polynomial. Saying “exponential” when you mean “cubic” or “quadratic” isn’t just imprecise—it’s a different class of mathematical behavior.
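
The difference in growth class is easy to see numerically: the cubic wins at small x, the exponential overtakes around x = 10 and then runs away.

```python
# Cubic (x**3) vs true exponential (2**x) growth
for x in (1, 5, 10, 20, 30):
    print(x, x**3, 2**x)
# at x=5:  125 > 32   (cubic ahead)
# at x=10: 1000 < 1024 (exponential overtakes)
# at x=20: 8000 << 1048576
```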


Key Takeaways

  1. Wind power scales as v³ because velocity does double duty: it determines both how fast air arrives and how much energy that air contains.

  2. The carrier is the cargo. Unlike most energy systems, you can’t decouple delivery rate from energy content. One knob, two consequences.

  3. The cylinder model helps visualize annual energy as a long tube of variable-density air—but the coupling between slice thickness and slice richness prevents simple averaging.

  4. Integration solves this by shrinking to moments where there’s only one v, then summing. It doesn’t untangle the coupling; it sidesteps it.

  5. Power is the physics; energy is the economics. The cubic relationship governs instantaneous extraction. Integration over real wind distributions gives you what the turbine actually produces—and what investors actually care about.

  6. The methods ladder follows from v³. The nameplate method hides the cubic sensitivity inside an assumed capacity factor. The Ginsberg method exposes it through the EPF correction. Wake modeling confronts it directly: a 25% velocity deficit in a wake means (0.75)³ = 42% of undisturbed power. Each method up the ladder gives you more honest engagement with the cubic.



Closing the Loop: Why This Path?

A natural question: why do we go through energy density and power at all? Why not calculate energy directly?

Here’s the logic chain:

Step 1: Energy Density is the Fundamental Physics

The kinetic energy per cubic meter of moving air is:

Energy density = ½ × ρ × v²

This is bedrock—it falls straight out of KE = ½ × m × v².

Step 2: But Energy Density Alone is Stuck

You might want to say:

Total energy = (energy density) × (volume)

But what volume? The air isn’t sitting still. It’s a flow, not a parcel. And worse: when v changes, the energy density changes AND the rate at which volume passes through changes. The carrier-is-the-cargo coupling makes any direct calculation treacherous.

Step 3: Multiply by Flow Rate to Get Power

Introduce the volume flow rate (A × v) and multiply:

Power = (energy density) × (volume flow rate) = ½ × ρ × v² × A × v = ½ × ρ × A × v³

Power is the natural quantity for a continuous flow. It answers: “Right now, at this instant, how much energy per second is passing through?”

Step 4: Power Lets You Work Instant by Instant

This is the key move. At each instant, v is just one number. The coupling that wrecked the cylinder averaging is trivially resolved—there’s nothing to correlate. One v, doing its two jobs (setting energy density AND delivery rate), right now.

No averaging required. No untangling. Just: what’s v? Compute power. Done.

Step 5: Integrate Power Over Time to Get Energy

Sum up the instants:

Energy = integral of P dt = integral of ½ × ρ × A × v³ dt

Each moment contributes its power × its duration. The integral handles the fact that v changes from moment to moment. The result is total energy—MWh, GWh, what you actually sell.

The Path

Energy density (½ × ρ × v²)
|
v
× flow rate (A × v)
|
v
Power (½ × ρ × A × v³) <-- work instant by instant here
|
v
× time (integrate)
|
v
Energy (MWh, GWh/year)

We don’t go through power because it’s convenient. We go through power because it’s the only clean waypoint when the carrier is the cargo and v won’t hold still.

This is exactly what PyWake does at industrial scale: for each turbine in a 926-unit farm, at each hourly wind condition, it computes the local wind speed (accounting for upstream wakes), evaluates v³, and sums the result. The physics in this document is the physics inside that software.


References

Bastankhah, M. and Porté-Agel, F. (2014). A new analytical model for wind-turbine wakes. Renewable Energy, 70, 116–123. doi:10.1016/j.renene.2014.01.002

Gaertner, E., Rinker, J., Sethuraman, L., Zahle, F., Anderson, B., Barter, G., Abbas, N., Meng, F., Bortolotti, P., Skrzypinski, W., Scott, G., Feil, R., Ber, H., Dykes, K., Shields, M., Allen, C., and Viselli, A. (2020). Definition of the IEA 15-Megawatt Offshore Reference Wind Turbine. NREL/TP-5000-75698.

Ginsberg, M. (2019). Harness It: Renewable Energy Technologies and Project Development Models Transforming the Grid. Business Expert Press. ISBN: 978-1-63157-931-8.

Ma, Y., Zhai, L., Nickerson, E. C., Bhatt, U. S., Bhatt, M. P., and Lin, H. (2025). Wind data assessment and energy estimation on the Scotian Shelf. Wind Energy Science, 10, 2965–2999. doi:10.5194/wes-10-2965-2025

Pedersen, M. M., van der Laan, P., Friis-Møller, M., Rinker, J., and Réthoré, P.-E. (2019). DTUWindEnergy/PyWake. Zenodo. doi:10.5281/zenodo.2562662

Using Python to calculate northern hemisphere’s surface land coverage

Yesterday during my lunch break I was rather bored; it is unseasonably cold for the fall, even in Calgary, and a bit foggy too.
For something to do I browsed the Earth Science beta on Stack Exchange looking for interesting questions (as an aside, I encourage readers to look at the unanswered questions).
There was one that piqued my curiosity, “In the northern hemisphere only, what percentage of the surface is land?”.
It occurred to me that I could put together an answer using an equal area projection map and a few lines of Python code; and indeed in 15 minutes I whipped up this workflow:

  • Invert and import this B/W image of equal area projection (Peters) for the Northern hemisphere (land = white pixels).

Source of original image (full globe): Wikimedia Commons

  • Store the image as a Numpy array.
  • Calculate the total number of pixels in the image array (black + white).
  • Calculate the total number of white pixels (1s) by summing the entire array. Black pixels (0s) will not contribute.
  • Calculate percentage of white pixels.

The result I got is 40.44%. Here’s the code:

# import libraries
import numpy as np
from skimage import io
from matplotlib import pyplot as plt

# import image (as_gray replaces the deprecated as_grey argument)
url = 'https://mycartablog.com/wp-content/uploads/2018/09/peters_projection_black_north.png'
north_equal_area = io.imread(url, as_gray=True)

# check the image
fig = plt.figure(figsize=(20, 10))
ax = fig.add_subplot(1, 1, 1)
ax.set_xticks([])
ax.set_yticks([])
plt.imshow(north_equal_area, cmap = 'gray');

# Do the calculations
r, c = np.shape(north_equal_area)
sz =  r*c
s = np.sum(north_equal_area)
print(np.round(s/sz*100, decimals=2))
>>> 40.44

As suggested in a comment to my initial answer, I ran the same Python script for the entire globe and got the expected 30% land coverage:

# import image
url = 'https://mycartablog.com/wp-content/uploads/2018/09/peters_projection_black_full.png'
equal_area = io.imread(url, as_gray=True)

# Do the calculations 
r1, c1= np.shape(equal_area)
sz1 =  r1*c1
s1 = np.sum(equal_area)
print(np.round(s1/sz1*100, decimals=2))
>>> 30.08

 

Machine learning in Planetary Science: compressing Pluto images with scikit-learn and PCA

In a previous post I showed some of the beautiful new images of Pluto from the New Horizons mission, coloured using the new Matplotlib perceptual colormaps:

More recently I was experimenting with Principal Component Analysis in scikit-learn, and one of the things I used it for was compression of some of these Pluto images. Below is an example of the first two components from the False Color Pluto image:

You can take a look at the Python code available on this Jupyter Notebook. There are of course better ways of compressing images, but this was a fun way to play around with PCA.
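For readers who don't want to open the notebook, here is a minimal sketch of the idea with scikit-learn's PCA, run on a synthetic grayscale array instead of the Pluto image (which isn't bundled here). Rows are treated as samples; keeping only k components gives the compressed representation, and inverse-transforming reconstructs an approximation.

```python
# PCA image compression sketch: keep 10 of 80 components.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
img = rng.random((100, 80))             # stand-in for a grayscale image

pca = PCA(n_components=10)              # keep 10 components
coeffs = pca.fit_transform(img)         # (100, 10) compressed coefficients
approx = pca.inverse_transform(coeffs)  # (100, 80) reconstruction

print(img.shape, coeffs.shape, approx.shape)
```

On a real image, where rows are strongly correlated, the first few components capture most of the variance; on random data like this, the reconstruction is deliberately poor, which is itself a nice illustration of what PCA exploits.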

In a follow-up post I will use image registration and image processing techniques to reproduce from the raw channels NASA’s Psychedelic Pluto Image.

 

 

Python, Einstein, and the discovery of Gravitational Waves at PyCon Italia

Introduction

At 11:53 am on September 14, 2015, Marco Drago, an Italian postdoctoral scientist at the Max Planck Institute for Gravitational Physics (AKA the Albert Einstein Institute) in Hannover, Germany, was the first person to see it: the sophisticated instruments at the Laser Interferometer Gravitational-Wave Observatory (LIGO) had detected a very likely candidate signal from gravitational waves. The signal, caused by the collision and merger of two massive black holes, proved to be real; the discovery, announced to the public a few months later, demonstrated one of Albert Einstein’s predictions exactly 100 years after he formulated his general theory of relativity. How exciting!

I was aware, from a Python LIGO thread on Reddit, that Python was used both in LIGO’s control room and for some of the scientific work and data analysis. Also, one of the main figures in the discovery paper was made entirely in Python, and used a Matplotlib perceptual colormap (Viridis, the new default in mpl 2.0).

It was not, however, until I decided to attend (virtually, on YouTube) this year’s PyCon Italia that I realized how big a role Python had played. In this post, I will briefly summarize what Franco Carbognani, keynote speaker for day 2 (Python and the dawn of gravitational-wave astronomy), and Tito dal Canton (Python’s role in the detection of gravitational waves) presented on the science and technology of gravitational wave detection, and on Python’s contributions.

The science

Most of us have learned that gravity produces a curvature in spacetime. A less generally known, but major, prediction of general relativity is that a perturbation of spacetime produces gravitational waves, which are detected because they stretch and squeeze space, producing a measurable strain; this, however, requires a violent cosmological phenomenon involving large masses, relativistic speeds (approaching the speed of light), and asymmetrical acceleration (an abrupt change in those speeds, such as in a collision).

One such event is the one that generated the signal detected in September of last year. It occurred about 1,300 million years ago, when two large black holes spiralled towards one another and then merged into a single, stationary black hole, losing energy by radiating gravitational waves (ripples in spacetime) exactly as predicted by Einstein; the whole process took only a few tenths of a second, but involved a total of 65 solar masses (36 + 29), 3 of which were converted, mostly in the final instant of the merger, to gravitational-wave energy according to the famous relationship E = mc². This was so much energy that, had it been visible (electromagnetic), it would have been 50 times brighter than the entire universe. Yet even a huge event like that created very small waves (gravity is the weakest of the four fundamental interactions of nature), which merely displaced a pair of free-falling masses placed 3 km apart by a length 10,000 times smaller than the diameter of a proton. Measuring this displacement requires very complex arrays of laser beams, mirrors, and detectors measuring interference (interferometers), such as the two LIGO detectors in the United States and VIRGO in Italy.
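The 3-solar-mass figure above is easy to sanity-check with E = mc²; the constants below are the standard values, and the script is just the arithmetic from the paragraph.

```python
# Back-of-envelope check: 3 solar masses radiated as gravitational waves.
M_SUN = 1.989e30  # kg, mass of the Sun
C = 2.998e8       # m/s, speed of light

E = 3 * M_SUN * C**2  # joules
print(f"{E:.2e} J")   # on the order of 5e47 joules
```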

This video illustrates in detail how VIRGO (and LIGO) is able to detect the very weak waves; it was produced by Marco Kraan of the National Institute for Subatomic Physics in Amsterdam, and shown during Carbognani’s talk:

The figure below shows an illustration of the event’s three main phases and the corresponding, matched signals from the two LIGO interferometers.

LIGO detects gravitational waves from merging black holes. Illustration credit: LIGO, NSF, Aurore Simonnet (Sonoma State University).

Python’s many contributions

Franco Carbognani is VIRGO integration manager with the European Gravitational Observatory in Pisa. The portion of his talk focusing on Python and VIRGO’s control systems starts here. He told the audience that Python played a major role in building a complete automation layer on top of the real-time interferometer control, with online analysis of the data and warnings to the operator in case of anomalies, and also in GUI development and unification. Python was the obvious choice for these tasks because it is a compact, clear language, easy for beginners to learn, yet allowing very complex programming (functional, object-oriented, imperative, exceptions, etc.); its NumPy and SciPy libraries handle the complex math required without sacrificing speed (thanks to the optimized Fortran and C under the hood); a large collection of other libraries allows almost any task to be carried out, including (as mentioned above) Matplotlib for publication-quality graphs; finally, it is open source, and many in the community already used it (the commissioning, computing, and data analytics groups).

Tito dal Canton is a postdoctoral scientist at the Max Planck Institute in Hannover. The data analysis part of his talk starts here. The workflow he outlined, involving several separate pipelines, consists of retrieving the data, deciding which data can be analyzed, deciding whether it contains a signal, estimating its statistical significance, and estimating the signal parameters (e.g. masses of the stars, spin, and distance) by comparison with a model. A lot of this work runs entirely in Python using the GWpy and PyCBC packages. For the keen reader, one of the Jupyter Notebooks on the LIGO Python tutorials page replicates some of the signal processing and data analysis shown during his talk.
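The heart of "decide if it contains a signal" is matched filtering. Here is a toy NumPy version of the idea (not the actual PyCBC implementation): a known chirp-like template is slid along noisy data, and the correlation peak marks where the hidden signal sits. The template shape, noise level, and injection point are all invented for illustration.

```python
# Toy matched filter: correlate a known template against noisy data.
import numpy as np

rng = np.random.default_rng(42)
t = np.linspace(0, 1, 200)
template = np.sin(40 * t**2)       # toy "chirp": frequency sweeps upward

data = rng.normal(0, 0.5, 2000)    # Gaussian noise
data[1200:1400] += template        # bury the signal starting at sample 1200

# Slide the template along the data; the peak marks the best alignment.
snr = np.correlate(data, template, mode="valid")
print(int(np.argmax(snr)))         # recovers a location near sample 1200
```

The real pipelines do this against banks of thousands of waveform templates, in the frequency domain and with noise-weighted inner products, but the core operation is this correlation.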

Additional resources

MIT-Caltech video on Gravitational Waves detection.

New Horizons truecolor Pluto recolored in Viridis and Inferno

Oh, the new, perceptual Matplotlib colormaps…

Here’s one stunning, recent Truecolor image of Pluto from the New Horizons mission:

Original image: The Rich Color Variations of Pluto. Credit: NASA/JHUAPL/SwRI. Click on the image to view the full feature on New Horizon’s site

Below, I recolored using two of the new colormaps:

Recolored images: I like Viridis, but it is Inferno that really brings this image to life, because of its wider hue and lightness range!

 

NASA’s beautiful ‘Planet On Fire’ images and video

Credits: NASA’s Goddard Space Flight Center and NASA Center for Climate Simulation.

 

Click on the image to watch the original video on NASA’s Visualization Explorer site.

Read the full story on NASA’s Visualization Explorer site.

Going for the Moon

Introduction

While on a flight to Denmark a couple of years ago I happened to read this interview with Swedish astronaut Christer Fuglesang. Towards the end he talks about the feasibility (and proximity) of our future missions to the Moon. This subject always gets me excited. If there’s one thing I’m dying to see, it is a manned mission to the Moon and Mars in my lifetime (I was born in 1971, so I missed the first Moon landing by a hair).

I hope we do it soon Christer!

Why go back?

My personal take is that of the dreamer, the 12-year-old: why go back to the Moon? Because it’s there… I mean, look at it (photo credits: my uncle, Andrea Niccoli)!

Beautiful Moon. In evidence the Mare Crisium, and craters Cleomedes, Langrenus, and Vendelinus.

Beautiful red Moon.

Beautiful Moon. In evidence the Sinus Iridum in the top left, the Copernicus crater in the centre of the image, and the Montes Apenninus just north of it with the Eratosthenes crater.

Beautiful Moon. In evidence the Gassendi crater in the centre of the image, and the Tycho crater to the right with one of the Rays.

On a more serious note, this is what Lunar scientist Paul Spudis has to say about why we should go back:

 

Moon exploration resources, and educational and vintage material

Rift valleys rewrite moon’s fiery history

 

Moon gravity Grail

NASA’s LRO Creating Unprecedented Topographic Map of Moon

Moon composition mosaic Galileo

Recent geological activity Lunar Reconnaissance Orbiter

Fresh Crater

NASA’s Beyond Earth Solar System exploration program

We choose the Moon – Wonderful, interactive recreation of Apollo 11 Lunar Landing  – 40th Anniversary celebration

Raw Video: Restored Video of Apollo 11 Moonwalk

All Apollos’ Lunar Surface Journals

BBC – In video: When man went to the Moon

BBC –Aldrin: I was relieved to be second

Fun / idiotic stuff

Space: 1999 (cheesy but unforgotten ’70s TV show, full pilot) – Moon breaks away from Earth.

Dumb and Dumber – We landed on the moon – pure Jim Carrey genius!!!

Beautiful Geology from space

In my post Our Earth truly is art I talked about Earth as Art, NASA’s  e-book collection of wonderful satellite images of our planet, and posted my top 3 picks.

In NASA’s Perpetual Ocean animation I talk about a beautiful convergence of maps and art: The Turbulence of Van Gogh and the Labrador Shelf Current, and NASA’s Perpetual Ocean animation.

Here’s another gem: Van Gogh from Space Landsat 7 Acquired 7/13/2005, winner of NASA’s public contest to select the Top Five ‘Earth as Art’ Winners