Energy · measured

$.005 first ask.
$0 every ask after.

The model runs once. The behavioral profile lives in the file. Every follow-up is a lookup, not an inference. No new wattage.

methodology · single first pass · all follow-ups in-file

A 6-minute conversation. One full pass to seal the file. After that, every question — about cadence, sentiment, who hesitated, who pushed — is a key lookup against a profile that is already there.
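The one-pass-then-lookup model can be sketched in a few lines. Everything here is illustrative: the profile fields, the file name, and the `seal_profile` stand-in for the single model pass are assumptions, not the product's actual schema or API.

```python
import json

def seal_profile(transcript: str) -> dict:
    """Stand-in for the one full model pass that writes every behavioral
    signal into a static profile. This is the only inference step."""
    # Illustrative output of a single-pass analysis; field names are made up.
    return {
        "cadence": {"avg_turn_seconds": 9.4},
        "sentiment": {"overall": "guarded"},
        "hesitations": ["speaker_2"],
        "pushback": ["speaker_1"],
    }

def ask(profile: dict, key: str):
    """A follow-up is a key lookup against the sealed file:
    no model call, so no new compute."""
    return profile.get(key)

profile = seal_profile("...6-minute conversation...")
with open("profile.json", "w") as f:
    json.dump(profile, f)          # the profile now lives in the file

with open("profile.json") as f:    # any later question is a read, not a re-run
    sealed = json.load(f)

print(ask(sealed, "hesitations"))  # -> ['speaker_2']
```

The design choice this illustrates: once the pass is done, "who hesitated" costs a dictionary read, and the model never has to see the recording again.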

What the numbers mean

Recording length
6 minutes
First-pass cost
~$.005
First-pass energy
a fractional watt-hour
Follow-up cost
$0
Follow-up latency
<1s
Follow-ups per file
unbounded
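The economics in the table reduce to simple amortization: a one-time ~$.005 pass spread over every subsequent ask. A minimal sketch, using only the table's own figures:

```python
FIRST_PASS_USD = 0.005   # one-time cost to seal the file
FOLLOW_UP_USD = 0.0      # follow-ups are lookups, not inference re-runs

def cost_per_question(follow_ups: int) -> float:
    """Amortized cost per question: the first pass spread over all asks."""
    total = FIRST_PASS_USD + FOLLOW_UP_USD * follow_ups
    return total / (1 + follow_ups)

# One ask: the whole first-pass cost. A hundred asks: a two-hundredth
# of a cent each, and the curve keeps falling without bound.
single = cost_per_question(0)
hundred = cost_per_question(99)
```

Because the follow-up term is structurally zero, per-question cost decays as 1/(1+n) with no floor imposed by compute.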

Where else the savings stack

What 1% adoption looks like in 2030

Re-running AI models on the same media is the silent climate cost of the AI era. We make the re-run optional. Conservative estimates against today’s grid intensity and data-center water baselines:

Compute saved per year
~900 GWh
US homes powered for a year
~80,000
Cars taken off the road for a year
~78,000
Water never boiled in cooling towers
~1.6 billion gallons
CO2 not emitted
~360,000 metric tons

Estimates anchored to AI inference's share of global electricity (2024 baseline) and average data-center water-use intensity. Updated quarterly. Full energy and benchmarks report at v1.0 launch — methodology, baselines, third-party verification.
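The equivalencies above follow from standard per-unit factors. The coefficients below (kWh per home-year, kg CO2 per kWh, tonnes per car-year, gallons per kWh) are common published approximations chosen for illustration, not the report's verified baselines, so treat this as an order-of-magnitude check rather than the methodology itself:

```python
SAVED_GWH = 900                     # compute saved per year (from the table)
SAVED_KWH = SAVED_GWH * 1_000_000   # 1 GWh = 1,000,000 kWh

# Assumed equivalency factors: typical published values, not the
# report's own coefficients.
KWH_PER_HOME_YEAR = 10_700          # average US household electricity use
KG_CO2_PER_KWH = 0.4                # average grid carbon intensity
T_CO2_PER_CAR_YEAR = 4.6            # typical passenger vehicle, per year
GALLONS_PER_KWH = 1.8               # data-center water-use intensity

homes = SAVED_KWH / KWH_PER_HOME_YEAR            # roughly 80-85 thousand homes
co2_tonnes = SAVED_KWH * KG_CO2_PER_KWH / 1_000  # ~360,000 metric tons
cars = co2_tonnes / T_CO2_PER_CAR_YEAR           # ~78,000 cars
water_gal = SAVED_KWH * GALLONS_PER_KWH          # ~1.6 billion gallons
```

Under these assumed factors the car, CO2, and water figures land on the table's values, and the homes figure lands in the same range.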

The point

Most platforms re-run the model on every question. We re-run nothing. Once the behavioral profile is sealed, the energy budget for re-questioning is structurally zero. Not a promise — a property of the file.