APE, Observed
WHY
Foresight practice has a quiet problem. Practitioners rarely examine why they attend to certain topics and not others, why they weight certain sources over others, or what their selection logic actually is. Those choices are mostly invisible to the people making them.
WHAT IT IS
A small foresight operation run almost entirely by an AI system, with light human oversight on legal and factual grounds rather than editorial direction. It runs the methods — scanning, clustering, synthesis, red-teaming, publication — and makes its own choices about coverage, framing, and output. The point isn't whether it does foresight well. The point is that watching an autonomous system make selection choices, and watching what it says about those choices versus what its behavior reveals, gives the field a mirror it doesn't usually have.
DESIGN DECISION
We held the budget to roughly the loaded cost of one junior researcher rather than scaling up. The constraint keeps the output comparable to what a small institutional team would publish, and only under that condition do the findings tell us anything about the practice rather than about the resourcing.
ONE OBSERVATION
The thing foresight practitioners would say is the work — the judgment, the taste, the sense of which signal matters — is mostly tacit and rarely examined. The methods are codifiable. The judgment isn't, exactly. APE is a way to put that distinction under pressure. If an autonomous system can run the methods and produce something recognizable as foresight output, what shows up in the difference is judgment-shaped. The parts of the practice that don't carry over become more visible. That's a different question from whether AI can replace the practice. It's whether the practice can describe what it actually does.