What is FASO

FASO is an independent, neutrality-preserving organisation built to produce reproducible, audit-bound observations about safety-relevant discovery, change, and drift in advanced AI systems. We are not a regulator, not a lab, and not a command centre. We do not issue directives. We build trusted visibility: what changed, how it was verified, and what confidence is warranted, including what the evidence does not support.

FASO combines technology and human review to turn scattered signals into something serious decision-makers can use. Our systems collect and structure evidence into provenance-bound records with custody and replay discipline, then render it into bounded visualisations that show what changed, what is confidently supported, and where uncertainty remains. Human analysts then verify, interpret, and explain those outputs, internally for disciplined analysis and externally in publication-safe form. This allows labs, regulators, and safety bodies to understand an observation quickly without requiring privileged access to raw data.
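As a loose illustration of what "provenance-bound records with custody and replay discipline" could look like in practice, the sketch below chains each record to its predecessor by hash, so a later replay can recompute every hash and confirm the chain is intact. All names here (`ProvenanceRecord`, `append_record`, `verify_chain`) and the SHA-256 choice are hypothetical assumptions for illustration, not FASO's actual implementation.

```python
import hashlib
import json
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    source: str            # where the evidence was collected (hypothetical field)
    payload: dict          # structured observation content
    collected_at: str      # ISO-8601 UTC timestamp
    prev_hash: str         # hash of the preceding record (custody chain)
    record_hash: str = ""  # filled in when the record is sealed

    def seal(self) -> "ProvenanceRecord":
        # Canonical JSON so replaying the same inputs yields the same hash.
        body = json.dumps(
            {"source": self.source, "payload": self.payload,
             "collected_at": self.collected_at, "prev_hash": self.prev_hash},
            sort_keys=True, separators=(",", ":"),
        )
        self.record_hash = hashlib.sha256(body.encode()).hexdigest()
        return self

def append_record(chain: list, source: str, payload: dict) -> list:
    # Link each new record to the previous one's hash (genesis uses zeros).
    prev = chain[-1].record_hash if chain else "0" * 64
    rec = ProvenanceRecord(
        source=source, payload=payload,
        collected_at=datetime.now(timezone.utc).isoformat(),
        prev_hash=prev,
    ).seal()
    chain.append(rec)
    return chain

def verify_chain(chain: list) -> bool:
    # Replay-style check: recompute every hash and confirm the linkage.
    prev = "0" * 64
    for rec in chain:
        expected = ProvenanceRecord(
            rec.source, rec.payload, rec.collected_at, rec.prev_hash
        ).seal().record_hash
        if rec.prev_hash != prev or rec.record_hash != expected:
            return False
        prev = rec.record_hash
    return True
```

Under this kind of scheme, any after-the-fact edit to a record's content or ordering changes a recomputed hash and fails verification, which is one way "what changed" can be separated from "what was tampered with".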

The result is neutral, auditable visibility designed to improve decision quality under uncertainty—without becoming an enforcement body, a policy advocate, or a private service for any one actor.

Why FASO Needs to Exist

AI capability and deployment are moving faster than traditional institutional cycles. Decisions are increasingly made under uncertainty; evidence is fragmented; changes may be undeclared or hard to attribute; and incentives do not always align with public safety. FASO exists to reduce that uncertainty by turning scattered signals into disciplined, verifiable records and clear, bounded outputs that readers can trust because they are reproducible, provenance-bound, and explicit about limitations.

Phase 0 (Pilot)

Phase 0 is the build-and-proof stage. The goal is to define a reviewable core method (Discovery, ingress discipline, provenance/custody logging, replay-first verification, bounded scoring/visualisation, and a strict Publication Boundary) and then validate it under real operating constraints. In Phase 0 we also calibrate thresholds and parameters (without relaxing invariants), test failure and degraded modes, and produce a tight evidence pack so external reviewers can assess whether the method is sound before any wider operational scale-up.