Governance
AION exposes governance as a first-class runtime layer and a visible user surface.
Current implementation
The current governance module provides:
- policy exposure based on the shared `packages/governance` catalog
- a shared policy engine in `packages/governance` that evaluates restricted-use, relationship, adaptive-boundary and truthfulness signals
- an AI-core ethics router that turns policy findings into runtime decisions
- a visible charter with principles, relationship model and escalation rule
- integrity check records
- safe-halt event history
- restricted-use summaries
- partner ethics summaries
- audit trail preview sourced from the audit module
API surface
Current governance endpoints:
- `GET /governance/overview`
- `GET /governance/policies`
- `GET /governance/charter`
- `POST /governance/evaluate`
- `POST /governance/integrity/sweep`
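As a sketch, the endpoint surface above can be described with a small typed map. The `buildRequest` helper and all names here are illustrative, not the project's actual client code:

```typescript
// Hypothetical typed map of the governance endpoints listed above.
type HttpMethod = "GET" | "POST";

interface GovernanceEndpoint {
  method: HttpMethod;
  path: string;
}

const governanceEndpoints = {
  overview: { method: "GET", path: "/governance/overview" },
  policies: { method: "GET", path: "/governance/policies" },
  charter: { method: "GET", path: "/governance/charter" },
  evaluate: { method: "POST", path: "/governance/evaluate" },
  integritySweep: { method: "POST", path: "/governance/integrity/sweep" },
} satisfies Record<string, GovernanceEndpoint>;

interface BuiltRequest {
  url: string;
  method: HttpMethod;
  body?: string;
}

// Build a plain request description; a real client would hand this to fetch.
function buildRequest(
  name: keyof typeof governanceEndpoints,
  payload?: unknown,
): BuiltRequest {
  const endpoint = governanceEndpoints[name];
  return {
    url: endpoint.path,
    method: endpoint.method,
    // Only the POST routes carry a JSON body.
    ...(endpoint.method === "POST" && payload !== undefined
      ? { body: JSON.stringify(payload) }
      : {}),
  };
}
```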
Policy baseline
The current policy set includes:
- `human-first`
- `non-dominance`
- `no-transhuman-merge`
- `truthfulness`
- `authenticity-and-media-provenance`
- `protect-the-most-vulnerable`
- `no-harmful-institutional-use`
- `privacy-as-dignity`
- `no-hidden-backdoors`
- `transparent-incidents`
- `bounded-adaptive-growth`
- `quantum-without-false-claims`
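The twelve policy ids above can be captured as a typed catalog. This is a minimal sketch; the real `packages/governance` catalog shape (descriptions, versions, metadata) may differ:

```typescript
// Baseline policy ids from the list above, as a readonly tuple so the
// id union can be derived from the data itself.
const baselinePolicyIds = [
  "human-first",
  "non-dominance",
  "no-transhuman-merge",
  "truthfulness",
  "authenticity-and-media-provenance",
  "protect-the-most-vulnerable",
  "no-harmful-institutional-use",
  "privacy-as-dignity",
  "no-hidden-backdoors",
  "transparent-incidents",
  "bounded-adaptive-growth",
  "quantum-without-false-claims",
] as const;

type PolicyId = (typeof baselinePolicyIds)[number];

// Type guard for validating ids coming back from the API.
function isPolicyId(value: string): value is PolicyId {
  return (baselinePolicyIds as readonly string[]).includes(value);
}
```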
Runtime enforcement
The runtime pipeline currently works like this:
- User content enters an AI-facing endpoint such as analysis, mirror or growth.
- `@aion/ai-core` asks the shared governance package for a policy decision.
- Restricted-use or hidden-backdoor requests are blocked before report generation.
- Truthfulness findings and softer governance risks are attached to generated reports as governance metadata.
- The API writes an audit record for blocked or generated outputs.
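The steps above can be sketched end to end. `evaluatePolicy`, `handleAiRequest` and the in-memory audit log are stand-ins for the real governance, AI-core and audit services, and the keyword checks are examples only:

```typescript
// Decision shape: either blocked outright, or allowed with soft findings
// that travel with the report as governance metadata.
type Decision = { blocked: boolean; policyId?: string; findings: string[] };

// Stand-in for the shared governance package (keyword-based here).
function evaluatePolicy(content: string): Decision {
  if (/hidden backdoor/i.test(content)) {
    return { blocked: true, policyId: "no-hidden-backdoors", findings: [] };
  }
  const findings = /unverified claim/i.test(content) ? ["truthfulness-risk"] : [];
  return { blocked: false, findings };
}

// Stand-in for the shared audit service.
const auditLog: { action: string; detail: string }[] = [];
function writeAudit(action: string, detail: string): void {
  auditLog.push({ action, detail });
}

// An AI-facing endpoint: block before generation, or generate and annotate.
function handleAiRequest(content: string): { report?: string; governance: Decision } {
  const decision = evaluatePolicy(content);
  if (decision.blocked) {
    // Blocked requests never reach report generation, but still leave a trace.
    writeAudit("blocked", decision.policyId ?? "unknown");
    return { governance: decision };
  }
  const report = `report for: ${content}`;
  writeAudit("generated", "report");
  return { report, governance: decision };
}
```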
The governance baseline also treats deceptive synthetic media, fake news, fabricated learning material, made-up citations, and unlabeled generated video as explicit policy violations. Whenever synthetic media features are introduced, they should require visible disclosure, provenance, and signature tracking.
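A provenance record for labeled synthetic media might look like the sketch below. The field names are assumptions for illustration, not the project's actual schema:

```typescript
// Illustrative label covering the three requirements named above:
// visible disclosure, provenance, and signature tracking.
interface SyntheticMediaLabel {
  disclosed: boolean; // visible disclosure to the end user
  generator: string;  // provenance: which model or tool produced the media
  createdAt: string;  // ISO-8601 timestamp
  signature: string;  // tracking signature over the content
}

// A label is compliant only if all three requirements are satisfied.
function isCompliantLabel(label: SyntheticMediaLabel): boolean {
  return label.disclosed && label.generator.length > 0 && label.signature.length > 0;
}
```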
The same baseline now explicitly treats the exploitation of vulnerable people as an unacceptable use case. Requests that target children, elderly people, people in crisis, isolated users, or dependent users are meant to be blocked or reframed toward protective support. Philosophical language about universal care or quantum-related love may inform the ethical framing, but it does not replace human oversight, evidence, or user autonomy.
The project also documents its governance content as a candidate AI stewardship framework for future adoption beyond AION itself. That claim is intentionally future-facing, but it is not presented as automatic legal control over all third-party AI systems. External force depends on adoption, agreement, or another valid legal basis.
Architecture notes
- governance records now bootstrap from Prisma for policies, versions, integrity checks, safe-halt history and partner profiles
- the module keeps a runtime cache, but mirrors updates such as integrity sweeps into PostgreSQL through Prisma
- the module uses the shared audit service so integrity sweeps and runtime evaluations leave a trace
- the current policy engine is deterministic and keyword-based, which is deliberate for the current MVP slice
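A deterministic, keyword-based engine in the spirit of the current MVP slice can be sketched as a policy-to-keyword map. The keyword lists and mappings here are examples, not the real catalog:

```typescript
// Example policy-to-keyword mappings; the real engine's lists differ.
const policyKeywords: Record<string, string[]> = {
  "no-hidden-backdoors": ["backdoor", "bypass authentication"],
  "protect-the-most-vulnerable": ["target children", "exploit elderly users"],
  truthfulness: ["fabricated citation", "made-up source"],
};

// Deterministic: the same input always yields the same policy ids,
// which keeps evaluations reproducible and easy to audit.
function matchPolicies(content: string): string[] {
  const lower = content.toLowerCase();
  return Object.entries(policyKeywords)
    .filter(([, keywords]) => keywords.some((keyword) => lower.includes(keyword)))
    .map(([policyId]) => policyId);
}
```

The trade-off noted above is real: keyword matching is auditable and cheap, but it misses paraphrases, which is why the next steps mention richer heuristics.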
Next steps
- widen enforcement from analysis, mirror and growth into future browser, voice and interop modules
- replace purely keyword-driven evaluation with richer audited heuristics or provider-assisted reasoning where appropriate
Linked assessment
- ethics and trajectory evaluation: ../product/ethics-risk-evaluation.md