Layer 01
The Scoring Engine
Auditable. Operator-controlled.
This is the trust anchor. Every downstream output (insights, comparisons, recommendations, risk flags) rolls up to a score produced by a fixed, inspectable process.
Six risk dimensions
Product Credibility
Whether the AI claims being made are real and substantiated.
Tooling & Vendor Exposure
How much of the system depends on external providers, and how concentrated that dependency is.
Data & Sensitivity Risk
How data is sourced, handled, licensed, and protected.
Governance & Safety
What controls exist, and whether they match the system's operating context.
Production Readiness
Whether the system is genuinely operational or still prototype-grade.
Open Validation
What has been independently verified, and what remains untested.
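The six dimensions above can be sketched as a fixed enumeration. This is illustrative only: the dimension names come from the text, but the identifiers and the use of a Python `Enum` are assumptions, not the engine's actual representation.

```python
from enum import Enum

class RiskDimension(Enum):
    """The six risk dimensions scored by the engine.
    Display names are from the text; identifiers are illustrative."""
    PRODUCT_CREDIBILITY = "Product Credibility"
    TOOLING_VENDOR_EXPOSURE = "Tooling & Vendor Exposure"
    DATA_SENSITIVITY_RISK = "Data & Sensitivity Risk"
    GOVERNANCE_SAFETY = "Governance & Safety"
    PRODUCTION_READINESS = "Production Readiness"
    OPEN_VALIDATION = "Open Validation"
```

A fixed enumeration makes the dimension set itself part of the inspectable process: adding or removing a dimension is a visible schema change, not a runtime decision.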
Four properties define how this layer behaves
Scores are reproducible
The same inputs produce the same output, every time. No model variance, no drift between runs, no ‘the AI felt differently today.’
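A minimal sketch of what reproducibility implies: scoring is a pure function of its inputs, with no model call, no randomness, and no clock dependence. The weights and field names here are invented for illustration.

```python
def score(inputs: dict) -> float:
    """Deterministic aggregation: same inputs, same output, every run.
    Weights are illustrative, not the engine's actual values."""
    weights = {
        "product_credibility": 0.3,
        "governance_safety": 0.2,
        "production_readiness": 0.5,
    }
    return sum(weights[k] * inputs[k] for k in weights)

inputs = {
    "product_credibility": 4.0,
    "governance_safety": 3.0,
    "production_readiness": 2.0,
}
assert score(inputs) == score(inputs)  # no variance between runs
```

Because nothing nondeterministic enters the function, re-running it on archived inputs reproduces the archived score exactly, which is what makes an audit possible.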
Failure-weighted, not feature-weighted
A system that looks complete on the surface but is weak on claim integrity cannot score its way out through strong peripheral signals.
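One way to sketch failure-weighting, under the assumption that claim integrity is treated as a critical dimension: the overall score is capped by the weakest critical dimension, so strong peripheral signals cannot average it away. The cap rule and dimension keys are illustrative.

```python
def aggregate(scores: dict) -> float:
    """Failure-weighted aggregation (illustrative sketch):
    the overall score cannot exceed the weakest critical dimension."""
    CRITICAL = {"product_credibility"}  # assumption: claim integrity is critical
    average = sum(scores.values()) / len(scores)
    worst_critical = min(scores[d] for d in CRITICAL)
    return min(average, worst_critical)

scores = {
    "product_credibility": 1.0,   # weak claim integrity
    "tooling_vendor_exposure": 5.0,
    "data_sensitivity_risk": 5.0,
    "governance_safety": 5.0,
    "production_readiness": 5.0,
    "open_validation": 5.0,
}
# The plain average is high, but the cap holds the result at 1.0.
```

Contrast this with feature-weighted averaging, where five strong dimensions would pull the overall score up regardless of the failure.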
The operator assigns every base score
The AI does not. This is a hard architectural constraint: not a configuration setting, not a user-toggleable option.
Every score carries rationale
Nothing is stored as a number alone. Every sub-criterion is accompanied by the reasoning that produced it.
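The last two properties can be sketched together as a record type that refuses to exist without its rationale and its assigning operator. The field names and validation rule are assumptions for illustration, not the engine's schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SubCriterionScore:
    """A sub-criterion score is never a bare number: the rationale and
    the operator who assigned it travel with the value (names illustrative)."""
    criterion: str
    value: float
    rationale: str
    assigned_by_operator: str  # operator identity, never an AI agent

    def __post_init__(self):
        if not self.rationale.strip():
            raise ValueError("a score cannot be stored without its rationale")
```

Making the rationale a required constructor argument, rather than an optional annotation, is one way to enforce "nothing is stored as a number alone" at the type level.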