Research
Published research and defensive disclosures from Percival Labs.
Ideal State Criteria as a Runtime Quality Primitive for AI Agents and Unified Agent Operating System Architecture
AI agents executing multi-step tasks produce outputs of variable quality, with no mechanism for tracking quality continuously during execution. This disclosure describes Ideal State Criteria (ISC) — automatically generated, binary, testable criteria tracked at phase boundaries, with circuit-breaker anti-criteria — plus a unified three-pillar agent OS architecture (Define, Route, Govern) that scales from solo developers to enterprises through configuration alone.
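A minimal sketch of the mechanism this abstract names, in Python. The `Criterion` and `PhaseTracker` names, the state dictionary, and the example criteria are illustrative assumptions, not the disclosed interface: criteria are binary predicates evaluated at each phase boundary, and any tripped anti-criterion acts as a circuit breaker.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical sketch of the ISC idea described above; these names are
# illustrative, not the disclosed API.

@dataclass
class Criterion:
    description: str
    test: Callable[[dict], bool]   # binary, testable: passes or fails
    anti: bool = False             # anti-criteria trip the circuit breaker

@dataclass
class PhaseTracker:
    criteria: list[Criterion] = field(default_factory=list)

    def check_phase_boundary(self, state: dict) -> dict:
        """Evaluate every criterion against agent state at a phase boundary."""
        passed = [c.description for c in self.criteria if not c.anti and c.test(state)]
        failed = [c.description for c in self.criteria if not c.anti and not c.test(state)]
        tripped = [c.description for c in self.criteria if c.anti and c.test(state)]
        if tripped:
            # Circuit breaker: any tripped anti-criterion halts execution.
            raise RuntimeError(f"anti-criteria tripped: {tripped}")
        return {"passed": passed, "failed": failed}

tracker = PhaseTracker([
    Criterion("output file exists", lambda s: s.get("output_written", False)),
    Criterion("tests were deleted", lambda s: s.get("tests_deleted", False), anti=True),
])
print(tracker.check_phase_boundary({"output_written": True, "tests_deleted": False}))
```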
Per-Agent Policy Enforcement and Budget Management at the AI Inference Proxy Layer
Organizations deploying multiple AI agents face compound governance challenges: different model access, different budgets, no unified enforcement. This disclosure describes a proxy layer that enforces per-agent model allowlists, configurable budget caps with automatic resets, agent self-service introspection APIs, and structured audit logging — all from a single configuration surface.
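A hedged sketch of what single-surface enforcement could look like, in Python, assuming an in-memory policy table. The field names, cost figures, and the `authorize`/`introspect` helpers are hypothetical, not the disclosed API.

```python
import time

# Illustrative per-agent policy table; all values are made up.
POLICIES = {
    "billing-agent": {
        "allowed_models": {"small-model-a", "small-model-b"},
        "budget_usd": 50.0,          # cap per window
        "reset_seconds": 86_400,     # automatic daily reset
        "spent_usd": 0.0,
        "window_start": time.time(),
    },
}

def authorize(agent_id: str, model: str, est_cost_usd: float) -> bool:
    """Check a request against the agent's allowlist and budget cap."""
    p = POLICIES[agent_id]
    if time.time() - p["window_start"] >= p["reset_seconds"]:
        p["spent_usd"], p["window_start"] = 0.0, time.time()  # automatic reset
    if model not in p["allowed_models"]:
        return False                                   # allowlist violation
    if p["spent_usd"] + est_cost_usd > p["budget_usd"]:
        return False                                   # budget cap exceeded
    p["spent_usd"] += est_cost_usd
    return True

def introspect(agent_id: str) -> dict:
    """Self-service view an agent could query about its own limits."""
    p = POLICIES[agent_id]
    return {"remaining_usd": p["budget_usd"] - p["spent_usd"],
            "allowed_models": sorted(p["allowed_models"])}
```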
The Compound Capability Flywheel
In most platform economies, consumption is terminal. The agent economy is structurally different: every skill purchase increases the buyer's earning capacity. This creates per-transaction compounding — a growth engine that runs at machine speed.
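One way to read the compounding claim quantitatively is a toy model, assuming each skill purchase multiplies the buyer's earning capacity by a constant factor (1 + r) with r > 0; the specific rate below is illustrative, not a figure from the essay.

```latex
% Toy model: earning capacity E_n after n skill purchases, each purchase
% multiplying capacity by a constant factor (1 + r).
E_n = E_0 (1 + r)^n
\qquad\Longrightarrow\qquad
n_{\text{double}} = \frac{\ln 2}{\ln(1 + r)} \approx 35 \quad \text{for } r = 0.02
```

Even a modest 2% gain per transaction doubles capacity in roughly 35 purchases, and at machine-speed transaction rates that doubling recurs quickly.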
Economic Bonds and Cryptographic Identity as Digital Institutions for AI Agent Governance
Hadfield and Koh identify a foundational gap in AI agent governance: the identity and record-keeping infrastructure that underpins human coordination does not yet exist for AI agents. We present a working implementation built on cryptographic identity, economic bonding, and federated record-keeping — and examine where it meets the framework's requirements and where it falls short.
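As a concrete illustration of the identity-plus-bond pattern, here is a minimal Python sketch using Ed25519 signatures via the `cryptography` package. The bond record fields and the verification flow are assumptions for illustration, not the implementation the paper describes.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Identity = a keypair; the record layout below is hypothetical.
key = Ed25519PrivateKey.generate()
agent_pub = key.public_key()

bond_record = {
    "agent": agent_pub.public_bytes_raw().hex(),  # identity is the public key
    "bond_usd": 1_000,                            # economic stake at risk
    "action": "call:payments.transfer",
}
payload = repr(sorted(bond_record.items())).encode()
signature = key.sign(payload)                     # agent signs its own action

# A federated record-keeper verifies the signature before appending the
# action to the shared log; a failed check means the identity is forged.
try:
    agent_pub.verify(signature, payload)
    print("verified: append to federated record")
except InvalidSignature:
    print("rejected: signature does not match claimed identity")
```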
Model Provenance Is a Trust Problem, Not Just a Capability Problem
Nate B. Jones argues distillation is a Napster problem with thousand-to-one extraction economics. Distilled models occupy narrower manifolds that break on sustained agentic work — and no benchmark captures it. We extend the argument: if provenance determines how a model breaks, the market needs trust infrastructure that makes provenance verifiable.
Economic Accountability as an Architectural Primitive: A Response to "Agents of Chaos"
38 researchers document 10 security vulnerabilities in autonomous LLM agents. Every one shares a common cause: zero-cost identity, zero-cost action, zero-cost deception. We map each vulnerability to economic trust staking — the missing architectural primitive.
24,000 Fake Accounts: Why API Keys Can't Stop Model Distillation — And What Can
Anthropic disclosed that three Chinese AI labs created 24,000 fraudulent accounts for industrial-scale model distillation. Current defenses fail because identity is free and consequences are cheap. Trust staking changes the economics.
Economic Accountability Layer for AI Agent Tool-Use Protocol Governance
8,600+ tool servers, 41% lack authentication, 30 CVEs in two months. This disclosure establishes prior art for economic staking, community vouching, and behavioral monitoring as a governance layer for AI agent tool-use protocols.
Economic Trust Staking as an Access Control Mechanism for AI Model Inference APIs
A decentralized economic trust layer that makes industrial-scale model distillation economically infeasible through staking, community vouching chains, and cascading slashing.
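The staking mechanics running through the last several disclosures can be sketched in a few lines of Python. The stake amounts, vouching graph, and decay factor below are invented for illustration; the point is that a slash propagates up the vouching chain with diminishing force, so vouchers bear real, bounded risk.

```python
# Illustrative cascading slashing along a vouching chain; all values made up.
STAKES = {"agent-a": 1_000.0, "agent-b": 500.0, "agent-c": 250.0}
VOUCHED_BY = {"agent-a": "agent-b", "agent-b": "agent-c"}  # a <- b <- c
DECAY = 0.5  # each hop up the chain absorbs half the previous penalty

def slash(offender: str, penalty: float) -> None:
    """Slash the offender, then cascade a decayed penalty up its vouchers."""
    current, amount = offender, penalty
    while current is not None and amount >= 1.0:
        STAKES[current] = max(0.0, STAKES[current] - amount)
        print(f"slashed {current} by {amount:.2f}, remaining {STAKES[current]:.2f}")
        current = VOUCHED_BY.get(current)  # walk the vouching chain
        amount *= DECAY

# Abuse by agent-a also costs everyone who vouched for it, so vouching
# for thousands of throwaway identities becomes economically self-defeating.
slash("agent-a", 800.0)
```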