
Richard Anaya
Nov 13, 2025
7 min read
Physical AI: From Rules to Principles
For most of robotics history, we didn’t have a choice. Robots were driven by rules because that’s all we could reliably execute.
You decomposed behavior into if–then logic:
If obstacle A, then avoidance routine B.
If sensor reading X, then state Y.
If battery < threshold, then return to dock.
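The if-then decomposition above is literal: each condition maps to exactly one response. A minimal sketch, with hypothetical action names and thresholds, looks like this:

```python
def decide(obstacle_a: bool, sensor_x: float, battery_pct: float) -> str:
    """Classic rule-based control: every condition maps to a fixed response."""
    # If battery < threshold, then return to dock.
    if battery_pct < 15.0:
        return "return_to_dock"
    # If obstacle A, then avoidance routine B.
    if obstacle_a:
        return "avoidance_routine_b"
    # If sensor reading X exceeds a limit, then state Y.
    if sensor_x > 0.8:
        return "state_y"
    # No rule fired: carry on.
    return "continue_task"
```

Every new situation the robot might encounter needs its own branch, which is exactly why this style stops scaling.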
This was the necessary tool for the era: limited compute, narrow perception, deterministic environments. If you wanted a robot to behave safely and predictably, you wrote more rules.
As long as you could bring the task to the machine, this worked. But as robots left carefully controlled labs and entered warehouses, hospitals, farms, and factories, that approach hit its limits. The real world is too messy to fully script.
Physical AI exists because we finally have something better than rules: we have principles, unlocked by AI systems capable of applying them.
Why Robotics Started with Rules
Rules were not a mistake; they were a necessity.
Early robots had:
Limited sensing and context.
No real capacity to generalize.
Extremely narrow task definitions.
In that world, the safest thing you could do was explicitly enumerate behavior:
Detect specific conditions.
Map each condition to a known response.
Add more rules when something broke.
The result is familiar to anyone who has shipped robots: branching logic, state machines, and ever-growing collections of “edge-case handlers.” As deployments grow, the rulebase grows faster:
New environments → new rules.
New SKUs / workflows → new rules.
New interactions between robots and humans → rules about rules.
Eventually you hit a wall: the system becomes brittle, hard to reason about, and expensive to maintain. One missed edge case can halt a line or a fleet.
Rules got us into the field. They don’t get us to scale.
The Unlock: AI That Can Work with Principles
Modern AI changes the constraints.
Instead of encoding every behavior, we can encode how to think about behavior:
“Prioritize safety over efficiency when signals conflict.”
“Maintain operational readiness while minimizing downtime.”
“Respect defined risk boundaries; optimize freely inside them.”
Large models and related systems can:
Interpret these principles in context.
Use tools (maps, telemetry, APIs, incident systems).
Generalize from prior examples to new situations.
The shift looks like this:
Rules-based robotics: “If battery < 15%, return to charger.”
Principle-based Physical AI: “Maintain readiness and avoid unplanned downtime,” then choose between: finish task, reroute, slow down, or charge—depending on current workload, queue, geography, and risk.
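One way to make that shift concrete is to encode the principle as a cost function over candidate actions rather than as a fixed threshold. The sketch below is a toy illustration, not a real controller: the action names, the runtime-per-percent estimate, and the penalty weights are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Context:
    battery_pct: float           # remaining charge, 0-100
    task_remaining_min: float    # minutes to finish current task
    queue_depth: int             # jobs waiting on this robot
    charger_distance_min: float  # travel time to nearest charger

def downtime_risk(ctx: Context, action: str) -> float:
    """Principle: 'maintain readiness and avoid unplanned downtime,'
    encoded as a penalty over candidate actions. Lower is better."""
    minutes_of_charge = ctx.battery_pct * 2.0  # hypothetical: 2 min runtime per 1%
    if action == "finish_task":
        needed = ctx.task_remaining_min + ctx.charger_distance_min
    elif action == "slow_down":
        needed = 1.5 * ctx.task_remaining_min + ctx.charger_distance_min
    elif action == "charge_now":
        needed = ctx.charger_distance_min
    else:  # "reroute" to a shorter nearby job
        needed = 0.5 * ctx.task_remaining_min + ctx.charger_distance_min
    shortfall = max(0.0, needed - minutes_of_charge)
    stranded_penalty = 10.0 * shortfall   # dying mid-task is the worst outcome
    idle_penalty = float(ctx.queue_depth) if action == "charge_now" else 0.0
    return stranded_penalty + idle_penalty

def choose(ctx: Context) -> str:
    actions = ["finish_task", "reroute", "slow_down", "charge_now"]
    return min(actions, key=lambda a: downtime_risk(ctx, a))
```

The same principle now yields different actions depending on workload, queue, and geography, with no per-situation rule written anywhere.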
The hardware hasn’t changed. The control layer has.
What This Means at Formant
At Formant, we saw our customers run into the limits of rules the hard way: fleets stuck in “perpetual pilot,” systems that behaved well in tightly scripted demos but struggled once reality intruded.
We’re now using Physical AI to invert the stack:
Start from principles: safety posture, risk tolerance, throughput vs reliability, cost vs uptime.
Encode them into our AI agents.
Let those agents decide which tools to use—robot APIs, maps, historical data, incident systems—to act in line with those principles.
We still use rules where they make sense: hard safety boundaries, regulatory requirements, non-negotiable constraints. But the center of behavior moves up a level—from scripts to principles.
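That layering, hard rules as non-negotiable filters with principle-based scoring inside them, can be sketched as follows. This is a toy illustration, not Formant's actual implementation; every rule name and limit here is invented.

```python
from typing import Dict, Optional

# Hard safety and regulatory boundaries: binary, never traded off.
HARD_RULES = [
    lambda action, ctx: not (action == "enter_zone" and ctx["human_present"]),
    lambda action, ctx: not (action == "lift" and ctx["load_kg"] > 50),
]

def allowed(action: str, ctx: dict) -> bool:
    """An action is legal only if it violates no hard rule."""
    return all(rule(action, ctx) for rule in HARD_RULES)

def pick(candidates: Dict[str, float], ctx: dict) -> Optional[str]:
    """candidates maps action -> principle score (higher = better).
    Principles optimize freely, but only inside the hard boundaries."""
    legal = {a: s for a, s in candidates.items() if allowed(a, ctx)}
    return max(legal, key=legal.get) if legal else None
```

The rules stay small, auditable, and binary; everything inside the boundary is left to the principle-driven scorer.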
The goal isn’t “robots that follow more rules.” It’s Physical AI systems that behave more like thoughtful coworkers: aware of goals, aware of constraints, able to handle situations no one wrote a flowchart for.
Principles Scale. Rules Don’t.
Rules will always be part of robotics. Some things really are binary: do / don’t, allowed / forbidden.
But in dynamic physical environments, most of the interesting decisions live in the gray areas—tradeoffs, priorities, context.
That’s where rules fall short and principles shine.
Physical AI is what you get when you:
Keep the reliability and discipline of traditional robotics.
Add AI systems capable of reasoning with principles.
Let those principles, not a tangle of if–then logic, drive behavior at scale.
Rules were the right abstraction when robots were effectively blind and narrow. Now that we have powerful AI in the loop, we can finally give machines something closer to what we give people:
Not just instructions. Principles.
