
Fin Elliott
Oct 8, 2025
4 min read
Physical AI: The Practitioner's View
My name is Fin Elliott. I've been at Formant for a while now as a PM, mostly focused on our Analytics product. More recently I've transitioned to building our AI infrastructure, and the biggest thing I've learned is that the gap between "AI will revolutionize robotics" and "why did things stop working at 2 AM?" is huge.
Everyone's talking about physical AI right now. We're excited too, and we're actively building it, which means dealing with the messy reality of making robots work in the real world. We believe what we've learned will help guide us toward what happens next.
The Stuff That Actually Matters
Physical AI gets interesting when we apply it to real-world operational challenges:
Catching problems before they happen - Our agentic AI system, called Theopolis, monitors your people, your data, and your enterprise systems, and spots the patterns that usually lead to failures. Instead of waiting for something to break or a machine to stop due to an error, Theopolis identifies the root cause, cross-references past incident responses, and determines a path to recovery.
Progressive autonomy for trust calibration and risk matching - We talk a lot about the autonomy slider at Formant, and this is how we think about it: when something does go wrong, Theopolis figures out what happened and safely takes action when it is within its predetermined mandate. No waiting for someone to wake up and check Slack. Where riskier actions are required for recovery, Theopolis requires a human-in-the-loop for safe control.
Making robots run better - Constantly analyzing how robots are performing and tweaking things like sensor setpoints to make the system more efficient or last longer. Theopolis can adjust what robots are doing based on conditions, their current state, and what's worked before in similar situations, using the "memories" built into our AI infrastructure. The result is fewer incidents, faster recovery, and steady, proactive productivity gains, without adding headcount.
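To make the "autonomy slider" idea concrete, here is a minimal sketch of a risk-matched action gate: actions within the agent's predetermined mandate execute automatically, and anything riskier escalates to a human-in-the-loop. All names, risk tiers, and thresholds here are illustrative assumptions, not Formant's actual API.

```python
from dataclasses import dataclass
from enum import Enum

class Risk(Enum):
    LOW = 1      # e.g. adjust a sensor setpoint
    MEDIUM = 2   # e.g. restart a subsystem
    HIGH = 3     # e.g. anything requiring physical intervention

@dataclass
class Action:
    name: str
    risk: Risk

def dispatch(action: Action, mandate: Risk) -> str:
    """Auto-execute actions within the predetermined mandate;
    escalate riskier ones to a human-in-the-loop."""
    if action.risk.value <= mandate.value:
        return f"auto-executing: {action.name}"
    return f"escalating to human-in-the-loop: {action.name}"
```

With a MEDIUM mandate, `dispatch(Action("reset camera driver", Risk.LOW), Risk.MEDIUM)` executes automatically, while a HIGH-risk recovery action is held for a human. Moving the slider is just widening or narrowing the mandate, so no code changes are needed to recalibrate trust.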
Why This Actually Works
Context is everything!
We've been opinionated since day one about how we handle robot data. More importantly, we have specific ways of organizing and executing the commands that can be sent to robots on our platform. When Theopolis sees something weird happening, it's not just looking at current sensor data. It knows what this robot has been through, how similar robots behaved in similar situations, and which fixes have actually worked in production, based on data from multiple sources - even down to scanning messaging tools like Slack or Teams, or ingesting data from an enterprise system like ServiceNow or Zendesk.
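A toy sketch of what memory-backed diagnosis could look like: past incidents from different sources (telemetry, Slack threads, ServiceNow tickets) are tagged, and when an anomaly appears, resolved incidents with overlapping tags surface the fixes that actually worked. The data model and scoring here are hypothetical assumptions for illustration, not the platform's real implementation.

```python
from dataclasses import dataclass

@dataclass
class Incident:
    source: str        # e.g. "telemetry", "slack", "servicenow"
    tags: frozenset    # symptoms observed during the incident
    fix: str           # what resolved it
    resolved: bool     # only resolved incidents are trustworthy memories

def suggest_fixes(current_tags: set, memory: list) -> list:
    """Return fixes from resolved past incidents, best tag overlap first."""
    scored = [
        (len(current_tags & inc.tags), inc)
        for inc in memory
        if inc.resolved and current_tags & inc.tags
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [inc.fix for _, inc in scored]
```

Given a current anomaly tagged `{"lidar", "timeout"}`, an incident recorded from a Slack thread with the same tags and a confirmed fix ranks first, while unresolved or unrelated incidents are filtered out. Production systems would use richer retrieval than tag overlap, but the shape is the same: current state in, ranked past remediations out.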
Physical AI is here now
We're building deployments with our customers right now. Their physical operations systems feed data into our platform, but instead of pinging a staff member, our AI analyzes what's happening, safely takes action when it should, escalates when it must, and continuously helps improve our customers' workflows.
Physical AI isn't only about making individual robots smarter. It's about building systems that understand operational context and can act on it reliably using the best AI tools for the job. And we get better at this every day by dealing with real problems in real deployments, not just imagining what the future might look like from the safety of the lab.
The robots and other advanced hardware systems we need are already here. Physical AI means leveraging AI and the context from enterprise systems to finally make them work the way they were always supposed to.