Henrik Bennetsen

Jan 13, 2026

7 min read

Hardware OEMs: Attention Doesn’t Scale, But Your Support Can

Many hardware OEMs live inside a paradox.

Your deployed fleets generate a massive volume of signals, alarms, logs, images, and telemetry, yet your support organizations remain stuck in a reactive mode. This is the classic “data rich, action poor” problem. The data is there to predict and prevent failures, but the human attention required to do so has historically been economically infeasible.

At Formant, we learned this from years spent alongside OEM support teams and operators in the messy reality of production fleets. We’ve watched teams drown in alerts, tribal knowledge, and one-off customer workflows, and we’ve watched the same few experts become the bottleneck for every hard case.

That’s why we built our agent product: to change the economics of attention in OEM support. Not by adding more dashboards, and not by promising “full autonomy,” but by putting a software agent in the loop, one that can watch continuously, work through incidents, and drive issues to verified closure under explicit customer control.

This post lays out the most important lessons we’ve learned building and deploying agents for OEM support: what works, what fails, and what OEM leaders should focus on to make measurable progress.

Your Real Problem Isn’t Data, It’s Economics

Your support team operates reactively not because they don’t care, and not because they lack data. They operate reactively because historically, “watching everything” has been too expensive from a labor standpoint. The only way to increase proactive monitoring was to add more people, and that doesn’t scale as your fleet grows.

This leads to a foundational insight:

Attention does not scale with fleet size.

That framing matters because it shifts the problem from “How do we collect more data?” to “How do we apply the data we already have without linear headcount growth?”

This is where Formant changes the equation.

Our agent can monitor signals continuously, correlate context across streams, and do the first pass of triage, without needing a human to stare at dashboards all day. It doesn’t replace your experts. It protects their time by handling the routine, the repetitive, and the “figure out what’s going on” work that otherwise consumes support capacity.

The outcome: proactive attention becomes economically feasible.

The Goal Isn’t “Insight.” It’s Verified Closure.

One of the most expensive traps in support tooling is confusing visibility with progress.

Visibility without action often creates more work: more alerts, more tabs, more “someone should look at this.” If your system generates insights but doesn’t reduce repeat incidents, it’s not helping; it’s just producing noise with a nice UI.

The real value comes only when an issue is driven to verified closure, a defined confirmation that the problem is actually resolved.

A practical definition we use internally is:

Our agent is valuable when it closes the loop from signals to verified outcomes.

That “verified” part is the difference between an impressive demo and a system your support org will trust. Closure means:

  • the right evidence was gathered,

  • the right workflow was executed,

  • the right humans were involved (when needed),

  • and the resolution is confirmed, by a test, a telemetry change, an operator acknowledgement, or a clear “done” state in the customer’s system of record.
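To make that concrete, here is a minimal sketch of that checklist as code. The record fields and the check are our illustration for this post, not Formant’s schema:

```python
from dataclasses import dataclass

# Hypothetical closure record; field names are our illustration, not Formant's schema.
@dataclass
class ClosureRecord:
    evidence: list            # logs, images, telemetry snapshots gathered
    workflow: str | None      # which playbook or workflow was executed
    approvers: list           # humans involved, when the playbook requires them
    confirmation: str | None  # e.g. "test_passed", "telemetry_recovered", "operator_ack"

def is_verified_closed(rec: ClosureRecord, approval_required: bool) -> bool:
    """An incident counts as closed only when every box on the checklist is ticked."""
    return (
        len(rec.evidence) > 0
        and rec.workflow is not None
        and (not approval_required or len(rec.approvers) > 0)
        and rec.confirmation is not None
    )

# Evidence and a playbook alone are not enough without a confirmed resolution.
rec = ClosureRecord(["motor_fault.log"], "reboot_playbook", [], None)
print(is_verified_closed(rec, approval_required=False))  # False
```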

This is also where the ROI hides: verified closure reduces repeat incidents. Over time, it shrinks the total workload, because the same failure mode stops recurring as frequently, or is resolved faster and more consistently when it does.

So the strategic goal shifts from “build better dashboards” to:

Complete workflows that deliver outcomes.

Real Progress Starts by Putting the Agent on a Leash

The idea of an AI agent can sound scary, especially in high-stakes deployments. The practical path is the opposite of “let it do everything.” It starts with tight boundaries.

In our deployments, the turning point is a simple concept: a control surface, the explicit list of actions the agent is allowed to take.

Early on, those actions are boring on purpose:

  • draft a ticket with evidence attached

  • pull logs, images, and recent history into a single summary

  • classify the incident type and suggest a playbook

  • request approval from a manager

  • ask a technician a structured question

  • route the issue to the right queue

These bounded actions are how trust starts. They also match the reality of OEM support: most of the cost isn’t in the final “fix,” it’s in the time spent gathering context, reproducing the issue, and coordinating the right people.
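One way to picture a control surface is as a plain allowlist the customer owns, checked before the agent does anything. The action names below are examples for this post, not a Formant API:

```python
# Illustrative control surface: a customer-owned allowlist checked before any action.
CONTROL_SURFACE = {
    "draft_ticket",
    "summarize_evidence",
    "classify_incident",
    "request_manager_approval",
    "ask_technician_question",
    "route_to_queue",
}

def audit(action: str, payload: dict) -> None:
    # Stand-in for an append-only audit trail.
    print(f"AUDIT: {action} {payload}")

def attempt(action: str, payload: dict) -> None:
    """Anything outside the allowlist is refused; everything else is logged first."""
    if action not in CONTROL_SURFACE:
        raise PermissionError(f"'{action}' is not on the control surface")
    audit(action, payload)
    # ...then hand off to the bounded executor for this action...

attempt("draft_ticket", {"device": "unit-042", "evidence": ["error.log"]})
```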

The counterintuitive lesson: constraints accelerate adoption. Our customers don’t want magic. They want control, safety, and repeatability, especially with operations and uptime on the line.

Autonomy Must Be Earned, Not Assumed

When supporting production fleets, trust is non-negotiable. Much like a new employee, an agent should start with the easier work and take on more advanced tasks over time. A credible agent strategy can’t assume it can automate complex tasks from day one.

In our experience, customers need three trust promises before they’ll let an agent touch real workflows:

  1. The customer defines and controls permissions.

  2. Every action has a clear audit trail.

  3. You can override the system at any time.

Once those are true, autonomy becomes a ladder, a controlled progression instead of a leap of faith:

  • Assist: The agent summarizes, retrieves, and explains information, but performs no writes.

  • Recommend: The agent proposes next steps with evidence and confidence cues for human review.

  • Coordinate: The agent takes procedural steps, creating and routing tickets, requesting approvals, following up.

  • Execute (bounded): The agent performs pre-approved actions, with rollback and full auditability.

The core principle is simple:

Autonomy is earned, not assumed.

And importantly: “Execute” is not “agent takes over.” In support, “execute” often means narrow, valuable actions like applying a known configuration change, scheduling a diagnostic, initiating a standard workflow, or running a verified checklist, always inside guardrails.
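Here is a rough sketch of how the ladder and the three trust promises fit together in code. The names and the per-customer grant are hypothetical, purely to show the shape of the idea:

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    # The ladder from this post; each rung is granted explicitly by the customer.
    ASSIST = 1       # read-only: summarize, retrieve, explain
    RECOMMEND = 2    # propose next steps with evidence for human review
    COORDINATE = 3   # procedural steps: tickets, approvals, follow-ups
    EXECUTE = 4      # pre-approved actions only, with rollback and audit

# Hypothetical per-customer grant (promise 1: the customer defines the ceiling).
granted = AutonomyLevel.COORDINATE

def agent_may(required: AutonomyLevel, override_active: bool = False) -> bool:
    """Promise 3: a human override stops the agent regardless of what was granted."""
    if override_active:
        return False
    return required <= granted

# Promise 2 (audit trail) would wrap every permitted call in an append-only log.
print(agent_may(AutonomyLevel.RECOMMEND))  # True
print(agent_may(AutonomyLevel.EXECUTE))    # False: not yet earned
```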

Your “Snowflake” Customers Are the Rule, Not the Exception

Here’s the ground truth we keep running into, across customers and industries:

Variability is the norm.

Even within a single enterprise customer, different sites behave like different planets. That variability shows up in three places:

  • Tooling: Enterprise systems such as ServiceNow vs. Jira, custom fields, different conventions, different SLAs.

  • Operating procedures: Escalation rules, approvals, definitions of “done,” site-specific playbooks.

  • Staffing and skills: Experts vs. new techs, shift coverage, tribal knowledge, “only Maria knows how this works.”

Any strategy that ignores this heterogeneity, or tries to force-fit a uniform workflow, isn’t adapting to reality. It will fail in the long tail.

This is exactly why we designed Formant’s agent layer around integration and explicit control surfaces. The agent adapts to your workflow, never the other way around. It’s a system that can:

  • ingest specific business context,

  • operate within real tools,

  • and execute real procedures, bounded by what you allow.
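One common pattern for absorbing that variability, and not necessarily how Formant implements it, is a thin adapter per customer system, so the same agent workflow can target ServiceNow, Jira, or a homegrown tool:

```python
from abc import ABC, abstractmethod

class TicketingAdapter(ABC):
    """Per-customer integration boundary; the agent workflow only sees this interface."""

    @abstractmethod
    def create_ticket(self, summary: str, evidence: list) -> str:
        """Return the ticket ID in the customer's system of record."""

    @abstractmethod
    def is_done(self, ticket_id: str) -> bool:
        """Each customer defines 'done' in their own tool and conventions."""

class JiraAdapter(TicketingAdapter):
    # Illustrative stub; a real adapter would call Jira's REST API and map custom fields.
    def create_ticket(self, summary, evidence):
        return "OPS-1234"

    def is_done(self, ticket_id):
        return False

# A ServiceNowAdapter, or an adapter for a homegrown tool, plugs in the same way,
# so the agent's workflow stays identical across "snowflake" customers.
```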

This is also why the “on a leash” approach isn’t a limitation. It’s a prerequisite.

Rigid automation breaks in heterogeneous environments. Our bounded agentic workflows scale.

Conclusion: From Magic to Method

The most important lesson we have learned from deploying agents into our OEM customers’ support organizations is that success isn’t about magical technology. It’s about disciplined operations.

The path that works looks like:

  • integrate into real workflows,

  • define explicit governance,

  • build trust through auditability and control,

  • and focus relentlessly on verified closure.

The believable early win isn’t autonomy.

It’s preventing downtime by catching issues earlier, and by resolving incidents faster with less expert involvement. That reduces emergency escalations, cuts expensive urgent travel, and makes support operations predictable and scalable.

So the first question shouldn’t be “What can AI do?”

It should be:

What is our highest-volume, most repeatable incident, and how will we prove we’ve closed the loop?

Want to see what this looks like on your fleet? Our Bootcamp is the fastest way to get value: our forward-deployed engineers join you on-site to help pick one high-volume incident type, connect the signals and evidence you already have, integrate with your ticketing/workflow tools, and deploy a tightly controlled agent that drives issues to verified closure. In days, not quarters, you’ll have a measurable baseline and early KPI movement (triage time, time-to-resolution, repeat incidents) you can share internally. We'd love to hear from you.