MY TAKE FOR ORLANDO: folks with #industrialautomation experience will nail AI agents for mission-critical work by applying battle-tested principles from robotics. (Feel free to challenge these at the bar at the A3 business forum this week! Sean will.)

Silicon Valley is learning for the first time that operating in a non-deterministic world is hard (like, "raise a few $B" hard). While many think the solution is just the next SOTA model, that's chasing incremental returns. Rigorous process architecture is the enabling strategy. Here are my knock-out arguments, based on my own work with robotics and LLM tool-call loops (what we call "agents") 👇

🥊 INSTRUMENTATION > PROMPTING: Reliability doesn't come from a "better" prompt (although it doesn't hurt). Rather, decomposing complex work into simpler tasks with review gates is the only way to prevent AI slop. This is basically the LLM version of a "part present" sensor between each operation.

🥊 AGENTS ARE KINDA PLCs: An LLM tool-call loop is just a deterministic state-machine abstraction over the non-deterministic reality of an AI model (yeah, I agree, that's a mouthful). When a PLC reports "part present", every robotics expert knows that it represents a sensor reading, NOT necessarily the truth. Likewise, when a tool-call loop says "I paid all the bills I considered urgent", it might be wise to check your bank account balance. Signal ≠ reality.

🥊 FENCING: You don't need a "superhuman" model to use LLMs safely. You need guardrails. Fencing in unpredictable systems, whether it's a 6-axis arm or an LLM, is something the industrial sector already owns. How do you build virtual guardrails? The first step is to not hand the LLM the superadmin API key 🤷‍♂️ Jokes aside, credentials and access scopes are the name of the game (this is what prevents you from installing games on your work computer).

🥊 THE LIGHTS-OUT AUTOMATION MIRAGE: Chasing 100% "lights-out" autonomy is usually an ROI trap.
The real win is automating 95% of the process and building a seamless human-in-the-loop handoff for the 5% of edge cases. "Automating everything", while often possible, simply isn't pragmatic, whether we're talking AI agents or robots.

There it is!! Game on. Looking forward to networking with the people who push our industry forward: Jeff, Nino, Robert, Robert, Joe, Michael, Juan, Erik, and all the others at A3 - Association for Advancing Automation.
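The review-gate, signal ≠ reality, and human-in-the-loop ideas above can be sketched in a few lines. This is a minimal illustrative sketch, not production code: `fake_llm_step`, `bank_feed`, and the task names are all hypothetical stand-ins.

```python
# Hypothetical sketch: an LLM tool-call loop modeled as a small deterministic
# loop with a "review gate" between operations (the LLM version of a
# "part present" sensor) and an independent verification step, because the
# agent's self-report is a signal, not the truth.

def fake_llm_step(task):
    """Stand-in for a real model/tool call; returns a *claimed* result."""
    return {"task": task, "claim": "done"}

def review_gate(result):
    """Deterministic check between operations, like a part-present sensor."""
    return result.get("claim") == "done"

def verify_against_reality(result, ground_truth):
    """Signal != reality: check the claim against an external source."""
    return ground_truth.get(result["task"], False)

def run_agent(tasks, ground_truth, escalate):
    """Automate what verifies cleanly; hand the rest to a human."""
    completed, handed_off = [], []
    for task in tasks:
        result = fake_llm_step(task)
        if review_gate(result) and verify_against_reality(result, ground_truth):
            completed.append(task)
        else:
            handed_off.append(task)   # human-in-the-loop for edge cases
            escalate(task)
    return completed, handed_off

# Usage: the bank feed confirms "pay_rent" but not "pay_gym", so only the
# verified task counts as done; the other is escalated to a person.
bank_feed = {"pay_rent": True, "pay_gym": False}
done, escalated = run_agent(["pay_rent", "pay_gym"], bank_feed, escalate=print)
# done == ["pay_rent"], escalated == ["pay_gym"]
```

The point of the sketch: the loop itself is deterministic, and every non-deterministic claim passes a gate plus an external check before the system believes it.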
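The fencing point can be sketched the same way: give the agent a scoped token instead of the superadmin key, and check every tool call against its scopes. A minimal sketch, assuming hypothetical scope names (`read_invoices`, `draft_payment`, `delete_account`), not any real API.

```python
# Hypothetical sketch of credential fencing: the agent holds a least-privilege
# token, and the tool layer enforces scopes before executing anything.

class ScopeError(PermissionError):
    """Raised when a tool call falls outside the token's fence."""

def make_token(scopes):
    return {"scopes": frozenset(scopes)}

def call_tool(token, action):
    if action not in token["scopes"]:
        raise ScopeError(f"token lacks scope: {action}")
    return f"executed {action}"

# The agent can read and draft, but destructive actions are fenced off.
agent_token = make_token({"read_invoices", "draft_payment"})

print(call_tool(agent_token, "read_invoices"))   # allowed
try:
    call_tool(agent_token, "delete_account")     # outside the fence
except ScopeError as err:
    print(err)
```

The guardrail lives in the tool layer, not the prompt, so no amount of model unpredictability can reach past the fence.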
The ears are great and I bet that robot in the background is using Olis Robotics
Lights-out is a great goal to aim at, but yes, it will never be 100% for 100% of the time. Manual backups in the back pocket for when a critical breakdown happens are a must for many ops, like the auto industry, health-care labs, and manufacturing.
Fredrik Rydén haha, love this take.. especially coming from the non-deterministic robotics domain.. Agents aren’t magic coworkers, they’re like optimistic toddlers with API keys.. better vocab than PLCs, but terrible discipline! Verification loops, guardrails, and necessary human-in-the-loop beats ‘next superhuman model’ every time!
Reliability comes from architecture, not hype. This is how real automation scales.
This is systems engineering, not prompt engineering. Instrumentation, guardrails, and explicit handoffs are how every reliable industrial system was built — AI won’t be different. Agents that move fast without structure just scale uncertainty.
Great take and 100% agree!
Love the ears - and your observations!
Festive! Have a great time!
Welcome to Orlando Fredrik! Let’s discuss.
Please tell me that was an AI generated photo :)