Risk and Progress: It’s Complicated
Everyone is jumping first and looking later. That might actually be the right call.
There’s a conversation happening in every company right now that goes something like this:
CTO: "We need to start using agentic AI in engineering and marketing."
CISO: "But we haven't figured out how to secure them yet."
CTO: "I know. Do it anyway, we'll figure that out later."
They’re both right.
An Uncomfortable Reality
The uncomfortable truth about agentic AI is that progress and risk are two sides of the same coin. Turn up capability to get more done, and you also increase the odds of working the weekend and presenting an incident report on Monday. Dial it back to stay safe, and you fall behind while you workshop your governance framework. Yikes, the incident report isn’t sounding so bad all of a sudden. 😆
This isn’t a new dynamic — every technology boom has this duality. But agentic AI has compressed the timeline from “we should think about this” to “it already happened”.
To increase competitive ability, you take on additional risk. To reduce risk, you dial back capability, and competitive ability shrinks with it.
That’s not a problem to solve. It’s a risk to manage. But if you can put a number on the risk, you can make a case for managing it.
Quantifying the Uncomfortable
This is where it gets interesting for the bean counters, the spreadsheet folks. If a business can quantify the competitive advantage of increasing AI capability, it has also quantified the value of risk mitigation.
If deploying an AI agent saves your team 200 hours a month, that’s real value. The guardrails aren’t a line item — they’re what let you keep your competitive advantage. Without them, you keep the gains until something breaks, and then you’re writing an incident report and rolling everything back. Maybe for six months. Maybe longer. And good luck getting approval for anything with “AI” in the name after that.
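To make that concrete, here’s a back-of-the-envelope sketch. Every number in it (the hourly rate, the incident cost, the incident probabilities) is an illustrative assumption, not data:

```python
# Back-of-the-envelope model of agent value vs. incident risk.
# Every constant below is an illustrative assumption, not a measurement.

HOURS_SAVED_PER_MONTH = 200    # time the agent saves the team (from the example above)
LOADED_HOURLY_RATE = 100       # assumed fully loaded cost per hour, in dollars
INCIDENT_COST = 250_000        # assumed cost of one bad incident: cleanup, rollback, trust
P_INCIDENT_UNGUARDED = 0.02    # assumed monthly chance of an incident with no guardrails
P_INCIDENT_GUARDED = 0.002     # assumed monthly chance with guardrails in place

monthly_gain = HOURS_SAVED_PER_MONTH * LOADED_HOURLY_RATE

def expected_value(p_incident: float) -> float:
    """Expected monthly value: gains minus expected incident cost."""
    return monthly_gain - p_incident * INCIDENT_COST

unguarded = expected_value(P_INCIDENT_UNGUARDED)
guarded = expected_value(P_INCIDENT_GUARDED)

print(f"Expected value, no guardrails:   ${unguarded:,.0f}/month")
print(f"Expected value, with guardrails: ${guarded:,.0f}/month")
print(f"Implied value of the guardrails: ${guarded - unguarded:,.0f}/month")
```

Under those made-up numbers, the guardrails are worth about $4,500 a month before you even count the six-month rollback scenario. Swap in your own figures; the point is that the mitigation now has a price you can defend.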
Nobody Is Waiting for Permission
The pace of agentic AI adoption right now is genuinely wild. People are jumping first and looking later. Companies aren’t running careful pilots with six months of evaluation. They’re plugging agents into production systems because their competitors already did it last Tuesday.
And there’s a rational logic to it, even if it feels reckless. If you wait for the perfect security posture before deploying AI agents, you’re not being cautious — you’re being slow. And slow is its own kind of risk. The company that ships an imperfect but functional AI workflow today learns faster than the one still drafting requirements. Talk to any team using agents in production without controls, and they’ll have stories. Sometimes funny, sometimes expensive. Usually both.
Perfect Is the Enemy of Good
We’re building Maybe Don’t in the middle of a land rush. MCP gateway. CLI proxy. We’re trying to cover every way AI acts in the real world — and that surface area is expanding faster than anyone can map it.
Have we nailed every edge case? Nope.
There is a real early mover advantage in this problem space. Not because the first solution will be the last solution, but because the teams building guardrails now are learning what actually matters. They’re seeing the real failure modes. They’re building intuition for a problem that every team using AI agents will have to face.
Nobody can afford to wait for the perfect answer.
No Awards for What Didn’t Happen
Guardrails are invisible when they’re working. Nobody notices the agent that didn’t delete the database, or the email that wasn’t sent with hallucinated data. It’s sort of like telling your spouse you didn’t cheat — that’s great, but don’t expect any awards; it just means you’re not a $!#%. 😎
We’re not in this for the fame — we know nobody will notice if we do it well. It’s a bit like parenting. We don’t want to be in the way, but when an agent does screw up, we can gently respond with a no and an explanation. We’re not angry, just disappointed, and we’re worried about you.
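For the curious, here’s a minimal sketch of what “a no, and an explanation” can look like at the tool-call layer. The ToolCall shape and the deny rules are invented for illustration; this isn’t our actual API, just the general pattern:

```python
# Minimal sketch of a guardrail that answers risky tool calls with
# "no, and here's why" instead of failing silently.
# The ToolCall shape and the rules below are hypothetical.

from dataclasses import dataclass

@dataclass
class ToolCall:
    tool: str
    args: dict

@dataclass
class Verdict:
    allowed: bool
    reason: str

# Hypothetical deny rules: a predicate paired with a human-readable explanation.
DENY_RULES = [
    (lambda c: c.tool == "shell" and "rm -rf" in c.args.get("cmd", ""),
     "Destructive shell commands are blocked. Ask a human to run this one."),
    (lambda c: c.tool == "sql" and c.args.get("query", "").lstrip().lower().startswith("drop"),
     "DROP statements need explicit approval before they run."),
]

def check(call: ToolCall) -> Verdict:
    """Return an allow/deny verdict with an explanation the agent can read."""
    for predicate, reason in DENY_RULES:
        if predicate(call):
            return Verdict(allowed=False, reason=reason)
    return Verdict(allowed=True, reason="No policy matched. Allowed.")

# The denial goes back to the agent as a normal response, so it can
# explain itself to the user or pick a safer plan. Not angry, just disappointed.
verdict = check(ToolCall(tool="shell", args={"cmd": "rm -rf /var/data"}))
print(verdict.allowed, verdict.reason)
```

The design choice that matters is returning the reason to the agent rather than just blocking: a denial the model can read is a denial it can route around safely.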
The goal isn’t to stop AI from being useful. It’s to keep the useful AI from occasionally being really bad for business. That’s a narrower problem than it sounds, and it’s the problem we’re working on.
Learn, Adapt, Keep Up
We have not solved this yet. I don’t think anyone has. The reality is, it’s early, we’re early, we’re going really fast, and we’re working on a problem that is not going to solve itself.
New problems will emerge and existing ones will evolve. New protocols, new agent architectures, new ways for AI to interact with the world that nobody has thought of yet. The teams that will be ready are the ones that are already building, already adapting, already in the fight.
We’d rather be early and iterating than late and perfect. Because “late and perfect” in this market just means late.
If your team is deploying AI agents and wondering how to keep them from doing something regrettable, hit the Book a Demo button at the top of the page. We might not have every answer, but we’ve probably been thinking about the same problems, and we’re actively building solutions.