California Just Deleted the "AI Did It" Defense

AB 316 ends the autonomous harm defense. You now own what your AI does.

We’ve all been hoping to use the convenient defense:

We didn’t do it — it was the AI.

Nobody believed this would hold up forever. But when the alternative was slowing down while your competitors didn’t, you took the risk.

But just like when you told your third-grade teacher the dog ate your homework, she wasn’t buying it. And as of January 1, 2026, California has provided legal clarification that the courts aren’t buying it either. Now that “the AI did it” is clearly a weak defense, managing the risk isn’t what will make you less competitive. Ignoring it is.

AI Will Screw Up. Count on It.

If you’re deploying AI agents that can take real actions—moving data, provisioning infrastructure, interacting with customers, making decisions—something will eventually go sideways.

Not because you’re careless. Because that’s how these systems work. They hallucinate. They misinterpret. They optimize for the wrong thing. They do exactly what you told them to do, which turns out to be catastrophically different from what you meant.

The risk isn’t theoretical. It’s operational.

The Algorithm Is Not a Defense

AB 316 makes it explicit: if you developed, modified, or used an AI system that causes harm, you cannot argue in court that the AI acted autonomously and therefore you’re off the hook.

To be clear, AB 316 doesn’t create strict liability—a plaintiff still has to prove your AI caused the harm and that it was foreseeable. You can still mount a defense. What you can’t do is hide behind the AI’s autonomy as if it were some kind of force of nature that wandered off on its own.

That means the old escape hatch is gone. If a court finds your AI caused the damage, that liability traces back to you. Your company. Your officers.

And civil liability is often just the opening act. If your AI does something reckless enough—say, in healthcare, financial services, or anything touching consumer safety—criminal exposure can follow. Negligence. Fraud. Worse.

The algorithm doesn’t get sued. You do.

The Letter of the Law

The law is surgical. It adds Section 1714.46 to the California Civil Code, prohibiting defendants from asserting that an AI system “autonomously caused the harm” in any lawsuit alleging AI-related damage.

Plain English: You built it, modified it, or used it. You own what it does — and that applies to every link in the chain, from the model developer to the company that deployed it.

Action Items

The law doesn’t tell you how to protect yourself — but it does preserve defenses around causation, foreseeability, and comparative fault. Translation: you can still show you acted reasonably. Here’s how to build that case.

1. Know what your AI can touch.

Most companies don’t have a clear inventory of where AI agents have write access. Fix that first. You can’t argue foreseeability if you don’t even know what your AI has access to.
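What that inventory can look like is simpler than it sounds. Here’s a minimal sketch in Python: a declarative record of each agent, what it can write to, and which human owns the risk. Every name in it (AgentAccess, AGENT_INVENTORY, the scope strings) is hypothetical, not drawn from any particular framework.

```python
# Sketch of an agent-access inventory. All names are illustrative.
from dataclasses import dataclass

@dataclass
class AgentAccess:
    name: str                # which agent
    write_scopes: list[str]  # systems it can modify, not just read
    owner: str               # the human accountable for it

# The point is that this list exists at all, and is reviewed.
AGENT_INVENTORY = [
    AgentAccess("support-bot", ["crm:tickets"], owner="support-eng"),
    AgentAccess("infra-agent", ["aws:ec2", "aws:s3"], owner="platform"),
]

def report() -> None:
    """Print every system an AI agent can write to, and who owns the risk."""
    for agent in AGENT_INVENTORY:
        for scope in agent.write_scopes:
            print(f"{agent.name} -> {scope} (owner: {agent.owner})")

if __name__ == "__main__":
    report()
```

Even a flat file like this beats the usual answer, which is that nobody can say for sure which agents can touch production.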

2. Document everything.

Decisions, guardrails, risk assessments, and the reasoning behind them. If you end up in court, you’ll need to show you identified the risks and made deliberate choices — not that you shipped it and hoped for the best.
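One way to make that documentation durable is to capture each risk decision as structured data instead of burying it in meeting notes. Below is a hedged sketch; the RiskDecision fields and the record_decision helper are illustrative, not a legal or regulatory schema.

```python
# Sketch of an append-only risk-decision log. Field names are illustrative.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class RiskDecision:
    system: str       # which AI system the decision covers
    risk: str         # the failure mode you identified
    mitigation: str   # the guardrail you chose, or why you accepted the risk
    decided_by: str   # the accountable human
    decided_at: str   # ISO-8601 timestamp

def record_decision(system: str, risk: str, mitigation: str,
                    decided_by: str,
                    log_path: str = "risk_decisions.jsonl") -> None:
    """Append one decision record; each line becomes court-ready evidence."""
    entry = RiskDecision(system, risk, mitigation, decided_by,
                         datetime.now(timezone.utc).isoformat())
    with open(log_path, "a") as f:
        f.write(json.dumps(asdict(entry)) + "\n")

record_decision(
    system="infra-agent",
    risk="agent could delete production S3 buckets",
    mitigation="destructive AWS calls require human approval",
    decided_by="jane@company.example",
)
```

A timestamped record that says “we saw this risk and chose this guardrail” is exactly the kind of artifact that supports a foreseeability defense.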

3. Build runtime controls.

Governance policies describe what should happen. Runtime controls enforce what actually happens. Logging every action, flagging dangerous operations, and blocking them when necessary gives you evidence that your safeguards were active — not just written down in a doc somewhere.
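To make that concrete, here is roughly what a runtime control layer does under the hood: log the proposed action, match it against policy, and block it with a reason before it runs. This is an illustrative sketch, not any product’s actual API; the policy patterns and the enforce function are hypothetical.

```python
# Sketch of a runtime enforcement gate placed in front of agent tool calls.
import fnmatch
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-runtime")

BLOCKED_PATTERNS = ["db:drop_*", "aws:delete_*"]     # never allow
REVIEW_PATTERNS = ["payments:*", "email:send_bulk"]  # require human sign-off

class ActionBlocked(Exception):
    """Raised back to the agent with the reason, so it can adjust."""

def enforce(action: str, args: dict) -> None:
    """Log every proposed action; block policy violations before they run."""
    record = {"ts": datetime.now(timezone.utc).isoformat(),
              "action": action, "args": args}
    log.info("proposed: %s", json.dumps(record))  # the evidence trail

    if any(fnmatch.fnmatch(action, p) for p in BLOCKED_PATTERNS):
        log.warning("BLOCKED: %s", action)
        raise ActionBlocked(f"'{action}' violates policy: destructive op")
    if any(fnmatch.fnmatch(action, p) for p in REVIEW_PATTERNS):
        log.warning("HELD FOR REVIEW: %s", action)
        raise ActionBlocked(f"'{action}' requires human approval")

# Usage: call enforce() before the agent's tool actually executes.
try:
    enforce("aws:delete_bucket", {"bucket": "prod-data"})
except ActionBlocked as e:
    print(f"agent told why: {e}")
```

The design choice that matters is the ordering: the check happens before execution, so the log shows safeguards that were active, not a postmortem.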

What Runtime Controls Actually Look Like

With Maybe Don’t, every action your AI takes is logged. Dangerous operations are flagged and blocked before they execute — not after the damage is done. When something gets blocked, the agent is told why. Your rules. Your risk tolerance. Enforced in real time.

Because the question isn’t whether your AI will do something stupid. It’s whether you’ll catch it before it becomes a lawsuit.

AB 316 is the law. The “autonomous harm” defense is gone. Get in touch and put guardrails between your AI and your liability.
