What is Maybe Don’t?
Maybe Don’t provides guardrails and observability for agentic AI. It sits between AI agents and the tools they use — MCP servers and CLI commands — evaluating every operation against your policies before it executes. You define the rules; Maybe Don’t enforces them.
Think of it as a security checkpoint for your AI’s actions. Every tool call and CLI command is audited, validated, and logged.
Why Use It?
- Audit everything — See exactly what your AI agents are doing, across MCP tool calls and CLI commands
- Set guardrails — Block dangerous operations before they happen, using AI or deterministic policies
- Stay in control — Enforce policies, review decisions, and trace any action back to its source
- Observe and learn — Run in audit-only mode to understand agent behavior before enforcing rules
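To make the guardrail idea concrete, here is a minimal conceptual sketch of a deterministic policy check — a rule set that blocks a CLI command before it runs if it matches a deny pattern. The patterns and function names are illustrative only, not Maybe Don’t’s actual policy format or API:

```python
import re

# Hypothetical deny list: block a command if it matches any pattern,
# otherwise allow it. (Illustrative sketch, not Maybe Don't's API.)
DENY_PATTERNS = [
    r"\brm\s+-rf\b",              # destructive recursive delete
    r"\bgit\s+push\s+--force\b",  # history-rewriting push
]

def evaluate(command: str) -> str:
    """Return 'block' if the command matches a deny pattern, else 'allow'."""
    for pattern in DENY_PATTERNS:
        if re.search(pattern, command):
            return "block"
    return "allow"
```

A deterministic check like this is predictable and auditable — the same command always produces the same decision — which is why it pairs well with an audit-only mode: you can log what would have been blocked before turning enforcement on.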