Autonomous AI agents.
Zero data leakage.
Agents that run shell commands, delegate to each other, and use tools like GitHub, Slack, and Google Workspace. All inside network-isolated sandboxes on your own infrastructure. Nothing leaves without you knowing.
the problem
AI agents are powerful. That's exactly why they're dangerous.
Tools like OpenClaw proved that AI agents can do incredible things: execute shell commands, manage files, automate workflows. But they run with full system access, no network isolation, and community plugins that have been caught exfiltrating credentials and injecting prompts. That's not something you can deploy at work.
Typical agent frameworks:

- × Full internet access, can exfiltrate data
- × API keys visible to the agent and plugins
- × No audit trail of what data was sent to LLMs
- × Community plugins with no security vetting

Prometheal:

- ✓ Network-isolated sandbox that can't reach the internet
- ✓ Keys stay on the server, never enter the sandbox
- ✓ Every prompt, response, and tool call logged
- ✓ MCP integrations run server-side with scoped access
architecture
Everything routes through Prometheal
LLM calls, MCP tool use, and sandbox commands all flow through the server. The sandbox never touches the internet directly.
The server handles:

- Tool routing and context management
- Key management and spending limits
- Credential vault and scope rules
- Full audit trail of inbound requests, outbound calls, and responses
capabilities
Everything an agent needs. Nothing it shouldn't have.
Full Agent Runtime
Agents get a complete Linux environment with shell access, a file system, and a virtual desktop you can watch in real time. They can write code, run scripts, manage files, and work autonomously. Like OpenClaw, but inside a locked sandbox.
MCP Tool Integrations
Connect agents to GitHub, Slack, Google Workspace, Postgres, and more via MCP. Unlike community plugin systems, MCP clients run server-side. Credentials never enter the sandbox, and scope rules limit what each agent can access.
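Scope rules could be expressed as a configuration like the following sketch. The file name, schema, and `vault://` references are all assumptions for illustration, not Prometheal's actual format:

```
# scopes.yaml -- hypothetical schema, shown only to illustrate scoped access
integrations:
  github:
    credentials: vault://github-bot   # token resolved server-side, never enters the sandbox
    allow:
      repos: [acme/website]
      actions: [read, create_issue]
  slack:
    credentials: vault://slack-bot
    allow:
      channels: ["#support"]
      actions: [post_message]
```

Because the MCP client runs on the server, the agent only ever sees tool results, never the credentials behind them.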
Network Isolation
Each sandbox runs in Docker with gVisor, firewalled to only talk to the Prometheal server. No internet, no lateral movement, no data exfiltration. The agent can do anything inside its sandbox, but nothing gets out.
# Default-deny: drop all outbound traffic from the sandbox,
iptables -A OUTPUT -j DROP
# then insert an allow rule for the Prometheal server ahead of the drop.
iptables -I OUTPUT -d prometheal-host -j ACCEPT
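Rules like these could be generated per sandbox by a small setup script. A minimal sketch, assuming the script shape and a placeholder server address (neither is the shipped tooling):

```shell
#!/bin/sh
# Sketch: emit the firewall rules for one sandbox.
# PROMETHEAL_HOST is a hypothetical server address for illustration.
PROMETHEAL_HOST="${PROMETHEAL_HOST:-10.0.0.1}"

sandbox_rules() {
  # Default-deny all outbound traffic from the sandbox...
  echo "iptables -A OUTPUT -j DROP"
  # ...then insert an allow rule for the Prometheal server ahead of it.
  echo "iptables -I OUTPUT -d $PROMETHEAL_HOST -j ACCEPT"
}

sandbox_rules
```

The ordering matters: `-I` inserts the ACCEPT rule above the DROP rule, so server traffic is matched first and everything else falls through to the drop.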
Full Observability
Watch agents work through live desktop streaming. Every LLM call, tool use, and data movement is logged to the audit trail. Spending limits prevent runaway costs. You see everything the agent does, not just what it tells you.
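With everything in one trail, a question like "which tools did this agent call?" becomes a one-liner. A sketch against a hypothetical log format; the field names are invented for illustration, not Prometheal's actual schema:

```shell
# Hypothetical audit-log entries (format assumed for illustration).
cat > /tmp/prometheal-audit.log <<'EOF'
2025-01-01T12:00:00Z agent=coordinator dir=OUT type=llm_call tokens=1200
2025-01-01T12:00:02Z agent=coordinator dir=RSP type=llm_resp tokens=300
2025-01-01T12:00:03Z agent=coordinator dir=OUT type=tool_call tool=github.create_issue
EOF

# List every tool call the agent made:
grep 'type=tool_call' /tmp/prometheal-audit.log
```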
Multi-Agent Teams
Agents can delegate tasks to other agents. A coordinator agent can route math questions to a math specialist, writing tasks to a copywriter, and code reviews to a developer agent. Each runs in its own sandbox with its own tools.
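The fan-out pattern can be shown with a toy router. In Prometheal the coordinator's LLM makes the routing decision; this keyword match is purely a sketch of the shape, not the mechanism:

```shell
# Toy coordinator: pick a specialist agent for a task description.
# Illustrative only -- the real routing decision is made by the LLM.
route() {
  case "$1" in
    *math*)          echo "math-specialist" ;;
    *writ*|*copy*)   echo "copywriter" ;;
    *code*|*review*) echo "developer" ;;
    *)               echo "coordinator" ;;  # handle it ourselves
  esac
}

route "review this pull request"   # each specialist runs in its own sandbox
```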
External Channels
Expose agents beyond the web UI. Connect them to Telegram, Slack, or any webhook-based platform. Users interact in their preferred app while the agent runs securely inside Prometheal with the same sandboxing and audit guarantees.
- No open ports on the sandbox
- Server-side MCP integrations
- One command to deploy
- Licensed, forever
get started
Deploy in under 5 minutes
Deploy with one command
Clone the repo and run ./deploy.sh.
It sets up everything — HTTPS, PostgreSQL, Redis — and the setup wizard creates your admin account.
Create agents and connect your knowledge base
Configure agents with system prompts and model selection. Connect internal data sources (Google Workspace, Slack, databases, filesystems) via MCP integrations. Invite team members via secure links.
Let agents work
Chat with agents that can run commands, write files, use MCP tools, and work autonomously in their sandbox. Watch them through live desktop streaming. All LLM calls and tool use are logged automatically so you stay in control without slowing them down.