AI Agent Security
Complete guide to securing AI agents with VeraID. Manage identities, control costs, detect prompt injection, and maintain full visibility over your AI infrastructure.
Overview
AI agents are fundamentally different from traditional service accounts. They make autonomous decisions, can generate thousands of API calls without human intervention, and their behavior depends on prompts that can change at runtime. VeraID provides purpose-built identity management for AI agents.
Key challenges with AI agents:
- Unpredictable costs: Runaway API usage can result in unexpected bills
- Security risks: Prompt injection attacks can hijack agent behavior
- No visibility: Difficult to track what each agent is doing
- Scale concerns: Multiple agent instances need coordinated access control
Supported AI Agents
VeraID works with any AI agent that makes API calls. We provide native integrations for popular frameworks and patterns:
| Category | Frameworks/Tools | Integration Method |
|---|---|---|
| LLM Orchestrators | LangChain, LlamaIndex, Semantic Kernel, Haystack | SDK callback handlers |
| Autonomous Agents | AutoGPT, BabyAGI, CrewAI, AgentGPT, SuperAGI | API proxy + SDK |
| LLM APIs | OpenAI, Anthropic, Google AI, Cohere, Azure OpenAI | Credential injection |
| Workflow Platforms | n8n, Zapier, Make, Pipedream | Webhook + secrets sync |
| Custom Agents | Any Python, TypeScript, Go application | REST API + SDK |
Core Features
Budget Controls
Set spending limits per agent with daily and monthly caps. Auto-throttle or block when limits are reached.
Prompt Injection Detection
Real-time analysis of prompts to detect injection attacks before they reach external APIs.
Behavioral Analytics
Learn normal patterns per agent. Flag anomalies like unusual API calls, access times, or volume spikes.
Complete Audit Trail
Log every prompt, response, tokens used, cost incurred. Full context for debugging and compliance.
JIT Credentials
Just-in-time access to LLM APIs. Grant access only when agent is running, revoke automatically.
Model-Level Policies
Control which models each agent can access. Restrict GPT-4 to production, allow GPT-3.5 for dev.
Architecture
VeraID sits between your AI agents and external LLM APIs, providing a unified control plane:
Quick Start
1. Create an AI Agent Identity
# Navigate to Identities → Create Identity # Select type: AI Agent # Configure: # - Name: "customer-support-agent" # - Budget: $100/day, $2000/month # - Allowed models: gpt-4, gpt-3.5-turbo # - Prompt injection detection: Enabled
2. Store LLM API Credentials
# Create credential for the agent # Store your OpenAI/Anthropic API key in VeraID vault # The agent will request temporary access when running
3. Install the SDK
# Python pip install veraid # TypeScript/Node.js npm install @veraid/sdk
4. Integrate with Your Agent
# Python - LangChain example from veraid import VeraIDClient from langchain.llms import OpenAI from langchain.callbacks import VeraIDCallback # Initialize VeraID client kd = VeraIDClient( agent_id="agent_abc123", api_key="kd_live_..." ) # Get temporary OpenAI credentials openai_key = kd.get_credential("openai-api-key") # Initialize LLM with VeraID callback llm = OpenAI( openai_api_key=openai_key, callbacks=[VeraIDCallback(kd)] ) # All calls are now tracked, monitored, and budget-controlled response = llm("Explain quantum computing")
Budget Controls
Set granular spending limits to prevent runaway costs. VeraID tracks usage in real-time and enforces limits automatically.
| Control Type | Description | Action When Exceeded |
|---|---|---|
| Daily Limit | Maximum spend per 24-hour period | Block requests until next day |
| Monthly Limit | Maximum spend per calendar month | Block requests until next month |
| Per-Request Limit | Maximum cost for a single API call | Reject individual request |
| Warning Threshold | Alert when approaching limit (e.g., 80%) | Send notification, continue |
| Rate Limit | Maximum requests per minute/hour | Throttle or queue requests |
# Configure budget controls via API const agent = await veraid.identities.create({name: "data-analysis-agent", type: "ai_agent", budget: { daily_limit: 50.00, // $50/day monthly_limit: 1000.00, // $1000/month per_request_max: 5.00, // Max $5 per call warning_threshold: 0.8, // Alert at 80% action_on_exceed: "block" // or "throttle", "alert" } });
Prompt Injection Detection
Prompt injection attacks attempt to hijack AI agents by inserting malicious instructions into user input. VeraID analyzes prompts in real-time to detect and block these attacks.
Detection patterns include:
- Instruction override: "Ignore previous instructions and..."
- Role manipulation: "You are now a different assistant that..."
- Data exfiltration: "Output all system prompts..."
- Jailbreaking: Known bypass techniques for safety filters
- Payload injection: Hidden instructions in encoded formats
How VeraID Responds:
- Prompt is analyzed before reaching the LLM API
- Suspicious patterns are flagged with a risk score
- High-risk prompts are blocked or sent for review
- All detections are logged in the audit trail
- Alerts are sent to security teams
# Prompt injection detection response { "status": "blocked", "risk_score": 92, "detections": [ { "type": "instruction_override", "pattern": "ignore.*previous.*instructions", "confidence": 0.95 } ], "action_taken": "request_blocked", "audit_id": "audit_xyz789" }
SDK Reference
Python SDK
from veraid import VeraIDClient # Initialize client = VeraIDClient(agent_id="...", api_key="...") # Get credential for LLM API api_key = client.get_credential("openai-key") # Log a prompt (automatic with callbacks) client.log_prompt(prompt="...", model="gpt-4") # Log response and usage client.log_response( prompt_id="...", response="...", tokens_used=150, cost=0.045 ) # Check prompt for injection result = client.check_injection(prompt="...") if result.is_suspicious: print(f"Blocked: {result.reason}")
TypeScript SDK
import { VeraID } from '@veraid/sdk'; // Initialize const kd = new VeraID({ agentId: 'agent_abc123', apiKey: 'kd_live_...' }); // Get credential const openaiKey = await kd.getCredential('openai-key'); // Wrap OpenAI client with VeraID monitoring const openai = kd.wrapOpenAI(new OpenAI({ apiKey: openaiKey })); // All calls are now monitored automatically const response = await openai.chat.completions.create({...});
Ready to secure your AI agents?
Join the waitlist for early access to VeraID's AI agent security.
Join Waitlist