Scan Modes
Model Mode
Directly tests a base model through OpenRouter. No infrastructure needed — just select a model from the curated list and start scanning.
Best for:
- Evaluating base model safety before deployment
- Comparing models on the public leaderboard
- Security benchmarking across providers
Supported models include GPT-4o, Claude, Gemini, Grok, DeepSeek, Llama, Mistral, and more.
Browser Mode
Tests web-based chatbots and LLM-powered UIs through a headless browser. ShieldPi interacts with the target exactly as a real user would — typing prompts, reading responses, and navigating the interface.
Best for:
- Customer-facing chatbot widgets
- Internal AI assistants with web UIs
- Testing the full stack (frontend + backend + model)
Requires the target to be accessible via URL. Supports login flows.
API Mode
Sends attack payloads directly to your LLM API endpoint. Supports four API formats: OpenAI-compatible, Anthropic, Google Gemini, and custom endpoints.
Best for:
- Backend LLM services with REST APIs
- CI/CD pipeline integration (fastest mode)
- Custom wrappers around model providers
Configure the target with your API endpoint, format, and authentication header.
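To make the three standard formats concrete, here is a minimal sketch of the request bodies each one expects. The shapes mirror the public OpenAI, Anthropic, and Google Gemini chat APIs; the `build_payload` helper itself is illustrative and not part of ShieldPi, and custom endpoints would need their own mapping.

```python
def build_payload(api_format: str, model: str, prompt: str) -> dict:
    """Return a chat request body for the given API format (sketch)."""
    if api_format == "openai":
        # OpenAI-compatible: role-tagged message list
        return {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }
    if api_format == "anthropic":
        # Anthropic Messages API: max_tokens is a required field
        return {
            "model": model,
            "max_tokens": 1024,
            "messages": [{"role": "user", "content": prompt}],
        }
    if api_format == "gemini":
        # Google Gemini generateContent: nested contents/parts structure
        return {"contents": [{"parts": [{"text": prompt}]}]}
    raise ValueError(f"unknown API format: {api_format}")
```

Authentication differs per format as well: OpenAI-compatible endpoints typically expect an `Authorization: Bearer` header, while Anthropic uses `x-api-key` and Gemini passes a key as a query parameter.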
Agent Mode
Tests autonomous AI agents by connecting them to a ShieldPi-controlled chat session. Your agent connects via webhook and ShieldPi probes it with multi-turn conversation attacks, tool abuse scenarios, and privilege escalation attempts.
Best for:
- LangChain / LlamaIndex agents with tool access
- Autonomous agents that can execute actions
- Multi-turn conversation systems
Supports both GET and POST chat endpoints. See the Agent Monitor docs for live monitoring.
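As a rough sketch of what the agent side of the webhook might look like, the handler below accepts a POST with a JSON body, forwards the scanner's message to the agent, and returns the reply as JSON. The field names (`message`, `session_id`, `reply`) and port are assumptions for illustration; check the Agent Monitor docs for the actual payload contract, and replace the echo stub with your real agent call.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def run_agent(message: str, session_id: str) -> str:
    # Stand-in for your agent; swap in e.g. a LangChain chain invocation.
    return f"echo: {message}"


def handle_chat(payload: dict) -> dict:
    """Map an incoming chat payload (hypothetical shape) to an agent reply."""
    message = payload.get("message", "")
    session = payload.get("session_id", "default")
    return {"reply": run_agent(message, session_id=session)}


class ChatEndpoint(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length))
        response = json.dumps(handle_chat(body)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(response)


def serve(port: int = 8000) -> None:
    # Expose this URL to ShieldPi as the agent's webhook endpoint.
    HTTPServer(("", port), ChatEndpoint).serve_forever()
```

Keeping the agent call inside a plain function like `handle_chat` also makes the webhook easy to unit-test without starting the server.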
Comparison
| Feature | Model | Browser | API | Agent |
|---|---|---|---|---|
| Setup effort | None | URL only | URL + key | Webhook |
| Scan duration | 10-30 min | 5-15 min | 3-10 min | 15-60 min |
| Multi-turn | Yes | Yes | Limited | Full |
| Tool abuse testing | No | No | No | Yes |