
A Security Scanner for your agentic workflows!
View Demo · Documentation · Report Bug · Request Feature
The Agentic Radar is designed to analyze and assess agentic systems for security and operational insights. It helps developers, researchers, and security professionals understand how agentic systems function and identify potential vulnerabilities.
It allows users to create a security report for agentic systems, including:
- Workflow Visualization - a graph of the agentic system's workflow
- Tool Identification - a list of all external and custom tools utilized by the system
- MCP Server Detection - a list of all MCP servers used by the system's agents
- Vulnerability Mapping - a table connecting identified tools to known vulnerabilities, providing a security overview
The comprehensive HTML report summarizes all findings and allows for easy reviewing and sharing.
Agentic Radar includes mapping of detected vulnerabilities to well-known security frameworks 🛡️.
There are no prerequisites! Just make sure you have Python (and pip) installed on your machine.
pip install agentic-radar
# Check that it is installed
agentic-radar --version
Some features require extra installations, depending on the targeted agentic framework. See more below.
CrewAI extras are needed when using one of the following features in combination with CrewAI:
- Agentic Radar Test
- Descriptions for predefined tools
You can install Agentic Radar with extra CrewAI dependencies by running:
pip install agentic-radar[crewai]
Warning
This will install the `crewai-tools` package, which is only supported on Python versions >= 3.10 and < 3.13. If you are using a different Python version, the tool descriptions will be less detailed or entirely missing.
OpenAI Agents extras are needed when using one of the following features in combination with OpenAI Agents:
- Agentic Radar Test
You can install Agentic Radar with extra OpenAI Agents dependencies by running:
pip install agentic-radar[openai-agents]
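If you need both sets of extras, pip accepts a comma-separated extras list; quoting the argument also avoids glob expansion of the square brackets in shells such as zsh:

```bash
pip install "agentic-radar[crewai,openai-agents]"
```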
Agentic Radar now supports two main commands:
Scan code for agentic workflows and generate a report.
agentic-radar scan [OPTIONS] FRAMEWORK:{langgraph|crewai|n8n|openai-agents}
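For example, to scan a LangGraph project and write the HTML report to a chosen path (the input directory here is a placeholder; `-i` and `-o` are the same input/output options used in the Prompt Hardening example below):

```bash
agentic-radar scan langgraph -i ./my_langgraph_project -o report.html
```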
Test agents in an agentic workflow for various vulnerabilities. Requires OPENAI_API_KEY to be set as an environment variable.
agentic-radar test [OPTIONS] FRAMEWORK:{openai-agents} ENTRYPOINT_SCRIPT_WITH_ARGS
See more about this feature here.
Agentic Prompt Hardening automatically improves detected system prompts in your agentic workflow and displays them in the report. It transforms simple agent instructions into high-quality, structured system prompts that follow prompt-engineering best practices.
Note
Currently supported frameworks (with more to come): OpenAI Agents, CrewAI
It is straightforward to use:
- Set your OPENAI_API_KEY environment variable by running export OPENAI_API_KEY=<api_key>.
- Run Agentic Radar with the --harden-prompts flag, for example:
agentic-radar scan openai-agents --harden-prompts -i examples/openai-agents/basic/lifecycle_example -o report.html
- Inspect the hardened system prompts in the generated report.
Agentic Radar now supports testing your agent workflows at runtime to identify critical vulnerabilities through simulated adversarial inputs.
This includes automated testing for:
- Prompt Injection
- PII Leakage
- Harmful Content Generation
- Fake News Generation
Currently supported for:
- OpenAI Agents ✅ (more frameworks coming soon)
The test command launches your agentic workflow with a test suite that simulates malicious or adversarial inputs, based on real-world attack scenarios aligned with the OWASP LLM Top 10.
Note
This feature requires OPENAI_API_KEY or AZURE_OPENAI_API_KEY set as an environment variable. You can set it via command line or inside a .env file.
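For example, a minimal .env file in your working directory could contain just the key (the value shown is a placeholder):

```bash
OPENAI_API_KEY=<your_api_key>
```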
A test run looks like this:
agentic-radar test <framework> "<path/to/the/workflow/main.py any-necessary-args>"
For example:
agentic-radar test openai-agents "examples/openai-agents/basic/lifecycle_example.py"
The tool injects itself into the agentic workflow provided by the user, detects the necessary information, and runs the prepared tests.
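To make the entrypoint contract concrete, here is a minimal sketch of what such a script could look like, assuming the OpenAI Agents SDK (the file name, agent name, and instructions are illustrative, not taken from the bundled examples):

```python
# minimal_workflow.py - illustrative OpenAI Agents entrypoint (hypothetical example)
from agents import Agent, Runner

# A single agent with a plain instruction. At test time, agentic-radar
# launches this script and injects adversarial inputs into the workflow.
agent = Agent(
    name="Assistant",
    instructions="You are a helpful assistant. Answer user questions concisely.",
)

def main() -> None:
    # Run one turn of the workflow; requires OPENAI_API_KEY in the environment.
    result = Runner.run_sync(agent, "Hello! What can you do?")
    print(result.final_output)

if __name__ == "__main__":
    main()
```

You would then test it the same way as the bundled examples: agentic-radar test openai-agents "minimal_workflow.py".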
You can pass custom tests to Agentic Radar by creating a YAML file with the following structure:
include_default_tests: false
tests:
  - name: TestA
    input: Mention A.
    success_condition: Attack succeeded if A is mentioned.
  - name: TestB
    input: Mention B.
    success_condition: Attack succeeded if B is mentioned.
`name` is the name of the test, `input` is the input text to be passed to the agent, and `success_condition` is a description of what constitutes a successful attack (it will be used by an oracle LLM to determine whether the test passed or failed).
You can then run Agentic Radar test with the `--config` option pointing to your YAML file:
agentic-radar test openai-agents --config custom_tests.yaml "examples/openai-agents/basic/lifecycle_example.py"
By default, Agentic Radar will also include the built-in tests. You can disable them by setting `include_default_tests` to `false` in your YAML file.
All test results are printed in a visually rich table format directly in the terminal. Each row shows:
- Agent name
- Type of test
- Injected input
- Agent output
- Whether the test passed or failed
- A short explanation of the result
This makes it easy to spot vulnerabilities at a glance, especially in multi-agent systems.
This matrix shows which Agentic Radar features each agentic framework supports. Over time, we will strive to cover all current frameworks with all existing features, as well as introduce new frameworks to the mix.
Framework | Scan | MCP Detection | Prompt Hardening | Agentic Test |
---|---|---|---|---|
OpenAI Agents | ✅ | ✅ | ✅ | ✅ |
CrewAI | ✅ | ✅ | ✅ | ❌ |
n8n | ✅ | ❌ | ❌ | ❌ |
LangGraph | ✅ | ❌ | ❌ | ❌ |
Are there some features you would like to see happen first? Vote anonymously here or open a GitHub Issue.
We welcome contributions from the AI and security community! Join our Discord or Slack community to connect with other developers, discuss features, get support, and contribute to Agentic Radar.
If you like what you see, give us a star! It keeps us inspired to improve and innovate and helps others discover the project.
Q: Is my source code being shared or is everything running locally?
A: The main features (static workflow analysis and vulnerability mapping) run completely locally, so your code is not shared anywhere. Optional advanced features may use LLMs. For example, when using Prompt Hardening, detected system prompts can be sent to an LLM for analysis.