BloodHound meets AI
BloodHound is a standard tool for attack path analysis in Active Directory environments. It works well, but getting the most out of it requires familiarity with Cypher queries against the Neo4j backend — which takes time to build. Rio Darmawan recently published an article on connecting BloodHound to Claude AI via the Model Context Protocol (MCP). Credit to him for the idea and the implementation. This post documents how to set it up and use it.
What is MCP?
Model Context Protocol is a protocol developed by Anthropic that allows an AI model to call external tools and APIs in real time. In this context, an MCP server sits between Claude and BloodHound's REST API. Claude can execute Cypher queries, search for objects, and analyze attack paths directly — without you writing the queries manually.
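Concretely, MCP is JSON-RPC 2.0, typically spoken over stdio between the client (Claude Desktop) and the server. A sketch of what a tool invocation looks like on the wire — note that the tool name `run_cypher` and its argument shape are hypothetical here; the real tool names depend on the bloodhound-mcp server implementation:

```python
import json

# Illustrative only: the general shape of an MCP "tools/call" request.
# "run_cypher" and its arguments are made-up placeholders, not the
# actual tool names exposed by bloodhound-mcp.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "run_cypher",
        "arguments": {"query": "MATCH (u:User) RETURN u.name LIMIT 5"},
    },
}

print(json.dumps(request, indent=2))
```

The server answers with a JSON-RPC response containing the tool's result, which Claude then folds into its reply.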
Prerequisites
- Docker and Docker Compose
- Python 3.10+
- Claude Desktop (macOS or Windows)
- SharpHound collection data
Installation
1. BloodHound Community Edition
The quickest way to get BloodHound CE running is via Docker Compose.
curl -L https://ghst.ly/getbhce | docker compose -f - up
Or manually:
git clone https://github.com/SpecterOps/BloodHound.git
cd BloodHound
cp examples/docker-compose/docker-compose.yml docker-compose.yml
docker compose up -d
The BloodHound UI will be available at http://localhost:8080. The initial admin password is printed in the Docker logs on first start:
docker compose logs | grep "Initial Password Set To:"
Log in, change the password, and import your SharpHound data through the UI.
2. SharpHound Collection
If you don't already have data, run SharpHound against the target environment:
.\SharpHound.exe -c All --outputdirectory C:\temp\bh
Zip the output and upload it via File Ingest in the BloodHound UI.
3. API Key
The MCP server authenticates to BloodHound's REST API with an API key. Create one under:
Administration → API Keys → Create API Key
Save both the Token ID and Token Key.
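The MCP server handles this for you, but it helps to know what the Token ID and Token Key are actually used for: BloodHound CE signs every API request with a chained HMAC-SHA256 scheme (method + URI, then the timestamp truncated to the hour, then the body). A sketch of that signing chain, based on the scheme BloodHound CE documents — verify the details against the API docs for your version before relying on it:

```python
import base64
import datetime
import hashlib
import hmac

def sign_request(token_id: str, token_key: str, method: str, uri: str,
                 body: bytes = b"") -> dict:
    """Build auth headers for a BloodHound CE API request (sketch).

    The documented chain: token key signs method+URI, that digest keys an
    HMAC over the RFC 3339 timestamp truncated to hour resolution, and that
    digest in turn keys an HMAC over the request body.
    """
    # Link 1: HMAC over HTTP method and URI, keyed with the token key.
    digester = hmac.new(token_key.encode(), digestmod=hashlib.sha256)
    digester.update(f"{method}{uri}".encode())

    # Link 2: keyed with the previous digest, over the timestamp truncated
    # to the hour (first 13 characters of the RFC 3339 string).
    request_date = datetime.datetime.now(datetime.timezone.utc).isoformat()
    digester = hmac.new(digester.digest(), digestmod=hashlib.sha256)
    digester.update(request_date[:13].encode())

    # Link 3: keyed with the previous digest, over the request body.
    digester = hmac.new(digester.digest(), digestmod=hashlib.sha256)
    digester.update(body)

    return {
        "Authorization": f"bhesignature {token_id}",
        "RequestDate": request_date,
        "Signature": base64.b64encode(digester.digest()).decode(),
    }

headers = sign_request("my-token-id", "my-token-key",
                       "GET", "/api/v2/available-domains")
```

This is also why both values must be saved: the Token ID goes in the Authorization header, while the Token Key never leaves the client and only keys the signature.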
4. BloodHound MCP Server
git clone https://github.com/winezer0/bloodhound-mcp
cd bloodhound-mcp
pip install -r requirements.txt
Create a .env file in the project directory:
BLOODHOUND_URL=http://localhost:8080
BLOODHOUND_TOKEN_ID=<your-token-id>
BLOODHOUND_TOKEN_KEY=<your-token-key>
Verify the server starts correctly before connecting it to Claude:
python server.py
5. Claude Desktop Configuration
Open the Claude Desktop config file.
macOS:
~/Library/Application Support/Claude/claude_desktop_config.json
Windows:
%APPDATA%\Claude\claude_desktop_config.json
Add the BloodHound server under mcpServers:
{
  "mcpServers": {
    "bloodhound": {
      "command": "python",
      "args": ["/absolute/path/to/bloodhound-mcp/server.py"],
      "env": {
        "BLOODHOUND_URL": "http://localhost:8080",
        "BLOODHOUND_TOKEN_ID": "<your-token-id>",
        "BLOODHOUND_TOKEN_KEY": "<your-token-key>"
      }
    }
  }
}
Restart Claude Desktop. The BloodHound tools should now appear as available in the interface.
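If the config file already contains other MCP servers, edit it carefully so you don't clobber them. A small sketch of merging the entry programmatically instead of by hand — the paths and env values are placeholders to adjust for your setup:

```python
import json
from pathlib import Path

def add_bloodhound_server(config_path: Path) -> dict:
    """Merge a "bloodhound" MCP entry into a Claude Desktop config,
    preserving any servers already registered. All values below are
    placeholders, not authoritative paths."""
    config = json.loads(config_path.read_text()) if config_path.exists() else {}
    config.setdefault("mcpServers", {})["bloodhound"] = {
        "command": "python",
        "args": ["/absolute/path/to/bloodhound-mcp/server.py"],
        "env": {
            "BLOODHOUND_URL": "http://localhost:8080",
            "BLOODHOUND_TOKEN_ID": "<your-token-id>",
            "BLOODHOUND_TOKEN_KEY": "<your-token-key>",
        },
    }
    config_path.write_text(json.dumps(config, indent=2))
    return config

# Demonstrated against a temporary file so a real config is never touched.
import tempfile
tmp = Path(tempfile.mkdtemp()) / "claude_desktop_config.json"
tmp.write_text(json.dumps({"mcpServers": {"other": {"command": "node"}}}))
merged = add_bloodhound_server(tmp)
```

The pre-existing "other" server survives the merge, which is the point: `setdefault` only creates `mcpServers` if it is missing and the assignment only replaces the "bloodhound" key.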
Usage
With the integration running, you can query in natural language. A few practical examples:
Domain overview:
"What domains exist and how many users, computers and groups are in each?"
Kerberoasting:
"Show all Kerberoastable accounts with a path to Domain Admins"
AS-REP Roasting:
"Which accounts have pre-authentication disabled?"
ACL abuse:
"Are there any users with GenericAll or WriteDACL over privileged accounts?"
DCSync:
"Which accounts outside of Domain Admins have GetChangesAll rights against the domain?"
Shortest attack path:
"What is the shortest path from john.doe to Enterprise Admins?"
In the background, Claude constructs Cypher queries and runs them through BloodHound's API against the Neo4j backend, then returns the results in readable form. You can chain follow-up questions based on previous answers, which makes iterating through an analysis straightforward.
Security Considerations
Run this against a local BloodHound instance with test data or in an isolated lab environment. API keys should not be version-controlled — use .env files and add them to .gitignore. If BloodHound CE is exposed on a network interface, restrict port 8080 with firewall rules so it's only accessible locally.
Summary
The setup works as expected. It reduces time spent in the analysis phase, particularly in larger environments where building a complete picture otherwise requires a significant number of manual queries. Credit again to Rio Darmawan for the original project and write-up.