Local AI (Ollama)¶
ThreatDeflect integrates with Ollama to generate executive summaries and risk analyses using AI models running locally on your machine.
**Full privacy:** No data is sent to cloud services. Everything runs on your machine.
Installing Ollama¶
Download the installer from ollama.com
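On macOS and Windows, run the graphical installer downloaded from ollama.com. On Linux, the official one-line install script can be used instead:

```shell
# Official Linux install script from ollama.com
# (also sets up a systemd service on most distributions)
curl -fsSL https://ollama.com/install.sh | sh
```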
Verifying the installation¶
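If the installation succeeded, the `ollama` binary should be on your `PATH`:

```shell
# Print the installed version; any output confirms the CLI is available
ollama --version
```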
Downloading a model¶
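Models are pulled by name with `ollama pull`; for example, the model used in most examples on this page:

```shell
# Download the 8B-parameter Llama 3 model (~4.7 GB)
ollama pull llama3:8b

# List the models available locally
ollama list
```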
Recommended models¶
| Model | Size | Recommended use |
|---|---|---|
| `llama3:8b` | ~4.7 GB | General use, good speed/quality ratio |
| `gpt-oss:20b` | ~12 GB | More detailed summaries, requires more RAM |
| `mistral` | ~4.1 GB | Fast, good for machines with fewer resources |
| `llama3:70b` | ~40 GB | Best quality, requires a powerful GPU |
Using in CLI¶
Add the `--ai` flag followed by the model name:
```shell
# IOC analysis with AI summary
threatdeflect ioc 8.8.8.8 185.172.128.150 --ai llama3

# Repository analysis with AI
threatdeflect repo https://github.com/org/repo --ai gpt-oss:20b

# File analysis with AI
threatdeflect file suspicious.exe --ai mistral
```
The summary is automatically included in the generated Excel report.
Using in GUI¶
- Perform an analysis (IOC, Repository, or File)
- Select the model from the **AI Model** list at the bottom
- Click **Generate Text Summary** or **Generate PDF Summary**
Endpoint configuration¶
By default, ThreatDeflect sends requests to Ollama's generation endpoint at http://localhost:11434/api/generate (the Ollama server itself listens on port 11434).
To change it:
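On the Ollama side, the address the server listens on is controlled by the `OLLAMA_HOST` environment variable. A sketch, assuming you want the server on a non-default port (the port ThreatDeflect then calls must be updated to match):

```shell
# Make the Ollama server listen on all interfaces, port 11500.
# OLLAMA_HOST is read by `ollama serve`, not by ThreatDeflect.
OLLAMA_HOST=0.0.0.0:11500 ollama serve
```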
Troubleshooting¶
"AI did not return a response"¶
Check if the model is responding:
```shell
curl http://localhost:11434/api/generate -d '{"model":"llama3:8b","prompt":"test","stream":false}' | jq .response
```
If it returns `null`, the model may be corrupted:
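One fix is to remove the model and pull a fresh copy (shown for `llama3:8b`):

```shell
# Delete the local copy, then re-download it
ollama rm llama3:8b
ollama pull llama3:8b
```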
Ollama is not running¶
Make sure the Ollama application is open, or start the service from a terminal:
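A quick check and restart, assuming the default port and the `ollama` systemd unit created by the Linux installer:

```shell
# The root endpoint answers with a short status message when the server is up
curl http://localhost:11434

# On systemd-based Linux, check and restart the service
systemctl status ollama
sudo systemctl restart ollama

# Or run the server in the foreground from any terminal
ollama serve
```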