Configure risk_definitions.yml
Customise which libraries map to which risk tier for your project.
The Scanner ships with a default risk_definitions.yml covering the most common AI/ML libraries. To customise, drop your own file at .argus/risk_definitions.yml in your repo root.
File format
```yaml
high:
  - openai
  - anthropic
  - cohere
  - google.generativeai
  - mistralai
  - ollama
limited:
  - langchain
  - llama_index
  - transformers
  - sentence_transformers
  - tensorflow
  - torch
minimal:
  - sklearn
  - numpy
  - pandas
  - scipy
```
The Scanner walks every Python file’s AST and matches imports against the lists. The first match wins; libraries not in any list don’t surface as findings.
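The import-matching behaviour described above can be sketched in a few lines of Python. This is a simplified stand-in, not the Scanner's actual code: `RISK_TIERS` mirrors a subset of the default definitions, and `classify_imports` is a hypothetical name; it illustrates walking an AST, matching each import against the tier lists in order, and ignoring libraries that appear in no list.

```python
import ast

# Hypothetical tier map mirroring part of the default risk_definitions.yml.
# Tiers are checked in order, so the first tier containing a match wins.
RISK_TIERS = {
    "high": ["openai", "anthropic", "google.generativeai"],
    "limited": ["langchain", "transformers"],
    "minimal": ["sklearn", "numpy"],
}

def classify_imports(source: str) -> dict:
    """Map each imported module in `source` to its risk tier."""
    findings = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            modules = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            modules = [node.module]
        else:
            continue
        for module in modules:
            if module in findings:
                continue  # already classified
            for tier, libs in RISK_TIERS.items():
                # Match the module itself or any of its submodules.
                if any(module == lib or module.startswith(lib + ".")
                       for lib in libs):
                    findings[module] = tier  # first match wins
                    break
    return findings
```

For example, `classify_imports("import openai\nimport numpy\nimport requests")` would report `openai` as high and `numpy` as minimal, while `requests`, absent from every list, produces no finding.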
Common customisations
Add an internal library to a tier:
```yaml
high:
  - openai
  - your_internal_genai_wrapper
```
Move a library between tiers (e.g. if you treat transformers as high-risk):
```yaml
high:
  - openai
  - transformers  # moved from limited
limited:
  # transformers removed
  - langchain
```
Exclude a library entirely: simply leave it out of all three lists.
Validation
The Scanner validates risk_definitions.yml on each run. If the file is malformed, the Scanner falls back to the default definitions for that scan and includes the parse error in the PR comment.
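A minimal sketch of this validate-and-fall-back behaviour, assuming PyYAML is available; the names `DEFAULT_TIERS` and `load_risk_definitions` are hypothetical, and the real Scanner's internals may differ:

```python
import yaml  # PyYAML, assumed here for illustration

# Stand-in for the shipped defaults.
DEFAULT_TIERS = {"high": ["openai"], "limited": ["langchain"], "minimal": ["numpy"]}

def load_risk_definitions(path=".argus/risk_definitions.yml"):
    """Return (tiers, error). On a parse error, fall back to the defaults
    and return the error message so it can be surfaced in the PR comment."""
    try:
        with open(path) as fh:
            data = yaml.safe_load(fh)
        if not isinstance(data, dict):
            raise ValueError("expected a mapping of tier -> list of libraries")
        return data, None
    except FileNotFoundError:
        return DEFAULT_TIERS, None  # no custom file: defaults, no error
    except (yaml.YAMLError, ValueError) as exc:
        return DEFAULT_TIERS, str(exc)  # malformed file: defaults + error
```

Note that a missing file is not an error (the defaults simply apply), whereas a malformed file applies the defaults and reports why.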