# Configuration

Configure drep with `config.yaml` to customize platform adapters, LLM settings, rate limiting, caching, and more.
## Configuration File

drep automatically discovers your configuration file in this order:

1. Explicit path: `--config /path/to/config.yaml` (if provided)
2. Environment variable: `DREP_CONFIG`
3. Project-specific: `./config.yaml` (current directory)
4. User config (platform-dependent):
    - macOS: `~/Library/Application Support/drep/config.yaml`
    - Linux: `~/.config/drep/config.yaml`

This means you can run `drep scan owner/repo` without specifying `--config` - drep will automatically find your configuration!
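For example, each of these invocations resolves a configuration (a sketch; the paths are illustrative):

```bash
# 1. An explicit path takes precedence
drep scan owner/repo --config /path/to/config.yaml

# 2. Otherwise, DREP_CONFIG is consulted
DREP_CONFIG=/path/to/config.yaml drep scan owner/repo

# 3. Otherwise, ./config.yaml, then the per-user config directory
drep scan owner/repo
```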
## Quickest Way to Configure

Generate a starter configuration with:

```bash
drep init
```

The sections below explain all available configuration options if you need to customize beyond the defaults.
## Platform Configuration

Configure at least one platform adapter (Gitea, GitHub, or GitLab). You can enable multiple platforms simultaneously; a combined sketch follows the per-platform sections below.
### Gitea

```yaml
gitea:
  url: http://localhost:3000
  token: ${GITEA_TOKEN}
  repositories:
    - owner/*            # All repos for this owner
    - org/specific-repo  # Specific repository
```
### GitHub

```yaml
github:
  token: ${GITHUB_TOKEN}
  repositories:
    - owner/repo
    - organization/*
  url: https://api.github.com  # Or GitHub Enterprise URL
```
### GitLab

```yaml
gitlab:
  url: https://gitlab.com  # Or your self-hosted GitLab URL
  token: ${GITLAB_TOKEN}
  repositories:
    - owner/*               # All projects for this owner
    - org/specific-project  # Specific project
```
GitLab is fully supported, on par with the Gitea and GitHub adapters.
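Since adapters can be enabled side by side, here is a minimal combined sketch (owners and repository names are placeholders; every key appears in the per-platform examples above):

```yaml
gitea:
  url: http://localhost:3000
  token: ${GITEA_TOKEN}
  repositories:
    - your-org/*

github:
  token: ${GITHUB_TOKEN}
  repositories:
    - owner/repo

gitlab:
  url: https://gitlab.com
  token: ${GITLAB_TOKEN}
  repositories:
    - owner/*
```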
## LLM Settings

Configure your LLM backend: local models (LM Studio, Ollama, llama.cpp, vLLM) or AWS Bedrock (Claude 4.5 with Zero Data Retention).

### Local LLM Configuration

For local backends (LM Studio, Ollama, llama.cpp):
```yaml
llm:
  enabled: true
  endpoint: http://localhost:1234/v1  # LM Studio
  model: qwen3-30b-a3b
  temperature: 0.2
  timeout: 120
  max_tokens: 8000
  max_retries: 3
  retry_delay: 2
  max_concurrent_global: 5
  max_concurrent_per_repo: 3
  requests_per_minute: 60
  max_tokens_per_minute: 80000
  cache:
    enabled: true
    directory: ~/.cache/drep/llm
    ttl_days: 30
    max_size_gb: 10
```
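The `endpoint` above is LM Studio's default. The other local backends named earlier also speak the OpenAI-compatible protocol; the ports below are those tools' out-of-the-box defaults, not values from the drep docs, so adjust to your setup:

```yaml
llm:
  enabled: true
  # Pick the endpoint that matches your backend:
  endpoint: http://localhost:11434/v1   # Ollama (default port 11434)
  # endpoint: http://localhost:8080/v1  # llama.cpp server (default port 8080)
  # endpoint: http://localhost:8000/v1  # vLLM OpenAI server (default port 8000)
  model: qwen3-30b-a3b  # Use the model name your backend actually serves
```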
### LLM Options Explained

| Option | Description | Default |
|---|---|---|
| `endpoint` | OpenAI-compatible API endpoint | `http://localhost:1234/v1` |
| `model` | Model name to use | `qwen3-30b-a3b` |
| `temperature` | Randomness (0.0-2.0; lower = more deterministic) | `0.2` |
| `timeout` | Request timeout in seconds | `120` |
| `max_tokens` | Maximum tokens in a response | `8000` |
| `max_retries` | Retries for failed requests | `3` |
| `retry_delay` | Delay between retries (seconds) | `2` |
| `max_concurrent_global` | Max parallel requests across all repos | `5` |
| `max_concurrent_per_repo` | Max parallel requests per repository | `3` |
| `requests_per_minute` | Rate limit (requests/minute) | `60` |
| `max_tokens_per_minute` | Token budget per minute | `80000` |
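If your backend can only serve one request at a time (common for a single local llama.cpp or Ollama instance), it is often worth dialing concurrency and rate limits down. A sketch using only the options above; the values are illustrative, not recommendations from the drep docs:

```yaml
llm:
  max_concurrent_global: 1   # One request in flight at a time
  max_concurrent_per_repo: 1
  requests_per_minute: 20    # Stay well under what the backend can absorb
  timeout: 300               # Local models can be slow; allow longer requests
```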
### AWS Bedrock Configuration

For AWS Bedrock (Claude 4.5 with Zero Data Retention):

```yaml
llm:
  enabled: true
  provider: bedrock  # Required for Bedrock
  bedrock:
    region: us-east-1
    model: anthropic.claude-sonnet-4-5-20250929-v1:0

  # General LLM settings
  temperature: 0.2
  max_tokens: 4000

  # Rate limiting (lower for Bedrock)
  max_concurrent_global: 3
  requests_per_minute: 30

  # Cache configuration
  cache:
    enabled: true
    ttl_days: 30
```
**AWS Credentials:** Bedrock authenticates with your AWS credentials rather than an API token in `config.yaml`.
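Assuming drep follows the standard AWS credential chain (the docs don't spell this out here), any of the usual setups should work:

```bash
# Option 1: environment variables
export AWS_ACCESS_KEY_ID="AKIA..."
export AWS_SECRET_ACCESS_KEY="..."
export AWS_DEFAULT_REGION="us-east-1"

# Option 2: the shared credentials file (~/.aws/credentials)
aws configure

# Option 3: on EC2/ECS/EKS, attach an IAM role that allows bedrock:InvokeModel
```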
**Supported Bedrock Models:**

- `anthropic.claude-sonnet-4-5-20250929-v1:0` - Claude Sonnet 4.5 (recommended)
- `anthropic.claude-haiku-4-5-20251001-v1:0` - Claude Haiku 4.5 (fast & cost-effective)
**Enterprise Benefits:**
- Zero Data Retention - Your data is never stored or used for training
- Data Sovereignty - Data stays in your AWS region and account
- Compliance - Meets HIPAA, GDPR, SOC 2 requirements
### Cache Settings

drep caches LLM responses to improve performance and reduce costs. Typical cache hit rates are 80%+.
| Option | Description | Default |
|---|---|---|
| `enabled` | Enable response caching | `true` |
| `directory` | Cache storage location | `~/.cache/drep/llm` |
| `ttl_days` | Cache entry lifetime (days) | `30` |
| `max_size_gb` | Maximum cache size (GB) | `10` |
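The cache lives on disk at the configured `directory`, so ordinary shell tools can inspect or clear it (generic commands, not drep subcommands):

```bash
# How much space is the cache using?
du -sh ~/.cache/drep/llm

# Clear it entirely; drep rebuilds it on subsequent scans
rm -rf ~/.cache/drep/llm
```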
## Documentation Settings

```yaml
documentation:
  enabled: true
  custom_dictionary:
    - drep
    - Gitea
    - PyPI
  markdown_checks: false  # Optional: lightweight linting
```

**custom_dictionary:** Add project-specific terms to skip spell-checking (e.g., product names, technical jargon).

**markdown_checks:** Enable non-LLM markdown linting (trailing whitespace, broken links, malformed headings). Use `drep lint-docs` for standalone linting.
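For example, to run the linter on its own (the bare command comes from the sentence above; any flags it takes aren't covered here):

```bash
drep lint-docs
```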
## Database Configuration

drep uses SQLite by default for tracking scans and caching.

```yaml
database_url: sqlite:///./drep.db
```

For production deployments, you can use PostgreSQL or other SQLAlchemy-supported databases:

```yaml
database_url: postgresql://user:password@localhost/drep
```
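One caveat: SQLAlchemy selects a database driver from the URL scheme, and the PostgreSQL driver typically has to be installed separately (assuming drep doesn't bundle one):

```bash
# psycopg2 is SQLAlchemy's default PostgreSQL driver
pip install psycopg2-binary
```

With the driver installed, the plain `postgresql://` scheme above resolves to it automatically; `postgresql+psycopg2://user:password@localhost/drep` makes the choice explicit.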
## Environment Variables

Use environment variables for sensitive values like API tokens:

```bash
# Platform tokens (recommended over hardcoding)
export GITEA_TOKEN="your-token"
export GITHUB_TOKEN="your-token"
export GITLAB_TOKEN="your-token"

# Override config file location
export DREP_CONFIG="/path/to/config.yaml"

# Override LLM endpoint
export DREP_LLM_ENDPOINT="http://localhost:11434"
```
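These can also be set per invocation rather than exported; for example:

```bash
# One-off LLM endpoint override for a single scan
# (Ollama's default port, matching the example above)
DREP_LLM_ENDPOINT="http://localhost:11434" drep scan owner/repo
```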
**Security Best Practice:** Reference tokens through environment variables (`${GITEA_TOKEN}`-style placeholders, as in the examples above) instead of hardcoding them, and keep any file that contains real secrets out of version control.
## Complete Configuration Example

Here's a fully annotated `config.yaml` with all available options:

```yaml
# Platform configuration
gitea:
  url: http://localhost:3000
  token: ${GITEA_TOKEN}
  repositories:
    - your-org/*

github:
  token: ${GITHUB_TOKEN}
  repositories:
    - owner/repo
    - organization/*
  url: https://api.github.com

# Documentation settings
documentation:
  enabled: true
  custom_dictionary:
    - drep
    - Gitea
    - PyPI
  markdown_checks: false

# Database
database_url: sqlite:///./drep.db

# LLM configuration
llm:
  enabled: true
  endpoint: http://localhost:1234/v1
  model: qwen3-30b-a3b
  temperature: 0.2
  timeout: 120
  max_tokens: 8000
  max_retries: 3
  retry_delay: 2
  max_concurrent_global: 5
  max_concurrent_per_repo: 3
  requests_per_minute: 60
  max_tokens_per_minute: 80000
  cache:
    enabled: true
    directory: ~/.cache/drep/llm
    ttl_days: 30
    max_size_gb: 10
```
## Next Steps
- Set up your LLM backend with model recommendations and troubleshooting
- View examples of webhook setup, CI/CD integration, and Docker deployment
- Explore the CLI for all available commands and options