Configuration

Configure drep with config.yaml to customize platform adapters, LLM settings, rate limiting, caching, and more.

Configuration File

drep automatically discovers your configuration file in this order:

  1. Explicit path: --config /path/to/config.yaml (if provided)
  2. Environment variable: path set in DREP_CONFIG
  3. Project-specific: ./config.yaml (current directory)
  4. User config: Platform-dependent:
    • macOS: ~/Library/Application Support/drep/config.yaml
    • Linux: ~/.config/drep/config.yaml

This means you can run drep scan owner/repo without specifying --config; drep will find your configuration automatically.
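
For example, on Linux you can place the file in the user config directory once and stop passing --config entirely:

bash
mkdir -p ~/.config/drep
cp config.yaml ~/.config/drep/config.yaml
drep scan owner/repo  # finds ~/.config/drep/config.yaml automatically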

Quickest Way to Configure

Run drep init to launch the interactive setup wizard. It will prompt you to choose your platform (GitHub, Gitea, GitLab) and generate a complete config.yaml with correct settings.

bash
drep init

The sections below explain all available configuration options if you need to customize beyond the defaults.

Platform Configuration

Configure at least one platform adapter (Gitea, GitHub, or GitLab). You can enable multiple platforms simultaneously.

Gitea

yaml
gitea:
  url: http://localhost:3000
  token: ${GITEA_TOKEN}
  repositories:
    - owner/*  # All repos for this owner
    - org/specific-repo  # Specific repository

GitHub

yaml
github:
  token: ${GITHUB_TOKEN}
  repositories:
    - owner/repo
    - organization/*
  url: https://api.github.com  # Or GitHub Enterprise URL

GitLab

yaml
gitlab:
  url: https://gitlab.com  # Or your self-hosted GitLab URL
  token: ${GITLAB_TOKEN}
  repositories:
    - owner/*  # All projects for this owner
    - org/specific-project  # Specific project

GitLab Fully Supported

GitLab support is production-ready as of v1.0.0. Works with both GitLab.com and self-hosted instances. See CHANGELOG for details.

LLM Settings

Configure your LLM backend: local models (LM Studio, Ollama, llama.cpp, vLLM) or AWS Bedrock (Claude 4.5 with ZDR).

Local LLM Configuration

For local backends (LM Studio, Ollama, llama.cpp):

yaml
llm:
  enabled: true
  endpoint: http://localhost:1234/v1  # LM Studio
  model: qwen3-30b-a3b
  temperature: 0.2
  timeout: 120
  max_tokens: 8000
  max_retries: 3
  retry_delay: 2
  max_concurrent_global: 5
  max_concurrent_per_repo: 3
  requests_per_minute: 60
  max_tokens_per_minute: 80000
  cache:
    enabled: true
    directory: ~/.cache/drep/llm
    ttl_days: 30
    max_size_gb: 10
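
If you run Ollama instead of LM Studio, only the endpoint changes. This is a minimal sketch assuming Ollama's OpenAI-compatible API on its default port and a model you have already pulled:

yaml
llm:
  enabled: true
  endpoint: http://localhost:11434/v1  # Ollama's OpenAI-compatible endpoint
  model: qwen3-30b-a3b  # substitute any model you have pulled locally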

LLM Options Explained

| Option | Description | Default |
| --- | --- | --- |
| endpoint | OpenAI-compatible API endpoint | http://localhost:1234/v1 |
| model | Model name to use | qwen3-30b-a3b |
| temperature | Randomness (0.0-2.0; lower = more deterministic) | 0.2 |
| timeout | Request timeout in seconds | 120 |
| max_tokens | Maximum tokens in response | 8000 |
| max_retries | Retries for failed requests | 3 |
| retry_delay | Delay between retries (seconds) | 2 |
| max_concurrent_global | Max parallel requests across all repos | 5 |
| max_concurrent_per_repo | Max parallel requests per repository | 3 |
| requests_per_minute | Rate limit (requests/minute) | 60 |
| max_tokens_per_minute | Token budget per minute | 80000 |
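
The limits apply together, so in practice the tightest one governs throughput. For a slower local model you might lower concurrency and the per-minute budgets; the values below are illustrative, not recommendations:

yaml
llm:
  max_concurrent_global: 2
  max_concurrent_per_repo: 1
  requests_per_minute: 20
  max_tokens_per_minute: 30000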

AWS Bedrock Configuration

For AWS Bedrock (Claude 4.5 with Zero Data Retention):

yaml
llm:
  enabled: true
  provider: bedrock  # Required for Bedrock

  bedrock:
    region: us-east-1
    model: anthropic.claude-sonnet-4-5-20250929-v1:0

  # General LLM settings
  temperature: 0.2
  max_tokens: 4000

  # Rate limiting (lower for Bedrock)
  max_concurrent_global: 3
  requests_per_minute: 30

  # Cache configuration
  cache:
    enabled: true
    ttl_days: 30

AWS Credentials

Bedrock uses the standard AWS credentials chain; no credentials are stored in the config file. Configure them via aws configure, environment variables, or IAM roles. See the LLM Setup guide for details.
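
For example, using the standard AWS environment variables (placeholders shown; aws configure or an IAM role works equally well):

bash
export AWS_ACCESS_KEY_ID="your-access-key-id"
export AWS_SECRET_ACCESS_KEY="your-secret-access-key"
export AWS_DEFAULT_REGION="us-east-1"  # matches the region in the config above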

The LLM Setup guide also lists the supported Bedrock models and summarizes the enterprise benefits, such as Zero Data Retention.

Cache Settings

drep caches LLM responses to improve performance and reduce costs. Typical cache hit rates are 80%+.

| Option | Description | Default |
| --- | --- | --- |
| enabled | Enable response caching | true |
| directory | Cache storage location | ~/.cache/drep/llm |
| ttl_days | Cache entry lifetime (days) | 30 |
| max_size_gb | Maximum cache size (GB) | 10 |
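
To turn caching off entirely, for example while comparing output between models, set enabled to false:

yaml
llm:
  cache:
    enabled: false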

Documentation Settings

yaml
documentation:
  enabled: true
  custom_dictionary:
    - drep
    - Gitea
    - PyPI
  markdown_checks: false  # Optional: lightweight linting

custom_dictionary: Add project-specific terms to skip spell-checking (e.g., product names, technical jargon).

markdown_checks: Enable non-LLM markdown linting (trailing whitespace, broken links, malformed headings). Use drep lint-docs for standalone linting.
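
For example, to run the lightweight linter on its own, with no LLM involved:

bash
drep lint-docs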

Database Configuration

drep uses SQLite by default for tracking scans and caching.

yaml
database_url: sqlite:///./drep.db

For production deployments, you can use PostgreSQL or other SQLAlchemy-supported databases:

yaml
database_url: postgresql://user:password@localhost/drep
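
If drep's ${VARIABLE_NAME} substitution also applies to database_url (an assumption worth verifying for your version; this page documents it only for tokens), you can keep the password out of the file:

yaml
database_url: postgresql://drep:${DREP_DB_PASSWORD}@localhost/drep  # assumes env substitution works for this key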

Environment Variables

Use environment variables for sensitive values like API tokens:

bash
# Platform tokens (recommended over hardcoding)
export GITEA_TOKEN="your-token"
export GITHUB_TOKEN="your-token"
export GITLAB_TOKEN="your-token"

# Override config file location
export DREP_CONFIG="/path/to/config.yaml"

# Override LLM endpoint
export DREP_LLM_ENDPOINT="http://localhost:11434"

Security Best Practice

Never commit tokens to version control. Use environment variables with ${VARIABLE_NAME} syntax in config.yaml, or store tokens in a .env file that's gitignored.
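
A minimal gitignored .env might look like this; load it with your shell (source .env) or a tool such as direnv before running drep:

bash
# .env: keep this file out of version control
export GITEA_TOKEN="your-gitea-token"
export GITHUB_TOKEN="your-github-token"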

Complete Configuration Example

Here's a fully annotated config.yaml with all available options:

yaml
# Platform configuration
gitea:
  url: http://localhost:3000
  token: ${GITEA_TOKEN}
  repositories:
    - your-org/*

github:
  token: ${GITHUB_TOKEN}
  repositories:
    - owner/repo
    - organization/*
  url: https://api.github.com

# Documentation settings
documentation:
  enabled: true
  custom_dictionary:
    - drep
    - Gitea
    - PyPI
  markdown_checks: false

# Database
database_url: sqlite:///./drep.db

# LLM configuration
llm:
  enabled: true
  endpoint: http://localhost:1234/v1
  model: qwen3-30b-a3b
  temperature: 0.2
  timeout: 120
  max_tokens: 8000
  max_retries: 3
  retry_delay: 2
  max_concurrent_global: 5
  max_concurrent_per_repo: 3
  requests_per_minute: 60
  max_tokens_per_minute: 80000
  cache:
    enabled: true
    directory: ~/.cache/drep/llm
    ttl_days: 30
    max_size_gb: 10

Next Steps