
Development Roadmap

Building the future of AI-powered code review. See what's been completed, what's in progress, and what's coming next.

Current Version: v1.1.0 | Last Updated: November 9, 2025

Phase 3: Platform & LLM Expansion

Status: Complete (November 2025) | Tests: 795 passing

Major Milestone Achieved

drep now supports all three major git platforms (Gitea, GitHub, GitLab) with comprehensive LLM backend options (local models and AWS Bedrock). An interactive setup wizard makes configuration straightforward.

GitHub Adapter

v0.8.2 | Full REST API v3 integration with GitHub Enterprise support. All 8 BaseAdapter methods implemented. 64 comprehensive tests ensure reliability for automated PR reviews.
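
To illustrate the adapter pattern this implements, here is a minimal sketch of a BaseAdapter contract and a GitHub implementation. The method names (get_pull_request, post_review_comment) and class layout are assumptions for illustration only, not drep's actual interface.

```python
from abc import ABC, abstractmethod

import requests


class BaseAdapter(ABC):
    """Illustrative subset of a platform adapter contract (method names assumed)."""

    @abstractmethod
    def get_pull_request(self, repo: str, number: int) -> dict:
        """Fetch pull/merge request metadata."""

    @abstractmethod
    def post_review_comment(self, repo: str, number: int, body: str) -> None:
        """Publish a review comment on the pull/merge request."""


class GitHubAdapter(BaseAdapter):
    """Minimal GitHub REST API v3 sketch; GitHub Enterprise is handled via base_url."""

    def __init__(self, token: str, base_url: str = "https://api.github.com"):
        self.base_url = base_url.rstrip("/")
        self.session = requests.Session()
        self.session.headers.update({
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        })

    def get_pull_request(self, repo: str, number: int) -> dict:
        resp = self.session.get(f"{self.base_url}/repos/{repo}/pulls/{number}")
        resp.raise_for_status()
        return resp.json()

    def post_review_comment(self, repo: str, number: int, body: str) -> None:
        # PR-level comments are posted via the issues endpoint on GitHub.
        resp = self.session.post(
            f"{self.base_url}/repos/{repo}/issues/{number}/comments",
            json={"body": body},
        )
        resp.raise_for_status()
```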

AWS Bedrock Provider

v0.8.0 | Enterprise-grade access to Claude 4.5 via AWS Bedrock with Zero Data Retention; HIPAA, GDPR, and SOC 2 compliant. Supports the Sonnet 4.5 and Haiku 4.5 models and authenticates via the standard AWS credential provider chain.
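
For reference, a minimal sketch of calling Claude on Bedrock through boto3's Converse API. The model ID, region, and prompt are placeholders, and this is not drep's internal provider code:

```python
import boto3

# Model ID is a placeholder; check the Bedrock console for the exact
# Claude Sonnet 4.5 / Haiku 4.5 identifiers available in your region.
MODEL_ID = "anthropic.claude-sonnet-4-5"

# boto3 resolves credentials via the standard chain
# (environment variables, shared config, instance/role credentials).
client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId=MODEL_ID,
    messages=[
        {
            "role": "user",
            "content": [{"text": "Review this diff for security issues:\n..."}],
        }
    ],
    inferenceConfig={"maxTokens": 1024, "temperature": 0.0},
)

print(response["output"]["message"]["content"][0]["text"])
```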

GitLab Adapter

v1.0.0 | Production-ready GitLab REST API v4 integration. Supports both GitLab.com and self-hosted instances. 93 comprehensive tests and robust error handling.
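
The REST API v4 surface the adapter talks to looks roughly like this minimal sketch (project ID, merge request IID, and token are placeholders; drep's adapter wraps these calls rather than exposing them directly):

```python
import requests

GITLAB_URL = "https://gitlab.com"          # or a self-hosted instance URL
PROJECT_ID = "group%2Fproject"             # URL-encoded "group/project" path or numeric ID
MR_IID = 42                                # merge request IID (placeholder)

session = requests.Session()
session.headers["PRIVATE-TOKEN"] = "glpat-..."  # personal access token (placeholder)

# Fetch merge request metadata (GET /projects/:id/merge_requests/:iid).
mr = session.get(
    f"{GITLAB_URL}/api/v4/projects/{PROJECT_ID}/merge_requests/{MR_IID}"
)
mr.raise_for_status()

# Post a review note (POST /projects/:id/merge_requests/:iid/notes).
note = session.post(
    f"{GITLAB_URL}/api/v4/projects/{PROJECT_ID}/merge_requests/{MR_IID}/notes",
    json={"body": "Automated review: no critical findings."},
)
note.raise_for_status()
```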

Interactive Configuration Wizard

v1.1.0 | Guided drep init setup eliminates manual YAML editing. Real-time input validation, 7 strongly-typed configuration models, and environment variable placeholders prevent configuration errors.
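
The environment variable placeholder idea can be sketched with a pydantic-style model. The field names, validator, and ${VAR} syntax below are illustrative assumptions, not drep's actual configuration schema:

```python
import os
import re

from pydantic import BaseModel, field_validator


class PlatformConfig(BaseModel):
    """Illustrative typed config model; field names are assumptions."""

    kind: str       # "gitea" | "github" | "gitlab"
    base_url: str
    token: str      # typically written by the wizard as "${GITHUB_TOKEN}"

    @field_validator("token")
    @classmethod
    def resolve_env_placeholder(cls, value: str) -> str:
        # Expand "${VAR}" placeholders so secrets never live in the YAML file itself.
        match = re.fullmatch(r"\$\{(\w+)\}", value)
        if match:
            resolved = os.environ.get(match.group(1))
            if resolved is None:
                raise ValueError(f"environment variable {match.group(1)} is not set")
            return resolved
        return value
```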

Phase 4: Feature Expansion

Status: Up Next | Duration: 4 sprints (estimated)

Want to Contribute?

Phase 4.1 (Anthropic Direct Provider) is the first priority and requires only an estimated 3-4 hours of work. Perfect for contributors familiar with API integrations. See the full roadmap for details.

Anthropic Direct Provider

Effort: Small (3-4 hours) | Impact: High
Direct Anthropic API integration for lower latency and higher rate limits. Simpler setup than Bedrock, with access to the latest Claude models (Sonnet 4.5, Haiku 4.5, Opus 4.1).

Multi-Language Support

Effort: Large | Impact: High
Extend support beyond Python to JavaScript, TypeScript, Go, Rust, and Java. Language-specific analyzers with ESLint/typescript-eslint, go vet, and clippy integration.

Web UI Dashboard

Effort: Large | Impact: Medium
Interactive dashboard for viewing findings and metrics. Browse scan history, visualize metrics over time, and configure settings via web interface.

Completed Phases

Phase 1: Quick Wins

Status: Complete (2025-11-07) | Duration: 2 sprints

Security audit, BaseAdapter abstract class, configuration constants, and enhanced markdown linting. 22 new tests added to cover these improvements.

Phase 2: Quality & Testing

Status: Complete (2025-11-07) | Duration: 2 sprints

End-to-end integration tests, API documentation with Sphinx, dependency injection for LLMClient, and deprecation of legacy metrics. 18 new tests added following strict TDD methodology.
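
Dependency injection here means the reviewer receives its LLM client rather than constructing one, which is what makes the integration tests cheap and deterministic. A minimal sketch with assumed names (LLMClient, Reviewer, complete), not drep's actual API:

```python
from typing import Protocol


class LLMClient(Protocol):
    """Illustrative interface; drep's actual LLMClient API may differ."""

    def complete(self, prompt: str) -> str: ...


class Reviewer:
    """Accepts any LLMClient implementation, so tests can inject a stub."""

    def __init__(self, llm: LLMClient):
        self.llm = llm

    def review(self, diff: str) -> str:
        return self.llm.complete(f"Review this diff:\n{diff}")


class FakeLLM:
    """Deterministic stand-in used in tests instead of a real provider."""

    def complete(self, prompt: str) -> str:
        return "LGTM (stubbed response)"


def test_review_uses_injected_client():
    assert "LGTM" in Reviewer(FakeLLM()).review("+ print('hi')")
```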

Success Metrics

Code Quality

  • Zero critical security issues
  • Test coverage target: 90%+ (current: ~85%)
  • All functions under 50 lines (refactoring goal)
  • Consistent naming conventions

Performance

  • Cache hit rate: 80%+ achieved
  • Average scan time: under 5 minutes for 1000-file repo
  • LLM cost per scan: under $5 with caching

Feature Completeness

  • 3 platform adapters: Gitea, GitHub, GitLab
  • Multi-language support (Python complete, others planned)
  • Web UI dashboard (planned)

Adoption Goals

  • 100+ GitHub stars
  • 10+ external contributors
  • 1000+ PyPI downloads per month

Ready to Contribute?

Help shape the future of drep. Check out the full roadmap on GitHub and find an issue that matches your skills.

View Full Roadmap | Browse Issues