Documentation & Help
Learn how to configure RAG, connect external agents via MCP, and optimize your PRAI workflow.
Retrieval-Augmented Generation (RAG)
How it works
PRAI indexes your entire codebase into a vector database (LanceDB) to provide deep context during security audits.
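The retrieval step can be pictured as embedding chunks and ranking them by cosine similarity. This is a toy sketch, not PRAI's actual pipeline: the embed() below is a letter-frequency stand-in for a real embedding model, and the Chunk shape is illustrative.

```typescript
// Toy sketch of vector retrieval: embed chunks, rank by cosine similarity.
type Chunk = { file: string; text: string; vector: number[] };

function embed(text: string): number[] {
  // Stand-in embedding: 26-dim letter-frequency vector.
  // A real system uses a model (e.g. via Transformers.js).
  const v = new Array(26).fill(0);
  for (const ch of text.toLowerCase()) {
    const i = ch.charCodeAt(0) - 97;
    if (i >= 0 && i < 26) v[i] += 1;
  }
  return v;
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

function search(index: Chunk[], query: string, k = 3): Chunk[] {
  const q = embed(query);
  return [...index]
    .sort((x, y) => cosine(q, y.vector) - cosine(q, x.vector))
    .slice(0, k);
}

const index: Chunk[] = [
  { file: "auth.ts", text: "validate jwt token signature", vector: [] },
  { file: "db.ts", text: "open sqlite connection pool", vector: [] },
].map((c) => ({ ...c, vector: embed(c.text) }));

console.log(search(index, "token validation", 1)[0].file); // "auth.ts"
```

A vector store such as LanceDB replaces the in-memory array and linear scan with persisted, approximate-nearest-neighbor search.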
Local vs Gemini Embeddings
Local mode embeds code on-device with Transformers.js (free, private), while Gemini mode sends chunks to Google's embedding API for higher retrieval quality.
Incremental Indexing
The system only re-indexes changed files, saving significant time and compute resources.
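One common way to detect "changed" is a content hash per file; this sketch assumes that approach (PRAI's actual change detection may use mtimes or git metadata instead).

```typescript
import { createHash } from "node:crypto";

// Sketch: store a SHA-256 hash per indexed file; on the next run,
// only files whose hash differs need re-embedding.
function sha256(text: string): string {
  return createHash("sha256").update(text).digest("hex");
}

function filesToReindex(
  previous: Map<string, string>, // path -> hash stored at last index time
  current: Map<string, string>,  // path -> current file contents
): string[] {
  const changed: string[] = [];
  for (const [path, contents] of current) {
    if (previous.get(path) !== sha256(contents)) changed.push(path);
  }
  return changed;
}

const prev = new Map([["a.ts", sha256("old")], ["b.ts", sha256("same")]]);
const curr = new Map([["a.ts", "new"], ["b.ts", "same"], ["c.ts", "added"]]);
console.log(filesToReindex(prev, curr)); // [ "a.ts", "c.ts" ]
```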
Model Context Protocol (MCP)
Connecting External Agents
Use the SSE endpoint (http://localhost:3000/mcp/sse) to link PRAI with Cursor, Claude Desktop, or other MCP-compatible AI tools.
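In most MCP clients this is a one-entry config. The snippet below follows the common mcpServers shape; exact field names vary by client and version, so check your tool's MCP documentation.

```json
{
  "mcpServers": {
    "prai": {
      "url": "http://localhost:3000/mcp/sse"
    }
  }
}
```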
Available Tools
Exposes "list_indexed_repositories" and "search_codebase" to external agents.
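Under MCP, an agent invokes a tool with a JSON-RPC 2.0 tools/call request. The message below is a sketch; the argument names ("query", "repository") are illustrative assumptions, not PRAI's documented schema.

```typescript
// Sketch of the JSON-RPC 2.0 message an MCP client sends to call a tool.
// Argument names are hypothetical — inspect the tool's declared input
// schema (returned by tools/list) for the real ones.
const callSearch = {
  jsonrpc: "2.0" as const,
  id: 1,
  method: "tools/call",
  params: {
    name: "search_codebase",
    arguments: { query: "jwt validation", repository: "my-org/my-repo" },
  },
};
console.log(JSON.stringify(callSearch));
```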
Stdio Support
You can also add external MCP servers to PRAI in the Settings menu.
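A stdio server entry typically specifies a command to spawn rather than a URL. This is a generic example; the server name and package here are placeholders, and the exact fields PRAI's Settings menu expects may differ.

```json
{
  "command": "npx",
  "args": ["-y", "your-mcp-server"]
}
```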
GitHub Integration
Webhook Setup
Configure GitHub to send PR events to /api/webhook for automated analysis.
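GitHub signs each delivery with an HMAC-SHA256 of the raw body, sent in the X-Hub-Signature-256 header, and the endpoint should reject payloads that fail verification. A minimal sketch of that check (the server wiring around it is up to your framework):

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Verify GitHub's X-Hub-Signature-256 header before trusting a webhook
// payload. The "sha256=" prefix and HMAC-SHA256 scheme are GitHub's;
// timingSafeEqual avoids leaking information via comparison timing.
function verifySignature(secret: string, payload: string, header: string): boolean {
  const expected =
    "sha256=" + createHmac("sha256", secret).update(payload).digest("hex");
  const a = Buffer.from(expected);
  const b = Buffer.from(header);
  return a.length === b.length && timingSafeEqual(a, b);
}

const secret = "webhook-secret";
const body = JSON.stringify({ action: "opened", number: 42 });
const sig = "sha256=" + createHmac("sha256", secret).update(body).digest("hex");
console.log(verifySignature(secret, body, sig)); // true
```

Compute the HMAC over the raw request bytes, not a re-serialized JSON object, or verification will fail intermittently.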
Manual Submissions
Paste any PR or Commit URL into the "Manual URL" field on the dashboard for instant scanning.
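Distinguishing the two URL kinds and extracting the useful parts can be sketched with two patterns. The Submission type is illustrative; URL shapes follow github.com conventions.

```typescript
// Sketch: classify a pasted URL as a PR or a commit and pull out
// the owner, repo, and PR number / commit SHA.
type Submission =
  | { kind: "pr"; owner: string; repo: string; number: number }
  | { kind: "commit"; owner: string; repo: string; sha: string };

function parseSubmission(url: string): Submission | null {
  const pr = url.match(/github\.com\/([^/]+)\/([^/]+)\/pull\/(\d+)/);
  if (pr) return { kind: "pr", owner: pr[1], repo: pr[2], number: Number(pr[3]) };
  const commit = url.match(/github\.com\/([^/]+)\/([^/]+)\/commit\/([0-9a-f]{7,40})/);
  if (commit) return { kind: "commit", owner: commit[1], repo: commit[2], sha: commit[3] };
  return null; // not a recognized GitHub PR or commit URL
}

console.log(parseSubmission("https://github.com/org/app/pull/17"));
// { kind: "pr", owner: "org", repo: "app", number: 17 }
```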
Authentication
Requires a Personal Access Token (PAT) with "repo" scope to access code diffs.
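Fetching a PR diff from the GitHub REST API uses Bearer auth with the PAT plus the application/vnd.github.diff media type, which returns the raw diff instead of JSON. A sketch of the required headers (read the token from the environment; never hard-code it):

```typescript
// Headers for requesting a raw PR diff from the GitHub REST API.
// Pass these to GET https://api.github.com/repos/{owner}/{repo}/pulls/{n}.
function diffRequestHeaders(token: string): Record<string, string> {
  return {
    Authorization: `Bearer ${token}`,
    Accept: "application/vnd.github.diff", // raw diff instead of JSON
    "X-GitHub-Api-Version": "2022-11-28",
  };
}

console.log(diffRequestHeaders("ghp_example").Accept);
```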
Security & Optimization
API Hardening
Built-in protection against DoS attacks via rate limiting and strict input validation.
Performance
Uses infinite scrolling and virtualized lists to handle thousands of analysis records smoothly.
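The core of list virtualization is rendering only the rows inside the viewport plus a small overscan buffer, so thousands of records cost only a few dozen DOM nodes. A sketch of the index math, assuming fixed-height rows:

```typescript
// Compute which rows of a fixed-row-height list are visible, plus a
// small overscan so fast scrolling doesn't show blank gaps.
function visibleRange(
  scrollTop: number,
  viewportHeight: number,
  rowHeight: number,
  total: number,
  overscan = 3,
): { start: number; end: number } {
  const start = Math.max(0, Math.floor(scrollTop / rowHeight) - overscan);
  const end = Math.min(
    total,
    Math.ceil((scrollTop + viewportHeight) / rowHeight) + overscan,
  );
  return { start, end };
}

console.log(visibleRange(5000, 600, 40, 10000));
// { start: 122, end: 143 } — ~21 rendered rows instead of 10,000
```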
SQL Aggregation
Statistics are calculated on the server using optimized SQL for instant dashboard updates.
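The idea is to ship one GROUP BY query to the database instead of loading every record into the app and counting in JavaScript. A sketch against a hypothetical analyses table (the schema and SQLite-style date function are assumptions, not PRAI's actual schema):

```sql
-- Hypothetical schema: analyses(id, severity, created_at).
-- One round trip returns per-severity counts for the last 30 days.
SELECT severity, COUNT(*) AS total
FROM analyses
WHERE created_at >= DATE('now', '-30 days')
GROUP BY severity
ORDER BY total DESC;
```

An index on created_at (or a composite on created_at, severity) keeps this fast as the table grows.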
Detailed Setup Guide
Comprehensive walkthrough for deployment and configuration.
Best Practices
How to write secure PRs and optimize your analysis results.
Community & Support
Get help from the developers and other PRAI users.
Quick Setup Checklist
Verify that your instance is configured for peak performance and security.