---
title: OpenTriage AI Engine
emoji: 🧠
colorFrom: purple
colorTo: pink
sdk: docker
app_port: 7860
pinned: false
---

# 🧠 OpenTriage AI Engine (Microservice)

This is the AI brain of the OpenTriage platform: a specialized Python microservice that runs as a "sidecar" alongside the main TypeScript backend.

It hosts the heavy-lifting AI logic, including RAG (Retrieval-Augmented Generation), issue triage, and mentor matching, keeping the main application fast and lightweight.

## 🚀 Features

  • πŸ” AI Triage: Automatically classifies GitHub issues by complexity and type.
  • πŸ“š RAG Chatbot: "Chat with Repo" functionality using vector search.
  • 🀝 Mentor Match: Connects contributors to mentors based on tech stack analysis.
  • πŸŽ‰ Hype Generator: Generates celebratory messages for PR merges.

## 🔗 How to Connect

This service exposes a REST API via FastAPI. It is designed to be called by the Main Backend, not directly by users.

### Base URL

Your API is live at: `https://[YOUR_USERNAME]-opentriage-ai-engine.hf.space`

(Check the "Embed this space" menu in the top right to get your exact direct URL.)

### API Endpoints

| Method | Endpoint | Description |
| ------ | -------- | ----------- |
| GET | `/health` | Server health check (returns `200 OK`) |
| POST | `/triage` | Classifies an issue description |
| POST | `/chat` | General AI assistant response |
| POST | `/rag/chat` | Context-aware repository Q&A |
| POST | `/mentor-match` | Finds best-matching mentors |
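For reference, a minimal Python client sketch for calling these endpoints. The payload field names (`title`, `body`) are illustrative assumptions, not a documented schema, and `BASE_URL` is a placeholder:

```python
import json
from urllib import request

# Placeholder: substitute your Space's direct URL.
BASE_URL = "https://YOUR_USERNAME-opentriage-ai-engine.hf.space"

def build_request(path: str, payload: dict) -> request.Request:
    """Build a JSON POST request for one of the endpoints above."""
    return request.Request(
        BASE_URL + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Example: triage an issue (the "title"/"body" fields are hypothetical).
req = build_request(
    "/triage",
    {"title": "App crashes on login", "body": "Stack trace attached"},
)
# response = request.urlopen(req)  # uncomment to call a live Space
```

The same helper works for `/chat`, `/rag/chat`, and `/mentor-match`; only the path and payload change.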

## 🛠 Deployment Configuration

### 1. Environment Variables (Secrets)

You must set these in the Settings tab of this Space under "Variables and secrets":

| Secret Name | Required? | Description |
| ----------- | --------- | ----------- |
| `OPENROUTER_API_KEY` | Yes | Required for all AI generation |
| `MONGO_URL` | Optional | Needed only if your RAG setup uses persistent vector storage |
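The service reads these secrets at startup. A minimal sketch of the lookup, failing fast when the mandatory key is missing (the function name is illustrative, not taken from the codebase):

```python
import os

def load_config() -> dict:
    """Read the Space secrets listed above; raise if the mandatory key is absent."""
    api_key = os.environ.get("OPENROUTER_API_KEY")
    if not api_key:
        raise RuntimeError(
            "OPENROUTER_API_KEY is not set; add it under 'Variables and secrets'"
        )
    return {
        "openrouter_api_key": api_key,
        "mongo_url": os.environ.get("MONGO_URL"),  # optional persistent vector storage
    }
```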

### 2. Docker Configuration

This Space uses a custom Dockerfile to ensure all Python scientific libraries (NumPy, scikit-learn, LangChain) are installed in a compatible environment.

- **Port**: 7860 (standard HF Space port)
- **User**: Runs as the non-root user `user` (UID 1000) for security.
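A Dockerfile matching these constraints typically follows the standard Hugging Face Docker-Space pattern shown below. This is a sketch, assuming `requirements.txt` sits at the repo root and the app object is `main:app` (as in the local-development command); the actual Dockerfile may differ:

```dockerfile
FROM python:3.11-slim

# Non-root user with UID 1000, as expected by HF Spaces
RUN useradd -m -u 1000 user
USER user
ENV PATH="/home/user/.local/bin:$PATH"

WORKDIR /app
COPY --chown=user requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY --chown=user . .

# HF Spaces route traffic to app_port (7860)
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "7860"]
```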

## 💻 Local Development

If you want to run this brain locally:

```bash
# 1. Create and activate a virtual environment
python -m venv venv
source venv/bin/activate

# 2. Install dependencies
pip install -r requirements.txt

# 3. Run the dev server
uvicorn main:app --reload --port 8000
```