
πŸ› οΈ Developer Documentation

Note: The root-level DEVELOPER.md is deprecated. This is the canonical developer documentation. 🚦

πŸ“¦ Project Structure

.
β”œβ”€β”€ app.py                        # Main Chainlit application (multi-agent RAG)
β”œβ”€β”€ app_simple_rag.py             # Simplified single-agent RAG application 
β”œβ”€β”€ Dockerfile                    # Docker container configuration
β”œβ”€β”€ pyproject.toml                # Project configuration and dependencies
β”œβ”€β”€ requirements.txt              # Basic requirements (for legacy compatibility)
β”œβ”€β”€ uv.lock                       # Lock file for uv package manager
β”œβ”€β”€ pstuts_rag/                   # Package directory
β”‚   β”œβ”€β”€ pstuts_rag/               # Source code
β”‚   β”‚   β”œβ”€β”€ __init__.py           # Package initialization
β”‚   β”‚   β”œβ”€β”€ configuration.py      # Application configuration settings
β”‚   β”‚   β”œβ”€β”€ datastore.py          # Vector database and document management
β”‚   β”‚   β”œβ”€β”€ rag.py                # RAG chain implementation and factories
β”‚   β”‚   β”œβ”€β”€ rag_for_transcripts.py  # RAG chain for video transcripts (reference packing)
β”‚   β”‚   β”œβ”€β”€ graph.py              # Agent node creation and LangGraph assembly
β”‚   β”‚   β”œβ”€β”€ state.py              # Team state management for agents
β”‚   β”‚   β”œβ”€β”€ prompts.py            # System prompts for different agents
β”‚   β”‚   β”œβ”€β”€ evaluator_utils.py    # RAG evaluation utilities
β”‚   β”‚   └── utils.py              # General utilities
β”‚   β”œβ”€β”€ setup.py                  # Package setup (legacy)
β”‚   └── CERT_SUBMISSION.md        # Certification submission documentation
β”œβ”€β”€ data/                         # Dataset files (JSON format)
β”‚   β”œβ”€β”€ train.json                # Training dataset
β”‚   β”œβ”€β”€ dev.json                  # Development dataset
β”‚   β”œβ”€β”€ test.json                 # Test dataset
β”‚   β”œβ”€β”€ kg_*.json                 # Knowledge graph datasets
β”‚   β”œβ”€β”€ LICENSE.txt               # Dataset license
β”‚   └── README.md                 # Dataset documentation
β”œβ”€β”€ notebooks/                    # Jupyter notebooks for development
β”‚   β”œβ”€β”€ evaluate_rag.ipynb        # RAG evaluation notebook
β”‚   β”œβ”€β”€ transcript_rag.ipynb      # Basic RAG experiments
β”‚   β”œβ”€β”€ transcript_agents.ipynb   # Multi-agent experiments
β”‚   β”œβ”€β”€ Fine_Tuning_Embedding_for_PSTuts.ipynb  # Embedding fine-tuning
β”‚   └── */                        # Fine-tuned model checkpoints
β”œβ”€β”€ docs/                         # Documentation
β”‚   β”œβ”€β”€ DEVELOPER.md              # This file - developer documentation
β”‚   β”œβ”€β”€ ANSWER.md                 # Technical answer documentation
β”‚   β”œβ”€β”€ BLOGPOST*.md              # Blog post drafts
β”‚   β”œβ”€β”€ dataset_card.md           # Dataset card documentation
β”‚   β”œβ”€β”€ TODO.md                   # Development TODO list
β”‚   └── chainlit.md               # Chainlit welcome message
β”œβ”€β”€ scripts/                      # Utility scripts (TODO extraction, pre-commit hook)
β”œβ”€β”€ public/                       # Theme and static files (see theming section)
└── README.md                     # User-facing documentation

🧩 Dependency Structure

Dependencies are organized into logical groups in pyproject.toml:

Core Dependencies 🎯

All required dependencies for the RAG system, including:

  • LangChain ecosystem: langchain, langchain-core, langchain-community, langchain-openai, langgraph
  • Vector database: qdrant-client, langchain-qdrant
  • ML/AI libraries: sentence-transformers, transformers, torch
  • Web interface: chainlit==2.0.4
  • Data processing: pandas, datasets, pyarrow
  • Evaluation: ragas==0.2.15
  • Jupyter support: ipykernel, jupyter, ipywidgets
  • API integration: tavily-python (web search), requests, python-dotenv

Optional Dependencies πŸ”§

  • dev: Development tools (pytest, black, mypy, deptry, ipdb)
  • web: Web server components (fastapi, uvicorn, python-multipart)

Installation examples:

pip install -e .                    # Core only
pip install -e ".[dev]"            # Core + development tools
pip install -e ".[dev,web]"        # Core + dev + web server

πŸ”§ Technical Architecture

Key Components

πŸ—οΈ Core Classes and Factories

  • Configuration (configuration.py): Application settings including model names, file paths, and parameters
  • Datastore (datastore.py): Manages Qdrant vector store, document loading, and semantic chunking
  • RAGChainFactory (rag.py): Creates retrieval-augmented generation chains with reference compilation
  • RAGChainInstance (rag.py): Encapsulates complete RAG instances with embeddings and vector stores
  • RAG for Transcripts (rag_for_transcripts.py): Implements the RAG chain for searching video transcripts, including reference packing and post-processing for AIMessage responses. Used for context-rich, reference-annotated answers from video data. 🎬
  • Graph Assembly (graph.py): Handles agent node creation, LangGraph assembly, and integration of multi-agent workflows. Provides utilities for building, initializing, and running the agentic graph. πŸ•ΈοΈ

πŸ—„οΈ QdrantClientSingleton (datastore.py)

  • Purpose: Ensures only one instance of QdrantClient exists per process, preventing accidental concurrent access to embedded Qdrant. Thread-safe and logs every access!
  • Usage:
    from pstuts_rag.datastore import QdrantClientSingleton
    client = QdrantClientSingleton.get_client(path="/path/to/db")  # or path=None for in-memory
    
  • Behavior:
    • First call determines the storage location (persistent or in-memory)
    • All subsequent calls return the same client, regardless of path
    • Thread-safe via a lock
    • Every call logs the requested path for debugging πŸͺ΅

πŸͺ Datastore (datastore.py)

  • Collection Creation Logic:
    • On initialization, always tries to create the Qdrant collection for the vector store.
    • If the collection already exists, catches the ValueError and simply fetches the existing collection instead (no crash, no duplicate creation!).
    • This is the recommended robust pattern for Qdrant local mode. 🦺
    • Example log output:
      Collection EVA_AI_transcripts created.
      # or
      Collection EVA_AI_transcripts already exists.
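A sketch of the create-or-reuse pattern, assuming (as described above) that local-mode Qdrant raises ValueError for an existing collection; the collection name and vector size here are placeholders:

import logging

from qdrant_client import QdrantClient
from qdrant_client.models import Distance, VectorParams

def ensure_collection(client: QdrantClient, name: str, dim: int) -> None:
    """Create the collection if missing; otherwise reuse the existing one."""
    try:
        client.create_collection(
            collection_name=name,
            vectors_config=VectorParams(size=dim, distance=Distance.COSINE),
        )
        logging.info("Collection %s created.", name)
    except ValueError:
        # Local Qdrant raises ValueError when the collection already exists.
        logging.info("Collection %s already exists.", name)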
      

πŸ•ΈοΈ Multi-Agent System

  • PsTutsTeamState (state.py): TypedDict managing multi-agent conversation state
  • Agent creation functions (graph.py): Factory functions for different agent types:
    • create_rag_node(): Video search agent using RAG
    • create_tavily_node(): Adobe Help web search agent
    • create_team_supervisor(): LLM-based routing supervisor
  • LangGraph implementation: Multi-agent coordination with state management

πŸ› οΈ Interactive Interrupt System

The system includes a sophisticated interrupt mechanism that allows for human-in-the-loop decision making during workflow execution.

Key Features:

  • Permission-based search control: Users can grant or deny permission for web searches on a per-query basis
  • Real-time interrupts: Workflow pauses execution to request user input when search permission is set to "ASK"
  • Graceful fallback: System continues with local RAG search if web search is denied
  • State persistence: Search permission decisions are maintained throughout the session

Implementation Details:

  • YesNoAsk enum: Manages three permission states - YES, NO, and ASK
  • Interrupt points: Built into the search_help node using LangGraph's interrupt() function
  • Configuration control: Default permission behavior set via EVA_SEARCH_PERMISSION environment variable
  • Interactive prompts: Users receive clear yes/no prompts with automatic parsing

Usage Workflow:

  1. User submits a query requiring web search
  2. If search_permission = ASK, system pauses with interrupt prompt
  3. User responds with "yes" to permit the search; any other response denies it
  4. System logs the decision and continues with appropriate search strategy
  5. Permission state persists for the current session

This feature enables controlled access to external resources while maintaining autonomous operation when permissions are pre-configured. πŸ€–βœ‹

πŸ“Š Document Processing

  • VideoTranscriptBulkLoader: Loads entire video transcripts as single documents
  • VideoTranscriptChunkLoader: Loads individual transcript segments with timestamps
  • chunk_transcripts(): Async semantic chunking with timestamp preservation
  • Custom embedding models: Fine-tuned embeddings for PsTuts domain

⚑ Asynchronous Loading System

  • Datastore.loading_complete: AsyncIO Event object that's set when data loading completes
  • Datastore.is_ready(): Convenience method to check if loading is complete
  • Datastore.wait_for_loading(timeout): Async method to wait for loading completion with optional timeout
  • Datastore.add_completion_callback(callback): Register callbacks (sync or async) to be called when loading completes
  • Non-blocking startup: Vector database loading runs in background threads to prevent UI blocking
  • Background processing: asyncio.create_task() used for concurrent data loading during application startup
  • Event-driven notifications: Hook into loading completion for reactive programming patterns

πŸ” Evaluation System

  • evaluator_utils.py: RAG evaluation utilities using RAGAS framework
  • Notebook-based evaluation: evaluate_rag.ipynb for systematic testing

βš™οΈ Configuration Reference

The Configuration class (in pstuts_rag/configuration.py) is powered by Pydantic and supports environment variable overrides for all fields. Below is a reference for all configuration options:

| Field | Env Var | Type | Default | Description |
| --- | --- | --- | --- | --- |
| eva_workflow_name | EVA_WORKFLOW_NAME | str | EVA_workflow | 🏷️ Name of the EVA workflow |
| eva_log_level | EVA_LOG_LEVEL | str | INFO | πŸͺ΅ Logging level for EVA |
| transcript_glob | TRANSCRIPT_GLOB | str | data/test.json | πŸ“„ Glob pattern for transcript JSON files (supports : for multiple) |
| embedding_model | EMBEDDING_MODEL | str | mbudisic/snowflake-arctic-embed-s-ft-pstuts | 🧊 Embedding model name (default: custom fine-tuned snowflake) |
| eva_strip_think | EVA_STRIP_THINK | bool | False | πŸ’­ If set (present in env), strips 'think' steps from EVA output |
| embedding_api | EMBEDDING_API | ModelAPI | HUGGINGFACE | πŸ”Œ API provider for embeddings (OPENAI, HUGGINGFACE, OLLAMA) |
| llm_api | LLM_API | ModelAPI | OLLAMA | πŸ€– API provider for LLM (OPENAI, HUGGINGFACE, OLLAMA) |
| max_research_loops | MAX_RESEARCH_LOOPS | int | 3 | πŸ” Maximum number of research loops to perform |
| llm_tool_model | LLM_TOOL_MODEL | str | smollm2:1.7b-instruct-q2_K | πŸ› οΈ LLM model for tool calling |
| n_context_docs | N_CONTEXT_DOCS | int | 2 | πŸ“š Number of context documents to retrieve for RAG |
| search_permission | EVA_SEARCH_PERMISSION | str | no | 🌐 Permission for search (yes, no, ask) |
| db_persist | EVA_DB_PERSIST | str or None | None | πŸ’Ύ Path or flag for DB persistence |
| eva_reinitialize | EVA_REINITIALIZE | bool | False | πŸ”„ If true, reinitializes EVA DB |
| thread_id | THREAD_ID | str | "" | 🧡 Thread ID for the current session |
  • All fields can be set via environment variables (see Pydantic BaseSettings docs).
  • Types are enforced at runtime. Defaults are shown above.
  • For advanced usage, see the Configuration class in pstuts_rag/configuration.py.

🎨 UI Customization & Theming

Sepia Theme Implementation πŸ–ΌοΈ

The application features a custom sepia-toned color scheme implemented via public/theme.json and Chainlit's theme configuration:

πŸ“ Theme Files

  • public/theme.json: Defines the sepia color palette and theme variables
  • .chainlit/config.toml: Configuration enabling the sepia theme as default

🎨 Color Palette Design

Theme colors are defined in theme.json and applied through Chainlit's theming system. There is no custom CSS file; all theming is handled via JSON and Chainlit configuration.

βš™οΈ Configuration Setup

# .chainlit/config.toml
[UI]
default_theme = "light"           # Set light theme as default
custom_theme = "/public/theme.json"  # Enable custom sepia theme

🎯 Features

  • Responsive Design: Adapts to both light and dark preferences
  • Accessibility: Maintains sufficient contrast ratios in both themes
  • Visual Cohesion: Unified sepia treatment across all UI elements
  • Performance: JSON-based theme for minimal runtime overhead
  • User Control: Native Chainlit theme switcher toggles between variants

The sepia theme creates a warm, nostalgic atmosphere perfect for Adobe Photoshop tutorials, giving the application a distinctive visual identity that stands out from standard blue/gray interfaces. πŸ“Έβœ¨

πŸš€ Running the Applications

Multi-Agent RAG (Recommended) πŸ€–

chainlit run app.py

Features team of agents including video search and web search capabilities.

Simple RAG (Basic) πŸ”

chainlit run app_simple_rag.py

Single-agent RAG system for straightforward queries.

πŸ”¬ Development Workflow

  1. Environment Setup:
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install -e ".[dev]"
  2. Environment Variables:
export OPENAI_API_KEY="your-openai-key"
export TAVILY_API_KEY="your-tavily-key"  # Optional, for web search
  3. Code Quality Tools:
# Dependency analysis
deptry .

# Code formatting and linting
black .
ruff check .
mypy .

# Development debugging: ipdb is available for interactive breakpoints
# (import ipdb; ipdb.set_trace())
  4. Notebook Development:
    • Use notebooks/ for experimentation
    • evaluate_rag.ipynb for systematic evaluation
    • Fine-tuning experiments in Fine_Tuning_Embedding_for_PSTuts.ipynb

🏷️ Versioning & Automated Tagging Workflow

This project uses semantic versioning and automated GitHub Actions to keep track of releases and make version management a breeze! πŸš€

πŸ“¦ Where is the version stored?

  • The current app version is stored in version.py at the project root:
    __version__ = "1.0.0"
    
  • This version is displayed to users in the app UI ("I'm Eva v.X.Y.Z...").

πŸ”Ό How to bump the version

  1. Edit version.py and update the __version__ string (e.g., to 1.1.0).
  2. Commit the change to the main branch:
    git add version.py
    git commit -m "chore: Bump version to 1.1.0"
    git push
    

πŸ€– What happens next? (GitHub Actions magic)

  • On every push to main that changes version.py, a GitHub Actions workflow (.github/workflows/tag-on-version.yml) runs:
    1. Extracts the version from version.py.
    2. Checks if a tag vX.Y.Z already exists.
    3. If not, creates and pushes a tag (e.g., v1.1.0) to the repo.
  • The workflow uses the official actions/checkout and is granted contents: write permission to push tags.

🏷️ Tags & Releases

  • Tags are visible in the GitHub Releases and Tags pages.
  • You can create a GitHub Release from any tag for changelogs, downloads, etc.
  • The latest version is always shown in the README badge: GitHub tag (latest by date)

πŸ“ Example: Bumping the Version

# 1. Edit version.py to set __version__ = "1.2.0"
# 2. Commit and push to main
# 3. The workflow will create and push tag v1.2.0 automatically!

🌊 Lazy Graph Initialization

The project uses a lazy initialization pattern for the LangGraph to avoid expensive compilation during module imports while maintaining compatibility with LangGraph Studio.

πŸ”§ Implementation Pattern

# In pstuts_rag/nodes.py
import asyncio

from langchain_core.runnables import RunnableConfig

from pstuts_rag.configuration import Configuration

# graph_builder and datastore are module-level objects defined earlier in nodes.py
_compiled_graph = None

def graph(config: RunnableConfig = None):
    """Graph factory function for LangGraph Studio compatibility.
    
    This function provides lazy initialization of the graph and datastore,
    allowing the module to be imported without triggering compilation.
    LangGraph Studio requires this function to take exactly one RunnableConfig argument.
    
    Args:
        config: RunnableConfig (required by LangGraph Studio, but can be None)
    
    Returns:
        Compiled LangGraph instance
    """
    global _compiled_graph
    if _compiled_graph is None:
        _compiled_graph = graph_builder.compile()
        # Initialize datastore when graph is first accessed
        asyncio.run(datastore.from_json_globs(Configuration().transcript_glob))
    return _compiled_graph

def get_graph():
    """Convenience function to get the compiled graph without config argument."""
    return graph()
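Outside of Studio, the compiled graph can be obtained on demand; invocation inputs follow the team state schema, so the call below is only indicative:

from pstuts_rag.nodes import get_graph

ai_graph = get_graph()           # first call compiles the graph and loads the datastore
# result = ai_graph.invoke(...)  # inputs follow the PsTutsTeamState schema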

🎯 Benefits

  • Fast imports: Module loading doesn't trigger graph compilation πŸš€
  • LangGraph Studio compatibility: Maintains expected graph variable for discovery πŸ› οΈ
  • On-demand initialization: Graph and datastore only initialize when actually used ⚑
  • Memory efficiency: Resources allocated only when needed πŸ’Ύ

πŸ“„ Studio Configuration

The langgraph.json file correctly references the factory function:

{
    "graphs": {
        "enhanced_video_archive": "./pstuts_rag/pstuts_rag/nodes.py:graph"
    }
}

When LangGraph Studio accesses the graph function, it automatically triggers lazy initialization and provides the compiled graph instance. The factory function pattern ensures compatibility while maintaining performance benefits.

πŸ—οΈ Architecture Notes

  • Embedding models: Uses custom fine-tuned snowflake-arctic-embed-s-ft-pstuts by default
  • Vector store: Qdrant with semantic chunking for optimal retrieval
  • LLM: GPT-4.1-mini for generation and routing
  • Web search: Tavily integration targeting helpx.adobe.com
  • State management: LangGraph for multi-agent coordination
  • Evaluation: RAGAS framework for retrieval and generation metrics

πŸ†• Recent Refactors & Enhancements (Spring 2024)

πŸ—οΈ Modular App Structure & Async Initialization

  • The main application (app.py) is now more modular and async-friendly! Initialization of the datastore, agent graph, and session state is handled with care for concurrency and user experience.
  • The agent graph is now referenced as ai_graph (formerly compiled_graph) for clarity and onboarding ease.
  • Chainlit session and callback management is improved, making it easier to hook into events and extend the app. 🚦

πŸ€– Robust API/Model Selection Logic

  • All API/model selection (for LLMs and embeddings) is now centralized in utils.py via get_chat_api and get_embeddings_api.
  • These functions robustly parse string input to the ModelAPI enum, so you can use any case or format (e.g., "openai", "OPENAI", "Ollama") and it will Just Workβ„’.
  • This eliminates a whole class of bugs from mismatched config strings! πŸŽ‰

πŸ” Smarter Search Phrase Generation

  • The search phrase generation logic (in prompts.py and node code) now uses previous queries and conversation history to generate unique, context-aware search phrases.
  • This means less repetition, more relevance, and a more natural research workflow for the agents. 🧠✨

βš™οΈ Enhanced LLM API & Configuration

  • The Configuration class (configuration.py) now supports robust environment variable overrides and easy conversion to/from RunnableConfig.
  • All config parameters are logged and managed with dataclass fields, making debugging and onboarding a breeze.

🎨 Sepia Theme Update

  • The UI now features a beautiful sepia color palette for a warm, inviting look (see above for details!).
  • Theme files and configuration have been updated for seamless switching between light and dark sepia modes.
  • Perfect for those late-night Photoshop tutorial sessions! β˜•πŸ–ΌοΈ

πŸ”„ Usage Examples

Event-Based Loading with Callbacks

# Option 1: Custom callback passed to startup
async def my_completion_handler():
    print("βœ… Database is ready for queries!")
    await notify_users("System ready")

datastore = await startup(
    config=my_config,
    on_loading_complete=my_completion_handler
)

# Option 2: Register callbacks after initialization
datastore = await startup(config=my_config)

# Add additional callbacks
def on_complete():
    print("βœ… Loading finished!")

async def on_complete_async():
    await send_notification("Database ready")

datastore.add_completion_callback(on_complete)
datastore.add_completion_callback(on_complete_async)

# Option 3: Wait for completion with timeout
if await datastore.wait_for_loading(timeout=60):
    print("Loading completed within timeout")
else:
    print("Loading timed out")

πŸ› οΈ Interactive Interrupt System Usage

Environment Configuration:

# Enable interactive prompts (default)
export EVA_SEARCH_PERMISSION="ask"

# Pre-approve all searches (autonomous mode)
export EVA_SEARCH_PERMISSION="yes" 

# Block all searches (local-only mode)  
export EVA_SEARCH_PERMISSION="no"

Node Implementation Example:

# In the search_help node (nodes.py); interrupt() and Command come from langgraph.types
decision = state["search_permission"]
if decision == YesNoAsk.ASK:
    # Pause execution and request user input
    response = interrupt(
        f"Do you allow Internet search for query '{query}'? "
        "Answer 'yes' will perform the search, any other answer will skip it."
    )

    # Parse user response
    decision = YesNoAsk.YES if "yes" in response.strip() else YesNoAsk.NO

    # Update state and continue
    return Command(
        update={"search_permission": decision},
        goto=search_help.__name__,
    )

Runtime Behavior:

User Query: "How do I use layer masks in Photoshop?"
System: "Do you allow Internet search for query 'How do I use layer masks in Photoshop?'? Answer 'yes' will perform the search, any other answer will skip it."
User: "yes"  
System: [Continues with web search + local RAG search]

User Query: "What are blend modes?"
System: "Do you allow Internet search for query 'What are blend modes?'? Answer 'yes' will perform the search, any other answer will skip it."  
User: "no"
System: [Skips web search, continues with local RAG only]

πŸ› οΈ Robust HTML Title Extraction

get_title_streaming(url)

This function fetches the HTML from a URL and extracts the page title using the most common conventions, in this order:

  1. <meta property="og:title" content="..."> (Open Graph, for social sharing)
  2. <meta name="twitter:title" content="..."> (Twitter Cards)
  3. <meta name="title" content="..."> (sometimes used for SEO)
  4. <title>...</title> (the classic HTML title tag)

It returns the first found value as a string, or None if no title is found. All extraction is done with BeautifulSoup for maximum reliability and standards compliance.

Example usage:

from pstuts_rag.utils import get_title_streaming
url = "https://example.com"
title = get_title_streaming(url)
print(title)  # Prints the best available title, or None
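For reference, the lookup order described above can be sketched roughly like this (illustrative only, not the actual implementation in utils.py):

from typing import Optional

from bs4 import BeautifulSoup

def sketch_extract_title(html: str) -> Optional[str]:
    """Follow the og:title -> twitter:title -> meta title -> <title> order."""
    soup = BeautifulSoup(html, "html.parser")
    for name, attrs in (
        ("meta", {"property": "og:title"}),
        ("meta", {"name": "twitter:title"}),
        ("meta", {"name": "title"}),
    ):
        tag = soup.find(name, attrs=attrs)
        if tag and tag.get("content"):
            return tag["content"].strip()
    if soup.title and soup.title.string:
        return soup.title.string.strip()
    return None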

πŸ₯£ Requirements

  • This function requires beautifulsoup4 to be installed:
    pip install beautifulsoup4
    

"A page by any other name would still be as sweet... but it's nice to get the right one!" πŸ˜„

πŸ“ Automatic TODO Extraction

This repo uses flake8-todos to collect all TODO-style comments from Python files and write them to a TODO.md file at the project root.

How it works

  • Run uv run python scripts/generate_todo_md.py to (re)generate TODO.md.
  • A manual pre-commit hook is provided to automate this:
    1. Copy it into your git hooks:
      cp scripts/pre-commit .git/hooks/pre-commit && chmod +x .git/hooks/pre-commit
    2. On every commit, it will update TODO.md and stage it automatically.

Why manual?

  • This hook is not installed by default. You must opt-in by copying it yourself (see above).
  • This keeps your workflow flexible and avoids surprises for new contributors.

Example output

# πŸ“ TODOs in Codebase

- `pstuts_rag/agent.py:42`: TD003 TODO: Refactor this function
- `scripts/generate_todo_md.py:10`: TD002 FIXME: Handle edge case

Happy hacking! πŸš€

πŸ› οΈ Chainlit Settings Integration: Web Search Permission 🌐

You can now control the EVA web search permission (EVA_SEARCH_PERMISSION) directly from the Chainlit chat UI! πŸŽ›οΈ

  • The setting appears as a dropdown in the chat settings (top right βš™οΈ):
    • "Ask every time" (ask)
    • "Always allow" (yes)
    • "Never allow" (no)
  • When the user changes this setting, the backend updates the Configuration object in the session, so all subsequent actions use the new value.
  • The rest of the app always reads the current value from the session's Configuration.

How to extend:

  • To add more user-configurable settings, just add them to the Chainlit settings schema and update the session's Configuration in the @cl.on_settings_update handler. Easy as pie! πŸ₯§