burtenshaw committed
Commit 05c2ac8 · 1 Parent(s): a68746d

first commit
Files changed (10)
  1. .python-version +1 -0
  2. Dockerfile +33 -0
  3. README.md +243 -1
  4. _README.md +0 -0
  5. env.example +12 -0
  6. mcp_server.py +81 -0
  7. pyproject.toml +24 -0
  8. requirements.txt +78 -0
  9. server.py +192 -0
  10. uv.lock +0 -0
.python-version ADDED
@@ -0,0 +1 @@
3.11
Dockerfile ADDED
@@ -0,0 +1,33 @@
FROM python:3.11-slim

# Set working directory
WORKDIR /app

# Install system dependencies (curl is required by the HEALTHCHECK below;
# it is not present in the slim base image)
RUN apt-get update && apt-get install -y \
    git \
    curl \
    && rm -rf /var/lib/apt/lists/*

# Copy project files
COPY pyproject.toml .
COPY server.py .
COPY mcp_server.py .
COPY env.example .
COPY README.md .

# Install Python dependencies
RUN pip install --no-cache-dir -e .

# Create a non-root user
RUN useradd -m -u 1000 appuser && chown -R appuser:appuser /app
USER appuser

# Expose port
EXPOSE 8000

# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
    CMD curl -f http://localhost:8000/ || exit 1

# Run the application
CMD ["python", "server.py"]
README.md CHANGED
@@ -9,4 +9,246 @@ app_file: app.py
pinned: false
---

- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference

# 🤖 Hugging Face Discussion Bot

A FastAPI and Gradio application that automatically responds to Hugging Face Hub discussion comments with AI-generated replies, powered by the Hugging Face Inference API and MCP integration.

## ✨ Features

- **Webhook Integration**: Receives real-time webhooks from the Hugging Face Hub when new discussion comments are posted
- **AI-Powered Responses**: Uses the Hugging Face Inference API with MCP support for context-aware responses
- **Interactive Dashboard**: Gradio interface for monitoring comments and testing functionality
- **Automatic Posting**: Posts AI responses back to the original discussion thread
- **Testing Tools**: Built-in webhook simulation and AI testing capabilities
- **MCP Server**: Includes a Model Context Protocol server for tool integration

## 🚀 Quick Start

### 1. Installation

```bash
# Clone the repository
git clone <your-repo-url>
cd mcp-course-unit3-example

# Install dependencies
pip install -e .
```

### 2. Environment Setup

Copy the example environment file and configure your API keys:

```bash
cp env.example .env
```

Edit `.env` with your credentials:

```env
# Webhook Configuration
WEBHOOK_SECRET=your-secure-webhook-secret

# Hugging Face Configuration
HF_TOKEN=hf_your_hugging_face_token_here

# Model Configuration (optional)
HF_MODEL=microsoft/DialoGPT-medium
HF_PROVIDER=huggingface
```

### 3. Run the Application

```bash
python server.py
```

The application starts on `http://localhost:8000` with:
- 📊 **Gradio Dashboard**: `http://localhost:8000/gradio`
- 🔗 **Webhook Endpoint**: `http://localhost:8000/webhook`
- 📋 **API Documentation**: `http://localhost:8000/docs`

## 🔧 Configuration

### Hugging Face Hub Webhook Setup

1. Go to your Hugging Face repository settings
2. Navigate to the "Webhooks" section
3. Create a new webhook with:
   - **URL**: `https://your-domain.com/webhook`
   - **Secret**: the same value as `WEBHOOK_SECRET` in your `.env`
   - **Events**: subscribe to "Community (PR & discussions)"

### Required API Keys

#### Hugging Face Token

1. Go to [Hugging Face Settings](https://huggingface.co/settings/tokens)
2. Create a new token with "Write" permissions
3. Add it to your `.env` as `HF_TOKEN`

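A quick local sanity check can catch copy-paste mistakes before the token ever reaches the API. This only checks the format (the `hf_` prefix is standard for user access tokens); real validation happens against the Hub:

```python
def looks_like_hf_token(token: str) -> bool:
    """Cheap format check for a Hugging Face user access token."""
    return token.startswith("hf_") and len(token) > 10


print(looks_like_hf_token("hf_your_hugging_face_token_here"))  # True
print(looks_like_hf_token("your-huggingface-token-here"))  # False
```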
## 📊 Dashboard Features

### Recent Comments Tab

- View all processed discussion comments
- See AI responses in real time
- Refresh and filter capabilities

### Test HF Inference Tab

- Test the Hugging Face Inference API directly
- Custom prompt input
- Response preview

### Simulate Webhook Tab

- Test webhook processing without real HF events
- Mock discussion scenarios
- Validate AI response generation

### Configuration Tab

- View current setup status
- Check API key configuration
- Monitor processing statistics

## 🔌 API Endpoints

### POST `/webhook`

Receives webhooks from the Hugging Face Hub.

**Headers:**
- `X-Webhook-Secret`: your webhook secret

**Body:** HF Hub webhook payload

### GET `/comments`

Returns all processed comments and responses.

### GET `/`

Basic API information and available endpoints.

## 🤖 MCP Server

The application includes a Model Context Protocol (MCP) server (`mcp_server.py`) that exposes two tools:

- **generate_discussion_response**: Generate an AI response to a discussion comment using HF Inference
- **post_discussion_comment**: Post a comment to a Hugging Face discussion

### Running the MCP Server

```bash
python mcp_server.py
```

The MCP server uses stdio transport and can be integrated with MCP clients following the [Tiny Agents pattern](https://huggingface.co/blog/python-tiny-agents).

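Under the hood, the stdio transport carries newline-delimited JSON-RPC 2.0 messages. As an illustrative sketch (not a full MCP client, which would also perform the `initialize` handshake first), a `tools/call` request for this server's `generate_discussion_response` tool is shaped like this:

```python
import json

# JSON-RPC 2.0 request an MCP client would write to the server's stdin
# (illustrative; the argument values are example data)
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "generate_discussion_response",
        "arguments": {
            "discussion_title": "Test Discussion",
            "comment_content": "How do I use this model?",
            "repo_name": "test/repo",
        },
    },
}

# One JSON message per line over stdio
wire_line = json.dumps(request)
print(wire_line[:34])
```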
## 🧪 Testing

### Local Testing

Use the "Simulate Webhook" tab in the Gradio dashboard to test without real webhooks.

### Webhook Testing

You can test the webhook endpoint directly:

```bash
curl -X POST http://localhost:8000/webhook \
  -H "Content-Type: application/json" \
  -H "X-Webhook-Secret: your-webhook-secret" \
  -d '{
    "event": {"action": "create", "scope": "discussion.comment"},
    "comment": {
      "content": "@discussion-bot How do I use this model?",
      "author": "test-user",
      "created_at": "2024-01-01T00:00:00Z"
    },
    "discussion": {
      "title": "Test Discussion",
      "num": 1,
      "url": {"api": "https://huggingface.co/api/repos/test/repo/discussions"}
    },
    "repo": {"name": "test/repo"}
  }'
```

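The same smoke test can be driven from Python with only the standard library; `make_test_payload` mirrors the curl body above, and `post_webhook` is a sketch that assumes `server.py` is running locally:

```python
import json
import urllib.request


def make_test_payload(repo: str, title: str, num: int, comment: str) -> dict:
    """Build a webhook payload with the fields server.py reads."""
    return {
        "event": {"action": "create", "scope": "discussion.comment"},
        "comment": {
            "content": comment,
            "author": "test-user",
            "created_at": "2024-01-01T00:00:00Z",
        },
        "discussion": {"title": title, "num": num},
        "repo": {"name": repo},
    }


def post_webhook(payload: dict, secret: str, url: str = "http://localhost:8000/webhook") -> bytes:
    """POST the payload to the running server (requires server.py to be up)."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json", "X-Webhook-Secret": secret},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()


payload = make_test_payload("test/repo", "Test Discussion", 1, "How do I use this model?")
```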
## 🏗️ Architecture

```
┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│     HF Hub      │───▶│     FastAPI     │───▶│  HF Inference   │
│     Webhook     │    │     Server      │    │       API       │
└─────────────────┘    └─────────────────┘    └─────────────────┘
                                │
                                ▼
                       ┌─────────────────┐
                       │     Gradio      │
                       │    Dashboard    │
                       └─────────────────┘
                                │
                                ▼
                       ┌─────────────────┐
                       │   MCP Server    │
                       │     (Tools)     │
                       └─────────────────┘
```

## 🔒 Security

- Webhook secret verification rejects unauthorized requests
- Environment variables keep credentials out of the codebase
- CORS middleware is enabled (note: `server.py` currently allows all origins; tighten `allow_origins` for production)

## 🚀 Deployment

### Using Docker (Recommended)

```dockerfile
FROM python:3.11-slim

WORKDIR /app
COPY . .
RUN pip install -e .

EXPOSE 8000
CMD ["python", "server.py"]
```

### Using Cloud Platforms

The application can be deployed on:
- **Hugging Face Spaces** (recommended for HF integration)
- **Railway**
- **Render**
- **Heroku**
- **AWS/GCP/Azure**

## 🤝 Contributing

1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests if applicable
5. Submit a pull request

## 📝 License

This project is licensed under the MIT License.

## 🆘 Support

If you encounter issues:

1. Check the Configuration tab in the dashboard
2. Verify your API keys are correct
3. Ensure your webhook URL is publicly accessible
4. Check the application logs

For additional help, please open an issue in the repository.

## 🔗 Related Links

- [Hugging Face Webhooks Guide](https://huggingface.co/docs/hub/en/webhooks-guide-discussion-bot)
- [Hugging Face Hub Python Library](https://huggingface.co/docs/huggingface_hub/en/guides/community)
- [Tiny Agents in Python Blog Post](https://huggingface.co/blog/python-tiny-agents)
- [FastAPI Documentation](https://fastapi.tiangolo.com/)
- [Gradio Documentation](https://gradio.app/)
- [Model Context Protocol (MCP)](https://modelcontextprotocol.io/)
_README.md ADDED
File without changes
env.example ADDED
@@ -0,0 +1,12 @@
# Webhook Configuration
WEBHOOK_SECRET=your-webhook-secret-here

# Hugging Face Configuration
HF_TOKEN=your-huggingface-token-here

# Model Configuration (optional)
HF_MODEL=microsoft/DialoGPT-medium
HF_PROVIDER=huggingface

# Optional: Custom bot username for mention detection
BOT_USERNAME=discussion-bot
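`BOT_USERNAME` is reserved for mention detection; note that the `server.py` in this commit responds to every new comment and does not read it yet. A minimal mention filter could look like this (hypothetical helper, not in the repo):

```python
import re


def mentions_bot(comment: str, username: str = "discussion-bot") -> bool:
    """True if the comment @-mentions the configured bot username."""
    return re.search(rf"@{re.escape(username)}\b", comment) is not None


print(mentions_bot("@discussion-bot How do I use this model?"))  # True
print(mentions_bot("How do I use this model?"))  # False
```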
mcp_server.py ADDED
@@ -0,0 +1,81 @@
#!/usr/bin/env python3
"""
Simplified MCP Server for HuggingFace Hub Operations using FastMCP
"""

import os
from fastmcp import FastMCP
from huggingface_hub import comment_discussion, InferenceClient
from dotenv import load_dotenv

load_dotenv()

# Configuration
HF_TOKEN = os.getenv("HF_TOKEN")
DEFAULT_MODEL = os.getenv("HF_MODEL", "Qwen/Qwen2.5-72B-Instruct")

# Initialize HF client
inference_client = (
    InferenceClient(model=DEFAULT_MODEL, token=HF_TOKEN) if HF_TOKEN else None
)

# Create the FastMCP server
mcp = FastMCP("hf-discussion-bot")


@mcp.tool()
def generate_discussion_response(
    discussion_title: str, comment_content: str, repo_name: str
) -> str:
    """Generate AI response for a HuggingFace discussion comment"""
    if not inference_client:
        return "Error: HF token not configured for inference"

    prompt = f"""
    Discussion: {discussion_title}
    Repository: {repo_name}
    Comment: {comment_content}

    Provide a helpful response to this comment.
    """

    try:
        messages = [
            {
                "role": "system",
                "content": "You are a helpful AI assistant for ML discussions.",
            },
            {"role": "user", "content": prompt},
        ]

        response = inference_client.chat_completion(messages=messages, max_tokens=150)
        content = response.choices[0].message.content
        ai_response = content.strip() if content else "No response generated"
        return ai_response

    except Exception as e:
        return f"Error generating response: {str(e)}"


@mcp.tool()
def post_discussion_comment(repo_id: str, discussion_num: int, comment: str) -> str:
    """Post a comment to a HuggingFace discussion"""
    if not HF_TOKEN:
        return "Error: HF token not configured"

    try:
        comment_discussion(
            repo_id=repo_id,
            discussion_num=discussion_num,
            comment=comment,
            token=HF_TOKEN,
        )
        success_msg = f"Successfully posted comment to discussion #{discussion_num}"
        return success_msg

    except Exception as e:
        return f"Error posting comment: {str(e)}"


if __name__ == "__main__":
    mcp.run()
pyproject.toml ADDED
@@ -0,0 +1,24 @@
[project]
name = "mcp-course-unit3-example"
version = "0.1.0"
description = "FastAPI and Gradio app for Hugging Face Hub discussion webhooks"
readme = "README.md"
requires-python = ">=3.11"
dependencies = [
    "fastapi>=0.104.0",
    "uvicorn[standard]>=0.24.0",
    "gradio>=4.0.0",
    "huggingface-hub[mcp]>=0.32.0",
    "pydantic>=2.0.0",
    "python-multipart>=0.0.6",
    "requests>=2.31.0",
    "python-dotenv>=1.0.0",
    "fastmcp>=2.0.0",
]

[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"

# The modules live at the repository root (there is no src/ directory),
# so list them explicitly rather than pointing the wheel at "src"
[tool.hatch.build.targets.wheel]
only-include = ["server.py", "mcp_server.py"]
requirements.txt ADDED
@@ -0,0 +1,78 @@
# This file was autogenerated by uv via the following command:
#     uv export --format requirements-txt --no-hashes
-e .
aiofiles==24.1.0
aiohappyeyeballs==2.6.1
aiohttp==3.12.2
aiosignal==1.3.2
annotated-types==0.7.0
anyio==4.9.0
attrs==25.3.0
audioop-lts==0.2.1 ; python_full_version >= '3.13'
certifi==2025.4.26
charset-normalizer==3.4.2
click==8.2.1
colorama==0.4.6 ; sys_platform == 'win32' or platform_system == 'Windows'
exceptiongroup==1.3.0
fastapi==0.115.12
fastmcp==2.5.1
ffmpy==0.5.0
filelock==3.18.0
frozenlist==1.6.0
fsspec==2025.5.1
gradio==5.31.0
gradio-client==1.10.1
groovy==0.1.2
h11==0.16.0
hf-xet==1.1.2 ; platform_machine == 'aarch64' or platform_machine == 'amd64' or platform_machine == 'arm64' or platform_machine == 'x86_64'
httpcore==1.0.9
httptools==0.6.4
httpx==0.28.1
httpx-sse==0.4.0
huggingface-hub==0.32.2
idna==3.10
jinja2==3.1.6
markdown-it-py==3.0.0
markupsafe==3.0.2
mcp==1.9.1
mdurl==0.1.2
multidict==6.4.4
numpy==2.2.6
openapi-pydantic==0.5.1
orjson==3.10.18
packaging==25.0
pandas==2.2.3
pillow==11.2.1
propcache==0.3.1
pydantic==2.11.5
pydantic-core==2.33.2
pydantic-settings==2.9.1
pydub==0.25.1
pygments==2.19.1
python-dateutil==2.9.0.post0
python-dotenv==1.1.0
python-multipart==0.0.20
pytz==2025.2
pyyaml==6.0.2
requests==2.32.3
rich==14.0.0
ruff==0.11.11 ; sys_platform != 'emscripten'
safehttpx==0.1.6
semantic-version==2.10.0
shellingham==1.5.4
six==1.17.0
sniffio==1.3.1
sse-starlette==2.3.5
starlette==0.46.2
tomlkit==0.13.2
tqdm==4.67.1
typer==0.16.0
typing-extensions==4.13.2
typing-inspection==0.4.1
tzdata==2025.2
urllib3==2.4.0
uvicorn==0.34.2
uvloop==0.21.0 ; platform_python_implementation != 'PyPy' and sys_platform != 'cygwin' and sys_platform != 'win32'
watchfiles==1.0.5
websockets==15.0.1
yarl==1.20.0
server.py ADDED
@@ -0,0 +1,192 @@
import os
from datetime import datetime
from typing import List, Dict, Any, Optional

from fastapi import FastAPI, Request, BackgroundTasks
from fastapi.middleware.cors import CORSMiddleware
import gradio as gr
import uvicorn
from pydantic import BaseModel
from huggingface_hub.inference._mcp.agent import Agent
from dotenv import load_dotenv

load_dotenv()

# Configuration
WEBHOOK_SECRET = os.getenv("WEBHOOK_SECRET", "your-webhook-secret")
HF_TOKEN = os.getenv("HF_TOKEN")
HF_MODEL = os.getenv("HF_MODEL", "microsoft/DialoGPT-medium")
HF_PROVIDER = os.getenv("HF_PROVIDER", "huggingface")

# Simple storage for processed comments
comments_store: List[Dict[str, Any]] = []

# Agent instance
agent_instance: Optional[Agent] = None


class WebhookEvent(BaseModel):
    event: Dict[str, str]
    comment: Dict[str, Any]
    discussion: Dict[str, Any]
    repo: Dict[str, str]


app = FastAPI(title="HF Discussion Bot")
app.add_middleware(CORSMiddleware, allow_origins=["*"])


async def get_agent():
    """Get or create Agent instance"""
    global agent_instance
    if agent_instance is None and HF_TOKEN:
        agent_instance = Agent(
            model=HF_MODEL,
            provider=HF_PROVIDER,
            api_key=HF_TOKEN,
            servers=[
                {
                    "type": "stdio",
                    "config": {"command": "python", "args": ["mcp_server.py"]},
                }
            ],
        )
        await agent_instance.load_tools()
    return agent_instance


async def process_webhook_comment(webhook_data: Dict[str, Any]):
    """Process webhook using Agent with MCP tools"""
    comment_content = webhook_data["comment"]["content"]
    discussion_title = webhook_data["discussion"]["title"]
    repo_name = webhook_data["repo"]["name"]
    discussion_num = webhook_data["discussion"]["num"]

    agent = await get_agent()
    if not agent:
        ai_response = "Error: Agent not configured (missing HF_TOKEN)"
    else:
        # Use Agent to respond to the discussion
        prompt = f"""
        Please respond to this HuggingFace discussion comment using the available tools.

        Repository: {repo_name}
        Discussion: {discussion_title} (#{discussion_num})
        Comment: {comment_content}

        First use generate_discussion_response to create a helpful response, then use post_discussion_comment to post it.
        """

        try:
            response_parts = []
            async for item in agent.run(prompt):
                # Collect the agent's response
                if hasattr(item, "content") and item.content:
                    response_parts.append(item.content)
                elif isinstance(item, str):
                    response_parts.append(item)

            ai_response = (
                " ".join(response_parts) if response_parts else "No response generated"
            )
        except Exception as e:
            ai_response = f"Error using agent: {str(e)}"

    # Store the interaction with reply link
    discussion_url = f"https://huggingface.co/{repo_name}/discussions/{discussion_num}"

    interaction = {
        "timestamp": datetime.now().isoformat(),
        "repo": repo_name,
        "discussion_title": discussion_title,
        "discussion_num": discussion_num,
        "discussion_url": discussion_url,
        "original_comment": comment_content,
        "ai_response": ai_response,
        "comment_author": webhook_data["comment"]["author"],
    }

    comments_store.append(interaction)
    return ai_response


@app.post("/webhook")
async def webhook_handler(request: Request, background_tasks: BackgroundTasks):
    """Handle HF Hub webhooks"""
    webhook_secret = request.headers.get("X-Webhook-Secret")
    if webhook_secret != WEBHOOK_SECRET:
        return {"error": "Invalid webhook secret"}

    payload = await request.json()
    event = payload.get("event", {})

    if event.get("action") == "create" and event.get("scope") == "discussion.comment":
        background_tasks.add_task(process_webhook_comment, payload)
        return {"status": "processing"}

    return {"status": "ignored"}


async def simulate_webhook(
    repo_name: str, discussion_title: str, comment_content: str
) -> str:
    """Simulate webhook for testing"""
    if not all([repo_name, discussion_title, comment_content]):
        return "Please fill in all fields."

    mock_payload = {
        "event": {"action": "create", "scope": "discussion.comment"},
        "comment": {
            "content": comment_content,
            "author": "test-user",
            "created_at": datetime.now().isoformat(),
        },
        "discussion": {
            "title": discussion_title,
            "num": len(comments_store) + 1,
        },
        "repo": {"name": repo_name},
    }

    response = await process_webhook_comment(mock_payload)
    return f"✅ Processed! AI Response: {response}"


def create_gradio_app():
    """Create Gradio interface"""
    with gr.Blocks(title="HF Discussion Bot", theme=gr.themes.Soft()) as demo:
        gr.Markdown("# 🤖 HF Discussion Bot Dashboard")
        gr.Markdown("*Powered by HuggingFace Tiny Agents + FastMCP*")

        with gr.Column():
            sim_repo = gr.Textbox(label="Repository", value="microsoft/DialoGPT-medium")
            sim_title = gr.Textbox(label="Discussion Title", value="Test Discussion")
            sim_comment = gr.Textbox(
                label="Comment",
                lines=3,
                value="How do I use this model?",
            )
            sim_btn = gr.Button("📤 Test Webhook")

        with gr.Column():
            sim_result = gr.Textbox(label="Result", lines=8)

        sim_btn.click(
            fn=simulate_webhook,
            inputs=[sim_repo, sim_title, sim_comment],
            outputs=[sim_result],
        )

    return demo


# Mount Gradio app
gradio_app = create_gradio_app()
app = gr.mount_gradio_app(app, gradio_app, path="/gradio")


if __name__ == "__main__":
    print("🚀 Starting HF Discussion Bot with Tiny Agents...")
    print("📊 Dashboard: http://localhost:8000/gradio")
    print("🔗 Webhook: http://localhost:8000/webhook")
    # Port 8000 matches the Dockerfile EXPOSE and the README URLs
    uvicorn.run("server:app", host="0.0.0.0", port=8000, reload=True)
uv.lock ADDED
The diff for this file is too large to render. See raw diff