Commit 81d7a7f · Chunhua Liao
Parent: a0927d3

Convert project to Gradio app for Hugging Face Spaces deployment

- Create app.py with full Gradio interface for AI Co-Scientist system
- Add Gradio to requirements.txt replacing FastAPI/Uvicorn
- Implement comprehensive UI with research goal input, advanced settings
- Add automatic environment detection and cost control for HF Spaces
- Create README_HF.md with proper HF Spaces metadata and documentation
- Add deployment guide in docs/huggingface_deployment.md
- Create test suite in tests/test_gradio.py with full validation
- Maintain all existing functionality: hypothesis generation, evolution, ranking
- Add literature integration with arXiv search and paper display
- Include deployment status banner and model filtering
- Support both local development and production deployment modes
- README_HF.md +86 -0
- app.py +414 -0
- docs/huggingface_deployment.md +191 -0
- requirements.txt +1 -2
- tests/test_gradio.py +120 -0
README_HF.md
ADDED
@@ -0,0 +1,86 @@
---
title: AI Co-Scientist
emoji: 🔬
colorFrom: blue
colorTo: green
sdk: gradio
sdk_version: 4.44.0
app_file: app.py
pinned: false
license: mit
short_description: Generate, review, rank, and evolve research hypotheses using AI agents
---

# 🔬 AI Co-Scientist - Hypothesis Evolution System

An AI-powered system for generating, reviewing, ranking, and evolving research hypotheses using multiple AI agents. It helps researchers explore a research space and identify promising hypotheses through iterative refinement.

## 🚀 Features

- **Multi-Agent System**: Uses specialized AI agents for generation, reflection, ranking, evolution, and meta-review
- **Hypothesis Evolution**: Combines top-performing hypotheses to create improved versions
- **Literature Integration**: Automatically finds related arXiv papers for your research topic
- **Cost Control**: Automatically filters to cost-effective models in production deployment
- **Interactive Interface**: Easy-to-use Gradio interface with advanced settings

## 🎯 How to Use

1. **Enter Research Goal**: Describe what you want to research in the text area
2. **Adjust Settings** (optional): Expand "Advanced Settings" to customize:
   - LLM model selection
   - Number of hypotheses per cycle
   - Temperature settings for creativity vs. analysis
   - Ranking and evolution parameters
3. **Set Goal**: Click "Set Research Goal" to initialize the system
4. **Run Cycles**: Click "Run Cycle" to generate and evolve hypotheses iteratively

## 🧠 How It Works

The system uses a multi-agent approach:

1. **Generation Agent**: Creates new research hypotheses
2. **Reflection Agent**: Reviews and assesses hypotheses for novelty and feasibility
3. **Ranking Agent**: Uses an Elo rating system to rank hypotheses
4. **Evolution Agent**: Combines top hypotheses to create improved versions
5. **Proximity Agent**: Analyzes similarity between hypotheses
6. **Meta-Review Agent**: Provides an overall critique and suggests next steps

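The Ranking Agent's pairwise comparisons can be sketched with the standard Elo update rule below. This is an illustrative formula using the app's default K-factor of 32, not the actual `RankingAgent` code:

```python
# Illustrative Elo update after one pairwise hypothesis comparison.
# Assumes the standard logistic expected-score formula; the real
# RankingAgent implementation may differ in details.
def elo_update(winner: float, loser: float, k: float = 32.0) -> tuple:
    """Return updated (winner, loser) scores after the winner prevails."""
    expected_win = 1.0 / (1.0 + 10.0 ** ((loser - winner) / 400.0))
    delta = k * (1.0 - expected_win)  # larger when the win was an upset
    return winner + delta, loser - delta

# Two equally rated hypotheses: the winner gains k/2 = 16 points.
new_w, new_l = elo_update(1200.0, 1200.0)
```

A higher `k` makes each comparison move scores more aggressively, which is what the "Elo K-Factor (Ranking Sensitivity)" setting controls.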
## 📚 Literature Integration

- Automatically searches arXiv for papers related to your research goal
- Displays relevant papers with full metadata, abstracts, and links
- Helps contextualize generated hypotheses within existing research

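Under the hood, this feature relies on arXiv's public Atom API. The sketch below shows how such a request URL can be built; the repository's actual `ArxivSearchTool` lives in `app/tools/arxiv_search.py` and its internals may differ:

```python
from urllib.parse import urlencode

# Base endpoint of arXiv's public Atom API.
ARXIV_API = "http://export.arxiv.org/api/query"

def build_arxiv_query(query: str, max_results: int = 5) -> str:
    """Build a request URL for an arXiv keyword search sorted by relevance."""
    params = {
        "search_query": f"all:{query}",  # search across all fields
        "start": 0,
        "max_results": max_results,
        "sortBy": "relevance",
    }
    return f"{ARXIV_API}?{urlencode(params)}"

# Fetching this URL (e.g. with requests.get) returns an Atom XML feed
# whose entries carry the title, authors, abstract, and PDF link.
url = build_arxiv_query("solar panel efficiency")
```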
## 💡 Example Research Goals

- "Develop new methods for increasing the efficiency of solar panels"
- "Create novel approaches to treat Alzheimer's disease"
- "Design sustainable materials for construction"
- "Improve machine learning model interpretability"
- "Develop new quantum computing algorithms"

## ⚙️ Technical Details

- **Models**: Uses the OpenRouter API with cost-effective models in production
- **Environment Detection**: Automatically detects Hugging Face Spaces deployment
- **Cost Control**: Filters to budget-friendly models (Gemini Flash, GPT-3.5 Turbo, Claude Haiku, etc.)
- **Iterative Process**: Each cycle builds on previous results for continuous improvement

## 🔧 Configuration

The system configures itself automatically based on the deployment environment:

- **Production (HF Spaces)**: Limited to cost-effective models for budget control
- **Development**: Full access to all available models

## 📄 Research Paper

Based on the AI Co-Scientist research: https://storage.googleapis.com/coscientist_paper/ai_coscientist.pdf

## 🤝 Contributing

This is an open-source project. Feel free to contribute improvements, bug fixes, or new features.

## ⚠️ Note

This system requires an OpenRouter API key to function. The public demo uses a limited budget, so please use it responsibly. For extensive research, consider running your own instance with your own API key.
app.py
ADDED
@@ -0,0 +1,414 @@
```python
import gradio as gr
import os
import json
import time
from typing import List, Dict, Optional, Tuple
import logging

# Import the existing app components
from app.models import ResearchGoal, ContextMemory
from app.agents import SupervisorAgent
from app.utils import logger, is_huggingface_space, get_deployment_environment
from app.tools.arxiv_search import ArxivSearchTool
import requests

# Global state for the Gradio app
global_context = ContextMemory()
supervisor = SupervisorAgent()
current_research_goal: Optional[ResearchGoal] = None
available_models: List[str] = []

# Configure logging for Gradio
logging.basicConfig(level=logging.INFO)


def fetch_available_models():
    """Fetch available models from OpenRouter with environment-based filtering."""
    global available_models

    # Detect deployment environment
    deployment_env = get_deployment_environment()
    is_hf_spaces = is_huggingface_space()

    logger.info(f"Detected deployment environment: {deployment_env}")
    logger.info(f"Is Hugging Face Spaces: {is_hf_spaces}")

    # Define cost-effective models for production deployment
    ALLOWED_MODELS_PRODUCTION = [
        "google/gemini-2.0-flash-001",
        "google/gemini-flash-1.5",
        "openai/gpt-3.5-turbo",
        "anthropic/claude-3-haiku",
        "meta-llama/llama-3.1-8b-instruct",
        "mistralai/mistral-7b-instruct",
        "microsoft/phi-3-mini-4k-instruct"
    ]

    try:
        response = requests.get("https://openrouter.ai/api/v1/models", timeout=10)
        response.raise_for_status()
        models_data = response.json().get("data", [])

        # Extract all model IDs
        all_models = sorted([model.get("id") for model in models_data if model.get("id")])

        # Apply filtering based on environment
        if is_hf_spaces:
            # Filter to only cost-effective models in HF Spaces
            available_models = [model for model in all_models if model in ALLOWED_MODELS_PRODUCTION]
            logger.info(f"Hugging Face Spaces: Filtered to {len(available_models)} cost-effective models")
        else:
            # Use all models in local/development environment
            available_models = all_models
            logger.info(f"Local/Development: Using all {len(available_models)} models")

    except Exception as e:
        logger.error(f"Failed to fetch models from OpenRouter: {e}")
        # Fall back to safe defaults
        available_models = ALLOWED_MODELS_PRODUCTION if is_hf_spaces else ["google/gemini-2.0-flash-001"]

    return available_models


def get_deployment_status():
    """Get deployment status information."""
    deployment_env = get_deployment_environment()
    is_hf_spaces = is_huggingface_space()

    if is_hf_spaces:
        status = f"🚀 Running in {deployment_env} | Models filtered for cost control ({len(available_models)} available)"
        color = "orange"
    else:
        status = f"💻 Running in {deployment_env} | All models available ({len(available_models)} total)"
        color = "blue"

    return status, color


def set_research_goal(
    description: str,
    llm_model: str = None,
    num_hypotheses: int = 3,
    generation_temperature: float = 0.7,
    reflection_temperature: float = 0.5,
    elo_k_factor: int = 32,
    top_k_hypotheses: int = 2
) -> Tuple[str, str]:
    """Set the research goal and initialize the system."""
    global current_research_goal, global_context

    if not description.strip():
        return "❌ Error: Please enter a research goal.", ""

    try:
        # Create the research goal with the chosen settings
        current_research_goal = ResearchGoal(
            description=description.strip(),
            constraints={},
            llm_model=llm_model if llm_model and llm_model != "-- Select Model --" else None,
            num_hypotheses=num_hypotheses,
            generation_temperature=generation_temperature,
            reflection_temperature=reflection_temperature,
            elo_k_factor=elo_k_factor,
            top_k_hypotheses=top_k_hypotheses
        )

        # Reset context
        global_context = ContextMemory()

        logger.info(f"Research goal set: {description}")
        logger.info(f"Settings: model={current_research_goal.llm_model}, num={current_research_goal.num_hypotheses}")

        status_msg = (
            f"✅ Research goal set successfully!\n\n"
            f"**Goal:** {description}\n"
            f"**Model:** {current_research_goal.llm_model or 'Default'}\n"
            f"**Hypotheses per cycle:** {num_hypotheses}"
        )

        return status_msg, "Ready to run the first cycle. Click 'Run Cycle' to begin."

    except Exception as e:
        error_msg = f"❌ Error setting research goal: {str(e)}"
        logger.error(error_msg)
        return error_msg, ""


def run_cycle() -> Tuple[str, str, str]:
    """Run a single research cycle."""
    global current_research_goal, global_context, supervisor

    if not current_research_goal:
        return "❌ Error: No research goal set. Please set a research goal first.", "", ""

    try:
        iteration = global_context.iteration_number + 1
        logger.info(f"Running cycle {iteration}")

        # Run the cycle
        cycle_details = supervisor.run_cycle(current_research_goal, global_context)

        # Format results for display
        results_html = format_cycle_results(cycle_details)

        # Get references
        references_html = get_references_html(cycle_details)

        # Status message
        status_msg = f"✅ Cycle {iteration} completed successfully!"

        return status_msg, results_html, references_html

    except Exception as e:
        error_msg = f"❌ Error during cycle execution: {str(e)}"
        logger.error(error_msg, exc_info=True)
        return error_msg, "", ""


def format_cycle_results(cycle_details: Dict) -> str:
    """Format cycle results as HTML."""
    html = f"<h2>🔬 Iteration {cycle_details.get('iteration', 'Unknown')}</h2>"

    # Meta-review
    if cycle_details.get('meta_review'):
        meta_review = cycle_details['meta_review']
        html += "<h3>📝 Meta-Review</h3>"

        if meta_review.get('meta_review_critique'):
            html += "<h4>Critique:</h4><ul>"
            for critique in meta_review['meta_review_critique']:
                html += f"<li>{critique}</li>"
            html += "</ul>"

        if meta_review.get('research_overview', {}).get('suggested_next_steps'):
            html += "<h4>Suggested Next Steps:</h4><ul>"
            for step in meta_review['research_overview']['suggested_next_steps']:
                html += f"<li>{step}</li>"
            html += "</ul>"

    # Hypotheses from different steps
    all_hypotheses = []
    for step_name, step_data in cycle_details.get('steps', {}).items():
        if step_data.get('hypotheses'):
            all_hypotheses.extend(step_data['hypotheses'])

    if all_hypotheses:
        # Sort by Elo score
        all_hypotheses.sort(key=lambda h: h.get('elo_score', 0), reverse=True)

        html += "<h3>🧠 Top Hypotheses</h3>"
        for i, hypo in enumerate(all_hypotheses[:10]):  # Show top 10
            html += f"""
            <div style="border: 1px solid #ddd; padding: 15px; margin: 10px 0; border-radius: 8px; background-color: #f9f9f9;">
                <h4>#{i+1}: {hypo.get('title', 'Untitled')}</h4>
                <p><strong>ID:</strong> {hypo.get('id', 'Unknown')} |
                   <strong>Elo Score:</strong> {hypo.get('elo_score', 0):.2f}</p>
                <p><strong>Description:</strong> {hypo.get('text', 'No description')}</p>
                <p><strong>Novelty:</strong> {hypo.get('novelty_review', 'Not assessed')} |
                   <strong>Feasibility:</strong> {hypo.get('feasibility_review', 'Not assessed')}</p>
            </div>
            """

    return html


def get_references_html(cycle_details: Dict) -> str:
    """Get references HTML for the cycle."""
    try:
        # Search for arXiv papers related to the research goal
        if current_research_goal and current_research_goal.description:
            arxiv_tool = ArxivSearchTool(max_results=5)
            papers = arxiv_tool.search_papers(
                query=current_research_goal.description,
                max_results=5,
                sort_by="relevance"
            )

            if papers:
                html = "<h3>📚 Related arXiv Papers</h3>"
                for paper in papers:
                    html += f"""
                    <div style="border: 1px solid #e0e0e0; padding: 15px; margin: 10px 0; border-radius: 8px; background-color: #fafafa;">
                        <h4>{paper.get('title', 'Untitled')}</h4>
                        <p><strong>Authors:</strong> {', '.join(paper.get('authors', [])[:5])}</p>
                        <p><strong>arXiv ID:</strong> {paper.get('arxiv_id', 'Unknown')} |
                           <strong>Published:</strong> {paper.get('published', 'Unknown')}</p>
                        <p><strong>Abstract:</strong> {paper.get('abstract', 'No abstract')[:300]}...</p>
                        <p>
                            <a href="{paper.get('arxiv_url', '#')}" target="_blank">🔗 View on arXiv</a> |
                            <a href="{paper.get('pdf_url', '#')}" target="_blank">📄 Download PDF</a>
                        </p>
                    </div>
                    """
                return html
            else:
                return "<p>No related arXiv papers found.</p>"
        else:
            return "<p>No research goal set for reference search.</p>"

    except Exception as e:
        logger.error(f"Error fetching references: {e}")
        return f"<p>Error loading references: {str(e)}</p>"


def create_gradio_interface():
    """Create the Gradio interface."""

    # Fetch models on startup
    fetch_available_models()

    # Get deployment status
    status_text, status_color = get_deployment_status()

    with gr.Blocks(
        title="AI Co-Scientist - Hypothesis Evolution System",
        theme=gr.themes.Soft(),
        css="""
        .status-box {
            padding: 10px;
            border-radius: 8px;
            margin-bottom: 20px;
            font-weight: bold;
        }
        .orange { background-color: #fff3cd; border: 1px solid #ffeaa7; }
        .blue { background-color: #d1ecf1; border: 1px solid #bee5eb; }
        """
    ) as demo:

        # Header
        gr.Markdown("# 🔬 AI Co-Scientist - Hypothesis Evolution System")
        gr.Markdown("Generate, review, rank, and evolve research hypotheses using AI agents.")

        # Deployment status
        gr.HTML(f'<div class="status-box {status_color}">🔧 Deployment Status: {status_text}</div>')

        # Main interface
        with gr.Row():
            with gr.Column(scale=2):
                # Research goal input
                research_goal_input = gr.Textbox(
                    label="Research Goal",
                    placeholder="Enter your research goal (e.g., 'Develop new methods for increasing the efficiency of solar panels')",
                    lines=3
                )

                # Advanced settings
                with gr.Accordion("⚙️ Advanced Settings", open=False):
                    model_dropdown = gr.Dropdown(
                        choices=["-- Select Model --"] + available_models,
                        value="-- Select Model --",
                        label="LLM Model",
                        info="Leave as default to use the system default model"
                    )

                    with gr.Row():
                        num_hypotheses = gr.Slider(
                            minimum=1, maximum=10, value=3, step=1,
                            label="Hypotheses per Cycle"
                        )
                        top_k_hypotheses = gr.Slider(
                            minimum=2, maximum=5, value=2, step=1,
                            label="Top K for Evolution"
                        )

                    with gr.Row():
                        generation_temp = gr.Slider(
                            minimum=0.1, maximum=1.0, value=0.7, step=0.1,
                            label="Generation Temperature (Creativity)"
                        )
                        reflection_temp = gr.Slider(
                            minimum=0.1, maximum=1.0, value=0.5, step=0.1,
                            label="Reflection Temperature (Analysis)"
                        )

                    elo_k_factor = gr.Slider(
                        minimum=1, maximum=100, value=32, step=1,
                        label="Elo K-Factor (Ranking Sensitivity)"
                    )

                # Action buttons
                with gr.Row():
                    set_goal_btn = gr.Button("🎯 Set Research Goal", variant="primary")
                    run_cycle_btn = gr.Button("🔄 Run Cycle", variant="secondary")

                # Status display
                status_output = gr.Textbox(
                    label="Status",
                    value="Enter a research goal and click 'Set Research Goal' to begin.",
                    interactive=False,
                    lines=3
                )

            with gr.Column(scale=1):
                # Instructions
                gr.Markdown("""
                ### 📋 Instructions

                1. **Enter Research Goal**: Describe what you want to research
                2. **Adjust Settings** (optional): Customize the model and parameters
                3. **Set Goal**: Click to initialize the system
                4. **Run Cycles**: Generate and evolve hypotheses iteratively

                ### 💡 Tips
                - Start with 3-5 hypotheses per cycle
                - Higher generation temperature = more creative ideas
                - Lower reflection temperature = more analytical reviews
                - Each cycle builds on previous results
                """)

        # Results section
        with gr.Row():
            with gr.Column():
                results_output = gr.HTML(
                    label="Results",
                    value="<p>Results will appear here after running cycles.</p>"
                )

        # References section
        with gr.Row():
            with gr.Column():
                references_output = gr.HTML(
                    label="References",
                    value="<p>Related research papers will appear here.</p>"
                )

        # Event handlers
        set_goal_btn.click(
            fn=set_research_goal,
            inputs=[
                research_goal_input,
                model_dropdown,
                num_hypotheses,
                generation_temp,
                reflection_temp,
                elo_k_factor,
                top_k_hypotheses
            ],
            outputs=[status_output, results_output]
        )

        run_cycle_btn.click(
            fn=run_cycle,
            inputs=[],
            outputs=[status_output, results_output, references_output]
        )

        # Example inputs
        gr.Examples(
            examples=[
                ["Develop new methods for increasing the efficiency of solar panels"],
                ["Create novel approaches to treat Alzheimer's disease"],
                ["Design sustainable materials for construction"],
                ["Improve machine learning model interpretability"],
                ["Develop new quantum computing algorithms"]
            ],
            inputs=[research_goal_input],
            label="Example Research Goals"
        )

    return demo


if __name__ == "__main__":
    # Check for API key
    if not os.getenv("OPENROUTER_API_KEY"):
        print("⚠️ Warning: OPENROUTER_API_KEY environment variable is not set.")
        print("The app will start but may not function properly without an API key.")

    # Create and launch the Gradio app
    demo = create_gradio_interface()

    # Launch with settings appropriate for HF Spaces
    demo.launch(
        server_name="0.0.0.0",
        server_port=7860,
        share=False,
        show_error=True
    )
```
docs/huggingface_deployment.md
ADDED
@@ -0,0 +1,191 @@
# Hugging Face Spaces Deployment Guide

This guide explains how to deploy the AI Co-Scientist system as a Gradio app on Hugging Face Spaces.

## 📋 Prerequisites

1. **Hugging Face Account**: Create an account at [huggingface.co](https://huggingface.co)
2. **OpenRouter API Key**: Get an API key from [openrouter.ai](https://openrouter.ai) with sufficient balance ($5+ recommended)

## 🚀 Deployment Steps

### Step 1: Create a New Space

1. Go to [Hugging Face Spaces](https://huggingface.co/spaces)
2. Click "Create new Space"
3. Fill in the details:
   - **Space name**: `ai-co-scientist` (or your preferred name)
   - **License**: MIT
   - **SDK**: Gradio
   - **Hardware**: CPU Basic (the free tier is sufficient)
   - **Visibility**: Public or Private (your choice)

### Step 2: Upload Files

Upload these files to your Space:

1. **README.md**: Copy content from `README_HF.md` in this repository
2. **app.py**: The main Gradio application file
3. **requirements.txt**: Python dependencies
4. **app/**: The entire app directory with all Python modules

**File structure in the HF Space:**
```
your-space/
├── README.md          # Copy from README_HF.md
├── app.py             # Main Gradio app
├── requirements.txt   # Dependencies
└── app/               # Application modules
    ├── __init__.py
    ├── agents.py
    ├── api.py
    ├── config.py
    ├── main.py
    ├── models.py
    ├── utils.py
    └── tools/
        ├── __init__.py
        └── arxiv_search.py
```

### Step 3: Configure Environment Variables

1. In your Space, go to **Settings** → **Variables and secrets**
2. Add a new secret:
   - **Name**: `OPENROUTER_API_KEY`
   - **Value**: Your OpenRouter API key
   - **Type**: Secret (not visible to others)

### Step 4: Deploy

1. Commit your changes in the Space
2. The Space will automatically build and deploy
3. Wait for the build to complete (usually 2-5 minutes)

## 🔧 Configuration Details

### Automatic Environment Detection

The app automatically detects when it is running in Hugging Face Spaces using these environment variables:

- `SPACE_ID`
- `SPACE_AUTHOR_NAME`
- `SPACE_REPO_NAME`

### Cost Control Features

When deployed to HF Spaces, the app automatically:

- Filters to cost-effective models only (7 models instead of all available)
- Shows a deployment status banner
- Limits access to expensive models to protect your API budget

**Allowed Models in Production:**
- `google/gemini-2.0-flash-001`
- `google/gemini-flash-1.5`
- `openai/gpt-3.5-turbo`
- `anthropic/claude-3-haiku`
- `meta-llama/llama-3.1-8b-instruct`
- `mistralai/mistral-7b-instruct`
- `microsoft/phi-3-mini-4k-instruct`

## 🧪 Testing Before Deployment

Run the test suite locally to verify everything works:

```bash
# From the project root
python tests/test_gradio.py
```

Or test the Gradio app locally:

```bash
# Set your API key
export OPENROUTER_API_KEY=your_key_here

# Run the app
python app.py
```

## 📊 Usage Monitoring

### Cost Monitoring
- Each research cycle typically costs $0.10-$0.50
- Monitor your OpenRouter usage at [openrouter.ai/activity](https://openrouter.ai/activity)
- Set up billing alerts in the OpenRouter dashboard

### Space Analytics
- View usage statistics in your HF Space settings
- Monitor app performance and user interactions

## 🔒 Security Considerations

### API Key Protection
- ✅ **DO**: Store the API key as a secret in HF Spaces
- ❌ **DON'T**: Include the API key in code or the README
- ❌ **DON'T**: Share your API key publicly

### Rate Limiting
- The app includes automatic model filtering for cost control
- Consider implementing additional rate limiting for high-traffic scenarios
- Monitor usage patterns and adjust as needed

## 🔍 Troubleshooting

### Common Issues

**1. "Module not found" errors**
- Ensure all files in the `app/` directory are uploaded
- Check that `__init__.py` files are present

**2. "API key not found" errors**
- Verify `OPENROUTER_API_KEY` is set as a secret in the Space settings
- Check that the secret name matches exactly

**3. "Insufficient funds" errors**
- Add balance to your OpenRouter account
- Verify your API key has access to the models being used

**4. App won't start**
- Check the Space logs for detailed error messages
- Ensure `requirements.txt` includes all dependencies
- Verify the Python syntax in uploaded files

### Debugging Steps

1. **Check Space logs**: View build and runtime logs in the Space interface
2. **Test locally**: Run `python tests/test_gradio.py` to verify the setup
3. **Verify files**: Ensure all required files are uploaded correctly
4. **Check secrets**: Confirm the API key is properly configured

## 🔄 Updates and Maintenance

### Updating the App
1. Make changes to your local files
2. Upload the updated files to the Space
3. The Space will automatically rebuild

### Model Updates
- The app automatically fetches available models from OpenRouter
- New cost-effective models can be added to the `ALLOWED_MODELS_PRODUCTION` list in `app.py`

### Monitoring
- Regularly check OpenRouter usage and costs
- Monitor Space performance and user feedback
- Update dependencies as needed

## 📞 Support

If you encounter issues:

1. **Check the logs** in your HF Space for error details
2. **Test locally** using the test script
3. **Review this guide** for common solutions
4. **Check OpenRouter status** on their website
5. **File an issue** in the original repository if needed

## 🎉 Success!

Once deployed, your AI Co-Scientist will be available at:
`https://huggingface.co/spaces/YOUR_USERNAME/YOUR_SPACE_NAME`

Users can now generate and evolve research hypotheses using your deployed system!
requirements.txt
CHANGED
@@ -1,6 +1,5 @@
```diff
 openai
-fastapi
-uvicorn
+gradio
 pydantic
 PyYAML
 requests # Added for fetching models
```
tests/test_gradio.py
ADDED
@@ -0,0 +1,120 @@

#!/usr/bin/env python3
"""
Test script for the Gradio AI Co-Scientist app.
Run this to test the app locally before deploying to Hugging Face Spaces.
"""

import os
import sys

def test_imports():
    """Test that all required imports work."""
    print("Testing imports...")

    try:
        import gradio as gr
        print("✅ Gradio imported successfully")
    except ImportError as e:
        print(f"❌ Failed to import Gradio: {e}")
        return False

    try:
        from app.models import ResearchGoal, ContextMemory
        from app.agents import SupervisorAgent
        from app.utils import logger, is_huggingface_space, get_deployment_environment
        from app.tools.arxiv_search import ArxivSearchTool
        print("✅ App components imported successfully")
    except ImportError as e:
        print(f"❌ Failed to import app components: {e}")
        return False

    return True

def test_environment_detection():
    """Test environment detection functions."""
    print("\nTesting environment detection...")

    try:
        from app.utils import is_huggingface_space, get_deployment_environment

        is_hf = is_huggingface_space()
        env = get_deployment_environment()

        print(f"✅ Is Hugging Face Spaces: {is_hf}")
        print(f"✅ Deployment environment: {env}")

        return True
    except Exception as e:
        print(f"❌ Environment detection failed: {e}")
        return False

def test_gradio_app():
    """Test that the Gradio app can be created."""
    print("\nTesting Gradio app creation...")

    try:
        # Add parent directory to path for imports
        parent_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
        sys.path.insert(0, parent_dir)

        # Import the app creation function from the root app.py file
        import importlib.util
        app_path = os.path.join(parent_dir, 'app.py')
        spec = importlib.util.spec_from_file_location("gradio_app", app_path)
        gradio_app = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(gradio_app)

        # Create the interface (but don't launch)
        demo = gradio_app.create_gradio_interface()
        print("✅ Gradio interface created successfully")

        return True
    except Exception as e:
        print(f"❌ Failed to create Gradio interface: {e}")
        return False

def main():
    """Run all tests."""
    print("🔬 AI Co-Scientist Gradio App Test Suite")
    print("=" * 50)

    # Check API key
    api_key = os.getenv("OPENROUTER_API_KEY")
    if api_key:
        print(f"✅ OPENROUTER_API_KEY is set (length: {len(api_key)})")
    else:
        print("⚠️ OPENROUTER_API_KEY is not set - app will show warnings")

    # Run tests
    tests = [
        test_imports,
        test_environment_detection,
        test_gradio_app
    ]

    passed = 0
    for test in tests:
        if test():
            passed += 1
        print()

    print("=" * 50)
    print(f"Test Results: {passed}/{len(tests)} tests passed")

    if passed == len(tests):
        print("🎉 All tests passed! The app should work correctly.")
        print("\nTo run the app locally:")
        print("  python app.py")
        print("\nTo deploy to Hugging Face Spaces:")
        print("  1. Copy README_HF.md to README.md in your HF Space")
        print("  2. Upload app.py and requirements.txt")
        print("  3. Set OPENROUTER_API_KEY in Space secrets")
    else:
        print("❌ Some tests failed. Please fix the issues before deploying.")
        return 1

    return 0

if __name__ == "__main__":
    sys.exit(main())
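
The `is_huggingface_space()` / `get_deployment_environment()` helpers exercised by the test suite live in `app.utils`; their implementation is not shown in this commit, but detection of this kind typically keys off environment variables that Spaces sets inside its containers. A minimal sketch under that assumption (the `SPACE_ID`/`SYSTEM` convention, not necessarily the project's actual logic):

```python
import os

def is_huggingface_space() -> bool:
    # Assumption: HF Spaces containers expose SPACE_ID (and SYSTEM=spaces);
    # local development machines normally do not.
    return bool(os.getenv("SPACE_ID")) or os.getenv("SYSTEM") == "spaces"

def get_deployment_environment() -> str:
    # Label the runtime so the app can enable production cost controls.
    return "huggingface_spaces" if is_huggingface_space() else "local"

print(get_deployment_environment())
```

Branching on this label is what lets the same `app.py` run unrestricted locally while applying model filtering and cost limits when deployed.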