Upload 21 files
- .gitattributes +1 -0
- README.md +179 -10
- README_PYTHON.md +70 -0
- audio_services.py +24 -0
- bun.lockb +3 -0
- config.py +11 -0
- enhancement_services.py +60 -0
- export_services.py +48 -0
- image_services.py +23 -0
- index.html +17 -18
- knowledge_assistant.py +194 -0
- langchain_tools.py +217 -0
- main.py +594 -0
- models.py +282 -0
- rag_services.py +224 -0
- requirements.txt +5 -0
- run.py +27 -0
- ui_components.py +396 -0
- vite.config.js +13 -0
.gitattributes
ADDED
@@ -0,0 +1 @@
+bun.lockb filter=lfs diff=lfs merge=lfs -text
README.md
CHANGED
@@ -1,11 +1,180 @@
 ---
-
-
-colorFrom: purple
-colorTo: red
-sdk: gradio
-sdk_version: 5.33.0
-app_file: app.py
-pinned: false
-license: apache-2.0
----

# ScriptVoice - AI-Powered TTS Script Editor

A powerful Gradio-based application for creating, editing, and managing text-to-speech scripts with AI enhancement capabilities.

## 🎯 Features

- **📝 Script Management**: Create, edit, and organize multiple scripts
- **🔊 Text-to-Speech**: Generate high-quality audio from your scripts using gTTS
- **📝 Notes System**: Add and manage notes for each script
- **📷 OCR Integration**: Extract text from images using Tesseract OCR
- **🤖 AI Enhancement**: Enhance scripts with different tones and styles (framework ready)
- **📤 Export Options**: Export scripts as text files or audio files
- **⚙️ Customizable Settings**: Adjust voice speed, volume, and accessibility options
- **📊 Real-time Word Count**: Track script length as you type

## 🚀 Quick Start

### Local Development

1. **Clone the repository**
   ```bash
   git clone <your-repo-url>
   cd scriptvoice-gradio
   ```

2. **Install dependencies**
   ```bash
   pip install -r requirements.txt
   ```

3. **Install Tesseract OCR** (for image text extraction)
   - **Ubuntu/Debian**: `sudo apt-get install tesseract-ocr`
   - **macOS**: `brew install tesseract`
   - **Windows**: Download from [GitHub releases](https://github.com/UB-Mannheim/tesseract/wiki)

4. **Run the application**
   ```bash
   python main.py
   ```

5. **Open your browser** and navigate to `http://localhost:7860`

### HuggingFace Spaces Deployment

This app is designed to run on HuggingFace Spaces. Simply:

1. Create a new Space on [HuggingFace](https://huggingface.co/spaces)
2. Upload `main.py`, the supporting Python modules, `requirements.txt`, and this `README.md`
3. Set the Space SDK to "Gradio"
4. Your app will be automatically deployed!

## 🎮 How to Use

### Creating Your First Script

1. **Create a New Project**: Enter a name in the "New Project Name" field and click "➕ Create Project"
2. **Write Your Script**: Use the main editor to write your content
3. **Add Notes**: Use the notes section for reminders, directions, or script metadata
4. **Generate Audio**: Click "🔊 Play TTS" to hear your script read aloud
5. **Save Your Work**: Click "💾 Save" to persist your changes

### Advanced Features

- **OCR Text Extraction**: Upload an image containing text, and the app will extract it for you
- **Voice Customization**: Adjust speed and volume in the settings panel
- **Export Options**: Download your scripts as text files or audio files
- **AI Enhancement**: (Framework ready) Enhance your scripts with different tones

### Keyboard Shortcuts

- **Ctrl+S** (planned): Quick save
- **Ctrl+P** (planned): Play TTS
- **Ctrl+N** (planned): New project

## 🛠️ Technical Architecture

### Core Components

- **Gradio Blocks**: Modern, responsive UI framework
- **gTTS**: Google Text-to-Speech for audio generation
- **Tesseract OCR**: Image text extraction
- **JSON Storage**: Simple, portable project persistence
- **Python 3.10+**: Modern Python features and type hints

### File Structure

```
scriptvoice-gradio/
├── main.py              # Main Gradio application
├── run.py               # Alternative entry point
├── models.py            # Project data and JSON persistence
├── ui_components.py     # Custom CSS and layout helpers
├── *_services.py        # Audio, image, enhancement, export, and RAG services
├── requirements.txt     # Python dependencies
├── README.md            # This file
├── projects.json        # Auto-generated project storage
└── temp/                # Auto-generated temp files
```

### Data Persistence

Projects are stored in `projects.json` with the following structure:

```json
{
  "projects": {
    "1": {
      "id": "1",
      "name": "Project Name",
      "content": "Script content...",
      "notes": "Project notes...",
      "created_at": "2024-01-01T00:00:00",
      "word_count": 42
    }
  },
  "settings": {
    "dyslexic_mode": false,
    "voice_speed": 1.0,
    "voice_volume": 1.0
  }
}
```
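
For orientation, the snippet below shows one way to read and update this file directly with the standard library. The app itself goes through `models.py`, so treat this as an illustration rather than the project's own API.

```python
import json

# Load, tweak, and persist projects.json (illustrative only; key names follow
# the structure documented above).
with open("projects.json", "r", encoding="utf-8") as f:
    data = json.load(f)

project = data["projects"]["1"]
project["content"] = "Updated script content..."
project["word_count"] = len(project["content"].split())

with open("projects.json", "w", encoding="utf-8") as f:
    json.dump(data, f, indent=2)
```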

## 🔮 Future Enhancements

### Planned Features

- **🤖 Full AI Integration**: Connect to OpenAI GPT-4 or HuggingFace models for script enhancement
- **🎭 Voice Cloning**: Integrate with voice cloning services
- **📊 Analytics**: Script performance metrics and reading time estimates
- **🌐 Multi-language Support**: Support for multiple TTS languages
- **☁️ Cloud Storage**: Integration with Google Drive, Dropbox, or AWS S3
- **👥 Collaboration**: Multi-user editing and sharing capabilities

### AI Enhancement Framework

The app includes a ready-to-extend framework for AI script enhancement:

```python
def enhance_script_with_ai(text, enhancement_type, api_key):
    """
    Future implementation for AI script enhancement
    - Connect to OpenAI API or HuggingFace Hub
    - Apply different enhancement styles
    - Return enhanced script content
    """
    pass
```
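
As a rough illustration of how that stub could be filled in, here is a minimal sketch using the `openai` package; the model name, prompt wording, and error handling are assumptions, not part of the current codebase.

```python
from openai import OpenAI

def enhance_script_with_ai(text: str, enhancement_type: str, api_key: str) -> str:
    """Sketch: rewrite `text` in the requested style via the OpenAI chat API."""
    client = OpenAI(api_key=api_key)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[
            {"role": "system", "content": f"Rewrite the user's script in a {enhancement_type} tone."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content
```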

## 🏆 Hackathon Demo

This project was built for **Track 3: Agentic Demo** with the following goals:

- ✅ **Rapid Prototyping**: From TypeScript to Python/Gradio in record time
- ✅ **AI-Ready Architecture**: Framework for intelligent script enhancement
- ✅ **Production Deployment**: Ready for HuggingFace Spaces hosting
- ✅ **User-Friendly Interface**: Notion-like editing experience for storytellers

### Demo Script

"Welcome to ScriptVoice - where your words come to life! This AI-powered editor lets you craft compelling scripts, generate professional voiceovers, and enhance your content with intelligent suggestions. Whether you're a content creator, educator, or storyteller, ScriptVoice transforms your text into engaging audio experiences."

## 🤝 Contributing

We welcome contributions! Areas where help is needed:

- **AI Model Integration**: Connect OpenAI or HuggingFace models
- **Voice Options**: Add more TTS providers and voice choices
- **UI/UX Improvements**: Enhance the user interface and experience
- **Performance Optimization**: Improve app speed and responsiveness
- **Testing**: Add comprehensive test coverage

## 📄 License

MIT License - see LICENSE file for details.

## 🏷️ Tags

`#gradio` `#text-to-speech` `#ai` `#python` `#huggingface` `#agent-demo-track` `#tts` `#script-editor` `#voice-generation`

---

**Built with ❤️ using Gradio and deployed on HuggingFace Spaces**
README_PYTHON.md
ADDED
@@ -0,0 +1,70 @@

# ScriptVoice - Pure Python Gradio Application

## 🎯 Quick Start

1. **Install Dependencies**
   ```bash
   pip install -r requirements.txt
   ```

2. **Run the Application**
   ```bash
   python main.py
   ```
   or
   ```bash
   python run.py
   ```

3. **Access the Application**
   Open your browser to: http://localhost:7860

## 📦 Dependencies

This application uses the following Python packages:
- `gradio>=4.0.0` - Web interface framework
- `gtts>=2.3.0` - Text-to-speech
- `pytesseract>=0.3.10` - OCR text extraction
- `Pillow>=10.0.0` - Image processing
- `langchain>=0.1.0` - LLM framework
- `sentence-transformers>=2.2.0` - Text embeddings
- `faiss-cpu>=1.7.0` - Vector database
- `langchain-openai>=0.1.0` - OpenAI integration
- `tiktoken>=0.5.0` - Text tokenization

## 🌟 Features

### Scripts Tab
- Project management and script editing
- Text-to-speech generation
- OCR text extraction from images
- AI script enhancement
- Export functionality

### Story Intelligence Tab
- Knowledge Assistant with command processing
- Context-aware AI tools
- Character and story management
- World building elements
- RAG-powered search and analysis (see the sketch below)
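
The RAG pieces are built on `sentence-transformers` and `faiss-cpu`. A minimal sketch of the embed-index-search loop those packages provide is shown below; the project's own `rag_services.py` wraps this differently, so the names and data here are illustrative.

```python
import faiss
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
docs = [
    "Aria is a dragon rider from the northern cliffs.",  # example data
    "The Ember Council governs the city of Pyrrh.",
]
vectors = model.encode(docs, convert_to_numpy=True)

index = faiss.IndexFlatL2(vectors.shape[1])  # exact L2 search over the embeddings
index.add(vectors)

query = model.encode(["Who rides dragons?"], convert_to_numpy=True)
distances, ids = index.search(query, 2)
print([docs[i] for i in ids[0]])
```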

## 🔧 Configuration

- All data is stored in local JSON files
- No external database required
- Gradio handles the web interface automatically

## 🚀 Deployment

The application is ready for deployment on platforms that support Python:
- Hugging Face Spaces
- Railway
- Render
- Heroku
- Any Python hosting service

For production deployment, make sure to:
1. Set appropriate server configurations (a minimal launch sketch follows below)
2. Configure environment variables if needed
3. Ensure all dependencies are installed
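
For point 1 of the checklist above, a production-style launch looks roughly like this; it assumes `create_interface()` in `main.py` returns the `gr.Blocks` app, which may differ from the repo's actual `run.py`.

```python
# Hypothetical launcher (sketch only; see run.py for the real entry point).
from main import create_interface

app = create_interface()
app.launch(
    server_name="0.0.0.0",  # listen on all interfaces for hosted deployments
    server_port=7860,
    show_error=True,
)
```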
audio_services.py
ADDED
@@ -0,0 +1,24 @@

"""Audio and TTS services for ScriptVoice."""

import tempfile
from gtts import gTTS
from typing import Tuple, Optional


def generate_tts(text: str, speed: float = 1.0) -> Tuple[Optional[str], str]:
    """Generate TTS audio from text."""
    if not text.strip():
        return None, '<div class="status-error">❌ Please enter some text to convert to speech</div>'

    try:
        # Build the gTTS object; gTTS's slow flag approximates speeds below 1.0
        tts = gTTS(text=text, lang='en', slow=(speed < 1.0))

        # Write the audio to a temporary .mp3 and return its path
        with tempfile.NamedTemporaryFile(delete=False, suffix='.mp3') as tmp_file:
            tts.save(tmp_file.name)
            return tmp_file.name, '<div class="status-success">✅ Audio generated successfully</div>'

    except Exception as e:
        return None, f'<div class="status-error">❌ Error generating audio: {str(e)}</div>'
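A quick, illustrative way to exercise this module outside the Gradio UI (the returned path points at a temporary file created by `generate_tts`):

```python
from audio_services import generate_tts

# Returns (path_to_mp3_or_None, html_status_string).
audio_path, status_html = generate_tts("Welcome to ScriptVoice!", speed=1.0)
print(audio_path)
print(status_html)
```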
bun.lockb
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a3c575fd4a99dc9d1d74a4a7a31b979d577c15f379bc5cb0dd7e823586e98c23
size 198351
config.py
ADDED
@@ -0,0 +1,11 @@

"""Configuration constants and settings for ScriptVoice."""

PROJECTS_FILE = "projects.json"

# ScriptVoice color scheme
SCRIPT_RED = "#E63946"
SCRIPT_GOLD = "#FFD700"
SCRIPT_BLACK = "#000000"
SCRIPT_DARK_GRAY = "#1a1a1a"
SCRIPT_BORDER = "rgba(230, 57, 70, 0.2)"
enhancement_services.py
ADDED
@@ -0,0 +1,60 @@

"""AI enhancement services for ScriptVoice with RAG integration."""

from typing import Tuple
from langchain_tools import context_enhancer
from rag_services import rag_service


def enhance_script_placeholder(text: str, enhancement_type: str) -> Tuple[str, str]:
    """Enhanced script enhancement with context awareness."""
    if not text.strip():
        return text, '<div class="status-error">❌ Please provide text to enhance</div>'

    # Use context-aware enhancement
    enhanced_text, status = context_enhancer.enhance_script_with_context(text, enhancement_type)

    return enhanced_text, status


def enhance_script_with_context(text: str, enhancement_type: str) -> Tuple[str, str]:
    """Context-aware script enhancement using RAG."""
    return context_enhancer.enhance_script_with_context(text, enhancement_type)


def analyze_character_consistency(text: str) -> Tuple[str, str]:
    """Analyze character consistency in the provided text."""
    if not text.strip():
        return "", '<div class="status-error">❌ Please provide text to analyze</div>'

    analysis = context_enhancer.analyze_character_consistency(text)

    return analysis, '<div class="status-success">✅ Character consistency analysis complete</div>'


def suggest_story_elements(text: str) -> Tuple[str, str]:
    """Suggest relevant story elements for the text."""
    if not text.strip():
        return "", '<div class="status-error">❌ Please provide text for suggestions</div>'

    suggestions = context_enhancer.suggest_story_elements(text)

    return suggestions, '<div class="status-success">✅ Story element suggestions generated</div>'


def update_knowledge_base(content_type: str, content_id: str, title: str, content: str) -> str:
    """Update the knowledge base with new or modified content."""
    try:
        rag_service.add_content(content, content_type, content_id, title)
        return '<div class="status-success">✅ Knowledge base updated</div>'
    except Exception as e:
        return f'<div class="status-error">❌ Error updating knowledge base: {str(e)}</div>'


def remove_from_knowledge_base(content_id: str) -> str:
    """Remove content from the knowledge base."""
    try:
        rag_service.remove_content(content_id)
        return '<div class="status-success">✅ Content removed from knowledge base</div>'
    except Exception as e:
        return f'<div class="status-error">❌ Error removing from knowledge base: {str(e)}</div>'
export_services.py
ADDED
@@ -0,0 +1,48 @@

"""Export services for ScriptVoice."""

import tempfile
from gtts import gTTS
from models import load_projects
from typing import Tuple, Optional


def export_project(project_id: str, export_type: str) -> Tuple[Optional[str], str]:
    """Export project content."""
    if not project_id:
        return None, '<div class="status-error">❌ No project selected</div>'

    data = load_projects()
    if project_id not in data["projects"]:
        return None, '<div class="status-error">❌ Project not found</div>'

    project = data["projects"][project_id]

    if export_type == "text":
        # Create text file
        content = f"Project: {project['name']}\n"
        content += f"Created: {project['created_at']}\n"
        content += f"Word Count: {project['word_count']}\n\n"
        content += "SCRIPT:\n" + "="*50 + "\n"
        content += project['content'] + "\n\n"
        content += "NOTES:\n" + "="*50 + "\n"
        content += project['notes']

        with tempfile.NamedTemporaryFile(mode='w', delete=False, suffix='.txt', encoding='utf-8') as tmp_file:
            tmp_file.write(content)
            return tmp_file.name, '<div class="status-success">✅ Text file exported</div>'

    elif export_type == "audio":
        # Generate TTS audio
        if not project['content'].strip():
            return None, '<div class="status-error">❌ No content to convert to audio</div>'

        try:
            tts = gTTS(text=project['content'], lang='en')
            with tempfile.NamedTemporaryFile(delete=False, suffix='.mp3') as tmp_file:
                tts.save(tmp_file.name)
                return tmp_file.name, '<div class="status-success">✅ Audio file exported</div>'
        except Exception as e:
            return None, f'<div class="status-error">❌ Error generating audio: {str(e)}</div>'

    return None, '<div class="status-error">❌ Invalid export type</div>'
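Illustrative call from a Python shell (project id "1" is only an example; real ids come from `projects.json`):

```python
from export_services import export_project

# "text" writes a .txt summary; "audio" renders the script to .mp3 via gTTS.
file_path, status_html = export_project("1", "text")
print(file_path)
```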
image_services.py
ADDED
@@ -0,0 +1,23 @@

"""Image processing and OCR services for ScriptVoice."""

import pytesseract
from PIL import Image
from typing import Tuple


def extract_text_from_image(image) -> Tuple[str, str]:
    """Extract text from uploaded image using OCR."""
    if image is None:
        return "", '<div class="status-error">❌ Please upload an image</div>'

    try:
        # Use pytesseract to extract text
        text = pytesseract.image_to_string(Image.open(image))
        if text.strip():
            return text.strip(), '<div class="status-success">✅ Text extracted successfully</div>'
        else:
            return "", '<div class="status-error">❌ No text found in the image</div>'

    except Exception as e:
        return "", f'<div class="status-error">❌ Error extracting text: {str(e)}</div>'
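Illustrative standalone usage; on Windows you may need to point pytesseract at the Tesseract binary first (the path below is an example install location, not a guarantee):

```python
import pytesseract
from image_services import extract_text_from_image

# Only needed if tesseract is not on PATH (example path; adjust to your install):
# pytesseract.pytesseract.tesseract_cmd = r"C:\Program Files\Tesseract-OCR\tesseract.exe"

text, status_html = extract_text_from_image("script_photo.png")  # hypothetical image file
print(text)
```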
index.html
CHANGED
@@ -1,27 +1,26 @@
 
-<!
+<!doctype html>
 <html lang="en">
   <head>
     <meta charset="UTF-8" />
+    <link rel="icon" type="image/svg+xml" href="/favicon.ico" />
     <meta name="viewport" content="width=device-width, initial-scale=1.0" />
-    <title>
-    <meta name="description" content="Text-to-speech script editor for writers" />
-    <meta name="author" content="Lovable" />
-
-    <meta property="og:title" content="Script Voice - TTS Script Editor" />
-    <meta property="og:description" content="Text-to-speech script editor for writers" />
-    <meta property="og:type" content="website" />
-    <meta property="og:image" content="https://lovable.dev/opengraph-image-p98pqg.png" />
-
-    <meta name="twitter:card" content="summary_large_image" />
-    <meta name="twitter:site" content="@lovable_dev" />
-    <meta name="twitter:image" content="https://lovable.dev/opengraph-image-p98pqg.png" />
+    <title>ScriptVoice - Python Gradio App</title>
   </head>
-
   <body>
-    <div id="root"
-
-
-
+    <div id="root">
+      <div style="padding: 20px; font-family: Arial, sans-serif; text-align: center;">
+        <h1>ScriptVoice - AI-Powered Story Intelligence Platform</h1>
+        <p>This is a Python Gradio application. To run it:</p>
+        <ol style="text-align: left; max-width: 400px; margin: 0 auto;">
+          <li>Install dependencies: <code>pip install -r requirements.txt</code></li>
+          <li>Run the app: <code>python main.py</code></li>
+          <li>Access at: <a href="http://localhost:7860">http://localhost:7860</a></li>
+        </ol>
+        <p style="margin-top: 20px; color: #666;">
+          The actual application interface will be served by Gradio on port 7860.
+        </p>
+      </div>
+    </div>
   </body>
 </html>
knowledge_assistant.py
ADDED
@@ -0,0 +1,194 @@

"""Knowledge assistant for intelligent content queries."""

from typing import List, Dict, Any, Tuple
from rag_services import rag_service


class KnowledgeAssistant:
    """Provides intelligent assistance based on the knowledge base."""

    def __init__(self):
        self.commands = {
            "!search": self.search_knowledge,
            "!characters": self.list_characters,
            "!stories": self.list_stories,
            "!world": self.list_world_elements,
            "!analyze": self.analyze_content,
            "!suggest": self.suggest_related,
            "!consistency": self.check_consistency,
            "!rebuild": self.rebuild_index
        }

    def process_query(self, query: str) -> str:
        """Process user queries and provide intelligent responses."""
        query = query.strip()

        # Check for commands
        if query.startswith("!"):
            command_parts = query.split(" ", 1)
            command = command_parts[0]
            args = command_parts[1] if len(command_parts) > 1 else ""

            if command in self.commands:
                return self.commands[command](args)
            else:
                return f"Unknown command: {command}\nAvailable commands: {', '.join(self.commands.keys())}"

        # Regular search query
        return self.search_knowledge(query)

    def search_knowledge(self, query: str) -> str:
        """Search across all knowledge base content."""
        if not query.strip():
            return "Please provide a search query."

        results = rag_service.search(query, k=5)

        if not results:
            return f"No results found for: {query}"

        response = f"🔍 SEARCH RESULTS FOR: '{query}'\n\n"

        for i, result in enumerate(results, 1):
            metadata = result['metadata']
            content_type = metadata.get('content_type', 'content')
            title = metadata.get('title', 'Unknown')
            content = result['content'][:200] + "..." if len(result['content']) > 200 else result['content']
            score = result['score']

            response += f"{i}. {content_type.title()}: {title} (Relevance: {score:.2f})\n"
            response += f"   {content}\n\n"

        return response

    def list_characters(self, query: str = "") -> str:
        """List characters in the knowledge base."""
        results = rag_service.search(query if query else "character", k=10, content_type="character")

        if not results:
            return "No characters found in knowledge base."

        response = "👥 CHARACTERS IN KNOWLEDGE BASE:\n\n"
        for result in results:
            title = result['metadata'].get('title', 'Unknown')
            content = result['content'][:150] + "..." if len(result['content']) > 150 else result['content']
            response += f"• {title}\n  {content}\n\n"

        return response

    def list_stories(self, query: str = "") -> str:
        """List stories in the knowledge base."""
        results = rag_service.search(query if query else "story", k=10, content_type="story")

        if not results:
            return "No stories found in knowledge base."

        response = "📚 STORIES IN KNOWLEDGE BASE:\n\n"
        for result in results:
            title = result['metadata'].get('title', 'Unknown')
            content = result['content'][:150] + "..." if len(result['content']) > 150 else result['content']
            response += f"• {title}\n  {content}\n\n"

        return response

    def list_world_elements(self, query: str = "") -> str:
        """List world elements in the knowledge base."""
        results = rag_service.search(query if query else "world", k=10, content_type="world_element")

        if not results:
            return "No world elements found in knowledge base."

        response = "🌍 WORLD ELEMENTS IN KNOWLEDGE BASE:\n\n"
        for result in results:
            title = result['metadata'].get('title', 'Unknown')
            content = result['content'][:150] + "..." if len(result['content']) > 150 else result['content']
            response += f"• {title}\n  {content}\n\n"

        return response

    def analyze_content(self, content: str) -> str:
        """Analyze provided content against the knowledge base."""
        if not content.strip():
            return "Please provide content to analyze."

        # Find related content
        results = rag_service.search(content, k=5)

        response = "📊 CONTENT ANALYSIS:\n\n"

        if results:
            response += "Related content found:\n"
            for result in results:
                metadata = result['metadata']
                content_type = metadata.get('content_type', 'content')
                title = metadata.get('title', 'Unknown')
                score = result['score']
                response += f"• {content_type.title()}: {title} (Similarity: {score:.2f})\n"
        else:
            response += "No related content found in knowledge base."

        return response

    def suggest_related(self, content: str) -> str:
        """Suggest related content based on input."""
        if not content.strip():
            return "Please provide content for suggestions."

        # Get diverse suggestions
        char_results = rag_service.search(content, k=2, content_type="character")
        story_results = rag_service.search(content, k=2, content_type="story")
        world_results = rag_service.search(content, k=2, content_type="world_element")

        response = "💡 SUGGESTIONS BASED ON YOUR CONTENT:\n\n"

        if char_results:
            response += "Relevant Characters:\n"
            for result in char_results:
                title = result['metadata'].get('title', 'Unknown')
                response += f"• {title}\n"
            response += "\n"

        if story_results:
            response += "Related Stories:\n"
            for result in story_results:
                title = result['metadata'].get('title', 'Unknown')
                response += f"• {title}\n"
            response += "\n"

        if world_results:
            response += "Relevant World Elements:\n"
            for result in world_results:
                title = result['metadata'].get('title', 'Unknown')
                response += f"• {title}\n"
            response += "\n"

        if not any([char_results, story_results, world_results]):
            response += "No related content found in knowledge base."

        return response

    def check_consistency(self, content: str) -> str:
        """Check content consistency against knowledge base."""
        if not content.strip():
            return "Please provide content to check for consistency."

        from langchain_tools import context_enhancer
        analysis = context_enhancer.analyze_character_consistency(content)

        response = "✅ CONSISTENCY CHECK:\n\n"
        response += analysis

        return response

    def rebuild_index(self, args: str = "") -> str:
        """Rebuild the vector index from current data."""
        try:
            rag_service.rebuild_index_from_projects()
            return "✅ Knowledge base index rebuilt successfully!"
        except Exception as e:
            return f"❌ Error rebuilding index: {str(e)}"


# Global assistant instance
knowledge_assistant = KnowledgeAssistant()
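Illustrative queries against the global instance (results depend on whatever has been indexed into the knowledge base):

```python
from knowledge_assistant import knowledge_assistant

# Plain text runs a semantic search; queries starting with "!" dispatch to commands.
print(knowledge_assistant.process_query("!characters"))
print(knowledge_assistant.process_query("!search dragons"))
print(knowledge_assistant.process_query("Who governs the capital city?"))
```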
langchain_tools.py
ADDED
@@ -0,0 +1,217 @@

"""LangChain tools and chains for context-aware AI enhancement."""

from typing import List, Dict, Any, Optional
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain.schema import BaseOutputParser
from rag_services import rag_service


class ContextAwareEnhancer:
    """Handles context-aware script enhancement using RAG."""

    def __init__(self):
        self.enhancement_prompts = {
            "dramatic": """
You are enhancing a script with dramatic flair. Use the following context about characters and story elements to make the enhancement more consistent and engaging.

CONTEXT:
{context}

ORIGINAL SCRIPT:
{script}

Please enhance this script with dramatic elements while maintaining consistency with the established characters and world. Focus on:
- Heightened emotional stakes
- Compelling character motivations
- Dramatic tension and conflict
- Rich, evocative language

ENHANCED SCRIPT:
""",
            "romantic": """
You are enhancing a script with romantic elements. Use the following context about characters and relationships to create authentic romantic moments.

CONTEXT:
{context}

ORIGINAL SCRIPT:
{script}

Please enhance this script with romantic elements while staying true to the established characters and their relationships. Focus on:
- Emotional intimacy and connection
- Character chemistry and dynamics
- Tender, heartfelt dialogue
- Romantic atmosphere and mood

ENHANCED SCRIPT:
""",
            "professional": """
You are enhancing a script for professional presentation. Use the following context to ensure accuracy and consistency.

CONTEXT:
{context}

ORIGINAL SCRIPT:
{script}

Please enhance this script for a professional context while maintaining consistency with established facts and characters. Focus on:
- Clear, authoritative language
- Proper structure and flow
- Professional tone and delivery
- Accurate information and details

ENHANCED SCRIPT:
""",
            "casual": """
You are making a script more casual and conversational. Use the following context about characters to match their established personalities.

CONTEXT:
{context}

ORIGINAL SCRIPT:
{script}

Please enhance this script with a casual, conversational tone while keeping character voices consistent. Focus on:
- Natural, everyday language
- Relaxed, friendly tone
- Character-appropriate dialogue
- Conversational flow and rhythm

ENHANCED SCRIPT:
""",
            "character_consistent": """
You are enhancing a script to be more consistent with established characters. Use the character information below to guide your enhancement.

CHARACTER CONTEXT:
{context}

ORIGINAL SCRIPT:
{script}

Please enhance this script to be more consistent with the established characters. Ensure:
- Dialogue matches character personalities and speech patterns
- Actions align with character motivations
- Character relationships are honored
- Character development feels authentic

ENHANCED SCRIPT:
""",
            "plot_coherent": """
You are enhancing a script to improve plot coherence. Use the story context below to ensure consistency.

STORY CONTEXT:
{context}

ORIGINAL SCRIPT:
{script}

Please enhance this script to improve plot coherence and consistency. Focus on:
- Logical story progression
- Consistent world-building elements
- Proper setup and payoff
- Clear cause and effect relationships

ENHANCED SCRIPT:
"""
        }

    def get_relevant_context(self, script: str, enhancement_type: str, max_context_length: int = 1000) -> str:
        """Get relevant context for script enhancement."""
        if not script.strip():
            return "No relevant context found."

        # Search for relevant content
        results = rag_service.search(script, k=5)

        if not results:
            return "No relevant context found."

        # Build context string
        context_parts = []
        current_length = 0

        for result in results:
            metadata = result['metadata']
            content = result['content']

            # Create context entry
            context_entry = f"\n--- {metadata.get('content_type', 'Content').title()}: {metadata.get('title', 'Unknown')} ---\n{content}\n"

            if current_length + len(context_entry) > max_context_length:
                break

            context_parts.append(context_entry)
            current_length += len(context_entry)

        return "\n".join(context_parts) if context_parts else "No relevant context found."

    def enhance_script_with_context(self, script: str, enhancement_type: str) -> tuple[str, str]:
        """Enhance script using relevant context from the knowledge base."""
        if enhancement_type not in self.enhancement_prompts:
            return script, f'<div class="status-error">❌ Unknown enhancement type: {enhancement_type}</div>'

        # Get relevant context
        context = self.get_relevant_context(script, enhancement_type)

        # For now, return a placeholder with context info
        enhanced_script = f"""[CONTEXT-AWARE {enhancement_type.upper()} ENHANCEMENT]

RELEVANT CONTEXT FOUND:
{context[:300]}{'...' if len(context) > 300 else ''}

ENHANCED SCRIPT:
{script}

(Note: Full LLM integration will be added when API keys are configured)
"""

        status_message = f'<div class="status-success">✅ Enhanced with {enhancement_type} style using relevant context from knowledge base</div>'
        return enhanced_script, status_message

    def analyze_character_consistency(self, script: str) -> str:
        """Analyze script for character consistency."""
        # Search for character-related content
        results = rag_service.search(script, k=3, content_type="character")

        if not results:
            return "No character information found in knowledge base."

        analysis = "CHARACTER CONSISTENCY ANALYSIS:\n\n"
        for result in results:
            char_name = result['metadata'].get('title', 'Unknown Character')
            analysis += f"• {char_name}: Found in knowledge base\n"
            analysis += f"  Context: {result['content'][:100]}...\n\n"

        return analysis

    def suggest_story_elements(self, script: str) -> str:
        """Suggest relevant story elements for the script."""
        # Search across all content types
        story_results = rag_service.search(script, k=2, content_type="story")
        world_results = rag_service.search(script, k=2, content_type="world_element")

        suggestions = "STORY ELEMENT SUGGESTIONS:\n\n"

        if story_results:
            suggestions += "Related Stories:\n"
            for result in story_results:
                title = result['metadata'].get('title', 'Unknown')
                suggestions += f"• {title}\n"

        if world_results:
            suggestions += "\nRelevant World Elements:\n"
            for result in world_results:
                title = result['metadata'].get('title', 'Unknown')
                elem_type = result['metadata'].get('content_type', 'element')
                suggestions += f"• {title} ({elem_type})\n"

        if not story_results and not world_results:
            suggestions += "No relevant story elements found."

        return suggestions


# Global enhancer instance
context_enhancer = ContextAwareEnhancer()
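The module imports `PromptTemplate` and `LLMChain` but does not yet wire them to a model (the enhancement method returns a placeholder). One possible wiring, once an OpenAI key is configured, is sketched below; the model choice and chain style are assumptions rather than the repo's final design.

```python
# Hypothetical sketch of connecting the stored prompts to an actual LLM.
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain_openai import ChatOpenAI

from langchain_tools import context_enhancer

prompt = PromptTemplate(
    input_variables=["context", "script"],
    template=context_enhancer.enhancement_prompts["dramatic"],
)
chain = LLMChain(llm=ChatOpenAI(model="gpt-4o-mini", temperature=0.7), prompt=prompt)

enhanced = chain.run(
    context=context_enhancer.get_relevant_context("My script...", "dramatic"),
    script="My script...",
)
print(enhanced)
```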
main.py
ADDED
@@ -0,0 +1,594 @@

"""Main application entry point for ScriptVoice - Pure Gradio Application."""

import gradio as gr
from models import (
    load_projects, create_new_project, load_project,
    save_script_content, update_word_count,
    create_story, create_character, create_world_element,
    get_all_stories, get_all_characters, get_all_world_elements,
    search_content
)
from ui_components import CUSTOM_CSS, get_header_html, get_section_header
from audio_services import generate_tts
from image_services import extract_text_from_image
from enhancement_services import (
    enhance_script_placeholder, enhance_script_with_context,
    analyze_character_consistency, suggest_story_elements
)
from export_services import export_project
from knowledge_assistant import knowledge_assistant


# Global state for current project
current_project_id = None


def create_story_intelligence_interface():
    """Create the story intelligence interface components."""

    with gr.Column():
        # Knowledge Assistant
        gr.HTML(get_section_header('🤖 Knowledge Assistant'))
        with gr.Group():
            assistant_query = gr.Textbox(
                label="Ask your Knowledge Assistant",
                placeholder="Try: !search dragons, !characters, !stories, or ask any question about your stories...",
                lines=2
            )
            assistant_btn = gr.Button("🔍 Query Assistant", elem_classes=["primary-button"])
            assistant_response = gr.HTML(visible=False)

        # Enhanced AI Tools
        gr.HTML(get_section_header('🎯 Context-Aware AI Tools'))
        with gr.Group():
            ai_analysis_text = gr.Textbox(
                label="Text to Analyze",
                placeholder="Paste your script text here for context-aware analysis...",
                lines=4
            )
            with gr.Row():
                consistency_btn = gr.Button("✅ Check Character Consistency", elem_classes=["secondary-button"])
                suggest_btn = gr.Button("💡 Suggest Story Elements", elem_classes=["secondary-button"])
                context_enhance_btn = gr.Button("🎭 Context-Aware Enhancement", elem_classes=["primary-button"])

            context_enhancement_type = gr.Dropdown(
                choices=["character_consistent", "plot_coherent", "dramatic", "romantic"],
                label="Enhancement Type",
                value="character_consistent"
            )
            ai_analysis_output = gr.HTML(visible=False)

        # Search functionality
        gr.HTML(get_section_header('🔍 Search Knowledge Base'))
        with gr.Row():
            search_input = gr.Textbox(label="Search", placeholder="Search stories, characters, world...")
            search_btn = gr.Button("🔍 Search", elem_classes=["primary-button"])
        search_results = gr.HTML(visible=False)

        # Story Management
        gr.HTML(get_section_header('📚 Stories'))
        with gr.Group():
            new_story_title = gr.Textbox(label="Story Title", placeholder="Enter story title...")
            new_story_desc = gr.Textbox(label="Description", placeholder="Brief description...", lines=2)
            create_story_btn = gr.Button("📖 Create Story", elem_classes=["primary-button"])
            story_status = gr.HTML(visible=False)

        story_dropdown = gr.Dropdown(label="Select Story", choices=[])
        stories_display = gr.HTML()

        # Character Management
        gr.HTML(get_section_header('👥 Characters'))
        with gr.Group():
            new_char_name = gr.Textbox(label="Character Name", placeholder="Enter character name...")
            new_char_desc = gr.Textbox(label="Description", placeholder="Character description...", lines=2)
            create_char_btn = gr.Button("👤 Create Character", elem_classes=["primary-button"])
            char_status = gr.HTML(visible=False)

        character_dropdown = gr.Dropdown(label="Select Character", choices=[])
        characters_display = gr.HTML()

        # World Building
        gr.HTML(get_section_header('🌍 World Elements'))
        with gr.Group():
            new_world_name = gr.Textbox(label="Element Name", placeholder="Enter element name...")
            world_type = gr.Dropdown(
                choices=["location", "organization", "concept", "item"],
                label="Type",
                value="location"
            )
            new_world_desc = gr.Textbox(label="Description", placeholder="Element description...", lines=2)
            create_world_btn = gr.Button("🏛️ Create Element", elem_classes=["primary-button"])
            world_status = gr.HTML(visible=False)

        world_dropdown = gr.Dropdown(label="Select World Element", choices=[])
        world_display = gr.HTML()

        # Knowledge Base Management
        gr.HTML(get_section_header('⚙️ Knowledge Base Management'))
        with gr.Group():
            rebuild_btn = gr.Button("🔄 Rebuild Knowledge Index", elem_classes=["secondary-button"])
            rebuild_status = gr.HTML(visible=False)

    return {
        'assistant_query': assistant_query,
        'assistant_btn': assistant_btn,
        'assistant_response': assistant_response,
        'ai_analysis_text': ai_analysis_text,
        'consistency_btn': consistency_btn,
        'suggest_btn': suggest_btn,
        'context_enhance_btn': context_enhance_btn,
        'context_enhancement_type': context_enhancement_type,
        'ai_analysis_output': ai_analysis_output,
        'search_input': search_input,
        'search_btn': search_btn,
        'search_results': search_results,
        'new_story_title': new_story_title,
        'new_story_desc': new_story_desc,
        'create_story_btn': create_story_btn,
        'story_status': story_status,
        'story_dropdown': story_dropdown,
        'stories_display': stories_display,
        'new_char_name': new_char_name,
        'new_char_desc': new_char_desc,
        'create_char_btn': create_char_btn,
        'char_status': char_status,
        'character_dropdown': character_dropdown,
        'characters_display': characters_display,
        'new_world_name': new_world_name,
        'world_type': world_type,
        'new_world_desc': new_world_desc,
        'create_world_btn': create_world_btn,
        'world_status': world_status,
        'world_dropdown': world_dropdown,
        'world_display': world_display,
        'rebuild_btn': rebuild_btn,
        'rebuild_status': rebuild_status
    }


def query_knowledge_assistant(query: str) -> tuple[str, any]:
    """Process knowledge assistant queries."""
    if not query.strip():
        return "Please enter a query.", gr.update(visible=False)

    try:
        response = knowledge_assistant.process_query(query)
        formatted_response = f'<div class="search-results"><pre>{response}</pre></div>'
        return formatted_response, gr.update(visible=True)
    except Exception as e:
        error_response = f'<div class="status-error">❌ Error processing query: {str(e)}</div>'
        return error_response, gr.update(visible=True)


def analyze_consistency(text: str) -> tuple[str, any]:
    """Analyze character consistency."""
    if not text.strip():
        return "Please provide text to analyze.", gr.update(visible=False)

    analysis, status = analyze_character_consistency(text)
    formatted_response = f'<div class="search-results"><h4>Character Consistency Analysis</h4><pre>{analysis}</pre></div>'
    return formatted_response, gr.update(visible=True)


def suggest_elements(text: str) -> tuple[str, any]:
    """Suggest story elements."""
    if not text.strip():
        return "Please provide text for suggestions.", gr.update(visible=False)

    suggestions, status = suggest_story_elements(text)
    formatted_response = f'<div class="search-results"><h4>Story Element Suggestions</h4><pre>{suggestions}</pre></div>'
    return formatted_response, gr.update(visible=True)


def enhance_with_context(text: str, enhancement_type: str) -> tuple[str, any]:
    """Enhance text with context awareness."""
    if not text.strip():
        return "Please provide text to enhance.", gr.update(visible=False)

    enhanced, status = enhance_script_with_context(text, enhancement_type)
    formatted_response = f'<div class="search-results"><h4>Context-Aware Enhancement ({enhancement_type})</h4><pre>{enhanced}</pre></div>'
    return formatted_response, gr.update(visible=True)


def rebuild_knowledge_index() -> tuple[str, any]:
    """Rebuild the knowledge base index."""
    try:
        from rag_services import rag_service
        rag_service.rebuild_index_from_projects()
        response = '<div class="status-success">✅ Knowledge base index rebuilt successfully!</div>'
        return response, gr.update(visible=True)
    except Exception as e:
        response = f'<div class="status-error">❌ Error rebuilding index: {str(e)}</div>'
        return response, gr.update(visible=True)


def display_stories():
    """Display all stories in a formatted way."""
    stories = get_all_stories()
    if not stories:
        return "<p>No stories created yet.</p>"

    html = "<div class='stories-grid'>"
    for story in stories:
        html += f"""
        <div class='story-card'>
            <h3>{story['title']}</h3>
            <p>{story['description'][:100]}{'...' if len(story['description']) > 100 else ''}</p>
            <small>Created: {story['created_at'][:10]}</small>
        </div>
        """
    html += "</div>"
    return html


def display_characters():
    """Display all characters in a formatted way."""
    characters = get_all_characters()
    if not characters:
        return "<p>No characters created yet.</p>"

    html = "<div class='characters-grid'>"
    for char in characters:
        html += f"""
        <div class='character-card'>
            <h3>{char['name']}</h3>
            <p>{char['description'][:100]}{'...' if len(char['description']) > 100 else ''}</p>
            <small>Created: {char['created_at'][:10]}</small>
        </div>
        """
    html += "</div>"
    return html


def display_world_elements():
    """Display all world elements in a formatted way."""
    elements = get_all_world_elements()
    if not elements:
        return "<p>No world elements created yet.</p>"

    html = "<div class='world-grid'>"
    for elem in elements:
        html += f"""
        <div class='world-card'>
            <h3>{elem['name']} <span class='type-badge'>{elem['type']}</span></h3>
            <p>{elem['description'][:100]}{'...' if len(elem['description']) > 100 else ''}</p>
            <small>Created: {elem['created_at'][:10]}</small>
        </div>
        """
    html += "</div>"
    return html


def perform_search(query):
    """Perform search across all content."""
    if not query.strip():
        return gr.update(visible=False)

    results = search_content(query)

    html = f"<h4>Search Results for: '{query}'</h4>"

    if results['stories']:
        html += "<h5>Stories:</h5><ul>"
        for story in results['stories']:
            html += f"<li><strong>{story['title']}</strong> - {story['description'][:50]}...</li>"
        html += "</ul>"

    if results['characters']:
        html += "<h5>Characters:</h5><ul>"
        for char in results['characters']:
            html += f"<li><strong>{char['name']}</strong> - {char['description'][:50]}...</li>"
        html += "</ul>"

    if results['world_elements']:
        html += "<h5>World Elements:</h5><ul>"
        for elem in results['world_elements']:
            html += f"<li><strong>{elem['name']}</strong> ({elem['type']}) - {elem['description'][:50]}...</li>"
        html += "</ul>"

    if not any(results.values()):
        html += "<p>No results found.</p>"

    return gr.update(value=html, visible=True)


def create_interface():
    """Create the main Gradio interface."""

    # Load initial projects
    data = load_projects()
    project_choices = [(proj["name"], proj_id) for proj_id, proj in data["projects"].items()]

    with gr.Blocks(
        title="ScriptVoice - AI-Powered Story Intelligence Platform",
        theme=gr.themes.Base(),
        css=CUSTOM_CSS
    ) as app:

        # Header with ScriptVoice branding
        gr.HTML(get_header_html())

        # Main tabbed interface
        with gr.Tabs():
            # Scripts Tab (Original functionality)
            with gr.TabItem("📝 Scripts"):
                with gr.Row():
                    # Left Sidebar
                    with gr.Column(scale=1, min_width=300, elem_classes=["sidebar-column"]):
                        gr.HTML(get_section_header('📁 Projects'))

                        # New Project Section
                        with gr.Group():
                            new_project_name = gr.Textbox(label="New Project Name", placeholder="Enter project name...")
                            create_btn = gr.Button("➕ Create Project", elem_classes=["primary-button"])
                            create_status = gr.HTML(visible=False)

                        # Project Selection
                        project_dropdown = gr.Dropdown(
                            choices=project_choices,
                            label="Select Project",
                            value=list(data["projects"].keys())[0] if data["projects"] else None
                        )

                        # Notes Section
                        gr.HTML(get_section_header('📝 Notes'))
                        notes_textbox = gr.Textbox(
                            label="Project Notes",
                            placeholder="Add your notes here...",
                            lines=5,
                            max_lines=10
                        )

                        # Settings Section
                        gr.HTML(get_section_header('⚙️ Settings'))
                        with gr.Group():
                            dyslexic_mode = gr.Checkbox(label="Dyslexic-friendly font", value=False)
                            voice_speed = gr.Slider(0.5, 2.0, value=1.0, step=0.1, label="Voice Speed")
                            voice_volume = gr.Slider(0.1, 1.0, value=1.0, step=0.1, label="Voice Volume")

                    # Main Editor Panel
                    with gr.Column(scale=2):
                        # Word Count Display
                        word_count_display = gr.HTML('<div class="word-count-highlight">📊 Word Count: 0</div>')

                        # Script Editor
                        script_textbox = gr.Textbox(
                            label="Script Editor",
                            placeholder="Start writing your script here...",
                            lines=15,
                            max_lines=25
                        )

                        # Control Buttons Row
                        with gr.Row():
                            save_btn = gr.Button("💾 Save", elem_classes=["secondary-button"])
                            tts_btn = gr.Button("🔊 Play TTS", elem_classes=["primary-button"])
                        save_status = gr.HTML(visible=False)

                        # TTS Audio Output
                        audio_output = gr.Audio(label="Generated Audio")
                        tts_status = gr.HTML(visible=False)

                        # OCR Section
                        with gr.Group():
                            gr.HTML(get_section_header('📷 Extract Text from Image'))
                            with gr.Row():
                                image_input = gr.Image(type="filepath", label="Upload Image")
                                ocr_btn = gr.Button("Extract Text", elem_classes=["secondary-button"])
                            ocr_status = gr.HTML(visible=False)

                        # AI Enhancement Section
                        with gr.Group():
                            gr.HTML(get_section_header('🤖 AI Script Enhancement'))
                            with gr.Row():
                                enhancement_type = gr.Dropdown(
                                    choices=["dramatic", "romantic", "professional", "casual"],
                                    label="Enhancement Style",
                                    value="dramatic"
                                )
                                enhance_btn = gr.Button("✨ Enhance Script", elem_classes=["primary-button"])
                            enhance_status = gr.HTML(visible=False)

                        # Export Section
                        with gr.Group():
                            gr.HTML(get_section_header('📤 Export'))
                            with gr.Row():
                                export_type = gr.Dropdown(
                                    choices=["text", "audio"],
                                    label="Export Type",
                                    value="text"
                                )
                                export_btn = gr.Button("📥 Export", elem_classes=["secondary-button"])
                            export_file = gr.File(label="Download")
                            export_status = gr.HTML(visible=False)

            # Story Intelligence Tab (Enhanced with RAG)
            with gr.TabItem("📚 Story Intelligence"):
                story_components = create_story_intelligence_interface()

        # Event Handlers for Scripts Tab

        # Create new project
        create_btn.click(
            fn=create_new_project,
            inputs=[new_project_name],
            outputs=[create_status, project_dropdown]
        ).then(
            lambda: ("", gr.update(visible=True)),
            outputs=[new_project_name, create_status]
        )

        # Load project when selected
        project_dropdown.change(
            fn=load_project,
            inputs=[project_dropdown],
            outputs=[script_textbox, notes_textbox, word_count_display]
        )

        # Update word count as user types
        script_textbox.change(
            fn=update_word_count,
            inputs=[script_textbox],
            outputs=[word_count_display]
        )

        # Save script content
        save_btn.click(
            fn=save_script_content,
|
439 |
+
inputs=[project_dropdown, script_textbox, notes_textbox],
|
440 |
+
outputs=[save_status]
|
441 |
+
).then(
|
442 |
+
lambda: gr.update(visible=True),
|
443 |
+
outputs=[save_status]
|
444 |
+
)
|
445 |
+
|
446 |
+
# Generate TTS
|
447 |
+
tts_btn.click(
|
448 |
+
fn=generate_tts,
|
449 |
+
inputs=[script_textbox, voice_speed],
|
450 |
+
outputs=[audio_output, tts_status]
|
451 |
+
).then(
|
452 |
+
lambda: gr.update(visible=True),
|
453 |
+
outputs=[tts_status]
|
454 |
+
)
|
455 |
+
|
456 |
+
# OCR text extraction
|
457 |
+
ocr_btn.click(
|
458 |
+
fn=extract_text_from_image,
|
459 |
+
inputs=[image_input],
|
460 |
+
outputs=[script_textbox, ocr_status]
|
461 |
+
).then(
|
462 |
+
lambda: gr.update(visible=True),
|
463 |
+
outputs=[ocr_status]
|
464 |
+
)
|
465 |
+
|
466 |
+
# AI Enhancement
|
467 |
+
enhance_btn.click(
|
468 |
+
fn=enhance_script_placeholder,
|
469 |
+
inputs=[script_textbox, enhancement_type],
|
470 |
+
outputs=[script_textbox, enhance_status]
|
471 |
+
).then(
|
472 |
+
lambda: gr.update(visible=True),
|
473 |
+
outputs=[enhance_status]
|
474 |
+
)
|
475 |
+
|
476 |
+
# Export functionality
|
477 |
+
export_btn.click(
|
478 |
+
fn=export_project,
|
479 |
+
inputs=[project_dropdown, export_type],
|
480 |
+
outputs=[export_file, export_status]
|
481 |
+
).then(
|
482 |
+
lambda: gr.update(visible=True),
|
483 |
+
outputs=[export_status]
|
484 |
+
)
|
485 |
+
|
486 |
+
# Event Handlers for Story Intelligence Tab
|
487 |
+
|
488 |
+
# Knowledge Assistant
|
489 |
+
story_components['assistant_btn'].click(
|
490 |
+
fn=query_knowledge_assistant,
|
491 |
+
inputs=[story_components['assistant_query']],
|
492 |
+
outputs=[story_components['assistant_response'], story_components['assistant_response']]
|
493 |
+
)
|
494 |
+
|
495 |
+
# AI Analysis Tools
|
496 |
+
story_components['consistency_btn'].click(
|
497 |
+
fn=analyze_consistency,
|
498 |
+
inputs=[story_components['ai_analysis_text']],
|
499 |
+
outputs=[story_components['ai_analysis_output'], story_components['ai_analysis_output']]
|
500 |
+
)
|
501 |
+
|
502 |
+
story_components['suggest_btn'].click(
|
503 |
+
fn=suggest_elements,
|
504 |
+
inputs=[story_components['ai_analysis_text']],
|
505 |
+
outputs=[story_components['ai_analysis_output'], story_components['ai_analysis_output']]
|
506 |
+
)
|
507 |
+
|
508 |
+
story_components['context_enhance_btn'].click(
|
509 |
+
fn=enhance_with_context,
|
510 |
+
inputs=[story_components['ai_analysis_text'], story_components['context_enhancement_type']],
|
511 |
+
outputs=[story_components['ai_analysis_output'], story_components['ai_analysis_output']]
|
512 |
+
)
|
513 |
+
|
514 |
+
# Knowledge Base Management
|
515 |
+
story_components['rebuild_btn'].click(
|
516 |
+
fn=rebuild_knowledge_index,
|
517 |
+
outputs=[story_components['rebuild_status'], story_components['rebuild_status']]
|
518 |
+
)
|
519 |
+
|
520 |
+
# Create story
|
521 |
+
story_components['create_story_btn'].click(
|
522 |
+
fn=create_story,
|
523 |
+
inputs=[story_components['new_story_title'], story_components['new_story_desc']],
|
524 |
+
outputs=[story_components['story_status'], story_components['story_dropdown']]
|
525 |
+
).then(
|
526 |
+
lambda: ("", "", gr.update(visible=True)),
|
527 |
+
outputs=[story_components['new_story_title'], story_components['new_story_desc'], story_components['story_status']]
|
528 |
+
).then(
|
529 |
+
fn=display_stories,
|
530 |
+
outputs=[story_components['stories_display']]
|
531 |
+
)
|
532 |
+
|
533 |
+
# Create character
|
534 |
+
story_components['create_char_btn'].click(
|
535 |
+
fn=create_character,
|
536 |
+
inputs=[story_components['new_char_name'], story_components['new_char_desc']],
|
537 |
+
outputs=[story_components['char_status'], story_components['character_dropdown']]
|
538 |
+
).then(
|
539 |
+
lambda: ("", "", gr.update(visible=True)),
|
540 |
+
outputs=[story_components['new_char_name'], story_components['new_char_desc'], story_components['char_status']]
|
541 |
+
).then(
|
542 |
+
fn=display_characters,
|
543 |
+
outputs=[story_components['characters_display']]
|
544 |
+
)
|
545 |
+
|
546 |
+
# Create world element
|
547 |
+
story_components['create_world_btn'].click(
|
548 |
+
fn=create_world_element,
|
549 |
+
inputs=[story_components['new_world_name'], story_components['world_type'], story_components['new_world_desc']],
|
550 |
+
outputs=[story_components['world_status'], story_components['world_dropdown']]
|
551 |
+
).then(
|
552 |
+
lambda: ("", "", gr.update(visible=True)),
|
553 |
+
outputs=[story_components['new_world_name'], story_components['new_world_desc'], story_components['world_status']]
|
554 |
+
).then(
|
555 |
+
fn=display_world_elements,
|
556 |
+
outputs=[story_components['world_display']]
|
557 |
+
)
|
558 |
+
|
559 |
+
# Search functionality
|
560 |
+
story_components['search_btn'].click(
|
561 |
+
fn=perform_search,
|
562 |
+
inputs=[story_components['search_input']],
|
563 |
+
outputs=[story_components['search_results']]
|
564 |
+
)
|
565 |
+
|
566 |
+
# Load initial story intelligence data
|
567 |
+
app.load(
|
568 |
+
fn=display_stories,
|
569 |
+
outputs=[story_components['stories_display']]
|
570 |
+
)
|
571 |
+
app.load(
|
572 |
+
fn=display_characters,
|
573 |
+
outputs=[story_components['characters_display']]
|
574 |
+
)
|
575 |
+
app.load(
|
576 |
+
fn=display_world_elements,
|
577 |
+
outputs=[story_components['world_display']]
|
578 |
+
)
|
579 |
+
|
580 |
+
return app
|
581 |
+
|
582 |
+
|
583 |
+
if __name__ == "__main__":
|
584 |
+
print("🚀 Starting ScriptVoice - AI-Powered Story Intelligence Platform")
|
585 |
+
print("🌐 The app will be available at: http://localhost:7860")
|
586 |
+
|
587 |
+
# Create and launch the app
|
588 |
+
app = create_interface()
|
589 |
+
app.launch(
|
590 |
+
server_name="0.0.0.0",
|
591 |
+
server_port=7860,
|
592 |
+
share=True,
|
593 |
+
show_error=True
|
594 |
+
)
|
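
The event wiring above follows one Gradio Blocks pattern throughout: the handler returns one value per component listed in `outputs`, and `.then()` chains a follow-up step such as clearing an input or revealing a status banner. A minimal, self-contained sketch of that pattern (the `greet` function and component names are illustrative, not part of main.py):

```python
import gradio as gr

def greet(name):
    # One return value per component in `outputs`
    return f"Hello, {name}!", gr.update(visible=True)

with gr.Blocks() as demo:
    name_box = gr.Textbox(label="Name")
    greet_btn = gr.Button("Greet")
    greeting = gr.Textbox(label="Greeting")
    status = gr.HTML(visible=False)

    greet_btn.click(
        fn=greet,
        inputs=[name_box],
        outputs=[greeting, status]
    ).then(
        lambda: "",          # follow-up step: clear the input box
        outputs=[name_box]
    )

if __name__ == "__main__":
    demo.launch()
```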
models.py
ADDED
@@ -0,0 +1,282 @@
"""Data models and project management for ScriptVoice."""

import json
import os
from datetime import datetime
from typing import Dict, Any, Tuple, Optional, List
from config import PROJECTS_FILE

# ... keep existing code (Story Intelligence Data Models: Character, WorldElement, Scene, Chapter, Story classes)

def load_projects() -> Dict[str, Any]:
    """Load projects from JSON file."""
    if os.path.exists(PROJECTS_FILE):
        with open(PROJECTS_FILE, 'r') as f:
            data = json.load(f)
        # Ensure new story intelligence fields exist
        if "stories" not in data:
            data["stories"] = {}
        if "characters" not in data:
            data["characters"] = {}
        if "world_elements" not in data:
            data["world_elements"] = {}
        return data
    return {
        "projects": {
            "1": {
                "id": "1",
                "name": "Sample Script",
                "content": "Welcome to ScriptVoice! This is your first script. Start editing to create amazing voice content.",
                "notes": "This is a sample note for your script.",
                "created_at": datetime.now().isoformat(),
                "word_count": 0
            }
        },
        "stories": {},
        "characters": {},
        "world_elements": {},
        "settings": {
            "dyslexic_mode": False,
            "voice_speed": 1.0,
            "voice_volume": 1.0
        }
    }


def save_projects(data: Dict[str, Any]) -> None:
    """Save projects to JSON file."""
    with open(PROJECTS_FILE, 'w') as f:
        json.dump(data, f, indent=2)


# ... keep existing code (get_word_count, update_word_count functions)

def create_new_project(name: str) -> Tuple[str, Optional[Any]]:
    """Create a new project."""
    if not name.strip():
        return '<div class="status-error">❌ Please enter a project name</div>', None

    data = load_projects()
    new_id = str(len(data["projects"]) + 1)

    data["projects"][new_id] = {
        "id": new_id,
        "name": name.strip(),
        "content": "",
        "notes": "",
        "created_at": datetime.now().isoformat(),
        "word_count": 0
    }

    save_projects(data)

    # Return updated project choices and select the new project
    import gradio as gr
    choices = [(proj["name"], proj_id) for proj_id, proj in data["projects"].items()]
    return f'<div class="status-success">✅ Project "{name}" created successfully!</div>', gr.update(choices=choices, value=new_id)


def load_project(project_id: str) -> Tuple[str, str, str]:
    """Load a specific project."""
    if not project_id:
        return "", "", '<div class="word-count-highlight">📊 Word Count: 0</div>'

    data = load_projects()
    if project_id in data["projects"]:
        project = data["projects"][project_id]
        word_count = get_word_count(project["content"])
        return project["content"], project["notes"], f'<div class="word-count-highlight">📊 Word Count: {word_count}</div>'

    return "", "", '<div class="word-count-highlight">📊 Word Count: 0</div>'


def save_script_content(project_id: str, content: str, notes: str) -> str:
    """Save script content and notes."""
    if not project_id:
        return '<div class="status-error">❌ No project selected</div>'

    data = load_projects()
    if project_id in data["projects"]:
        data["projects"][project_id]["content"] = content
        data["projects"][project_id]["notes"] = notes
        data["projects"][project_id]["word_count"] = get_word_count(content)
        save_projects(data)

        # Update knowledge base
        try:
            from enhancement_services import update_knowledge_base
            project_name = data["projects"][project_id]["name"]
            update_knowledge_base("script", project_id, project_name, content)
        except ImportError:
            pass  # RAG services not available yet

        return '<div class="status-success">✅ Saved successfully</div>'

    return '<div class="status-error">❌ Error saving</div>'


# Story Management Functions with RAG Integration
def create_story(title: str, description: str = "") -> Tuple[str, Any]:
    """Create a new story."""
    if not title.strip():
        return '<div class="status-error">❌ Please enter a story title</div>', None

    data = load_projects()
    new_id = str(len(data["stories"]) + 1)

    story_data = {
        "id": new_id,
        "title": title.strip(),
        "description": description,
        "content": "",
        "tags": [],
        "characters": [],
        "world_elements": [],
        "chapters": [],
        "created_at": datetime.now().isoformat(),
        "updated_at": datetime.now().isoformat()
    }

    data["stories"][new_id] = story_data
    save_projects(data)

    # Update knowledge base
    try:
        from enhancement_services import update_knowledge_base
        content = f"{title}\n\n{description}"
        update_knowledge_base("story", new_id, title, content)
    except ImportError:
        pass  # RAG services not available yet

    import gradio as gr
    choices = [(story["title"], story_id) for story_id, story in data["stories"].items()]
    return f'<div class="status-success">✅ Story "{title}" created successfully!</div>', gr.update(choices=choices, value=new_id)


def create_character(name: str, description: str = "") -> Tuple[str, Any]:
    """Create a new character."""
    if not name.strip():
        return '<div class="status-error">❌ Please enter a character name</div>', None

    data = load_projects()
    new_id = str(len(data["characters"]) + 1)

    character_data = {
        "id": new_id,
        "name": name.strip(),
        "description": description,
        "traits": [],
        "relationships": {},
        "notes": "",
        "created_at": datetime.now().isoformat(),
        "updated_at": datetime.now().isoformat()
    }

    data["characters"][new_id] = character_data
    save_projects(data)

    # Update knowledge base
    try:
        from enhancement_services import update_knowledge_base
        content = f"{name}\n\n{description}"
        update_knowledge_base("character", new_id, name, content)
    except ImportError:
        pass  # RAG services not available yet

    import gradio as gr
    choices = [(char["name"], char_id) for char_id, char in data["characters"].items()]
    return f'<div class="status-success">✅ Character "{name}" created successfully!</div>', gr.update(choices=choices, value=new_id)


def create_world_element(name: str, element_type: str, description: str = "") -> Tuple[str, Any]:
    """Create a new world element."""
    if not name.strip():
        return '<div class="status-error">❌ Please enter an element name</div>', None

    data = load_projects()
    new_id = str(len(data["world_elements"]) + 1)

    element_data = {
        "id": new_id,
        "name": name.strip(),
        "type": element_type,
        "description": description,
        "tags": [],
        "notes": "",
        "created_at": datetime.now().isoformat(),
        "updated_at": datetime.now().isoformat()
    }

    data["world_elements"][new_id] = element_data
    save_projects(data)

    # Update knowledge base
    try:
        from enhancement_services import update_knowledge_base
        content = f"{name} ({element_type})\n\n{description}"
        update_knowledge_base("world_element", new_id, name, content)
    except ImportError:
        pass  # RAG services not available yet

    import gradio as gr
    choices = [(elem["name"], elem_id) for elem_id, elem in data["world_elements"].items()]
    return f'<div class="status-success">✅ World element "{name}" created successfully!</div>', gr.update(choices=choices, value=new_id)


# ... keep existing code (get_all_stories, get_all_characters, get_all_world_elements, search_content functions)

def get_word_count(text: str) -> int:
    """Count words in text."""
    if not text:
        return 0
    return len(text.split())


def update_word_count(text: str) -> str:
    """Update word count display with gold highlighting."""
    count = get_word_count(text)
    return f'<div class="word-count-highlight">📊 Word Count: {count}</div>'


def get_all_stories() -> List[Dict]:
    """Get all stories."""
    data = load_projects()
    return list(data["stories"].values())


def get_all_characters() -> List[Dict]:
    """Get all characters."""
    data = load_projects()
    return list(data["characters"].values())


def get_all_world_elements() -> List[Dict]:
    """Get all world elements."""
    data = load_projects()
    return list(data["world_elements"].values())


def search_content(query: str) -> Dict[str, List[Dict]]:
    """Search across stories, characters, and world elements."""
    data = load_projects()
    query_lower = query.lower()

    stories = [story for story in data["stories"].values()
               if query_lower in story["title"].lower() or
               query_lower in story["description"].lower() or
               query_lower in story["content"].lower()]

    characters = [char for char in data["characters"].values()
                  if query_lower in char["name"].lower() or
                  query_lower in char["description"].lower()]

    world_elements = [elem for elem in data["world_elements"].values()
                      if query_lower in elem["name"].lower() or
                      query_lower in elem["description"].lower()]

    return {
        "stories": stories,
        "characters": characters,
        "world_elements": world_elements
    }
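
The creation helpers in models.py all follow the same load-modify-save cycle against the JSON store at `PROJECTS_FILE`, deriving new IDs from the current dictionary size. A quick usage sketch, assuming it is run from the repository root (the character data below is illustrative, not part of the repo):

```python
from models import create_character, get_all_characters, search_content

# create_character returns (status HTML, gr.update(...) for the character dropdown)
status_html, dropdown_update = create_character(
    "Mara Voss",  # illustrative name
    "A cartographer who maps places that no longer exist."
)
print(status_html)                     # status banner rendered in the UI
print(len(get_all_characters()))       # character is now persisted in the JSON store
print(search_content("cartographer"))  # case-insensitive substring search across content
```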
rag_services.py
ADDED
@@ -0,0 +1,224 @@
"""RAG (Retrieval Augmented Generation) services for ScriptVoice."""

import os
import json
import pickle
from typing import List, Dict, Any, Tuple, Optional
from sentence_transformers import SentenceTransformer
import faiss
import numpy as np
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.docstore.document import Document
from config import PROJECTS_FILE


class RAGService:
    """Handles vector database operations and content retrieval."""

    def __init__(self):
        self.model = SentenceTransformer('all-MiniLM-L6-v2')
        self.text_splitter = RecursiveCharacterTextSplitter(
            chunk_size=500,
            chunk_overlap=50,
            separators=["\n\n", "\n", ". ", "! ", "? ", " "]
        )
        self.index = None
        self.documents = []
        self.metadata = []
        self.index_file = "vector_index.faiss"
        self.metadata_file = "vector_metadata.pkl"
        self._load_or_create_index()

    def _load_or_create_index(self):
        """Load existing index or create new one."""
        if os.path.exists(self.index_file) and os.path.exists(self.metadata_file):
            try:
                self.index = faiss.read_index(self.index_file)
                with open(self.metadata_file, 'rb') as f:
                    data = pickle.load(f)
                    self.documents = data['documents']
                    self.metadata = data['metadata']
                print(f"Loaded vector index with {len(self.documents)} documents")
            except Exception as e:
                print(f"Error loading index: {e}")
                self._create_empty_index()
        else:
            self._create_empty_index()

    def _create_empty_index(self):
        """Create empty FAISS index."""
        dimension = 384  # all-MiniLM-L6-v2 dimension
        self.index = faiss.IndexFlatIP(dimension)
        self.documents = []
        self.metadata = []

    def chunk_content(self, content: str, content_type: str, content_id: str, title: str) -> List[Document]:
        """Split content into chunks for embedding."""
        chunks = self.text_splitter.split_text(content)
        documents = []

        for i, chunk in enumerate(chunks):
            doc = Document(
                page_content=chunk,
                metadata={
                    'content_type': content_type,
                    'content_id': content_id,
                    'title': title,
                    'chunk_id': i,
                    'chunk_count': len(chunks)
                }
            )
            documents.append(doc)

        return documents

    def add_content(self, content: str, content_type: str, content_id: str, title: str):
        """Add content to the vector database."""
        if not content.strip():
            return

        # Remove existing content for this ID
        self.remove_content(content_id)

        # Chunk the content
        documents = self.chunk_content(content, content_type, content_id, title)

        if not documents:
            return

        # Generate embeddings
        texts = [doc.page_content for doc in documents]
        embeddings = self.model.encode(texts)

        # Normalize embeddings for cosine similarity
        embeddings = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)

        # Add to FAISS index
        self.index.add(embeddings.astype('float32'))

        # Store documents and metadata
        self.documents.extend(documents)
        for doc in documents:
            self.metadata.append(doc.metadata)

        # Save index
        self._save_index()

    def remove_content(self, content_id: str):
        """Remove content from vector database."""
        indices_to_remove = []
        for i, metadata in enumerate(self.metadata):
            if metadata.get('content_id') == content_id:
                indices_to_remove.append(i)

        if indices_to_remove:
            # Rebuild index without removed items
            new_documents = []
            new_metadata = []
            new_embeddings = []

            for i, (doc, meta) in enumerate(zip(self.documents, self.metadata)):
                if i not in indices_to_remove:
                    new_documents.append(doc)
                    new_metadata.append(meta)
                    embedding = self.model.encode([doc.page_content])
                    embedding = embedding / np.linalg.norm(embedding, axis=1, keepdims=True)
                    new_embeddings.append(embedding[0])

            # Recreate index
            self._create_empty_index()
            if new_embeddings:
                embeddings_array = np.array(new_embeddings).astype('float32')
                self.index.add(embeddings_array)
            self.documents = new_documents
            self.metadata = new_metadata

            self._save_index()

    def search(self, query: str, k: int = 5, content_type: Optional[str] = None) -> List[Dict[str, Any]]:
        """Search for similar content."""
        if self.index.ntotal == 0:
            return []

        # Generate query embedding
        query_embedding = self.model.encode([query])
        query_embedding = query_embedding / np.linalg.norm(query_embedding, axis=1, keepdims=True)

        # Search
        scores, indices = self.index.search(query_embedding.astype('float32'), min(k * 2, self.index.ntotal))

        results = []
        for score, idx in zip(scores[0], indices[0]):
            if idx >= 0 and idx < len(self.documents):
                metadata = self.metadata[idx]

                # Filter by content type if specified
                if content_type and metadata.get('content_type') != content_type:
                    continue

                result = {
                    'content': self.documents[idx].page_content,
                    'metadata': metadata,
                    'score': float(score)
                }
                results.append(result)

                if len(results) >= k:
                    break

        return results

    def get_context_for_content(self, content_id: str, query: str, k: int = 3) -> List[Dict[str, Any]]:
        """Get relevant context from other content for a specific item."""
        results = self.search(query, k=k)
        # Filter out results from the same content
        filtered_results = [r for r in results if r['metadata'].get('content_id') != content_id]
        return filtered_results[:k]

    def _save_index(self):
        """Save FAISS index and metadata to disk."""
        try:
            faiss.write_index(self.index, self.index_file)
            with open(self.metadata_file, 'wb') as f:
                pickle.dump({
                    'documents': self.documents,
                    'metadata': self.metadata
                }, f)
        except Exception as e:
            print(f"Error saving index: {e}")

    def rebuild_index_from_projects(self):
        """Rebuild the entire vector index from current projects data."""
        from models import load_projects

        # Clear existing index
        self._create_empty_index()

        # Load all projects data
        data = load_projects()

        # Add stories
        for story_id, story in data.get("stories", {}).items():
            content = f"{story['title']}\n\n{story['description']}\n\n{story['content']}"
            self.add_content(content, "story", story_id, story['title'])

        # Add characters
        for char_id, char in data.get("characters", {}).items():
            content = f"{char['name']}\n\n{char['description']}\n\nTraits: {', '.join(char.get('traits', []))}\n\n{char.get('notes', '')}"
            self.add_content(content, "character", char_id, char['name'])

        # Add world elements
        for elem_id, elem in data.get("world_elements", {}).items():
            content = f"{elem['name']} ({elem['type']})\n\n{elem['description']}\n\nTags: {', '.join(elem.get('tags', []))}\n\n{elem.get('notes', '')}"
            self.add_content(content, "world_element", elem_id, elem['name'])

        # Add scripts
        for proj_id, proj in data.get("projects", {}).items():
            if proj.get('content'):
                content = f"{proj['name']}\n\n{proj['content']}\n\nNotes: {proj.get('notes', '')}"
                self.add_content(content, "script", proj_id, proj['name'])


# Global RAG service instance
rag_service = RAGService()
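
Because the embeddings are L2-normalised before being added to an inner-product (`IndexFlatIP`) index, the scores returned by `search()` are cosine similarities. A short sketch exercising the module-level `rag_service` instance (the sample text is illustrative; the first call downloads the all-MiniLM-L6-v2 model):

```python
from rag_services import rag_service

# Chunk, embed, normalise and persist a piece of content.
rag_service.add_content(
    "The Glass Archive is a library carved into a glacier, "
    "maintained by the Order of Quiet Scribes.",   # illustrative sample text
    content_type="world_element",
    content_id="demo-1",
    title="The Glass Archive",
)

# Retrieve the most similar chunks; higher score means closer cosine similarity.
for hit in rag_service.search("ice library", k=3):
    print(round(hit["score"], 3), hit["metadata"]["title"], "-", hit["content"][:60])
```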
requirements.txt
CHANGED
@@ -4,3 +4,8 @@ gtts>=2.3.0
 pytesseract>=0.3.10
 Pillow>=10.0.0
 setuptools>=65.0.0
+langchain>=0.1.0
+sentence-transformers>=2.2.0
+faiss-cpu>=1.7.0
+langchain-openai>=0.1.0
+tiktoken>=0.5.0
run.py
ADDED
@@ -0,0 +1,27 @@
#!/usr/bin/env python3
"""
Simple launcher for ScriptVoice Gradio Application

Install dependencies with:
    pip install -r requirements.txt

Then run:
    python run.py
"""

if __name__ == "__main__":
    from main import create_interface

    print("🚀 Starting ScriptVoice - AI-Powered Story Intelligence Platform")
    print("📦 Make sure you have installed dependencies: pip install -r requirements.txt")
    print("🌐 The app will be available at: http://localhost:7860")

    # Create and launch the app
    app = create_interface()
    app.launch(
        server_name="0.0.0.0",
        server_port=7860,
        share=True,
        show_error=True
    )
ui_components.py
ADDED
@@ -0,0 +1,396 @@
"""UI components and styling for ScriptVoice application."""

CUSTOM_CSS = """
/* ScriptVoice Custom CSS - Black/White/Red/Gold Theme */
@import url('https://fonts.googleapis.com/css2?family=Inter:wght@300;400;500;600;700&display=swap');

.gradio-container {
    background: #000000 !important;
    font-family: 'Inter', -apple-system, BlinkMacSystemFont, sans-serif !important;
    color: #FFFFFF !important;
}

.word-count-highlight {
    background: #E63946 !important;
    color: #FFFFFF !important;
    font-weight: 600;
    font-size: 1.1em;
    padding: 12px 16px;
    border-radius: 8px;
    border: 2px solid #E63946;
    box-shadow: 0 2px 8px rgba(230, 57, 70, 0.3);
}

.primary-button {
    background: #E63946 !important;
    border: none !important;
    color: #FFFFFF !important;
    font-weight: 600 !important;
    font-family: 'Inter', sans-serif !important;
    transition: all 0.3s ease !important;
    border-radius: 6px !important;
    padding: 12px 20px !important;
}

.primary-button:hover {
    background: #d12a36 !important;
    transform: translateY(-1px) !important;
    box-shadow: 0 4px 12px rgba(230, 57, 70, 0.4) !important;
}

.secondary-button {
    background: #1a1a1a !important;
    border: 2px solid #FFD700 !important;
    color: #FFD700 !important;
    font-weight: 600 !important;
    font-family: 'Inter', sans-serif !important;
    transition: all 0.3s ease !important;
    border-radius: 6px !important;
    padding: 12px 20px !important;
}

.secondary-button:hover {
    background: #FFD700 !important;
    color: #000000 !important;
    transform: translateY(-1px) !important;
    box-shadow: 0 4px 12px rgba(255, 215, 0, 0.4) !important;
}

.sidebar-column {
    background: #1a1a1a !important;
    border-radius: 12px !important;
    padding: 24px !important;
    margin-right: 20px !important;
    border: 1px solid #333333 !important;
    box-shadow: 0 4px 16px rgba(0, 0, 0, 0.5) !important;
}

.status-success {
    color: #FFD700 !important;
    background-color: rgba(255, 215, 0, 0.1) !important;
    border: 1px solid rgba(255, 215, 0, 0.3) !important;
    padding: 12px 16px;
    border-radius: 8px;
    margin: 10px 0;
    font-weight: 500;
}

.status-error {
    color: #E63946 !important;
    background-color: rgba(230, 57, 70, 0.1) !important;
    border: 1px solid rgba(230, 57, 70, 0.3) !important;
    padding: 12px 16px;
    border-radius: 8px;
    margin: 10px 0;
    font-weight: 500;
}

/* Story Intelligence Styling */
.stories-grid, .characters-grid, .world-grid {
    display: grid;
    grid-template-columns: repeat(auto-fill, minmax(320px, 1fr));
    gap: 20px;
    margin: 20px 0;
}

.story-card, .character-card, .world-card {
    background: #1a1a1a !important;
    border: 1px solid #333333 !important;
    border-radius: 12px;
    padding: 20px;
    transition: all 0.3s ease;
    box-shadow: 0 2px 8px rgba(0, 0, 0, 0.3);
}

.story-card:hover, .character-card:hover, .world-card:hover {
    transform: translateY(-3px);
    box-shadow: 0 8px 24px rgba(230, 57, 70, 0.2);
    border-color: #E63946;
}

.story-card h3, .character-card h3, .world-card h3 {
    color: #E63946 !important;
    margin: 0 0 12px 0;
    font-size: 1.2em;
    font-weight: 600;
    font-family: 'Inter', sans-serif;
}

.story-card p, .character-card p, .world-card p {
    color: #CCCCCC !important;
    margin: 10px 0;
    line-height: 1.5;
    font-weight: 400;
}

.story-card small, .character-card small, .world-card small {
    color: #888888 !important;
    font-size: 0.85em;
    font-weight: 300;
}

.type-badge {
    background: #FFD700 !important;
    color: #000000 !important;
    padding: 4px 12px;
    border-radius: 12px;
    font-size: 0.8em;
    font-weight: 600;
    margin-left: 10px;
}

/* Tab styling */
.gradio-tabs .tab-nav {
    background: #1a1a1a !important;
    border-radius: 12px !important;
    margin-bottom: 24px !important;
    border: 1px solid #333333 !important;
}

.gradio-tabs .tab-nav button {
    background: transparent !important;
    color: #CCCCCC !important;
    border: none !important;
    padding: 14px 24px !important;
    border-radius: 8px !important;
    font-weight: 500 !important;
    font-family: 'Inter', sans-serif !important;
    transition: all 0.3s ease !important;
}

.gradio-tabs .tab-nav button.selected {
    background: #E63946 !important;
    color: #FFFFFF !important;
    transform: translateY(-1px) !important;
    box-shadow: 0 4px 12px rgba(230, 57, 70, 0.3) !important;
}

.gradio-tabs .tab-nav button:hover:not(.selected) {
    background: rgba(255, 215, 0, 0.1) !important;
    color: #FFD700 !important;
}

/* Search results styling */
.search-results {
    background: #1a1a1a !important;
    border: 1px solid #333333 !important;
    border-radius: 12px;
    padding: 20px;
    margin: 20px 0;
    box-shadow: 0 2px 8px rgba(0, 0, 0, 0.3);
}

.search-results h4 {
    color: #E63946 !important;
    margin: 0 0 16px 0;
    font-weight: 600;
    font-family: 'Inter', sans-serif;
}

.search-results h5 {
    color: #FFD700 !important;
    margin: 16px 0 12px 0;
    font-weight: 500;
    font-family: 'Inter', sans-serif;
}

.search-results ul {
    margin: 0 0 16px 20px;
}

.search-results li {
    color: #CCCCCC !important;
    margin: 8px 0;
    line-height: 1.5;
}

.search-results pre {
    background: #0a0a0a !important;
    border: 1px solid #333333 !important;
    border-radius: 6px;
    padding: 16px;
    color: #FFFFFF !important;
    font-family: 'Inter', monospace;
    overflow-x: auto;
}

/* Section headers */
.section-header {
    background: linear-gradient(45deg, #E63946, #FFD700) !important;
    -webkit-background-clip: text !important;
    -webkit-text-fill-color: transparent !important;
    background-clip: text !important;
    font-weight: 700;
    font-size: 1.3em;
    margin: 24px 0 16px 0;
    padding: 12px 0;
    border-bottom: 2px solid #E63946;
    font-family: 'Inter', sans-serif;
}

/* Form groups */
.gradio-group {
    background: #1a1a1a !important;
    border: 1px solid #333333 !important;
    border-radius: 12px !important;
    padding: 20px !important;
    margin: 16px 0 !important;
    box-shadow: 0 2px 8px rgba(0, 0, 0, 0.2) !important;
}

/* Input styling */
.gradio-textbox input, .gradio-textbox textarea {
    background: #0a0a0a !important;
    border: 2px solid #333333 !important;
    border-radius: 8px !important;
    color: #FFFFFF !important;
    padding: 12px 16px !important;
    font-family: 'Inter', sans-serif !important;
    font-weight: 400 !important;
    transition: all 0.3s ease !important;
}

.gradio-textbox input:focus, .gradio-textbox textarea:focus {
    border-color: #E63946 !important;
    box-shadow: 0 0 0 3px rgba(230, 57, 70, 0.2) !important;
    outline: none !important;
}

.gradio-textbox input::placeholder, .gradio-textbox textarea::placeholder {
    color: #666666 !important;
}

.gradio-dropdown .wrap {
    background: #0a0a0a !important;
    border: 2px solid #333333 !important;
    border-radius: 8px !important;
    color: #FFFFFF !important;
}

.gradio-dropdown .wrap:focus-within {
    border-color: #E63946 !important;
    box-shadow: 0 0 0 3px rgba(230, 57, 70, 0.2) !important;
}

.gradio-dropdown .wrap .wrap-inner {
    background: #0a0a0a !important;
    color: #FFFFFF !important;
}

.gradio-dropdown .wrap .wrap-inner .token {
    background: #E63946 !important;
    color: #FFFFFF !important;
}

/* Label styling */
.gradio-group label, .gradio-textbox label, .gradio-dropdown label {
    color: #FFFFFF !important;
    font-weight: 500 !important;
    font-family: 'Inter', sans-serif !important;
    margin-bottom: 8px !important;
}

/* Audio player styling */
.gradio-audio {
    background: #1a1a1a !important;
    border: 1px solid #333333 !important;
    border-radius: 8px !important;
}

/* File upload styling */
.gradio-file {
    background: #1a1a1a !important;
    border: 2px dashed #333333 !important;
    border-radius: 8px !important;
    color: #FFFFFF !important;
}

.gradio-file:hover {
    border-color: #E63946 !important;
    background: rgba(230, 57, 70, 0.05) !important;
}

/* Progress bars */
.gradio-progress {
    background: #333333 !important;
    border-radius: 4px !important;
}

.gradio-progress .progress-bar {
    background: linear-gradient(45deg, #E63946, #FFD700) !important;
    border-radius: 4px !important;
}

/* Slider styling */
.gradio-slider input[type="range"] {
    background: #333333 !important;
}

.gradio-slider input[type="range"]::-webkit-slider-thumb {
    background: #E63946 !important;
    border: 2px solid #FFFFFF !important;
}

.gradio-slider input[type="range"]::-moz-range-thumb {
    background: #E63946 !important;
    border: 2px solid #FFFFFF !important;
}

/* Checkbox styling */
.gradio-checkbox input[type="checkbox"]:checked {
    background: #E63946 !important;
    border-color: #E63946 !important;
}

/* Scrollbar styling */
::-webkit-scrollbar {
    width: 8px;
    height: 8px;
}

::-webkit-scrollbar-track {
    background: #1a1a1a;
    border-radius: 4px;
}

::-webkit-scrollbar-thumb {
    background: #E63946;
    border-radius: 4px;
}

::-webkit-scrollbar-thumb:hover {
    background: #d12a36;
}

/* Overall container styling */
body, .gradio-container, .gradio-container > div {
    background: #000000 !important;
    color: #FFFFFF !important;
}

/* Ensure proper contrast for all text elements */
h1, h2, h3, h4, h5, h6, p, span, div {
    color: inherit !important;
}
"""

def get_header_html():
    """Generate the application header HTML."""
    return """
    <div style="text-align: center; padding: 40px 0; background: #000000; border-radius: 15px; margin-bottom: 30px; border: 2px solid #E63946; box-shadow: 0 4px 16px rgba(230, 57, 70, 0.3);">
        <h1 style="margin: 0; font-size: 3.2em; background: linear-gradient(45deg, #E63946, #FFD700); -webkit-background-clip: text; -webkit-text-fill-color: transparent; font-weight: 700; font-family: 'Inter', sans-serif;">
            🎬 ScriptVoice
        </h1>
        <p style="margin: 15px 0 0 0; color: #FFFFFF; font-size: 1.3em; font-weight: 400; font-family: 'Inter', sans-serif;">
            AI-Powered Story Intelligence Platform
        </p>
        <p style="margin: 8px 0 0 0; color: #CCCCCC; font-size: 1em; font-weight: 300; font-family: 'Inter', sans-serif;">
            Transform your stories with intelligent script writing, voice synthesis, and creative knowledge management
        </p>
    </div>
    """

def get_section_header(title):
    """Generate a section header with consistent styling."""
    return f'<div class="section-header">{title}</div>'
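
These helpers are consumed by main.py: `CUSTOM_CSS` is passed to `gr.Blocks(css=...)` and the two HTML builders are rendered through `gr.HTML`. A minimal standalone page reusing the same pieces might look like this (the Markdown body text is illustrative):

```python
import gradio as gr
from ui_components import CUSTOM_CSS, get_header_html, get_section_header

with gr.Blocks(css=CUSTOM_CSS, theme=gr.themes.Base()) as demo:
    gr.HTML(get_header_html())                  # branded header banner
    gr.HTML(get_section_header("📁 Projects"))  # gradient section heading
    gr.Markdown("Components placed here inherit the black/red/gold theme.")

if __name__ == "__main__":
    demo.launch()
```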
vite.config.js
ADDED
@@ -0,0 +1,13 @@
import { defineConfig } from 'vite'

export default defineConfig({
  // Minimal Vite config for compatibility
  // The actual app runs on Python Gradio
  build: {
    outDir: 'dist',
  },
  server: {
    port: 3000,
  },
})