Upload 47 files
- .gitattributes +1 -0
- README.md +89 -12
- __init__.py +0 -0
- ai-conv-client.pid +1 -0
- ai-conv-server.log +5 -0
- ai-conv-server.pid +1 -0
- animations.css +500 -0
- animations.js +550 -0
- app.cpython-313.pyc +0 -0
- app.py +251 -0
- automation_workflow.html +426 -0
- bridge.log +1 -0
- bridge.pid +1 -0
- bridge_server.py +80 -0
- browser_automation.py +221 -0
- contextual_hints.css +249 -0
- contextual_hints.js +534 -0
- fix_openmanus.py +556 -0
- fix_unified_bridge.py +227 -0
- index.html +548 -289
- install.sh +900 -0
- install_atlas_unified.sh +329 -0
- install_log.txt +2 -0
- layout.html +135 -0
- main.js +269 -0
- main.py +4 -0
- models.py +55 -0
- nlp_processor.cpython-313.pyc +0 -0
- nlp_processor.py +325 -0
- openai_integration.cpython-313.pyc +0 -0
- openai_integration.py +908 -0
- pyproject.toml +18 -0
- quantum-bg.svg +152 -0
- quantum_thinking.py +265 -0
- quantumvision.log +1 -0
- quantumvision.pid +1 -0
- replit.nix +6 -0
- requirements.txt +5 -0
- settings.html +362 -0
- smoke-bg.svg +80 -0
- start_atlas.sh +101 -0
- start_atlas_unified.sh +146 -0
- style.css +478 -17
- task_scheduler.py +298 -0
- update_unified_bridge.py +83 -0
- uv.lock +0 -0
- zap_integrations.html +290 -0
.gitattributes
CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+9001AFB3-063D-4B72-AD44-79E61CC65111.webp filter=lfs diff=lfs merge=lfs -text
README.md
CHANGED
@@ -1,12 +1,89 @@
# Quantum NLP Framework

A Python-based Flask web application with spaCy NLP processing, quantum-inspired recursive functions, and OpenAI API integration. This framework implements a multi-dimensional, layered thinking process inspired by quantum computing concepts.

## Features

- Natural language processing using spaCy
- Recursive function simulating quantum-inspired thinking processes
- OpenAI API integration for generating human-like text
- Flask web interface with input fields for text processing
- Multi-dimensional, layered thinking simulation

## Requirements

- Python 3.x
- Flask
- spaCy (with the English model)
- OpenAI Python client
- requests

## Installation on Replit

1. **Create a new Replit project**
   - Choose Python as the language

2. **Install dependencies**
   ```
   pip install flask spacy openai
   python -m spacy download en_core_web_sm
   ```

3. **Set up your OpenAI API key**
   - In Replit, go to the "Secrets" tab in the Tools panel
   - Add a new secret with the key `OPENAI_API_KEY` and your OpenAI API key as the value

4. **Run the application**
   - Click the "Run" button in Replit
   - The application will be available at `https://your-repl-name.your-username.repl.co`

## Usage

1. Enter text in the input field
2. Select the depth of quantum analysis (1-4 dimensions)
3. Toggle the option to use OpenAI for enhanced analysis
4. Click "Analyze Text" to process the input
5. View the results in the three results sections:
   - NLP Analysis
   - Quantum Thinking Results
   - AI Analysis (if OpenAI is enabled)

## How It Works

### NLP Processing

The application uses spaCy to process text input, extracting:
- Named entities
- Part-of-speech tags
- Noun chunks
- Text statistics

### Quantum-Inspired Recursive Thinking

The recursive function simulates a multi-dimensional, layered thinking process by:
1. Extracting key concepts from the input text
2. For each concept, creating a "thought path" for recursive exploration
3. Applying "superposition" by allowing paths to influence each other
4. Synthesizing meta-insights from all paths
5. Computing a "quantum probability" score

### OpenAI Integration

When enabled, the application:
1. Creates a prompt that incorporates the quantum thinking results
2. Sends the prompt to OpenAI's API
3. Displays the AI-generated response

## Project Structure

- `main.py`: Entry point for the application
- `app.py`: Main Flask application code
- `nlp_processor.py`: spaCy NLP processing functions
- `quantum_thinking.py`: Quantum-inspired recursive thinking implementation
- `openai_integration.py`: Functions for working with the OpenAI API
- `templates/`: HTML templates for the web interface
- `static/`: CSS and JavaScript files

## License

This project is open-source and available under the MIT License.
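The five "quantum thinking" steps listed in the README can be sketched as a small recursive function. This is an illustrative sketch only, not the repository's actual `quantum_thinking.py`; all names and the scoring formula are hypothetical.

```python
# Hypothetical sketch of the layered "quantum thinking" recursion described
# in the README. Names and the score formula are illustrative assumptions.
def quantum_think(concepts, depth=2):
    """Explore each concept as a 'thought path', let paths influence each
    other ('superposition'), then synthesize insights and a score."""
    paths = []
    for concept in concepts:
        path = [concept]
        for level in range(depth):
            # Each recursion level refines the concept one layer deeper.
            path.append(f"{concept}@layer{level + 1}")
        paths.append(path)

    # "Superposition": annotate each path with the other concepts that
    # could influence it.
    superposed = [
        {"path": p, "influences": [q[0] for q in paths if q is not p]}
        for p in paths
    ]

    # Meta-insight synthesis plus a pseudo "quantum probability" score.
    insights = [f"insight({s['path'][-1]})" for s in superposed]
    score = round(len(insights) / (len(insights) + depth), 3)
    return {"paths": superposed, "insights": insights, "score": score}

result = quantum_think(["entropy", "language"], depth=2)
```

The real implementation presumably derives its concepts from the spaCy analysis rather than taking them as arguments, but the path/superposition/score shape follows the README's own description.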
__init__.py
ADDED
File without changes

ai-conv-client.pid
ADDED
@@ -0,0 +1 @@
+57093
ai-conv-server.log
ADDED
@@ -0,0 +1,5 @@
+npm error Missing script: "start:prod"
+npm error
+npm error To see a list of scripts, run:
+npm error   npm run
+npm error A complete log of this run can be found in: /Users/lattm/.npm/_logs/2025-04-17T17_24_51_380Z-debug-0.log

ai-conv-server.pid
ADDED
@@ -0,0 +1 @@
+57090
animations.css
ADDED
@@ -0,0 +1,500 @@
/*
 * Quantum Animation Styles
 * Provides visual effects for the quantum transitions and particle animations
 */

/* Particle Canvas */
.particle-canvas {
  position: fixed;
  top: 0;
  left: 0;
  width: 100%;
  height: 100%;
  pointer-events: none;
  z-index: 1000;
  backdrop-filter: blur(3px);
  opacity: 0;
  transition: opacity 0.3s ease;
}

.particle-canvas.active {
  opacity: 1;
}

/* Quantum Overlay */
#quantum-overlay {
  position: fixed;
  top: 0;
  left: 0;
  width: 100%;
  height: 100%;
  background-color: rgba(10, 10, 16, 0.92);
  display: none;
  justify-content: center;
  align-items: center;
  z-index: 2000;
  opacity: 0;
  transition: opacity 0.3s ease-in-out;
  overflow: hidden;
}

#quantum-overlay::before {
  content: '';
  position: absolute;
  top: 0;
  left: 0;
  width: 100%;
  height: 100%;
  background: url('/static/images/quantum-bg.svg');
  background-size: cover;
  opacity: 0.2;
  z-index: -1;
  filter: blur(2px);
  animation: quantum-bg-pulse 8s infinite alternate;
}

#quantum-overlay::after {
  content: '';
  position: absolute;
  top: 0;
  left: 0;
  width: 100%;
  height: 100%;
  background: radial-gradient(
    circle at center,
    rgba(218, 75, 134, 0.15),
    rgba(10, 10, 16, 0.2) 60%
  );
  z-index: -1;
  animation: quantum-pulse 4s infinite alternate;
}

#quantum-overlay.active {
  opacity: 1;
}

@keyframes quantum-bg-pulse {
  0% {
    opacity: 0.1;
    transform: scale(1);
  }
  100% {
    opacity: 0.3;
    transform: scale(1.05);
  }
}

/* Quantum Loader */
.quantum-loader {
  display: flex;
  flex-direction: column;
  align-items: center;
  perspective: 1000px;
}

.quantum-spinner {
  position: relative;
  width: 150px;
  height: 150px;
  transform-style: preserve-3d;
  animation: quantum-spin 8s linear infinite;
}

.quantum-message {
  margin-top: 20px;
  color: white;
  font-size: 1.2rem;
  font-weight: 300;
  letter-spacing: 2px;
  animation: quantum-pulse 2s infinite;
}

/* Quantum Orbits */
.q-orbit {
  position: absolute;
  top: 0;
  left: 0;
  width: 100%;
  height: 100%;
  border-radius: 50%;
  border: 2px solid transparent;
}

.q-orbit-1 {
  border-top-color: #da4b86;
  border-bottom-color: #da4b86;
  animation: orbit-spin-1 3s linear infinite;
  box-shadow: 0 0 10px rgba(218, 75, 134, 0.5);
}

.q-orbit-2 {
  width: 120px;
  height: 120px;
  margin: 15px;
  border-left-color: #6f42c1;
  border-right-color: #6f42c1;
  animation: orbit-spin-2 4s linear infinite;
  box-shadow: 0 0 10px rgba(111, 66, 193, 0.5);
}

.q-orbit-3 {
  width: 80px;
  height: 80px;
  margin: 35px;
  border-top-color: #0dcaf0;
  border-right-color: #0dcaf0;
  animation: orbit-spin-3 2s linear infinite;
  box-shadow: 0 0 10px rgba(13, 202, 240, 0.5);
}

.q-core {
  position: absolute;
  top: 50%;
  left: 50%;
  width: 30px;
  height: 30px;
  margin-top: -15px;
  margin-left: -15px;
  background: radial-gradient(circle, #da4b86, #6f42c1);
  border-radius: 50%;
  box-shadow: 0 0 20px #da4b86, 0 0 40px #6f42c1;
  animation: pulse 2s ease-in-out infinite alternate;
}

/* Animation for result cards */
.card {
  transition: all 0.5s cubic-bezier(0.215, 0.610, 0.355, 1.000);
}

.quantum-reveal {
  animation: quantum-reveal 0.8s cubic-bezier(0.215, 0.610, 0.355, 1.000) forwards;
}

/* Special effects for quantum visualization */
.quantum-visualized {
  position: relative;
  overflow: hidden;
}

.quantum-visualized::before {
  content: '';
  position: absolute;
  top: 0;
  left: 0;
  width: 100%;
  height: 100%;
  background: radial-gradient(circle, transparent 50%, rgba(66, 133, 244, 0.1) 100%);
  pointer-events: none;
  animation: quantum-pulse 8s infinite alternate;
}

/* Define animations */
@keyframes quantum-spin {
  0% {
    transform: rotateY(0) rotateX(0);
  }
  25% {
    transform: rotateY(90deg) rotateX(45deg);
  }
  50% {
    transform: rotateY(180deg) rotateX(0);
  }
  75% {
    transform: rotateY(270deg) rotateX(-45deg);
  }
  100% {
    transform: rotateY(360deg) rotateX(0);
  }
}

@keyframes orbit-spin-1 {
  from { transform: rotateZ(0deg); }
  to { transform: rotateZ(360deg); }
}

@keyframes orbit-spin-2 {
  from { transform: rotateY(0deg) rotateX(60deg); }
  to { transform: rotateY(360deg) rotateX(60deg); }
}

@keyframes orbit-spin-3 {
  from { transform: rotateY(60deg) rotateX(0deg); }
  to { transform: rotateY(60deg) rotateX(360deg); }
}

@keyframes pulse {
  from {
    transform: scale(0.8);
    box-shadow: 0 0 20px #da4b86, 0 0 40px #6f42c1;
  }
  to {
    transform: scale(1.1);
    box-shadow: 0 0 30px #da4b86, 0 0 60px #0dcaf0;
  }
}

@keyframes quantum-pulse {
  0% {
    opacity: 0.6;
  }
  50% {
    opacity: 1;
  }
  100% {
    opacity: 0.6;
  }
}

@keyframes quantum-reveal {
  from {
    opacity: 0;
    transform: translateY(30px) scale(0.95);
  }
  to {
    opacity: 1;
    transform: translateY(0) scale(1);
  }
}

/* Smoke dispersion effect */
.smoke-particle {
  position: absolute;
  border-radius: 50%;
  background-color: rgba(255, 255, 255, 0.8);
  filter: blur(10px);
  pointer-events: none;
}

/* Tumbleweed whirlwind effect */
.tumbleweed {
  position: absolute;
  width: 50px;
  height: 50px;
  border-radius: 50%;
  background: radial-gradient(circle, rgba(120, 120, 120, 0.8), rgba(80, 80, 80, 0.5));
  filter: blur(5px);
  transition: transform 0.5s, opacity 0.5s;
  pointer-events: none;
}

/* Dynamic transition effect */
.transition-fade {
  opacity: 0;
  transition: opacity 0.5s ease;
}

.transition-fade.show {
  opacity: 1;
}

/* Text to vision transition */
.text-to-vision {
  position: relative;
  overflow: hidden;
}

.text-to-vision::after {
  content: '';
  position: absolute;
  top: 0;
  left: 0;
  width: 100%;
  height: 100%;
  background: linear-gradient(to right, transparent, rgba(66, 133, 244, 0.2), transparent);
  transform: translateX(-100%);
  animation: text-scan 2s linear infinite;
}

/* Text-to-vision transition with smoke effect */
.text-to-vision-active {
  animation: smoke-disperse 1.5s ease-out forwards;
}

/* Smoke transition effect */
@keyframes smoke-disperse {
  0% {
    filter: blur(0);
    opacity: 1;
    transform: scale(1);
  }
  50% {
    filter: blur(5px);
    opacity: 0.7;
    transform: scale(1.02) translateY(5px);
  }
  100% {
    filter: blur(0);
    opacity: 1;
    transform: scale(1);
  }
}

@keyframes text-scan {
  0% {
    transform: translateX(-100%);
  }
  100% {
    transform: translateX(100%);
  }
}

/* Quantum pulse animation for visualized elements */
.quantum-visualized {
  --quantum-pulse-intensity: 0.5;
}

.quantum-visualized::before {
  content: '';
  position: absolute;
  top: 0;
  left: 0;
  width: 100%;
  height: 100%;
  background: radial-gradient(
    circle at center,
    rgba(66, 133, 244, calc(0.1 * var(--quantum-pulse-intensity))),
    transparent 70%
  );
  opacity: 0.7;
  z-index: -1;
  animation: quantum-pulse calc(8s / var(--quantum-pulse-intensity)) infinite alternate;
}

/* Special effect for quantum score badge */
.quantum-score {
  position: relative;
  transition: all 0.3s ease;
  animation: score-pulse 3s infinite alternate;
  border-radius: 8px;
  overflow: hidden;
}

.quantum-score::before {
  content: '';
  position: absolute;
  top: 0;
  left: 0;
  right: 0;
  bottom: 0;
  background: linear-gradient(45deg,
    rgba(218, 75, 134, 0.3),
    rgba(111, 66, 193, 0.3),
    rgba(13, 202, 240, 0.3)
  );
  opacity: 0.3;
  z-index: -1;
}

.quantum-score:hover {
  transform: scale(1.1);
  box-shadow: 0 0 15px rgba(218, 75, 134, 0.5);
}

@keyframes score-pulse {
  0% {
    box-shadow: 0 0 5px rgba(218, 75, 134, 0.4);
    text-shadow: 0 0 3px rgba(218, 75, 134, 0.4);
  }
  50% {
    box-shadow: 0 0 8px rgba(111, 66, 193, 0.5);
    text-shadow: 0 0 5px rgba(111, 66, 193, 0.5);
  }
  100% {
    box-shadow: 0 0 12px rgba(13, 202, 240, 0.6);
    text-shadow: 0 0 8px rgba(13, 202, 240, 0.6);
  }
}

/* Tumbleweed/whirlwind effect with smoke-like appearance */
.tumbleweed {
  position: absolute;
  border-radius: 50%;
  background: radial-gradient(circle at 30% 30%,
    rgba(255, 255, 255, 0.3),
    rgba(218, 75, 134, 0.15) 20%,
    rgba(111, 66, 193, 0.1) 40%,
    rgba(13, 202, 240, 0.05) 60%,
    transparent 80%
  );
  box-shadow: 0 0 12px rgba(218, 75, 134, 0.2);
  opacity: 0.7;
  pointer-events: none;
  filter: blur(3px);
  transform-origin: center center;
  animation: tumbleweed-rotate 10s linear infinite;
}

@keyframes tumbleweed-rotate {
  0% { transform: rotate(0deg) scale(1); }
  25% { transform: rotate(90deg) scale(1.2); }
  50% { transform: rotate(180deg) scale(1); }
  75% { transform: rotate(270deg) scale(0.8); }
  100% { transform: rotate(360deg) scale(1); }
}

/* Quantum particle effect */
.quantum-particle {
  position: fixed;
  width: 3px;
  height: 3px;
  border-radius: 50%;
  pointer-events: none;
  z-index: 1000;
  opacity: 0.7;
  box-shadow: 0 0 4px currentColor;
  animation: particle-fade 1s ease-out forwards;
}

@keyframes particle-fade {
  0% {
    transform: scale(1);
    opacity: 0.7;
  }
  100% {
    transform: scale(0);
    opacity: 0;
  }
}

/* Quantum reveal animation for cards */
.quantum-card {
  opacity: 0;
  transform: translateY(20px);
  transition: opacity 0.5s ease, transform 0.5s ease;
}

.quantum-card.quantum-reveal {
  opacity: 1;
  transform: translateY(0);
}

/* Quantum pulse animation for the background */
@keyframes quantum-pulse {
  0% {
    opacity: 0.3;
    transform: scale(0.95);
  }
  50% {
    opacity: 0.5;
    transform: scale(1.05);
  }
  100% {
    opacity: 0.3;
    transform: scale(0.95);
  }
}

/* Atom spin animation for the icon */
.quantum-spin {
  display: inline-block;
  animation: atom-spin 10s linear infinite;
}

@keyframes atom-spin {
  from {
    transform: rotate(0deg);
  }
  to {
    transform: rotate(360deg);
  }
}
animations.js
ADDED
@@ -0,0 +1,550 @@
/**
 * Quantum Transition Animations
 * Provides dynamic transitions and particle effects for the Quantum NLP Framework
 */

// Wait for document to be fully loaded
document.addEventListener('DOMContentLoaded', function() {
  // Initialize animations when page loads
  initQuantumAnimations();

  // Add event listener for analyze button to trigger animations
  const analyzeBtn = document.getElementById('analyze-btn');
  if (analyzeBtn) {
    analyzeBtn.addEventListener('click', function() {
      // Only trigger animation if form is valid
      const form = document.getElementById('process-form');
      if (form.checkValidity()) {
        showQuantumTransition();
      }
    });

    // Enhanced button effects
    analyzeBtn.addEventListener('mouseover', function() {
      triggerSmallParticleEffect(this);
    });
  }

  // Initialize text-to-vision transitions
  initTextToVisionTransitions();

  // Check if results are present and apply animations
  const resultsContainer = document.getElementById('results-container');
  if (resultsContainer && resultsContainer.children.length > 0) {
    applyResultsAnimations(resultsContainer);
  }
});

/**
 * Initialize text-to-vision transitions with staggered timing
 */
function initTextToVisionTransitions() {
  const visionElements = document.querySelectorAll('.text-to-vision');
  visionElements.forEach((elem, index) => {
    // Add a staggered delay for smoother visual effect
    const delay = 300 + (index * 150);
    setTimeout(() => {
      elem.classList.add('text-to-vision-active');

      // Reset the animation after it completes to allow replaying
      setTimeout(() => {
        elem.classList.remove('text-to-vision-active');
      }, 1500);
    }, delay);
  });
}

/**
 * Apply animations to results container
 */
function applyResultsAnimations(container) {
  // Add entrance animation
  container.style.opacity = '0';
  container.style.transform = 'translateY(20px)';

  setTimeout(() => {
    container.style.transition = 'opacity 0.5s ease, transform 0.5s ease';
    container.style.opacity = '1';
    container.style.transform = 'translateY(0)';

    // Apply smoke dispersion effect to each card
    const cards = container.querySelectorAll('.quantum-card');
    cards.forEach((card, index) => {
      setTimeout(() => {
        card.classList.add('quantum-reveal');
      }, 300 + (index * 150));
    });
  }, 200);
}

/**
 * Trigger a small particle effect around an element
 */
function triggerSmallParticleEffect(element) {
  // Create a few particles around the button
  const rect = element.getBoundingClientRect();
  const centerX = rect.left + rect.width / 2;
  const centerY = rect.top + rect.height / 2;

  // Create 5-10 particles
  const count = Math.floor(Math.random() * 5) + 5;
  for (let i = 0; i < count; i++) {
    const particle = document.createElement('div');
    particle.className = 'quantum-particle';

    // Random position around the element
    const angle = Math.random() * Math.PI * 2;
    const distance = Math.random() * 20 + 10;
    const x = centerX + Math.cos(angle) * distance;
    const y = centerY + Math.sin(angle) * distance;

    // Random size
    const size = Math.random() * 4 + 2;

    // Set styles
    particle.style.left = x + 'px';
    particle.style.top = y + 'px';
    particle.style.width = size + 'px';
    particle.style.height = size + 'px';
    particle.style.backgroundColor = getQuantumParticleColor();

    // Add to body
    document.body.appendChild(particle);

    // Animate and remove
    setTimeout(() => {
      particle.remove();
    }, 1000);
  }
}

/**
 * Initialize all quantum animations and effects
 */
function initQuantumAnimations() {
  // Create canvas for particle effects
  createParticleCanvas();

  // Pre-load any assets needed for animations
  preloadAnimationAssets();
}

/**
 * Create canvas for particle animations
 */
function createParticleCanvas() {
  const canvas = document.createElement('canvas');
  canvas.id = 'quantum-particles';
  canvas.className = 'particle-canvas';
  canvas.width = window.innerWidth;
  canvas.height = window.innerHeight;

  // Add canvas to the body but keep it hidden initially
  canvas.style.display = 'none';
  canvas.style.position = 'fixed';
  canvas.style.top = 0;
  canvas.style.left = 0;
  canvas.style.pointerEvents = 'none';
  canvas.style.zIndex = 1000;
  document.body.appendChild(canvas);

  // Handle window resize
  window.addEventListener('resize', function() {
    canvas.width = window.innerWidth;
    canvas.height = window.innerHeight;
  });
}

/**
 * Preload any assets needed for animations
 */
function preloadAnimationAssets() {
  // Add any image preloading here if needed
}

/**
 * Show the quantum transition animation when analyzing text
 */
function showQuantumTransition() {
  // Get input and display areas
  const inputArea = document.querySelector('.card-body:has(#input_text)');
  const resultsArea = document.querySelector('#results-container');

  // Show loading overlay
  showLoadingOverlay();

  // Show particle effects
  triggerParticleEffect();

  // The actual form submission happens normally, but we add visual effects
}

/**
 * Show a quantum-themed loading overlay
 */
function showLoadingOverlay() {
  // Create overlay if it doesn't exist
  let overlay = document.getElementById('quantum-overlay');

  if (!overlay) {
    overlay = document.createElement('div');
    overlay.id = 'quantum-overlay';
    overlay.innerHTML = `
      <div class="quantum-loader">
        <div class="quantum-spinner">
          <div class="q-orbit q-orbit-1"></div>
          <div class="q-orbit q-orbit-2"></div>
          <div class="q-orbit q-orbit-3"></div>
          <div class="q-core"></div>
        </div>
        <div class="quantum-message">Quantum Processing</div>
      </div>
    `;
    document.body.appendChild(overlay);
  }

  // Show the overlay with animation
  overlay.style.display = 'flex';
  setTimeout(() => {
    overlay.classList.add('active');
  }, 10);
}

/**
 * Hide the quantum loading overlay
 */
function hideLoadingOverlay() {
  const overlay = document.getElementById('quantum-overlay');
  if (overlay) {
    overlay.classList.remove('active');
    setTimeout(() => {
      overlay.style.display = 'none';
    }, 500);
  }
}

/**
 * Trigger the particle whirlwind effect
 */
function triggerParticleEffect() {
  const canvas = document.getElementById('quantum-particles');
  if (!canvas) return;

  // Show canvas with fade-in
  canvas.style.display = 'block';
  setTimeout(() => {
    canvas.classList.add('active');
  }, 10);

  // Get canvas context
  const ctx = canvas.getContext('2d');
  ctx.clearRect(0, 0, canvas.width, canvas.height);

  // Create particles for the whirlwind effect
  const particles = [];
  const particleCount = 150;

  // Initialize particles
  for (let i = 0; i < particleCount; i++) {
    particles.push({
      x: canvas.width / 2,
      y: canvas.height / 2,
      size: Math.random() * 5 + 1,
      color: getQuantumParticleColor(),
      speedX: (Math.random() - 0.5) * 10,
      speedY: (Math.random() - 0.5) * 10,
      rotationRadius: Math.random() * 100 + 50,
      rotationSpeed: Math.random() * 0.1 + 0.02,
      rotationAngle: Math.random() * Math.PI * 2,
      opacity: Math.random() * 0.7 + 0.3,
      fadeSpeed: Math.random() * 0.02 + 0.005
    });
  }

  // Animation variables
  let frame = 0;
  const maxFrames = 120;

  // Animate particles
  function animateParticles() {
    // Clear canvas
    ctx.clearRect(0, 0, canvas.width, canvas.height);

    // Update and draw particles
    for (let i = 0; i < particles.length; i++) {
      const p = particles[i];

      // Update rotation angle
      p.rotationAngle += p.rotationSpeed;
|
279 |
+
|
280 |
+
// Calculate position with spiral effect
|
281 |
+
const spiralFactor = 1 + (frame / maxFrames) * 3;
|
282 |
+
const rotRadius = p.rotationRadius * (1 - frame / maxFrames);
|
283 |
+
|
284 |
+
p.x = canvas.width / 2 + Math.cos(p.rotationAngle) * rotRadius * spiralFactor;
|
285 |
+
p.y = canvas.height / 2 + Math.sin(p.rotationAngle) * rotRadius;
|
286 |
+
|
287 |
+
// Add dispersion effect in later frames
|
288 |
+
if (frame > maxFrames * 0.7) {
|
289 |
+
p.x += p.speedX;
|
290 |
+
p.y += p.speedY;
|
291 |
+
p.opacity -= p.fadeSpeed;
|
292 |
+
}
|
293 |
+
|
294 |
+
// Draw particle
|
295 |
+
ctx.globalAlpha = Math.max(0, p.opacity);
|
296 |
+
ctx.fillStyle = p.color;
|
297 |
+
ctx.beginPath();
|
298 |
+
ctx.arc(p.x, p.y, p.size, 0, Math.PI * 2);
|
299 |
+
ctx.fill();
|
300 |
+
}
|
301 |
+
|
302 |
+
// Increment frame
|
303 |
+
frame++;
|
304 |
+
|
305 |
+
// Continue animation or clean up
|
306 |
+
if (frame <= maxFrames) {
|
307 |
+
requestAnimationFrame(animateParticles);
|
308 |
+
} else {
|
309 |
+
// End animation with fade-out
|
310 |
+
canvas.classList.remove('active');
|
311 |
+
|
312 |
+
// After fade-out, hide the canvas completely
|
313 |
+
setTimeout(() => {
|
314 |
+
canvas.style.display = 'none';
|
315 |
+
ctx.clearRect(0, 0, canvas.width, canvas.height);
|
316 |
+
}, 300);
|
317 |
+
}
|
318 |
+
}
|
319 |
+
|
320 |
+
// Start animation
|
321 |
+
animateParticles();
|
322 |
+
|
323 |
+
// Set timeout to match form submission time
|
324 |
+
setTimeout(function() {
|
325 |
+
hideLoadingOverlay();
|
326 |
+
|
327 |
+
// Add reveal animation to results when they load
|
328 |
+
const resultElements = document.querySelectorAll('.card');
|
329 |
+
resultElements.forEach((el, index) => {
|
330 |
+
if (el.closest('#results-container')) {
|
331 |
+
// Add a staggered delay for each card
|
332 |
+
setTimeout(() => {
|
333 |
+
el.classList.add('quantum-reveal');
|
334 |
+
}, index * 150);
|
335 |
+
}
|
336 |
+
});
|
337 |
+
}, 2000); // Adjust timing as needed to match server response time
|
338 |
+
}
|
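The spiral in `animateParticles` comes from shrinking each particle's rotation radius toward zero as `frame` approaches `maxFrames`, while stretching the x-axis by a `spiralFactor` that grows from 1 to 4. A minimal Python sketch of the same per-frame position formula (the function name and the 400x300 canvas size are illustrative, not from the source):

```python
import math

def spiral_position(angle, rotation_radius, frame, max_frames, width=400, height=300):
    """Mirror of the per-frame position math in animateParticles()."""
    spiral_factor = 1 + (frame / max_frames) * 3             # grows from 1 to 4
    rot_radius = rotation_radius * (1 - frame / max_frames)  # shrinks to 0
    x = width / 2 + math.cos(angle) * rot_radius * spiral_factor
    y = height / 2 + math.sin(angle) * rot_radius
    return x, y

# At the final frame the radius has collapsed, so every particle sits at the center.
print(spiral_position(1.0, 80, 120, 120))  # → (200.0, 150.0)
```

Because only x is multiplied by `spiralFactor`, the orbit is an ellipse that widens as it collapses, which reads on screen as a whirlwind being pulled inward.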
/**
 * Get a color for quantum particles
 */
function getQuantumParticleColor() {
    const colors = [
        '#4285F4', // Blue
        '#0F9D58', // Green
        '#DB4437', // Red
        '#F4B400', // Yellow
        '#9C27B0', // Purple
        '#00BCD4', // Cyan
        '#3F51B5'  // Indigo
    ];
    return colors[Math.floor(Math.random() * colors.length)];
}

/**
 * Apply visualization effect to quantum results
 */
function visualizeQuantumResults() {
    const quantumScoreEl = document.querySelector('.quantum-score');
    if (quantumScoreEl) {
        // Get the quantum score
        const score = parseFloat(quantumScoreEl.textContent);

        // Create a visualization canvas for the score
        createQuantumScoreVisualization(quantumScoreEl, score);

        // Apply visualization to the entire container
        const container = document.querySelector('.quantum-visualized');
        if (container) {
            applyQuantumVisualization(container, score);
        }
    }
}

/**
 * Create a visualization for the quantum score
 */
function createQuantumScoreVisualization(element, score) {
    // Add special particle effects around the score
    const parentEl = element.closest('.text-end');
    if (!parentEl) return;

    // Create a canvas for the visualization
    const canvas = document.createElement('canvas');
    canvas.className = 'quantum-score-canvas';
    canvas.width = 200;
    canvas.height = 100;
    canvas.style.position = 'absolute';
    canvas.style.top = '0';
    canvas.style.right = '0';
    canvas.style.pointerEvents = 'none';
    canvas.style.zIndex = '1';

    // Insert canvas
    parentEl.style.position = 'relative';
    parentEl.appendChild(canvas);

    // Create particles
    const ctx = canvas.getContext('2d');
    const particles = [];
    const particleCount = Math.round(score * 50); // More particles for higher scores

    // Create particles based on score
    for (let i = 0; i < particleCount; i++) {
        particles.push({
            x: canvas.width * 0.75,
            y: canvas.height * 0.5,
            size: Math.random() * 3 + 1,
            color: getScoreColor(score),
            speedX: (Math.random() - 0.5) * 2,
            speedY: (Math.random() - 0.5) * 2,
            opacity: Math.random() * 0.5 + 0.3,
            life: Math.random() * 100 + 50
        });
    }

    // Animate particles
    function animate() {
        ctx.clearRect(0, 0, canvas.width, canvas.height);

        let remainingParticles = 0;

        for (let i = 0; i < particles.length; i++) {
            const p = particles[i];

            if (p.life > 0) {
                p.x += p.speedX;
                p.y += p.speedY;
                p.opacity = Math.max(0, p.opacity - 0.005);
                p.life--;

                ctx.globalAlpha = p.opacity;
                ctx.fillStyle = p.color;
                ctx.beginPath();
                ctx.arc(p.x, p.y, p.size, 0, Math.PI * 2);
                ctx.fill();

                remainingParticles++;
            }
        }

        if (remainingParticles > 0) {
            requestAnimationFrame(animate);
        }
    }

    // Start animation
    animate();
}

/**
 * Apply quantum visualization to an element
 */
function applyQuantumVisualization(element, score) {
    // Add the visualized class
    element.classList.add('quantum-visualized');

    // Create dynamic quantum effect based on score
    const intensity = score || 0.5;

    // Add a pulsing background effect
    element.style.setProperty('--quantum-pulse-intensity', intensity);

    // Create tumbleweed whirlwind effect
    createTumbleweeds(element, Math.round(intensity * 10));
}

/**
 * Create tumbleweed whirlwind particles inside an element
 */
function createTumbleweeds(element, count) {
    for (let i = 0; i < count; i++) {
        const tumbleweed = document.createElement('div');
        tumbleweed.className = 'tumbleweed';

        // Random position within the element
        const rect = element.getBoundingClientRect();
        const x = Math.random() * rect.width;
        const y = Math.random() * rect.height;

        // Random size
        const size = Math.random() * 20 + 10;

        // Set styles
        tumbleweed.style.left = x + 'px';
        tumbleweed.style.top = y + 'px';
        tumbleweed.style.width = size + 'px';
        tumbleweed.style.height = size + 'px';

        // Add to the element
        element.appendChild(tumbleweed);

        // Animate the tumbleweed
        animateTumbleweed(tumbleweed, rect);
    }
}

/**
 * Animate a tumbleweed particle
 */
function animateTumbleweed(element, containerRect) {
    const duration = Math.random() * 8000 + 4000; // 4-12 seconds
    let startTime = null;

    // The animation function
    function animate(timestamp) {
        if (!startTime) startTime = timestamp;
        const elapsed = timestamp - startTime;
        const progress = Math.min(elapsed / duration, 1);

        // Spiral motion
        const angle = progress * Math.PI * 6; // 3 full rotations
        const radius = progress * 100; // Expand outward
        const centerX = containerRect.width / 2;
        const centerY = containerRect.height / 2;

        const x = centerX + Math.cos(angle) * radius;
        const y = centerY + Math.sin(angle) * radius;

        // Apply position
        element.style.transform = `translate(${x}px, ${y}px) rotate(${angle * 180 / Math.PI}deg)`;

        // Fade out towards the end
        if (progress > 0.7) {
            element.style.opacity = 1 - ((progress - 0.7) / 0.3);
        }

        // Continue animation if not complete
        if (progress < 1) {
            requestAnimationFrame(animate);
        } else {
            // Remove element when animation completes
            element.remove();
        }
    }

    // Start the animation
    requestAnimationFrame(animate);
}

/**
 * Get color based on score value
 */
function getScoreColor(score) {
    if (score < 0.3) return '#dc3545'; // Low - red
    if (score < 0.6) return '#ffc107'; // Medium - yellow
    if (score < 0.8) return '#0dcaf0'; // Good - cyan
    return '#198754'; // High - green
}
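The `getScoreColor` thresholds map a score in [0, 1] onto four Bootstrap palette colors. For reference, a hedged Python equivalent of the same mapping (the helper name is mine, not from the source):

```python
def score_color(score):
    """Same threshold mapping as getScoreColor() in animations.js."""
    if score < 0.3:
        return '#dc3545'  # low - red
    if score < 0.6:
        return '#ffc107'  # medium - yellow
    if score < 0.8:
        return '#0dcaf0'  # good - cyan
    return '#198754'      # high - green

print(score_color(0.75))  # → #0dcaf0
```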
app.cpython-313.pyc
ADDED
Binary file (9.53 kB). View file
app.py
ADDED
@@ -0,0 +1,251 @@
import os
import logging
from flask import Flask, render_template, request, jsonify, flash, redirect, url_for
import spacy
from flask_sqlalchemy import SQLAlchemy
from sqlalchemy.orm import DeclarativeBase

from nlp_processor import process_text
from quantum_thinking import quantum_recursive_thinking
from openai_integration import generate_completion

# Configure logging
logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)

# Create the base class for SQLAlchemy models
class Base(DeclarativeBase):
    pass

# Initialize SQLAlchemy
db = SQLAlchemy(model_class=Base)

# Create Flask app
app = Flask(__name__)
app.secret_key = os.environ.get("SESSION_SECRET")

# Configure database
app.config["SQLALCHEMY_DATABASE_URI"] = os.environ.get("DATABASE_URL")
app.config["SQLALCHEMY_ENGINE_OPTIONS"] = {
    "pool_recycle": 300,
    "pool_pre_ping": True,
}

# Initialize the extensions
db.init_app(app)

# Import models and create database tables
with app.app_context():
    import models  # This has to be imported after db is initialized
    db.create_all()

    # Initialize the task scheduler
    from task_scheduler import scheduler
    scheduler.start()

    logger.info("Database tables created and task scheduler started")

# Load spaCy model (English)
try:
    nlp = spacy.load("en_core_web_sm")
    logger.info("Successfully loaded spaCy English model")
except OSError:
    logger.warning("Downloading spaCy model...")
    logger.warning("Please run: python -m spacy download en_core_web_sm")
    # Fallback: use a smaller model
    try:
        nlp = spacy.load("en")
    except OSError:
        logger.error("Failed to load spaCy model. Using blank model.")
        nlp = spacy.blank("en")

@app.route('/')
def index():
    return render_template('index.html')

@app.route('/settings')
def settings():
    """Settings page with user preferences for the application."""
    api_key = os.environ.get("OPENAI_API_KEY", "")
    api_key_masked = "••••••••" + api_key[-4:] if api_key else ""
    api_key_status = bool(api_key)
    ai_model = "gpt-4o"  # Default to the newest model

    return render_template(
        'settings.html',
        api_key_masked=api_key_masked,
        api_key_status=api_key_status,
        ai_model=ai_model
    )

@app.route('/zap-integrations')
def zap_integrations():
    integrations = [
        {
            "name": "OpenAI Connector",
            "description": "Connect the quantum framework to OpenAI's GPT models",
            "icon": "fa-robot",
            "status": "active" if os.environ.get("OPENAI_API_KEY") else "inactive"
        },
        {
            "name": "Language Processing Pipeline",
            "description": "NLP processing workflow with quantum enhancement",
            "icon": "fa-code-branch",
            "status": "active"
        },
        {
            "name": "Quantum Discord Notifier",
            "description": "Send multi-dimensional analysis results to Discord",
            "icon": "fa-bell",
            "status": "pending"
        },
        {
            "name": "JSON Export Automation",
            "description": "Export quantum thinking results to JSON format",
            "icon": "fa-file-export",
            "status": "active"
        },
        {
            "name": "Email Summarization",
            "description": "Generate quantum-enhanced summaries of emails",
            "icon": "fa-envelope",
            "status": "pending"
        }
    ]
    return render_template('zap_integrations.html', integrations=integrations)

@app.route('/automation-workflow')
def automation_workflow():
    workflow_steps = [
        {
            "id": 1,
            "name": "Text Input",
            "description": "User enters text for quantum processing",
            "status": "completed",
            "color": "#da4b86"
        },
        {
            "id": 2,
            "name": "NLP Processing",
            "description": "Initial language processing with spaCy",
            "status": "completed",
            "color": "#6f42c1"
        },
        {
            "id": 3,
            "name": "Quantum Thinking",
            "description": "Multi-dimensional recursive thinking algorithm",
            "status": "active",
            "color": "#0dcaf0"
        },
        {
            "id": 4,
            "name": "Pattern Recognition",
            "description": "Identifying patterns across quantum dimensions",
            "status": "pending",
            "color": "#6f42c1"
        },
        {
            "id": 5,
            "name": "Response Generation",
            "description": "Creating AI response with quantum insights",
            "status": "pending",
            "color": "#da4b86"
        }
    ]
    return render_template('automation_workflow.html', workflow_steps=workflow_steps)

@app.route('/process', methods=['POST'])
def process():
    try:
        input_text = request.form.get('input_text', '')

        if not input_text:
            flash('Please enter some text to process', 'warning')
            return redirect(url_for('index'))

        # Process with NLP
        nlp_results = process_text(nlp, input_text)

        # Process with quantum-inspired recursive thinking
        depth = int(request.form.get('depth', 3))
        quantum_results = quantum_recursive_thinking(input_text, depth)

        # Generate OpenAI completion
        use_ai = request.form.get('use_ai') == 'on'
        ai_response = None

        if use_ai:
            try:
                ai_response = generate_completion(input_text, quantum_results)
            except Exception as e:
                logger.error(f"OpenAI API error: {str(e)}")
                flash(f"Error with OpenAI API: {str(e)}", 'danger')

        return render_template(
            'index.html',
            input_text=input_text,
            nlp_results=nlp_results,
            quantum_results=quantum_results,
            ai_response=ai_response,
            depth=depth
        )

    except Exception as e:
        logger.error(f"Error processing request: {str(e)}")
        flash(f"An error occurred: {str(e)}", 'danger')
        return redirect(url_for('index'))

@app.route('/save-api-key', methods=['POST'])
def save_api_key():
    """Save OpenAI API key."""
    api_key = request.form.get('api_key', '')
    ai_model = request.form.get('ai_model', 'gpt-4o')

    # For a production app, we would need to securely store this API key
    # For this demo, we will just flash a message
    if api_key:
        flash('API key settings updated successfully!', 'success')
    else:
        flash('API key has been cleared.', 'warning')

    return redirect(url_for('settings'))

@app.route('/api/process', methods=['POST'])
def api_process():
    try:
        data = request.get_json()
        input_text = data.get('input_text', '')
        depth = data.get('depth', 3)
        use_ai = data.get('use_ai', False)

        if not input_text:
            return jsonify({'error': 'No input text provided'}), 400

        # Process with NLP
        nlp_results = process_text(nlp, input_text)

        # Process with quantum-inspired recursive thinking
        quantum_results = quantum_recursive_thinking(input_text, depth)

        # Generate OpenAI completion
        ai_response = None
        if use_ai:
            try:
                ai_response = generate_completion(input_text, quantum_results)
            except Exception as e:
                logger.error(f"OpenAI API error: {str(e)}")
                return jsonify({'error': f'OpenAI API error: {str(e)}'}), 500

        return jsonify({
            'nlp_results': nlp_results,
            'quantum_results': quantum_results,
            'ai_response': ai_response
        })

    except Exception as e:
        logger.error(f"API Error: {str(e)}")
        return jsonify({'error': str(e)}), 500

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000, debug=True)
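The `/api/process` endpoint expects a JSON body with `input_text`, an optional `depth`, and a `use_ai` flag, and returns a 400 with `{'error': 'No input text provided'}` when the text is empty. A minimal client-side payload builder that mirrors that validation (the helper is illustrative and does not contact the running server):

```python
import json

def build_process_payload(input_text, depth=3, use_ai=False):
    """Validate and serialize a request body for POST /api/process."""
    if not input_text:
        # Mirrors the endpoint's 400 response for missing text
        raise ValueError('No input text provided')
    return json.dumps({'input_text': input_text, 'depth': depth, 'use_ai': use_ai})

payload = build_process_payload('Hello quantum world', depth=2)
print(payload)
```

The defaults (`depth=3`, `use_ai=False`) match the fallbacks `data.get('depth', 3)` and `data.get('use_ai', False)` inside `api_process`.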
automation_workflow.html
ADDED
@@ -0,0 +1,426 @@
{% extends 'layout.html' %}

{% block content %}
<div class="row mb-4">
    <div class="col-12">
        <div class="glass-card">
            <div class="p-4">
                <h1 class="display-5 quantum-glow">
                    <i class="fas fa-cogs me-2"></i> Automation Workflow
                </h1>
                <p class="lead">Visualize and manage the multi-dimensional quantum processing pipeline.</p>
                <div class="vision-progress">
                    <div class="vision-progress-bar"></div>
                </div>
            </div>
        </div>
    </div>
</div>

<div class="row">
    <div class="col-md-8">
        <div class="glass-card mb-4">
            <div class="card-body p-4">
                <h3 class="h4 mb-4 quantum-glow">Quantum Processing Pipeline</h3>

                <!-- Pyramid Workflow Visualization -->
                <div class="workflow-pyramid-container">
                    <div class="workflow-pyramid position-relative">
                        {% for step in workflow_steps %}
                        <div class="workflow-step {% if step.status == 'completed' %}completed{% elif step.status == 'active' %}active{% else %}pending{% endif %}"
                             data-step-id="{{ step.id }}" style="--step-color: {{ step.color }};">
                            <div class="step-indicator">
                                <div class="step-number">{{ step.id }}</div>
                                <div class="step-line"></div>
                            </div>
                            <div class="step-content">
                                <h4 class="step-title">{{ step.name }}</h4>
                                <p class="step-desc">{{ step.description }}</p>
                                <span class="step-status badge {% if step.status == 'completed' %}bg-success{% elif step.status == 'active' %}bg-info{% else %}bg-secondary{% endif %} quantum-score">
                                    {{ step.status | capitalize }}
                                </span>
                            </div>

                            <!-- LED tracer lines connecting steps -->
                            {% if not loop.last %}
                            <div class="step-connector" style="--connector-color: {{ step.color }};"></div>
                            {% endif %}
                        </div>
                        {% endfor %}

                        <!-- 3D Tetrahedron visualization at the bottom -->
                        <div class="tetrahedron-container">
                            <div class="tetrahedron">
                                <div class="tetra-face tetra-face-1"></div>
                                <div class="tetra-face tetra-face-2"></div>
                                <div class="tetra-face tetra-face-3"></div>
                                <div class="tetra-face tetra-face-4"></div>
                            </div>
                        </div>
                    </div>
                </div>
            </div>
        </div>
    </div>

    <div class="col-md-4">
        <div class="glass-card mb-4">
            <div class="card-body p-4">
                <h3 class="h4 mb-4 quantum-glow">Workflow Status</h3>

                <div class="mb-4">
                    <div class="d-flex justify-content-between align-items-center mb-2">
                        <span>Overall Progress</span>
                        <span class="badge bg-info quantum-score">60%</span>
                    </div>
                    <div class="progress" style="height: 8px;">
                        <div class="progress-bar progress-bar-striped progress-bar-animated bg-info" role="progressbar" style="width: 60%" aria-valuenow="60" aria-valuemin="0" aria-valuemax="100"></div>
                    </div>
                </div>

                <div class="workflow-stats">
                    <div class="stat-item mb-3">
                        <div class="d-flex justify-content-between">
                            <span>Completed Steps</span>
                            <span class="fw-bold text-success">2</span>
                        </div>
                    </div>
                    <div class="stat-item mb-3">
                        <div class="d-flex justify-content-between">
                            <span>Active Steps</span>
                            <span class="fw-bold text-info">1</span>
                        </div>
                    </div>
                    <div class="stat-item mb-3">
                        <div class="d-flex justify-content-between">
                            <span>Pending Steps</span>
                            <span class="fw-bold text-secondary">2</span>
                        </div>
                    </div>
                    <div class="stat-item mb-3">
                        <div class="d-flex justify-content-between">
                            <span>Total Execution Time</span>
                            <span class="fw-bold">2.4s</span>
                        </div>
                    </div>
                </div>

                <div class="mt-4 d-grid">
                    <button class="btn btn-outline-light quantum-btn">
                        <i class="fas fa-play me-2"></i> Run Workflow
                    </button>
                </div>
            </div>
        </div>

        <div class="glass-card">
            <div class="card-body p-4">
                <h3 class="h4 mb-4 quantum-glow">Current Activity</h3>

                <div class="activity-log" id="activity-log">
                    <div class="activity-item">
                        <div class="activity-timestamp">11:15:30</div>
                        <div class="activity-content">
                            <span class="badge bg-success me-1">Step 2</span>
                            NLP Processing completed
                        </div>
                    </div>
                    <div class="activity-item">
                        <div class="activity-timestamp">11:15:28</div>
                        <div class="activity-content">
                            <span class="badge bg-info me-1">Step 3</span>
                            Starting Quantum Thinking algorithm
                        </div>
                    </div>
                    <div class="activity-item">
                        <div class="activity-timestamp">11:15:25</div>
                        <div class="activity-content">
                            <span class="badge bg-success me-1">Step 1</span>
                            Text input received (128 chars)
                        </div>
                    </div>
                </div>
            </div>
        </div>
    </div>
</div>

<style>
/* Workflow Pyramid Container */
.workflow-pyramid-container {
    padding: 20px 0;
    position: relative;
}

.workflow-pyramid {
    display: flex;
    flex-direction: column;
    gap: 30px;
    padding: 20px;
    position: relative;
}

/* Workflow Steps */
.workflow-step {
    --step-bg: rgba(30, 41, 59, 0.4);
    --step-border: var(--step-color);

    display: flex;
    background: var(--step-bg);
    border-radius: 12px;
    padding: 15px;
    position: relative;
    transition: all 0.3s ease;
    box-shadow: 0 3px 15px rgba(0, 0, 0, 0.1);
    border-left: 3px solid var(--step-border);
}

.workflow-step:hover {
    transform: translateY(-5px);
}

.workflow-step.completed {
    --step-bg: rgba(40, 167, 69, 0.1);
}

.workflow-step.active {
    --step-bg: rgba(13, 202, 240, 0.1);
    animation: step-pulse 2s infinite alternate;
}

@keyframes step-pulse {
    0% { box-shadow: 0 0 5px var(--step-color); }
    100% { box-shadow: 0 0 15px var(--step-color); }
}

/* Step Connector - LED Tracer Lines */
.step-connector {
    position: absolute;
    left: 24px;
    top: 100%;
    height: 30px;
    width: 2px;
    background: linear-gradient(to bottom, var(--connector-color), transparent);
    z-index: 1;
    animation: pulse-connector 2s infinite alternate;
}

@keyframes pulse-connector {
    0% { opacity: 0.5; }
    100% { opacity: 1; }
}

/* Step Content */
.step-indicator {
    display: flex;
    flex-direction: column;
    align-items: center;
    margin-right: 15px;
}

.step-number {
    width: 32px;
    height: 32px;
    border-radius: 50%;
    background: var(--step-color);
    display: flex;
    align-items: center;
    justify-content: center;
    font-weight: bold;
    color: white;
    box-shadow: 0 0 10px var(--step-color);
}

.step-line {
    width: 2px;
    height: 100%;
    background-color: var(--step-color);
    opacity: 0.5;
}

.step-content {
    flex: 1;
}

.step-title {
    font-size: 1.1rem;
    margin-bottom: 5px;
    color: var(--step-color);
}

.step-desc {
    font-size: 0.9rem;
    opacity: 0.8;
    margin-bottom: 10px;
}

/* 3D Tetrahedron */
.tetrahedron-container {
    margin-top: 30px;
    height: 150px;
    display: flex;
    justify-content: center;
    align-items: center;
    perspective: 1000px;
}

.tetrahedron {
    width: 100px;
    height: 100px;
    position: relative;
    transform-style: preserve-3d;
    animation: tetra-rotate 15s linear infinite;
}

@keyframes tetra-rotate {
    0% { transform: rotateX(0) rotateY(0); }
    100% { transform: rotateX(360deg) rotateY(360deg); }
}

.tetra-face {
    position: absolute;
    width: 0;
    height: 0;
    border-style: solid;
    opacity: 0.7;
    backface-visibility: visible;
}

.tetra-face-1 {
    border-width: 0 50px 86.6px 50px;
    border-color: transparent transparent rgba(218, 75, 134, 0.5) transparent;
    transform: rotateX(30deg) translateY(-50px) translateZ(28.87px);
}

.tetra-face-2 {
    border-width: 0 50px 86.6px 50px;
    border-color: transparent transparent rgba(111, 66, 193, 0.5) transparent;
    transform: rotateX(30deg) rotateY(120deg) translateY(-50px) translateZ(28.87px);
}

.tetra-face-3 {
    border-width: 0 50px 86.6px 50px;
    border-color: transparent transparent rgba(13, 202, 240, 0.5) transparent;
    transform: rotateX(30deg) rotateY(240deg) translateY(-50px) translateZ(28.87px);
}

.tetra-face-4 {
    border-width: 0 50px 86.6px 50px;
    border-color: transparent transparent rgba(255, 255, 255, 0.3) transparent;
    transform: rotateX(-30deg) rotateY(0deg) translateY(50px) translateZ(28.87px);
}

/* Activity Log */
.activity-log {
    display: flex;
    flex-direction: column;
    gap: 15px;
    height: 200px;
    overflow-y: auto;
}

.activity-item {
    display: flex;
    gap: 10px;
    font-size: 0.9rem;
    padding-bottom: 10px;
    border-bottom: 1px solid rgba(255, 255, 255, 0.1);
}

.activity-timestamp {
    color: rgba(255, 255, 255, 0.6);
    white-space: nowrap;
}

.activity-content {
    flex: 1;
}
</style>

<script>
document.addEventListener('DOMContentLoaded', function() {
    // Animate the workflow steps on load
|
343 |
+
const steps = document.querySelectorAll('.workflow-step');
|
344 |
+
|
345 |
+
steps.forEach((step, index) => {
|
346 |
+
setTimeout(() => {
|
347 |
+
step.classList.add('fade-in');
|
348 |
+
}, index * 200);
|
349 |
+
});
|
350 |
+
|
351 |
+
// Simulate activity log updates
|
352 |
+
const activityLog = document.getElementById('activity-log');
|
353 |
+
const activities = [
|
354 |
+
{ time: '11:15:35', step: 3, message: 'Processing quantum dimension 1/3' },
|
355 |
+
{ time: '11:15:42', step: 3, message: 'Processing quantum dimension 2/3' },
|
356 |
+
{ time: '11:15:50', step: 3, message: 'Processing quantum dimension 3/3' }
|
357 |
+
];
|
358 |
+
|
359 |
+
let i = 0;
|
360 |
+
const activityTimer = setInterval(() => {
|
361 |
+
if (i >= activities.length) {
|
362 |
+
clearInterval(activityTimer);
|
363 |
+
return;
|
364 |
+
}
|
365 |
+
|
366 |
+
const activity = activities[i];
|
367 |
+
const activityItem = document.createElement('div');
|
368 |
+
activityItem.className = 'activity-item';
|
369 |
+
activityItem.innerHTML = `
|
370 |
+
<div class="activity-timestamp">${activity.time}</div>
|
371 |
+
<div class="activity-content">
|
372 |
+
<span class="badge bg-info me-1">Step ${activity.step}</span>
|
373 |
+
${activity.message}
|
374 |
+
</div>
|
375 |
+
`;
|
376 |
+
|
377 |
+
activityLog.insertBefore(activityItem, activityLog.firstChild);
|
378 |
+
|
379 |
+
// Apply fade-in effect
|
380 |
+
activityItem.style.opacity = '0';
|
381 |
+
setTimeout(() => {
|
382 |
+
activityItem.style.transition = 'opacity 0.5s ease';
|
383 |
+
activityItem.style.opacity = '1';
|
384 |
+
}, 10);
|
385 |
+
|
386 |
+
i++;
|
387 |
+
}, 5000);
|
388 |
+
|
389 |
+
// Add particle effects to the tetrahedron
|
390 |
+
const tetrahedron = document.querySelector('.tetrahedron');
|
391 |
+
|
392 |
+
setInterval(() => {
|
393 |
+
const rect = tetrahedron.getBoundingClientRect();
|
394 |
+
const x = rect.left + rect.width / 2;
|
395 |
+
const y = rect.top + rect.height / 2;
|
396 |
+
|
397 |
+
// Random position around the tetrahedron
|
398 |
+
const angle = Math.random() * Math.PI * 2;
|
399 |
+
const distance = 20 + Math.random() * 30;
|
400 |
+
const particleX = x + Math.cos(angle) * distance;
|
401 |
+
const particleY = y + Math.sin(angle) * distance;
|
402 |
+
|
403 |
+
// Create particle
|
404 |
+
createParticle(particleX, particleY);
|
405 |
+
}, 300);
|
406 |
+
|
407 |
+
function createParticle(x, y) {
|
408 |
+
const colors = ['#da4b86', '#6f42c1', '#0dcaf0'];
|
409 |
+
const color = colors[Math.floor(Math.random() * colors.length)];
|
410 |
+
|
411 |
+
const particle = document.createElement('div');
|
412 |
+
particle.className = 'quantum-particle';
|
413 |
+
particle.style.left = x + 'px';
|
414 |
+
particle.style.top = y + 'px';
|
415 |
+
particle.style.color = color;
|
416 |
+
|
417 |
+
document.body.appendChild(particle);
|
418 |
+
|
419 |
+
// Remove after animation
|
420 |
+
setTimeout(() => {
|
421 |
+
particle.remove();
|
422 |
+
}, 1000);
|
423 |
+
}
|
424 |
+
});
|
425 |
+
</script>
|
426 |
+
{% endblock %}
|
bridge.log
ADDED
@@ -0,0 +1 @@
./start_atlas_unified.sh: line 123: python: command not found

bridge.pid
ADDED
@@ -0,0 +1 @@
57095
bridge_server.py
ADDED
@@ -0,0 +1,80 @@
#!/usr/bin/env python3
"""
Atlas Unified Bridge Server
This server connects all Atlas platform components together.
"""

import os
import sys
import json
import time
import logging
import requests
from fastapi import FastAPI, HTTPException, Request
from fastapi.middleware.cors import CORSMiddleware
from fastapi.responses import JSONResponse
import uvicorn
from pydantic import BaseModel
from typing import Dict, List, Any, Optional

# Setup logging
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
    handlers=[
        logging.StreamHandler(),
        logging.FileHandler("bridge_server.log")
    ]
)
logger = logging.getLogger("atlas_bridge")

# Component endpoints
OPENMANUS_ENDPOINT = "http://localhost:50505"
CASIBASE_ENDPOINT = "http://localhost:7777"
CYPHER_ENDPOINT = "http://localhost:5000"
QUANTUMVISION_ENDPOINT = "http://localhost:8000"
CONVERSATION_ENDPOINT = "http://localhost:8001"

# Initialize FastAPI app
app = FastAPI(title="Atlas Unified Bridge")

# Add CORS middleware
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],  # Adjust in production
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

# Health check endpoint
@app.get("/health")
async def health_check():
    """Health check endpoint to verify if bridge server is running"""
    status = {
        "bridge": "operational",
        "components": {
            "openmanus": check_component_health(OPENMANUS_ENDPOINT),
            "casibase": check_component_health(CASIBASE_ENDPOINT),
            "cypher": check_component_health(CYPHER_ENDPOINT),
            "quantumvision": check_component_health(QUANTUMVISION_ENDPOINT),
            "conversation": check_component_health(CONVERSATION_ENDPOINT)
        },
        "timestamp": time.time()
    }
    return status

def check_component_health(endpoint: str) -> str:
    """Check if a component is operational"""
    try:
        response = requests.get(f"{endpoint}/health", timeout=3)
        if response.status_code == 200:
            return "operational"
    except requests.RequestException:
        pass
    return "unavailable"

# Main entry point
if __name__ == "__main__":
    logger.info("Starting Atlas Unified Bridge Server")
    uvicorn.run("bridge_server:app", host="0.0.0.0", port=8080, reload=True)
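The `/health` payload assembled above can be sketched without a running server. In this sketch, `build_status` and the two-entry endpoint table are illustrative stand-ins (not part of the uploaded module); the real handler probes each endpoint over HTTP via `check_component_health`, which is injected here as a fake so the shape of the response can be shown in isolation:

```python
import time

# Hypothetical stand-in for the component table in bridge_server.py.
COMPONENT_ENDPOINTS = {
    "openmanus": "http://localhost:50505",
    "casibase": "http://localhost:7777",
}

def build_status(check):
    """Assemble the same payload shape that /health returns, with the
    per-component probe injected as `check` so it can be faked."""
    return {
        "bridge": "operational",
        "components": {name: check(url) for name, url in COMPONENT_ENDPOINTS.items()},
        "timestamp": time.time(),
    }

# With no components running, every probe reports "unavailable".
status = build_status(lambda url: "unavailable")
```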
browser_automation.py
ADDED
@@ -0,0 +1,221 @@
"""
Browser automation module for web scraping and analysis.
This module enables the AI assistant to control a web browser,
scrape content, and extract information from websites.
"""

import json
import logging
import re
import urllib.parse
from datetime import datetime

import requests
from bs4 import BeautifulSoup
from models import WebResource, Task, db

logger = logging.getLogger(__name__)


class BrowserAutomation:
    """Class for handling browser automation and web scraping"""

    def __init__(self, user_agent=None, headers=None):
        self.user_agent = user_agent or 'QuantumAI Assistant/1.0'
        self.headers = headers or {
            'User-Agent': self.user_agent,
            'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
            'Accept-Language': 'en-US,en;q=0.5',
        }
        self.session = requests.Session()
        self.session.headers.update(self.headers)

    def fetch_page(self, url, task_id=None):
        """
        Fetch a webpage and parse its content

        Args:
            url (str): The URL to fetch
            task_id (int, optional): Associated task ID

        Returns:
            dict: Result containing status, parsed content, and metadata
        """
        try:
            # Parse and normalize URL
            parsed_url = urllib.parse.urlparse(url)
            if not parsed_url.scheme:
                url = 'https://' + url

            logger.info(f"Fetching URL: {url}")
            response = self.session.get(url, timeout=15)
            response.raise_for_status()

            # Parse with BeautifulSoup
            soup = BeautifulSoup(response.text, 'html.parser')
            title = soup.title.string if soup.title else "No title found"

            # Store or update the web resource
            web_resource = self._store_web_resource(url, title, task_id)

            # Extract main content, remove scripts, styles, etc.
            for script in soup(["script", "style", "meta", "noscript"]):
                script.extract()

            # Get text content and normalize whitespace
            text_content = soup.get_text(separator=' ')
            text_content = re.sub(r'\s+', ' ', text_content).strip()

            return {
                'status': 'success',
                'url': url,
                'title': title,
                'content': text_content,
                'html': response.text,
                'web_resource_id': web_resource.id,
                'timestamp': datetime.utcnow().isoformat()
            }

        except Exception as e:
            logger.error(f"Error fetching URL {url}: {str(e)}")
            return {
                'status': 'error',
                'url': url,
                'error': str(e),
                'timestamp': datetime.utcnow().isoformat()
            }

    def _store_web_resource(self, url, title, task_id=None):
        """Store or update web resource in the database"""
        try:
            web_resource = WebResource.query.filter_by(url=url).first()

            if not web_resource:
                web_resource = WebResource(
                    url=url,
                    title=title,
                    category='general',
                    last_accessed=datetime.utcnow(),
                )
                if task_id:
                    web_resource.task_id = task_id
                db.session.add(web_resource)
            else:
                web_resource.last_accessed = datetime.utcnow()
                web_resource.title = title

            db.session.commit()
            return web_resource

        except Exception as e:
            logger.error(f"Error storing web resource: {str(e)}")
            db.session.rollback()
            # Return a placeholder object if db operation fails
            return WebResource(url=url, title=title)

    def extract_links(self, html):
        """Extract all links from an HTML document"""
        soup = BeautifulSoup(html, 'html.parser')
        links = []

        for a_tag in soup.find_all('a', href=True):
            href = a_tag['href']
            text = a_tag.get_text(strip=True)

            if href.startswith('#') or href.startswith('javascript:'):
                continue

            links.append({
                'href': href,
                'text': text[:100] if text else ""
            })

        return links

    def extract_structured_data(self, html):
        """Extract structured data (JSON-LD, microdata) from an HTML document"""
        soup = BeautifulSoup(html, 'html.parser')
        structured_data = []

        # Extract JSON-LD
        for script in soup.find_all('script', type='application/ld+json'):
            try:
                data = json.loads(script.string)
                structured_data.append({
                    'type': 'json-ld',
                    'data': data
                })
            except json.JSONDecodeError:
                pass

        # TODO: Add microdata and RDFa extraction if needed

        return structured_data

    def analyze_page_content(self, content, url=None):
        """Analyze page content to extract key information using NLP"""
        # This will be enhanced with our quantum NLP process
        # For now, just return a simple analysis
        word_count = len(content.split())
        sentences = re.split(r'[.!?]+', content)
        sentence_count = len(sentences)

        return {
            'word_count': word_count,
            'sentence_count': sentence_count,
            'average_sentence_length': word_count / max(1, sentence_count),
            'url': url
        }


# Helper functions for browser automation tasks
def create_scraping_task(url, title, description=None, scheduled_for=None):
    """Create a new web scraping task"""
    task = Task(
        title=title,
        description=description or f"Scrape content from {url}",
        status='pending',
        task_type='web_scrape',
        scheduled_for=scheduled_for,
        config={'url': url}
    )
    db.session.add(task)
    db.session.commit()
    return task


def execute_scraping_task(task_id):
    """Execute a web scraping task"""
    task = Task.query.get(task_id)
    if not task or task.task_type != 'web_scrape':
        return {'status': 'error', 'message': 'Invalid task'}

    try:
        task.status = 'in_progress'
        db.session.commit()

        url = task.config.get('url')
        browser = BrowserAutomation()
        result = browser.fetch_page(url, task_id=task.id)

        if result['status'] == 'success':
            # Also analyze the content
            analysis = browser.analyze_page_content(result['content'], url)
            result['analysis'] = analysis

            task.status = 'completed'
            task.completed_at = datetime.utcnow()
            task.result = result
        else:
            task.status = 'failed'
            task.result = result

        db.session.commit()
        return result

    except Exception as e:
        logger.error(f"Error executing task {task_id}: {str(e)}")
        task.status = 'failed'
        task.result = {'error': str(e)}
        db.session.commit()
        return {'status': 'error', 'message': str(e)}
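The `analyze_page_content` heuristic in the file above is pure string processing, so its behavior can be checked standalone. This sketch repeats the function body outside the class, with no database models or HTTP involved:

```python
import re

def analyze_page_content(content, url=None):
    # Same simple heuristic as BrowserAutomation.analyze_page_content:
    # whitespace-split word count, punctuation-split sentence count.
    word_count = len(content.split())
    sentences = re.split(r'[.!?]+', content)
    sentence_count = len(sentences)
    return {
        'word_count': word_count,
        'sentence_count': sentence_count,
        'average_sentence_length': word_count / max(1, sentence_count),
        'url': url,
    }

result = analyze_page_content("One two three. Four five!", url="http://example.com")
```

Note that `re.split` leaves a trailing empty string when the text ends in punctuation, so `sentence_count` overcounts by one on such input; that quirk is inherited from the original heuristic.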
contextual_hints.css
ADDED
@@ -0,0 +1,249 @@
/* Contextual Hints System - With Quantum Styling
-------------------------------------------------- */

:root {
    --hint-bg-color: rgba(30, 30, 35, 0.95);
    --hint-border-color: rgba(111, 66, 193, 0.7);
    --hint-text-color: rgba(255, 255, 255, 0.9);
    --hint-shadow-color: rgba(218, 75, 134, 0.3);
    --hint-icon-color: rgba(13, 202, 240, 0.9);
    --hint-glow-color: rgba(111, 66, 193, 0.5);
    --hint-btn-color: rgba(13, 202, 240, 0.8);
    --hint-transition-time: 0.3s;
}

/* Main Hint Container */
.contextual-hint {
    position: absolute;
    width: auto;
    max-width: 300px;
    min-width: 230px;
    padding: 15px;
    background: var(--hint-bg-color);
    color: var(--hint-text-color);
    border-radius: 8px;
    border: 1px solid var(--hint-border-color);
    box-shadow: 0 3px 15px var(--hint-shadow-color),
                0 0 8px var(--hint-glow-color);
    z-index: 9999;
    opacity: 0;
    transform: translateY(15px) scale(0.95);
    transition: opacity var(--hint-transition-time) ease,
                transform var(--hint-transition-time) ease;
    backdrop-filter: blur(8px);
    pointer-events: none;
}

/* Hint that is currently shown */
.contextual-hint.active {
    opacity: 1;
    transform: translateY(0) scale(1);
    pointer-events: auto;
}

/* Hint Title */
.contextual-hint-title {
    display: flex;
    align-items: center;
    margin-bottom: 10px;
    padding-bottom: 8px;
    border-bottom: 1px solid rgba(255, 255, 255, 0.1);
}

.contextual-hint-title i {
    color: var(--hint-icon-color);
    margin-right: 8px;
    font-size: 1.1rem;
}

.contextual-hint-title h5 {
    margin: 0;
    font-size: 1rem;
    font-weight: 600;
    color: var(--hint-text-color);
}

/* Hint Content */
.contextual-hint-content {
    font-size: 0.95rem;
    line-height: 1.4;
    margin-bottom: 15px;
}

/* Hint Actions */
.contextual-hint-actions {
    display: flex;
    justify-content: space-between;
    align-items: center;
}

.hint-button {
    background: transparent;
    color: var(--hint-btn-color);
    border: 1px solid var(--hint-btn-color);
    padding: 5px 10px;
    border-radius: 4px;
    font-size: 0.8rem;
    cursor: pointer;
    transition: all var(--hint-transition-time) ease;
}

.hint-button:hover {
    background: var(--hint-btn-color);
    color: #000;
}

.hint-button-primary {
    background: var(--hint-btn-color);
    color: #000;
}

.hint-button-primary:hover {
    background: transparent;
    color: var(--hint-btn-color);
}

.hint-dont-show {
    font-size: 0.8rem;
    color: rgba(255, 255, 255, 0.5);
    cursor: pointer;
    transition: color var(--hint-transition-time) ease;
}

.hint-dont-show:hover {
    color: var(--hint-text-color);
}

/* Hint Arrow */
.contextual-hint::before {
    content: '';
    position: absolute;
    width: 10px;
    height: 10px;
    background: var(--hint-bg-color);
    border: 1px solid var(--hint-border-color);
    transform: rotate(45deg);
}

/* Position variations */
.contextual-hint.position-top::before {
    bottom: -6px;
    border-top: none;
    border-left: none;
}

.contextual-hint.position-bottom::before {
    top: -6px;
    border-bottom: none;
    border-right: none;
}

.contextual-hint.position-left::before {
    right: -6px;
    border-left: none;
    border-bottom: none;
}

.contextual-hint.position-right::before {
    left: -6px;
    border-right: none;
    border-top: none;
}

/* Hover target styling */
.hint-target {
    position: relative;
}

.hint-target::after {
    content: '';
    position: absolute;
    top: -3px;
    right: -3px;
    bottom: -3px;
    left: -3px;
    border: 2px dashed transparent;
    border-radius: 5px;
    transition: border-color var(--hint-transition-time) ease;
    pointer-events: none;
}

.hint-target.hint-highlight::after {
    border-color: var(--hint-icon-color);
}

/* Pulse animation for important hints */
@keyframes hint-pulse {
    0% {
        box-shadow: 0 0 0 0 rgba(13, 202, 240, 0.4);
    }
    70% {
        box-shadow: 0 0 0 10px rgba(13, 202, 240, 0);
    }
    100% {
        box-shadow: 0 0 0 0 rgba(13, 202, 240, 0);
    }
}

.contextual-hint.important {
    animation: hint-pulse 2s infinite;
}

/* LED tracer on hints */
.contextual-hint .led-tracer {
    position: absolute;
    height: 2px;
    background: linear-gradient(90deg,
        transparent,
        rgba(218, 75, 134, 0.7),
        rgba(13, 202, 240, 0.7),
        transparent);
    width: 100%;
    left: 0;
    bottom: 0;
    animation: led-trace 2s infinite linear;
}

@keyframes led-trace {
    0% {
        transform: translateX(-100%);
    }
    100% {
        transform: translateX(100%);
    }
}

/* Quantum particle effect for hints */
.contextual-hint.has-particles {
    overflow: hidden;
}

.hint-particles {
    position: absolute;
    top: 0;
    left: 0;
    right: 0;
    bottom: 0;
    pointer-events: none;
}

.hint-particle {
    position: absolute;
    width: 3px;
    height: 3px;
    background: var(--hint-icon-color);
    border-radius: 50%;
    opacity: 0.5;
    animation: float-particle 3s infinite ease-in-out;
}

@keyframes float-particle {
    0%, 100% {
        transform: translateY(0) translateX(0);
        opacity: 0.2;
    }
    50% {
        transform: translateY(-15px) translateX(5px);
        opacity: 0.5;
    }
}
contextual_hints.js
ADDED
@@ -0,0 +1,534 @@
/**
 * Contextual Hint System
 *
 * A subtle, non-intrusive tooltip system providing just-in-time guidance and learning
 * for the Quantum NLP Framework. This system detects when a user might need help
 * understanding a feature and provides relevant information without disrupting workflow.
 */

class ContextualHintSystem {
    constructor() {
        this.hints = {};
        this.activeHint = null;
        this.hintContainer = null;
        this.shownHints = [];

        this.initialize();
    }

    /**
     * Initialize the hint system
     */
    initialize() {
        // Load previously shown hints
        this.loadShownHints();

        // Register predefined hints
        this.registerPredefinedHints();

        // Initialize scroll listener for detecting visible elements
        this.initScrollListener();

        // Create container for hints if it doesn't exist
        if (!document.getElementById('contextual-hints-container')) {
            this.hintContainer = document.createElement('div');
            this.hintContainer.id = 'contextual-hints-container';
            document.body.appendChild(this.hintContainer);
        } else {
            this.hintContainer = document.getElementById('contextual-hints-container');
        }

        // Listen for ESC key to dismiss hints
        document.addEventListener('keydown', (e) => {
            if (e.key === 'Escape' && this.activeHint) {
                this.dismissActiveHint();
            }
        });

        // Listen for clicks outside hint to dismiss
        document.addEventListener('click', (e) => {
            if (this.activeHint && !this.activeHint.contains(e.target)) {
                // Check if the click was not on the target element either
                const activeHintId = this.activeHint.getAttribute('data-hint-id');
                const targetElement = document.querySelector(`[data-hint="${activeHintId}"]`);

                if (!targetElement || !targetElement.contains(e.target)) {
                    this.dismissActiveHint();
                }
            }
        });
    }

    /**
     * Register a hint with the system
     * @param {string} id - Unique identifier for the hint
     * @param {object} options - Hint options
     */
    registerHint(id, options) {
        this.hints[id] = {
            title: options.title || 'Hint',
            content: options.content || '',
            position: options.position || 'bottom',
            important: options.important || false,
            maxShows: options.maxShows || 3,
            icon: options.icon || 'fas fa-lightbulb',
            selector: options.selector || null,
            particles: options.particles || false,
            trigger: options.trigger || 'auto', // auto, manual, hover
            buttonText: options.buttonText || 'Got it',
            onShown: options.onShown || null,
            onDismiss: options.onDismiss || null
        };

        // Attach triggers for this hint if necessary
        if (options.selector) {
            this.attachHintTrigger(id);
        }
    }

    /**
     * Register all predefined hints for the application
     */
    registerPredefinedHints() {
        // Quantum Dimensions hint
        this.registerHint('quantum-dimensions', {
            title: 'Quantum Dimensions',
            content: 'Increase dimensions for deeper, multi-layered analysis of your text. Higher dimensions explore more interconnected thought paths.',
            position: 'top',
            selector: '#depth',
            icon: 'fas fa-layer-group',
            trigger: 'hover'
        });

        // OpenAI integration hint
        this.registerHint('openai-integration', {
            title: 'AI Enhancement',
            content: 'Enable this to use OpenAI for generating human-like responses based on quantum analysis results. Requires API key in settings.',
            position: 'right',
            selector: '#use_ai',
            icon: 'fas fa-robot',
            important: !document.body.classList.contains('has-openai-key')
        });

        // Analyze button hint
        this.registerHint('analyze-button', {
            title: 'Quantum Analysis',
            content: 'Start the quantum-inspired recursive thinking process to analyze your text through multiple dimensions.',
            position: 'top',
            selector: '#analyze-btn',
            particles: true,
            maxShows: 2
        });

        // Quantum Score hint
        this.registerHint('quantum-score', {
            title: 'Quantum Score',
            content: 'This score represents the confidence and coherence of the quantum analysis across all dimensions.',
            position: 'left',
            selector: '.quantum-score-visualization',
            icon: 'fas fa-chart-line',
            trigger: 'manual'
        });

        // Zap Integrations hint
        this.registerHint('zap-integrations', {
            title: 'ZAP Integrations',
            content: 'Connect the Quantum Framework to other services and applications to extend its capabilities.',
            position: 'bottom',
            selector: 'a[href="/zap-integrations"]',
            icon: 'fas fa-bolt',
            trigger: 'hover'
        });

        // Automation Workflow hint
        this.registerHint('automation-workflow', {
            title: 'Automation Workflow',
            content: 'View and configure automated tasks and workflows using the quantum framework.',
            position: 'bottom',
            selector: 'a[href="/automation-workflow"]',
            icon: 'fas fa-cogs',
            trigger: 'hover'
        });

        // Named Entities hint
        this.registerHint('named-entities', {
            title: 'Named Entities',
            content: 'These are specific objects identified in your text like people, organizations, locations, and more.',
            position: 'right',
            selector: '.quantum-entity-item',
            icon: 'fas fa-fingerprint',
            trigger: 'manual'
        });
    }

    /**
     * Attach a trigger to show a hint when interacting with an element
     * @param {string} hintId - The ID of the hint to trigger
     */
    attachHintTrigger(hintId) {
        const hint = this.hints[hintId];
        if (!hint || !hint.selector) return;

        // Find all matching elements
        const elements = document.querySelectorAll(hint.selector);
        if (elements.length === 0) return;

        elements.forEach(element => {
            // Add data attribute to mark the element as a hint target
            element.setAttribute('data-hint', hintId);
            element.classList.add('hint-target');
|
180 |
+
|
181 |
+
// Attach event listeners based on trigger type
|
182 |
+
if (hint.trigger === 'hover') {
|
183 |
+
element.addEventListener('mouseenter', () => {
|
184 |
+
this.considerShowingHint(hintId, element);
|
185 |
+
element.classList.add('hint-highlight');
|
186 |
+
});
|
187 |
+
|
188 |
+
element.addEventListener('mouseleave', () => {
|
189 |
+
element.classList.remove('hint-highlight');
|
190 |
+
});
|
191 |
+
} else if (hint.trigger === 'auto') {
|
192 |
+
// For auto triggers, we'll check visibility in the scroll listener
|
193 |
+
// and show the hint when appropriate
|
194 |
+
}
|
195 |
+
|
196 |
+
// For manual triggers, the hint will be shown programmatically
|
197 |
+
});
|
198 |
+
}
|
199 |
+
|
200 |
+
/**
|
201 |
+
* Consider whether to show a hint based on whether it's been shown before
|
202 |
+
* @param {string} hintId - The ID of the hint to consider showing
|
203 |
+
* @param {Element} target - The target element for the hint
|
204 |
+
*/
|
205 |
+
considerShowingHint(hintId, target) {
|
206 |
+
// Don't show if another hint is active
|
207 |
+
if (this.activeHint) return;
|
208 |
+
|
209 |
+
const hint = this.hints[hintId];
|
210 |
+
if (!hint) return;
|
211 |
+
|
212 |
+
// Count how many times this hint has been shown
|
213 |
+
const timesShown = this.shownHints.filter(id => id === hintId).length;
|
214 |
+
|
215 |
+
// If it's been shown fewer times than the max, show it
|
216 |
+
if (timesShown < hint.maxShows) {
|
217 |
+
this.showHint(hintId, target);
|
218 |
+
}
|
219 |
+
}
|
220 |
+
|
221 |
+
/**
|
222 |
+
* Show a hint for a specific element
|
223 |
+
* @param {string} hintId - The ID of the hint to show
|
224 |
+
* @param {Element} targetElement - The element to attach the hint to
|
225 |
+
*/
|
226 |
+
showHint(hintId, targetElement) {
|
227 |
+
const hint = this.hints[hintId];
|
228 |
+
if (!hint) return;
|
229 |
+
|
230 |
+
// Dismiss any active hint
|
231 |
+
this.dismissActiveHint();
|
232 |
+
|
233 |
+
// Create the hint element
|
234 |
+
const hintElement = document.createElement('div');
|
235 |
+
hintElement.className = `contextual-hint position-${hint.position}`;
|
236 |
+
hintElement.setAttribute('data-hint-id', hintId);
|
237 |
+
|
238 |
+
if (hint.important) {
|
239 |
+
hintElement.classList.add('important');
|
240 |
+
}
|
241 |
+
|
242 |
+
if (hint.particles) {
|
243 |
+
hintElement.classList.add('has-particles');
|
244 |
+
}
|
245 |
+
|
246 |
+
// Add LED tracer effect
|
247 |
+
const ledTracer = document.createElement('div');
|
248 |
+
ledTracer.className = 'led-tracer';
|
249 |
+
hintElement.appendChild(ledTracer);
|
250 |
+
|
251 |
+
// Add content
|
252 |
+
hintElement.innerHTML += `
|
253 |
+
<div class="contextual-hint-title">
|
254 |
+
<i class="${hint.icon}"></i>
|
255 |
+
<h5>${hint.title}</h5>
|
256 |
+
</div>
|
257 |
+
<div class="contextual-hint-content">${hint.content}</div>
|
258 |
+
<div class="contextual-hint-actions">
|
259 |
+
<button class="hint-button hint-button-primary">${hint.buttonText}</button>
|
260 |
+
<span class="hint-dont-show">Don't show again</span>
|
261 |
+
</div>
|
262 |
+
`;
|
263 |
+
|
264 |
+
// Add particles if enabled
|
265 |
+
if (hint.particles) {
|
266 |
+
const particlesContainer = document.createElement('div');
|
267 |
+
particlesContainer.className = 'hint-particles';
|
268 |
+
|
269 |
+
// Add several particles
|
270 |
+
for (let i = 0; i < 8; i++) {
|
271 |
+
const particle = document.createElement('div');
|
272 |
+
particle.className = 'hint-particle';
|
273 |
+
particle.style.top = `${Math.random() * 100}%`;
|
274 |
+
particle.style.left = `${Math.random() * 100}%`;
|
275 |
+
particle.style.animationDelay = `${Math.random() * 2}s`;
|
276 |
+
particlesContainer.appendChild(particle);
|
277 |
+
}
|
278 |
+
|
279 |
+
hintElement.appendChild(particlesContainer);
|
280 |
+
}
|
281 |
+
|
282 |
+
// Add to DOM
|
283 |
+
this.hintContainer.appendChild(hintElement);
|
284 |
+
|
285 |
+
// Position relative to target
|
286 |
+
this.positionHint(hintElement, targetElement, hint.position);
|
287 |
+
|
288 |
+
// Show with animation
|
289 |
+
setTimeout(() => {
|
290 |
+
hintElement.classList.add('active');
|
291 |
+
}, 10);
|
292 |
+
|
293 |
+
// Set as active hint
|
294 |
+
this.activeHint = hintElement;
|
295 |
+
|
296 |
+
// Record that this hint has been shown
|
297 |
+
this.markHintAsShown(hintId);
|
298 |
+
|
299 |
+
// Attach event listeners to buttons
|
300 |
+
const dismissButton = hintElement.querySelector('.hint-button');
|
301 |
+
dismissButton.addEventListener('click', () => {
|
302 |
+
this.dismissActiveHint();
|
303 |
+
|
304 |
+
// Run onDismiss callback if provided
|
305 |
+
if (typeof hint.onDismiss === 'function') {
|
306 |
+
hint.onDismiss();
|
307 |
+
}
|
308 |
+
});
|
309 |
+
|
310 |
+
const dontShowAgain = hintElement.querySelector('.hint-dont-show');
|
311 |
+
dontShowAgain.addEventListener('click', () => {
|
312 |
+
// Add to shown hints enough times to reach maxShows
|
313 |
+
for (let i = timesShown; i < hint.maxShows; i++) {
|
314 |
+
this.markHintAsShown(hintId);
|
315 |
+
}
|
316 |
+
|
317 |
+
this.dismissActiveHint();
|
318 |
+
});
|
319 |
+
|
320 |
+
// Call onShown callback if provided
|
321 |
+
if (typeof hint.onShown === 'function') {
|
322 |
+
hint.onShown();
|
323 |
+
}
|
324 |
+
}
|
325 |
+
|
326 |
+
/**
|
327 |
+
* Position a hint element relative to its target
|
328 |
+
* @param {Element} hintElement - The hint element
|
329 |
+
* @param {Element} targetElement - The target element
|
330 |
+
* @param {string} position - The position (top, bottom, left, right)
|
331 |
+
*/
|
332 |
+
positionHint(hintElement, targetElement, position) {
|
333 |
+
if (!targetElement) return;
|
334 |
+
|
335 |
+
const targetRect = targetElement.getBoundingClientRect();
|
336 |
+
const hintRect = hintElement.getBoundingClientRect();
|
337 |
+
|
338 |
+
let top, left;
|
339 |
+
|
340 |
+
switch (position) {
|
341 |
+
case 'top':
|
342 |
+
top = targetRect.top - hintRect.height - 15;
|
343 |
+
left = targetRect.left + (targetRect.width / 2) - (hintRect.width / 2);
|
344 |
+
hintElement.querySelector('::before').style.left = '50%';
|
345 |
+
break;
|
346 |
+
|
347 |
+
case 'bottom':
|
348 |
+
top = targetRect.bottom + 15;
|
349 |
+
left = targetRect.left + (targetRect.width / 2) - (hintRect.width / 2);
|
350 |
+
hintElement.querySelector('::before').style.left = '50%';
|
351 |
+
break;
|
352 |
+
|
353 |
+
case 'left':
|
354 |
+
top = targetRect.top + (targetRect.height / 2) - (hintRect.height / 2);
|
355 |
+
left = targetRect.left - hintRect.width - 15;
|
356 |
+
hintElement.querySelector('::before').style.top = '50%';
|
357 |
+
break;
|
358 |
+
|
359 |
+
case 'right':
|
360 |
+
top = targetRect.top + (targetRect.height / 2) - (hintRect.height / 2);
|
361 |
+
left = targetRect.right + 15;
|
362 |
+
hintElement.querySelector('::before').style.top = '50%';
|
363 |
+
break;
|
364 |
+
}
|
365 |
+
|
366 |
+
// Adjust if the hint would be off-screen
|
367 |
+
const viewportWidth = window.innerWidth;
|
368 |
+
const viewportHeight = window.innerHeight;
|
369 |
+
|
370 |
+
if (left < 10) left = 10;
|
371 |
+
if (left + hintRect.width > viewportWidth - 10) {
|
372 |
+
left = viewportWidth - hintRect.width - 10;
|
373 |
+
}
|
374 |
+
|
375 |
+
if (top < 10) top = 10;
|
376 |
+
if (top + hintRect.height > viewportHeight - 10) {
|
377 |
+
top = viewportHeight - hintRect.height - 10;
|
378 |
+
}
|
379 |
+
|
380 |
+
// Set the position
|
381 |
+
hintElement.style.top = `${top}px`;
|
382 |
+
hintElement.style.left = `${left}px`;
|
383 |
+
}
|
384 |
+
|
385 |
+
/**
|
386 |
+
* Dismiss the currently active hint
|
387 |
+
*/
|
388 |
+
dismissActiveHint() {
|
389 |
+
if (this.activeHint) {
|
390 |
+
this.activeHint.classList.remove('active');
|
391 |
+
|
392 |
+
// Remove from DOM after animation completes
|
393 |
+
setTimeout(() => {
|
394 |
+
if (this.activeHint && this.activeHint.parentNode) {
|
395 |
+
this.activeHint.parentNode.removeChild(this.activeHint);
|
396 |
+
}
|
397 |
+
this.activeHint = null;
|
398 |
+
}, 300);
|
399 |
+
}
|
400 |
+
}
|
401 |
+
|
402 |
+
/**
|
403 |
+
* Mark a hint as having been shown
|
404 |
+
* @param {string} hintId - The ID of the hint
|
405 |
+
*/
|
406 |
+
markHintAsShown(hintId) {
|
407 |
+
this.shownHints.push(hintId);
|
408 |
+
this.saveShownHints();
|
409 |
+
}
|
410 |
+
|
411 |
+
/**
|
412 |
+
* Check if a hint has been shown the maximum number of times
|
413 |
+
* @param {string} hintId - The ID of the hint
|
414 |
+
* @returns {boolean} Whether the hint has been shown max times
|
415 |
+
*/
|
416 |
+
hasHintBeenShown(hintId) {
|
417 |
+
const hint = this.hints[hintId];
|
418 |
+
if (!hint) return true;
|
419 |
+
|
420 |
+
const timesShown = this.shownHints.filter(id => id === hintId).length;
|
421 |
+
return timesShown >= hint.maxShows;
|
422 |
+
}
|
423 |
+
|
424 |
+
/**
|
425 |
+
* Load the list of shown hints from localStorage
|
426 |
+
*/
|
427 |
+
loadShownHints() {
|
428 |
+
const savedHints = localStorage.getItem('shownHints');
|
429 |
+
if (savedHints) {
|
430 |
+
try {
|
431 |
+
this.shownHints = JSON.parse(savedHints);
|
432 |
+
} catch (e) {
|
433 |
+
this.shownHints = [];
|
434 |
+
}
|
435 |
+
}
|
436 |
+
}
|
437 |
+
|
438 |
+
/**
|
439 |
+
* Save the list of shown hints to localStorage
|
440 |
+
*/
|
441 |
+
saveShownHints() {
|
442 |
+
localStorage.setItem('shownHints', JSON.stringify(this.shownHints));
|
443 |
+
}
|
444 |
+
|
445 |
+
/**
|
446 |
+
* Reset all shown hints so they'll be shown again
|
447 |
+
*/
|
448 |
+
resetShownHints() {
|
449 |
+
this.shownHints = [];
|
450 |
+
localStorage.removeItem('shownHints');
|
451 |
+
}
|
452 |
+
|
453 |
+
/**
|
454 |
+
* Initialize the scroll listener for detecting visible elements
|
455 |
+
*/
|
456 |
+
initScrollListener() {
|
457 |
+
// Check initially
|
458 |
+
this.checkVisibleElementsForHints();
|
459 |
+
|
460 |
+
// Check on scroll
|
461 |
+
window.addEventListener('scroll', () => {
|
462 |
+
this.checkVisibleElementsForHints();
|
463 |
+
});
|
464 |
+
|
465 |
+
// Check on resize
|
466 |
+
window.addEventListener('resize', () => {
|
467 |
+
this.checkVisibleElementsForHints();
|
468 |
+
});
|
469 |
+
|
470 |
+
// Check after a short delay (DOM may still be loading)
|
471 |
+
setTimeout(() => {
|
472 |
+
this.checkVisibleElementsForHints();
|
473 |
+
}, 1000);
|
474 |
+
}
|
475 |
+
|
476 |
+
/**
|
477 |
+
* Check for elements with auto hints that are visible in the viewport
|
478 |
+
*/
|
479 |
+
checkVisibleElementsForHints() {
|
480 |
+
// Don't check if a hint is already active
|
481 |
+
if (this.activeHint) return;
|
482 |
+
|
483 |
+
// Look for elements with auto hints that are in the viewport
|
484 |
+
for (const hintId in this.hints) {
|
485 |
+
const hint = this.hints[hintId];
|
486 |
+
|
487 |
+
if (hint.trigger !== 'auto') continue;
|
488 |
+
if (this.hasHintBeenShown(hintId)) continue;
|
489 |
+
|
490 |
+
const elements = document.querySelectorAll(`[data-hint="${hintId}"]`);
|
491 |
+
|
492 |
+
for (const element of elements) {
|
493 |
+
if (this.isElementInViewport(element)) {
|
494 |
+
this.considerShowingHint(hintId, element);
|
495 |
+
break;
|
496 |
+
}
|
497 |
+
}
|
498 |
+
}
|
499 |
+
}
|
500 |
+
|
501 |
+
/**
|
502 |
+
* Check if an element is in the viewport
|
503 |
+
* @param {Element} el - The element to check
|
504 |
+
* @returns {boolean} Whether the element is in the viewport
|
505 |
+
*/
|
506 |
+
isElementInViewport(el) {
|
507 |
+
const rect = el.getBoundingClientRect();
|
508 |
+
|
509 |
+
return (
|
510 |
+
rect.top >= 0 &&
|
511 |
+
rect.left >= 0 &&
|
512 |
+
rect.bottom <= (window.innerHeight || document.documentElement.clientHeight) &&
|
513 |
+
rect.right <= (window.innerWidth || document.documentElement.clientWidth)
|
514 |
+
);
|
515 |
+
}
|
516 |
+
}
|
517 |
+
|
518 |
+
// Initialize the hint system when the DOM is ready
|
519 |
+
document.addEventListener('DOMContentLoaded', () => {
|
520 |
+
window.contextualHintSystem = new ContextualHintSystem();
|
521 |
+
|
522 |
+
// Manually show hints for elements that may not be detected automatically
|
523 |
+
setTimeout(() => {
|
524 |
+
const quantumScoreElements = document.querySelectorAll('.quantum-score-visualization');
|
525 |
+
if (quantumScoreElements.length > 0) {
|
526 |
+
window.contextualHintSystem.considerShowingHint('quantum-score', quantumScoreElements[0]);
|
527 |
+
}
|
528 |
+
|
529 |
+
const entityItems = document.querySelectorAll('.quantum-entity-item');
|
530 |
+
if (entityItems.length > 0) {
|
531 |
+
window.contextualHintSystem.considerShowingHint('named-entities', entityItems[0]);
|
532 |
+
}
|
533 |
+
}, 2000);
|
534 |
+
});
|
fix_openmanus.py
ADDED
@@ -0,0 +1,556 @@
#!/usr/bin/env python3
"""
OpenManus Integration Fixer

This script fixes common issues with OpenManus installations and ensures
proper integration with Casibase and other Atlas components.
"""

import os
import sys
import json
import shutil
import subprocess
import importlib
import logging
from pathlib import Path
from typing import Dict, List, Optional, Tuple, Any

# Configure logging
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(levelname)s - %(message)s',
    handlers=[
        logging.StreamHandler(sys.stdout),
        logging.FileHandler('openmanus_fix.log')
    ]
)

logger = logging.getLogger(__name__)

class OpenManusFixer:
    """Utility class to fix OpenManus integration issues."""

    def __init__(self, openmanus_path: Optional[str] = None):
        """Initialize the fixer with the path to OpenManus."""
        if openmanus_path:
            self.openmanus_path = Path(openmanus_path)
        else:
            # Try to find OpenManus in the current directory or parent
            current_dir = Path.cwd()
            if (current_dir / "app").exists() and (current_dir / "requirements.txt").exists():
                self.openmanus_path = current_dir
            else:
                self.openmanus_path = Path(os.path.expanduser("~/OpenManus"))

        logger.info(f"OpenManus path: {self.openmanus_path}")

        if not self.openmanus_path.exists():
            raise FileNotFoundError(f"OpenManus directory not found at {self.openmanus_path}")

    def fix_requirements(self) -> None:
        """Fix the requirements.txt file to ensure all dependencies are properly listed."""
        req_file = self.openmanus_path / "requirements.txt"

        if not req_file.exists():
            logger.error(f"Requirements file not found at {req_file}")
            return

        logger.info("Fixing requirements.txt...")

        # Read existing requirements
        with open(req_file, "r") as f:
            requirements = f.readlines()

        # Clean requirements by removing comments and invalid entries
        cleaned_requirements = []
        for req in requirements:
            req = req.strip()

            # Keep empty lines and comments as-is
            if not req or req.startswith('#'):
                cleaned_requirements.append(req)
                continue

            # Fix any malformed requirements (strip inline comments)
            if ' #' in req:
                req = req.split(' #')[0].strip()

            cleaned_requirements.append(req)

        # Add missing dependencies
        required_deps = {
            "flask": "flask~=2.3.3",
            "flask-cors": "flask-cors~=4.0.0",
            "requests": "requests~=2.31.0",
            "pydantic": "pydantic~=2.10.6",
            "openai": "openai~=1.66.3",
            "fastapi": "fastapi~=0.115.11",
            "uvicorn": "uvicorn~=0.34.0",
            "httpx": "httpx>=0.27.0",
            "tiktoken": "tiktoken~=0.9.0"
        }

        # Check for each required dependency
        for dep_name, dep_req in required_deps.items():
            found = False
            for i, req in enumerate(cleaned_requirements):
                if req.startswith(f"{dep_name}"):
                    found = True
                    # Update the dependency if needed
                    cleaned_requirements[i] = dep_req
                    break

            if not found:
                cleaned_requirements.append(dep_req)

        # Write back the fixed requirements
        with open(req_file, "w") as f:
            f.write("\n".join(cleaned_requirements) + "\n")

        logger.info("Requirements file fixed")

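The clean-and-merge step in `fix_requirements` can be exercised in isolation. Below is a minimal sketch of that same logic with the file I/O removed; the `merge_requirements` helper is hypothetical (it is not part of the script), but the cleaning and pin-or-append behavior mirrors the method above.

```python
# Hypothetical standalone version of the merge step in
# OpenManusFixer.fix_requirements (same logic, no file I/O).
def merge_requirements(lines, required_deps):
    cleaned = []
    for req in lines:
        req = req.strip()
        # Keep empty lines and comments as-is
        if not req or req.startswith('#'):
            cleaned.append(req)
            continue
        # Strip inline comments
        if ' #' in req:
            req = req.split(' #')[0].strip()
        cleaned.append(req)

    for dep_name, dep_req in required_deps.items():
        for i, req in enumerate(cleaned):
            if req.startswith(dep_name):
                cleaned[i] = dep_req   # re-pin the existing entry
                break
        else:
            cleaned.append(dep_req)    # not present: append it
    return cleaned

print(merge_requirements(
    ["flask==1.0  # old pin", "# tools"],
    {"flask": "flask~=2.3.3", "httpx": "httpx>=0.27.0"},
))  # ['flask~=2.3.3', '# tools', 'httpx>=0.27.0']
```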
    def install_dependencies(self) -> None:
        """Install or upgrade required dependencies."""
        logger.info("Installing/upgrading dependencies...")

        try:
            subprocess.run([
                sys.executable, "-m", "pip", "install", "-r",
                str(self.openmanus_path / "requirements.txt"),
                "--upgrade"
            ], check=True)
            logger.info("Dependencies installed successfully")
        except subprocess.CalledProcessError as e:
            logger.error(f"Failed to install dependencies: {e}")

    def ensure_casibase_connector(self) -> None:
        """Ensure that the Casibase connector exists and is properly configured."""
        connector_path = self.openmanus_path / "app" / "casibase_connector.py"

        if not connector_path.exists():
            logger.info("Creating Casibase connector...")

            # Make sure the directory exists
            connector_path.parent.mkdir(exist_ok=True)

            # Create the connector file
            self.create_casibase_connector(connector_path)
        else:
            logger.info("Updating existing Casibase connector...")
            self.create_casibase_connector(connector_path)

        logger.info("Casibase connector updated")

    def create_casibase_connector(self, path: Path) -> None:
        """Create or update the Casibase connector file."""
        connector_code = """
\"\"\"
Casibase Connector for OpenManus
This module provides integration with Casibase for document storage, retrieval, and querying.
\"\"\"

import os
import json
import requests
from typing import Dict, List, Optional, Union, Any
import logging
from urllib.parse import urljoin

# Import from current package if possible, otherwise create placeholders
try:
    from .exceptions import CasibaseConnectionError
    from .config import get_config
    from .logger import get_logger

    logger = get_logger(__name__)
except ImportError:
    # Create minimal implementations if the imports fail
    class CasibaseConnectionError(Exception):
        \"\"\"Exception raised for Casibase connection errors.\"\"\"
        pass

    def get_config():
        \"\"\"Get configuration values.\"\"\"
        return {}

    # Set up basic logger
    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger(__name__)

class CasibaseConnector:
    \"\"\"
    Connector for integrating with Casibase services.
    \"\"\"

    def __init__(
        self,
        base_url: Optional[str] = None,
        api_key: Optional[str] = None,
        store_name: Optional[str] = None,
        timeout: int = 30
    ):
        \"\"\"
        Initialize the Casibase connector.

        Args:
            base_url: Base URL for the Casibase API
            api_key: API key for authentication
            store_name: Default store name to use
            timeout: Request timeout in seconds
        \"\"\"
        config = get_config()

        self.base_url = base_url or config.get("CASIBASE_URL", "http://localhost:7001")
        self.base_url = self.base_url.rstrip('/')  # Remove trailing slash if present

        self.api_key = api_key or config.get("CASIBASE_API_KEY", "")
        self.store_name = store_name or config.get("CASIBASE_STORE", "default")
        self.timeout = timeout

        self.headers = {
            "Content-Type": "application/json",
            "Accept": "application/json"
        }

        if self.api_key:
            self.headers["Authorization"] = f"Bearer {self.api_key}"

        logger.info(f"Initialized Casibase connector with base_url: {self.base_url}")

    def _make_request(
        self,
        method: str,
        endpoint: str,
        data: Optional[Dict[str, Any]] = None,
        files: Optional[Dict[str, Any]] = None,
        params: Optional[Dict[str, Any]] = None
    ) -> Dict[str, Any]:
        \"\"\"
        Make a request to the Casibase API.

        Args:
            method: HTTP method (GET, POST, etc.)
            endpoint: API endpoint
            data: Request data
            files: Files to upload
            params: Query parameters

        Returns:
            Response data as dictionary

        Raises:
            CasibaseConnectionError: If the request fails
        \"\"\"
        url = urljoin(self.base_url, endpoint)

        # If sending files, don't include the Content-Type header
        request_headers = self.headers.copy()
        if files:
            request_headers.pop("Content-Type", None)

        try:
            if method.upper() == "GET":
                response = requests.get(
                    url,
                    params=params,
                    headers=request_headers,
                    timeout=self.timeout
                )
            elif method.upper() == "POST":
                if files:
                    # Multipart form data for file uploads
                    response = requests.post(
                        url,
                        data=data,  # Form data
                        files=files,
                        headers=request_headers,
                        timeout=self.timeout
                    )
                else:
                    # JSON data
                    response = requests.post(
                        url,
                        json=data,
                        params=params,
                        headers=request_headers,
                        timeout=self.timeout
                    )
            else:
                raise CasibaseConnectionError(f"Unsupported HTTP method: {method}")

            # Check for errors
            if response.status_code >= 400:
                error_msg = f"Casibase API error: {response.status_code} - {response.text}"
                logger.error(error_msg)
                raise CasibaseConnectionError(error_msg)

            # Return JSON response
            if response.content:
                return response.json()
            else:
                return {"status": "success"}

        except requests.RequestException as e:
            error_msg = f"Casibase API request failed: {str(e)}"
            logger.error(error_msg)
            raise CasibaseConnectionError(error_msg) from e

    def query(
        self,
        query_text: str,
        store_name: Optional[str] = None,
        model_name: Optional[str] = None,
        top_k: int = 5,
        system_prompt: Optional[str] = None
    ) -> Dict[str, Any]:
        \"\"\"
        Query Casibase with the given text.

        Args:
            query_text: Query text
            store_name: Store name to query (defaults to instance store_name)
            model_name: Model name to use for the query
            top_k: Number of documents to retrieve
            system_prompt: Optional system prompt to include

        Returns:
            Query response
        \"\"\"
        store = store_name or self.store_name

        payload = {
            "question": query_text,
            "store": store,
            "topK": top_k
        }

        if model_name:
            payload["modelName"] = model_name

        if system_prompt:
            payload["systemPrompt"] = system_prompt

        logger.debug(f"Querying Casibase: {payload}")
        return self._make_request("POST", "/api/query", data=payload)

    def upload_file(
        self,
        file_path: str,
        store_name: Optional[str] = None,
        metadata: Optional[Dict[str, Any]] = None
    ) -> Dict[str, Any]:
        \"\"\"
        Upload a file to Casibase.

        Args:
            file_path: Path to the file
            store_name: Store name to upload to (defaults to instance store_name)
            metadata: Additional metadata to associate with the file

        Returns:
            Upload response
        \"\"\"
        if not os.path.exists(file_path):
            raise FileNotFoundError(f"File not found: {file_path}")

        store = store_name or self.store_name
        metadata = metadata or {}

        with open(file_path, "rb") as f:
            files = {"file": (os.path.basename(file_path), f)}
            data = {"store": store}

            if metadata:
                data["metadata"] = json.dumps(metadata)

            logger.debug(f"Uploading file to Casibase: {file_path} to store {store}")
            return self._make_request("POST", "/api/upload-file", data=data, files=files)

    def get_stores(self) -> List[Dict[str, Any]]:
        \"\"\"
        Get available Casibase stores.

        Returns:
            List of stores
        \"\"\"
        response = self._make_request("GET", "/api/get-stores")
        return response.get("data", [])

    def search_similar(
        self,
        query_text: str,
        store_name: Optional[str] = None,
        limit: int = 5
    ) -> List[Dict[str, Any]]:
        \"\"\"
        Search for similar documents in Casibase.

        Args:
            query_text: Text to search for
            store_name: Store to search in (defaults to instance store_name)
            limit: Maximum number of results

        Returns:
            List of similar documents
        \"\"\"
        store = store_name or self.store_name

        payload = {
            "query": query_text,
            "store": store,
            "limit": limit
        }

        logger.debug(f"Searching similar documents in Casibase: {payload}")
        response = self._make_request("POST", "/api/search", data=payload)
        return response.get("data", [])

    def get_health(self) -> Dict[str, Any]:
        \"\"\"
        Check if the Casibase service is healthy.

        Returns:
            Health status
        \"\"\"
        try:
            return self._make_request("GET", "/api/health-check")
        except CasibaseConnectionError:
            return {"status": "error", "message": "Casibase service is unreachable"}
"""

        with open(path, "w") as f:
            f.write(connector_code.lstrip())

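The connector built above resolves every endpoint against `base_url` with `urljoin` after stripping any trailing slash, so absolute endpoint paths like `/api/query` combine cleanly. A quick standalone sketch of that resolution (the URLs are illustrative values, matching the connector's defaults):

```python
from urllib.parse import urljoin

# Mirrors the URL construction in CasibaseConnector._make_request:
# the base URL has its trailing slash stripped, endpoints start with '/'.
base_url = "http://localhost:7001/".rstrip('/')
url = urljoin(base_url, "/api/query")
print(url)  # http://localhost:7001/api/query
```

Because the endpoint begins with `/`, `urljoin` replaces any path on the base URL rather than appending to it, which is why stripping the trailing slash is safe here.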
    def update_api_server(self) -> None:
        """Update the API server to include Casibase integration."""
        api_server_path = self.openmanus_path / "app" / "api_server.py"

        if not api_server_path.exists():
            logger.error(f"API server file not found at {api_server_path}")
            return

        logger.info("Checking API server for Casibase integration...")

        with open(api_server_path, "r") as f:
            content = f.read()

        # Check if Casibase is already integrated
        if "CasibaseConnector" in content and "casibase_connector" in content:
            logger.info("Casibase already integrated in API server")
            return

        # Add imports
        if "import argparse" in content:
            content = content.replace(
                "import argparse",
                "import argparse\nfrom .casibase_connector import CasibaseConnector"
            )

        # Add endpoints for the Casibase bridge
        if "def create_app():" in content:
            casibase_routes = """
    # Casibase integration routes
    @app.route('/api/casibase/health', methods=['GET'])
    def casibase_health():
        try:
            casibase = CasibaseConnector()
            health_status = casibase.get_health()
            return jsonify(health_status)
        except Exception as e:
            return jsonify({"status": "error", "message": str(e)}), 500

    @app.route('/api/casibase/query', methods=['POST'])
    def casibase_query():
        try:
            data = request.json
            query_text = data.get('query_text')
            store_name = data.get('store_name')
            model_name = data.get('model_name')

            if not query_text:
                return jsonify({"status": "error", "message": "query_text is required"}), 400

            casibase = CasibaseConnector()
            result = casibase.query(
                query_text=query_text,
                store_name=store_name,
                model_name=model_name
            )

            return jsonify(result)
        except Exception as e:
            return jsonify({"status": "error", "message": str(e)}), 500

    @app.route('/api/casibase/search', methods=['POST'])
    def casibase_search():
        try:
            data = request.json
            query_text = data.get('query_text')
            store_name = data.get('store_name')
            limit = data.get('limit', 5)

            if not query_text:
                return jsonify({"status": "error", "message": "query_text is required"}), 400
+
|
496 |
+
casibase = CasibaseConnector()
|
497 |
+
result = casibase.search_similar(
|
498 |
+
query_text=query_text,
|
499 |
+
store_name=store_name,
|
500 |
+
limit=limit
|
501 |
+
)
|
502 |
+
|
503 |
+
return jsonify({"status": "success", "data": result})
|
504 |
+
except Exception as e:
|
505 |
+
return jsonify({"status": "error", "message": str(e)}), 500
|
506 |
+
"""
|
507 |
+
|
508 |
+
# Insert the Casibase routes before the return statement
|
509 |
+
if "return app" in content:
|
510 |
+
content = content.replace(
|
511 |
+
"return app",
|
512 |
+
f"{casibase_routes}\n return app"
|
513 |
+
)
|
514 |
+
|
515 |
+
# Add imports for Flask if needed
|
516 |
+
if "from flask import " in content:
|
517 |
+
if "jsonify" not in content:
|
518 |
+
content = content.replace(
|
519 |
+
"from flask import ",
|
520 |
+
"from flask import jsonify, "
|
521 |
+
)
|
522 |
+
if "request" not in content:
|
523 |
+
content = content.replace(
|
524 |
+
"from flask import ",
|
525 |
+
"from flask import request, "
|
526 |
+
)
|
527 |
+
|
528 |
+
# Write back the updated content
|
529 |
+
with open(api_server_path, "w") as f:
|
530 |
+
f.write(content)
|
531 |
+
|
532 |
+
logger.info("API server updated with Casibase integration")
|
533 |
+
|
534 |
+
def run_fixes(self) -> None:
|
535 |
+
"""Run all fixes."""
|
536 |
+
logger.info("Starting OpenManus fixes...")
|
537 |
+
|
538 |
+
self.fix_requirements()
|
539 |
+
self.install_dependencies()
|
540 |
+
self.ensure_casibase_connector()
|
541 |
+
self.update_api_server()
|
542 |
+
|
543 |
+
logger.info("OpenManus fixes completed successfully")
|
544 |
+
|
545 |
+
def main():
|
546 |
+
"""Main entry point."""
|
547 |
+
try:
|
548 |
+
fixer = OpenManusFixer()
|
549 |
+
fixer.run_fixes()
|
550 |
+
return 0
|
551 |
+
except Exception as e:
|
552 |
+
logger.error(f"Error: {e}")
|
553 |
+
return 1
|
554 |
+
|
555 |
+
if __name__ == "__main__":
|
556 |
+
sys.exit(main())
|
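The route injection in `update_api_server()` is plain string patching: new route definitions are spliced into the app factory's source just before its `return app` statement, guarded by a marker check so the patch is idempotent. A minimal sketch of that technique follows; the toy factory source and the single health route are hypothetical stand-ins, not the real OpenManus files.

```python
# Sketch of the string-patching approach used by update_api_server():
# splice route definitions in front of "return app", and skip the patch
# entirely when the marker route is already present (idempotent).
SAMPLE_SOURCE = '''\
from flask import Flask

def create_app():
    app = Flask(__name__)
    return app
'''

CASIBASE_ROUTES = """
    @app.route('/api/casibase/health', methods=['GET'])
    def casibase_health():
        return {"status": "ok"}
"""

def inject_routes(source: str, routes: str) -> str:
    if "/api/casibase/health" in source:  # marker: already patched
        return source
    # Place the routes just before the factory's return statement.
    return source.replace("return app", f"{routes}\n    return app")

patched = inject_routes(SAMPLE_SOURCE, CASIBASE_ROUTES)
```

Because `str.replace` rewrites every occurrence of `return app`, this relies on the factory containing that statement exactly once; a stricter patch would anchor on the last occurrence only.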
fix_unified_bridge.py
ADDED
@@ -0,0 +1,227 @@
#!/usr/bin/env python3
"""
Atlas Intelligence Unified Bridge Fix
This script patches the unified_bridge.py file to correct integration issues
between components.
"""

import os
import sys
import shutil
import re

# Define the paths to the unified bridge file and its backup
UNIFIED_BRIDGE_PATH = os.path.expanduser("~/AtlasUnified/unified_bridge.py")
BACKUP_PATH = os.path.expanduser("~/AtlasUnified/unified_bridge.py.bak")


def backup_original_file():
    """Create a backup of the original file."""
    if os.path.exists(UNIFIED_BRIDGE_PATH):
        print(f"Creating backup at {BACKUP_PATH}")
        shutil.copy2(UNIFIED_BRIDGE_PATH, BACKUP_PATH)
        return True
    else:
        print(f"Error: Unified bridge file not found at {UNIFIED_BRIDGE_PATH}")
        return False


def fix_openmanus_integration(content):
    """Fix the OpenManus integration code."""
    # Find and replace the OpenManus integration section
    openmanus_pattern = r"# Process with OpenManus if enabled.*?(?=\s{4}# Process with|$)"
    openmanus_replacement = """# Process with OpenManus if enabled - FIXED code to use the ask_tool function correctly
    if config["integrations"]["enable_openmanus"]:
        try:
            sys.path.append(config["paths"]["openmanus"])
            from app.llm import ask_tool

            # Use the ask_tool function directly
            openmanus_result = ask_tool(query_text)
            results["openmanus"] = {"response": openmanus_result}

        except Exception as e:
            logger.error(f"OpenManus processing error: {e}")
            logger.error(traceback.format_exc())
            results["openmanus"] = {"error": str(e)}

    """
    return re.sub(openmanus_pattern, openmanus_replacement, content, flags=re.DOTALL)


def fix_quantum_vision_integration(content):
    """Fix the QuantumVision integration code."""
    # Find and replace the QuantumVision integration section
    quantum_pattern = r"# Process with Quantum Vision if enabled.*?(?=\s{4}return results|$)"
    quantum_replacement = """# Process with Quantum Vision if enabled - FIXED code to use the correct function
    if config["integrations"]["enable_quantum_vision"]:
        try:
            sys.path.append(config["paths"]["quantum_vision"])
            try:
                import nlp_processor
                import openai_integration
                import spacy

                # Load a small spaCy model if available, otherwise use simple processing
                try:
                    nlp = spacy.load("en_core_web_sm")
                except Exception:
                    # Fall back to a blank pipeline
                    try:
                        nlp = spacy.blank("en")
                    except Exception:
                        # spaCy isn't properly installed/configured
                        raise ImportError("spaCy models not available")

                # Use the correct function from nlp_processor
                quantum_result = nlp_processor.process_text(nlp, query_text)
                results["quantum_vision"] = {"response": quantum_result}

            except (ImportError, AttributeError):
                # Fall back to direct OpenAI integration if available
                try:
                    import openai_integration
                    openai_response = openai_integration.generate_text(query_text)
                    results["quantum_vision"] = {"fallback_response": openai_response}
                except Exception:
                    # If both approaches fail, return a simple analysis
                    results["quantum_vision"] = {
                        "fallback_response": f"Processed query: {query_text}",
                        "simple_analysis": {"words": len(query_text.split())}
                    }

        except Exception as e:
            logger.error(f"Quantum Vision processing error: {e}")
            logger.error(traceback.format_exc())
            results["quantum_vision"] = {"error": str(e)}

    """
    return re.sub(quantum_pattern, quantum_replacement, content, flags=re.DOTALL)


def fix_integration_helper_functions(content):
    """Fix the helper functions for status checks."""
    # Fix the OpenManus status check
    openmanus_check_pattern = r"def check_openmanus_status\(\).*?(?=def check_quantum|$)"
    openmanus_check_replacement = """def check_openmanus_status() -> str:
    \"\"\"Check if OpenManus is available and working\"\"\"
    try:
        if not os.path.exists(config["paths"]["openmanus"]):
            return "unavailable (path not found)"

        # Try importing a key module - fixed import
        sys.path.append(config["paths"]["openmanus"])
        import app.llm
        return "available"
    except Exception as e:
        logger.error(f"Failed to check OpenManus: {e}")
        return f"error: {str(e)}"

"""
    content = re.sub(openmanus_check_pattern, openmanus_check_replacement, content, flags=re.DOTALL)

    # Fix the QuantumVision status check
    quantum_check_pattern = r"def check_quantum_vision_status\(\).*?(?=def check_casibase|$)"
    quantum_check_replacement = """def check_quantum_vision_status() -> str:
    \"\"\"Check if QuantumVision is available and working\"\"\"
    try:
        if not os.path.exists(config["paths"]["quantum_vision"]):
            return "unavailable (path not found)"

        # Try importing the module directly
        sys.path.append(config["paths"]["quantum_vision"])
        import nlp_processor
        return "available"
    except Exception as e:
        logger.error(f"Failed to check QuantumVision: {e}")
        return f"error: {str(e)}"

"""
    return re.sub(quantum_check_pattern, quantum_check_replacement, content, flags=re.DOTALL)


def create_openai_helper_function():
    """Create an OpenAI helper function for QuantumVision."""
    openai_file_path = os.path.expanduser("~/Library/Mobile Documents/com~apple~CloudDocs/Atlas Business/QuantumVision/openai_integration.py")

    # Check if the function already exists
    if os.path.exists(openai_file_path):
        with open(openai_file_path, 'r') as file:
            content = file.read()
            if 'def generate_text(' in content:
                print("OpenAI helper function already exists.")
                return

    # Create or update the file with the necessary function
    with open(openai_file_path, 'a') as file:
        file.write('''

def generate_text(prompt):
    """Generate text using the OpenAI API

    Args:
        prompt: The text prompt to send to OpenAI

    Returns:
        str: The generated response
    """
    try:
        import logging
        logger = logging.getLogger(__name__)

        from openai import OpenAI

        # Initialize the OpenAI client (reads OPENAI_API_KEY from the environment)
        client = OpenAI()

        logger.info(f"Sending prompt to OpenAI: {prompt[:50]}...")

        # Make the API call
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system", "content": "You are a helpful AI assistant."},
                {"role": "user", "content": prompt}
            ],
            max_tokens=500
        )

        # Extract and return the response text
        response_text = response.choices[0].message.content
        return response_text

    except Exception as e:
        logger.error(f"Error generating text with OpenAI: {str(e)}")
        return f"Error generating response: {str(e)}"
''')


def main():
    """Main function to apply all fixes."""
    # Check if the unified bridge file exists
    if not os.path.exists(UNIFIED_BRIDGE_PATH):
        print(f"Error: Unified bridge file not found at {UNIFIED_BRIDGE_PATH}")
        return False

    # Create a backup
    if not backup_original_file():
        return False

    # Read the file content
    with open(UNIFIED_BRIDGE_PATH, 'r') as file:
        content = file.read()

    # Apply fixes
    content = fix_openmanus_integration(content)
    content = fix_quantum_vision_integration(content)
    content = fix_integration_helper_functions(content)

    # Write the updated content back
    with open(UNIFIED_BRIDGE_PATH, 'w') as file:
        file.write(content)

    # Create the OpenAI helper function
    create_openai_helper_function()

    print("Fixes applied successfully!")
    print("Restart the Atlas Unified service to apply the changes.")
    return True


if __name__ == "__main__":
    main()
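Both `fix_openmanus_integration()` and `fix_quantum_vision_integration()` rely on the same trick: a non-greedy `re.DOTALL` match that runs from a section's comment header up to a lookahead for the next section marker (or end of string), so the whole block can be swapped without disturbing its neighbours. A self-contained illustration on a toy bridge source (the function names are placeholders, not the real unified_bridge.py code):

```python
import re

# Toy stand-in for the unified bridge source: two "# Process with ..." sections.
bridge_src = (
    "    # Process with OpenManus if enabled\n"
    "    old_openmanus_call()\n"
    "    # Process with Quantum Vision if enabled\n"
    "    quantum_call()\n"
)

# Non-greedy DOTALL match: from the OpenManus header up to (but excluding,
# via the lookahead) the next 4-space-indented section header, or end of string.
pattern = r"# Process with OpenManus if enabled.*?(?=\s{4}# Process with|$)"
replacement = "# Process with OpenManus if enabled\n    fixed_openmanus_call()\n"

patched = re.sub(pattern, replacement, bridge_src, flags=re.DOTALL)
```

The lookahead `(?=...)` is what keeps the following section intact: it bounds the match without consuming the next header, so the Quantum Vision block survives the substitution unchanged.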
index.html
CHANGED
@@ -1,321 +1,580 @@
<style>
    .gradient-bg {
        background: linear-gradient(135deg, #1a1a1c 0%, #1d1d1f 50%, #202023 100%);
    }

    .glass-effect {
        background: rgba(255, 255, 255, 0.05);
        backdrop-filter: blur(10px);
        -webkit-backdrop-filter: blur(10px);
        border: 1px solid rgba(255, 255, 255, 0.1);
    }

    /* Neomorphism Styles */
    .neumorph {
        background: linear-gradient(145deg, var(--neumorph-light), var(--neumorph-dark));
        box-shadow: 8px 8px 16px #121214,
                    -8px -8px 16px #2a2a2e;
        border: none;
    }

    .neumorph-inset {
        background: linear-gradient(145deg, var(--neumorph-dark), var(--neumorph-light));
        box-shadow: inset 5px 5px 10px #121214,
                    inset -5px -5px 10px #2a2a2e;
        border: none;
    }

    .neumorph-btn {
        background: linear-gradient(145deg, var(--neumorph-light), var(--neumorph-dark));
        box-shadow: 4px 4px 8px #121214,
                    -4px -4px 8px #2a2a2e;
        border: none;
        transition: all 0.2s ease;
    }

    .neumorph-btn:active {
        box-shadow: inset 3px 3px 6px #121214,
                    inset -3px -3px 6px #2a2a2e;
    }

    .neumorph-btn-primary {
        background: linear-gradient(145deg, #0062c4, #0071e3);
        box-shadow: 4px 4px 8px #121214,
                    -4px -4px 8px #2a2a2e;
    }

    .typing-cursor::after {
        content: "|";
        animation: blink 1s step-end infinite;
    }

    @keyframes blink {
        from, to { opacity: 1; }
        50% { opacity: 0; }
    }

    .task-progress {
        transition: width 0.3s ease;
    }

    .pulse {
        animation: pulse 2s infinite;
    }

    @keyframes pulse {
        0% { box-shadow: 0 0 0 0 rgba(0, 113, 227, 0.7); }
        70% { box-shadow: 0 0 0 10px rgba(0, 113, 227, 0); }
        100% { box-shadow: 0 0 0 0 rgba(0, 113, 227, 0); }
    }

    /* Custom Neumorphic Elements */
    .neumorph-nav-item {
        padding: 8px 12px;
        border-radius: 12px;
        background: linear-gradient(145deg, var(--neumorph-light), var(--neumorph-dark));
        box-shadow: 3px 3px 6px #121214,
                    -3px -3px 6px #2a2a2e;
        transition: all 0.2s ease;
    }

    .neumorph-nav-item:hover {
        box-shadow: inset 3px 3px 6px #121214,
                    inset -3px -3px 6px #2a2a2e;
    }

    .neumorph-divider {
        height: 1px;
        background: rgba(255,255,255,0.1);
        box-shadow: 0 1px 2px rgba(0,0,0,0.3);
    }

    /* Sidebar transitions */
    .sidebar {
        transition: transform 0.3s ease, opacity 0.3s ease;
    }

    .sidebar-hidden {
        transform: translateX(-100%);
        opacity: 0;
    }
</style>

<nav>
    <a href="#" data-page="settings" class="neumorph-nav-item text-gray-300 hover:text-white transition page-link">Settings</a>
</nav>
<div class="flex items-center space-x-4">
    <div class="relative">
        <button class="p-2 neumorph-btn rounded-full hover:bg-gray-800 transition">
            <i class="fas fa-bell text-gray-300"></i>
        </button>
        <span class="absolute top-0 right-0 w-2 h-2 bg-red-500 rounded-full"></span>
    </div>
    <div class="hidden md:flex items-center space-x-2 neumorph-card px-3 py-1">
        <div class="w-6 h-6 rounded-full bg-blue-500 flex items-center justify-center">
            <i class="fas fa-robot text-xs text-white"></i>
        </div>
        <span class="text-sm">Autonomous Mode: Active</span>
    </div>
</div>

<!-- Main Content -->
<main class="flex flex-1 overflow-hidden">
    <!-- Sidebar -->
    <aside id="sidebar" class="sidebar lg:block w-64 bg-black bg-opacity-30 border-r border-gray-800 p-4 overflow-y-auto">
        <div class="mb-8">
            <h3 class="text-xs uppercase font-semibold text-gray-500 mb-3">Quick Actions</h3>
            <div class="space-y-2">
                <button id="new-task-btn" class="w-full flex items-center space-x-2 px-3 py-2 neumorph-btn rounded-lg hover:bg-gray-800 transition text-left">
                    <i class="fas fa-plus text-blue-500"></i>
                    <span>New Task</span>
                </button>
                <button id="quick-automation-btn" class="w-full flex items-center space-x-2 px-3 py-2 neumorph-btn rounded-lg hover:bg-gray-800 transition text-left">
                    <i class="fas fa-bolt text-yellow-500"></i>
                    <span>Quick Automation</span>
                </button>
                <button id="ai-training-btn" class="w-full flex items-center space-x-2 px-3 py-2 neumorph-btn rounded-lg hover:bg-gray-800 transition text-left">
                    <i class="fas fa-brain text-purple-500"></i>
                    <span>AI Training</span>
                </button>
            </div>
        </div>
    </aside>
</main>
{% extends "layout.html" %}

{% block content %}
<div class="row mb-4">
    <div class="col-md-12">
        <div class="glass-card floating">
            <div class="p-4">
                <h1 class="display-5 quantum-glow text-center">
                    <i class="fas fa-atom me-2 quantum-spin"></i> <span class="fw-bold">Quantum</span> NLP Framework
                </h1>
                <p class="card-text text-center lead">
                    A multi-dimensional, layered thinking process inspired by quantum computing concepts
                </p>
                <hr class="my-4" style="opacity: 0.1;">
                <p class="card-text">
                    This framework combines natural language processing with a recursive function that simulates quantum-inspired
                    thinking processes, creating a multi-dimensional analysis of your text. Additionally, it can leverage OpenAI's API
                    to generate human-like text responses based on the quantum analysis.
                </p>
                <div class="vision-progress">
                    <div class="vision-progress-bar"></div>
                </div>
            </div>
        </div>
    </div>
</div>

<div class="row">
    <div class="col-md-12">
        <div class="glass-card mb-4">
            <div class="p-4">
                <h4 class="mb-4 quantum-glow"><i class="fas fa-keyboard me-2"></i> Input Text</h4>

                <form method="POST" action="/process" id="process-form">
                    <div class="mb-4">
                        <textarea class="form-control quantum-input" id="input_text" name="input_text" rows="5" placeholder="Enter your text for analysis..." required>{{ input_text }}</textarea>
                    </div>

                    <div class="row mb-4 align-items-center">
                        <div class="col-md-6">
                            <label class="d-block mb-2">Quantum Dimensions</label>
                            <div class="input-group">
                                <span class="input-group-text"><i class="fas fa-layer-group"></i></span>
                                <select class="form-select" id="depth" name="depth">
                                    <option value="1" {% if depth == 1 %}selected{% endif %}>Basic (1 dimension)</option>
                                    <option value="2" {% if depth == 2 %}selected{% endif %}>Intermediate (2 dimensions)</option>
                                    <option value="3" {% if depth == 3 or not depth %}selected{% endif %}>Advanced (3 dimensions)</option>
                                    <option value="4" {% if depth == 4 %}selected{% endif %}>Extreme (4 dimensions)</option>
                                </select>
                            </div>
                        </div>
                        <div class="col-md-6">
                            <div class="form-check form-switch">
                                <input class="form-check-input" type="checkbox" id="use_ai" name="use_ai" {% if ai_response %}checked{% endif %}>
                                <label class="form-check-label" for="use_ai">Use OpenAI for enhanced analysis
                                    <i class="fas fa-robot ms-1"></i>
                                </label>
                            </div>
                        </div>
                    </div>

                    <!-- LED tracer progress line -->
                    <div class="vision-progress mb-4">
                        <div class="vision-progress-bar"></div>
                    </div>

                    <div class="d-grid">
                        <button type="submit" class="btn btn-primary quantum-btn" id="analyze-btn">
                            <span class="quantum-btn-content">
                                <i class="fas fa-brain me-2"></i>
                                Analyze with Quantum Algorithm
                            </span>
                        </button>
                    </div>
                </form>
            </div>
        </div>
    </div>
</div>

<!-- Results container with animation support -->
<div id="results-container" class="quantum-results-container">
    {% if nlp_results %}
    <div class="row">
        <div class="col-md-12">
            <div class="glass-card mb-4 position-relative overflow-hidden">
                <div class="p-4">
                    <h4 class="mb-4 quantum-glow"><i class="fas fa-chart-line me-2"></i> NLP Analysis Results</h4>

                    <!-- LED corner tracers -->
                    <div class="led-corner led-corner-tl"></div>
                    <div class="led-corner led-corner-tr"></div>
                    <div class="led-corner led-corner-bl"></div>
                    <div class="led-corner led-corner-br"></div>

                    <div class="row fade-in-staggered">
                        <div class="col-md-4" style="--stagger-delay: 0.1s;">
                            <div class="glass-card mb-3 text-to-vision glow-hover p-3 position-relative overflow-hidden">
                                <h5 class="card-title mb-3"><i class="fas fa-calculator me-2"></i> Text Statistics</h5>
                                <div class="border-start border-info ps-3 mb-2" style="border-width: 3px !important;">
                                    <div class="mb-2">Word count: <span class="badge bg-info quantum-score">{{ nlp_results.stats.word_count }}</span></div>
                                    <div class="mb-2">Sentence count: <span class="badge bg-info quantum-score">{{ nlp_results.stats.sentence_count }}</span></div>
                                    <div>Average sentence length: <span class="badge bg-info quantum-score">{{ nlp_results.stats.average_sentence_length }}</span> words</div>
                                </div>
                            </div>
                        </div>
                        <div class="col-md-4" style="--stagger-delay: 0.2s;">
                            <div class="glass-card mb-3 text-to-vision glow-hover p-3 position-relative overflow-hidden">
                                <h5 class="card-title mb-3"><i class="fas fa-tag me-2"></i> Top Terms</h5>
                                <ul class="list-group list-group-flush">
                                    {% for term, freq in nlp_results.top_terms %}
                                    <li class="list-group-item d-flex justify-content-between align-items-center bg-transparent border-bottom border-light border-opacity-10 quantum-entity-item">
                                        {{ term }}
                                        <span class="badge bg-primary quantum-score">{{ freq }}</span>
                                    </li>
                                    {% endfor %}
                                </ul>
                            </div>
                        </div>
                        <div class="col-md-4" style="--stagger-delay: 0.3s;">
                            <div class="glass-card mb-3 text-to-vision glow-hover p-3 position-relative overflow-hidden">
                                <h5 class="card-title mb-3"><i class="fas fa-fingerprint me-2"></i> Named Entities</h5>
                                {% if nlp_results.entities %}
                                <ul class="list-group list-group-flush">
                                    {% for entity in nlp_results.entities[:5] %}
                                    <li class="list-group-item d-flex justify-content-between align-items-center bg-transparent border-bottom border-light border-opacity-10 quantum-entity-item">
                                        {{ entity.text }}
                                        <span class="badge bg-success quantum-score">{{ entity.label }}</span>
                                    </li>
                                    {% endfor %}
                                </ul>
                                {% else %}
                                <p class="text-muted">No named entities detected</p>
                                {% endif %}
                            </div>
                        </div>
                    </div>
                </div>
            </div>
        </div>
    </div>
    {% endif %}

    {% if quantum_results %}
    <div class="row">
        <div class="col-md-12">
            <div class="glass-card mb-4 position-relative overflow-hidden">
                <div class="p-4">
                    <h4 class="mb-4 quantum-glow">
                        <i class="fas fa-atom me-2 quantum-spin"></i> Quantum Thinking Results
                    </h4>

                    <!-- LED corner tracers -->
                    <div class="led-corner led-corner-tl"></div>
                    <div class="led-corner led-corner-tr"></div>
                    <div class="led-corner led-corner-bl"></div>
                    <div class="led-corner led-corner-br"></div>

                    <div class="glass-card mb-4 p-3 position-relative overflow-hidden" style="background: rgba(13, 202, 240, 0.1);">
                        <div class="row align-items-center">
                            <div class="col-md-9">
                                <h5 class="mb-0"><strong>Meta-Insight:</strong></h5>
                                <p class="lead mb-0 mt-2">{{ quantum_results.meta_insight }}</p>
                            </div>
                            <div class="col-md-3 text-center">
                                <div class="quantum-score-visualization" data-score="{{ quantum_results.quantum_score }}">
                                    <div class="score-circle">
                                        <span class="score-value">{{ quantum_results.quantum_score }}</span>
                                    </div>
                                    <p class="mt-2 mb-0">Quantum Score</p>
                                </div>
                            </div>
                        </div>
                    </div>

                    <div class="row mb-3 align-items-center">
                        <div class="col-md-6">
                            <div class="d-flex align-items-center">
                                <i class="fas fa-layer-group me-2 quantum-glow" style="font-size: 1.5rem;"></i>
                                <h5 class="mb-0">Dimension Level: <span class="badge bg-primary">{{ quantum_results.dimension }}</span></h5>
                            </div>
                        </div>
                        <div class="col-md-6 text-end">
                            <div class="d-flex align-items-center justify-content-end">
                                <span class="me-2">Quantum Paths:</span>
                                <span class="badge bg-primary">{{ quantum_results.paths|length }}</span>
                            </div>
                        </div>
                    </div>

                    <!-- LED tracer progress line -->
                    <div class="vision-progress mb-4">
                        <div class="vision-progress-bar"></div>
                    </div>

                    <div class="accordion quantum-accordion" id="quantumPathsAccordion">
                        {% for path in quantum_results.paths %}
                        <div class="accordion-item glass-card text-to-vision mb-3 position-relative fade-in-staggered" style="--stagger-delay: {{ 0.15 * loop.index }}s;">
                            <h2 class="accordion-header">
                                <button class="accordion-button {% if loop.index > 1 %}collapsed{% endif %} glass-card" type="button" data-bs-toggle="collapse" data-bs-target="#collapse{{ loop.index }}">
                                    <div class="path-number me-3">{{ loop.index }}</div>
                                    <i class="fas fa-route me-2"></i> Path: <span class="ms-2 fw-bold">{{ path.concept }}</span>
                                </button>
                            </h2>
                            <div id="collapse{{ loop.index }}" class="accordion-collapse collapse {% if loop.index == 1 %}show{% endif %}">
                                <div class="accordion-body glass-card">
                                    <div class="row">
                                        <div class="col-md-12 mb-3">
                                            <div class="glass-card p-3" style="background: rgba(218, 75, 134, 0.05);">
                                                <h6 class="mb-2"><i class="fas fa-lightbulb me-2"></i>Insight:</h6>
                                                <p class="mb-0">{{ path.insight }}</p>
                                            </div>
                                        </div>
                                        <div class="col-md-12 mb-3">
                                            <div class="glass-card p-3" style="background: rgba(111, 66, 193, 0.05);">
                                                <h6 class="mb-2"><i class="fas fa-comment-dots me-2"></i>Prompt:</h6>
                                                <p class="mb-0 text-muted">{{ path.prompt }}</p>
                                            </div>
                                        </div>
                                    </div>

                                    {% if path.entangled_insight %}
                                    <div class="glass-card p-3 mt-3" style="background: rgba(13, 202, 240, 0.05);">
                                        <h6 class="mb-2"><i class="fas fa-link me-2"></i>Quantum Entanglement:</h6>
                                        <div class="d-flex align-items-center mb-2">
                                            <span class="badge bg-info me-2">Entangled with Path: {{ path.entangled_with }}</span>
                                            <div class="vision-progress" style="flex-grow: 1; height: 2px;">
                                                <div class="vision-progress-bar"></div>
                                            </div>
                                        </div>
                                        <p class="mb-0">{{ path.entangled_insight }}</p>
                                    </div>
                                    {% endif %}

                                    {% if path.sub_dimensions and path.sub_dimensions.paths %}
                                    <div class="glass-card p-3 mt-3" style="background: rgba(255, 193, 7, 0.05);">
|
237 |
+
<h6 class="mb-3"><i class="fas fa-cubes me-2"></i>Sub-dimensions:</h6>
|
238 |
+
<div class="row">
|
239 |
+
{% for subpath in path.sub_dimensions.paths[:2] %}
|
240 |
+
<div class="col-md-6 mb-2">
|
241 |
+
<div class="glass-card p-3 h-100">
|
242 |
+
<h6 class="mb-2 text-info">{{ subpath.concept }}</h6>
|
243 |
+
<p class="mb-0 small">{{ subpath.insight }}</p>
|
244 |
+
</div>
|
245 |
+
</div>
|
246 |
+
{% endfor %}
|
247 |
+
</div>
|
248 |
+
</div>
|
249 |
+
{% endif %}
|
250 |
+
</div>
|
251 |
+
</div>
|
252 |
+
|
253 |
+
<!-- Path color indicator -->
|
254 |
+
<div class="path-color-indicator" style="background:
|
255 |
+
{% if loop.index % 3 == 1 %}#da4b86{% elif loop.index % 3 == 2 %}#6f42c1{% else %}#0dcaf0{% endif %};"></div>
|
256 |
+
</div>
|
257 |
+
{% endfor %}
|
258 |
+
</div>
|
259 |
+
</div>
|
260 |
+
</div>
|
261 |
+
</div>
|
262 |
+
</div>
|
263 |
+
|
264 |
<style>
|
265 |
+
/* Quantum Score Visualization */
|
266 |
+
.quantum-score-visualization {
|
267 |
+
display: flex;
|
268 |
+
flex-direction: column;
|
269 |
+
align-items: center;
|
270 |
+
padding: 10px;
|
271 |
+
}
|
272 |
+
|
273 |
+
.score-circle {
|
274 |
+
width: 80px;
|
275 |
+
height: 80px;
|
276 |
+
border-radius: 50%;
|
277 |
+
display: flex;
|
278 |
+
align-items: center;
|
279 |
+
justify-content: center;
|
280 |
+
background: radial-gradient(circle, rgba(13, 202, 240, 0.2) 0%, rgba(218, 75, 134, 0.2) 100%);
|
281 |
+
border: 2px solid rgba(111, 66, 193, 0.5);
|
282 |
+
position: relative;
|
283 |
+
box-shadow: 0 0 15px rgba(111, 66, 193, 0.5);
|
284 |
+
animation: pulse-glow 2s infinite alternate;
|
|
285 |
}
|
286 |
|
287 |
+
@keyframes pulse-glow {
|
288 |
+
0% { box-shadow: 0 0 5px rgba(111, 66, 193, 0.5); }
|
289 |
+
100% { box-shadow: 0 0 20px rgba(111, 66, 193, 0.8); }
|
290 |
}
|
291 |
|
292 |
+
.score-value {
|
293 |
+
font-size: 1.5rem;
|
294 |
+
font-weight: bold;
|
295 |
+
color: white;
|
|
296 |
}
|
297 |
|
298 |
+
/* Path number indicator */
|
299 |
+
.path-number {
|
300 |
+
display: flex;
|
301 |
+
align-items: center;
|
302 |
+
justify-content: center;
|
303 |
+
width: 30px;
|
304 |
+
height: 30px;
|
305 |
+
border-radius: 50%;
|
306 |
+
background: rgba(111, 66, 193, 0.2);
|
307 |
+
color: white;
|
308 |
+
font-weight: bold;
|
309 |
+
border: 1px solid rgba(111, 66, 193, 0.5);
|
310 |
}
|
311 |
|
312 |
+
/* Path color indicator */
|
313 |
+
.path-color-indicator {
|
314 |
+
position: absolute;
|
315 |
+
top: 0;
|
316 |
+
left: 0;
|
317 |
+
width: 4px;
|
318 |
+
height: 100%;
|
319 |
+
border-top-left-radius: 4px;
|
320 |
+
border-bottom-left-radius: 4px;
|
321 |
+
}
|
322 |
+
|
323 |
+
/* Accordion styling */
|
324 |
+
.quantum-accordion .accordion-button {
|
325 |
+
background: rgba(30, 41, 59, 0.4);
|
326 |
+
color: white;
|
327 |
border: none;
|
328 |
}
|
329 |
|
330 |
+
.quantum-accordion .accordion-button:not(.collapsed) {
|
331 |
+
color: white;
|
332 |
+
background: rgba(30, 41, 59, 0.5);
|
333 |
+
box-shadow: none;
|
334 |
}
|
335 |
|
336 |
+
.quantum-accordion .accordion-button:focus {
|
337 |
+
box-shadow: none;
|
338 |
+
border-color: rgba(111, 66, 193, 0.5);
|
339 |
}
|
340 |
|
341 |
+
.quantum-accordion .accordion-button::after {
|
342 |
+
background-image: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16' fill='white'%3e%3cpath fill-rule='evenodd' d='M1.646 4.646a.5.5 0 0 1 .708 0L8 10.293l5.646-5.647a.5.5 0 0 1 .708.708l-6 6a.5.5 0 0 1-.708 0l-6-6a.5.5 0 0 1 0-.708z'/%3e%3c/svg%3e");
|
343 |
}
|
344 |
|
345 |
+
/* Staggered fade-in animation */
|
346 |
+
@keyframes fadeInUp {
|
347 |
+
from {
|
348 |
+
opacity: 0;
|
349 |
+
transform: translateY(20px);
|
350 |
+
}
|
351 |
+
to {
|
352 |
+
opacity: 1;
|
353 |
+
transform: translateY(0);
|
354 |
+
}
|
355 |
}
|
356 |
|
357 |
+
.fade-in-staggered {
|
358 |
opacity: 0;
|
359 |
+
animation: fadeInUp 0.6s ease forwards;
|
360 |
+
animation-delay: var(--stagger-delay, 0s);
|
361 |
}
|
362 |
+
</style>
|
363 |
+
{% endif %}
|
364 |
+
|
365 |
+
{% if ai_response %}
|
366 |
+
<div class="row">
|
367 |
+
<div class="col-md-12">
|
368 |
+
<div class="glass-card mb-4 position-relative overflow-hidden">
|
369 |
+
<div class="p-4">
|
370 |
+
<h4 class="mb-4 quantum-glow">
|
371 |
+
<i class="fas fa-robot me-2 quantum-spin"></i> AI Analysis
|
372 |
+
</h4>
|
373 |
+
|
374 |
+
<!-- LED corner tracers -->
|
375 |
+
<div class="led-corner led-corner-tl"></div>
|
376 |
+
<div class="led-corner led-corner-tr"></div>
|
377 |
+
<div class="led-corner led-corner-bl"></div>
|
378 |
+
<div class="led-corner led-corner-br"></div>
|
379 |
+
|
380 |
+
{% if ai_response.error %}
|
381 |
+
<div class="glass-card p-4" style="background: rgba(220, 53, 69, 0.1);">
|
382 |
+
<div class="row align-items-center">
|
383 |
+
<div class="col-md-1 text-center">
|
384 |
+
<i class="fas fa-exclamation-triangle fa-2x text-danger"></i>
|
385 |
+
</div>
|
386 |
+
<div class="col-md-11">
|
387 |
+
<h5 class="text-danger">OpenAI API Error</h5>
|
388 |
+
<p>{{ ai_response.error }}</p>
|
389 |
+
<div class="mt-3">
|
390 |
+
<a href="https://platform.openai.com/signup" target="_blank" class="btn btn-sm btn-outline-info quantum-btn">
|
391 |
+
<i class="fas fa-key me-1"></i> Get OpenAI API Key
|
392 |
+
</a>
|
393 |
+
<button class="btn btn-sm btn-outline-primary quantum-btn ms-2" id="add-api-key-btn">
|
394 |
+
<i class="fas fa-plus-circle me-1"></i> Add API Key to Project
|
395 |
+
</button>
|
396 |
+
</div>
|
397 |
+
</div>
|
398 |
+
</div>
|
399 |
+
</div>
|
400 |
+
{% else %}
|
401 |
+
<div class="glass-card mb-3 fade-in">
|
402 |
+
<div class="p-3 border-bottom border-info border-opacity-10">
|
403 |
+
<div class="d-flex justify-content-between align-items-center">
|
404 |
+
<div>
|
405 |
+
<i class="fas fa-brain me-2 text-info"></i>
|
406 |
+
<span>Response from <span class="badge bg-info">{{ ai_response.model }}</span></span>
|
407 |
+
</div>
|
408 |
+
<div class="d-flex align-items-center">
|
409 |
+
<div class="ai-indicator me-2">
|
410 |
+
<div class="ai-pulse"></div>
|
411 |
+
</div>
|
412 |
+
<span class="badge bg-info">AI-Generated</span>
|
413 |
+
</div>
|
414 |
+
</div>
|
415 |
+
</div>
|
416 |
+
<div class="p-4">
|
417 |
+
<!-- LED tracer progress line -->
|
418 |
+
<div class="vision-progress mb-3">
|
419 |
+
<div class="vision-progress-bar"></div>
|
420 |
+
</div>
|
421 |
+
|
422 |
+
<div class="ai-response p-2 position-relative">
|
423 |
+
{{ ai_response.text | replace('\n', '<br>') | safe }}
|
424 |
+
|
425 |
+
<!-- Quantum particles floating around the AI response -->
|
426 |
+
<div class="quantum-particles-container"></div>
|
427 |
+
</div>
|
428 |
+
</div>
|
429 |
+
</div>
|
430 |
+
|
431 |
+
<div class="d-flex justify-content-end mt-3 fade-in" style="animation-delay: 0.3s;">
|
432 |
+
<button class="btn btn-sm btn-outline-light me-2" id="copy-ai-response">
|
433 |
+
<i class="fas fa-copy me-1"></i> Copy Response
|
434 |
+
</button>
|
435 |
+
<button class="btn btn-sm btn-outline-info" id="regenerate-response">
|
436 |
+
<i class="fas fa-sync-alt me-1"></i> Regenerate
|
437 |
+
</button>
|
438 |
+
</div>
|
439 |
+
{% endif %}
|
440 |
+
</div>
|
441 |
+
</div>
|
442 |
+
</div>
|
443 |
+
</div>
|
444 |
+
|
445 |
+
<style>
|
446 |
+
/* AI Indicator */
|
447 |
+
.ai-indicator {
|
448 |
+
width: 20px;
|
449 |
+
height: 20px;
|
450 |
+
position: relative;
|
451 |
}
|
452 |
|
453 |
+
.ai-pulse {
|
454 |
+
width: 100%;
|
455 |
+
height: 100%;
|
456 |
+
border-radius: 50%;
|
457 |
+
background: rgba(13, 202, 240, 0.5);
|
458 |
+
position: absolute;
|
459 |
+
animation: ai-pulse 2s infinite;
|
460 |
}
|
461 |
|
462 |
+
@keyframes ai-pulse {
|
463 |
+
0% {
|
464 |
+
transform: scale(0.5);
|
465 |
+
opacity: 1;
|
466 |
+
}
|
467 |
+
100% {
|
468 |
+
transform: scale(1.5);
|
469 |
+
opacity: 0;
|
470 |
+
}
|
471 |
}
|
472 |
|
473 |
+
/* Quantum particles */
|
474 |
+
.quantum-particles-container {
|
475 |
position: absolute;
|
476 |
+
top: 0;
|
477 |
+
left: 0;
|
478 |
+
width: 100%;
|
479 |
+
height: 100%;
|
480 |
+
pointer-events: none;
|
481 |
+
overflow: hidden;
|
482 |
+
}
|
483 |
+
|
484 |
+
@keyframes float-particle {
|
485 |
+
0% {
|
486 |
+
transform: translateY(0) translateX(0) rotate(0deg);
|
487 |
+
opacity: 0;
|
488 |
+
}
|
489 |
+
10% {
|
490 |
+
opacity: 1;
|
491 |
+
}
|
492 |
+
90% {
|
493 |
+
opacity: 1;
|
494 |
+
}
|
495 |
+
100% {
|
496 |
+
transform: translateY(-100px) translateX(var(--x-drift)) rotate(360deg);
|
497 |
+
opacity: 0;
|
498 |
+
}
|
499 |
}
|
500 |
</style>
|
501 |
+
|
502 |
+
<script>
|
503 |
+
document.addEventListener('DOMContentLoaded', function() {
|
504 |
+
// Copy response button
|
505 |
+
const copyBtn = document.getElementById('copy-ai-response');
|
506 |
+
if (copyBtn) {
|
507 |
+
copyBtn.addEventListener('click', function() {
|
508 |
+
const responseText = document.querySelector('.ai-response').innerText;
|
509 |
+
navigator.clipboard.writeText(responseText).then(() => {
|
510 |
+
copyBtn.innerHTML = '<i class="fas fa-check me-1"></i> Copied!';
|
511 |
+
setTimeout(() => {
|
512 |
+
copyBtn.innerHTML = '<i class="fas fa-copy me-1"></i> Copy Response';
|
513 |
+
}, 2000);
|
514 |
+
});
|
515 |
+
});
|
516 |
+
}
|
517 |
|
518 |
+
// Add API key button
|
519 |
+
const apiKeyBtn = document.getElementById('add-api-key-btn');
|
520 |
+
if (apiKeyBtn) {
|
521 |
+
apiKeyBtn.addEventListener('click', function() {
|
522 |
+
alert('To add your OpenAI API key, please contact the administrator or use environment variables.');
|
523 |
+
});
|
524 |
+
}
|
525 |
+
|
526 |
+
// Create floating particles in the AI response
|
527 |
+
const particlesContainer = document.querySelector('.quantum-particles-container');
|
528 |
+
if (particlesContainer) {
|
529 |
+
// Create floating particles
|
530 |
+
setInterval(() => {
|
531 |
+
if (Math.random() > 0.7) {
|
532 |
+
createFloatingParticle(particlesContainer);
|
533 |
+
}
|
534 |
+
}, 500);
|
535 |
+
}
|
536 |
+
|
537 |
+
function createFloatingParticle(container) {
|
538 |
+
const particle = document.createElement('div');
|
539 |
+
particle.className = 'quantum-particle';
|
540 |
+
|
541 |
+
// Random position at the bottom
|
542 |
+
const xPos = Math.random() * 100;
|
543 |
+
const xDrift = (Math.random() - 0.5) * 100;
|
544 |
+
|
545 |
+
particle.style.cssText = `
|
546 |
+
position: absolute;
|
547 |
+
bottom: 0;
|
548 |
+
left: ${xPos}%;
|
549 |
+
width: 4px;
|
550 |
+
height: 4px;
|
551 |
+
border-radius: 50%;
|
552 |
+
background: ${getRandomParticleColor()};
|
553 |
+
filter: blur(1px);
|
554 |
+
box-shadow: 0 0 5px currentColor;
|
555 |
+
--x-drift: ${xDrift}px;
|
556 |
+
animation: float-particle ${3 + Math.random() * 4}s linear forwards;
|
557 |
+
`;
|
558 |
+
|
559 |
+
container.appendChild(particle);
|
560 |
+
|
561 |
+
// Remove after animation completes
|
562 |
+
setTimeout(() => {
|
563 |
+
particle.remove();
|
564 |
+
}, 7000);
|
565 |
+
}
|
566 |
+
|
567 |
+
function getRandomParticleColor() {
|
568 |
+
const colors = [
|
569 |
+
'rgba(13, 202, 240, 0.8)', // Cyan
|
570 |
+
'rgba(111, 66, 193, 0.8)', // Purple
|
571 |
+
'rgba(218, 75, 134, 0.8)' // Pink
|
572 |
+
];
|
573 |
+
return colors[Math.floor(Math.random() * colors.length)];
|
574 |
+
}
|
575 |
+
});
|
576 |
+
</script>
|
577 |
+
{% endif %}
|
578 |
+
</div>
|
579 |
+
|
580 |
+
{% endblock %}
|
install.sh
ADDED
@@ -0,0 +1,900 @@
|
1 |
+
#!/bin/bash
|
2 |
+
|
3 |
+
# Atlas Intelligence Universal Setup Script
|
4 |
+
# Compatible with macOS on Apple Silicon (M1/M2/M3)
|
5 |
+
# Created: April 16, 2025
|
6 |
+
|
7 |
+
# Fail pipelines on error, but allow for custom error handling (set -e is intentionally omitted)
|
8 |
+
set -o pipefail
|
9 |
+
|
10 |
+
# Color codes for better output
|
11 |
+
GREEN='\033[0;32m'
|
12 |
+
BLUE='\033[0;34m'
|
13 |
+
YELLOW='\033[0;33m'
|
14 |
+
RED='\033[0;31m'
|
15 |
+
BOLD='\033[1m'
|
16 |
+
NC='\033[0m' # No Color
|
17 |
+
|
18 |
+
# Paths to project directories
|
19 |
+
OPENMANUS_DIR="$HOME/OpenManus"
|
20 |
+
CASIBASE_DIR="$HOME/casibase"
|
21 |
+
CYPHER_DIR="$HOME/Cypher"
|
22 |
+
CLOUD_DIR="$HOME/Library/Mobile Documents/com~apple~CloudDocs/Atlas Business"
|
23 |
+
AI_COMPANION_DIR="$CLOUD_DIR/AIConversationCompanion"
|
24 |
+
QUANTUM_VISION_DIR="$CLOUD_DIR/QuantumVision"
|
25 |
+
UNIFIED_DIR="$HOME/AtlasUnified"
|
26 |
+
|
27 |
+
# Log file for the installation process
|
28 |
+
LOG_FILE="$QUANTUM_VISION_DIR/install_log.txt"
|
29 |
+
|
30 |
+
# Function to log messages to both stdout and log file
|
31 |
+
log() {
|
32 |
+
echo -e "$1" | tee -a "$LOG_FILE"
|
33 |
+
}
|
34 |
+
|
35 |
+
# Function to log errors
|
36 |
+
log_error() {
|
37 |
+
echo -e "${RED}ERROR: $1${NC}" | tee -a "$LOG_FILE"
|
38 |
+
}
|
39 |
+
|
40 |
+
# Function to log warnings
|
41 |
+
log_warning() {
|
42 |
+
echo -e "${YELLOW}WARNING: $1${NC}" | tee -a "$LOG_FILE"
|
43 |
+
}
|
44 |
+
|
45 |
+
# Function to log success
|
46 |
+
log_success() {
|
47 |
+
echo -e "${GREEN}$1${NC}" | tee -a "$LOG_FILE"
|
48 |
+
}
|
49 |
+
|
50 |
+
# Function to log information
|
51 |
+
log_info() {
|
52 |
+
echo -e "${BLUE}$1${NC}" | tee -a "$LOG_FILE"
|
53 |
+
}
|
54 |
+
|
55 |
+
# Clean log file
|
56 |
+
> "$LOG_FILE"
|
57 |
+
|
58 |
+
log_info "=============================================="
|
59 |
+
log_info " Atlas Intelligence Unified Setup "
|
60 |
+
log_info "=============================================="
|
61 |
+
log_info "This script will set up a universal connection between:"
|
62 |
+
log "- OpenManus AI Agent Framework"
|
63 |
+
log "- Casibase RAG Knowledge Database"
|
64 |
+
log "- Cypher"
|
65 |
+
log "- AI Conversation Companion"
|
66 |
+
log "- Quantum Vision"
|
67 |
+
log_info "=============================================="
|
68 |
+
log ""
|
69 |
+
|
70 |
+
# Check if running on macOS
|
71 |
+
if [[ "$(uname)" != "Darwin" ]]; then
|
72 |
+
log_error "This script is designed for macOS. Exiting."
|
73 |
+
exit 1
|
74 |
+
fi
|
75 |
+
|
76 |
+
# Check for Apple Silicon
|
77 |
+
if [[ "$(uname -m)" != "arm64" ]]; then
|
78 |
+
log_warning "You're not running on Apple Silicon (M1/M2/M3). Some optimizations may not work."
|
79 |
+
fi
|
80 |
+
|
81 |
+
# Check for required tools
|
82 |
+
log_info "Checking for required tools..."
|
83 |
+
|
84 |
+
# Check for Homebrew
|
85 |
+
if ! command -v brew &> /dev/null; then
|
86 |
+
log_warning "Homebrew not found. Installing Homebrew..."
|
87 |
+
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
|
88 |
+
|
89 |
+
# Add Homebrew to PATH for Apple Silicon
|
90 |
+
if [[ "$(uname -m)" == "arm64" ]]; then
|
91 |
+
echo 'eval "$(/opt/homebrew/bin/brew shellenv)"' >> ~/.zprofile
|
92 |
+
eval "$(/opt/homebrew/bin/brew shellenv)"
|
93 |
+
fi
|
94 |
+
else
|
95 |
+
log_success "Homebrew is installed."
|
96 |
+
fi
|
97 |
+
|
98 |
+
# Check for Python
|
99 |
+
if ! command -v python3 &> /dev/null; then
|
100 |
+
log_warning "Python not found. Installing Python..."
|
101 |
+
brew install python@3.11
|
102 |
+
else
|
103 |
+
log_success "Python is installed."
|
104 |
+
PYTHON_VERSION=$(python3 --version | cut -d' ' -f2)
|
105 |
+
log_info "Python version: $PYTHON_VERSION"
|
106 |
+
fi
|
107 |
+
|
108 |
+
# Check for Go
|
109 |
+
if ! command -v go &> /dev/null; then
|
110 |
+
log_warning "Go not found. Installing Go..."
|
111 |
+
brew install go
|
112 |
+
else
|
113 |
+
log_success "Go is installed."
|
114 |
+
GO_VERSION=$(go version | cut -d' ' -f3)
|
115 |
+
log_info "Go version: $GO_VERSION"
|
116 |
+
fi
|
117 |
+
|
118 |
+
# Check for Node.js
|
119 |
+
if ! command -v node &> /dev/null; then
|
120 |
+
log_warning "Node.js not found. Installing Node.js..."
|
121 |
+
brew install node
|
122 |
+
else
|
123 |
+
log_success "Node.js is installed."
|
124 |
+
NODE_VERSION=$(node --version)
|
125 |
+
log_info "Node.js version: $NODE_VERSION"
|
126 |
+
fi
|
127 |
+
|
128 |
+
# Check for required Python modules
|
129 |
+
check_python_module() {
|
130 |
+
if ! python3 -c "import $1" &> /dev/null; then
|
131 |
+
return 1
|
132 |
+
else
|
133 |
+
return 0
|
134 |
+
fi
|
135 |
+
}
|
136 |
+
|
137 |
+
# Create unified directory if it doesn't exist
|
138 |
+
log_info "Setting up unified directory structure..."
|
139 |
+
mkdir -p "$UNIFIED_DIR"
|
140 |
+
mkdir -p "$UNIFIED_DIR/config"
|
141 |
+
mkdir -p "$UNIFIED_DIR/logs"
|
142 |
+
mkdir -p "$UNIFIED_DIR/data"
|
143 |
+
|
144 |
+
# Set up Python virtual environment with error handling
|
145 |
+
log_info "Setting up Python virtual environment..."
|
146 |
+
if ! python3 -c "import venv" &> /dev/null; then
|
147 |
+
log_warning "Python venv module not found. Installing venv..."
|
148 |
+
brew reinstall python@3.11
|
149 |
+
fi
|
150 |
+
|
151 |
+
# Create and activate virtual environment
|
152 |
+
python3 -m venv "$UNIFIED_DIR/venv" || {
|
153 |
+
log_error "Failed to create virtual environment. Trying alternative approach..."
|
154 |
+
pip3 install virtualenv
|
155 |
+
virtualenv "$UNIFIED_DIR/venv"
|
156 |
+
}
|
157 |
+
|
158 |
+
# Source the virtual environment with error handling
|
159 |
+
if [ -f "$UNIFIED_DIR/venv/bin/activate" ]; then
|
160 |
+
source "$UNIFIED_DIR/venv/bin/activate"
|
161 |
+
log_success "Virtual environment activated."
|
162 |
+
else
|
163 |
+
log_error "Virtual environment activation script not found. Installation may not work correctly."
|
164 |
+
exit 1
|
165 |
+
fi
|
166 |
+
|
167 |
+
# Upgrade pip and install wheel to avoid common installation issues
|
168 |
+
log_info "Upgrading pip and installing wheel..."
|
169 |
+
pip install --upgrade pip wheel setuptools
|
170 |
+
|
171 |
+
# Install core dependencies that commonly cause issues if not installed first
|
172 |
+
log_info "Installing core dependencies..."
|
173 |
+
pip install Cython numpy==1.26.4 pandas scikit-learn
|
174 |
+
|
175 |
+
# OpenManus dependencies
|
176 |
+
log_info "Installing OpenManus dependencies..."
|
177 |
+
if [ -f "$OPENMANUS_DIR/requirements.txt" ]; then
|
178 |
+
# Try-catch for OpenManus requirements
|
179 |
+
if ! pip install -r "$OPENMANUS_DIR/requirements.txt"; then
|
180 |
+
log_warning "Some OpenManus dependencies failed to install. Installing essential ones individually..."
|
181 |
+
# Install critical dependencies individually
|
182 |
+
pip install openai tenacity pyyaml loguru
|
183 |
+
fi
|
184 |
+
else
|
185 |
+
log_error "OpenManus requirements.txt not found. Skipping OpenManus dependencies."
|
186 |
+
fi
|
187 |
+
|
188 |
+
# Quantum Vision dependencies - FIX: Instead of installing as a package, install dependencies directly
|
189 |
+
log_info "Installing Quantum Vision dependencies..."
|
190 |
+
if [ -f "$QUANTUM_VISION_DIR/pyproject.toml" ]; then
|
191 |
+
# Install specific dependencies from pyproject.toml instead of the package itself
|
192 |
+
log_info "Installing specific Quantum Vision dependencies..."
|
193 |
+
pip install beautifulsoup4 email-validator flask flask-sqlalchemy gunicorn
|
194 |
+
pip install "openai>=1.6.0" psycopg2-binary requests
|
195 |
+
pip install "spacy>=3.0.0" sqlalchemy trafilatura
|
196 |
+
|
197 |
+
# Create an empty __init__.py file in the Quantum Vision directory to make it importable
|
198 |
+
touch "$QUANTUM_VISION_DIR/__init__.py"
|
199 |
+
log_success "Quantum Vision dependencies installed."
|
200 |
+
else
|
201 |
+
log_error "Quantum Vision pyproject.toml not found. Skipping Quantum Vision dependencies."
|
202 |
+
fi
|
203 |
+
|
204 |
+
# Cypher dependencies
|
205 |
+
log_info "Installing Cypher dependencies..."
|
206 |
+
if [ -f "$CYPHER_DIR/requirements.txt" ]; then
|
207 |
+
# Try-catch for Cypher requirements
|
208 |
+
if ! pip install -r "$CYPHER_DIR/requirements.txt"; then
|
209 |
+
log_warning "Some Cypher dependencies failed to install. Installing essential ones individually..."
|
210 |
+
pip install flask requests
|
211 |
+
fi
|
212 |
+
else
|
213 |
+
log_warning "Cypher requirements.txt not found. Skipping Cypher dependencies."
|
214 |
+
fi
|
215 |
+
|
216 |
+
# Install FastAPI and related packages for the unified API
|
217 |
+
log_info "Installing FastAPI and related packages for the unified API..."
|
218 |
+
pip install fastapi uvicorn pydantic pyyaml
|
219 |
+
|
220 |
+
# Set up Casibase
|
221 |
+
log_info "Setting up Casibase..."
|
222 |
+
if [ -d "$CASIBASE_DIR" ]; then
|
223 |
+
cd "$CASIBASE_DIR"
|
224 |
+
|
225 |
+
# Check if Docker is installed
|
226 |
+
if ! command -v docker &> /dev/null; then
|
227 |
+
log_warning "Docker not found. Installing Docker..."
|
228 |
+
brew install --cask docker
|
229 |
+
log_warning "Please open Docker Desktop and complete setup, then run this script again."
|
230 |
+
log_warning "Skipping Casibase setup for now."
|
231 |
+
else
|
232 |
+
# Build and run Casibase
|
233 |
+
log_info "Building Casibase..."
|
234 |
+
# Only run build.sh if it exists and is executable
|
235 |
+
if [ -f "./build.sh" ] && [ -x "./build.sh" ]; then
|
236 |
+
# Run build.sh with error handling
|
237 |
+
if ! ./build.sh; then
|
238 |
+
log_warning "Casibase build failed. You may need to run it manually later."
|
239 |
+
else
|
240 |
+
log_success "Casibase build completed successfully."
|
241 |
+
fi
|
242 |
+
else
|
243 |
+
log_warning "Casibase build.sh not found or not executable. Skipping build."
|
244 |
+
fi
|
245 |
+
fi
|
246 |
+
|
247 |
+
log_success "Casibase setup completed with available components."
|
248 |
+
else
|
249 |
+
log_warning "Casibase directory not found. Skipping Casibase setup."
|
250 |
+
fi
|
251 |
+
|
252 |
+
# Return to original directory
|
253 |
+
cd "$QUANTUM_VISION_DIR"
|
254 |
+
|
255 |
+
# Set up AI Conversation Companion frontend
|
256 |
+
log_info "Setting up AI Conversation Companion..."
|
257 |
+
if [ -d "$AI_COMPANION_DIR" ]; then
|
258 |
+
cd "$AI_COMPANION_DIR"
|
259 |
+
|
260 |
+
# Install npm dependencies
|
261 |
+
if [ -f "package.json" ]; then
|
262 |
+
log_info "Installing npm dependencies for AI Conversation Companion..."
|
263 |
+
# Run npm install with error handling
|
264 |
+
if ! npm install; then
|
265 |
+
log_warning "Some npm packages failed to install. You may need to run 'npm install' manually later."
|
266 |
+
fi
|
267 |
+
else
|
268 |
+
log_warning "package.json not found in AI Conversation Companion. Skipping npm install."
|
269 |
+
fi
|
270 |
+
else
|
271 |
+
log_warning "AI Conversation Companion directory not found. Skipping frontend setup."
|
272 |
+
fi
|
273 |
+
|
274 |
+
# Return to original directory
|
275 |
+
cd "$QUANTUM_VISION_DIR"
|
276 |
+
|
277 |
+
# Create unified configuration
|
278 |
+
log_info "Creating unified configuration..."
|
279 |
+
mkdir -p "$UNIFIED_DIR/config"
|
280 |
+
cat > "$UNIFIED_DIR/config/unified_config.yaml" << EOF
|
281 |
+
# Atlas Intelligence Unified Configuration
|
282 |
+
version: 1.0.0
|
283 |
+
created: "$(date)"
|
284 |
+
|
285 |
+
# Project Paths
|
286 |
+
paths:
|
287 |
+
openmanus: "$OPENMANUS_DIR"
|
288 |
+
casibase: "$CASIBASE_DIR"
|
289 |
+
cypher: "$CYPHER_DIR"
|
290 |
+
ai_companion: "$AI_COMPANION_DIR"
|
291 |
+
quantum_vision: "$QUANTUM_VISION_DIR"
|
292 |
+
unified: "$UNIFIED_DIR"
|
293 |
+
|
294 |
+
# Integration Settings
|
295 |
+
integrations:
|
296 |
+
enable_openmanus: true
|
297 |
+
enable_casibase: true
|
298 |
+
enable_cypher: true
|
299 |
+
enable_ai_companion: true
|
300 |
+
enable_quantum_vision: true
|
301 |
+
|
302 |
+
# API Settings
|
303 |
+
api:
|
304 |
+
host: "localhost"
|
305 |
+
port: 8080
|
306 |
+
debug: false
|
307 |
+
enable_cors: true
|
308 |
+
|
309 |
+
# Database Settings
|
310 |
+
database:
|
311 |
+
type: "sqlite" # For development, can be changed to mysql or postgresql
|
312 |
+
path: "$UNIFIED_DIR/data/atlas_unified.db"
|
313 |
+
|
314 |
+
# Logging Settings
|
315 |
+
logging:
|
316 |
+
level: "info"
|
317 |
+
file: "$UNIFIED_DIR/logs/atlas_unified.log"
|
318 |
+
max_size_mb: 10
|
319 |
+
backup_count: 5
|
320 |
+
|
321 |
+
# LLM Integration
|
322 |
+
llm:
|
323 |
+
provider: "openai" # Can be openai, anthropic, llama, etc.
|
324 |
+
model: "gpt-4"
|
325 |
+
api_key_env: "OPENAI_API_KEY"
|
326 |
+
EOF
|
327 |
+
|
328 |
+
# Create the unified bridge script with better error handling and diagnostics
|
329 |
+
log_info "Creating unified bridge script..."
|
330 |
+
cat > "$UNIFIED_DIR/unified_bridge.py" << 'EOF'
|
331 |
+
#!/usr/bin/env python3
|
332 |
+
"""
|
333 |
+
Atlas Intelligence Unified Bridge
|
334 |
+
This script provides a universal bridge between all Atlas Intelligence components.
|
335 |
+
"""
|
336 |
+
|
337 |
+
import os
|
338 |
+
import sys
|
339 |
+
import yaml
|
340 |
+
import logging
|
341 |
+
import traceback
|
342 |
+
from pathlib import Path
|
343 |
+
from typing import Dict, Any, Optional, Union
|
344 |
+
|
345 |
+
# Configure logging first
|
346 |
+
logging.basicConfig(
|
347 |
+
level=logging.INFO,
|
348 |
+
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
|
349 |
+
handlers=[
|
350 |
+
logging.StreamHandler()
|
351 |
+
]
|
352 |
+
)
|
353 |
+
logger = logging.getLogger("atlas_unified")
|
354 |
+
|
355 |
+
# Setup paths
|
356 |
+
SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
|
357 |
+
CONFIG_PATH = os.path.join(SCRIPT_DIR, "config", "unified_config.yaml")
|
358 |
+
|
359 |
+
# Load configuration with error handling
|
360 |
+
try:
|
361 |
+
with open(CONFIG_PATH, "r") as f:
|
362 |
+
config = yaml.safe_load(f)
|
363 |
+
|
364 |
+
# Now that we have config, setup file logging
|
365 |
+
log_file = config["logging"]["file"]
|
366 |
+
os.makedirs(os.path.dirname(log_file), exist_ok=True)
|
367 |
+
|
368 |
+
file_handler = logging.FileHandler(log_file)
|
369 |
+
file_handler.setFormatter(logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s'))
|
370 |
+
logger.addHandler(file_handler)
|
371 |
+
|
372 |
+
logger.setLevel(logging.getLevelName(config["logging"]["level"].upper()))
|
373 |
+
except Exception as e:
|
374 |
+
logger.error(f"Failed to load configuration: {e}")
|
375 |
+
logger.error(traceback.format_exc())
|
376 |
+
sys.exit(1)
|
377 |
+
|
378 |
+
# Add project paths to Python path with validation
|
379 |
+
for key, path in config["paths"].items():
|
380 |
+
if os.path.exists(path):
|
381 |
+
sys.path.append(path)
|
382 |
+
logger.info(f"Added {key} path to system path: {path}")
|
383 |
+
else:
|
384 |
+
logger.warning(f"Path {key} does not exist: {path}")
|
385 |
+
|
386 |
+
# Try to import FastAPI with error handling
|
387 |
+
try:
|
388 |
+
from fastapi import FastAPI, Request, Response, status
|
389 |
+
from fastapi.middleware.cors import CORSMiddleware
|
390 |
+
import uvicorn
|
391 |
+
except ImportError as e:
|
392 |
+
logger.error(f"Failed to import required packages: {e}")
|
393 |
+
logger.error("Please ensure fastapi and uvicorn are installed.")
|
394 |
+
logger.error("Run: pip install fastapi uvicorn")
|
395 |
+
sys.exit(1)
|
396 |
+
|
397 |
+
# Initialize FastAPI app
|
398 |
+
app = FastAPI(title="Atlas Intelligence Unified API")
|
399 |
+
|
400 |
+
# Add CORS middleware
|
401 |
+
if config["api"]["enable_cors"]:
|
402 |
+
app.add_middleware(
|
403 |
+
CORSMiddleware,
|
404 |
+
allow_origins=["*"],
|
405 |
+
allow_credentials=True,
|
406 |
+
allow_methods=["*"],
|
407 |
+
allow_headers=["*"],
|
408 |
+
)
|
409 |
+
|
410 |
+
@app.get("/")
|
411 |
+
async def root():
|
412 |
+
"""Root endpoint returning welcome message"""
|
413 |
+
return {
|
414 |
+
"message": "Welcome to Atlas Intelligence Unified API",
|
415 |
+
"version": config["version"],
|
416 |
+
"status": "operational"
|
417 |
+
}
|
418 |
+
|
419 |
+
@app.get("/health")
|
420 |
+
async def health():
|
421 |
+
"""Health check endpoint"""
|
422 |
+
return {"status": "healthy"}
|
423 |
+
|
424 |
+
@app.get("/status")
|
425 |
+
async def status():
|
426 |
+
"""Status endpoint with detailed component information"""
|
427 |
+
components_status = {}
|
428 |
+
|
429 |
+
# Check OpenManus status
|
430 |
+
if config["integrations"]["enable_openmanus"]:
|
431 |
+
components_status["openmanus"] = check_openmanus_status()
|
432 |
+
else:
|
433 |
+
components_status["openmanus"] = "disabled"
|
434 |
+
|
435 |
+
# Check QuantumVision status
|
436 |
+
if config["integrations"]["enable_quantum_vision"]:
|
437 |
+
components_status["quantum_vision"] = check_quantum_vision_status()
|
438 |
+
else:
|
439 |
+
components_status["quantum_vision"] = "disabled"
|
440 |
+
|
441 |
+
# Check Casibase status
|
442 |
+
if config["integrations"]["enable_casibase"]:
|
443 |
+
components_status["casibase"] = check_casibase_status()
|
444 |
+
else:
|
445 |
+
components_status["casibase"] = "disabled"
|
446 |
+
|
447 |
+
# Check Cypher status
|
448 |
+
if config["integrations"]["enable_cypher"]:
|
449 |
+
components_status["cypher"] = check_cypher_status()
|
450 |
+
else:
|
451 |
+
components_status["cypher"] = "disabled"
|
452 |
+
|
453 |
+
# Check AI Conversation Companion status
|
454 |
+
if config["integrations"]["enable_ai_companion"]:
|
455 |
+
components_status["ai_companion"] = check_ai_companion_status()
|
456 |
+
else:
|
457 |
+
components_status["ai_companion"] = "disabled"
|
458 |
+
|
459 |
+
return {
|
460 |
+
"status": "operational",
|
461 |
+
"components": components_status,
|
462 |
+
"version": config["version"]
|
463 |
+
}
|
464 |
+
|
465 |
+
def check_openmanus_status() -> str:
|
466 |
+
"""Check if OpenManus is available and working"""
|
467 |
+
try:
|
468 |
+
if not os.path.exists(config["paths"]["openmanus"]):
|
469 |
+
return "unavailable (path not found)"
|
470 |
+
|
471 |
+
# Try importing a key module
|
472 |
+
sys.path.append(config["paths"]["openmanus"])
|
473 |
+
import app.llm as openmanus_llm
|
474 |
+
return "available"
|
475 |
+
except Exception as e:
|
476 |
+
logger.error(f"Failed to check OpenManus: {e}")
|
477 |
+
return f"error: {str(e)}"
|
478 |
+
|
479 |
+
def check_quantum_vision_status() -> str:
|
480 |
+
"""Check if QuantumVision is available and working"""
|
481 |
+
try:
|
482 |
+
if not os.path.exists(config["paths"]["quantum_vision"]):
|
483 |
+
return "unavailable (path not found)"
|
484 |
+
|
485 |
+
# Try importing a key module
|
486 |
+
sys.path.append(config["paths"]["quantum_vision"])
|
487 |
+
try:
|
488 |
+
import nlp_processor
|
489 |
+
return "available"
|
490 |
+
except ImportError:
|
491 |
+
# Create simple __init__.py in directory if it doesn't exist
|
492 |
+
if not os.path.exists(f"{config['paths']['quantum_vision']}/__init__.py"):
|
493 |
+
with open(f"{config['paths']['quantum_vision']}/__init__.py", "w") as f:
|
494 |
+
f.write("# Generated by AtlasUnified installer\n")
|
495 |
+
import nlp_processor
|
496 |
+
return "available (fixed module imports)"
|
497 |
+
return "error: could not import nlp_processor"
|
498 |
+
except Exception as e:
|
499 |
+
logger.error(f"Failed to check QuantumVision: {e}")
|
500 |
+
return f"error: {str(e)}"
|
501 |
+
|
502 |
+
def check_casibase_status() -> str:
|
503 |
+
"""Check if Casibase is available"""
|
504 |
+
if os.path.exists(config["paths"]["casibase"]):
|
505 |
+
return "path exists (service status unknown)"
|
506 |
+
return "unavailable (path not found)"
|
507 |
+
|
508 |
+
def check_cypher_status() -> str:
|
509 |
+
"""Check if Cypher is available"""
|
510 |
+
if os.path.exists(config["paths"]["cypher"]):
|
511 |
+
return "path exists (service status unknown)"
|
512 |
+
return "unavailable (path not found)"
|
513 |
+
|
514 |
+
def check_ai_companion_status() -> str:
|
515 |
+
"""Check if AI Conversation Companion is available"""
|
516 |
+
if os.path.exists(config["paths"]["ai_companion"]):
|
517 |
+
return "path exists (service status unknown)"
|
518 |
+
return "unavailable (path not found)"
|
519 |
+
|
520 |
+
@app.post("/query")
|
521 |
+
async def query(request: Request, response: Response):
|
522 |
+
"""Process a query through available components"""
|
523 |
+
try:
|
524 |
+
data = await request.json()
|
525 |
+
except Exception as e:
|
526 |
+
response.status_code = status.HTTP_400_BAD_REQUEST
|
527 |
+
return {"error": f"Invalid JSON: {str(e)}"}
|
528 |
+
|
529 |
+
query_text = data.get("query", "")
|
530 |
+
if not query_text:
|
531 |
+
response.status_code = status.HTTP_400_BAD_REQUEST
|
532 |
+
return {"error": "Query text is required"}
|
533 |
+
|
534 |
+
logger.info(f"Received query: {query_text}")
|
535 |
+
results = {}
|
536 |
+
|
537 |
+
# Process with OpenManus if enabled
|
538 |
+
if config["integrations"]["enable_openmanus"]:
|
539 |
+
try:
|
540 |
+
sys.path.append(config["paths"]["openmanus"])
|
541 |
+
try:
|
542 |
+
from app.agent import Agent
|
543 |
+
agent = Agent()
|
544 |
+
openmanus_result = agent.run(query_text)
|
545 |
+
results["openmanus"] = openmanus_result
|
546 |
+
except Exception as inner_e:
|
547 |
+
# Fallback to simpler interaction if full agent fails
|
548 |
+
logger.warning(f"Failed to use OpenManus Agent, trying LLM directly: {inner_e}")
|
549 |
+
from app import llm
|
550 |
+
openmanus_result = llm.generate_text(prompt=query_text)
|
551 |
+
results["openmanus"] = {"fallback_response": openmanus_result}
|
552 |
+
except Exception as e:
|
553 |
+
logger.error(f"OpenManus processing error: {e}")
|
554 |
+
logger.error(traceback.format_exc())
|
555 |
+
results["openmanus"] = {"error": str(e)}
|
556 |
+
|
557 |
+
# Process with Quantum Vision if enabled
|
558 |
+
if config["integrations"]["enable_quantum_vision"]:
|
559 |
+
try:
|
560 |
+
sys.path.append(config["paths"]["quantum_vision"])
|
561 |
+
|
562 |
+
# Try to import and use nlp_processor
|
563 |
+
try:
|
564 |
+
import nlp_processor
|
565 |
+
processor = nlp_processor.NLPProcessor()
|
566 |
+
quantum_result = processor.process_text(query_text)
|
567 |
+
results["quantum_vision"] = quantum_result
|
568 |
+
except AttributeError:
|
569 |
+
# Fallback to direct OpenAI if NLPProcessor class not found
|
570 |
+
logger.warning("NLPProcessor not found, falling back to OpenAI integration")
|
571 |
+
import openai_integration
|
572 |
+
api_result = openai_integration.process_with_openai(query_text)
|
573 |
+
results["quantum_vision"] = {"fallback_response": api_result}
|
574 |
+
except Exception as e:
|
575 |
+
logger.error(f"Quantum Vision processing error: {e}")
|
576 |
+
logger.error(traceback.format_exc())
|
577 |
+
results["quantum_vision"] = {"error": str(e)}
|
578 |
+
|
579 |
+
return results
|
580 |
+
|
581 |
+
def main():
|
582 |
+
"""Run the unified bridge server"""
|
583 |
+
logger.info("Starting Atlas Intelligence Unified Bridge")
|
584 |
+
try:
|
585 |
+
uvicorn.run(
|
586 |
+
"unified_bridge:app",
|
587 |
+
host=config["api"]["host"],
|
588 |
+
port=config["api"]["port"],
|
589 |
+
reload=config["api"]["debug"]
|
590 |
+
)
|
591 |
+
except Exception as e:
|
592 |
+
logger.error(f"Failed to start server: {e}")
|
593 |
+
logger.error(traceback.format_exc())
|
594 |
+
sys.exit(1)
|
595 |
+
|
596 |
+
if __name__ == "__main__":
|
597 |
+
main()
|
598 |
+
EOF
|
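The `/query` handler above applies the same layered-fallback pattern to each component: try the rich integration, drop to a simpler one on failure, and record the error instead of raising. A minimal sketch of that pattern, with hypothetical stand-in callables in place of the real OpenManus imports:

```python
import logging
import traceback
from typing import Any, Callable, Dict

logger = logging.getLogger("fallback_demo")

def run_with_fallback(primary: Callable[[str], Any],
                      fallback: Callable[[str], Any],
                      query_text: str) -> Dict[str, Any]:
    """Try the primary handler; on failure try the fallback; never raise."""
    try:
        try:
            return {"response": primary(query_text)}
        except Exception as inner_e:
            logger.warning(f"Primary handler failed, using fallback: {inner_e}")
            return {"fallback_response": fallback(query_text)}
    except Exception as e:
        # Both handlers failed: report the error as data, not as an exception
        logger.error(f"Processing error: {e}")
        logger.error(traceback.format_exc())
        return {"error": str(e)}

# Stand-ins for Agent().run and llm.generate_text (hypothetical):
def broken_agent(q): raise RuntimeError("agent unavailable")
def simple_llm(q): return f"echo: {q}"

result = run_with_fallback(broken_agent, simple_llm, "hello")
# result == {"fallback_response": "echo: hello"}
```

The caller always gets a dict, so one failing component never takes down the whole `/query` response.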
# Make the bridge script executable
chmod +x "$UNIFIED_DIR/unified_bridge.py"

# Create a launch script with improved error handling
log_info "Creating launch script..."
# The delimiter is deliberately unquoted so $UNIFIED_DIR is baked in at install
# time; everything that must stay literal in the generated script ($1,
# $PYTHONPATH, the color variables) is escaped with a backslash.
cat > "$UNIFIED_DIR/start_unified.sh" << EOF
#!/bin/bash

# Color codes for better output
GREEN='\033[0;32m'
BLUE='\033[0;34m'
YELLOW='\033[0;33m'
RED='\033[0;31m'
NC='\033[0m' # No Color

# Check if the virtual environment exists
if [ ! -f "$UNIFIED_DIR/venv/bin/activate" ]; then
    echo -e "\${RED}Error: Virtual environment not found.\${NC}"
    echo -e "\${BLUE}Running installation repair...\${NC}"

    # Attempt to fix the virtual environment
    cd "$UNIFIED_DIR"
    python3 -m venv venv

    if [ ! -f "$UNIFIED_DIR/venv/bin/activate" ]; then
        echo -e "\${RED}Failed to create virtual environment. Please run the installation script again.\${NC}"
        exit 1
    fi
fi

# Activate the virtual environment
source "$UNIFIED_DIR/venv/bin/activate" || {
    echo -e "\${RED}Failed to activate virtual environment.\${NC}"
    exit 1
}

# Check if required packages are installed
echo -e "\${BLUE}Checking required packages...\${NC}"
python -c "import fastapi, uvicorn" &>/dev/null || {
    echo -e "\${YELLOW}Some required packages are missing. Installing...\${NC}"
    pip install fastapi uvicorn pydantic
}

# Change to the unified directory
cd "$UNIFIED_DIR" || {
    echo -e "\${RED}Failed to change to the unified directory.\${NC}"
    exit 1
}

# Run the unified bridge
echo -e "\${GREEN}Starting Atlas Intelligence Unified API...\${NC}"

if [[ \$1 == "--debug" ]]; then
    echo -e "\${YELLOW}Running in debug mode...\${NC}"
    PYTHONPATH="\$PYTHONPATH:$UNIFIED_DIR" python unified_bridge.py
else
    # Run in "production" mode with error logging
    PYTHONPATH="\$PYTHONPATH:$UNIFIED_DIR" python unified_bridge.py 2> "$UNIFIED_DIR/logs/error.log"
fi
EOF
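The launch script above relies on heredoc expansion rules: with an unquoted delimiter, `$VAR` is substituted while the file is being generated, and `\$VAR` survives as a literal `$VAR` in the output. A small standalone demonstration of the difference (the file path and variable name here are illustrative only):

```shell
#!/bin/sh
# Variable expanded at generation time:
INSTALL_DIR="/opt/atlas"

# Unquoted delimiter: $INSTALL_DIR is substituted now, \$1 stays literal.
cat > /tmp/heredoc_demo.sh << EOF
echo "installed in $INSTALL_DIR"
echo "first argument: \$1"
EOF

cat /tmp/heredoc_demo.sh
# → echo "installed in /opt/atlas"
# → echo "first argument: $1"
```

Quoting the delimiter (`<< 'EOF'`) disables all substitution, which is why the Python heredocs in this installer can use a quoted delimiter while the launch script cannot.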
chmod +x "$UNIFIED_DIR/start_unified.sh"

# Create a simple diagnostic tool for troubleshooting
log_info "Creating diagnostic tool..."
# Delimiter quoted so the Python source is written verbatim.
cat > "$UNIFIED_DIR/diagnose.py" << 'EOF'
#!/usr/bin/env python3
"""
Atlas Intelligence Diagnostic Tool
Checks if all components are properly configured and working.
"""

import os
import sys
import yaml
import importlib
import platform
import subprocess
from pathlib import Path

def print_header(text):
    print(f"\n{'=' * 50}")
    print(f" {text}")
    print(f"{'=' * 50}")

def print_success(text):
    print(f"[✓] {text}")

def print_error(text):
    print(f"[✗] {text}")

def print_warning(text):
    print(f"[!] {text}")

def check_python():
    print_header("Python Environment")
    print(f"Python version: {platform.python_version()}")
    print(f"Python executable: {sys.executable}")

    # Check virtual environment
    in_venv = sys.prefix != sys.base_prefix
    if in_venv:
        print_success(f"Running in virtual environment: {sys.prefix}")
    else:
        print_error("Not running in a virtual environment")

def check_core_packages():
    print_header("Core Python Packages")
    packages = [
        "fastapi", "uvicorn", "pydantic", "yaml", "numpy",
        "openai", "requests", "flask", "sqlalchemy"
    ]

    for package in packages:
        try:
            pkg = importlib.import_module(package)
            version = getattr(pkg, "__version__", "unknown version")
            print_success(f"{package}: {version}")
        except ImportError:
            print_error(f"{package}: Not installed")

def check_components(config):
    print_header("Component Status")

    # Check OpenManus
    if config["integrations"]["enable_openmanus"]:
        path = config["paths"]["openmanus"]
        if os.path.exists(path):
            print_success(f"OpenManus path exists: {path}")
            # Try to import a key module
            try:
                sys.path.append(path)
                import app.llm
                print_success("OpenManus modules can be imported")
            except ImportError as e:
                print_error(f"Cannot import OpenManus modules: {e}")
        else:
            print_error(f"OpenManus path does not exist: {path}")

    # Check QuantumVision
    if config["integrations"]["enable_quantum_vision"]:
        path = config["paths"]["quantum_vision"]
        if os.path.exists(path):
            print_success(f"QuantumVision path exists: {path}")
            # Try to import a key module
            try:
                sys.path.append(path)
                import nlp_processor
                print_success("QuantumVision modules can be imported")
            except ImportError as e:
                print_error(f"Cannot import QuantumVision modules: {e}")
        else:
            print_error(f"QuantumVision path does not exist: {path}")

    # Check Casibase
    if config["integrations"]["enable_casibase"]:
        path = config["paths"]["casibase"]
        if os.path.exists(path):
            print_success(f"Casibase path exists: {path}")
        else:
            print_error(f"Casibase path does not exist: {path}")

    # Check other components
    for component in ["cypher", "ai_companion"]:
        if config["integrations"][f"enable_{component}"]:
            path = config["paths"][component]
            if os.path.exists(path):
                print_success(f"{component} path exists: {path}")
            else:
                print_error(f"{component} path does not exist: {path}")

def check_connectivity():
    print_header("API Connectivity")
    host = "localhost"
    port = 8080

    # Check if something is already running on the port
    try:
        import socket
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(1)
        result = s.connect_ex((host, port))
        if result == 0:
            print_success(f"Port {port} is open (API may already be running)")
        else:
            print_warning(f"Port {port} is not in use (API is not running)")
        s.close()
    except Exception as e:
        print_error(f"Failed to check port: {e}")

def main():
    print_header("Atlas Intelligence Diagnostic Tool")
    print(f"Running diagnostics on: {os.path.abspath('.')}")

    # Check Python environment
    check_python()

    # Check core packages
    check_core_packages()

    # Load configuration
    config_path = Path("config/unified_config.yaml")
    if config_path.exists():
        try:
            with open(config_path, "r") as f:
                config = yaml.safe_load(f)
            print_success("Configuration loaded successfully")

            # Check components
            check_components(config)
        except Exception as e:
            print_error(f"Failed to load configuration: {e}")
    else:
        print_error(f"Configuration file not found: {config_path}")

    # Check connectivity
    check_connectivity()

    print_header("Diagnostic Complete")
    print("To start the API server, run: ./start_unified.sh")
    print("For detailed logs, check the logs directory")

if __name__ == "__main__":
    main()
EOF
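The package check in `diagnose.py` leans on `importlib.import_module` plus a `getattr` fallback, since not every package exposes `__version__`. The probe can be exercised standalone against stdlib modules (the module names below are just examples):

```python
import importlib

def probe(package: str) -> str:
    """Return 'name: version' if importable, else a 'Not installed' marker."""
    try:
        pkg = importlib.import_module(package)
        # Fall back gracefully for modules without a __version__ attribute
        version = getattr(pkg, "__version__", "unknown version")
        return f"{package}: {version}"
    except ImportError:
        return f"{package}: Not installed"

print(probe("math"))                 # stdlib, no __version__ → math: unknown version
print(probe("no_such_package_xyz"))  # → no_such_package_xyz: Not installed
```

Catching only `ImportError` keeps the probe from masking genuine bugs inside a package's import-time code.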
chmod +x "$UNIFIED_DIR/diagnose.py"

# Create a repair script for common issues
log_info "Creating repair script..."
# Delimiter quoted: the color variables below belong to the generated script
# and must not be expanded while it is being written.
cat > "$UNIFIED_DIR/repair.sh" << 'EOF'
#!/bin/bash

# Color codes for better output
GREEN='\033[0;32m'
BLUE='\033[0;34m'
YELLOW='\033[0;33m'
RED='\033[0;31m'
NC='\033[0m' # No Color

echo -e "${BLUE}==============================================${NC}"
echo -e "${BLUE}       Atlas Intelligence Repair Tool         ${NC}"
echo -e "${BLUE}==============================================${NC}"

# Check that we are running in the correct directory
if [ ! -f "./config/unified_config.yaml" ]; then
    echo -e "${RED}Error: Run this script from the AtlasUnified directory.${NC}"
    exit 1
fi

# Repair the virtual environment
echo -e "${BLUE}Repairing virtual environment...${NC}"
rm -rf ./venv
python3 -m venv venv
source ./venv/bin/activate

# Install core dependencies
echo -e "${BLUE}Installing core dependencies...${NC}"
pip install --upgrade pip wheel setuptools
pip install fastapi uvicorn pydantic pyyaml numpy
pip install openai requests flask sqlalchemy

# Fix permissions
echo -e "${BLUE}Fixing permissions...${NC}"
chmod +x ./unified_bridge.py
chmod +x ./start_unified.sh
chmod +x ./diagnose.py

# Create any missing directories
echo -e "${BLUE}Creating missing directories...${NC}"
mkdir -p ./logs
mkdir -p ./data

# Run diagnostics
echo -e "${BLUE}Running diagnostics...${NC}"
python ./diagnose.py

echo -e "${GREEN}Repair complete. You can now run ./start_unified.sh${NC}"
EOF

chmod +x "$UNIFIED_DIR/repair.sh"

# Make sure we're back in the original directory
cd "$QUANTUM_VISION_DIR"

log_success "Installation complete!"
log_info "=============================================="
log_info "To start the unified system, run:"
log_success "cd $UNIFIED_DIR && ./start_unified.sh"
log_info "=============================================="
log_info "For diagnostics and troubleshooting:"
log_success "cd $UNIFIED_DIR && python diagnose.py"
log_info "=============================================="
log_info "To repair the installation if needed:"
log_success "cd $UNIFIED_DIR && ./repair.sh"
log_info "=============================================="
log_info "The unified API will be available at:"
log_success "http://localhost:8080"
log_info "=============================================="
log_info "Installation log saved to: $LOG_FILE"
log_info "=============================================="
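Once `start_unified.sh` is running, `GET /status` reports every component through the enable-flag dispatch used in `unified_bridge.py`: disabled integrations short-circuit to `"disabled"`, enabled ones call their checker. That dispatch reduces to a small table-driven loop, sketched here with stub checkers in place of the real `check_*_status` functions:

```python
from typing import Callable, Dict

def build_status(integrations: Dict[str, bool],
                 checkers: Dict[str, Callable[[], str]],
                 version: str) -> dict:
    """Mirror of the /status aggregation: disabled flags skip the checker."""
    components = {}
    for name, checker in checkers.items():
        if integrations.get(f"enable_{name}", False):
            components[name] = checker()
        else:
            components[name] = "disabled"
    return {"status": "operational", "components": components, "version": version}

# Stub checkers standing in for check_openmanus_status() etc.
checkers = {
    "openmanus": lambda: "available",
    "casibase": lambda: "unavailable (path not found)",
}
flags = {"enable_openmanus": True, "enable_casibase": False}

print(build_status(flags, checkers, "1.0"))
# → {'status': 'operational', 'components': {'openmanus': 'available',
#    'casibase': 'disabled'}, 'version': '1.0'}
```

A loop like this would also avoid the repeated if/else blocks in the handler, at the cost of keeping the checker table in sync with the config keys.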
install_atlas_unified.sh
ADDED
@@ -0,0 +1,329 @@
#!/bin/bash

# Atlas Unified Platform Installation Script
# This script installs all components needed for the Atlas Intelligence Platform

# Terminal colors
CYAN='\033[0;36m'
GREEN='\033[0;32m'
YELLOW='\033[0;33m'
RED='\033[0;31m'
RESET='\033[0m'

# Utility functions
log() {
    echo -e "${CYAN}[INFO]${RESET} $1"
}

log_success() {
    echo -e "${GREEN}[SUCCESS]${RESET} $1"
}

log_warning() {
    echo -e "${YELLOW}[WARNING]${RESET} $1"
}

log_error() {
    echo -e "${RED}[ERROR]${RESET} $1"
}

check_command() {
    if ! command -v "$1" &> /dev/null; then
        log_error "$1 could not be found. Please install it first."
        return 1
    fi
    return 0
}

# Base directory
BASE_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# Modified to use the actual directory paths seen in the workspace structure
PARENT_DIR_OM="/Users/lattm"
PARENT_DIR_OTHERS="/Users/lattm"
PARENT_DIR_CLOUD="/Users/lattm/Library/Mobile Documents/com~apple~CloudDocs/Atlas Business"

# Check required tools
check_command "python3" || exit 1
check_command "pip3" || exit 1
check_command "node" || exit 1
check_command "npm" || exit 1

# Create logs and pids directories
mkdir -p "$BASE_DIR/logs"
mkdir -p "$BASE_DIR/pids"

# Function to install OpenManus
install_openmanus() {
    log "Installing OpenManus..."
    OPENMANUS_DIR="$PARENT_DIR_OM/OpenManus"

    # Check if the directory exists
    if [ ! -d "$OPENMANUS_DIR" ]; then
        log_error "OpenManus directory not found at: $OPENMANUS_DIR"
        return 1
    fi

    cd "$OPENMANUS_DIR" || exit

    # Create and activate a virtual environment if it doesn't exist
    if [ ! -d "openmanus_env" ]; then
        python3 -m venv openmanus_env
    fi
    source openmanus_env/bin/activate

    # Install dependencies
    pip install -r requirements.txt
    pip install -e .

    # Additional dependencies for the MCP server
    pip install fastapi uvicorn pydantic typing-extensions

    log_success "OpenManus installation completed"

    # Deactivate the virtual environment
    deactivate
}

# Function to install Casibase
install_casibase() {
    log "Installing Casibase..."
    CASIBASE_DIR="$PARENT_DIR_OTHERS/casibase"

    # Check if the directory exists
    if [ ! -d "$CASIBASE_DIR" ]; then
        log_error "Casibase directory not found at: $CASIBASE_DIR"
        return 1
    fi

    cd "$CASIBASE_DIR" || exit

    # Build Casibase if the server binary doesn't exist
    if [ ! -f "./server" ]; then
        ./build.sh
    fi

    log_success "Casibase installation completed"
}

# Function to install Cypher
install_cypher() {
    log "Installing Cypher..."
    CYPHER_DIR="$PARENT_DIR_OTHERS/Cypher"

    # Check if the directory exists
    if [ ! -d "$CYPHER_DIR" ]; then
        log_error "Cypher directory not found at: $CYPHER_DIR"
        return 1
    fi

    cd "$CYPHER_DIR" || exit

    # Install Python dependencies
    pip3 install -r requirements.txt

    log_success "Cypher installation completed"
}

# Function to install QuantumVision
install_quantumvision() {
    log "Installing QuantumVision..."
    QUANTUM_DIR="$BASE_DIR"

    cd "$QUANTUM_DIR" || exit

    # Install Python dependencies
    pip3 install -r requirements.txt

    # Install additional dependencies
    pip3 install fastapi uvicorn pydantic typing-extensions openai langchain transformers torch numpy pandas scikit-learn matplotlib seaborn

    log_success "QuantumVision installation completed"
}

# Function to install AIConversationCompanion
install_conversation_companion() {
    log "Installing AIConversationCompanion..."
    CONVERSATION_DIR="$PARENT_DIR_CLOUD/AIConversationCompanion"

    # Make sure the directory exists
    if [ ! -d "$CONVERSATION_DIR" ]; then
        log_error "AIConversationCompanion directory not found at: $CONVERSATION_DIR"
        return 1
    fi

    cd "$CONVERSATION_DIR" || exit

    # Install server dependencies
    cd ./server || exit
    npm install

    # Install client dependencies
    cd ../client || exit
    npm install

    cd "$CONVERSATION_DIR" || exit

    log_success "AIConversationCompanion installation completed"
    return 0
}

# Function to install the Atlas Unified Bridge
install_unified_bridge() {
    log "Installing Atlas Unified Bridge..."
    BRIDGE_DIR="$BASE_DIR/atlas_bridge"

    # Create the bridge directory if it does not exist
    mkdir -p "$BRIDGE_DIR"

    cd "$BRIDGE_DIR" || exit

    # Create the bridge server file
    cat > bridge_server.py << 'EOL'
#!/usr/bin/env python3
"""
Atlas Unified Bridge Server
This server connects all Atlas platform components together
"""

import os
import sys
import json
import time
import logging
import requests
from fastapi import FastAPI, HTTPException, Request
from fastapi.middleware.cors import CORSMiddleware
from fastapi.responses import JSONResponse
import uvicorn
from pydantic import BaseModel
from typing import Dict, List, Any, Optional

# Setup logging
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
    handlers=[
        logging.StreamHandler(),
        logging.FileHandler("bridge_server.log")
    ]
)
logger = logging.getLogger("atlas_bridge")

# Component endpoints
OPENMANUS_ENDPOINT = "http://localhost:50505"
CASIBASE_ENDPOINT = "http://localhost:7777"
CYPHER_ENDPOINT = "http://localhost:5000"
QUANTUMVISION_ENDPOINT = "http://localhost:8000"
CONVERSATION_ENDPOINT = "http://localhost:8001"

# Initialize FastAPI app
app = FastAPI(title="Atlas Unified Bridge")

# Add CORS middleware
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],  # Adjust in production
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

# Health check endpoint
@app.get("/health")
async def health_check():
    """Health check endpoint to verify that the bridge server is running"""
    status = {
        "bridge": "operational",
        "components": {
            "openmanus": check_component_health(OPENMANUS_ENDPOINT),
            "casibase": check_component_health(CASIBASE_ENDPOINT),
            "cypher": check_component_health(CYPHER_ENDPOINT),
            "quantumvision": check_component_health(QUANTUMVISION_ENDPOINT),
            "conversation": check_component_health(CONVERSATION_ENDPOINT)
        },
        "timestamp": time.time()
    }
    return status

def check_component_health(endpoint: str) -> str:
    """Check whether a component is operational"""
    try:
        response = requests.get(f"{endpoint}/health", timeout=3)
        if response.status_code == 200:
            return "operational"
    except requests.RequestException:
        pass
    return "unavailable"

# Main entry point
if __name__ == "__main__":
    logger.info("Starting Atlas Unified Bridge Server")
    uvicorn.run("bridge_server:app", host="0.0.0.0", port=8080, reload=True)
EOL
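`check_component_health` above swallows connection failures and reports `"unavailable"` rather than raising. That behavior can be verified without any services running by stubbing `requests.get` with `unittest.mock` (the endpoint URLs here are arbitrary):

```python
from unittest import mock

import requests

def check_component_health(endpoint: str) -> str:
    """Same logic as the bridge server's checker."""
    try:
        response = requests.get(f"{endpoint}/health", timeout=3)
        if response.status_code == 200:
            return "operational"
    except requests.RequestException:
        pass
    return "unavailable"

# A dead endpoint raises ConnectionError -> "unavailable"
with mock.patch("requests.get", side_effect=requests.ConnectionError):
    assert check_component_health("http://localhost:9999") == "unavailable"

# A healthy endpoint returns 200 -> "operational"
with mock.patch("requests.get", return_value=mock.Mock(status_code=200)):
    assert check_component_health("http://localhost:9999") == "operational"
```

Narrowing the original bare `except:` to `requests.RequestException` keeps unrelated errors (e.g. a typo in the function) from being silently reported as an unavailable component.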
    # Create requirements.txt for the bridge
    cat > requirements.txt << 'EOL'
fastapi>=0.103.1
uvicorn>=0.23.2
requests>=2.31.0
pydantic>=2.3.0
python-multipart>=0.0.6
EOL

    # Install bridge dependencies
    pip3 install -r requirements.txt

    log_success "Atlas Unified Bridge installation completed"
}

# Create a requirements.txt file for QuantumVision if it doesn't exist
create_quantum_requirements() {
    QUANTUM_REQ="$BASE_DIR/requirements.txt"

    if [ ! -f "$QUANTUM_REQ" ]; then
        log "Creating QuantumVision requirements file..."
        cat > "$QUANTUM_REQ" << 'EOL'
fastapi>=0.103.1
uvicorn>=0.23.2
pydantic>=2.3.0
numpy>=1.24.0
pandas>=2.0.0
scikit-learn>=1.3.0
matplotlib>=3.7.0
seaborn>=0.12.0
torch>=2.0.0
transformers>=4.35.0
openai>=1.0.0
langchain>=0.0.267
requests>=2.31.0
python-dotenv>=1.0.0
pytest>=7.4.0
EOL
        log_success "Requirements file created"
    fi
}

# Main installation procedure
log "Starting Atlas Unified Platform installation..."

# Create the requirements file for QuantumVision
create_quantum_requirements

# Install all components
install_openmanus
install_casibase
install_cypher
install_quantumvision
install_conversation_companion
install_unified_bridge

# Set executable permissions
chmod +x "$BASE_DIR/start_atlas.sh"
chmod +x "$BASE_DIR/start_atlas_unified.sh"

log_success "Atlas Unified Platform installation completed"
log "You can now start the platform using: $BASE_DIR/start_atlas_unified.sh"

# Log the installation
date > "$BASE_DIR/install_log.txt"
echo "Atlas Unified Platform installed successfully" >> "$BASE_DIR/install_log.txt"
install_log.txt
ADDED
@@ -0,0 +1,2 @@
1 + Thu Apr 17 12:22:40 CDT 2025
2 + Atlas Unified Platform installed successfully
layout.html
ADDED
@@ -0,0 +1,135 @@
1 + <!DOCTYPE html>
2 + <html lang="en" data-bs-theme="dark">
3 + <head>
4 +     <meta charset="UTF-8">
5 +     <meta name="viewport" content="width=device-width, initial-scale=1.0">
6 +     <title>Quantum NLP Framework</title>
7 +     <!-- Replit-themed Bootstrap CSS -->
8 +     <link rel="stylesheet" href="https://cdn.replit.com/agent/bootstrap-agent-dark-theme.min.css">
9 +     <!-- Custom CSS -->
10 +     <link rel="stylesheet" href="{{ url_for('static', filename='css/style.css') }}">
11 +     <!-- Animation CSS -->
12 +     <link rel="stylesheet" href="{{ url_for('static', filename='css/animations.css') }}">
13 +     <!-- Contextual Hints CSS -->
14 +     <link rel="stylesheet" href="{{ url_for('static', filename='css/contextual_hints.css') }}">
15 +     <!-- Font Awesome for icons -->
16 +     <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.0.0/css/all.min.css">
17 + </head>
18 + <body class="{% if request.environ.get('OPENAI_API_KEY') %}has-openai-key{% endif %}">
19 +     <!-- Quantum Pyramid LED Tracer Background -->
20 +     <div class="quantum-tracer">
21 +         <img src="{{ url_for('static', filename='images/quantum-bg.svg') }}" alt="" style="position: fixed; width: 100%; height: 100%; opacity: 0.5; pointer-events: none;">
22 +     </div>
23 +     <nav class="navbar navbar-expand-lg glass-card">
24 +         <div class="container">
25 +             <a class="navbar-brand" href="/">
26 +                 <i class="fas fa-atom me-2 quantum-spin"></i>
27 +                 <span class="fw-bold">Quantum</span> NLP Framework
28 +             </a>
29 +             <button class="navbar-toggler" type="button" data-bs-toggle="collapse" data-bs-target="#navbarNav">
30 +                 <span class="navbar-toggler-icon"></span>
31 +             </button>
32 +             <div class="collapse navbar-collapse" id="navbarNav">
33 +                 <ul class="navbar-nav ms-auto">
34 +                     <li class="nav-item">
35 +                         <a class="nav-link glow-hover" href="/"><i class="fas fa-home me-1"></i> Home</a>
36 +                     </li>
37 +                     <li class="nav-item">
38 +                         <a class="nav-link glow-hover" href="/zap-integrations">
39 +                             <i class="fas fa-bolt me-1"></i> ZAP Integrations
40 +                         </a>
41 +                     </li>
42 +                     <li class="nav-item">
43 +                         <a class="nav-link glow-hover" href="/automation-workflow">
44 +                             <i class="fas fa-cogs me-1"></i> Automation Workflow
45 +                         </a>
46 +                     </li>
47 +                     <li class="nav-item">
48 +                         <a class="nav-link glow-hover" href="/settings">
49 +                             <i class="fas fa-sliders-h me-1"></i> Settings
50 +                         </a>
51 +                     </li>
52 +                     <li class="nav-item">
53 +                         <a class="nav-link glow-hover" href="https://github.com/yourusername/quantum-nlp-framework" target="_blank">
54 +                             <i class="fab fa-github me-1"></i> GitHub
55 +                         </a>
56 +                     </li>
57 +                 </ul>
58 +             </div>
59 +         </div>
60 +     </nav>
61 +
62 +     <main class="container my-4">
63 +         <!-- Flash messages -->
64 +         {% with messages = get_flashed_messages(with_categories=true) %}
65 +             {% if messages %}
66 +                 {% for category, message in messages %}
67 +                     <div class="alert alert-{{ category }} alert-dismissible fade show" role="alert">
68 +                         {{ message }}
69 +                         <button type="button" class="btn-close" data-bs-dismiss="alert" aria-label="Close"></button>
70 +                     </div>
71 +                 {% endfor %}
72 +             {% endif %}
73 +         {% endwith %}
74 +
75 +         <!-- Main content -->
76 +         {% block content %}{% endblock %}
77 +     </main>
78 +
79 +     <footer class="glass-card py-4 mt-5">
80 +         <div class="container">
81 +             <div class="row">
82 +                 <div class="col-md-6">
83 +                     <h5><i class="fas fa-atom me-2 quantum-spin"></i> <span class="fw-bold">Quantum</span> NLP Framework</h5>
84 +                     <p>A multi-dimensional, layered thinking process inspired by quantum computing concepts.</p>
85 +                     <div class="vision-progress">
86 +                         <div class="vision-progress-bar"></div>
87 +                     </div>
88 +                 </div>
89 +                 <div class="col-md-3">
90 +                     <h5 class="quantum-glow">Resources</h5>
91 +                     <ul class="list-unstyled">
92 +                         <li><a href="https://spacy.io" target="_blank" class="text-light glow-hover">spaCy Documentation</a></li>
93 +                         <li><a href="https://platform.openai.com/docs" target="_blank" class="text-light glow-hover">OpenAI API</a></li>
94 +                         <li><a href="https://flask.palletsprojects.com" target="_blank" class="text-light glow-hover">Flask Docs</a></li>
95 +                     </ul>
96 +                 </div>
97 +                 <div class="col-md-3">
98 +                     <h5 class="quantum-glow">About</h5>
99 +                     <p>Open-source and free quantum-inspired NLP framework for Replit.</p>
100 +                 </div>
101 +             </div>
102 +             <div class="text-center mt-4">
103 +                 <p class="mb-0">&copy; 2025 Quantum NLP Framework. Open Source under MIT License.</p>
104 +             </div>
105 +         </div>
106 +     </footer>
107 +
108 +     <!-- Canvas for particle animations -->
109 +     <canvas id="quantum-particles" class="particle-canvas" style="display: none;"></canvas>
110 +
111 +     <!-- Quantum loading overlay -->
112 +     <div id="quantum-overlay">
113 +         <div class="quantum-loader">
114 +             <div class="quantum-spinner">
115 +                 <div class="q-orbit q-orbit-1"></div>
116 +                 <div class="q-orbit q-orbit-2"></div>
117 +                 <div class="q-orbit q-orbit-3"></div>
118 +                 <div class="q-core"></div>
119 +             </div>
120 +             <div class="quantum-message">Quantum Dimensional Processing</div>
121 +         </div>
122 +     </div>
123 +
124 +     <!-- Bootstrap JS Bundle with Popper -->
125 +     <script src="https://cdn.jsdelivr.net/npm/bootstrap@5.3.0/dist/js/bootstrap.bundle.min.js"></script>
126 +     <!-- Custom JS -->
127 +     <script src="{{ url_for('static', filename='js/main.js') }}"></script>
128 +     <!-- Animation JS -->
129 +     <script src="{{ url_for('static', filename='js/animations.js') }}"></script>
130 +     <!-- Contextual Hints JS -->
131 +     <script src="{{ url_for('static', filename='js/contextual_hints.js') }}"></script>
132 +     <!-- Page-specific scripts -->
133 +     {% block scripts %}{% endblock %}
134 + </body>
135 + </html>
main.js
ADDED
@@ -0,0 +1,269 @@
1 + // Main JavaScript file for the Quantum NLP Framework
2 +
3 + document.addEventListener('DOMContentLoaded', function() {
4 +     // Show loading state on form submission
5 +     const form = document.getElementById('process-form');
6 +     const analyzeBtn = document.getElementById('analyze-btn');
7 +
8 +     if (form) {
9 +         form.addEventListener('submit', function() {
10 +             if (analyzeBtn) {
11 +                 // Disable button and show loading state
12 +                 analyzeBtn.disabled = true;
13 +                 analyzeBtn.innerHTML = '<i class="fas fa-atom fa-spin me-2"></i> Processing...';
14 +
15 +                 // Trigger quantum transition animation if available
16 +                 if (typeof showQuantumTransition === 'function') {
17 +                     showQuantumTransition();
18 +                 }
19 +             }
20 +         });
21 +     }
22 +
23 +     // Add hover effects to analyze button if present
24 +     if (analyzeBtn) {
25 +         analyzeBtn.addEventListener('mouseover', function() {
26 +             // Add hover class for CSS effects
27 +             this.classList.add('quantum-btn-hover');
28 +
29 +             // Trigger particle effect if available
30 +             if (typeof triggerSmallParticleEffect === 'function') {
31 +                 triggerSmallParticleEffect(this);
32 +             }
33 +         });
34 +
35 +         analyzeBtn.addEventListener('mouseout', function() {
36 +             this.classList.remove('quantum-btn-hover');
37 +         });
38 +     }
39 +
40 +     // Initialize tooltips
41 +     const tooltipTriggerList = [].slice.call(document.querySelectorAll('[data-bs-toggle="tooltip"]'));
42 +     tooltipTriggerList.map(function(tooltipTriggerEl) {
43 +         return new bootstrap.Tooltip(tooltipTriggerEl);
44 +     });
45 +
46 +     // Initialize text-to-vision transitions
47 +     initTextToVisionTransitions();
48 +
49 +     // Check for results container and apply entrance animations
50 +     const resultsContainer = document.getElementById('results-container');
51 +     if (resultsContainer && resultsContainer.children.length > 0) {
52 +         applyResultsAnimations(resultsContainer);
53 +     }
54 +
55 +     // Highlight entities in text (if applicable)
56 +     highlightEntities();
57 +
58 +     // Check if OpenAI key is available for contextual hints
59 +     const hasOpenaiKey = document.body.classList.contains('has-openai-key');
60 +     if (!hasOpenaiKey && typeof window.contextualHintSystem !== 'undefined') {
61 +         setTimeout(() => {
62 +             const settingsLink = document.querySelector('a[href="/settings"]');
63 +             if (settingsLink) {
64 +                 window.contextualHintSystem.registerHint('openai-key-needed', {
65 +                     title: 'OpenAI API Key Needed',
66 +                     content: 'For enhanced AI-powered analysis, add your OpenAI API key in Settings. Without it, the system uses local fallback processing.',
67 +                     position: 'bottom',
68 +                     selector: 'a[href="/settings"]',
69 +                     icon: 'fas fa-key',
70 +                     important: true,
71 +                     maxShows: 2,
72 +                     buttonText: 'Go to Settings'
73 +                 });
74 +                 window.contextualHintSystem.considerShowingHint('openai-key-needed', settingsLink);
75 +             }
76 +         }, 2000);
77 +     }
78 + });
79 +
80 + /**
81 +  * Initialize text-to-vision transitions with staggered timing
82 +  */
83 + function initTextToVisionTransitions() {
84 +     const visionElements = document.querySelectorAll('.text-to-vision');
85 +     visionElements.forEach((elem, index) => {
86 +         // Add a staggered delay for smoother visual effect
87 +         const delay = 300 + (index * 150);
88 +         setTimeout(() => {
89 +             elem.classList.add('text-to-vision-active');
90 +
91 +             // Reset the animation after it completes to allow replaying
92 +             setTimeout(() => {
93 +                 elem.classList.remove('text-to-vision-active');
94 +             }, 1500);
95 +         }, delay);
96 +     });
97 + }
98 +
99 + /**
100 +  * Apply animations to results container
101 +  */
102 + function applyResultsAnimations(container) {
103 +     // Add entrance animation
104 +     container.style.opacity = '0';
105 +     container.style.transform = 'translateY(20px)';
106 +
107 +     setTimeout(() => {
108 +         container.style.transition = 'opacity 0.5s ease, transform 0.5s ease';
109 +         container.style.opacity = '1';
110 +         container.style.transform = 'translateY(0)';
111 +
112 +         // Apply the quantum-reveal class to each card with staggered timing
113 +         const cards = container.querySelectorAll('.quantum-card');
114 +         cards.forEach((card, index) => {
115 +             setTimeout(() => {
116 +                 card.classList.add('quantum-reveal');
117 +             }, 300 + (index * 150));
118 +         });
119 +     }, 200);
120 + }
121 +
122 + /**
123 +  * Highlight named entities in the displayed text
124 +  */
125 + function highlightEntities() {
126 +     // First try the data-entities method
127 +     const entityContainers = document.querySelectorAll('.highlight-entities');
128 +
129 +     entityContainers.forEach(function(element) {
130 +         let html = element.innerHTML;
131 +         const entities = JSON.parse(element.dataset.entities || '[]');
132 +
133 +         // Sort entities by start position in descending order
134 +         // to avoid offset issues when replacing text
135 +         entities.sort((a, b) => b.start - a.start);
136 +
137 +         // Replace each entity with highlighted version
138 +         entities.forEach(function(entity) {
139 +             const entityText = html.substring(entity.start, entity.end);
140 +             const replacement = `<span class="entity entity-${entity.label.toLowerCase()}"
141 +                 title="${entity.label}">${entityText}</span>`;
142 +
143 +             html = html.substring(0, entity.start) + replacement + html.substring(entity.end);
144 +         });
145 +
146 +         element.innerHTML = html;
147 +     });
148 +
149 +     // Add special effects to entity items in lists
150 +     const entityItems = document.querySelectorAll('.list-group-item:has(.badge.bg-success)');
151 +
152 +     entityItems.forEach(item => {
153 +         // Extract entity text from the item
154 +         const entityText = item.textContent.split(/\s+/)[0].trim();
155 +         const entityType = item.querySelector('.badge')?.textContent || '';
156 +
157 +         // Add quantum hover effects
158 +         item.classList.add('quantum-entity-item');
159 +
160 +         // Handle hover effects
161 +         item.addEventListener('mouseover', function() {
162 +             this.style.backgroundColor = 'rgba(66, 133, 244, 0.1)';
163 +             this.style.boxShadow = '0 0 10px rgba(66, 133, 244, 0.3)';
164 +             this.style.transform = 'translateX(5px)';
165 +
166 +             // Create particle effect if the function is available
167 +             if (typeof triggerSmallParticleEffect === 'function') {
168 +                 triggerSmallParticleEffect(this, 3, 0.3);
169 +             }
170 +         });
171 +
172 +         item.addEventListener('mouseout', function() {
173 +             this.style.backgroundColor = '';
174 +             this.style.boxShadow = '';
175 +             this.style.transform = '';
176 +         });
177 +
178 +         // Try to find and highlight the entity in the original text
179 +         highlightEntityInInput(entityText);
180 +     });
181 + }
182 +
183 + /**
184 +  * Highlight an entity in the input text area
185 +  */
186 + function highlightEntityInInput(entityText) {
187 +     const textArea = document.getElementById('input_text');
188 +     if (!textArea || !entityText) return;
189 +
190 +     const text = textArea.value;
191 +     if (!text) return;
192 +
193 +     // Check if the entity is in the text
194 +     const regex = new RegExp(`\\b${entityText}\\b`, 'gi');
195 +     if (!regex.test(text)) return;
196 +
197 +     // We can't highlight directly in a textarea, so we could add a custom tooltip
198 +     // or create a special view mode that shows the marked-up text
199 +
200 +     // For now, we just add a subtle animation to the textarea
201 +     textArea.classList.add('has-entities');
202 +
203 +     // Create a glowing effect in the textarea
204 +     const glowEffect = document.createElement('div');
205 +     glowEffect.className = 'textarea-glow';
206 +     glowEffect.style.position = 'absolute';
207 +     glowEffect.style.top = '0';
208 +     glowEffect.style.left = '0';
209 +     glowEffect.style.width = '100%';
210 +     glowEffect.style.height = '100%';
211 +     glowEffect.style.pointerEvents = 'none';
212 +     glowEffect.style.boxShadow = 'inset 0 0 15px rgba(66, 133, 244, 0.3)';
213 +     glowEffect.style.opacity = '0';
214 +     glowEffect.style.transition = 'opacity 0.5s ease';
215 +
216 +     // Add glow effect as sibling to textarea with relative positioning
217 +     const parent = textArea.parentElement;
218 +     parent.style.position = 'relative';
219 +     parent.appendChild(glowEffect);
220 +
221 +     // Show the glow effect
222 +     setTimeout(() => {
223 +         glowEffect.style.opacity = '1';
224 +     }, 300);
225 +
226 +     // Hide after 2 seconds
227 +     setTimeout(() => {
228 +         glowEffect.style.opacity = '0';
229 +
230 +         // Remove after fade out
231 +         setTimeout(() => {
232 +             glowEffect.remove();
233 +         }, 500);
234 +     }, 2000);
235 + }
236 +
237 + /**
238 +  * Create a basic visualization of quantum score
239 +  */
240 + function visualizeQuantumScore(elementId, score) {
241 +     if (!elementId) return;
242 +
243 +     const element = document.getElementById(elementId);
244 +     if (!element) return;
245 +
246 +     // Create a simple bar visualization
247 +     const width = score * 100;
248 +     const color = getColorForScore(score);
249 +
250 +     element.innerHTML = `
251 +         <div class="progress">
252 +             <div class="progress-bar" role="progressbar"
253 +                 style="width: ${width}%; background-color: ${color};"
254 +                 aria-valuenow="${score * 100}" aria-valuemin="0" aria-valuemax="100">
255 +                 ${(score * 100).toFixed(0)}%
256 +             </div>
257 +         </div>
258 +     `;
259 + }
260 +
261 + /**
262 +  * Get color based on score value
263 +  */
264 + function getColorForScore(score) {
265 +     if (score < 0.3) return '#dc3545'; // Low - red
266 +     if (score < 0.6) return '#ffc107'; // Medium - yellow
267 +     if (score < 0.8) return '#0dcaf0'; // Good - cyan
268 +     return '#198754'; // High - green
269 + }
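The descending sort in `highlightEntities` above is the load-bearing detail: replacing spans from the back of the string forward means each insertion only shifts characters *after* the spans still waiting to be processed, so their recorded offsets stay valid. A minimal Python sketch of the same technique (the entity data here is hypothetical, not tied to the app's API):

```python
def wrap_entities(text, entities):
    """Wrap each {start, end, label} span in a tag, processing back to front."""
    # Sort by start offset, descending, so earlier offsets are never invalidated.
    for ent in sorted(entities, key=lambda e: e["start"], reverse=True):
        span = text[ent["start"]:ent["end"]]
        tagged = f'<span class="entity entity-{ent["label"].lower()}">{span}</span>'
        text = text[:ent["start"]] + tagged + text[ent["end"]:]
    return text

# Offsets refer to the original string; a front-to-back pass would corrupt them.
html = wrap_entities(
    "Alice met Bob.",
    [{"start": 0, "end": 5, "label": "PERSON"},
     {"start": 10, "end": 13, "label": "PERSON"}],
)
```

The same ordering trick applies anywhere precomputed character offsets drive in-place string edits, which is why the JS version sorts before its `forEach` loop.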
main.py
ADDED
@@ -0,0 +1,4 @@
1 + from app import app
2 +
3 + if __name__ == "__main__":
4 +     app.run(host="0.0.0.0", port=5000, debug=True)
models.py
ADDED
@@ -0,0 +1,55 @@
1 + from datetime import datetime
2 + from app import db
3 +
4 +
5 + class Task(db.Model):
6 +     """Model for storing workflow tasks and scheduled activities"""
7 +     id = db.Column(db.Integer, primary_key=True)
8 +     title = db.Column(db.String(100), nullable=False)
9 +     description = db.Column(db.Text)
10 +     status = db.Column(db.String(20), default='pending')  # pending, in_progress, completed, failed
11 +     priority = db.Column(db.Integer, default=1)  # 1 (low) to 5 (high)
12 +     created_at = db.Column(db.DateTime, default=datetime.utcnow)
13 +     scheduled_for = db.Column(db.DateTime, nullable=True)
14 +     completed_at = db.Column(db.DateTime, nullable=True)
15 +     task_type = db.Column(db.String(50))  # web_scrape, analyze, schedule, etc.
16 +
17 +     # For recursive task dependencies
18 +     parent_id = db.Column(db.Integer, db.ForeignKey('task.id'), nullable=True)
19 +     parent = db.relationship('Task', remote_side=[id], backref=db.backref('subtasks', lazy='dynamic'))
20 +
21 +     # Metadata for storing task configuration and results
22 +     config = db.Column(db.JSON, nullable=True)
23 +     result = db.Column(db.JSON, nullable=True)
24 +
25 +     def __repr__(self):
26 +         return f'<Task {self.id}: {self.title}>'
27 +
28 +
29 + class WebResource(db.Model):
30 +     """Model for storing web resources for browser automation"""
31 +     id = db.Column(db.Integer, primary_key=True)
32 +     url = db.Column(db.String(500), nullable=False)
33 +     title = db.Column(db.String(200))
34 +     category = db.Column(db.String(50))
35 +     last_accessed = db.Column(db.DateTime, nullable=True)
36 +     content_hash = db.Column(db.String(64), nullable=True)  # To detect changes
37 +
38 +     # Relationship with tasks
39 +     task_id = db.Column(db.Integer, db.ForeignKey('task.id'), nullable=True)
40 +     task = db.relationship('Task', backref=db.backref('web_resources', lazy='dynamic'))
41 +
42 +     def __repr__(self):
43 +         return f'<WebResource {self.id}: {self.title}>'
44 +
45 +
46 + class WorkflowTemplate(db.Model):
47 +     """Model for storing reusable workflow templates"""
48 +     id = db.Column(db.Integer, primary_key=True)
49 +     name = db.Column(db.String(100), nullable=False)
50 +     description = db.Column(db.Text)
51 +     created_at = db.Column(db.DateTime, default=datetime.utcnow)
52 +     steps = db.Column(db.JSON, nullable=False)  # Array of workflow steps
53 +
54 +     def __repr__(self):
55 +         return f'<WorkflowTemplate {self.id}: {self.name}>'
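The self-referential `parent_id`/`parent` pair on `Task` gives each row a `subtasks` collection, so workflows can nest to arbitrary depth. A rough stdlib-only sketch of how such a hierarchy might be walked — plain dataclasses stand in for the SQLAlchemy models, and `all_done` is a hypothetical helper, not part of the app:

```python
from dataclasses import dataclass, field

@dataclass
class TaskNode:
    """Simplified stand-in for the Task model's parent/subtasks relationship."""
    title: str
    status: str = "pending"
    subtasks: list = field(default_factory=list)

def all_done(task):
    # A task counts as done only when it and every nested subtask are completed.
    return task.status == "completed" and all(all_done(t) for t in task.subtasks)

scrape = TaskNode("web_scrape", "completed")
analyze = TaskNode("analyze", "completed", [scrape])
report = TaskNode("report", "pending", [analyze])

done_report = all_done(report)    # root still pending
done_analyze = all_done(analyze)  # it and its subtask are completed
```

With the real models, the equivalent traversal would iterate `task.subtasks` (a `lazy='dynamic'` query) instead of a list.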
nlp_processor.cpython-313.pyc
ADDED
Binary file (11.5 kB)
nlp_processor.py
ADDED
@@ -0,0 +1,325 @@
1 + """
2 + NLP Processing module that uses spaCy to analyze and process text.
3 + With fallback to Ollama Llama 3 and LM Studio.
4 + """
5 + import logging
6 + import json
7 + import requests
8 + from typing import Dict, Any, Optional, Union
9 +
10 + logger = logging.getLogger(__name__)
11 +
12 + # Ollama settings
13 + OLLAMA_URL = "http://localhost:11434/api/generate"
14 + OLLAMA_MODEL = "llama3"
15 +
16 + # LM Studio fallback settings
17 + LM_STUDIO_URL = "http://localhost:1234/v1"
18 +
19 + def process_text(nlp, text):
20 +     """
21 +     Process text using spaCy NLP.
22 +
23 +     Args:
24 +         nlp: The loaded spaCy NLP model
25 +         text: Text string to process
26 +
27 +     Returns:
28 +         dict: Results of NLP processing
29 +     """
30 +     try:
31 +         logger.debug(f"Processing text with spaCy: {text[:50]}...")
32 +
33 +         # Process the text with spaCy
34 +         doc = nlp(text)
35 +
36 +         # Extract named entities
37 +         entities = [{
38 +             'text': ent.text,
39 +             'label': ent.label_,
40 +             'start': ent.start_char,
41 +             'end': ent.end_char
42 +         } for ent in doc.ents]
43 +
44 +         # Extract noun chunks (phrases)
45 +         noun_chunks = [chunk.text for chunk in doc.noun_chunks]
46 +
47 +         # Get part-of-speech tags for each token
48 +         tokens = [{
49 +             'text': token.text,
50 +             'lemma': token.lemma_,
51 +             'pos': token.pos_,
52 +             'tag': token.tag_,
53 +             'dep': token.dep_,
54 +             'is_stop': token.is_stop
55 +         } for token in doc]
56 +
57 +         # Get sentence boundaries
58 +         sentences = [sent.text for sent in doc.sents]
59 +
60 +         # Calculate text statistics
61 +         word_count = len([token for token in doc if not token.is_punct and not token.is_space])
62 +         sentence_count = len(list(doc.sents))
63 +         average_sentence_length = word_count / sentence_count if sentence_count > 0 else 0
64 +
65 +         # Calculate token frequencies
66 +         token_freq = {}
67 +         for token in doc:
68 +             if not token.is_punct and not token.is_space and not token.is_stop:
69 +                 if token.lemma_ in token_freq:
70 +                     token_freq[token.lemma_] += 1
71 +                 else:
72 +                     token_freq[token.lemma_] = 1
73 +
74 +         # Sort frequencies and get top terms
75 +         sorted_freq = sorted(token_freq.items(), key=lambda x: x[1], reverse=True)
76 +         top_terms = sorted_freq[:10]
77 +
78 +         # Prepare results dictionary
79 +         results = {
80 +             'entities': entities,
81 +             'noun_chunks': noun_chunks,
82 +             'tokens': tokens,
83 +             'sentences': sentences,
84 +             'stats': {
85 +                 'word_count': word_count,
86 +                 'sentence_count': sentence_count,
87 +                 'average_sentence_length': round(average_sentence_length, 2)
88 +             },
89 +             'top_terms': top_terms
90 +         }
91 +
92 +         return results
93 +
94 +     except Exception as e:
95 +         logger.error(f"Error in NLP processing: {str(e)}")
96 +         return {
97 +             'error': str(e),
98 +             'entities': [],
99 +             'noun_chunks': [],
100 +             'tokens': [],
101 +             'sentences': [text],
102 +             'stats': {
103 +                 'word_count': len(text.split()),
104 +                 'sentence_count': 1,
105 +                 'average_sentence_length': len(text.split())
106 +             },
107 +             'top_terms': []
108 +         }
109 +
110 + def process_with_ollama(text: str) -> Dict[str, Any]:
111 +     """
112 +     Process text using Ollama Llama 3 model.
113 +
114 +     Args:
115 +         text: Text string to process
116 +
117 +     Returns:
118 +         dict: Results from Ollama processing
119 +     """
120 +     try:
121 +         logger.info(f"Processing with Ollama Llama 3: {text[:50]}...")
122 +
123 +         # Prepare the prompt
124 +         prompt = f"""Analyze the following text and provide insights:
125 + Text: "{text}"
126 +
127 + Please provide:
128 + 1. A summary of the main points
129 + 2. Key entities mentioned
130 + 3. Sentiment analysis (positive, negative, or neutral)
131 + 4. Any actions or requests mentioned
132 + """
133 +
134 +         # Call Ollama API
135 +         payload = {
136 +             "model": OLLAMA_MODEL,
137 +             "prompt": prompt,
138 +             "stream": False
139 +         }
140 +
141 +         response = requests.post(OLLAMA_URL, json=payload, timeout=30)
142 +         response.raise_for_status()
143 +
144 +         result = response.json()
145 +
146 +         # Extract the response
147 +         return {
148 +             "ollama_response": result.get("response", ""),
149 +             "processing_type": "ollama_llama3",
150 +             "prompt_tokens": result.get("prompt_eval_count", 0),
151 +             "completion_tokens": result.get("eval_count", 0),
152 +             "total_duration": result.get("total_duration", 0)
153 +         }
154 +
155 +     except requests.exceptions.RequestException as e:
156 +         logger.error(f"Error connecting to Ollama: {e}")
157 +         return {
158 +             "error": f"Ollama service error: {str(e)}",
159 +             "processing_type": "ollama_failed"
160 +         }
161 +     except Exception as e:
162 +         logger.error(f"Unexpected error in Ollama processing: {str(e)}")
163 +         return {
164 +             "error": f"Unexpected error: {str(e)}",
165 +             "processing_type": "ollama_error"
166 +         }
167 +
168 + def process_with_lm_studio(text: str) -> Dict[str, Any]:
169 +     """
170 +     Process text using LM Studio as a fallback.
171 +
172 +     Args:
173 +         text: Text string to process
174 +
175 +     Returns:
176 +         dict: Results from LM Studio processing
177 +     """
178 +     try:
179 +         logger.info(f"Falling back to LM Studio: {text[:50]}...")
180 +
181 +         # Prepare the prompt similar to OpenManus approach
182 +         prompt = f"Analyze this text: {text}"
183 +
184 +         # Call LM Studio API
185 +         headers = {
186 +             "Content-Type": "application/json"
187 +         }
188 +
189 +         payload = {
190 +             "prompt": prompt,
191 +             "max_tokens": 500,
192 +             "temperature": 0.7
193 +         }
194 +
195 +         response = requests.post(
196 +             f"{LM_STUDIO_URL}/completions",
197 +             headers=headers,
198 +             json=payload,
199 +             timeout=30
200 +         )
201 +         response.raise_for_status()
202 +
203 +         result = response.json()
204 +
205 +         return {
206 +             "lm_studio_response": result.get("choices", [{}])[0].get("text", ""),
207 +             "processing_type": "lm_studio_fallback"
208 +         }
209 +
210 +     except requests.exceptions.RequestException as e:
211 +         logger.error(f"Error connecting to LM Studio: {e}")
212 +         return {
213 +             "error": f"LM Studio service error: {str(e)}",
214 +             "processing_type": "lm_studio_failed"
215 +         }
216 +     except Exception as e:
217 +         logger.error(f"Unexpected error in LM Studio processing: {str(e)}")
218 +         return {
219 +             "error": f"Unexpected error: {str(e)}",
220 +             "processing_type": "lm_studio_error"
221 +         }
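The module chains these backends — spaCy first, then Ollama, then LM Studio — with each stage reporting failure via an `"error"` key rather than raising. The ordering logic reduces to a generic try-in-order pattern; the sketch below uses illustrative stand-in functions, not the module's real backends:

```python
def try_backends(text, backends):
    """Call each (name, fn) backend in order; stop at the first result
    that carries no 'error' key, recording every attempt along the way."""
    results = {}
    for name, fn in backends:
        try:
            results[name] = fn(text)
        except Exception as e:
            # Mirror the module's convention: failures become error dicts.
            results[name] = {"error": str(e)}
        if "error" not in results[name]:
            break  # first healthy backend wins
    return results

# Stand-ins: the first backend is "down", the second answers.
def ollama_stub(text):
    raise ConnectionError("Ollama not running")

def lm_studio_stub(text):
    return {"response": f"analyzed: {text}"}

out = try_backends("hello", [("ollama", ollama_stub), ("lm_studio", lm_studio_stub)])
```

Keeping every attempt in `results` (rather than returning only the winner) is what lets a later step like `generate_simple_response` pick the best available answer.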
222 +
223 + def enhanced_process_text(text: str) -> Dict[str, Any]:
224 +     """
225 +     Process text with multiple fallback methods.
226 +     First tries spaCy NLP, then Ollama Llama 3, then LM Studio.
227 +
228 +     Args:
229 +         text: Text string to process
230 +
231 +     Returns:
232 +         dict: Combined results from available processing methods
233 +     """
234 +     results = {}
235 +     spacy_success = False
236 +
237 +     # First try spaCy
238 +     try:
239 +         import spacy
240 +         try:
241 +             nlp = spacy.load("en_core_web_sm")
242 +         except:
243 +             nlp = spacy.blank("en")
244 +
245 +         spacy_results = process_text(nlp, text)
246 +         results["spacy"] = spacy_results
247 +         spacy_success = "error" not in spacy_results
248 +     except ImportError:
249 +         logger.warning("spaCy not available")
250 +         results["spacy"] = {"error": "spaCy not installed or available"}
251 +
252 +     # Try Ollama Llama 3 (regardless of spaCy success)
253 +     try:
254 +         ollama_results = process_with_ollama(text)
255 +         results["ollama"] = ollama_results
256 +
257 +         # If we got a clear error from Ollama, try LM Studio
258 +         if "error" in ollama_results:
259 +             lm_studio_results = process_with_lm_studio(text)
260 +             results["lm_studio"] = lm_studio_results
261 +     except Exception as e:
262 +         logger.error(f"Error in Ollama processing: {e}")
263 +         results["ollama"] = {"error": str(e)}
264 +
265 +         # Try LM Studio as final fallback
266 +         try:
267 +             lm_studio_results = process_with_lm_studio(text)
268 +             results["lm_studio"] = lm_studio_results
269 +         except Exception as lm_e:
270 +             logger.error(f"Error in LM Studio fallback: {lm_e}")
271 +             results["lm_studio"] = {"error": str(lm_e)}
272 +
273 +     # Generate a simplified response
274 +     simple_response = generate_simple_response(results)
275 +     results["simple_response"] = simple_response
276 +
277 +     return results
278 +
279 + def generate_simple_response(results: Dict[str, Any]) -> str:
280 +     """
281 +     Generate a simple, user-friendly response from the processing results.
282 +
283 +     Args:
284 +         results: The combined processing results
285 +
286 +     Returns:
287 +         str: A simplified response for the user
288 +     """
289 +     # First try to get a response from Ollama
290 +     if "ollama" in results and "error" not in results["ollama"]:
291 +         return results["ollama"].get("ollama_response", "")
292 +
293 +     # Fall back to LM Studio
294 +     if "lm_studio" in results and "error" not in results["lm_studio"]:
295 +         return results["lm_studio"].get("lm_studio_response", "")
296 +
297 +     # Finally use spaCy results if we have to
298 +     if "spacy" in results and "error" not in results["spacy"]:
299 +         spacy_data = results["spacy"]
300 +
301 +         # Create a simple response from spaCy analysis
302 +         sentences = spacy_data.get("sentences", [])
303 +         top_terms = spacy_data.get("top_terms", [])
304 +         entities = spacy_data.get("entities", [])
305 +
306 +         response_parts = []
307 +
308 +         if sentences:
309 +             response_parts.append(f"I analyzed your text of {len(sentences)} sentence(s).")
310 +
311 +         if top_terms:
312 +             terms_str = ", ".join([term[0] for term in top_terms[:5]])
313 +             response_parts.append(f"The key terms are: {terms_str}.")
314 +
315 +         if entities:
316 +             entity_texts = [e["text"] for e in entities]
317 +             if entity_texts:
318 +                 entities_str = ", ".join(entity_texts)
319 +                 response_parts.append(f"I identified these entities: {entities_str}.")
|
320 |
+
|
321 |
+
if response_parts:
|
322 |
+
return " ".join(response_parts)
|
323 |
+
|
324 |
+
# If all else fails
|
325 |
+
return "I processed your request but couldn't generate a detailed analysis."
|
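The selection order above (Ollama first, then LM Studio, then spaCy) can be exercised on its own. This is a simplified sketch of just that priority logic, with made-up result dicts; it is not the full `generate_simple_response`:

```python
# Simplified copy of the priority order used by generate_simple_response:
# prefer Ollama, then LM Studio, then a generic fallback message.
def pick_response(results):
    if "ollama" in results and "error" not in results["ollama"]:
        return results["ollama"].get("ollama_response", "")
    if "lm_studio" in results and "error" not in results["lm_studio"]:
        return results["lm_studio"].get("lm_studio_response", "")
    return "no detailed analysis available"

# Ollama reported an error, so the LM Studio answer wins.
print(pick_response({"ollama": {"error": "down"},
                     "lm_studio": {"lm_studio_response": "hi"}}))  # → hi
```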
openai_integration.cpython-313.pyc
ADDED
Binary file (19.4 kB).
openai_integration.py
ADDED
@@ -0,0 +1,908 @@
"""
OpenAI integration module for the Ultimate AI Tool workflow automation assistant.

This module provides enhanced ChatGPT integration with:
- Quantum layer multidimensional thinking
- Recursive agent capabilities
- Task and workflow optimization
- Web content analysis
- Support for local LLM solutions (Ollama, LM Studio)
"""
import os
import logging
import json
import random
import datetime
import requests
import re

# The newest OpenAI model is "gpt-4o", which was released May 13, 2024.
# Do not change this unless explicitly requested by the user.
MODEL = "gpt-4o"
EMBEDDING_MODEL = "text-embedding-ada-002"

logger = logging.getLogger(__name__)

# Initialize the OpenAI client only if an API key is available
OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY")
client = None

# Local LLM configuration
LOCAL_LLM_ENABLED = os.environ.get("LOCAL_LLM_ENABLED", "true").lower() == "true"
LOCAL_LLM_PROVIDER = os.environ.get("LOCAL_LLM_PROVIDER", "ollama").lower()  # "ollama" or "lmstudio"
LOCAL_LLM_MODEL = os.environ.get("LOCAL_LLM_MODEL", "llama3")
OLLAMA_URL = os.environ.get("OLLAMA_URL", "http://localhost:11434")
LM_STUDIO_URL = os.environ.get("LM_STUDIO_URL", "http://localhost:1234/v1")

# Dictionary of local LLM providers and their configurations
LOCAL_LLM_PROVIDERS = {
    "lmstudio": {
        "url": lambda: LM_STUDIO_URL,
        "chat_endpoint": "/chat/completions",
        "completions_endpoint": "/completions"
    },
    "ollama": {
        "url": lambda: OLLAMA_URL,
        "chat_endpoint": "/api/chat",
        "completions_endpoint": "/api/generate"
    }
}

# Conversation history for the recursive agent capability
CONVERSATION_HISTORY = []
MAX_HISTORY_LENGTH = 10

try:
    # Only import the OpenAI module if there's an API key
    if OPENAI_API_KEY:
        from openai import OpenAI
        client = OpenAI(api_key=OPENAI_API_KEY)
        logger.info("OpenAI client initialized successfully")
    else:
        logger.warning("OPENAI_API_KEY environment variable not set - using local LLM if available")
except ImportError:
    logger.warning("OpenAI package not imported - using local LLM if available")
except Exception as e:
    logger.error(f"Failed to initialize OpenAI client: {str(e)}")

def query_local_llm(prompt=None, provider=None, model=None, messages=None, max_tokens=500, temperature=0.7):
    """
    Query a local LLM provider such as LM Studio or Ollama.

    Args:
        prompt: The text prompt to send (used if messages is None)
        provider: The local LLM provider to use ("lmstudio" or "ollama")
        model: The model name to use (provider-specific)
        messages: List of messages in chat format (if None, uses prompt as a single user message)
        max_tokens: Maximum tokens to generate
        temperature: Temperature for response generation

    Returns:
        str: The generated response text
    """
    if not LOCAL_LLM_ENABLED:
        logger.warning("Local LLM is disabled")
        return "Local LLM is disabled. Please enable it or provide an OpenAI API key."

    # Use the default provider if not specified
    if not provider:
        provider = LOCAL_LLM_PROVIDER

    # Check if the provider is supported
    if provider not in LOCAL_LLM_PROVIDERS:
        logger.error(f"Unknown local LLM provider: {provider}")
        return f"Error: Unknown provider '{provider}'"

    try:
        provider_config = LOCAL_LLM_PROVIDERS[provider]
        base_url = provider_config["url"]()

        # Default model based on provider
        if model is None:
            model = LOCAL_LLM_MODEL
            if provider == "lmstudio":
                # Some LM Studio installations use a prefix for model names
                model = f"openai/{model}" if not model.startswith("openai/") else model

        # Format the request based on whether we have messages or just a prompt
        if messages:
            # Chat completion format
            endpoint = provider_config["chat_endpoint"]
            url = f"{base_url}{endpoint}"

            if provider == "lmstudio":
                # LM Studio follows the OpenAI format
                payload = {
                    "model": model,
                    "messages": messages,
                    "temperature": temperature,
                    "max_tokens": max_tokens
                }
            elif provider == "ollama":
                # Ollama has its own format
                payload = {
                    "model": model,
                    "messages": messages,
                    "stream": False,
                    "options": {
                        "temperature": temperature,
                        "num_predict": max_tokens
                    }
                }
        else:
            # Completion format with just a prompt
            endpoint = provider_config["completions_endpoint"]
            url = f"{base_url}{endpoint}"

            if provider == "lmstudio":
                payload = {
                    "model": model,
                    "prompt": prompt,
                    "temperature": temperature,
                    "max_tokens": max_tokens
                }
            elif provider == "ollama":
                payload = {
                    "model": model,
                    "prompt": prompt,
                    "stream": False,  # request a single JSON object, not a stream
                    "options": {
                        "temperature": temperature,
                        "num_predict": max_tokens
                    }
                }

        # Make the request
        logger.info(f"Querying {provider} at {url} with model {model}")
        response = requests.post(url, json=payload, timeout=60)
        response.raise_for_status()
        data = response.json()

        # Handle the response based on provider and request type
        if messages:
            if provider == "lmstudio":
                # OpenAI-compatible format
                return data.get("choices", [{}])[0].get("message", {}).get("content", "")
            elif provider == "ollama":
                return data.get("message", {}).get("content", "")
        else:
            if provider == "lmstudio":
                return data.get("choices", [{}])[0].get("text", "")
            elif provider == "ollama":
                return data.get("response", "")

    except requests.exceptions.RequestException as e:
        logger.error(f"Error connecting to {provider}: {e}")
        return f"Error: Unable to connect to {provider}. {str(e)}"
    except json.JSONDecodeError as e:
        logger.error(f"Error parsing JSON response from {provider}: {e}")
        return f"Error: Invalid response from {provider}"
    except Exception as e:
        logger.error(f"Unexpected error with {provider}: {e}")
        return f"Error: {str(e)}"

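For Ollama's `/api/chat` endpoint, the request body built above has this shape. This standalone sketch only constructs the payload (no HTTP call); `"llama3"` mirrors the module default and is purely illustrative:

```python
# Sketch of the request body query_local_llm sends to Ollama's /api/chat.
def build_ollama_chat_payload(model, messages, max_tokens=500, temperature=0.7):
    # "stream": False asks Ollama for a single JSON object rather than NDJSON.
    return {
        "model": model,
        "messages": messages,
        "stream": False,
        "options": {"temperature": temperature, "num_predict": max_tokens},
    }

payload = build_ollama_chat_payload("llama3", [{"role": "user", "content": "hello"}])
print(payload["options"]["num_predict"])  # → 500
```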
def generate_fallback_response(input_text, quantum_results):
    """
    Generate a fallback response when the OpenAI API is not available.

    Args:
        input_text (str): Original input text
        quantum_results (dict): Results from the quantum thinking process

    Returns:
        dict: Simulated response
    """
    meta_insight = quantum_results.get('meta_insight', '')
    paths = quantum_results.get('paths', [])

    # Create a list of template sentences to use in the fallback response
    templates = [
        f"The quantum analysis reveals that '{input_text}' contains multiple layers of meaning.",
        f"Based on the meta-insight: {meta_insight}, we can understand this text from several perspectives.",
        "The multi-dimensional thinking process uncovers patterns that might not be immediately obvious.",
        "When we apply quantum-inspired analysis, new connections emerge between concepts."
    ]

    # Add insights from paths
    for path in paths[:2]:
        if 'concept' in path and 'insight' in path:
            templates.append(f"The concept of '{path['concept']}' leads to this insight: {path['insight']}")
        if 'entangled_insight' in path:
            templates.append(f"Furthermore: {path['entangled_insight']}")

    # Add concluding statements
    conclusions = [
        "This analysis demonstrates how multi-layered thinking can extract deeper meaning from text.",
        "By considering multiple dimensions simultaneously, we gain a more comprehensive understanding.",
        f"The quantum score of {quantum_results.get('quantum_score', 0.5)} suggests a significant depth to this analysis.",
        "To fully appreciate the analysis, consider how these different perspectives interact and inform each other."
    ]

    # Construct the response
    response_parts = []
    response_parts.extend(random.sample(templates, min(3, len(templates))))
    response_parts.append("")  # Empty line
    response_parts.append("In conclusion:")
    response_parts.extend(random.sample(conclusions, min(2, len(conclusions))))

    response_text = "\n\n".join(response_parts)

    return {
        "text": response_text,
        "meta_insight": meta_insight,
        "model": "Quantum Framework (Fallback Mode)"
    }

def generate_completion(input_text, quantum_results):
    """
    Generate a completion using OpenAI's API, a local LLM, or fall back to local generation.

    Args:
        input_text (str): Original input text
        quantum_results (dict): Results from the quantum thinking process

    Returns:
        dict: OpenAI API response, local LLM response, or fallback response
    """
    # Try OpenAI first if available
    if OPENAI_API_KEY and client is not None:
        try:
            # Extract meta insights from quantum results
            meta_insight = quantum_results.get('meta_insight', '')

            # Create a prompt that incorporates quantum thinking results
            prompt = create_prompt(input_text, quantum_results)

            logger.debug(f"Sending prompt to OpenAI: {prompt[:100]}...")

            # Call the OpenAI API
            response = client.chat.completions.create(
                model=MODEL,
                messages=[
                    {
                        "role": "system",
                        "content": "You are a quantum thinking framework assistant that analyzes text using multi-dimensional, layered thinking processes. Provide insightful, thoughtful responses that consider multiple perspectives and layers of meaning."
                    },
                    {
                        "role": "user",
                        "content": prompt
                    }
                ],
                max_tokens=500,
                temperature=0.7
            )

            # Extract and return the response text
            response_text = response.choices[0].message.content if response.choices else ""

            return {
                "text": response_text,
                "meta_insight": meta_insight,
                "model": MODEL
            }

        except Exception as e:
            logger.error(f"Error calling OpenAI API: {str(e)}")
            # Fall through to try the local LLM

    # Try the local LLM if enabled
    if LOCAL_LLM_ENABLED:
        try:
            # Extract meta insights from quantum results
            meta_insight = quantum_results.get('meta_insight', '')

            # Create a prompt that incorporates quantum thinking results
            prompt = create_prompt(input_text, quantum_results)

            logger.debug(f"Sending prompt to local LLM ({LOCAL_LLM_PROVIDER}): {prompt[:100]}...")

            # Create messages in chat format
            messages = [
                {
                    "role": "system",
                    "content": "You are a quantum thinking framework assistant that analyzes text using multi-dimensional, layered thinking processes. Provide insightful, thoughtful responses that consider multiple perspectives and layers of meaning."
                },
                {
                    "role": "user",
                    "content": prompt
                }
            ]

            # Query the local LLM
            response_text = query_local_llm(
                provider=LOCAL_LLM_PROVIDER,
                model=LOCAL_LLM_MODEL,
                messages=messages,
                max_tokens=500,
                temperature=0.7
            )

            if not response_text.startswith("Error:"):
                return {
                    "text": response_text,
                    "meta_insight": meta_insight,
                    "model": f"{LOCAL_LLM_PROVIDER}/{LOCAL_LLM_MODEL}"
                }
            else:
                logger.error(f"Error from local LLM: {response_text}")
                # Fall through to the fallback

        except Exception as e:
            logger.error(f"Error using local LLM: {str(e)}")
            # Fall through to the fallback

    # If all else fails, fall back to local generation
    logger.warning("All LLM options failed - using fallback response generator")
    return generate_fallback_response(input_text, quantum_results)

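The control flow of `generate_completion` is a try-in-order chain: attempt each backend, and let any failure fall through to the next. A minimal, generic sketch of that pattern (backend names and lambdas here are illustrative, not the module's real backends):

```python
# Try each backend in order; any exception falls through to the next one.
def first_success(backends, text):
    for name, fn in backends:
        try:
            return name, fn(text)
        except Exception:
            continue  # fall through to the next backend
    return "fallback", "static response"

def broken_backend(text):
    raise RuntimeError("no API key")

backends = [("openai", broken_backend),
            ("local", lambda text: "local answer")]
print(first_success(backends, "hi"))  # → ('local', 'local answer')
```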
def create_prompt(input_text, quantum_results):
    """
    Create a detailed prompt for the OpenAI API.

    Args:
        input_text (str): Original input text
        quantum_results (dict): Results from the quantum thinking process

    Returns:
        str: Formatted prompt
    """
    # Extract key elements from quantum results
    dimension = quantum_results.get('dimension', 0)
    meta_insight = quantum_results.get('meta_insight', '')
    paths = quantum_results.get('paths', [])

    # Build a detailed prompt
    prompt = f"""
I've analyzed the following text using a quantum thinking framework:

Original text: "{input_text}"

The analysis explored {dimension} recursive dimensions of thought, resulting in the following meta-insight:
{meta_insight}

Key conceptual paths explored:
"""

    # Add information about the paths explored
    for i, path in enumerate(paths[:3], 1):  # Include up to 3 paths to keep the prompt concise
        concept = path.get('concept', '')
        insight = path.get('insight', '')
        entangled_insight = path.get('entangled_insight', '')

        prompt += f"\nPath {i}: Concept '{concept}'\n"
        prompt += f"- Insight: {insight}\n"

        if entangled_insight:
            prompt += f"- Entangled insight: {entangled_insight}\n"

    # Add instructions for the response
    prompt += """
Based on this multi-dimensional analysis, please:

1. Provide a comprehensive interpretation of the original text
2. Highlight patterns and insights revealed by the quantum thinking process
3. Suggest alternative perspectives or implications that might not be immediately obvious
4. Synthesize a coherent understanding that incorporates multiple layers of meaning

Respond in a clear, insightful manner that demonstrates deep understanding.
"""

    return prompt

def analyze_web_content(url, content, metadata=None):
    """
    Analyze web content using OpenAI, a local LLM, or a fallback method.

    Args:
        url (str): The URL of the web content
        content (str): The text content to analyze
        metadata (dict, optional): Additional metadata about the content

    Returns:
        dict: Analysis results
    """
    # Try OpenAI first if available
    if OPENAI_API_KEY and client is not None:
        try:
            # Truncate the content if too long
            max_content_length = 3000
            trimmed_content = content[:max_content_length] + "..." if len(content) > max_content_length else content

            # Create a system prompt for web content analysis
            system_prompt = """You are a quantum web analysis assistant. Analyze the provided web content and extract key information:
1. Main topics and themes
2. Key entities (people, organizations, places)
3. Important facts and claims
4. Sentiment and tone
5. Potential biases or viewpoints

Provide a comprehensive analysis that would be valuable for someone seeking to understand this content deeply."""

            # Create a user prompt with the content
            user_prompt = f"""Analyze the following web content from {url}:

{trimmed_content}

Provide a structured analysis with clear sections for topics, entities, facts, sentiment, and potential biases.
Include a brief summary at the beginning and key takeaways at the end."""

            # Call the OpenAI API
            response = client.chat.completions.create(
                model=MODEL,
                messages=[
                    {"role": "system", "content": system_prompt},
                    {"role": "user", "content": user_prompt}
                ],
                max_tokens=800,
                temperature=0.5
            )

            # Extract the response text
            analysis_text = response.choices[0].message.content if response.choices else ""

            return {
                "status": "success",
                "url": url,
                "analysis": analysis_text,
                "model": MODEL,
                "timestamp": datetime.datetime.utcnow().isoformat()
            }

        except Exception as e:
            logger.error(f"Error analyzing web content with OpenAI: {str(e)}")
            # Fall through to try the local LLM

    # Try the local LLM if enabled
    if LOCAL_LLM_ENABLED:
        try:
            # Truncate the content if too long
            max_content_length = 3000
            trimmed_content = content[:max_content_length] + "..." if len(content) > max_content_length else content

            # Create messages in chat format
            messages = [
                {
                    "role": "system",
                    "content": """You are a web content analysis assistant. Analyze the provided web content and extract key information:
1. Main topics and themes
2. Key entities (people, organizations, places)
3. Important facts and claims
4. Sentiment and tone
5. Potential biases or viewpoints

Provide a comprehensive analysis that would be valuable for someone seeking to understand this content deeply."""
                },
                {
                    "role": "user",
                    "content": f"""Analyze the following web content from {url}:

{trimmed_content}

Provide a structured analysis with clear sections for topics, entities, facts, sentiment, and potential biases.
Include a brief summary at the beginning and key takeaways at the end."""
                }
            ]

            # Query the local LLM
            analysis_text = query_local_llm(
                provider=LOCAL_LLM_PROVIDER,
                model=LOCAL_LLM_MODEL,
                messages=messages,
                max_tokens=800,
                temperature=0.5
            )

            if not analysis_text.startswith("Error:"):
                return {
                    "status": "success",
                    "url": url,
                    "analysis": analysis_text,
                    "model": f"{LOCAL_LLM_PROVIDER}/{LOCAL_LLM_MODEL}",
                    "timestamp": datetime.datetime.utcnow().isoformat()
                }
            else:
                logger.error(f"Error from local LLM for web analysis: {analysis_text}")
                # Fall through to the fallback

        except Exception as e:
            logger.error(f"Error analyzing web content with local LLM: {str(e)}")
            # Fall through to the fallback

    # If all else fails, fall back to local analysis
    logger.warning("All LLM options failed for web analysis - using fallback web analysis")
    return fallback_web_analysis(url, content, metadata)

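The truncation expression used twice in `analyze_web_content` can be factored into a helper; this standalone sketch shows the same conditional slice (3000 characters is the module's cutoff, smaller limits are used here only for brevity):

```python
# Conditional truncation, as in analyze_web_content's trimmed_content.
def trim(content, limit=3000):
    return content[:limit] + "..." if len(content) > limit else content

print(trim("abcdef", limit=4))  # → abcd...
print(trim("abc", limit=4))     # → abc
```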
def fallback_web_analysis(url, content, metadata=None):
    """Generate a fallback web content analysis when the OpenAI API is not available."""
    # Extract basic metrics
    word_count = len(content.split())
    sentence_count = len([s for s in content.split('.') if s.strip()])
    avg_sentence_length = word_count / max(1, sentence_count)

    # Sample key phrases (very basic implementation)
    words = content.split()
    key_phrases = []

    if len(words) > 20:
        for i in range(min(5, len(words) // 20)):
            start_idx = i * (len(words) // 5)
            phrase = ' '.join(words[start_idx:start_idx + min(5, len(words) - start_idx)])
            key_phrases.append(phrase)

    analysis = f"""
## Web Content Analysis (Fallback Mode)

**URL:** {url}

**Basic Metrics:**
- Word Count: {word_count}
- Sentence Count: {sentence_count}
- Average Sentence Length: {avg_sentence_length:.1f} words

**Sample Content Phrases:**
"""
    for phrase in key_phrases:
        analysis += f"- \"{phrase}\"\n"

    analysis += """
**Note:** This is a limited analysis generated by the fallback system.
To get more comprehensive analysis capabilities, please provide an OpenAI API key.
"""

    return {
        "status": "limited",
        "url": url,
        "analysis": analysis,
        "model": "Fallback Analysis",
        "timestamp": datetime.datetime.utcnow().isoformat()
    }

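The basic metrics in `fallback_web_analysis` reduce to three expressions; evaluated on a short made-up string they give:

```python
# Word count, naive sentence count (split on '.'), and average length.
content = "First sentence. Second one. Third."
word_count = len(content.split())
sentence_count = len([s for s in content.split('.') if s.strip()])
avg_sentence_length = word_count / max(1, sentence_count)
print(word_count, sentence_count)  # → 5 3
```

Note the sentence split is deliberately naive: abbreviations and decimals would inflate the count, which is acceptable for a fallback-mode estimate.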
def process_recursive_agent(user_input, context=None):
    """
    Process input using a recursive agent approach that maintains conversation context.

    Args:
        user_input (str): User's input text
        context (dict, optional): Additional context information

    Returns:
        dict: Agent response with actions or next steps
    """
    global CONVERSATION_HISTORY

    # Add the user input to the conversation history
    CONVERSATION_HISTORY.append({"role": "user", "content": user_input})

    # Trim the history if too long
    if len(CONVERSATION_HISTORY) > MAX_HISTORY_LENGTH:
        CONVERSATION_HISTORY = CONVERSATION_HISTORY[-MAX_HISTORY_LENGTH:]

    # System message for the recursive agent
    system_message = {
        "role": "system",
        "content": """You are an advanced recursive agent with the ability to:
1. Process and analyze user requests
2. Break complex tasks into subtasks
3. Execute tasks in the correct sequence
4. Learn from previous interactions
5. Maintain context across conversation turns

Respond in JSON format with the following structure:
{
    "understanding": "Your interpretation of the user's request",
    "thought_process": "Your step-by-step reasoning",
    "actions": [
        {"type": "web_search", "query": "search query"},
        {"type": "analyze", "text": "text to analyze"},
        {"type": "schedule", "task": "task to schedule", "when": "time"}
    ],
    "response": "Your response to the user",
    "follow_up": "Optional follow-up question or suggestion"
}"""
    }

    # Combine the system message with the conversation history
    messages = [system_message] + CONVERSATION_HISTORY

    # Add context information if provided
    if context:
        context_message = {
            "role": "system",
            "content": f"Additional context information: {json.dumps(context)}"
        }
        messages.append(context_message)

    # Try OpenAI first if available
    if OPENAI_API_KEY and client is not None:
        try:
            # Call the OpenAI API with JSON response format
            response = client.chat.completions.create(
                model=MODEL,
                messages=messages,
                max_tokens=1000,
                temperature=0.7,
                response_format={"type": "json_object"}
            )

            # Extract and parse the JSON response
            response_text = response.choices[0].message.content if response.choices else "{}"
            try:
                agent_response = json.loads(response_text)
            except json.JSONDecodeError:
                agent_response = {
                    "understanding": "Error parsing response",
                    "response": "I encountered an error processing your request. Please try again.",
                    "actions": []
                }

            # Add the assistant response to the conversation history
            CONVERSATION_HISTORY.append({
                "role": "assistant",
                "content": agent_response.get("response", "")
            })

            return {
                "status": "success",
                "agent_response": agent_response,
                "model": MODEL
            }

        except Exception as e:
            logger.error(f"Error in OpenAI recursive agent: {str(e)}")
            # Fall through to try the local LLM

    # Try the local LLM if enabled
    if LOCAL_LLM_ENABLED:
        try:
            # Query the local LLM
            response_text = query_local_llm(
                provider=LOCAL_LLM_PROVIDER,
                model=LOCAL_LLM_MODEL,
                messages=messages,
                max_tokens=1000,
                temperature=0.7
            )

            if not response_text.startswith("Error:"):
                # Try to parse a JSON response
                try:
                    # First, look for a JSON structure in the response
                    json_match = re.search(r'({[\s\S]*})', response_text)
                    if json_match:
                        response_text = json_match.group(1)

                    agent_response = json.loads(response_text)
                except json.JSONDecodeError:
                    # If not valid JSON, create a simplified response
                    agent_response = {
                        "understanding": "Processed with local LLM",
                        "response": response_text,
                        "actions": []
                    }

                # Add the assistant response to the conversation history
                CONVERSATION_HISTORY.append({
                    "role": "assistant",
                    "content": agent_response.get("response", response_text)
                })

                return {
                    "status": "success",
                    "agent_response": agent_response,
                    "model": f"{LOCAL_LLM_PROVIDER}/{LOCAL_LLM_MODEL}"
                }
            else:
                logger.error(f"Error from local LLM for recursive agent: {response_text}")
                # Fall through to the fallback

        except Exception as e:
            logger.error(f"Error in local LLM recursive agent: {str(e)}")
            # Fall through to the fallback

    # If all else fails, use a simple fallback
    fallback_response = {
        "understanding": "Basic understanding of request",
        "response": "I processed your request but couldn't use advanced features. Please ensure an LLM is available.",
        "actions": []
    }

    # Add the assistant response to the conversation history
    CONVERSATION_HISTORY.append({
        "role": "assistant",
        "content": fallback_response.get("response", "")
    })

    return {
        "status": "limited",
        "agent_response": fallback_response,
        "model": "Fallback Agent"
|
718 |
+
}
|
719 |
+
|
720 |
+
def optimize_workflow(workflow_description, tasks=None):
    """
    Optimize a workflow based on description and existing tasks.

    Args:
        workflow_description (str): Description of the workflow
        tasks (list, optional): List of existing tasks in the workflow

    Returns:
        dict: Optimized workflow with steps and recommendations
    """
    # Create system prompt for workflow optimization
    system_prompt = """You are a workflow optimization expert. Analyze the provided workflow description and tasks to:
1. Identify inefficiencies and bottlenecks
2. Suggest automation opportunities
3. Optimize task sequencing and dependencies
4. Recommend parallel processing where possible
5. Create a step-by-step optimized workflow plan

Respond with a structured optimization plan in JSON format."""

    # Create user prompt with workflow details
    tasks_json = json.dumps(tasks) if tasks else "[]"
    user_prompt = f"""Optimize the following workflow:

Description: {workflow_description}

Current Tasks: {tasks_json}

Provide a comprehensive optimization plan with clear steps, automation recommendations, and efficiency improvements.
Format your response as a JSON object with 'optimized_workflow', 'recommendations', and 'automation_opportunities' keys."""

    # Try OpenAI first if available
    if OPENAI_API_KEY and client is not None:
        try:
            # Call OpenAI API with JSON response format
            response = client.chat.completions.create(
                model=MODEL,
                messages=[
                    {"role": "system", "content": system_prompt},
                    {"role": "user", "content": user_prompt}
                ],
                max_tokens=1000,
                temperature=0.7,
                response_format={"type": "json_object"}
            )

            # Extract and parse the JSON response
            response_text = response.choices[0].message.content if response.choices else "{}"
            try:
                optimization_result = json.loads(response_text)
            except json.JSONDecodeError:
                optimization_result = {
                    "optimized_workflow": [],
                    "recommendations": ["Error parsing optimization results"],
                    "automation_opportunities": []
                }

            return {
                "status": "success",
                "optimization": optimization_result,
                "model": MODEL
            }

        except Exception as e:
            logger.error(f"Error optimizing workflow with OpenAI: {str(e)}")
            # Fall through to try local LLM

    # Try local LLM if enabled
    if LOCAL_LLM_ENABLED:
        try:
            # Create messages for chat format
            messages = [
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": user_prompt}
            ]

            # Query local LLM
            response_text = query_local_llm(
                provider=LOCAL_LLM_PROVIDER,
                model=LOCAL_LLM_MODEL,
                messages=messages,
                max_tokens=1000,
                temperature=0.7
            )

            if not response_text.startswith("Error:"):
                # Try to parse JSON response
                try:
                    # First, look for JSON structure in the response
                    json_match = re.search(r'({[\s\S]*})', response_text)
                    if json_match:
                        response_text = json_match.group(1)

                    optimization_result = json.loads(response_text)
                except json.JSONDecodeError:
                    # If not valid JSON, create a simplified response
                    optimization_result = {
                        "optimized_workflow": [],
                        "recommendations": ["The optimization was processed with a local LLM but couldn't be formatted as JSON. Raw output:", response_text],
                        "automation_opportunities": []
                    }

                return {
                    "status": "success",
                    "optimization": optimization_result,
                    "model": f"{LOCAL_LLM_PROVIDER}/{LOCAL_LLM_MODEL}"
                }
            else:
                logger.error(f"Error from local LLM for workflow optimization: {response_text}")
                # Fall through to fallback

        except Exception as e:
            logger.error(f"Error optimizing workflow with local LLM: {str(e)}")
            # Fall through to fallback

    # If all else fails, return a simple fallback
    return {
        "status": "error",
        "message": "No LLM available for workflow optimization",
        "optimization": {
            "optimized_workflow": [],
            "recommendations": ["To enable workflow optimization, please provide an LLM solution."],
            "automation_opportunities": []
        }
    }

def generate_text(prompt):
    """Generate text using OpenAI API or local LLM

    Args:
        prompt: The text prompt to send to the LLM

    Returns:
        str: The generated response
    """
    # Try OpenAI first if available
    if OPENAI_API_KEY and client is not None:
        try:
            logger.info(f"Sending prompt to OpenAI: {prompt[:50]}...")

            # Make the API call
            response = client.chat.completions.create(
                model=MODEL,
                messages=[
                    {"role": "system", "content": "You are a helpful AI assistant."},
                    {"role": "user", "content": prompt}
                ],
                max_tokens=500
            )

            # Extract and return the response text
            response_text = response.choices[0].message.content
            return response_text

        except Exception as e:
            logger.error(f"Error generating text with OpenAI: {str(e)}")
            # Fall through to try local LLM

    # Try local LLM if enabled
    if LOCAL_LLM_ENABLED:
        try:
            logger.info(f"Sending prompt to local LLM ({LOCAL_LLM_PROVIDER}): {prompt[:50]}...")

            # Create messages for chat format
            messages = [
                {"role": "system", "content": "You are a helpful AI assistant."},
                {"role": "user", "content": prompt}
            ]

            # Query local LLM
            response_text = query_local_llm(
                provider=LOCAL_LLM_PROVIDER,
                model=LOCAL_LLM_MODEL,
                messages=messages
            )

            if not response_text.startswith("Error:"):
                return response_text
            else:
                logger.error(f"Error from local LLM: {response_text}")
                # Fall through to fallback

        except Exception as e:
            logger.error(f"Error generating text with local LLM: {str(e)}")
            # Fall through to fallback

    # If all else fails, return a simple message
    return "I'm unable to generate a response at this time. Please ensure an LLM solution is available."
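The local-LLM branches above recover a JSON payload from free-form model output by grabbing the outermost `{...}` span before parsing. A minimal standalone sketch of that recovery step (the name `extract_json_object` is illustrative, not part of the module):

```python
import json
import re

def extract_json_object(response_text):
    """Best-effort recovery of a JSON object embedded in free-form LLM output.

    Mirrors the module's fallback: grab the outermost {...} span, then
    degrade to a plain-text wrapper dict if parsing still fails.
    """
    match = re.search(r'({[\s\S]*})', response_text)
    candidate = match.group(1) if match else response_text
    try:
        return json.loads(candidate)
    except json.JSONDecodeError:
        return {"understanding": "Processed with local LLM",
                "response": response_text,
                "actions": []}

# A well-formed payload wrapped in chatter parses cleanly...
print(extract_json_object('Sure! {"response": "ok", "actions": []}')["response"])  # ok
# ...while plain prose degrades to the wrapper structure.
print(extract_json_object("no json here")["understanding"])  # Processed with local LLM
```

Note the greedy regex spans from the first `{` to the last `}`, so a reply containing two separate JSON objects would still fail to parse and fall back to the wrapper.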
pyproject.toml
ADDED
@@ -0,0 +1,18 @@
[project]
name = "repl-nix-workspace"
version = "0.1.0"
description = "Add your description here"
requires-python = ">=3.11"
dependencies = [
    "beautifulsoup4>=4.13.3",
    "email-validator>=2.2.0",
    "flask>=3.1.0",
    "flask-sqlalchemy>=3.1.1",
    "gunicorn>=23.0.0",
    "openai>=1.68.2",
    "psycopg2-binary>=2.9.10",
    "requests>=2.32.3",
    "spacy>=3.8.4",
    "sqlalchemy>=2.0.39",
    "trafilatura>=2.0.0",
]
quantum-bg.svg
ADDED
quantum_thinking.py
ADDED
@@ -0,0 +1,265 @@
"""
Quantum-inspired recursive thinking module.

This module implements a multi-dimensional, layered thinking process
inspired by quantum computing concepts, using recursion to simulate
multiple simultaneous thought paths.
"""
import logging
import random
from collections import defaultdict
import re

logger = logging.getLogger(__name__)

def quantum_recursive_thinking(text, depth=3, branch_factor=3, superposition_factor=0.7):
    """
    Implements a quantum-inspired recursive thinking process.

    Args:
        text (str): The input text to process
        depth (int): The recursive depth (dimensionality) of thinking
        branch_factor (int): How many paths to explore at each level
        superposition_factor (float): How much paths should influence each other

    Returns:
        dict: Results of the quantum thinking process
    """
    try:
        logger.debug(f"Quantum thinking on text: {text[:50]}... with depth {depth}")

        # Base case
        if depth <= 0 or not text.strip():
            return {
                'dimension': 0,
                'insight': "Reached base dimension",
                'paths': []
            }

        # Clean and prepare text
        cleaned_text = clean_text(text)

        # Extract key concepts from text
        concepts = extract_key_concepts(cleaned_text)

        # Ensure we have enough concepts
        if len(concepts) < branch_factor:
            # Add some generic concepts to ensure branching
            generic_concepts = ["analysis", "implication", "perspective",
                                "alternative", "synthesis", "insight"]
            concepts.extend(generic_concepts[:branch_factor - len(concepts)])

        # Select branch_factor concepts to explore
        selected_concepts = concepts[:branch_factor]

        # Create paths for recursive exploration
        paths = []
        all_insights = []

        # For each selected concept, create a new thinking path
        for i, concept in enumerate(selected_concepts):
            # Create the thought prompt for this concept
            thought_prompt = generate_thought_prompt(text, concept, depth)

            # Recursively explore this path with reduced depth
            path_result = quantum_recursive_thinking(
                thought_prompt,
                depth - 1,
                max(2, branch_factor - 1),   # Reduce branching as we go deeper
                superposition_factor * 0.9   # Reduce superposition effect as we go deeper
            )

            # Generate insight for this path
            path_insight = generate_insight(text, concept, depth)
            all_insights.append(path_insight)

            # Add results to paths
            paths.append({
                'concept': concept,
                'prompt': thought_prompt,
                'insight': path_insight,
                'sub_dimensions': path_result
            })

        # Apply "superposition" - allow paths to influence each other
        # This simulates quantum entanglement between thinking paths
        entangled_paths = apply_superposition(paths, superposition_factor)

        # Synthesize a meta-insight from all path insights
        meta_insight = synthesize_insights(text, all_insights, depth)

        # Compute "quantum probability" score
        quantum_score = compute_quantum_score(text, entangled_paths, depth)

        # Return the final results
        return {
            'dimension': depth,
            'meta_insight': meta_insight,
            'quantum_score': quantum_score,
            'paths': entangled_paths
        }

    except Exception as e:
        logger.error(f"Error in quantum thinking: {str(e)}")
        return {
            'dimension': depth,
            'error': str(e),
            'meta_insight': "Error occurred during quantum thinking process",
            'paths': []
        }

def clean_text(text):
    """Clean and normalize the input text."""
    # Collapse runs of whitespace and trim the ends
    text = re.sub(r'\s+', ' ', text).strip()
    return text

def extract_key_concepts(text):
    """Extract key concepts from the text for branching."""
    # Split into words and filter out common words
    words = text.lower().split()
    stopwords = set(["the", "and", "a", "an", "in", "on", "at", "to", "for", "is", "are",
                     "was", "were", "be", "been", "being", "have", "has", "had", "do",
                     "does", "did", "of", "that", "this", "with", "by"])

    # Filter out stopwords and short words
    filtered_words = [w for w in words if w not in stopwords and len(w) > 3]

    # Count frequency
    word_freq = defaultdict(int)
    for word in filtered_words:
        word_freq[word] += 1

    # Sort by frequency, most frequent first
    sorted_words = sorted(word_freq.items(), key=lambda x: x[1], reverse=True)

    # Return just the top ten words, not the counts
    return [word for word, count in sorted_words[:10]]

def generate_thought_prompt(text, concept, depth):
    """Generate a thought prompt for recursive exploration."""
    prompts = [
        f"Exploring the concept of '{concept}' in relation to: {text}",
        f"How does '{concept}' relate to or illuminate: {text}",
        f"Considering '{concept}' as a lens to understand: {text}",
        f"What insights emerge when viewing through the prism of '{concept}': {text}",
        f"Analyzing the dimensions of '{concept}' within: {text}"
    ]

    # Select a prompt based on the current depth (to add variety)
    prompt_index = (depth - 1) % len(prompts)
    return prompts[prompt_index]

def generate_insight(text, concept, depth):
    """Generate an insight for a specific concept and depth."""
    # Templates for different depth levels
    templates = {
        3: [
            f"The primary aspect of '{concept}' reveals fundamental patterns in the input.",
            f"At the root level, '{concept}' serves as an organizing principle.",
            f"The meta-structure of '{concept}' frames our understanding of the whole."
        ],
        2: [
            f"The relationship between '{concept}' and contextual elements creates tension.",
            f"'{concept}' operates as both cause and effect within this system.",
            f"The secondary implications of '{concept}' reveal hidden connections."
        ],
        1: [
            f"The granular detail of '{concept}' illuminates microscopic truth.",
            f"At the most specific level, '{concept}' manifests as concrete instances.",
            f"The practical application of '{concept}' suggests actionable insights."
        ]
    }

    # Use depth to select appropriate template, default to depth 1 if not found
    depth_templates = templates.get(depth, templates[1])

    # Randomly select from available templates for this depth
    return random.choice(depth_templates)

def apply_superposition(paths, factor):
    """
    Apply 'quantum superposition' by allowing paths to influence each other.

    Args:
        paths: List of path results
        factor: How strongly paths should influence each other (0-1)

    Returns:
        List of entangled paths
    """
    if not paths or len(paths) < 2:
        return paths

    entangled_paths = paths.copy()

    # For each path
    for i in range(len(paths)):
        # Skip with probability (1-factor) to maintain some independence
        if random.random() > factor:
            continue

        # Select another random path to entangle with
        other_idx = random.choice([j for j in range(len(paths)) if j != i])

        # Extract a concept from the other path
        other_concept = paths[other_idx]['concept']

        # Create an entangled insight combining both paths
        entangled_insight = f"The interaction between '{paths[i]['concept']}' and '{other_concept}' creates a superposition of meaning, suggesting that {random.choice(['patterns emerge from complexity', 'multiple interpretations coexist', 'contextual understanding requires multiple viewpoints', 'dimensional analysis reveals hidden structure'])}"

        # Update the path with the entangled insight
        entangled_paths[i]['entangled_insight'] = entangled_insight
        entangled_paths[i]['entangled_with'] = other_concept

    return entangled_paths

def synthesize_insights(text, insights, depth):
    """Synthesize a meta-insight from all path insights."""
    if not insights:
        return "Insufficient data for meta-insight synthesis."

    # Templates for meta-insights
    meta_templates = [
        "The recursive analysis reveals that multiple perspectives {}, suggesting {}.",
        "Quantum-inspired thinking uncovers a superposition where {} and {} coexist.",
        "The dimensional analysis indicates that {} serves as the foundation for {}.",
        "At depth {}, the emergent pattern demonstrates how {} relates to {}.",
        "The entanglement of concepts suggests that {} is fundamentally connected to {}."
    ]

    # Create pairs of concepts to insert into templates
    if len(insights) >= 2:
        pairs = [
            ("complexity emerges from simplicity", "underlying patterns exist"),
            ("surface contradictions resolve at deeper levels", "unity underlies diversity"),
            ("conceptual frameworks overlap", "interdisciplinary understanding is essential"),
            ("analytical depth reveals nuance", "binary thinking is insufficient"),
            ("contextual understanding requires multiple dimensions", "meaning emerges from relationship")
        ]

        # Select a template and pair
        template = random.choice(meta_templates)
        pair = random.choice(pairs)

        # Format the template with the pair, including the depth when the
        # template has three slots (otherwise str.format raises IndexError)
        if template.count("{}") == 3:
            return template.format(depth, pair[0], pair[1])
        return template.format(pair[0], pair[1])
    else:
        return "Single-path analysis suggests limited dimensional exploration."

def compute_quantum_score(text, paths, depth):
    """Compute a 'quantum probability' score for the analysis."""
    # Base score from input text complexity
    base_score = min(0.95, max(0.2, len(text) / 1000))

    # Add score based on number of paths explored
    path_score = min(0.3, len(paths) * 0.1)

    # Add score based on depth
    depth_score = min(0.4, depth * 0.15)

    # Compute final score (between 0-1)
    final_score = min(0.99, base_score + path_score + depth_score)

    # Return formatted score
    return round(final_score, 2)
quantumvision.log
ADDED
@@ -0,0 +1 @@
./start_atlas_unified.sh: line 95: python: command not found
quantumvision.pid
ADDED
@@ -0,0 +1 @@
57088
replit.nix
ADDED
@@ -0,0 +1,6 @@
{pkgs}: {
  deps = [
    pkgs.postgresql
    pkgs.openssl
  ];
}
requirements.txt
ADDED
@@ -0,0 +1,5 @@
fastapi>=0.103.1
uvicorn>=0.23.2
requests>=2.31.0
pydantic>=2.3.0
python-multipart>=0.0.6
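Each entry in the requirements file above uses a minimum-version pin (`name>=version`). A quick stdlib-only sanity check of that shape (the `pins` dict and embedded copy of the file are just for illustration):

```python
import re

REQUIREMENTS = """\
fastapi>=0.103.1
uvicorn>=0.23.2
requests>=2.31.0
pydantic>=2.3.0
python-multipart>=0.0.6
"""

# Split each ">=" pin into package name and minimum version.
pins = dict(re.match(r"([A-Za-z0-9_.-]+)>=(.+)", line).groups()
            for line in REQUIREMENTS.splitlines())
print(pins["fastapi"])   # 0.103.1
print(len(pins))         # 5
```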
settings.html
ADDED
@@ -0,0 +1,362 @@
{% extends "layout.html" %}

{% block content %}
<div class="row mb-4">
    <div class="col-md-12">
        <div class="glass-card floating">
            <div class="p-4">
                <h1 class="display-5 quantum-glow">
                    <i class="fas fa-sliders-h me-2"></i> Settings
                </h1>
                <p class="card-text lead">
                    Configure your Quantum NLP Framework experience
                </p>
                <div class="vision-progress">
                    <div class="vision-progress-bar"></div>
                </div>
            </div>
        </div>
    </div>
</div>

<div class="row">
    <!-- OpenAI API Settings -->
    <div class="col-md-6 mb-4">
        <div class="glass-card h-100">
            <div class="p-4">
                <h4 class="mb-4 quantum-glow"><i class="fas fa-robot me-2"></i> OpenAI Integration</h4>

                <form method="POST" action="/save-api-key" id="api-key-form">
                    <div class="mb-3">
                        <label for="api_key" class="form-label">OpenAI API Key</label>
                        <div class="input-group">
                            <span class="input-group-text"><i class="fas fa-key"></i></span>
                            <input type="password" class="form-control" id="api_key" name="api_key"
                                   placeholder="sk-..." value="{{ api_key_masked }}">
                            <button class="btn btn-outline-secondary" type="button" id="toggle-api-key">
                                <i class="fas fa-eye"></i>
                            </button>
                        </div>
                        <div class="form-text">Your API key is stored securely and never shared</div>
                    </div>

                    <div class="mb-3">
                        <label class="form-label">Status</label>
                        <div class="d-flex align-items-center">
                            {% if api_key_status %}
                            <span class="badge bg-success me-2"><i class="fas fa-check me-1"></i> Connected</span>
                            {% else %}
                            <span class="badge bg-warning me-2"><i class="fas fa-exclamation-triangle me-1"></i> Not Connected</span>
                            {% endif %}
                            <div class="form-text">
                                {% if api_key_status %}
                                Your OpenAI API key is valid and active
                                {% else %}
                                Add your OpenAI API key to enable AI-enhanced features
                                {% endif %}
                            </div>
                        </div>
                    </div>

                    <div class="mb-3">
                        <label for="ai_model" class="form-label">Default AI Model</label>
                        <select class="form-select" id="ai_model" name="ai_model">
                            <option value="gpt-4o" {% if ai_model == 'gpt-4o' %}selected{% endif %}>GPT-4o (Recommended)</option>
                            <option value="gpt-3.5-turbo" {% if ai_model == 'gpt-3.5-turbo' %}selected{% endif %}>GPT-3.5 Turbo (Faster)</option>
                        </select>
                        <div class="form-text">Select the AI model to use for enhanced analysis</div>
                    </div>

                    <!-- LED tracer progress line -->
                    <div class="vision-progress mb-4">
                        <div class="vision-progress-bar"></div>
                    </div>

                    <div class="d-grid">
                        <button type="submit" class="btn btn-primary quantum-btn">
                            <span class="quantum-btn-content">
                                <i class="fas fa-save me-2"></i> Save API Settings
                            </span>
                        </button>
                    </div>
                </form>
            </div>
        </div>
    </div>

    <!-- Contextual Hints Settings -->
    <div class="col-md-6 mb-4">
        <div class="glass-card h-100">
            <div class="p-4">
                <h4 class="mb-4 quantum-glow"><i class="fas fa-lightbulb me-2"></i> Contextual Hints</h4>

                <div class="mb-4">
                    <p>Contextual hints provide just-in-time guidance to help you understand the Quantum NLP Framework. They appear subtly when you might need assistance with a feature.</p>

                    <div class="form-check form-switch mb-3">
                        <input class="form-check-input" type="checkbox" id="enable_hints" checked>
                        <label class="form-check-label" for="enable_hints">Enable Contextual Hints</label>
                    </div>

                    <div class="form-check form-switch mb-3">
                        <input class="form-check-input" type="checkbox" id="show_particle_effects" checked>
                        <label class="form-check-label" for="show_particle_effects">Show Quantum Particle Effects</label>
                    </div>
                </div>

                <div class="mb-4">
                    <label class="form-label">Hint History</label>
                    <div class="d-flex align-items-center">
                        <div id="hint-count-display" class="me-2">7 hints shown</div>
                        <button class="btn btn-sm btn-outline-info ms-auto" id="reset-hints-btn">
                            <i class="fas fa-redo-alt me-1"></i> Reset All Hints
                        </button>
                    </div>
                    <div class="form-text">Reset to show all hints again, including those you've dismissed</div>
                </div>

                <div class="mb-4">
                    <label for="hint_duration" class="form-label">Hint Display Duration</label>
                    <select class="form-select" id="hint_duration">
                        <option value="5">5 seconds</option>
                        <option value="10" selected>10 seconds</option>
                        <option value="15">15 seconds</option>
                        <option value="0">Until dismissed</option>
                    </select>
                    <div class="form-text">How long hints will stay visible before auto-dismissing (0 = stay until manually dismissed)</div>
                </div>

                <!-- Sample hint preview -->
                <div class="mb-4">
                    <label class="form-label">Hint Preview</label>
                    <div class="contextual-hint position-static active">
                        <div class="led-tracer"></div>
                        <div class="contextual-hint-title">
                            <i class="fas fa-atom"></i>
                            <h5>Quantum Analysis</h5>
                        </div>
                        <div class="contextual-hint-content">This is how hints appear when they provide guidance for using the Quantum NLP Framework.</div>
                        <div class="contextual-hint-actions">
                            <button class="hint-button hint-button-primary">Got it</button>
                            <span class="hint-dont-show">Don't show again</span>
                        </div>
                    </div>
                </div>

                <!-- LED tracer progress line -->
                <div class="vision-progress mb-4">
                    <div class="vision-progress-bar"></div>
                </div>

                <div class="d-grid">
                    <button type="button" id="save-hints-settings" class="btn btn-primary quantum-btn">
                        <span class="quantum-btn-content">
                            <i class="fas fa-save me-2"></i> Save Hint Settings
                        </span>
                    </button>
                </div>
            </div>
        </div>
    </div>
</div>

<div class="row">
    <!-- User Interface Settings -->
    <div class="col-md-12 mb-4">
        <div class="glass-card">
            <div class="p-4">
                <h4 class="mb-4 quantum-glow"><i class="fas fa-palette me-2"></i> User Interface Settings</h4>

                <div class="row">
                    <div class="col-md-6 mb-3">
                        <label class="form-label">Visual Effects</label>
                        <div class="form-check form-switch mb-2">
                            <input class="form-check-input" type="checkbox" id="led_tracer_effects" checked>
                            <label class="form-check-label" for="led_tracer_effects">LED Tracer Lines</label>
                        </div>
                        <div class="form-check form-switch mb-2">
                            <input class="form-check-input" type="checkbox" id="smoke_effects" checked>
                            <label class="form-check-label" for="smoke_effects">Smoke Effects</label>
                        </div>
                        <div class="form-check form-switch mb-2">
                            <input class="form-check-input" type="checkbox" id="quantum_particles" checked>
                            <label class="form-check-label" for="quantum_particles">Quantum Particles</label>
                        </div>
                    </div>

                    <div class="col-md-6 mb-3">
                        <label class="form-label">Performance Settings</label>
                        <div class="form-check form-switch mb-2">
                            <input class="form-check-input" type="checkbox" id="reduced_animations">
                            <label class="form-check-label" for="reduced_animations">Reduced Animations</label>
                        </div>
                        <div class="form-check form-switch mb-2">
                            <input class="form-check-input" type="checkbox" id="high_contrast">
                            <label class="form-check-label" for="high_contrast">High Contrast Mode</label>
                        </div>
                        <div class="form-check form-switch mb-2">
                            <input class="form-check-input" type="checkbox" id="focus_mode">
                            <label class="form-check-label" for="focus_mode">Focus Mode</label>
                        </div>
                    </div>
                </div>

                <!-- LED tracer progress line -->
                <div class="vision-progress mb-4">
                    <div class="vision-progress-bar"></div>
                </div>

                <div class="d-grid">
                    <button type="button" id="save-ui-settings" class="btn btn-primary quantum-btn">
                        <span class="quantum-btn-content">
                            <i class="fas fa-save me-2"></i> Save UI Settings
                        </span>
                    </button>
                </div>
            </div>
        </div>
    </div>
</div>
{% endblock %}

{% block scripts %}
<script>
document.addEventListener('DOMContentLoaded', function() {
    // Toggle API key visibility
    const toggleApiKeyBtn = document.getElementById('toggle-api-key');
    const apiKeyInput = document.getElementById('api_key');

    if (toggleApiKeyBtn && apiKeyInput) {
        toggleApiKeyBtn.addEventListener('click', function() {
            const type = apiKeyInput.getAttribute('type') === 'password' ? 'text' : 'password';
|
232 |
+
apiKeyInput.setAttribute('type', type);
|
233 |
+
toggleApiKeyBtn.innerHTML = type === 'password' ? '<i class="fas fa-eye"></i>' : '<i class="fas fa-eye-slash"></i>';
|
234 |
+
});
|
235 |
+
}
|
236 |
+
|
237 |
+
// Reset hints button
|
238 |
+
const resetHintsBtn = document.getElementById('reset-hints-btn');
|
239 |
+
if (resetHintsBtn && window.contextualHintSystem) {
|
240 |
+
resetHintsBtn.addEventListener('click', function() {
|
241 |
+
if (confirm('Are you sure you want to reset all hints? This will show all hints again, including those you\'ve chosen not to see.')) {
|
242 |
+
window.contextualHintSystem.resetShownHints();
|
243 |
+
updateHintCountDisplay();
|
244 |
+
// Show confirmation
|
245 |
+
alert('All hints have been reset and will be shown again.');
|
246 |
+
}
|
247 |
+
});
|
248 |
+
}
|
249 |
+
|
250 |
+
// Update hint count display
|
251 |
+
function updateHintCountDisplay() {
|
252 |
+
const hintCountDisplay = document.getElementById('hint-count-display');
|
253 |
+
if (hintCountDisplay && window.contextualHintSystem) {
|
254 |
+
const count = window.contextualHintSystem.shownHints.length;
|
255 |
+
hintCountDisplay.textContent = `${count} hints shown`;
|
256 |
+
}
|
257 |
+
}
|
258 |
+
|
259 |
+
// Call initially
|
260 |
+
updateHintCountDisplay();
|
261 |
+
|
262 |
+
// Save hint settings
|
263 |
+
const saveHintsSettingsBtn = document.getElementById('save-hints-settings');
|
264 |
+
if (saveHintsSettingsBtn) {
|
265 |
+
saveHintsSettingsBtn.addEventListener('click', function() {
|
266 |
+
const enableHints = document.getElementById('enable_hints').checked;
|
267 |
+
const showParticleEffects = document.getElementById('show_particle_effects').checked;
|
268 |
+
const hintDuration = document.getElementById('hint_duration').value;
|
269 |
+
|
270 |
+
// Save to localStorage
|
271 |
+
localStorage.setItem('enableHints', enableHints);
|
272 |
+
localStorage.setItem('showParticleEffects', showParticleEffects);
|
273 |
+
localStorage.setItem('hintDuration', hintDuration);
|
274 |
+
|
275 |
+
// Apply settings
|
276 |
+
if (window.contextualHintSystem) {
|
277 |
+
// Logic would be implemented in the hint system
|
278 |
+
}
|
279 |
+
|
280 |
+
// Show confirmation
|
281 |
+
alert('Hint settings saved successfully.');
|
282 |
+
});
|
283 |
+
}
|
284 |
+
|
285 |
+
// Save UI settings
|
286 |
+
const saveUISettingsBtn = document.getElementById('save-ui-settings');
|
287 |
+
if (saveUISettingsBtn) {
|
288 |
+
saveUISettingsBtn.addEventListener('click', function() {
|
289 |
+
const ledTracerEffects = document.getElementById('led_tracer_effects').checked;
|
290 |
+
const smokeEffects = document.getElementById('smoke_effects').checked;
|
291 |
+
const quantumParticles = document.getElementById('quantum_particles').checked;
|
292 |
+
const reducedAnimations = document.getElementById('reduced_animations').checked;
|
293 |
+
const highContrast = document.getElementById('high_contrast').checked;
|
294 |
+
const focusMode = document.getElementById('focus_mode').checked;
|
295 |
+
|
296 |
+
// Save to localStorage
|
297 |
+
localStorage.setItem('ledTracerEffects', ledTracerEffects);
|
298 |
+
localStorage.setItem('smokeEffects', smokeEffects);
|
299 |
+
localStorage.setItem('quantumParticles', quantumParticles);
|
300 |
+
localStorage.setItem('reducedAnimations', reducedAnimations);
|
301 |
+
localStorage.setItem('highContrast', highContrast);
|
302 |
+
localStorage.setItem('focusMode', focusMode);
|
303 |
+
|
304 |
+
// Apply settings
|
305 |
+
document.body.classList.toggle('reduced-animations', reducedAnimations);
|
306 |
+
document.body.classList.toggle('high-contrast', highContrast);
|
307 |
+
document.body.classList.toggle('focus-mode', focusMode);
|
308 |
+
|
309 |
+
// Show confirmation
|
310 |
+
alert('UI settings saved successfully.');
|
311 |
+
});
|
312 |
+
}
|
313 |
+
|
314 |
+
// Load saved settings on page load
|
315 |
+
function loadSavedSettings() {
|
316 |
+
// Load hint settings
|
317 |
+
if (localStorage.getItem('enableHints') !== null) {
|
318 |
+
document.getElementById('enable_hints').checked = localStorage.getItem('enableHints') === 'true';
|
319 |
+
}
|
320 |
+
|
321 |
+
if (localStorage.getItem('showParticleEffects') !== null) {
|
322 |
+
document.getElementById('show_particle_effects').checked = localStorage.getItem('showParticleEffects') === 'true';
|
323 |
+
}
|
324 |
+
|
325 |
+
if (localStorage.getItem('hintDuration')) {
|
326 |
+
document.getElementById('hint_duration').value = localStorage.getItem('hintDuration');
|
327 |
+
}
|
328 |
+
|
329 |
+
// Load UI settings
|
330 |
+
if (localStorage.getItem('ledTracerEffects') !== null) {
|
331 |
+
document.getElementById('led_tracer_effects').checked = localStorage.getItem('ledTracerEffects') === 'true';
|
332 |
+
}
|
333 |
+
|
334 |
+
if (localStorage.getItem('smokeEffects') !== null) {
|
335 |
+
document.getElementById('smoke_effects').checked = localStorage.getItem('smokeEffects') === 'true';
|
336 |
+
}
|
337 |
+
|
338 |
+
if (localStorage.getItem('quantumParticles') !== null) {
|
339 |
+
document.getElementById('quantum_particles').checked = localStorage.getItem('quantumParticles') === 'true';
|
340 |
+
}
|
341 |
+
|
342 |
+
if (localStorage.getItem('reducedAnimations') !== null) {
|
343 |
+
document.getElementById('reduced_animations').checked = localStorage.getItem('reducedAnimations') === 'true';
|
344 |
+
document.body.classList.toggle('reduced-animations', localStorage.getItem('reducedAnimations') === 'true');
|
345 |
+
}
|
346 |
+
|
347 |
+
if (localStorage.getItem('highContrast') !== null) {
|
348 |
+
document.getElementById('high_contrast').checked = localStorage.getItem('highContrast') === 'true';
|
349 |
+
document.body.classList.toggle('high-contrast', localStorage.getItem('highContrast') === 'true');
|
350 |
+
}
|
351 |
+
|
352 |
+
if (localStorage.getItem('focusMode') !== null) {
|
353 |
+
document.getElementById('focus_mode').checked = localStorage.getItem('focusMode') === 'true';
|
354 |
+
document.body.classList.toggle('focus-mode', localStorage.getItem('focusMode') === 'true');
|
355 |
+
}
|
356 |
+
}
|
357 |
+
|
358 |
+
// Call initially
|
359 |
+
loadSavedSettings();
|
360 |
+
});
|
361 |
+
</script>
|
362 |
+
{% endblock %}
|
smoke-bg.svg ADDED

start_atlas.sh ADDED
@@ -0,0 +1,101 @@
#!/bin/bash
#
# Atlas Startup Script
# This script launches all Atlas components
# Created: April 17, 2025

# Set base directory
BASE_DIR="$HOME"
VENV_DIR="$BASE_DIR/atlas_venv"

echo "===== Starting Atlas Unified System ====="

# Check if virtual environment exists
if [ ! -d "$VENV_DIR" ]; then
    echo "Virtual environment not found. Please run the installer first."
    echo "Run: ./install_atlas_unified.sh"
    exit 1
fi

# Activate the virtual environment
source "$VENV_DIR/bin/activate"

# Launch components based on what's available
echo "Starting core components..."

# OpenManus
if [ -d "$BASE_DIR/OpenManus" ]; then
    echo "Starting OpenManus..."
    cd "$BASE_DIR/OpenManus"
    python -m app.api_server &
    OPENMANUS_PID=$!
    echo "OpenManus started with PID: $OPENMANUS_PID"
fi

# Casibase
if [ -d "$BASE_DIR/casibase" ] && [ -f "$BASE_DIR/casibase/casibase" ]; then
    echo "Starting Casibase..."
    cd "$BASE_DIR/casibase"
    ./casibase &
    CASIBASE_PID=$!
    echo "Casibase started with PID: $CASIBASE_PID"
fi

# Cypher
if [ -d "$BASE_DIR/Cypher" ]; then
    echo "Starting Cypher..."
    cd "$BASE_DIR/Cypher"
    python app.py &
    CYPHER_PID=$!
    echo "Cypher started with PID: $CYPHER_PID"
fi

# QuantumVision
if [ -d "$BASE_DIR/Library/Mobile Documents/com~apple~CloudDocs/Atlas Business/QuantumVision" ]; then
    echo "Starting QuantumVision..."
    cd "$BASE_DIR/Library/Mobile Documents/com~apple~CloudDocs/Atlas Business/QuantumVision"
    python app.py &
    QV_PID=$!
    echo "QuantumVision started with PID: $QV_PID"
fi

# AIConversationCompanion
if [ -d "$BASE_DIR/Library/Mobile Documents/com~apple~CloudDocs/Atlas Business/AIConversationCompanion" ]; then
    echo "Starting AIConversationCompanion..."
    # Start the server
    cd "$BASE_DIR/Library/Mobile Documents/com~apple~CloudDocs/Atlas Business/AIConversationCompanion/server"
    npm start &
    AI_SERVER_PID=$!

    # Let server initialize
    sleep 2

    # Start the client if dist directory exists, otherwise use development mode
    if [ -d "$BASE_DIR/Library/Mobile Documents/com~apple~CloudDocs/Atlas Business/AIConversationCompanion/dist" ]; then
        echo "Starting AIConversationCompanion client (production mode)..."
        AI_CLIENT_PID=$!
        echo "AIConversationCompanion started with server PID: $AI_SERVER_PID and client PID: $AI_CLIENT_PID"
    else
        echo "Starting AIConversationCompanion client (development mode)..."
        cd "$BASE_DIR/Library/Mobile Documents/com~apple~CloudDocs/Atlas Business/AIConversationCompanion/client"
        npm run dev &
        AI_CLIENT_PID=$!
        echo "AIConversationCompanion started with server PID: $AI_SERVER_PID and client PID: $AI_CLIENT_PID"
    fi
fi

echo "All available components started."
echo "Open http://localhost:8080 to access the main interface."
echo "Press Ctrl+C to stop all services."

function cleanup {
    echo "Stopping all services..."
    kill $OPENMANUS_PID $CASIBASE_PID $CYPHER_PID $QV_PID $AI_SERVER_PID $AI_CLIENT_PID 2>/dev/null
    echo "All services stopped."
    exit 0
}

trap cleanup SIGINT

# Keep script running
wait
start_atlas_unified.sh ADDED
@@ -0,0 +1,146 @@
#!/bin/bash

# Atlas Unified Platform Launcher
# This script starts all Atlas components together

# Configuration
BASE_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
OPENMANUS_DIR="$BASE_DIR/../OpenManus"
CASIBASE_DIR="$BASE_DIR/../casibase"
CYPHER_DIR="$BASE_DIR/../Cypher"
QUANTUM_DIR="$BASE_DIR"
CONVERSATION_DIR="$BASE_DIR/../AIConversationCompanion"
BRIDGE_DIR="$BASE_DIR/atlas_bridge"

# Terminal colors
CYAN='\033[0;36m'
GREEN='\033[0;32m'
YELLOW='\033[0;33m'
RED='\033[0;31m'
RESET='\033[0m'

# Utility functions
log() {
    echo -e "${CYAN}[INFO]${RESET} $1"
}

log_success() {
    echo -e "${GREEN}[SUCCESS]${RESET} $1"
}

log_warning() {
    echo -e "${YELLOW}[WARNING]${RESET} $1"
}

log_error() {
    echo -e "${RED}[ERROR]${RESET} $1"
}

check_component() {
    if [ -d "$1" ]; then
        return 0
    else
        log_warning "Directory not found: $1"
        return 1
    fi
}

# Function to start OpenManus
start_openmanus() {
    if check_component "$OPENMANUS_DIR"; then
        log "Starting OpenManus..."
        cd "$OPENMANUS_DIR" || return
        # Activate virtual environment if it exists
        if [ -d "./openmanus_env" ]; then
            source ./openmanus_env/bin/activate
        fi
        # Start the server in the background
        python run_mcp_server.py > "$BASE_DIR/logs/openmanus.log" 2>&1 &
        echo $! > "$BASE_DIR/pids/openmanus.pid"
        log_success "OpenManus started with PID $(cat "$BASE_DIR/pids/openmanus.pid")"
        # Deactivate virtual environment if it was activated
        if [ -d "./openmanus_env" ]; then
            deactivate
        fi
    fi
}

# Function to start Casibase
start_casibase() {
    if check_component "$CASIBASE_DIR"; then
        log "Starting Casibase..."
        cd "$CASIBASE_DIR" || return
        ./server > "$BASE_DIR/logs/casibase.log" 2>&1 &
        echo $! > "$BASE_DIR/pids/casibase.pid"
        log_success "Casibase started with PID $(cat "$BASE_DIR/pids/casibase.pid")"
    fi
}

# Function to start Cypher
start_cypher() {
    if check_component "$CYPHER_DIR"; then
        log "Starting Cypher..."
        cd "$CYPHER_DIR" || return
        python app.py > "$BASE_DIR/logs/cypher.log" 2>&1 &
        echo $! > "$BASE_DIR/pids/cypher.pid"
        log_success "Cypher started with PID $(cat "$BASE_DIR/pids/cypher.pid")"
    fi
}

# Function to start QuantumVision
start_quantumvision() {
    if check_component "$QUANTUM_DIR"; then
        log "Starting QuantumVision..."
        cd "$QUANTUM_DIR" || return
        python app.py > "$BASE_DIR/logs/quantumvision.log" 2>&1 &
        echo $! > "$BASE_DIR/pids/quantumvision.pid"
        log_success "QuantumVision started with PID $(cat "$BASE_DIR/pids/quantumvision.pid")"
    fi
}

# Function to start AIConversationCompanion
start_conversation_companion() {
    if check_component "$CONVERSATION_DIR"; then
        log "Starting AIConversationCompanion..."
        cd "$CONVERSATION_DIR" || return
        # Start the server components
        cd ./server && npm run start:prod > "$BASE_DIR/logs/ai-conv-server.log" 2>&1 &
        echo $! > "$BASE_DIR/pids/ai-conv-server.pid"
        log_success "AIConversationCompanion API server started with PID $(cat "$BASE_DIR/pids/ai-conv-server.pid")"

        # Start the client components
        cd ../client && npm run dev > "$BASE_DIR/logs/ai-conv-client.log" 2>&1 &
        echo $! > "$BASE_DIR/pids/ai-conv-client.pid"
        log_success "AIConversationCompanion client started with PID $(cat "$BASE_DIR/pids/ai-conv-client.pid")"
    fi
}

# Function to start the Unified Bridge
start_bridge() {
    if [ -d "$BRIDGE_DIR" ]; then
        log "Starting Atlas Unified Bridge..."
        cd "$BRIDGE_DIR" || return
        python bridge_server.py > "$BASE_DIR/logs/bridge.log" 2>&1 &
        echo $! > "$BASE_DIR/pids/bridge.pid"
        log_success "Atlas Unified Bridge started with PID $(cat "$BASE_DIR/pids/bridge.pid")"
    else
        log_error "Atlas Unified Bridge directory not found"
    fi
}

# Create logs and pids directories
mkdir -p "$BASE_DIR/logs"
mkdir -p "$BASE_DIR/pids"

# Start all components
log "Starting Atlas Unified Platform..."

start_openmanus
start_casibase
start_cypher
start_quantumvision
start_conversation_companion
start_bridge

log_success "All components started. Logs available in: $BASE_DIR/logs/"
log "You can access the Unified Bridge at: http://localhost:8080/"
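start_atlas_unified.sh records one PID file per component under `$BASE_DIR/pids/`, but this upload includes no matching shutdown script. A minimal sketch of one is shown below; it is hypothetical (not part of the commit) and assumes the same `pids/*.pid` layout the launcher writes.

```shell
#!/bin/bash
# Hypothetical stop script for the PID files written by start_atlas_unified.sh.

stop_from_pidfile() {
    # Terminate the process named in a PID file, then remove the file.
    # Returns nonzero if the PID file does not exist.
    local pidfile="$1"
    [ -f "$pidfile" ] || return 1
    local pid
    pid=$(cat "$pidfile")
    kill "$pid" 2>/dev/null   # ignore processes that have already exited
    rm -f "$pidfile"
}

for pidfile in pids/*.pid; do
    stop_from_pidfile "$pidfile" || continue
    echo "Stopped $(basename "$pidfile" .pid)"
done
```

A plain SIGTERM is used here to match the `cleanup` trap in start_atlas.sh; supervisors typically add a liveness check and SIGKILL escalation, which this sketch omits.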
style.css CHANGED
@@ -1,28 +1,489 @@
/* Custom styling for the Quantum NLP Framework */

/* LED corner tracers for the Vision OS-inspired cards */
.led-corner {
    position: absolute;
    width: 15px;
    height: 15px;
    z-index: 2;
}

.led-corner-tl {
    top: 0;
    left: 0;
    border-top: 2px solid #da4b86;
    border-left: 2px solid #da4b86;
    border-top-left-radius: 6px;
    box-shadow: -2px -2px 8px rgba(218, 75, 134, 0.3);
}

.led-corner-tr {
    top: 0;
    right: 0;
    border-top: 2px solid #6f42c1;
    border-right: 2px solid #6f42c1;
    border-top-right-radius: 6px;
    box-shadow: 2px -2px 8px rgba(111, 66, 193, 0.3);
}

.led-corner-bl {
    bottom: 0;
    left: 0;
    border-bottom: 2px solid #6f42c1;
    border-left: 2px solid #6f42c1;
    border-bottom-left-radius: 6px;
    box-shadow: -2px 2px 8px rgba(111, 66, 193, 0.3);
}

.led-corner-br {
    bottom: 0;
    right: 0;
    border-bottom: 2px solid #0dcaf0;
    border-right: 2px solid #0dcaf0;
    border-bottom-right-radius: 6px;
    box-shadow: 2px 2px 8px rgba(13, 202, 240, 0.3);
}

/* Staggered fade-in animation */
@keyframes fadeInUp {
    from {
        opacity: 0;
        transform: translateY(20px);
    }
    to {
        opacity: 1;
        transform: translateY(0);
    }
}

.fade-in {
    opacity: 0;
    animation: fadeInUp 0.6s ease forwards;
}

.fade-in-staggered {
    opacity: 0;
    animation: fadeInUp 0.6s ease forwards;
    animation-delay: var(--stagger-delay, 0s);
}

body {
    background-color: #0a0a10;
    background-image:
        linear-gradient(rgba(10, 10, 16, 0.97), rgba(10, 10, 16, 0.97));
    background-size: cover;
    background-position: center;
    background-attachment: fixed;
    position: relative;
    overflow-x: hidden;
    min-height: 100vh;
}

/* Overlay animated smoke effect */
body::before {
    content: '';
    position: fixed;
    top: 0;
    left: 0;
    width: 100%;
    height: 100%;
    background: url('/static/images/smoke-bg.svg');
    background-size: cover;
    opacity: 0.2;
    animation: smoke-drift 120s linear infinite;
    pointer-events: none;
    z-index: -1;
}

@keyframes smoke-drift {
    0% { background-position: 0% 0%; }
    50% { background-position: 100% 50%; }
    100% { background-position: 0% 0%; }
}

/* LED tracer line effect for background */
.quantum-tracer {
    position: fixed;
    top: 0;
    left: 0;
    width: 100%;
    height: 100%;
    pointer-events: none;
    z-index: -1;
}

/* Make textareas more readable */
textarea.form-control {
    font-family: 'Consolas', 'Monaco', monospace;
    font-size: 16px;
    transition: all 0.3s ease;
}

textarea.form-control:focus {
    box-shadow: 0 0 15px rgba(66, 133, 244, 0.3);
    border-color: rgba(66, 133, 244, 0.7);
}

/* Style for named entities */
.entity {
    padding: 2px 5px;
    margin: 0 2px;
    border-radius: 3px;
    display: inline-block;
    transition: all 0.3s ease;
}

.entity:hover {
    box-shadow: 0 0 8px rgba(66, 133, 244, 0.6);
    transform: scale(1.05);
}

/* Entity items in list */
.quantum-entity-item {
    transition: all 0.3s ease;
    position: relative;
    overflow: hidden;
}

.quantum-entity-item::before {
    content: '';
    position: absolute;
    top: 0;
    left: -100%;
    width: 100%;
    height: 100%;
    background: linear-gradient(
        to right,
        transparent,
        rgba(66, 133, 244, 0.1),
        transparent
    );
    transition: all 0.5s ease;
}

.quantum-entity-item:hover::before {
    left: 100%;
}

/* Style for AI response */
.ai-response {
    font-size: 1.05rem;
    line-height: 1.6;
    white-space: pre-line;
}

/* Styles for quantum paths */
.quantum-path {
    margin-bottom: 15px;
    padding: 10px;
    border-radius: 5px;
}

/* Add a nice fade-in effect for results */
.card {
    animation: fadein 0.5s;
}

@keyframes fadein {
    from { opacity: 0; }
    to { opacity: 1; }
}

/* Improve accordion styling */
.accordion-button:focus {
    box-shadow: none;
}

/* Make badges more prominent */
.badge {
    font-size: 0.85rem;
    padding: 0.35em 0.65em;
}

/* Style for form elements */
.input-group-text {
    min-width: 40px;
    justify-content: center;
}

/* Improve responsiveness for mobile */
@media (max-width: 768px) {
    .card-title {
        font-size: 1.75rem;
    }

    .lead {
        font-size: 1.1rem;
    }
}

/* Add a slight glow effect to the main title */
h1.card-title {
    text-shadow: 0 0 10px rgba(13, 110, 253, 0.5);
}

/* Quantum button effects */
.quantum-btn {
    position: relative;
    overflow: hidden;
    transition: all 0.3s ease;
    z-index: 1;
}

.quantum-btn::before {
    content: '';
    position: absolute;
    top: 0;
    left: 0;
    width: 100%;
    height: 100%;
    background: linear-gradient(
        45deg,
        rgba(66, 133, 244, 0),
        rgba(66, 133, 244, 0.1),
        rgba(66, 133, 244, 0)
    );
    z-index: -1;
    transform: translateX(-100%);
    transition: transform 0.6s ease;
}

.quantum-btn:hover::before,
.quantum-btn-hover::before {
    transform: translateX(100%);
}

.quantum-btn:hover {
    box-shadow: 0 0 15px rgba(66, 133, 244, 0.3);
    transform: translateY(-2px);
}

.quantum-btn:active {
    transform: scale(0.97);
}

/* Textarea with entities */
.has-entities {
    transition: all 0.3s ease;
}

.textarea-glow {
    border-radius: 4px;
    animation: textarea-pulse 2s infinite alternate;
}

@keyframes textarea-pulse {
    0% {
        box-shadow: inset 0 0 10px rgba(66, 133, 244, 0.2);
    }
    100% {
        box-shadow: inset 0 0 20px rgba(66, 133, 244, 0.5);
    }
}

/* Card reveal animation */
.quantum-card {
    position: relative;
    transition: all 0.4s cubic-bezier(0.165, 0.84, 0.44, 1);
}

.quantum-card.quantum-reveal {
    transform: translateY(0);
}

.quantum-card:hover {
    transform: translateY(-5px);
    box-shadow: 0 10px 20px rgba(0, 0, 0, 0.2);
}

/* Vision OS-inspired glassy UI elements - Borderless with LED glow */
.glass-card {
    background: rgba(20, 25, 40, 0.3);
    backdrop-filter: blur(12px);
    border: none;
    border-radius: 16px;
    box-shadow:
        0 10px 30px rgba(0, 0, 0, 0.2),
        0 0 0 1px rgba(255, 255, 255, 0.05),
        0 0 15px rgba(218, 75, 134, 0.15),
        inset 0 0 20px rgba(218, 75, 134, 0.03);
    overflow: hidden;
    position: relative;
}

/* LED edge glow effect */
.glass-card::before {
    content: '';
    position: absolute;
    top: 0;
    left: 0;
    right: 0;
    height: 1px;
    background: linear-gradient(90deg,
        rgba(218, 75, 134, 0),
        rgba(218, 75, 134, 0.8),
        rgba(111, 66, 193, 0.8),
        rgba(13, 202, 240, 0.8),
        rgba(218, 75, 134, 0));
    opacity: 0.7;
    z-index: 1;
    animation: led-trace 6s linear infinite;
}

.glass-card::after {
    content: '';
    position: absolute;
    bottom: 0;
    left: 0;
    width: 100%;
    height: 1px;
    background: linear-gradient(90deg,
        rgba(13, 202, 240, 0),
        rgba(13, 202, 240, 0.8),
        rgba(111, 66, 193, 0.8),
        rgba(218, 75, 134, 0.8),
        rgba(13, 202, 240, 0));
    opacity: 0.3;
    transition: opacity 0.3s ease;
    animation: led-trace-reverse 6s linear infinite;
}

@keyframes led-trace {
    0% { background-position: -1000px 0; }
    100% { background-position: 1000px 0; }
}

@keyframes led-trace-reverse {
    0% { background-position: 1000px 0; }
    100% { background-position: -1000px 0; }
}

.glass-card:hover::after {
    opacity: 0.8;
}

.glass-card .card-header {
    background: rgba(20, 25, 40, 0.4);
    border: none;
    padding: 1rem 1.25rem;
    position: relative;
    z-index: 1;
    backdrop-filter: blur(8px);
}

.glass-card .card-body {
    background: linear-gradient(135deg,
        rgba(20, 25, 40, 0.2) 0%,
        rgba(20, 25, 40, 0.1) 100%);
    border: none;
    position: relative;
    z-index: 1;
    backdrop-filter: blur(8px);
}

/* Quantum glow effect on hover */
.quantum-glow {
    text-shadow: 0 0 5px rgba(218, 75, 134, 0.7);
    transition: all 0.3s ease;
}

.quantum-glow:hover {
    text-shadow: 0 0 15px rgba(218, 75, 134, 0.9);
}

/* Floating elements */
.floating {
    animation: floating 3s ease-in-out infinite;
}

@keyframes floating {
    0% { transform: translateY(0px); }
    50% { transform: translateY(-10px); }
    100% { transform: translateY(0px); }
}

/* Vision OS-inspired progress indicators */
.vision-progress {
    height: 2px;
    background: rgba(30, 41, 59, 0.3);
    border-radius: 2px;
    overflow: hidden;
    margin: 15px 0;
    position: relative;
}

.vision-progress::before {
    content: '';
    position: absolute;
    top: 0;
    left: 0;
    right: 0;
    bottom: 0;
    background: rgba(13, 202, 240, 0.05);
    box-shadow: 0 0 15px rgba(13, 202, 240, 0.2);
    filter: blur(3px);
}

.vision-progress-bar {
    height: 100%;
    background: linear-gradient(90deg,
        rgba(13, 202, 240, 0.8),
        rgba(111, 66, 193, 0.8),
        rgba(218, 75, 134, 0.8));
    border-radius: 2px;
    animation: progress-slide 5s ease-in-out infinite alternate;
    position: relative;
}

.vision-progress-bar::after {
    content: '';
    position: absolute;
    top: 0;
    right: 0;
    width: 15px;
    height: 100%;
    background: linear-gradient(90deg,
        rgba(255, 255, 255, 0),
        rgba(255, 255, 255, 0.8));
    filter: blur(3px);
    opacity: 0.8;
}

@keyframes progress-slide {
    0% { width: 0%; transform: translateX(0); }
    10% { width: 30%; }
    30% { width: 60%; }
    50% { width: 40%; }
    70% { width: 75%; }
    100% { width: 55%; }
}

/* Hover glow effects */
.glow-hover {
    position: relative;
    transition: all 0.3s ease, text-shadow 0.3s ease;
    z-index: 1;
}

.glow-hover:hover {
    text-shadow: 0 0 8px rgba(218, 75, 134, 0.7);
    transform: translateY(-2px);
}

.glow-hover::after {
    content: '';
    position: absolute;
    bottom: -3px;
    left: 0;
    width: 100%;
    height: 1px;
    background: linear-gradient(90deg,
        rgba(218, 75, 134, 0),
        rgba(218, 75, 134, 0.7),
+
rgba(218, 75, 134, 0));
|
483 |
+
opacity: 0;
|
484 |
+
transition: opacity 0.3s ease;
|
485 |
}
|
486 |
|
487 |
+
.glow-hover:hover::after {
|
488 |
+
opacity: 1;
|
489 |
}
|
task_scheduler.py
ADDED
@@ -0,0 +1,298 @@
+"""
+Task scheduler module for handling background tasks and automation scheduling.
+This module enables the AI assistant to run tasks in the background and
+schedule activities for optimal workflow management.
+"""
+
+import logging
+import threading
+import time
+from datetime import datetime, timedelta
+import json
+from functools import wraps
+
+from models import Task, db
+import browser_automation
+
+logger = logging.getLogger(__name__)
+
+
+# Threading lock for task execution
+task_lock = threading.Lock()
+
+
+def with_db_context(func):
+    """Decorator to ensure DB operations are handled in the correct context"""
+    @wraps(func)
+    def wrapper(*args, **kwargs):
+        from app import app
+        with app.app_context():
+            return func(*args, **kwargs)
+    return wrapper
+
+
+class TaskScheduler:
+    """Class for scheduling and executing tasks in the background"""
+
+    def __init__(self):
+        self.scheduler_thread = None
+        self.running = False
+        self.task_handlers = {
+            'web_scrape': self.handle_web_scrape_task,
+            'analyze': self.handle_analyze_task,
+            'recursive': self.handle_recursive_task,
+            'workflow': self.handle_workflow_task,
+        }
+
+    def start(self):
+        """Start the task scheduler thread"""
+        if self.scheduler_thread and self.scheduler_thread.is_alive():
+            logger.warning("Scheduler is already running")
+            return
+
+        self.running = True
+        self.scheduler_thread = threading.Thread(
+            target=self.scheduler_loop,
+            daemon=True
+        )
+        self.scheduler_thread.start()
+        logger.info("Task scheduler started")
+
+    def stop(self):
+        """Stop the task scheduler thread"""
+        self.running = False
+        if self.scheduler_thread:
+            self.scheduler_thread.join(timeout=5.0)
+        logger.info("Task scheduler stopped")
+
+    @with_db_context
+    def scheduler_loop(self):
+        """Main scheduler loop to check and execute scheduled tasks"""
+        while self.running:
+            try:
+                # Find scheduled tasks that are due
+                now = datetime.utcnow()
+                due_tasks = Task.query.filter(
+                    Task.status == 'pending',
+                    Task.scheduled_for <= now
+                ).limit(5).all()
+
+                # Execute each due task
+                for task in due_tasks:
+                    self.execute_task(task)
+
+                # Sleep for a short period before checking again
+                time.sleep(5)
+
+            except Exception as e:
+                logger.error(f"Error in scheduler loop: {str(e)}")
+                time.sleep(10)  # Sleep longer on error
+
+    @with_db_context
+    def execute_task(self, task):
+        """Execute a task based on its type"""
+        try:
+            # Lock to prevent multiple executions
+            with task_lock:
+                task.status = 'in_progress'
+                db.session.commit()
+
+            # Get the appropriate handler for this task type
+            handler = self.task_handlers.get(task.task_type)
+
+            if handler:
+                result = handler(task)
+
+                with task_lock:
+                    if result.get('status') == 'success':
+                        task.status = 'completed'
+                        task.completed_at = datetime.utcnow()
+                    else:
+                        task.status = 'failed'
+
+                    task.result = result
+                    db.session.commit()
+            else:
+                logger.error(f"No handler for task type: {task.task_type}")
+                with task_lock:
+                    task.status = 'failed'
+                    task.result = {'error': f"No handler for task type: {task.task_type}"}
+                    db.session.commit()
+
+        except Exception as e:
+            logger.error(f"Error executing task {task.id}: {str(e)}")
+            with task_lock:
+                task.status = 'failed'
+                task.result = {'error': str(e)}
+                db.session.commit()
+
+    def handle_web_scrape_task(self, task):
+        """Handle a web scraping task"""
+        return browser_automation.execute_scraping_task(task.id)
+
+    def handle_analyze_task(self, task):
+        """Handle an analysis task"""
+        # Use the quantum thinking process for analysis
+        try:
+            from quantum_thinking import quantum_recursive_thinking
+
+            # Get the text to analyze from the task config
+            text = task.config.get('text', '')
+            depth = task.config.get('depth', 3)
+
+            # Run the quantum thinking process
+            quantum_results = quantum_recursive_thinking(text, depth=depth)
+
+            return {
+                'status': 'success',
+                'quantum_results': quantum_results,
+                'timestamp': datetime.utcnow().isoformat()
+            }
+        except Exception as e:
+            logger.error(f"Error in analyze task: {str(e)}")
+            return {'status': 'error', 'error': str(e)}
+
+    def handle_recursive_task(self, task):
+        """Handle a recursive task that creates subtasks"""
+        try:
+            # Get the configuration for subtasks
+            subtasks_config = task.config.get('subtasks', [])
+            results = []
+
+            # Create each subtask
+            for subtask_config in subtasks_config:
+                subtask = Task(
+                    title=subtask_config.get('title', 'Subtask'),
+                    description=subtask_config.get('description', ''),
+                    status='pending',
+                    priority=subtask_config.get('priority', task.priority),
+                    task_type=subtask_config.get('task_type', 'web_scrape'),
+                    scheduled_for=subtask_config.get('scheduled_for'),
+                    config=subtask_config.get('config', {}),
+                    parent_id=task.id
+                )
+                db.session.add(subtask)
+                results.append({
+                    'subtask_title': subtask.title,
+                    'status': 'created'
+                })
+
+            db.session.commit()
+
+            return {
+                'status': 'success',
+                'subtasks_created': len(subtasks_config),
+                'results': results
+            }
+        except Exception as e:
+            logger.error(f"Error in recursive task: {str(e)}")
+            return {'status': 'error', 'error': str(e)}
+
+    def handle_workflow_task(self, task):
+        """Handle a workflow template execution"""
+        try:
+            workflow_id = task.config.get('workflow_id')
+            if not workflow_id:
+                return {'status': 'error', 'error': 'No workflow ID provided'}
+
+            from models import WorkflowTemplate
+            workflow = WorkflowTemplate.query.get(workflow_id)
+            if not workflow:
+                return {'status': 'error', 'error': f'Workflow ID {workflow_id} not found'}
+
+            # Create tasks for each workflow step
+            steps = workflow.steps
+            results = []
+
+            # Schedule with appropriate delays
+            now = datetime.utcnow()
+            delay_minutes = 0
+
+            for i, step in enumerate(steps):
+                scheduled_time = now + timedelta(minutes=delay_minutes)
+
+                subtask = Task(
+                    title=f"{workflow.name} - Step {i+1}: {step.get('title', 'Step')}",
+                    description=step.get('description', ''),
+                    status='pending',
+                    task_type=step.get('task_type', 'web_scrape'),
+                    scheduled_for=scheduled_time,
+                    config=step.get('config', {}),
+                    parent_id=task.id
+                )
+                db.session.add(subtask)
+                results.append({
+                    'step': i+1,
+                    'title': subtask.title,
+                    'scheduled_for': scheduled_time.isoformat()
+                })
+
+                # Increment delay for next task
+                delay_minutes += step.get('delay_minutes', 5)
+
+            db.session.commit()
+
+            return {
+                'status': 'success',
+                'workflow_name': workflow.name,
+                'steps_created': len(steps),
+                'results': results
+            }
+        except Exception as e:
+            logger.error(f"Error in workflow task: {str(e)}")
+            return {'status': 'error', 'error': str(e)}
+
+
+# Global scheduler instance
+scheduler = TaskScheduler()
+
+
+# Helper functions for task management
+def schedule_task(title, task_type, config=None, description=None, priority=3, scheduled_for=None):
+    """Schedule a new task"""
+    try:
+        task = Task(
+            title=title,
+            description=description,
+            status='pending',
+            priority=priority,
+            task_type=task_type,
+            scheduled_for=scheduled_for or datetime.utcnow(),
+            config=config or {}
+        )
+        db.session.add(task)
+        db.session.commit()
+        return task.id
+    except Exception as e:
+        logger.error(f"Error scheduling task: {str(e)}")
+        db.session.rollback()
+        return None
+
+
+def get_pending_tasks(limit=10):
+    """Get a list of pending tasks"""
+    return Task.query.filter_by(status='pending').order_by(
+        Task.priority.desc(),
+        Task.scheduled_for.asc()
+    ).limit(limit).all()
+
+
+def get_completed_tasks(limit=10):
+    """Get a list of completed tasks"""
+    return Task.query.filter_by(status='completed').order_by(
+        Task.completed_at.desc()
+    ).limit(limit).all()
+
+
+def get_task_result(task_id):
+    """Get the result of a task"""
+    task = Task.query.get(task_id)
+    if not task:
+        return None
+    return {
+        'id': task.id,
+        'title': task.title,
+        'status': task.status,
+        'result': task.result,
+        'completed_at': task.completed_at.isoformat() if task.completed_at else None
+    }
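The `with_db_context` decorator in task_scheduler.py wraps every call of the decorated function in Flask's `app.app_context()`, which is what lets the daemon thread run SQLAlchemy queries safely. The pattern generalizes to any context manager; here is a minimal, framework-free sketch, where `fake_app_context` and `query_pending` are hypothetical stand-ins (not part of the repo) used in place of the real Flask app and task query:

```python
import contextlib
from functools import wraps


@contextlib.contextmanager
def fake_app_context():
    # Hypothetical stand-in for Flask's app.app_context();
    # records whether the context is active.
    fake_app_context.active = True
    try:
        yield
    finally:
        fake_app_context.active = False


def with_context(ctx_factory):
    """Wrap a function so every call runs inside the given context manager."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            with ctx_factory():
                return func(*args, **kwargs)
        return wrapper
    return decorator


@with_context(fake_app_context)
def query_pending():
    # A real handler would touch the database here.
    return fake_app_context.active


print(query_pending())  # → True: the context is active inside the call
```

`@wraps(func)` matters for the same reason it appears in the original: without it, the wrapped function loses its name and docstring, which breaks introspection and logging.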
update_unified_bridge.py
ADDED
@@ -0,0 +1,83 @@
+#!/usr/bin/env python3
+"""
+Atlas Intelligence Unified Bridge Update
+This script updates the unified_bridge.py file to use the enhanced NLP processor
+with Ollama Llama 3 support and better error handling.
+"""
+
+import os
+import sys
+import shutil
+import re
+
+# Define the path to the unified bridge file
+UNIFIED_BRIDGE_PATH = os.path.expanduser("~/AtlasUnified/unified_bridge.py")
+BACKUP_PATH = os.path.expanduser("~/AtlasUnified/unified_bridge.py.bak")
+
+def backup_original_file():
+    """Create a backup of the original file"""
+    if os.path.exists(UNIFIED_BRIDGE_PATH):
+        print(f"Creating backup at {BACKUP_PATH}")
+        shutil.copy2(UNIFIED_BRIDGE_PATH, BACKUP_PATH)
+        return True
+    else:
+        print(f"Error: Unified bridge file not found at {UNIFIED_BRIDGE_PATH}")
+        return False
+
+def fix_quantum_vision_integration(content):
+    """Update Quantum Vision integration to use enhanced_process_text"""
+    # Find and replace the QuantumVision integration section
+    quantum_pattern = r"# Process with Quantum Vision if enabled.*?(?=\s{4}return results|$)"
+    quantum_replacement = """# Process with Quantum Vision if enabled - UPDATED to use enhanced processing with Ollama Llama 3
+    if config["integrations"]["enable_quantum_vision"]:
+        try:
+            sys.path.append(config["paths"]["quantum_vision"])
+            import nlp_processor
+
+            # Use the enhanced processing function with multiple fallbacks
+            quantum_result = nlp_processor.enhanced_process_text(query_text)
+
+            # Extract the simple response for the main response field
+            simple_response = quantum_result.get("simple_response", "")
+
+            results["quantum_vision"] = {
+                "response": simple_response,
+                "detailed_results": quantum_result
+            }
+
+        except Exception as e:
+            logger.error(f"Quantum Vision processing error: {e}")
+            logger.error(traceback.format_exc())
+            results["quantum_vision"] = {"error": str(e)}
+
+    """
+    return re.sub(quantum_pattern, quantum_replacement, content, flags=re.DOTALL)
+
+def main():
+    """Main function to apply all updates"""
+    # Check if the unified bridge file exists
+    if not os.path.exists(UNIFIED_BRIDGE_PATH):
+        print(f"Error: Unified bridge file not found at {UNIFIED_BRIDGE_PATH}")
+        return False
+
+    # Create a backup
+    if not backup_original_file():
+        return False
+
+    # Read the file content
+    with open(UNIFIED_BRIDGE_PATH, 'r') as file:
+        content = file.read()
+
+    # Apply update
+    content = fix_quantum_vision_integration(content)
+
+    # Write the updated content back
+    with open(UNIFIED_BRIDGE_PATH, 'w') as file:
+        file.write(content)
+
+    print("Updates applied successfully!")
+    print("Restart the Atlas Unified service to apply the changes.")
+    return True
+
+if __name__ == "__main__":
+    main()
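The patch in fix_quantum_vision_integration hinges on one regex: a lazy `.*?` under `re.DOTALL` that matches from the marker comment up to a lookahead for the indented `return results` line, so the whole old block is swapped out while the function's return statement survives. A small self-contained sketch of that behavior, using hypothetical `old_call`/`new_call` placeholders rather than the real unified_bridge.py contents:

```python
import re

# Hypothetical miniature of the structure unified_bridge.py is assumed to have.
content = (
    "def process(query_text):\n"
    "    # Process with Quantum Vision if enabled\n"
    "    old_call(query_text)\n"
    "    return results\n"
)

# Same shape as quantum_pattern above: with re.DOTALL, ".*?" spans newlines,
# and the lookahead "(?=\s{4}return results|$)" stops the match just before
# the indented return line without consuming it.
pattern = r"# Process with Quantum Vision if enabled.*?(?=\s{4}return results|$)"
replacement = "# Process with Quantum Vision if enabled - UPDATED\n    new_call(query_text)\n"

updated = re.sub(pattern, replacement, content, flags=re.DOTALL)
print("old_call" in updated, "new_call" in updated)  # → False True
```

Because the lookahead does not consume `return results`, the replacement block splices in cleanly; the fallback `|$` alternative keeps the pattern from failing outright if the section happens to sit at the end of the file.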
uv.lock
ADDED
The diff for this file is too large to render.
See raw diff
zap_integrations.html
ADDED
@@ -0,0 +1,290 @@
+{% extends 'layout.html' %}
+
+{% block content %}
+<div class="row mb-4">
+    <div class="col-12">
+        <div class="glass-card">
+            <div class="p-4">
+                <h1 class="display-5 quantum-glow">
+                    <i class="fas fa-bolt me-2"></i> ZAP Integrations
+                </h1>
+                <p class="lead">Connect your Quantum NLP Framework to external services with predefined automation workflows.</p>
+                <div class="vision-progress">
+                    <div class="vision-progress-bar"></div>
+                </div>
+            </div>
+        </div>
+    </div>
+</div>
+
+<div class="row" id="integrations-container">
+    {% for integration in integrations %}
+    <div class="col-md-6 col-lg-4 mb-4 fade-in" style="animation-delay: {{ loop.index * 0.15 }}s">
+        <div class="glass-card h-100 position-relative overflow-hidden">
+            <div class="card-body p-4">
+                <div class="d-flex justify-content-between align-items-start mb-3">
+                    <div class="integration-icon rounded-circle d-flex align-items-center justify-content-center"
+                         style="width: 60px; height: 60px; background: rgba(20, 25, 40, 0.5);">
+                        <i class="fas {{ integration.icon }} fa-2x text-light"></i>
+                    </div>
+                    <span class="badge {% if integration.status == 'active' %}bg-success{% elif integration.status == 'pending' %}bg-warning{% else %}bg-danger{% endif %} quantum-score">
+                        {{ integration.status | capitalize }}
+                    </span>
+                </div>
+
+                <h3 class="h4 mb-3">{{ integration.name }}</h3>
+                <p class="text-muted mb-0">{{ integration.description }}</p>
+
+                <div class="mt-4 d-flex justify-content-between align-items-center">
+                    <button class="btn btn-sm btn-outline-light glow-hover">Configure</button>
+                    <div class="form-check form-switch">
+                        <input class="form-check-input" type="checkbox" id="integration-{{ loop.index }}" {% if integration.status == 'active' %}checked{% endif %}>
+                        <label class="form-check-label" for="integration-{{ loop.index }}">Enabled</label>
+                    </div>
+                </div>
+            </div>
+
+            <!-- LED corner tracers -->
+            <div class="led-corner led-corner-tl"></div>
+            <div class="led-corner led-corner-tr"></div>
+            <div class="led-corner led-corner-bl"></div>
+            <div class="led-corner led-corner-br"></div>
+        </div>
+    </div>
+    {% endfor %}
+</div>
+
+<div class="row mt-5">
+    <div class="col-12">
+        <div class="glass-card">
+            <div class="card-body p-4">
+                <h3 class="h4 mb-4 quantum-glow">Connect New Service</h3>
+
+                <form>
+                    <div class="row g-3">
+                        <div class="col-md-6">
+                            <div class="form-floating mb-3">
+                                <input type="text" class="form-control" id="serviceName" placeholder="Service Name">
+                                <label for="serviceName">Service Name</label>
+                            </div>
+                        </div>
+                        <div class="col-md-6">
+                            <div class="form-floating mb-3">
+                                <select class="form-select" id="serviceType">
+                                    <option selected>Select service type</option>
+                                    <option value="api">API Integration</option>
+                                    <option value="webhook">Webhook</option>
+                                    <option value="notification">Notification Service</option>
+                                    <option value="export">Data Export</option>
+                                </select>
+                                <label for="serviceType">Service Type</label>
+                            </div>
+                        </div>
+                        <div class="col-12">
+                            <div class="form-floating mb-3">
+                                <input type="url" class="form-control" id="endpointUrl" placeholder="Endpoint URL">
+                                <label for="endpointUrl">Endpoint URL</label>
+                            </div>
+                        </div>
+                        <div class="col-md-6">
+                            <div class="form-floating mb-3">
+                                <input type="text" class="form-control" id="apiKey" placeholder="API Key">
+                                <label for="apiKey">API Key (if required)</label>
+                            </div>
+                        </div>
+                        <div class="col-md-6">
+                            <div class="form-floating mb-3">
+                                <select class="form-select" id="frequency">
+                                    <option value="realtime">Real-time</option>
+                                    <option value="hourly">Hourly</option>
+                                    <option value="daily">Daily</option>
+                                    <option value="weekly">Weekly</option>
+                                </select>
+                                <label for="frequency">Execution Frequency</label>
+                            </div>
+                        </div>
+                        <div class="col-12 text-end">
+                            <button type="submit" class="btn btn-primary quantum-btn">
+                                <i class="fas fa-plus me-2"></i> Add Integration
+                            </button>
+                        </div>
+                    </div>
+                </form>
+            </div>
+        </div>
+    </div>
+</div>
+
+<style>
+/* LED corner tracers */
+.led-corner {
+    position: absolute;
+    width: 15px;
+    height: 15px;
+    z-index: 2;
+}
+
+.led-corner-tl {
+    top: 0;
+    left: 0;
+    border-top: 2px solid #da4b86;
+    border-left: 2px solid #da4b86;
+    border-top-left-radius: 6px;
+}
+
+.led-corner-tr {
+    top: 0;
+    right: 0;
+    border-top: 2px solid #6f42c1;
+    border-right: 2px solid #6f42c1;
+    border-top-right-radius: 6px;
+}
+
+.led-corner-bl {
+    bottom: 0;
+    left: 0;
+    border-bottom: 2px solid #6f42c1;
+    border-left: 2px solid #6f42c1;
+    border-bottom-left-radius: 6px;
+}
+
+.led-corner-br {
+    bottom: 0;
+    right: 0;
+    border-bottom: 2px solid #0dcaf0;
+    border-right: 2px solid #0dcaf0;
+    border-bottom-right-radius: 6px;
+}
+
+/* Animation for staggered appearance */
+@keyframes fadeInUp {
+    from {
+        opacity: 0;
+        transform: translateY(20px);
+    }
+    to {
+        opacity: 1;
+        transform: translateY(0);
+    }
+}
+
+.fade-in {
+    opacity: 0;
+    animation: fadeInUp 0.6s ease forwards;
+}
+
+/* Integration icon pulse */
+.integration-icon {
+    position: relative;
+    transition: all 0.3s ease;
+    overflow: hidden;
+}
+
+.integration-icon::after {
+    content: '';
+    position: absolute;
+    top: 0;
+    left: 0;
+    right: 0;
+    bottom: 0;
+    background: radial-gradient(circle, rgba(218, 75, 134, 0.4) 0%, transparent 70%);
+    opacity: 0;
+    transition: opacity 0.3s ease;
+}
+
+.glass-card:hover .integration-icon::after {
+    opacity: 1;
+}
+
+/* Form styling */
+.form-control, .form-select {
+    background-color: rgba(30, 41, 59, 0.3);
+    border: 1px solid rgba(255, 255, 255, 0.1);
+    color: #fff;
+}
+
+.form-control:focus, .form-select:focus {
+    background-color: rgba(30, 41, 59, 0.5);
+    border-color: rgba(218, 75, 134, 0.5);
+    box-shadow: 0 0 0 0.25rem rgba(218, 75, 134, 0.25);
+    color: #fff;
+}
+
+.form-floating label {
+    color: rgba(255, 255, 255, 0.7);
+}
+
+.form-floating .form-control:focus ~ label,
+.form-floating .form-select:focus ~ label {
+    color: rgba(255, 255, 255, 0.9);
+}
+</style>
+
+<script>
+document.addEventListener('DOMContentLoaded', function() {
+    // Initialize the cards with staggered animation
+    const cards = document.querySelectorAll('.fade-in');
+
+    // Initialize event listeners for the integration switches
+    const switches = document.querySelectorAll('.form-check-input');
+    switches.forEach(switchEl => {
+        switchEl.addEventListener('change', function() {
+            const card = this.closest('.glass-card');
+            const statusBadge = card.querySelector('.badge');
+
+            if (this.checked) {
+                statusBadge.className = 'badge bg-success quantum-score';
+                statusBadge.textContent = 'Active';
+
+                // Show success animation
+                triggerParticleEffect(card, '#28a745');
+            } else {
+                statusBadge.className = 'badge bg-danger quantum-score';
+                statusBadge.textContent = 'Inactive';
+
+                // Show deactivation animation
+                triggerParticleEffect(card, '#dc3545');
+            }
+        });
+    });
+
+    // Particle effect function
+    function triggerParticleEffect(element, color) {
+        const rect = element.getBoundingClientRect();
+        const centerX = rect.left + rect.width / 2;
+        const centerY = rect.top + rect.height / 2;
+
+        // Create particles
+        for (let i = 0; i < 20; i++) {
+            createParticle(centerX, centerY, color);
+        }
+    }
+
+    function createParticle(x, y, color) {
+        const particle = document.createElement('div');
+        particle.className = 'quantum-particle';
+        particle.style.left = x + 'px';
+        particle.style.top = y + 'px';
+        particle.style.color = color;
+
+        // Random direction and speed
+        const angle = Math.random() * Math.PI * 2;
+        const speed = 1 + Math.random() * 3;
+        particle.speedX = Math.cos(angle) * speed;
+        particle.speedY = Math.sin(angle) * speed;
+
+        document.body.appendChild(particle);
+
+        // Animate and remove
+        setTimeout(() => {
+            particle.style.left = (x + particle.speedX * 40) + 'px';
+            particle.style.top = (y + particle.speedY * 40) + 'px';
+
+            setTimeout(() => {
+                particle.remove();
+            }, 1000);
+        }, 10);
+    }
+});
+</script>
+{% endblock %}