yosen: adding test benchmark

#1
by yl0628 - opened
.gitignore CHANGED
@@ -4,44 +4,41 @@ __pycache__/
4
  *$py.class
5
  *.so
6
  .Python
7
- *.egg-info/
8
- dist/
9
- build/
10
-
11
- # Virtual environments
12
- venv/
13
  env/
 
14
  ENV/
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
15
 
16
  # IDE
17
  .vscode/
18
  .idea/
19
  *.swp
20
  *.swo
 
21
 
22
  # OS
23
  .DS_Store
24
  Thumbs.db
25
 
26
- # Training scripts and data (not needed for deployment)
27
- train_conflict_model.py
28
- generate_embeddings.py
29
- Synthetic data.py
30
- validation_tools.py
31
- scripts/
32
- synthetic_requirements_txt/
33
- synthetic_requirements_dataset.json
34
-
35
- # Problem3 folder (separate project)
36
- problem3/
37
-
38
  # Temporary files
39
  *.tmp
40
  *.log
41
 
42
- # Model files (binary files not allowed in HF Spaces git - use XET or generate at runtime)
43
- models/*.pkl
44
- models/*.json
45
- # Keep models directory but exclude contents
46
- !models/.gitkeep
47
-
 
4
  *$py.class
5
  *.so
6
  .Python
 
 
 
 
 
 
7
  env/
8
+ venv/
9
  ENV/
10
+ build/
11
+ develop-eggs/
12
+ dist/
13
+ downloads/
14
+ eggs/
15
+ .eggs/
16
+ lib/
17
+ lib64/
18
+ parts/
19
+ sdist/
20
+ var/
21
+ wheels/
22
+ *.egg-info/
23
+ .installed.cfg
24
+ *.egg
25
+
26
+ # Gradio
27
+ flagged/
28
+ gradio_cached_examples/
29
 
30
  # IDE
31
  .vscode/
32
  .idea/
33
  *.swp
34
  *.swo
35
+ *~
36
 
37
  # OS
38
  .DS_Store
39
  Thumbs.db
40
 
 
 
 
 
 
 
 
 
 
 
 
 
41
  # Temporary files
42
  *.tmp
43
  *.log
44
 
 
 
 
 
 
 
ML_MODELS_README.md DELETED
@@ -1,168 +0,0 @@
1
- # ML Models Integration Guide
2
-
3
- This document explains how to train and use the ML models for conflict prediction and package similarity.
4
-
5
- ## Overview
6
-
7
- The project includes two ML models:
8
-
9
- 1. **Conflict Prediction Model**: A Random Forest classifier that predicts whether a set of dependencies will have conflicts
10
- 2. **Package Embeddings**: Pre-computed semantic embeddings for common Python packages for similarity matching
11
-
12
- ## Training the Models
13
-
14
- ### Step 1: Install Training Dependencies
15
-
16
- ```bash
17
- pip install scikit-learn sentence-transformers numpy
18
- ```
19
-
20
- ### Step 2: Train Conflict Prediction Model
21
-
22
- ```bash
23
- cd "code to upload"
24
- python train_conflict_model.py
25
- ```
26
-
27
- This will:
28
- - Load the synthetic dataset (`synthetic_requirements_dataset.json`)
29
- - Extract features from requirements
30
- - Train a Random Forest classifier
31
- - Save the model to `models/conflict_predictor.pkl`
32
- - Display accuracy and feature importance
33
-
34
- **Expected Output:**
35
- - Model size: ~2-5 MB
36
- - Test accuracy: ~85-95% (depending on dataset)
37
-
38
- ### Step 3: Generate Package Embeddings
39
-
40
- ```bash
41
- python generate_embeddings.py
42
- ```
43
-
44
- This will:
45
- - Load a sentence transformer model
46
- - Generate embeddings for common Python packages
47
- - Save embeddings to `models/package_embeddings.json`
48
- - Save model info to `models/embedding_info.json`
49
-
50
- **Expected Output:**
51
- - Embeddings file: ~5-10 MB
52
- - Embedding dimension: 384
53
- - Number of packages: ~100+
54
-
55
- ## Model Files Structure
56
-
57
- After training, you should have:
58
-
59
- ```
60
- code to upload/
61
- β”œβ”€β”€ models/
62
- β”‚ β”œβ”€β”€ conflict_predictor.pkl # Classification model
63
- β”‚ β”œβ”€β”€ package_embeddings.json # Pre-computed embeddings
64
- β”‚ └── embedding_info.json # Model metadata
65
- ```
66
-
67
- ## Integration in Main App
68
-
69
- The models are automatically loaded when available:
70
-
71
- 1. **Conflict Prediction**: Runs before detailed analysis to provide early warnings
72
- 2. **Package Similarity**: Enhances spell-checking with semantic matching
73
-
74
- ### Features
75
-
76
- - **Graceful Fallback**: If models aren't available, the app works with rule-based methods
77
- - **Lazy Loading**: Models load only when needed
78
- - **Error Handling**: ML failures don't break the app
79
-
80
- ## Usage in Code
81
-
82
- ### Conflict Prediction
83
-
84
- ```python
85
- from ml_models import ConflictPredictor
86
-
87
- predictor = ConflictPredictor()
88
- has_conflict, confidence = predictor.predict(requirements_text)
89
-
90
- if has_conflict:
91
- print(f"Conflict predicted with {confidence:.1%} confidence")
92
- ```
93
-
94
- ### Package Similarity
95
-
96
- ```python
97
- from ml_models import PackageEmbeddings
98
-
99
- embeddings = PackageEmbeddings()
100
- similar = embeddings.find_similar("numpyy", top_k=3)
101
- # Returns: [('numpy', 0.95), ('scipy', 0.72), ...]
102
-
103
- best_match = embeddings.get_best_match("pandaz")
104
- # Returns: 'pandas'
105
- ```
106
-
107
- ## Hugging Face Spaces Deployment
108
-
109
- ### Option 1: Include Models in Repo
110
-
111
- 1. Train models locally
112
- 2. Commit model files to the repo
113
- 3. Models load automatically on Spaces
114
-
115
- **Pros**: Simple, no external dependencies
116
- **Cons**: Larger repo size (~10-15 MB)
117
-
118
- ### Option 2: Upload to Hugging Face Hub
119
-
120
- 1. Train models locally
121
- 2. Upload to Hugging Face Hub:
122
- ```python
123
- from huggingface_hub import upload_file
124
- upload_file("models/conflict_predictor.pkl", repo_id="your-username/conflict-predictor")
125
- ```
126
- 3. Load from Hub in app:
127
- ```python
128
- from huggingface_hub import hf_hub_download
129
- model_path = hf_hub_download(repo_id="your-username/conflict-predictor", filename="conflict_predictor.pkl")
130
- ```
131
-
132
- **Pros**: Smaller repo, version control for models
133
- **Cons**: Requires internet connection at startup
134
-
135
- ## Performance
136
-
137
- - **Conflict Prediction**: <10ms per prediction
138
- - **Embedding Lookup**: <1ms (pre-computed) or ~50ms (on-the-fly)
139
- - **Model Loading**: ~1-2 seconds at startup
140
-
141
- ## Troubleshooting
142
-
143
- ### Models Not Loading
144
-
145
- - Check that `models/` directory exists
146
- - Verify model files are present
147
- - Check file permissions
148
-
149
- ### Low Prediction Accuracy
150
-
151
- - Retrain with more data
152
- - Adjust feature engineering
153
- - Try different model parameters
154
-
155
- ### Embeddings Not Working
156
-
157
- - Ensure `sentence-transformers` is installed
158
- - Check internet connection (for first-time model download)
159
- - Verify embeddings file format
160
-
161
- ## Future Improvements
162
-
163
- - [ ] Train on larger, real-world dataset
164
- - [ ] Add version-specific embeddings
165
- - [ ] Implement online learning
166
- - [ ] Add confidence intervals
167
- - [ ] Support for custom model paths
168
-
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
README.md CHANGED
@@ -18,13 +18,10 @@ A powerful tool to analyze and resolve Python package dependencies. Check for ve
18
 
19
  - **Multiple Input Methods**: Library list, requirements.txt paste, or file upload
20
  - **Conflict Detection**: Automatically detects version conflicts and compatibility issues
21
- - **πŸ€– AI-Powered Explanations**: Uses LLM to generate intelligent, natural language explanations for conflicts (with fallback to rule-based)
22
  - **Dependency Resolution**: Uses pip's resolver to find compatible versions
23
  - **Environment Aware**: Configure Python version, device (CPU/GPU), and OS
24
  - **Analysis Modes**: Quick (top-level) or Deep (with transitive dependencies)
25
  - **Resolution Strategies**: Latest compatible, stable/pinned, keep existing, or minimal changes
26
- - **Spell Checking**: Auto-corrects common spelling mistakes in package names
27
- - **Validation Utilities**: Benchmark against the bundled synthetic dataset and generate perturbed requirements for stress testing
28
 
29
  ## πŸš€ How to Use
30
 
@@ -81,15 +78,6 @@ The tool automatically detects:
81
  - **TensorFlow/Keras**: Validates TensorFlow/Keras version pairs
82
  - **Version Conflicts**: Identifies incompatible version specifications
83
 
84
- ## πŸ€– AI Explanations
85
-
86
- When enabled, the tool uses LLM reasoning to provide:
87
- - **Clear Explanations**: Natural language descriptions of what the conflict is
88
- - **Why It Happens**: Technical reasons behind the conflict
89
- - **How to Fix**: Actionable solutions with specific version recommendations
90
-
91
- The LLM explanations use Hugging Face Inference API (free tier) and automatically fall back to rule-based explanations if the API is unavailable.
92
-
93
  ## πŸ“‹ Example
94
 
95
  **Input:**
@@ -124,8 +112,6 @@ pandas==2.0.3
124
  - Deep mode may take longer for large dependency sets
125
  - The tool works best with packages available on PyPI
126
  - Platform-specific dependencies (e.g., CUDA) are detected but resolution may vary
127
- - Run `python validation_tools.py` to benchmark the built-in compatibility checks against synthetic cases.
128
- - Use `python scripts/perturb_requirements.py --help` to generate noisy/invalid requirements for robustness testing.
129
 
130
  ## 🀝 Contributing
131
 
 
18
 
19
  - **Multiple Input Methods**: Library list, requirements.txt paste, or file upload
20
  - **Conflict Detection**: Automatically detects version conflicts and compatibility issues
 
21
  - **Dependency Resolution**: Uses pip's resolver to find compatible versions
22
  - **Environment Aware**: Configure Python version, device (CPU/GPU), and OS
23
  - **Analysis Modes**: Quick (top-level) or Deep (with transitive dependencies)
24
  - **Resolution Strategies**: Latest compatible, stable/pinned, keep existing, or minimal changes
 
 
25
 
26
  ## πŸš€ How to Use
27
 
 
78
  - **TensorFlow/Keras**: Validates TensorFlow/Keras version pairs
79
  - **Version Conflicts**: Identifies incompatible version specifications
80
 
 
 
 
 
 
 
 
 
 
81
  ## πŸ“‹ Example
82
 
83
  **Input:**
 
112
  - Deep mode may take longer for large dependency sets
113
  - The tool works best with packages available on PyPI
114
  - Platform-specific dependencies (e.g., CUDA) are detected but resolution may vary
 
 
115
 
116
  ## 🀝 Contributing
117
 
app.py CHANGED
@@ -9,19 +9,10 @@ import tempfile
9
  import subprocess
10
  from pathlib import Path
11
  from typing import List, Dict, Tuple, Optional, Set
12
- from difflib import get_close_matches
13
- import requests
14
  from packaging.requirements import Requirement
15
  from packaging.specifiers import SpecifierSet
16
  from packaging.version import Version
17
-
18
- # Import ML models (with graceful fallback)
19
- try:
20
- from ml_models import ConflictPredictor, PackageEmbeddings
21
- ML_AVAILABLE = True
22
- except ImportError:
23
- ML_AVAILABLE = False
24
- print("Warning: ML models not available. Some features will be disabled.")
25
 
26
 
27
  class DependencyParser:
@@ -294,554 +285,7 @@ class DependencyResolver:
294
  Path(temp_req_file).unlink(missing_ok=True)
295
 
296
 
297
- class CatalogValidator:
298
- """Validate package names against a simple ground-truth catalog."""
299
-
300
- def __init__(self, catalog_path: Path = Path("data/package_name_catalog.json"), use_ml: bool = True):
301
- self.catalog_path = catalog_path
302
- self.valid_packages: Set[str] = set()
303
- self.invalid_packages: Set[str] = set()
304
- self.use_ml = use_ml and ML_AVAILABLE
305
- self.embeddings = None
306
-
307
- self._load_catalog()
308
-
309
- # Load embeddings if available
310
- if self.use_ml:
311
- try:
312
- self.embeddings = PackageEmbeddings()
313
- except Exception as e:
314
- print(f"Warning: Could not load embeddings: {e}")
315
- self.use_ml = False
316
-
317
- def _load_catalog(self) -> None:
318
- if not self.catalog_path.exists():
319
- return
320
- try:
321
- data = json.loads(self.catalog_path.read_text())
322
- self.valid_packages = {p.lower() for p in data.get("valid_packages", [])}
323
- self.invalid_packages = {p.lower() for p in data.get("invalid_packages", [])}
324
- except Exception as exc:
325
- # Keep going even if catalog is malformed
326
- print(f"Warning: could not read catalog {self.catalog_path}: {exc}")
327
-
328
- def suggest_correction(self, package_name: str, cutoff: float = 0.6) -> Optional[str]:
329
- """Suggest a corrected package name using fuzzy matching and embeddings."""
330
- if not self.valid_packages:
331
- return None
332
-
333
- package_lower = package_name.lower()
334
-
335
- # If it's already valid, no correction needed
336
- if package_lower in self.valid_packages:
337
- return None
338
-
339
- # Try ML-based embedding similarity first (more accurate)
340
- if self.use_ml and self.embeddings:
341
- try:
342
- best_match = self.embeddings.get_best_match(package_name, threshold=0.7)
343
- if best_match and best_match in self.valid_packages:
344
- return best_match
345
- except Exception:
346
- pass
347
-
348
- # Fallback to fuzzy matching
349
- matches = get_close_matches(
350
- package_lower,
351
- list(self.valid_packages),
352
- n=1,
353
- cutoff=cutoff
354
- )
355
-
356
- if matches:
357
- return matches[0]
358
- return None
359
-
360
- def check_and_correct_packages(self, dependencies: List[Dict], auto_correct: bool = True) -> Tuple[List[Dict], List[str]]:
361
- """Check packages and optionally correct spelling mistakes.
362
-
363
- Returns:
364
- Tuple of (corrected_dependencies, warnings)
365
- """
366
- corrected_deps = []
367
- warnings: List[str] = []
368
- seen: Set[str] = set()
369
- max_warnings = 15
370
-
371
- for dep in dependencies:
372
- package = dep["package"]
373
- package_lower = package.lower()
374
-
375
- if package_lower in seen:
376
- corrected_deps.append(dep)
377
- continue
378
- seen.add(package_lower)
379
-
380
- # Check if it's explicitly invalid
381
- if self.invalid_packages and package_lower in self.invalid_packages:
382
- warnings.append(f"Package '{package}' is flagged as invalid in the catalog.")
383
- if len(warnings) >= max_warnings:
384
- corrected_deps.append(dep)
385
- continue
386
-
387
- # Try to suggest a correction
388
- suggestion = self.suggest_correction(package)
389
- if suggestion:
390
- if auto_correct:
391
- corrected_dep = dep.copy()
392
- corrected_dep['package'] = suggestion
393
- corrected_dep['original'] = corrected_dep['original'].replace(package, suggestion, 1)
394
- corrected_deps.append(corrected_dep)
395
- warnings.append(f" β†’ Auto-corrected to '{suggestion}'")
396
- else:
397
- warnings.append(f" β†’ Did you mean '{suggestion}'?")
398
- else:
399
- corrected_deps.append(dep)
400
- continue
401
-
402
- # Check if it's not in valid catalog and suggest correction
403
- if self.valid_packages and package_lower not in self.valid_packages:
404
- suggestion = self.suggest_correction(package)
405
- if suggestion:
406
- if auto_correct:
407
- corrected_dep = dep.copy()
408
- corrected_dep['package'] = suggestion
409
- corrected_dep['original'] = corrected_dep['original'].replace(package, suggestion, 1)
410
- corrected_deps.append(corrected_dep)
411
- warnings.append(f"Package '{package}' not found. Auto-corrected to '{suggestion}'")
412
- else:
413
- warnings.append(f"Package '{package}' not found. Did you mean '{suggestion}'?")
414
- if len(warnings) >= max_warnings:
415
- break
416
- else:
417
- warnings.append(
418
- f"Package '{package}' is not in the curated valid catalog. Check for typos or private packages."
419
- )
420
- corrected_deps.append(dep)
421
- if len(warnings) >= max_warnings:
422
- break
423
- else:
424
- # Package is valid, keep as-is
425
- corrected_deps.append(dep)
426
-
427
- if len(warnings) >= max_warnings:
428
- warnings.append("Additional potential catalog issues omitted for brevity.")
429
-
430
- return corrected_deps, warnings
431
-
432
- def check_packages(self, dependencies: List[Dict]) -> List[str]:
433
- """Return warnings for packages that look suspicious or explicitly invalid."""
434
- _, warnings = self.check_and_correct_packages(dependencies, auto_correct=False)
435
- return warnings
436
-
437
-
438
- class ProjectRequirementsGenerator:
439
- """Generate requirements.txt from project description using LLM."""
440
-
441
- def __init__(self, use_llm: bool = True):
442
- """
443
- Initialize project requirements generator.
444
-
445
- Args:
446
- use_llm: If True, uses Hugging Face Inference API
447
- If False, uses rule-based suggestions
448
- """
449
- self.use_llm = use_llm
450
- # Using a better model for code generation
451
- # Try to use a code generation model, fallback to GPT-2
452
- self.api_url = "https://api-inference.huggingface.co/models/bigcode/starcoder"
453
- self.fallback_url = "https://api-inference.huggingface.co/models/gpt2"
454
- self.headers = {"Content-Type": "application/json"}
455
-
456
- def generate_requirements(self, project_description: str) -> Tuple[str, str]:
457
- """
458
- Generate requirements.txt from project description.
459
-
460
- Args:
461
- project_description: User's description of their project
462
-
463
- Returns:
464
- Tuple of (requirements_text, explanations_text)
465
- """
466
- if not project_description or not project_description.strip():
467
- return "", ""
468
-
469
- # Always try rule-based first as it's more reliable
470
- requirements, explanations = self._rule_based_suggestions(project_description)
471
-
472
- # Try LLM to enhance the suggestions if enabled
473
- if self.use_llm:
474
- prompt = self._create_requirements_prompt(project_description)
475
- llm_response = self._call_llm_for_requirements(prompt)
476
- llm_requirements, llm_explanations = self._parse_llm_response(llm_response)
477
-
478
- # If LLM generated valid requirements, use them (or merge with rule-based)
479
- if llm_requirements and len(llm_requirements.strip()) > 10:
480
- # Merge: prefer LLM but keep rule-based if LLM is incomplete
481
- if len(llm_requirements) > len(requirements):
482
- requirements = llm_requirements
483
- explanations = llm_explanations if llm_explanations else explanations
484
- else:
485
- # Combine both
486
- combined = set(requirements.split('\n'))
487
- combined.update(llm_requirements.split('\n'))
488
- requirements = '\n'.join([r for r in combined if r.strip()])
489
-
490
- return requirements, explanations
491
-
492
- def _create_requirements_prompt(self, description: str) -> str:
493
- """Create a prompt for generating requirements.txt."""
494
- prompt = f"""You are a Python expert. Based on this project description, generate a requirements.txt file with appropriate Python packages.
495
-
496
- Project Description:
497
- {description}
498
-
499
- Generate a requirements.txt file with:
500
- 1. Essential packages needed for this project
501
- 2. Appropriate version pins where necessary
502
- 3. Format: one package per line with version (e.g., "pandas==2.0.3" or "fastapi>=0.100.0")
503
-
504
- For each package, provide a brief explanation of why it's needed.
505
-
506
- Format your response as:
507
- REQUIREMENTS:
508
- package1==version1
509
- package2>=version2
510
- ...
511
-
512
- EXPLANATIONS:
513
- - package1: Brief explanation of why it's needed
514
- - package2: Brief explanation of why it's needed
515
- ...
516
-
517
- Keep it practical and focused on the most important dependencies (5-15 packages typically).
518
- """
519
- return prompt
520
-
521
- def _call_llm_for_requirements(self, prompt: str) -> str:
522
- """Call LLM API to generate requirements."""
523
- try:
524
- # Try the code generation model first
525
- payload = {
526
- "inputs": prompt,
527
- "parameters": {
528
- "max_new_tokens": 500,
529
- "temperature": 0.3,
530
- "return_full_text": False
531
- }
532
- }
533
-
534
- response = requests.post(
535
- self.api_url,
536
- headers=self.headers,
537
- json=payload,
538
- timeout=15
539
- )
540
-
541
- if response.status_code == 200:
542
- result = response.json()
543
- if isinstance(result, list) and len(result) > 0:
544
- generated_text = result[0].get('generated_text', '')
545
- if generated_text:
546
- return generated_text.strip()
547
-
548
- # Fallback to GPT-2
549
- response = requests.post(
550
- self.fallback_url,
551
- headers=self.headers,
552
- json=payload,
553
- timeout=15
554
- )
555
-
556
- if response.status_code == 200:
557
- result = response.json()
558
- if isinstance(result, list) and len(result) > 0:
559
- generated_text = result[0].get('generated_text', '')
560
- if generated_text:
561
- return generated_text.strip()
562
-
563
- return ""
564
-
565
- except Exception as e:
566
- print(f"LLM API error: {e}")
567
- return ""
568
-
569
- def _parse_llm_response(self, response: str) -> Tuple[str, str]:
570
- """Parse LLM response to extract requirements and explanations."""
571
- if not response:
572
- return "", ""
573
-
574
- requirements = []
575
- explanations = []
576
-
577
- # Try to extract REQUIREMENTS section
578
- if "REQUIREMENTS:" in response:
579
- req_section = response.split("REQUIREMENTS:")[1]
580
- if "EXPLANATIONS:" in req_section:
581
- req_section = req_section.split("EXPLANATIONS:")[0]
582
-
583
- for line in req_section.strip().split('\n'):
584
- line = line.strip()
585
- if line and not line.startswith('#') and not line.startswith('-'):
586
- # Clean up the line
587
- line = line.split('#')[0].strip() # Remove comments
588
- if line and ('==' in line or '>=' in line or '<=' in line or '>' in line or '<' in line or not any(c in line for c in '=<>')):
589
- requirements.append(line)
590
-
591
- # Try to extract EXPLANATIONS section
592
- if "EXPLANATIONS:" in response:
593
- exp_section = response.split("EXPLANATIONS:")[1]
594
- for line in exp_section.strip().split('\n'):
595
- line = line.strip()
596
- if line and line.startswith('-'):
597
- explanations.append(line[1:].strip())
598
-
599
- # If parsing failed, try to extract package names from the response
600
- if not requirements:
601
- # Look for lines that look like package specifications
602
- for line in response.split('\n'):
603
- line = line.strip()
604
- # Check if it looks like a package (has letters, maybe numbers, maybe version)
605
- if line and ('==' in line or '>=' in line or '<=' in line):
606
- parts = line.split()
607
- if parts:
608
- requirements.append(parts[0])
609
-
610
- requirements_text = '\n'.join(requirements[:20]) # Limit to 20 packages
611
- explanations_text = '\n'.join(explanations[:20]) if explanations else ""
612
-
613
- return requirements_text, explanations_text
614
-
615
- def _rule_based_suggestions(self, description: str) -> Tuple[str, str]:
616
- """Generate rule-based suggestions when LLM is unavailable."""
617
- desc_lower = description.lower()
618
- suggestions = []
619
- explanations = []
620
-
621
- # RAG / Chatbot / PDF processing
622
- if any(word in desc_lower for word in ['rag', 'chatbot', 'pdf', 'document', 'query', 'retrieval']):
623
- suggestions.append("streamlit>=1.28.0")
624
- suggestions.append("langchain>=0.1.0")
625
- suggestions.append("pypdf>=3.17.0")
626
- if 'openai' in desc_lower or 'gpt' in desc_lower:
627
- suggestions.append("openai>=1.0.0")
628
- else:
629
- suggestions.append("openai>=1.0.0")
630
- suggestions.append("chromadb>=0.4.0")
631
- explanations.append("- streamlit: Build interactive web apps for your chatbot interface")
632
- explanations.append("- langchain: Framework for building RAG applications")
633
- explanations.append("- pypdf: PDF parsing and text extraction")
634
- explanations.append("- openai: OpenAI API for LLM integration")
635
- explanations.append("- chromadb: Vector database for document embeddings")
636
-
637
- # Web frameworks
638
- if any(word in desc_lower for word in ['web', 'api', 'server', 'backend', 'rest']):
639
- suggestions.append("fastapi>=0.100.0")
640
- suggestions.append("uvicorn[standard]>=0.23.0")
641
- explanations.append("- fastapi: Modern web framework for building APIs")
642
- explanations.append("- uvicorn: ASGI server to run FastAPI applications")
643
-
644
- # Data science
645
- if any(word in desc_lower for word in ['data', 'analysis', 'csv', 'excel', 'dataframe', 'pandas']):
646
- suggestions.append("pandas>=2.0.0")
647
- suggestions.append("numpy>=1.24.0")
648
- explanations.append("- pandas: Data manipulation and analysis")
649
- explanations.append("- numpy: Numerical computing library")
650
-
651
- # Machine learning
652
- if any(word in desc_lower for word in ['ml', 'machine learning', 'model', 'train', 'neural', 'deep learning', 'ai']):
653
- suggestions.append("scikit-learn>=1.3.0")
654
- if 'pytorch' in desc_lower or 'torch' in desc_lower:
655
- suggestions.append("torch>=2.0.0")
656
- explanations.append("- torch: PyTorch deep learning framework")
657
- elif 'tensorflow' in desc_lower or 'tf' in desc_lower:
658
- suggestions.append("tensorflow>=2.13.0")
659
- explanations.append("- tensorflow: TensorFlow deep learning framework")
660
- explanations.append("- scikit-learn: Machine learning algorithms and utilities")
661
-
662
- # Database
663
- if any(word in desc_lower for word in ['database', 'sql', 'db', 'postgres', 'mysql']):
664
- suggestions.append("sqlalchemy>=2.0.0")
665
- explanations.append("- sqlalchemy: SQL toolkit and ORM")
666
-
667
- # HTTP requests
668
- if any(word in desc_lower for word in ['http', 'request', 'fetch', 'download']):
669
- suggestions.append("requests>=2.31.0")
670
- explanations.append("- requests: HTTP library for making API calls")
671
-
672
- # Environment variables
673
- if any(word in desc_lower for word in ['config', 'env', 'environment', 'settings']):
674
- suggestions.append("python-dotenv>=1.0.0")
675
- explanations.append("- python-dotenv: Load environment variables from .env file")
676
-
677
- # If no specific matches, provide common packages
678
- if not suggestions:
679
- suggestions.append("requests>=2.31.0")
680
- suggestions.append("python-dotenv>=1.0.0")
681
- explanations.append("- requests: HTTP library for API calls and web requests")
682
- explanations.append("- python-dotenv: Manage environment variables and configuration")
683
-
684
- requirements_text = '\n'.join(suggestions) if suggestions else ""
685
- explanations_text = '\n'.join(explanations) if explanations else ""
686
-
687
- return requirements_text, explanations_text
688
-
689
-
690
- class ExplanationEngine:
691
- """Generate intelligent explanations for dependency conflicts using LLM."""
692
-
693
- def __init__(self, use_llm: bool = True):
694
- """
695
- Initialize explanation engine.
696
-
697
- Args:
698
- use_llm: If True, uses Hugging Face Inference API (free tier)
699
- If False, uses rule-based explanations only
700
- """
701
- self.use_llm = use_llm
702
- # Using Hugging Face Inference API (free tier)
703
- self.api_url = "https://api-inference.huggingface.co/models/gpt2"
704
- self.headers = {"Content-Type": "application/json"}
705
-
706
- def generate_explanation(self, conflict: Dict, dependencies: List[Dict]) -> Dict:
707
- """
708
- Generate a detailed explanation for a conflict.
709
-
710
- Args:
711
- conflict: Conflict dictionary with type, packages, message, etc.
712
- dependencies: Full list of dependencies for context
713
-
714
- Returns:
715
- Dictionary with explanation, why_it_happens, how_to_fix
716
- """
717
- # Build context about the conflict
718
- conflict_type = conflict.get('type', 'unknown')
719
- packages = conflict.get('packages', [conflict.get('package', 'unknown')])
720
- message = conflict.get('message', '')
721
- details = conflict.get('details', {})
722
-
723
- # Create prompt for LLM
724
- prompt = self._create_prompt(conflict, dependencies)
725
-
726
- # Get LLM explanation
727
- explanation_text = self._call_llm(prompt) if self.use_llm else self._fallback_explanation(prompt)
728
-
729
- # Parse and structure the explanation
730
- return {
731
- 'summary': message,
732
- 'explanation': explanation_text,
733
- 'why_it_happens': self._extract_why(explanation_text, conflict),
734
- 'how_to_fix': self._extract_fix(explanation_text, conflict),
735
- 'packages_involved': packages,
736
- 'severity': conflict.get('severity', 'medium')
737
- }
738
-
739
- def _create_prompt(self, conflict: Dict, dependencies: List[Dict]) -> str:
740
- """Create a prompt for the LLM."""
741
- conflict_type = conflict.get('type', 'unknown')
742
- packages = conflict.get('packages', [conflict.get('package', 'unknown')])
743
- message = conflict.get('message', '')
744
- details = conflict.get('details', {})
745
-
746
- # Get relevant dependency info
747
- relevant_deps = [d for d in dependencies if d['package'] in packages]
748
-
749
- prompt = f"""You are a Python dependency expert. Explain this dependency conflict clearly:
750
-
751
- Conflict: {message}
752
- Type: {conflict_type}
753
- Packages involved: {', '.join(packages)}
754
-
755
- Dependency details:
756
- """
757
- for dep in relevant_deps:
758
- prompt += f"- {dep['package']}: {dep['specifier'] or 'no version specified'}\n"
759
-
760
- if details:
761
- prompt += f"\nVersion constraints: {json.dumps(details)}\n"
762
-
763
- prompt += """
764
- Provide a clear, concise explanation that:
765
- 1. Explains what the conflict is in simple terms
766
- 2. Explains why this conflict happens (technical reason)
767
- 3. Suggests how to fix it (specific version recommendations)
768
-
769
- Keep it under 150 words and use plain language.
770
- """
771
- return prompt
772
-
773
- def _call_llm(self, prompt: str) -> str:
774
- """
775
- Call LLM API to generate explanation.
776
- Falls back to rule-based explanation if API fails.
777
- """
778
- try:
779
- # Try Hugging Face Inference API (free tier)
780
- payload = {
781
- "inputs": prompt,
782
- "parameters": {
783
- "max_new_tokens": 200,
784
- "temperature": 0.7,
785
- "return_full_text": False
786
- }
787
- }
788
-
789
- response = requests.post(
790
- self.api_url,
791
- headers=self.headers,
792
- json=payload,
793
- timeout=10
794
- )
795
-
796
- if response.status_code == 200:
797
- result = response.json()
798
- if isinstance(result, list) and len(result) > 0:
799
- generated_text = result[0].get('generated_text', '')
800
- if generated_text:
801
- return generated_text.strip()
802
-
803
- # If API fails, fall back to rule-based
804
- return self._fallback_explanation(prompt)
805
-
806
- except Exception as e:
807
- # Fall back to rule-based explanation
808
- return self._fallback_explanation(prompt)
809
-
810
- def _fallback_explanation(self, prompt: str) -> str:
811
- """Generate rule-based explanation when LLM is unavailable."""
812
- # Extract key info from prompt
813
- if "pytorch-lightning" in prompt.lower() and "torch" in prompt.lower():
814
- return """PyTorch Lightning 2.0+ requires PyTorch 2.0 or higher because it uses new PyTorch APIs and features that don't exist in version 1.x. The conflict happens because you're trying to use a newer version of PyTorch Lightning with an older version of PyTorch. To fix this, either upgrade PyTorch to 2.0+ or downgrade PyTorch Lightning to 1.x."""
815
-
816
- elif "fastapi" in prompt.lower() and "pydantic" in prompt.lower():
817
- return """FastAPI 0.78.x was built for Pydantic v1, which has a different API than Pydantic v2. The conflict occurs because Pydantic v2 introduced breaking changes that FastAPI 0.78 doesn't support. To fix this, either upgrade FastAPI to 0.99+ (which supports Pydantic v2) or downgrade Pydantic to v1.x."""
818
-
819
- elif "tensorflow" in prompt.lower() and "keras" in prompt.lower():
820
- return """Keras 3.0+ requires TensorFlow 2.x because it was redesigned to work with TensorFlow 2's eager execution and new features. TensorFlow 1.x uses a different execution model that Keras 3.0 doesn't support. To fix this, upgrade TensorFlow to 2.x or downgrade Keras to 2.x."""
821
-
822
- elif "duplicate" in prompt.lower():
823
- return """You have the same package specified multiple times with different versions. This creates ambiguity about which version should be installed. To fix this, remove duplicate entries and keep only one version specification per package."""
824
-
825
- else:
826
- return """This dependency conflict occurs due to incompatible version requirements between packages. Review the version constraints and ensure all packages are compatible with each other. Consider updating to compatible versions or using a dependency resolver."""
827
-
828
- def _extract_why(self, explanation: str, conflict: Dict) -> str:
829
- """Extract the 'why it happens' part from explanation."""
830
- # Simple extraction - look for sentences explaining the reason
831
- sentences = explanation.split('.')
832
- why_sentences = [s.strip() for s in sentences if any(word in s.lower() for word in ['because', 'due to', 'requires', 'needs', 'since'])]
833
- return '. '.join(why_sentences[:2]) + '.' if why_sentences else "Version constraints are incompatible."
834
-
835
- def _extract_fix(self, explanation: str, conflict: Dict) -> str:
836
- """Extract the 'how to fix' part from explanation."""
837
- # Simple extraction - look for fix suggestions
838
- sentences = explanation.split('.')
839
- fix_sentences = [s.strip() for s in sentences if any(word in s.lower() for word in ['upgrade', 'downgrade', 'fix', 'change', 'update', 'remove'])]
840
- return '. '.join(fix_sentences[:2]) + '.' if fix_sentences else "Adjust version constraints to compatible versions."
841
-
842
-
843
  def process_dependencies(
844
- project_description: str,
845
  library_list: str,
846
  requirements_text: str,
847
  uploaded_file,
@@ -849,28 +293,10 @@ def process_dependencies(
849
  device: str,
850
  os_type: str,
851
  mode: str,
852
- resolution_strategy: str,
853
- use_llm_explanations: bool = True,
854
- use_ml_prediction: bool = True,
855
- use_ml_spellcheck: bool = True,
856
- show_ml_details: bool = False
857
- ) -> Tuple[str, str, str]:
858
  """Main processing function for Gradio interface."""
859
 
860
- # Generate requirements from project description if provided
861
- generated_requirements = ""
862
- generation_explanations = ""
863
- if project_description and project_description.strip():
864
- generator = ProjectRequirementsGenerator(use_llm=True)
865
- generated_requirements, generation_explanations = generator.generate_requirements(project_description)
866
-
867
- # If we generated requirements, add them to the requirements_text
868
- if generated_requirements:
869
- if requirements_text:
870
- requirements_text = generated_requirements + "\n" + requirements_text
871
- else:
872
- requirements_text = generated_requirements
873
-
874
  # Collect dependencies from all sources
875
  all_dependencies = []
876
 
@@ -889,71 +315,17 @@ def process_dependencies(
889
  # Parse uploaded file
890
  if uploaded_file:
891
  try:
892
- # Handle both string paths and file objects (Gradio 6.x compatibility)
893
- if isinstance(uploaded_file, str):
894
- file_path = uploaded_file
895
- else:
896
- # If it's a file object, get the path
897
- file_path = uploaded_file.name if hasattr(uploaded_file, 'name') else str(uploaded_file)
898
-
899
- with open(file_path, 'r') as f:
900
  content = f.read()
901
  parser = DependencyParser()
902
  deps = parser.parse_requirements_text(content)
903
  all_dependencies.extend(deps)
904
  except Exception as e:
905
- return f"Error reading file: {str(e)}", "", ""
906
 
907
  if not all_dependencies:
908
- return "Please provide at least one input: library list, requirements text, or uploaded file.", "", ""
909
 
910
- catalog_validator = CatalogValidator(use_ml=use_ml_spellcheck and ML_AVAILABLE)
911
- # Auto-correct spelling mistakes in package names
912
- all_dependencies, catalog_warnings = catalog_validator.check_and_correct_packages(all_dependencies, auto_correct=True)
913
-
914
- # ML-based conflict prediction (pre-analysis)
915
- ml_conflict_prediction = None
916
- ml_confidence = 0.0
917
- ml_details = ""
918
- if use_ml_prediction and ML_AVAILABLE:
919
- try:
920
- predictor = ConflictPredictor()
921
- requirements_text_for_ml = '\n'.join([d['original'] for d in all_dependencies])
922
- has_conflict, confidence = predictor.predict(requirements_text_for_ml)
923
- ml_conflict_prediction = has_conflict
924
- ml_confidence = confidence
925
-
926
- # Build ML details output
927
- ml_details = f"""
928
- ### ML Model Details
929
-
930
- **Conflict Prediction Model:**
931
- - Prediction: {"Conflict Detected" if has_conflict else "No Conflict"}
932
- - Confidence: {confidence:.2%}
933
- - Model Type: Random Forest Classifier
934
- - Features Analyzed: Package presence, version specificity, conflict patterns
935
-
936
- """
937
- if show_ml_details:
938
- # Get feature importance or additional details
939
- ml_details += f"""
940
- **Raw Prediction:**
941
- - Has Conflict: {has_conflict}
942
- - Confidence Score: {confidence:.4f}
943
- - Probability Distribution: Conflict={confidence:.2%}, No Conflict={1-confidence:.2%}
944
-
945
- """
946
-
947
- if has_conflict and confidence > 0.7:
948
- catalog_warnings.append(
949
- f"ML Prediction: High probability ({confidence:.1%}) of conflicts detected"
950
- )
951
- except Exception as e:
952
- print(f"ML prediction error: {e}")
953
- ml_details = f"ML Prediction Error: {str(e)}"
954
- elif use_ml_prediction and not ML_AVAILABLE:
955
- ml_details = "ML models not available. Train models using `train_conflict_model.py` to enable this feature."
956
-
957
  # Build dependency graph
958
  resolver = DependencyResolver(python_version=python_version, platform=os_type, device=device)
959
  deep_mode = (mode == "Deep (with transitive dependencies)")
@@ -962,208 +334,52 @@ def process_dependencies(
962
  # Check compatibility
963
  is_compatible, issues = resolver.check_compatibility(graph)
964
 
965
- # Convert string issues to structured format for LLM explanations
966
- structured_issues = []
967
- for issue in issues:
968
- if isinstance(issue, str):
969
- # Parse the issue string to extract package names and type
970
- issue_dict = {
971
- 'type': 'version_incompatibility',
972
- 'message': issue,
973
- 'severity': 'high',
974
- 'details': {}
975
- }
976
-
977
- # Extract package names from known patterns
978
- packages = []
979
- issue_lower = issue.lower()
980
-
981
- # Check for specific known conflicts
982
- if 'pytorch-lightning' in issue_lower and 'torch' in issue_lower:
983
- packages = ['pytorch-lightning', 'torch']
984
- issue_dict['type'] = 'version_incompatibility'
985
- # Extract version details
986
- for dep in all_dependencies:
987
- if dep['package'] in packages:
988
- issue_dict['details'][dep['package']] = dep.get('specifier', '')
989
- elif 'fastapi' in issue_lower and 'pydantic' in issue_lower:
990
- packages = ['fastapi', 'pydantic']
991
- issue_dict['type'] = 'version_incompatibility'
992
- for dep in all_dependencies:
993
- if dep['package'] in packages:
994
- issue_dict['details'][dep['package']] = dep.get('specifier', '')
995
- elif 'tensorflow' in issue_lower and 'keras' in issue_lower:
996
- packages = ['tensorflow', 'keras']
997
- issue_dict['type'] = 'version_incompatibility'
998
- for dep in all_dependencies:
999
- if dep['package'] in packages:
1000
- issue_dict['details'][dep['package']] = dep.get('specifier', '')
1001
- elif 'conflict in' in issue_lower:
1002
- # Duplicate package conflict
1003
- pkg = issue.split('Conflict in')[1].split(':')[0].strip()
1004
- packages = [pkg]
1005
- issue_dict['type'] = 'duplicate'
1006
- issue_dict['package'] = pkg
1007
- else:
1008
- # Generic: try to find packages mentioned in the issue
1009
- for dep in all_dependencies:
1010
- if dep['package'] in issue_lower:
1011
- packages.append(dep['package'])
1012
-
1013
- if packages:
1014
- issue_dict['packages'] = packages
1015
- else:
1016
- issue_dict['package'] = 'unknown'
1017
- issue_dict['packages'] = []
1018
-
1019
- structured_issues.append(issue_dict)
1020
- else:
1021
- structured_issues.append(issue)
1022
-
1023
- # Generate LLM explanations if enabled
1024
- explanations = []
1025
- if use_llm_explanations and structured_issues:
1026
- explanation_engine = ExplanationEngine(use_llm=use_llm_explanations)
1027
- for issue in structured_issues:
1028
- try:
1029
- explanation = explanation_engine.generate_explanation(issue, all_dependencies)
1030
- explanations.append(explanation)
1031
- except Exception as e:
1032
- # If explanation generation fails, just use the issue message
1033
- explanations.append({
1034
- 'summary': issue.get('message', str(issue)),
1035
- 'explanation': issue.get('message', str(issue)),
1036
- 'why_it_happens': 'Unable to generate explanation.',
1037
- 'how_to_fix': 'Review version constraints.',
1038
- 'packages_involved': issue.get('packages', []),
1039
- 'severity': issue.get('severity', 'medium')
1040
- })
1041
-
1042
  # Resolve dependencies
1043
- resolved_text, resolver_warnings = resolver.resolve_dependencies(all_dependencies, resolution_strategy)
1044
- warnings = catalog_warnings + resolver_warnings
1045
 
1046
  # Build output message
1047
  output_parts = []
1048
  output_parts.append("## Dependency Analysis Results\n\n")
1049
 
1050
- # Show generated requirements if project description was provided
1051
- if project_description and project_description.strip() and generated_requirements:
1052
- output_parts.append("### Generated Requirements from Project Description\n\n")
1053
- output_parts.append(f"**Project:** {project_description[:100]}{'...' if len(project_description) > 100 else ''}\n\n")
1054
- output_parts.append("**Suggested Packages:**\n")
1055
- output_parts.append("```\n")
1056
- output_parts.append(generated_requirements)
1057
- output_parts.append("\n```\n\n")
1058
-
1059
- if generation_explanations:
1060
- output_parts.append("**Why these packages?**\n")
1061
- output_parts.append(generation_explanations)
1062
- output_parts.append("\n\n")
1063
-
1064
- output_parts.append("---\n\n")
1065
-
1066
- # Show ML prediction if available
1067
- if ML_AVAILABLE and ml_conflict_prediction is not None:
1068
- if ml_conflict_prediction:
1069
- output_parts.append(f"### ML Prediction: Potential Conflicts Detected (Confidence: {ml_confidence:.1%})\n\n")
1070
- else:
1071
- output_parts.append(f"### ML Prediction: Low Conflict Risk (Confidence: {ml_confidence:.1%})\n\n")
1072
-
1073
  if issues:
1074
- output_parts.append("### Compatibility Issues Found:\n")
1075
- if explanations:
1076
- # Show detailed LLM explanations
1077
- for i, (issue, explanation) in enumerate(zip(issues, explanations), 1):
1078
- output_parts.append(f"#### Issue #{i}: {explanation['summary']}\n\n")
1079
- output_parts.append(f"**Explanation:**\n{explanation['explanation']}\n\n")
1080
- output_parts.append(f"**Why this happens:**\n{explanation['why_it_happens']}\n\n")
1081
- output_parts.append(f"**How to fix:**\n{explanation['how_to_fix']}\n\n")
1082
- output_parts.append("---\n\n")
1083
- else:
1084
- # Fallback to simple list
1085
- for issue in issues:
1086
- output_parts.append(f"- {issue}\n")
1087
- output_parts.append("\n")
1088
-
1089
- # Separate corrections from other warnings
1090
- corrections = [w for w in warnings if "Auto-corrected" in w or "β†’" in w]
1091
- other_warnings = [w for w in warnings if w not in corrections]
1092
-
1093
- if corrections:
1094
- output_parts.append("### Spelling Corrections:\n")
1095
- for correction in corrections:
1096
- output_parts.append(f"- {correction}\n")
1097
  output_parts.append("\n")
1098
 
1099
- if other_warnings:
1100
- output_parts.append("### Warnings:\n")
1101
- for warning in other_warnings:
1102
  output_parts.append(f"- {warning}\n")
1103
  output_parts.append("\n")
1104
 
1105
  if is_compatible and not issues:
1106
- output_parts.append("### No compatibility issues detected!\n\n")
1107
 
1108
- output_parts.append(f"### Resolved Requirements ({len(all_dependencies)} packages):\n")
1109
  output_parts.append("```\n")
1110
  output_parts.append(resolved_text)
1111
  output_parts.append("\n```\n")
1112
 
1113
- # Add ML details if requested
1114
- if show_ml_details and ml_details:
1115
- output_parts.append(ml_details)
1116
-
1117
- return ''.join(output_parts), resolved_text, ml_details
1118
 
1119
 
1120
  # Gradio Interface
1121
  def create_interface():
1122
  """Create and return the Gradio interface."""
1123
- import gradio as gr
1124
 
1125
- with gr.Blocks(title="Python Dependency Compatibility Board") as app:
1126
  gr.Markdown("""
1127
- # Python Dependency Compatibility Board
1128
 
1129
- Analyze and resolve Python package dependencies with **AI-powered explanations** and **ML-based conflict prediction**.
1130
-
1131
- ## Key Features
1132
-
1133
- | Feature | Status | Description |
1134
- |---------|--------|-------------|
1135
- | **LLM Requirements Generation** | Active | Generate requirements.txt from project description using AI |
1136
- | **LLM Reasoning** | Active | AI-powered natural language explanations for conflicts |
1137
- | **ML Conflict Prediction** | {"Available" if ML_AVAILABLE else "Not Loaded"} | Machine learning model predicts conflicts before analysis |
1138
- | **Embedding-Based Spell Check** | {"Available" if ML_AVAILABLE else "Not Loaded"} | Semantic similarity matching for package names |
1139
- | **Auto-Correction** | Active | Automatically fixes spelling mistakes in package names |
1140
- | **Dependency Resolution** | Active | Resolves conflicts using pip's resolver |
1141
 
 
1142
  """)
1143
 
1144
- # Project Description Input (Optional)
1145
- with gr.Row():
1146
- with gr.Column(scale=3):
1147
- project_description_input = gr.Textbox(
1148
- label="Project Description (Optional) - AI-Powered Requirements Generation",
1149
- placeholder="Describe your project idea here...\nExample: 'I want to build a web API for data analysis with machine learning capabilities'",
1150
- lines=4,
1151
- info="Describe your project and AI will suggest required libraries with explanations.",
1152
- value=""
1153
- )
1154
- with gr.Column(scale=1):
1155
- generate_requirements_btn = gr.Button(
1156
- "Generate Requirements from Description",
1157
- variant="secondary",
1158
- size="lg"
1159
- )
1160
- generated_requirements_display = gr.Markdown(
1161
- label="Generated Requirements Preview",
1162
- value="AI-generated requirements preview will appear here after clicking the button above."
1163
- )
1164
-
1165
- gr.Markdown("---")
1166
-
1167
  with gr.Row():
1168
  with gr.Column(scale=1):
1169
  gr.Markdown("### Input Methods")
@@ -1225,33 +441,6 @@ def create_interface():
1225
  info="How to resolve version conflicts"
1226
  )
1227
 
1228
- gr.Markdown("---")
1229
- gr.Markdown("### AI & ML Features")
1230
-
1231
- use_llm = gr.Checkbox(
1232
- label="**LLM Reasoning** - AI Explanations",
1233
- value=True,
1234
- info="Generate intelligent, natural language explanations for conflicts using LLM"
1235
- )
1236
-
1237
- use_ml_prediction = gr.Checkbox(
1238
- label="**ML Conflict Prediction**",
1239
- value=True,
1240
- info=f"{'Model available - Predicts conflicts before detailed analysis' if ML_AVAILABLE else 'Model not loaded - Train models to enable'}"
1241
- )
1242
-
1243
- use_ml_spellcheck = gr.Checkbox(
1244
- label="**ML Spell Check** (Embedding-based)",
1245
- value=True,
1246
- info=f"{'Model available - Uses semantic similarity for better corrections' if ML_AVAILABLE else 'Model not loaded - Train models to enable'}"
1247
- )
1248
-
1249
- show_ml_details = gr.Checkbox(
1250
- label="Show ML Model Details",
1251
- value=False,
1252
- info="Display raw ML predictions and confidence scores"
1253
- )
1254
-
1255
  process_btn = gr.Button("Analyze & Resolve Dependencies", variant="primary", size="lg")
1256
 
1257
  with gr.Row():
@@ -1261,7 +450,7 @@ def create_interface():
1261
  )
1262
 
1263
  with gr.Row():
1264
- with gr.Column(scale=2):
1265
  resolved_output = gr.Textbox(
1266
  label="Resolved requirements.txt",
1267
  lines=15,
@@ -1273,40 +462,9 @@ def create_interface():
1273
  value=None,
1274
  visible=True
1275
  )
1276
-
1277
- with gr.Column(scale=1):
1278
- ml_output = gr.Markdown(
1279
- label="ML Model Output",
1280
- value="ML predictions will appear here when enabled...",
1281
- visible=True
1282
- )
1283
-
1284
- def generate_requirements_only(project_desc):
1285
- """Generate requirements from project description only."""
1286
- if not project_desc or not project_desc.strip():
1287
- return "", ""
1288
-
1289
- generator = ProjectRequirementsGenerator(use_llm=True)
1290
- requirements, explanations = generator.generate_requirements(project_desc)
1291
-
1292
- if requirements:
1293
- output = f"## Generated Requirements\n\n"
1294
- output += f"**Project:** {project_desc[:100]}{'...' if len(project_desc) > 100 else ''}\n\n"
1295
- output += "**Suggested Packages:**\n```\n"
1296
- output += requirements
1297
- output += "\n```\n\n"
1298
- if explanations:
1299
- output += "**Why these packages?**\n"
1300
- output += explanations
1301
- # Also return the requirements text for the textbox
1302
- return output, requirements
1303
- else:
1304
- error_msg = "Could not generate requirements. Please try a more detailed description or check your connection."
1305
- return error_msg, ""
1306
 
1307
  def process_and_download(*args):
1308
- # Extract all arguments
1309
- result_text, resolved_text, ml_details = process_dependencies(*args)
1310
 
1311
  # Create a temporary file for download
1312
  temp_file = None
@@ -1318,108 +476,33 @@ def create_interface():
1318
  except Exception as e:
1319
  print(f"Error creating download file: {e}")
1320
 
1321
- # Format ML output
1322
- ml_output_text = ml_details if ml_details else "ML features disabled or models not available."
1323
-
1324
- return result_text, resolved_text, temp_file if temp_file else None, ml_output_text
1325
-
1326
- # Button to generate requirements from description
1327
- def generate_and_update(project_desc, existing_reqs):
1328
- """Generate requirements and update the requirements input."""
1329
- if not project_desc or not project_desc.strip():
1330
- return "Please enter a project description first.", existing_reqs
1331
-
1332
- generator = ProjectRequirementsGenerator(use_llm=True)
1333
- requirements, explanations = generator.generate_requirements(project_desc)
1334
-
1335
- # Check if we got valid requirements (rule-based should always return something)
1336
- if requirements and requirements.strip() and len(requirements.strip()) > 5:
1337
- # Create preview output
1338
- preview = f"## Generated Requirements\n\n"
1339
- preview += f"**Project:** {project_desc[:100]}{'...' if len(project_desc) > 100 else ''}\n\n"
1340
- preview += "**Suggested Packages:**\n```\n"
1341
- preview += requirements
1342
- preview += "\n```\n\n"
1343
- if explanations and explanations.strip():
1344
- preview += "**Why these packages?**\n"
1345
- preview += explanations
1346
- preview += "\n\n*Requirements have been added to the 'Requirements.txt Content' box below. You can edit them before analysis.*"
1347
-
1348
- # Update requirements input (append or replace)
1349
- if existing_reqs and existing_reqs.strip():
1350
- updated_reqs = requirements + "\n" + existing_reqs
1351
- else:
1352
- updated_reqs = requirements
1353
-
1354
- return preview, updated_reqs
1355
- else:
1356
- # Fallback - generate basic requirements
1357
- desc_lower = project_desc.lower()
1358
- basic_reqs = []
1359
- basic_explanations = []
1360
-
1361
- if 'streamlit' in desc_lower or 'web' in desc_lower or 'app' in desc_lower:
1362
- basic_reqs.append("streamlit>=1.28.0")
1363
- basic_explanations.append("- streamlit: Build interactive web applications")
1364
-
1365
- if 'pdf' in desc_lower or 'document' in desc_lower:
1366
- basic_reqs.append("pypdf>=3.17.0")
1367
- basic_explanations.append("- pypdf: PDF parsing and text extraction")
1368
-
1369
- if 'rag' in desc_lower or 'chatbot' in desc_lower or 'llm' in desc_lower:
1370
- basic_reqs.append("langchain>=0.1.0")
1371
- basic_reqs.append("openai>=1.0.0")
1372
- basic_explanations.append("- langchain: Framework for building LLM applications")
1373
- basic_explanations.append("- openai: OpenAI API integration")
1374
-
1375
- if basic_reqs:
1376
- reqs_text = '\n'.join(basic_reqs)
1377
- exp_text = '\n'.join(basic_explanations)
1378
- preview = f"## Generated Requirements\n\n**Project:** {project_desc[:100]}\n\n**Suggested Packages:**\n```\n{reqs_text}\n```\n\n**Why these packages?**\n{exp_text}"
1379
- if existing_reqs and existing_reqs.strip():
1380
- updated_reqs = reqs_text + "\n" + existing_reqs
1381
- else:
1382
- updated_reqs = reqs_text
1383
- return preview, updated_reqs
1384
-
1385
- error_msg = "## Could not generate requirements\n\nPlease try a more detailed description with keywords like: web, API, data analysis, machine learning, PDF, chatbot, etc."
1386
- return error_msg, existing_reqs
1387
-
1388
- generate_requirements_btn.click(
1389
- fn=generate_and_update,
1390
- inputs=[project_description_input, requirements_input],
1391
- outputs=[generated_requirements_display, requirements_input]
1392
- )
1393
 
1394
  process_btn.click(
1395
  fn=process_and_download,
1396
- inputs=[project_description_input, library_input, requirements_input, file_upload, python_version, device, os_type, mode, resolution_strategy, use_llm, use_ml_prediction, use_ml_spellcheck, show_ml_details],
1397
- outputs=[output_display, resolved_output, download_btn, ml_output]
1398
  )
1399
 
1400
  gr.Markdown("""
1401
  ---
1402
  ### How to Use
1403
 
1404
- 1. **(Optional) Describe your project** in the "Project Description" box - AI will suggest required libraries
1405
- 2. **Input your dependencies** using any of the three methods (or combine them)
1406
- 3. **Configure your environment** (Python version, device, OS)
1407
- 4. **Enable AI/ML features** (LLM explanations, ML predictions, ML spell-check)
1408
- 5. **Choose analysis mode**: Quick for fast results, Deep for complete dependency tree
1409
- 6. **Select resolution strategy**: How to handle version conflicts
1410
- 7. **Click "Analyze & Resolve Dependencies"**
1411
- 8. **Review the results** including AI-generated requirements and explanations
1412
- 9. **Download the resolved requirements.txt**
1413
 
1414
  ### Features
1415
 
1416
- - **AI Requirements Generation**: Describe your project and get suggested libraries with explanations
1417
- - Parse multiple input formats
1418
- - Detect version conflicts
1419
- - Check compatibility across dependency graph
1420
- - Resolve dependencies using pip
1421
- - Generate clean, pip-compatible requirements.txt
1422
- - Environment-aware (Python version, platform, device)
1423
  """)
1424
 
1425
  return app
@@ -1430,3 +513,4 @@ if __name__ == "__main__":
1430
  # For Hugging Face Spaces, use default launch settings
1431
  # For local development, you can customize
1432
  app.launch()
 
 
9
  import subprocess
10
  from pathlib import Path
11
  from typing import List, Dict, Tuple, Optional, Set
 
 
12
  from packaging.requirements import Requirement
13
  from packaging.specifiers import SpecifierSet
14
  from packaging.version import Version
15
+ import gradio as gr
 
 
 
 
 
 
 
16
 
17
 
18
  class DependencyParser:
 
285
  Path(temp_req_file).unlink(missing_ok=True)
286
 
287
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
288
  def process_dependencies(
 
289
  library_list: str,
290
  requirements_text: str,
291
  uploaded_file,
 
293
  device: str,
294
  os_type: str,
295
  mode: str,
296
+ resolution_strategy: str
297
+ ) -> Tuple[str, str]:
 
 
 
 
298
  """Main processing function for Gradio interface."""
299
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
300
  # Collect dependencies from all sources
301
  all_dependencies = []
302
 
 
315
  # Parse uploaded file
316
  if uploaded_file:
317
  try:
318
+ with open(uploaded_file, 'r') as f:
319
  content = f.read()
320
  parser = DependencyParser()
321
  deps = parser.parse_requirements_text(content)
322
  all_dependencies.extend(deps)
323
  except Exception as e:
324
+ return f"Error reading file: {str(e)}", ""
325
 
326
  if not all_dependencies:
327
+ return "Please provide at least one input: library list, requirements text, or uploaded file.", ""
328
 
329
  # Build dependency graph
330
  resolver = DependencyResolver(python_version=python_version, platform=os_type, device=device)
331
  deep_mode = (mode == "Deep (with transitive dependencies)")
 
334
  # Check compatibility
335
  is_compatible, issues = resolver.check_compatibility(graph)
336
 
337
  # Resolve dependencies
338
+ resolved_text, warnings = resolver.resolve_dependencies(all_dependencies, resolution_strategy)
 
339
 
340
  # Build output message
341
  output_parts = []
342
  output_parts.append("## Dependency Analysis Results\n\n")
343
 
344
  if issues:
345
+ output_parts.append("### ⚠️ Compatibility Issues Found:\n")
346
+ for issue in issues:
347
+ output_parts.append(f"- {issue}\n")
348
  output_parts.append("\n")
349
 
350
+ if warnings:
351
+ output_parts.append("### ℹ️ Warnings:\n")
352
+ for warning in warnings:
353
  output_parts.append(f"- {warning}\n")
354
  output_parts.append("\n")
355
 
356
  if is_compatible and not issues:
357
+ output_parts.append("### βœ… No compatibility issues detected!\n\n")
358
 
359
+ output_parts.append(f"### πŸ“¦ Resolved Requirements ({len(all_dependencies)} packages):\n")
360
  output_parts.append("```\n")
361
  output_parts.append(resolved_text)
362
  output_parts.append("\n```\n")
363
 
364
+ return ''.join(output_parts), resolved_text
365
 
366
 
367
  # Gradio Interface
368
  def create_interface():
369
  """Create and return the Gradio interface."""
 
370
 
371
+ with gr.Blocks(title="Python Dependency Compatibility Board", theme=gr.themes.Soft()) as app:
372
  gr.Markdown("""
373
+ # 🐍 Python Dependency Compatibility Board
374
 
375
+ Analyze and resolve Python package dependencies. Input your requirements in multiple ways:
376
+ - List library names (one per line)
377
+ - Paste requirements.txt content
378
+ - Upload a requirements.txt file
 
380
+ The tool will check for compatibility issues and generate a resolved requirements.txt file.
381
  """)
382
 
 
383
  with gr.Row():
384
  with gr.Column(scale=1):
385
  gr.Markdown("### Input Methods")
 
441
  info="How to resolve version conflicts"
442
  )
443
 
444
  process_btn = gr.Button("Analyze & Resolve Dependencies", variant="primary", size="lg")
445
 
446
  with gr.Row():
 
450
  )
451
 
452
  with gr.Row():
453
+ with gr.Column():
454
  resolved_output = gr.Textbox(
455
  label="Resolved requirements.txt",
456
  lines=15,
 
462
  value=None,
463
  visible=True
464
  )
465
 
466
  def process_and_download(*args):
467
+ result_text, resolved_text = process_dependencies(*args)
 
468
 
469
  # Create a temporary file for download
470
  temp_file = None
 
476
  except Exception as e:
477
  print(f"Error creating download file: {e}")
478
 
479
+ return result_text, resolved_text, temp_file if temp_file else None
480
 
481
  process_btn.click(
482
  fn=process_and_download,
483
+ inputs=[library_input, requirements_input, file_upload, python_version, device, os_type, mode, resolution_strategy],
484
+ outputs=[output_display, resolved_output, download_btn]
485
  )
486
 
487
  gr.Markdown("""
488
  ---
489
  ### How to Use
490
 
491
+ 1. **Input your dependencies** using any of the three methods (or combine them)
492
+ 2. **Configure your environment** (Python version, device, OS)
493
+ 3. **Choose analysis mode**: Quick for fast results, Deep for complete dependency tree
494
+ 4. **Select resolution strategy**: How to handle version conflicts
495
+ 5. **Click "Analyze & Resolve Dependencies"**
496
+ 6. **Review the results** and download the resolved requirements.txt
497
 
498
  ### Features
499
 
500
+ - ✅ Parse multiple input formats
501
+ - ✅ Detect version conflicts
502
+ - ✅ Check compatibility across dependency graph
503
+ - ✅ Resolve dependencies using pip
504
+ - ✅ Generate clean, pip-compatible requirements.txt
505
+ - ✅ Environment-aware (Python version, platform, device)
 
506
  """)
507
 
508
  return app
 
513
  # For Hugging Face Spaces, use default launch settings
514
  # For local development, you can customize
515
  app.launch()
516
+
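
The hunks above reduce `process_dependencies` to eight positional inputs and a two-value return, and wire `process_and_download` directly into the button click. Below is a minimal sketch of calling the simplified function outside Gradio; the module name (`app`) and the choice strings for `device`, `os_type`, `mode`, and `resolution_strategy` are assumptions for illustration and may not match the exact dropdown labels in the UI.

```python
# Hypothetical headless call of the simplified process_dependencies() shown above.
# The module name (app) and the choice strings are assumptions, not confirmed values.
from app import process_dependencies

report_markdown, resolved_text = process_dependencies(
    "torch\nnumpy",          # library_list: one package per line
    "scipy>=1.10",           # requirements_text: pasted requirements.txt content
    None,                    # uploaded_file: skip the file-upload path
    "3.11",                  # python_version
    "CPU",                   # device
    "Linux",                 # os_type
    "Quick",                 # mode: anything but the "Deep (...)" label skips transitive deps
    "latest",                # resolution_strategy
)

print(report_markdown)   # the "## Dependency Analysis Results" report
print(resolved_text)     # ready to write out as requirements.txt
```
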
data/ground_truth/gt_1 copy.txt DELETED
@@ -1,6 +0,0 @@
1
- torch
2
- torchvision
3
- torchvision.transforms as transforms
4
- torch.utils.data import DataLoader
5
- numpy as np
6
- scipy import stats
 
data/ground_truth/gt_1.txt DELETED
@@ -1,6 +0,0 @@
1
- torch
2
- torchvision
3
- torchvision.transforms as transforms
4
- torch.utils.data import DataLoader
5
- numpy
6
- scipy
 
data/package_name_catalog.json DELETED
@@ -1,47 +0,0 @@
1
- {
2
- "valid_packages": [
3
- "numpy",
4
- "pandas",
5
- "scipy",
6
- "scikit-learn",
7
- "pydantic",
8
- "fastapi",
9
- "torch",
10
- "pytorch-lightning",
11
- "tensorflow",
12
- "keras",
13
- "pillow",
14
- "requests",
15
- "httpx",
16
- "langchain",
17
- "openai",
18
- "chromadb",
19
- "uvicorn",
20
- "starlette",
21
- "sqlalchemy",
22
- "alembic",
23
- "redis"
24
- ],
25
- "invalid_packages": [
26
- "numpyy",
27
- "pandaz",
28
- "scipy-pro",
29
- "fastapi-pro",
30
- "torchx",
31
- "pytorch-brightning",
32
- "tensorflower",
33
- "kerras",
34
- "pillow2",
35
- "requests3",
36
- "httxx",
37
- "langchainz",
38
- "opena1",
39
- "chromad",
40
- "uvicornx",
41
- "starlite",
42
- "sqalachemy",
43
- "alembico",
44
- "redis-plus",
45
- "fakerlib"
46
- ]
47
- }
 
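The deleted catalog above pairs valid PyPI names with deliberate misspellings, which makes it usable as a spell-check benchmark. A minimal sketch of scoring a matcher against such a catalog follows; `suggest_package` is a hypothetical difflib-based stand-in, not the project's ML spell-check, and the path assumes a local copy of the file.

```python
# Hypothetical benchmark harness for the package-name catalog.
# suggest_package() is a difflib stand-in, not the project's ML spell-check model.
import json
from difflib import get_close_matches
from typing import Optional

with open("package_name_catalog.json") as f:  # local copy of the deleted catalog
    catalog = json.load(f)

valid = catalog["valid_packages"]

def suggest_package(name: str) -> Optional[str]:
    """Return the closest valid package name, or None if nothing is close enough."""
    matches = get_close_matches(name.lower(), valid, n=1, cutoff=0.6)
    return matches[0] if matches else None

hits = sum(1 for bad in catalog["invalid_packages"] if suggest_package(bad) is not None)
kept = sum(1 for good in valid if suggest_package(good) == good)

print(f"misspellings with a suggestion: {hits}/{len(catalog['invalid_packages'])}")
print(f"valid names mapped to themselves: {kept}/{len(valid)}")
```
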
ml_models.py DELETED
@@ -1,295 +0,0 @@
1
- """
2
- ML Model Loader and Utilities
3
- Handles loading and using the conflict prediction model and package embeddings.
4
- Loads from local files if available, otherwise downloads from Hugging Face Hub.
5
- """
6
-
7
- import json
8
- import pickle
9
- from pathlib import Path
10
- from typing import Dict, List, Tuple, Optional
11
- import numpy as np
12
- from packaging.requirements import Requirement
13
-
14
- # Try to import huggingface_hub for model downloading
15
- try:
16
- from huggingface_hub import hf_hub_download
17
- HF_HUB_AVAILABLE = True
18
- except ImportError:
19
- HF_HUB_AVAILABLE = False
20
- print("Warning: huggingface_hub not available. Models must be loaded locally.")
21
-
22
-
23
- class ConflictPredictor:
24
- """Load and use the conflict prediction model."""
25
-
26
- def __init__(self, model_path: Optional[Path] = None, repo_id: str = "ysakhale/dependency-conflict-models"):
27
- """Initialize the conflict predictor.
28
-
29
- Args:
30
- model_path: Local path to model file (optional)
31
- repo_id: Hugging Face repository ID to download from if local file not found
32
- """
33
- self.repo_id = repo_id
34
- self.model = None
35
- self.model_path = model_path
36
-
37
- # Try local path first
38
- if model_path is None:
39
- model_path = Path(__file__).parent / "models" / "conflict_predictor.pkl"
40
-
41
- self.model_path = model_path
42
-
43
- # Try loading from local file
44
- if model_path.exists():
45
- try:
46
- with open(model_path, 'rb') as f:
47
- self.model = pickle.load(f)
48
- print(f"Loaded conflict prediction model from {model_path}")
49
- return
50
- except Exception as e:
51
- print(f"Could not load conflict prediction model from local: {e}")
52
-
53
- # If local file doesn't exist, try downloading from HF Hub
54
- if HF_HUB_AVAILABLE:
55
- try:
56
- print(f"Model not found locally. Downloading from Hugging Face Hub: {repo_id}")
57
- downloaded_path = hf_hub_download(
58
- repo_id=repo_id,
59
- filename="conflict_predictor.pkl",
60
- repo_type="model"
61
- )
62
- with open(downloaded_path, 'rb') as f:
63
- self.model = pickle.load(f)
64
- print(f"Loaded conflict prediction model from Hugging Face Hub")
65
- # Optionally cache it locally
66
- try:
67
- model_path.parent.mkdir(parents=True, exist_ok=True)
68
- import shutil
69
- shutil.copy(downloaded_path, model_path)
70
- print(f"Cached model locally at {model_path}")
71
- except:
72
- pass
73
- return
74
- except Exception as e:
75
- print(f"Could not download model from Hugging Face Hub: {e}")
76
-
77
- print(f"Warning: Conflict prediction model not available")
78
-
79
- def extract_features(self, requirements_text: str) -> np.ndarray:
80
- """Extract features from requirements text (same as training)."""
81
- features = []
82
-
83
- packages = {}
84
- lines = requirements_text.strip().split('\n')
85
- num_packages = 0
86
- has_pins = 0
87
- version_specificity = []
88
-
89
- for line in lines:
90
- line = line.strip()
91
- if not line or line.startswith('#'):
92
- continue
93
-
94
- try:
95
- req = Requirement(line)
96
- pkg_name = req.name.lower()
97
- specifier = str(req.specifier) if req.specifier else ''
98
-
99
- if pkg_name in packages:
100
- features.append(1) # has_duplicate flag
101
- else:
102
- packages[pkg_name] = specifier
103
- num_packages += 1
104
-
105
- if specifier:
106
- has_pins += 1
107
- if '==' in specifier:
108
- version_specificity.append(3)
109
- elif '>=' in specifier or '<=' in specifier:
110
- version_specificity.append(2)
111
- else:
112
- version_specificity.append(1)
113
- else:
114
- version_specificity.append(0)
115
- except:
116
- pass
117
-
118
- feature_vec = []
119
- feature_vec.append(min(num_packages / 20.0, 1.0))
120
- feature_vec.append(has_pins / max(num_packages, 1))
121
- feature_vec.append(np.mean(version_specificity) / 3.0 if version_specificity else 0)
122
- feature_vec.append(1 if len(packages) < num_packages else 0)
123
-
124
- common_packages = [
125
- 'torch', 'pytorch-lightning', 'tensorflow', 'keras', 'fastapi', 'pydantic',
126
- 'numpy', 'pandas', 'scipy', 'scikit-learn', 'matplotlib', 'seaborn',
127
- 'requests', 'httpx', 'sqlalchemy', 'alembic', 'uvicorn', 'starlette',
128
- 'langchain', 'openai', 'chromadb', 'redis', 'celery', 'gunicorn',
129
- 'pillow', 'opencv-python', 'beautifulsoup4', 'scrapy', 'plotly', 'jax'
130
- ]
131
-
132
- for pkg in common_packages:
133
- feature_vec.append(1 if pkg in packages else 0)
134
-
135
- has_torch = 'torch' in packages
136
- has_pl = 'pytorch-lightning' in packages
137
- has_tf = 'tensorflow' in packages
138
- has_keras = 'keras' in packages
139
- has_fastapi = 'fastapi' in packages
140
- has_pydantic = 'pydantic' in packages
141
-
142
- feature_vec.append(1 if (has_torch and has_pl) else 0)
143
- feature_vec.append(1 if (has_tf and has_keras) else 0)
144
- feature_vec.append(1 if (has_fastapi and has_pydantic) else 0)
145
-
146
- return np.array(feature_vec)
147
-
148
- def predict(self, requirements_text: str) -> Tuple[bool, float]:
149
- """
150
- Predict if requirements have conflicts.
151
-
152
- Returns:
153
- (has_conflict, confidence_score)
154
- """
155
- if self.model is None:
156
- return False, 0.0
157
-
158
- try:
159
- features = self.extract_features(requirements_text)
160
- features = features.reshape(1, -1)
161
-
162
- prediction = self.model.predict(features)[0]
163
- probability = self.model.predict_proba(features)[0]
164
-
165
- has_conflict = bool(prediction)
166
- confidence = float(probability[1] if has_conflict else probability[0])
167
-
168
- return has_conflict, confidence
169
- except Exception as e:
170
- print(f"Error in conflict prediction: {e}")
171
- return False, 0.0
172
-
173
-
174
- class PackageEmbeddings:
175
- """Load and use package embeddings for similarity matching."""
176
-
177
- def __init__(self, embeddings_path: Optional[Path] = None, repo_id: str = "ysakhale/dependency-conflict-models"):
178
- """Initialize package embeddings.
179
-
180
- Args:
181
- embeddings_path: Local path to embeddings file (optional)
182
- repo_id: Hugging Face repository ID to download from if local file not found
183
- """
184
- self.repo_id = repo_id
185
- self.embeddings = {}
186
- self.embeddings_path = embeddings_path
187
- self.model = None
188
-
189
- if embeddings_path is None:
190
- embeddings_path = Path(__file__).parent / "models" / "package_embeddings.json"
191
-
192
- self.embeddings_path = embeddings_path
193
-
194
- # Try loading from local file
195
- if embeddings_path.exists():
196
- try:
197
- with open(embeddings_path, 'r') as f:
198
- self.embeddings = json.load(f)
199
- print(f"Loaded {len(self.embeddings)} package embeddings from {embeddings_path}")
200
- return
201
- except Exception as e:
202
- print(f"Could not load embeddings from local: {e}")
203
-
204
- # If local file doesn't exist, try downloading from HF Hub
205
- if HF_HUB_AVAILABLE:
206
- try:
207
- print(f"Embeddings not found locally. Downloading from Hugging Face Hub: {repo_id}")
208
- downloaded_path = hf_hub_download(
209
- repo_id=repo_id,
210
- filename="package_embeddings.json",
211
- repo_type="model"
212
- )
213
- with open(downloaded_path, 'r') as f:
214
- self.embeddings = json.load(f)
215
- print(f"Loaded {len(self.embeddings)} package embeddings from Hugging Face Hub")
216
- # Optionally cache it locally
217
- try:
218
- embeddings_path.parent.mkdir(parents=True, exist_ok=True)
219
- import shutil
220
- shutil.copy(downloaded_path, embeddings_path)
221
- print(f"Cached embeddings locally at {embeddings_path}")
222
- except:
223
- pass
224
- return
225
- except Exception as e:
226
- print(f"Could not download embeddings from Hugging Face Hub: {e}")
227
-
228
- print(f"Warning: Package embeddings not available")
229
-
230
- def _load_model(self):
231
- """Lazy load the sentence transformer model."""
232
- if self.model is None:
233
- try:
234
- from sentence_transformers import SentenceTransformer
235
- self.model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
236
- except ImportError:
237
- print("⚠️ sentence-transformers not available, embedding similarity disabled")
238
- return None
239
- return self.model
240
-
241
- def get_embedding(self, package_name: str) -> Optional[np.ndarray]:
242
- """Get embedding for a package (from cache or compute on-the-fly)."""
243
- package_lower = package_name.lower()
244
-
245
- # Check cache first
246
- if package_lower in self.embeddings:
247
- return np.array(self.embeddings[package_lower])
248
-
249
- # Compute on-the-fly if model available
250
- model = self._load_model()
251
- if model is not None:
252
- embedding = model.encode([package_name])[0]
253
- # Cache it
254
- self.embeddings[package_lower] = embedding.tolist()
255
- return embedding
256
-
257
- return None
258
-
259
- def find_similar(self, package_name: str, top_k: int = 5, threshold: float = 0.6) -> List[Tuple[str, float]]:
260
- """
261
- Find similar packages using cosine similarity.
262
-
263
- Returns:
264
- List of (package_name, similarity_score) tuples
265
- """
266
- query_emb = self.get_embedding(package_name)
267
- if query_emb is None:
268
- return []
269
-
270
- similarities = []
271
-
272
- for pkg, emb in self.embeddings.items():
273
- if pkg == package_name.lower():
274
- continue
275
-
276
- emb_array = np.array(emb)
277
- # Cosine similarity
278
- similarity = np.dot(query_emb, emb_array) / (
279
- np.linalg.norm(query_emb) * np.linalg.norm(emb_array)
280
- )
281
-
282
- if similarity >= threshold:
283
- similarities.append((pkg, float(similarity)))
284
-
285
- # Sort by similarity and return top_k
286
- similarities.sort(key=lambda x: x[1], reverse=True)
287
- return similarities[:top_k]
288
-
289
- def get_best_match(self, package_name: str, threshold: float = 0.7) -> Optional[str]:
290
- """Get the best matching package name."""
291
- similar = self.find_similar(package_name, top_k=1, threshold=threshold)
292
- if similar:
293
- return similar[0][0]
294
- return None
295
-
 
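For context, the deleted `PackageEmbeddings.find_similar` above ranks candidate packages by cosine similarity between embedding vectors. The self-contained sketch below reproduces just that scoring step, with toy vectors standing in for the real sentence-transformer embeddings.

```python
# Cosine-similarity ranking as used by the removed PackageEmbeddings.find_similar().
# The three vectors are toy values, not real all-MiniLM-L6-v2 embeddings.
import numpy as np

embeddings = {
    "numpy":  np.array([0.9, 0.1, 0.0]),
    "pandas": np.array([0.7, 0.6, 0.1]),
    "torch":  np.array([0.1, 0.2, 0.9]),
}

def find_similar(query_emb, top_k=5, threshold=0.6):
    """Return (package, similarity) pairs above the threshold, best match first."""
    scores = []
    for pkg, emb in embeddings.items():
        sim = float(np.dot(query_emb, emb) / (np.linalg.norm(query_emb) * np.linalg.norm(emb)))
        if sim >= threshold:
            scores.append((pkg, sim))
    scores.sort(key=lambda x: x[1], reverse=True)
    return scores[:top_k]

print(find_similar(np.array([0.95, 0.05, 0.0])))  # numpy should rank first
```
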
requirements.txt CHANGED
@@ -1,9 +1,4 @@
1
- gradio>=4.44.1
2
- packaging>=23.0
3
- pip>=23.0
4
- requests>=2.31.0
5
- scikit-learn>=1.3.0
6
- sentence-transformers>=2.2.0
7
- numpy>=1.24.0
8
- huggingface-hub>=0.20.0
9
-
 
1
+ gradio==4.44.1
2
+ packaging>=23.0
3
+ pip>=23.0
4
+
 
upload_models_to_hf.py DELETED
@@ -1,86 +0,0 @@
1
- """
2
- Upload ML models to Hugging Face Hub
3
- This allows the models to be loaded in Hugging Face Spaces
4
- """
5
-
6
- from pathlib import Path
7
- from huggingface_hub import HfApi, login
8
- import os
9
-
10
- def upload_models():
11
- """Upload models to Hugging Face Hub."""
12
-
13
- # Check if models exist
14
- models_dir = Path("models")
15
- if not models_dir.exists():
16
- print("Error: models/ directory not found!")
17
- print("Please train the models first:")
18
- print(" python train_conflict_model.py")
19
- print(" python generate_embeddings.py")
20
- return
21
-
22
- # Check for model files
23
- model_files = {
24
- "conflict_predictor.pkl": models_dir / "conflict_predictor.pkl",
25
- "package_embeddings.json": models_dir / "package_embeddings.json",
26
- "embedding_info.json": models_dir / "embedding_info.json"
27
- }
28
-
29
- missing = [name for name, path in model_files.items() if not path.exists()]
30
- if missing:
31
- print(f"Error: Missing model files: {missing}")
32
- print("Please train the models first:")
33
- print(" python train_conflict_model.py")
34
- print(" python generate_embeddings.py")
35
- return
36
-
37
- # Login to Hugging Face
38
- print("Logging in to Hugging Face...")
39
- print("(You'll need to enter your HF token - get it from https://huggingface.co/settings/tokens)")
40
- try:
41
- login()
42
- except Exception as e:
43
- print(f"Login error: {e}")
44
- print("\nYou can also set HF_TOKEN environment variable:")
45
- print(" $env:HF_TOKEN='your_token_here' # PowerShell")
46
- return
47
-
48
- # Initialize API
49
- api = HfApi()
50
-
51
- # Repository name for models
52
- repo_id = "ysakhale/dependency-conflict-models"
53
-
54
- # Create repository if it doesn't exist
55
- try:
56
- api.create_repo(
57
- repo_id=repo_id,
58
- repo_type="model",
59
- exist_ok=True,
60
- private=False
61
- )
62
- print(f"Repository {repo_id} is ready!")
63
- except Exception as e:
64
- print(f"Note: {e}")
65
-
66
- # Upload each model file
67
- print("\nUploading models...")
68
- for filename, filepath in model_files.items():
69
- print(f"Uploading {filename}...")
70
- try:
71
- api.upload_file(
72
- path_or_fileobj=str(filepath),
73
- path_in_repo=filename,
74
- repo_id=repo_id,
75
- repo_type="model"
76
- )
77
- print(f" βœ“ {filename} uploaded successfully!")
78
- except Exception as e:
79
- print(f" βœ— Error uploading {filename}: {e}")
80
-
81
- print(f"\nβœ… Models uploaded to: https://huggingface.co/{repo_id}")
82
- print("\nNext step: Update ml_models.py to load from this repository")
83
-
84
- if __name__ == "__main__":
85
- upload_models()
86
-
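
Once the (now removed) upload script has pushed the files, they can be pulled back for a quick round-trip check. A minimal sketch, assuming the repository id and filenames used in the script above:

```python
# Hypothetical round-trip check for the uploaded model artifacts.
# Repo id and filenames are taken from the removed upload script above.
import json
import pickle

from huggingface_hub import hf_hub_download

repo_id = "ysakhale/dependency-conflict-models"

model_path = hf_hub_download(repo_id=repo_id, filename="conflict_predictor.pkl", repo_type="model")
emb_path = hf_hub_download(repo_id=repo_id, filename="package_embeddings.json", repo_type="model")

with open(model_path, "rb") as f:
    model = pickle.load(f)
with open(emb_path) as f:
    embeddings = json.load(f)

print(type(model).__name__)            # e.g. RandomForestClassifier
print(len(embeddings), "package embeddings")
```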