Vastav AI – Deepfake Detection System
Developed by: Navneet Singh (Ekoahamdutivnasti™)
Hosted at: https://vastav.ekoahamdutivnasti.com
License: Proprietary. Copyright © 2025, protected under Indian and international copyright laws.
Overview
Vastav AI ("Vastav" means "reality" in Hindi) is a state-of-the-art multimodal deepfake detection system designed to provide near-real-time analysis of images, videos, and audio. Built with an ensemble of specialized AI models, it delivers robust, transparent, and highly accurate results using forensic-grade techniques.
- Multi-Model Judging System: Routes the input through several expert models (visual, metadata), which "vote" using confidence scores.
- Heatmap Visualizations: Highlights manipulated regions in media files for easy inspection.
- PDF Report Generation: Creates detailed forensic reports complete with case IDs, timestamps, metadata, heatmaps, and judge-by-judge analysis.
- Cross-Modality Support: Detects deepfakes in images and videos using modality-specific pipelines. (Audio analysis coming soon)
Key Features
| Feature | Description |
|---|---|
| Multi-Judge Decision Engine | Ensemble of Visual, Metadata, and ChiefJustice AIs that vote to maximize accuracy |
| Visual Heatmaps | Overlay pinpointing image/video manipulation artifacts |
| Metadata Forensics | Detects editing history, creation timestamps, and device signatures |
| Real-Time Feedback | Rapid processing suitable for integration with web apps and APIs |
| PDF Report Generator | Detailed, court-ready documentation with evidentiary trails |
| Cloud-Based API | Easy integration with enterprise systems and scalable deployment |
How It Works
- Upload media (image/video) via API/UI.
- Visual pipeline analyzes compression artifacts, edge inconsistencies, lighting, and GAN signatures.
- Metadata analyzer checks file integrity and origin.
- ChiefJustice AI aggregates the expert scores, producing a final verdict with an overall confidence score (see the sketch after the note below).
- Generates JSON output and an automatic PDF report with all evidence and heatmaps.
Note: Audio analysis capabilities are currently under development and will be released soon.
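The exact aggregation logic is proprietary and not published, but a confidence-weighted vote is one plausible reading of the judge-based design described above. The sketch below is a minimal illustration under that assumption; the judge names, weights, and the 0.5 decision threshold are hypothetical, not Vastav AI internals.

```python
# Minimal sketch of confidence-weighted voting between judges.
# Judge names, weights, and the 0.5 threshold are illustrative assumptions,
# not the actual Vastav AI implementation.
from dataclasses import dataclass

@dataclass
class JudgeScore:
    judge: str        # e.g. "visual" or "metadata"
    fake_prob: float  # probability the media is fake, in [0, 1]
    weight: float     # how much the aggregator trusts this judge

def chief_justice(scores: list[JudgeScore]) -> dict:
    """Aggregate per-judge scores into a single verdict with a confidence value."""
    total_weight = sum(s.weight for s in scores)
    fake_prob = sum(s.fake_prob * s.weight for s in scores) / total_weight
    verdict = "FAKE" if fake_prob >= 0.5 else "REAL"
    confidence = max(fake_prob, 1.0 - fake_prob)  # distance from the decision boundary
    return {
        "verdict": verdict,
        "confidence": round(confidence, 3),
        "judges": {s.judge: s.fake_prob for s in scores},
    }

if __name__ == "__main__":
    print(chief_justice([
        JudgeScore("visual", fake_prob=0.91, weight=0.7),
        JudgeScore("metadata", fake_prob=0.64, weight=0.3),
    ]))
```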
Technical Specifications
- Architecture: Ensemble of ViT-based visual models + metadata module + aggregation layer.
- Accuracy: 99.0–99.7% across all tested media types.
- Latency: Sub-second per image; a few seconds for a 5-minute video on a standard cloud GPU.
- Report Output: JSON + PDF (an illustrative JSON sketch follows this list), including:
  - Case ID (e.g., VDC-XXXXX)
  - Media metadata
  - Multi-model confidence breakdown
  - Heatmap overlay images
  - Timestamped forensic notes
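One plausible shape for the JSON report is sketched below. Only `verdict`, `confidence`, `heatmap_image`, and `pdf_report_url` are confirmed by the usage example in the next section; every other key, and all values shown, are illustrative placeholders rather than the authoritative schema.

```python
# Illustrative report structure only; not the authoritative Vastav AI schema.
example_report = {
    "case_id": "VDC-XXXXX",                        # case ID pattern from the spec above
    "verdict": "FAKE",
    "confidence": 0.97,
    "media_metadata": {"filename": "suspected_video.mp4", "duration_seconds": 42},
    "judges": {"visual": 0.98, "metadata": 0.92},  # hypothetical per-judge breakdown
    "heatmap_image": "https://.../heatmap.png",    # placeholder URL
    "pdf_report_url": "https://.../report.pdf",    # placeholder URL
    "notes": "Timestamped forensic notes",
}
```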
Example Usage
```python
import requests

API_URL = "https://api.vastav.ekoahamdutivnasti.com/analyze"

# Submit the media file and parse the JSON report.
with open("suspected_video.mp4", "rb") as media:
    response = requests.post(API_URL, files={"file": media}, timeout=300)
response.raise_for_status()
report = response.json()

print("Verdict:", report["verdict"])
print("Confidence:", report["confidence"])
print("Heatmap URL:", report["heatmap_image"])
print("PDF Report:", report["pdf_report_url"])
```
Limitations
- May underperform on extremely low-resolution or heavily compressed media.
- Detection accuracy depends on known GAN/voice synthesis techniques; novel or unknown generative methods may reduce efficacy.
- Provenance trail relies on embedded metadata; stripped or altered metadata may hinder analysis (see the sketch after this list).
- Audio analysis coming soon – will include waveform integrity and voice synthesis detection.
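Because the provenance trail depends on embedded metadata, it can help to check locally whether a file still carries EXIF data before submitting it. The sketch below uses Pillow for that check; it is a client-side convenience, not part of Vastav AI itself.

```python
# Client-side convenience check (not part of Vastav AI): does an image still
# carry EXIF metadata? Stripped metadata limits what the metadata judge can use.
from PIL import Image
from PIL.ExifTags import TAGS

def has_exif(path: str) -> bool:
    with Image.open(path) as img:
        exif = img.getexif()
        for tag_id, value in exif.items():
            print(f"{TAGS.get(tag_id, tag_id)}: {value}")
        return len(exif) > 0

print("EXIF present:", has_exif("suspected_image.jpg"))
```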
Use Cases & Ideal Users
- Journalists & Newsrooms – Validate incoming user-generated content before publishing.
- Law Enforcement & Forensics – Support legal workflows with detailed, evidence-grade forensic reports.
- Enterprises & Cybersecurity – Auto-screen media for internal communications or executive content.
- Government Agencies – Monitor political/social media content during sensitive events or elections.
Licensing & Usage
This model and its components are proprietary. Distribution, modification, or reverse-engineering is strictly prohibited. Any unauthorized use may result in DMCA takedown or other legal action. For commercial or enterprise licensing, contact:
π§ [email protected]
Additional Resources
- Full Documentation & API reference: https://vastav.ekoahamdutivnasti.com/docs
- Whitepaper (PDF): Forensic architecture & evaluation results
- Research references on deepfake evaluation (e.g., the "Deepfake-Eval-2024" benchmark)
Citation
If you use Vastav AI in research or publications, please cite:
Singh, N. (2025). Vastav AI: Multi-Model Deepfake Detection System. Ekoahamdutivnasti. Version 1.0, released March 10, 2025.
About Navneet Singh
Founder of the Ekoahamdutivnasti AI initiative, Navneet has pioneered practical, ensemble-based detection systems for digital media forensics. Vastav AI reflects his commitment to tech-driven truth and innovative defenses against digital deception.
"Truth always leaves a trace. Vastav finds it."
Disclaimer: This README is tailored for Hugging Face hosting. Adjust links, dependencies, or metadata to align with deployment specifics.