Vastav AI – Deepfake Detection System

📌 Developed by: Navneet Singh (Ekoahamdutivnasti™)
🌐 Hosted at: https://vastav.ekoahamdutivnasti.com
📄 License: Proprietary. Copyright © 2025, protected under Indian and international copyright laws.


🚀 Overview

Vastav AI ("Vastav" means "reality" in Hindi) is a state-of-the-art, multi-asset deepfake detection system designed to provide near-real-time analysis of images, videos, and audio. Built around an ensemble of specialized AI models, it delivers robust, transparent, and highly accurate results using forensic-grade techniques.

  • Multi-Model Judging System: Routes the input through several expert models (visual, metadata), which "vote" using confidence scores (see the sketch after this list).
  • Heatmap Visualizations: Highlights manipulated regions in media files for easy inspection.
  • PDF Report Generation: Creates detailed forensic reports complete with case IDs, timestamps, metadata, heatmaps, and judge-by-judge analysis.
  • Cross-Modality Support: Detects deepfakes in images and videos using modality-specific pipelines. (Audio analysis coming soon)
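
As a rough illustration of how such a vote could be combined, here is a minimal, hypothetical confidence-weighted aggregation sketch in Python; the judge names, weights, and threshold are assumptions and do not reflect Vastav AI's internal implementation:

# Hypothetical sketch of confidence-weighted multi-judge voting (not Vastav AI internals).
def aggregate_verdict(judge_scores, weights=None, threshold=0.5):
    """judge_scores maps judge name -> probability (0..1) that the media is fake."""
    if weights is None:
        weights = {name: 1.0 for name in judge_scores}  # equal weighting by default
    total = sum(weights[name] for name in judge_scores)
    fake_prob = sum(weights[name] * s for name, s in judge_scores.items()) / total
    verdict = "fake" if fake_prob >= threshold else "authentic"
    return {"verdict": verdict, "confidence": round(fake_prob, 3), "judges": judge_scores}

print(aggregate_verdict({"visual_judge": 0.92, "metadata_judge": 0.71}))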

📌 Key Features

  • Multi-Judge Decision Engine: Ensemble of Visual, Metadata, and ChiefJustice AIs that vote to maximize accuracy
  • Visual Heatmaps: Overlay pinpointing image/video manipulation artifacts
  • Metadata Forensics: Detects editing history, creation timestamps, and device signatures (a minimal metadata-inspection sketch follows this list)
  • Real-Time Feedback: Rapid processing suitable for integration with web apps and APIs
  • PDF Report Generator: Detailed, court-ready documentation with evidentiary trails
  • Cloud-Based API: Easy integration with enterprise systems and scalable deployment
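
Basic provenance signals can be inspected with standard tooling. The following is a hedged sketch using Pillow to read EXIF fields often relevant to editing history and device origin; the file name is hypothetical, and this is not Vastav AI's internal metadata engine:

# Hedged sketch: EXIF inspection with Pillow (illustrative, not Vastav AI's metadata module).
from PIL import Image, ExifTags

def inspect_exif(path):
    exif = Image.open(path).getexif()
    if not exif:
        print("No EXIF metadata found; provenance signals may have been stripped.")
        return
    for tag_id, value in exif.items():
        tag_name = ExifTags.TAGS.get(tag_id, tag_id)
        # Software, DateTime, Make, and Model hint at editing history and device signatures.
        if tag_name in {"Software", "DateTime", "Make", "Model"}:
            print(f"{tag_name}: {value}")

inspect_exif("suspected_image.jpg")  # hypothetical file name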

πŸ› οΈ How It Works

  1. Upload media (image/video) via API/UI.
  2. The visual pipeline analyzes compression artifacts, edge inconsistencies, lighting, and GAN signatures.
  3. Metadata analyzer checks file integrity and origin.
  4. ChiefJustice AI aggregates the expert scores, producing a final verdict with overall confidence.
  5. The system generates JSON output and an automatic PDF report with all evidence and heatmaps (a heatmap-overlay sketch follows below).
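
For context on what a heatmap overlay looks like in practice, here is a minimal, hypothetical sketch that blends a manipulation-score map onto an image with matplotlib; the heatmap here is random placeholder data, whereas Vastav AI returns its own heatmap image via the API:

# Hedged sketch: overlaying a manipulation heatmap on an image (placeholder data only).
import numpy as np
import matplotlib.pyplot as plt

image = plt.imread("suspected_image.jpg")                 # H x W x 3 array
heatmap = np.random.rand(image.shape[0], image.shape[1])  # placeholder scores in [0, 1]

plt.imshow(image)
plt.imshow(heatmap, cmap="jet", alpha=0.4)                # semi-transparent overlay marks suspect regions
plt.axis("off")
plt.savefig("heatmap_overlay.png", bbox_inches="tight")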

Note: Audio analysis capabilities are currently under development and will be released soon.


🧠 Technical Specifications

  • Architecture: Ensemble of ViT-based visual models + metadata module + aggregation layer.

  • Accuracy: 99.0–99.7% across all tested media types.

  • Latency: Sub-second per image; a few seconds per 5-minute video on a standard cloud GPU.

  • Report Output: JSON + PDF (an illustrative example follows this list), including:

    • Case ID (e.g., VDC-XXXXX)
    • Media metadata
    • Multi-model confidence breakdown
    • Heatmap overlay images
    • Timestamped forensic notes
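
The exact schema is defined by the API; as a rough, hypothetical illustration (field names other than verdict, confidence, heatmap_image, and pdf_report_url are assumptions), a JSON report might look like:

{
  "case_id": "VDC-XXXXX",
  "verdict": "FAKE",
  "confidence": 0.987,
  "judges": {
    "visual": 0.991,
    "metadata": 0.842
  },
  "media_metadata": { "filename": "suspected_video.mp4", "duration_seconds": 42 },
  "heatmap_image": "<URL to heatmap overlay image>",
  "pdf_report_url": "<URL to generated PDF report>",
  "generated_at": "<ISO 8601 timestamp>"
}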

🧪 Example Usage

import requests

API_URL = "https://api.vastav.ekoahamdutivnasti.com/analyze"

# Upload the suspect file and request an analysis.
with open("suspected_video.mp4", "rb") as media:
    response = requests.post(API_URL, files={"file": media}, timeout=120)
response.raise_for_status()
report = response.json()

print("Verdict:", report["verdict"])
print("Confidence:", report["confidence"])
print("Heatmap URL:", report["heatmap_image"])
print("PDF Report:", report["pdf_report_url"])

⚠️ Limitations

  • May underperform on extremely low-resolution or heavily compressed media.
  • Detection accuracy depends on known GAN/voice synthesis techniques; novel or unknown generative methods may reduce efficacy.
  • Provenance trail relies on embedded metadata; stripped or altered metadata may hinder analysis.
  • Audio analysis is coming soon and will include waveform integrity and voice synthesis detection.

🤝 Use Cases & Ideal Users

  • Journalists & Newsrooms – Validate incoming user-generated content before publishing.
  • Law Enforcement & Forensics – Support legal workflows with detailed, evidence-grade forensic reports.
  • Enterprises & Cybersecurity – Auto-screen media for internal communications or executive content.
  • Government Agencies – Monitor political/social media content during sensitive events or elections.

📄 Licensing & Usage

This model and its components are proprietary. Distribution, modification, or reverse-engineering is strictly prohibited. Any unauthorized use may result in DMCA takedown or other legal action. For commercial or enterprise licensing, contact:

📧 [email protected]


🔗 Additional Resources

  • Full Documentation & API reference: https://vastav.ekoahamdutivnasti.com/docs
  • Whitepaper (PDF): Forensic architecture & evaluation results
  • Research references on deepfake evaluation (e.g., the "Deepfake-Eval-2024" benchmark)

🧾 Citation

If you use Vastav AI in research or publications, please cite:

Singh, N. (2025). Vastav AI: Multi-Model Deepfake Detection System. Ekoahamdutivnasti. Version 1.0, released March 10, 2025.


πŸ‘¨β€πŸ’» About Navneet Singh

Founder of the Ekoahamdutivnasti AI initiative, Navneet has pioneered practical, ensemble-based detection systems for deepfake forensics. Vastav AI reflects his commitment to tech-driven truth and innovative defense against digital deception.


"Truth always leaves a trace. Vastav finds it."


Disclaimer: This README is tailored for Hugging Face hosting. Adjust links, dependencies, or metadata to align with deployment specifics.
