---
title: Faheem
emoji: 🔥
colorFrom: gray
colorTo: yellow
sdk: gradio
sdk_version: 5.18.0
app_file: app.py
pinned: false
short_description: This project uses AI to transcribe and summarize media content
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference

# AI-Powered Audio & Video to Text Conversion

## Project Description

This project enables users to convert audio and video content into text using AI. Additionally, it allows interaction with the extracted text through question-answering, summarization, and text-to-speech functionalities.

## Introduction

With the ever-growing volume of digital content, efficiently retrieving information from audio and video files is a challenge. This project leverages AI to transcribe such content into text, summarize its key points, and provide an interactive Q&A system. Users can also listen to the summarized text if they prefer.

## Project Objectives

1. Speed up information retrieval by enabling quick searches within the extracted text.
2. Provide an interactive experience where users can ask questions about the content and receive instant answers.
3. Support both Arabic and English to accommodate a wider user base.
4. Use AI to analyze and understand content, enabling users to extract and interact with information effectively.

## Features

- **Speech-to-Text**: Extracts text from audio and video files.
- **Text Summarization**: Generates concise summaries of the extracted text.
- **Question Answering**: Answers user queries based on the extracted text.
- **Text-to-Speech**: Converts text into speech for an auditory experience.
- **Multilingual Support**: Works in both Arabic and English.

## Technology Stack

### 1. Programming Language

- **Python**: The core programming language for this project, chosen for its extensive support for AI and data-processing libraries.

### 2. Libraries & Frameworks

- **Gradio**: Creates the interactive web UI where users upload files and interact with the extracted text.
- **Hugging Face Transformers**: Provides pre-trained models for automatic speech recognition, summarization, translation, and question answering.
- **MoviePy**: Extracts audio from video files for further processing.
- **Librosa & SoundFile**: Handle audio processing, including loading, resampling, and segmenting audio clips.
- **gTTS (Google Text-to-Speech)**: Converts text into spoken words.
- **LangDetect**: Detects the language of the extracted text so it can be processed appropriately.

## Code Breakdown

### 1. Setting Up the Environment

The script first checks whether a GPU (CUDA) is available; running the models on a GPU significantly speeds up inference.

```python
import torch

# Run models on the GPU when available, otherwise fall back to the CPU
device = "cuda" if torch.cuda.is_available() else "cpu"
```

### 2. Loading AI Models

Several AI models are loaded to handle different functionalities:

- **Whisper** (speech recognition): Converts audio into text.
- **BART** (summarization): Generates concise summaries.
- **Helsinki-NLP** (translation): Translates text between Arabic and English.
- **BERT** (question answering): Finds answers in the extracted text.
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, pipeline

# Whisper for speech recognition (on the GPU if available)
pipe = pipeline("automatic-speech-recognition", model="openai/whisper-medium", device=0 if device == "cuda" else -1)
# BART for Arabic summarization, moved to the same device as its inputs
bart_model = AutoModelForSeq2SeqLM.from_pretrained("ahmedabdo/arabic-summarizer-bart").to(device)
bart_tokenizer = AutoTokenizer.from_pretrained("ahmedabdo/arabic-summarizer-bart")
# Helsinki-NLP models for Arabic <-> English translation
translate_ar_to_en = pipeline("translation", model="Helsinki-NLP/opus-mt-ar-en")
translate_en_to_ar = pipeline("translation", model="Helsinki-NLP/opus-mt-en-ar")
# BERT for extractive question answering
qa_pipeline = pipeline("question-answering", model="deepset/bert-base-cased-squad2", tokenizer="deepset/bert-base-cased-squad2")
```

### 3. Audio Processing

This step handles both audio and video files: if a video is uploaded, its audio track is extracted and saved as 16-bit WAV, and the audio is resampled to the 16 kHz rate Whisper expects before transcription.

```python
from moviepy.video.io.VideoFileClip import VideoFileClip
import librosa

def convert_audio_to_text(uploaded_file):
    if not uploaded_file:
        return "⛔ Please upload a file first"

    input_path = uploaded_file if isinstance(uploaded_file, str) else uploaded_file.name
    output_path = "/tmp/processed.wav"

    # Extract the audio track from video files; audio files pass through as-is
    if input_path.split('.')[-1].lower() in ['mp4', 'avi', 'mov', 'mkv']:
        VideoFileClip(input_path).audio.write_audiofile(output_path, codec='pcm_s16le')
    else:
        output_path = input_path

    # Resample to 16 kHz (the rate Whisper expects), then transcribe
    audio, rate = librosa.load(output_path, sr=16000)
    return pipe({"raw": audio, "sampling_rate": rate})["text"]
```
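
Whisper transcribes audio in 30-second windows, so very long recordings can be slow or lossy to process in a single pass. If that becomes an issue, the Transformers pipeline can chunk long inputs automatically. The snippet below is an optional sketch, not code shown in this project; `chunk_length_s=30` is a typical value matching Whisper's window size, not something the project specifies.

```python
# Optional sketch: a chunked ASR pipeline for long recordings.
# chunk_length_s=30 is an assumption, not the project's shown configuration.
pipe = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-medium",
    chunk_length_s=30,
    device=0 if device == "cuda" else -1,
)
```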

### 4. Summarization

Summarizes the extracted text using the pre-trained BART model.

```python
def summarize_text(text):
    # Tokenize, truncating to BART's 1024-token input limit
    inputs = bart_tokenizer(text, return_tensors="pt", max_length=1024, truncation=True).to(device)
    # Beam search for a summary of at most 150 tokens
    summary_ids = bart_model.generate(inputs.input_ids, max_length=150, num_beams=4, early_stopping=True)
    return bart_tokenizer.decode(summary_ids[0], skip_special_tokens=True)
```
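
The summarizer is an Arabic BART model, while the project also advertises English support. One way to reconcile the two, sketched here as an assumption rather than the project's confirmed logic, is to route English text through the already-loaded translation models (very long inputs would need to be split first, since the translation models have their own length limits):

```python
from langdetect import detect

def summarize_any(text):
    # Hypothetical helper, not shown in the project: Arabic text is
    # summarized directly; English text is translated to Arabic first,
    # and the summary is translated back to English afterwards.
    if detect(text) == 'ar':
        return summarize_text(text)
    arabic_text = translate_en_to_ar(text)[0]['translation_text']
    arabic_summary = summarize_text(arabic_text)
    return translate_ar_to_en(arabic_summary)[0]['translation_text']
```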

### 5. Question Answering

Users can ask a question about the extracted text, and the system finds the most relevant answer. Because `deepset/bert-base-cased-squad2` is an English-language model, the context and question are first translated to English, and the best-scoring answer is translated back to Arabic.

```python
def answer_question(text, question):
    # The QA model is English-only: translate the context and question first
    translated_context = translate_ar_to_en(text)[0]['translation_text']
    translated_question = translate_ar_to_en(question)[0]['translation_text']
    # Take the top three candidate answers and keep the highest-scoring one
    results = qa_pipeline({'question': translated_question, 'context': translated_context}, top_k=3)
    best_result = max(results, key=lambda res: res['score'])
    # Translate the answer back to Arabic for the user
    return translate_en_to_ar(best_result['answer'])[0]['translation_text']
```
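
A quick end-to-end usage example, with a hypothetical file name and question:

```python
# Hypothetical usage: transcribe a file, then ask about its content
transcript = convert_audio_to_text("lecture.mp4")
print(answer_question(transcript, "ما هو الموضوع الرئيسي؟"))  # "What is the main topic?"
```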

### 6. Text-to-Speech

Converts text into an audio file using gTTS.

```python
from gtts import gTTS
from langdetect import detect

def text_to_speech(text):
    # Choose the voice language based on the detected text language
    tts = gTTS(text=text, lang='ar' if detect(text) == 'ar' else 'en', slow=False)
    output = "/tmp/tts.mp3"  # gTTS produces MP3 audio
    tts.save(output)
    return output
```
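
The README does not reproduce the Gradio layout itself. As a rough illustration of how the pieces fit together, here is a minimal `gr.Blocks` sketch wiring the functions above into a UI; the actual `app.py` may organize the interface differently.

```python
import gradio as gr

# Minimal UI sketch (assumption): the real app.py may differ.
with gr.Blocks(title="Faheem") as demo:
    media = gr.File(label="Upload audio or video")
    transcript = gr.Textbox(label="Transcript", lines=8)
    gr.Button("Transcribe").click(convert_audio_to_text, media, transcript)

    summary = gr.Textbox(label="Summary")
    gr.Button("Summarize").click(summarize_text, transcript, summary)

    question = gr.Textbox(label="Question about the content")
    answer = gr.Textbox(label="Answer")
    gr.Button("Answer").click(answer_question, [transcript, question], answer)

    audio_out = gr.Audio(label="Listen to the summary")
    gr.Button("Read aloud").click(text_to_speech, summary, audio_out)

demo.launch()
```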

## Installation & Setup

### Prerequisites

- Python 3.8+
- pip package manager
- GPU (optional, but recommended for better performance)

### Installation Steps

1. Clone the repository:

   ```bash
   git clone https://github.com/your-repo/ai-audio-video-text.git
   cd ai-audio-video-text
   ```

2. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

3. Run the application:

   ```bash
   python app.py
   ```
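The repository's actual `requirements.txt` is not reproduced in this README. Based on the Technology Stack section, it would need at least the packages below; versions are left unpinned here as an illustration, not as the project's pinned set.

```text
# Illustrative requirements.txt (assumption based on the stack described above)
torch
transformers
gradio
moviepy
librosa
soundfile
gTTS
langdetect
```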

## Contributing

Contributions are welcome! Feel free to fork the repository and submit pull requests.

## License

This project is licensed under the MIT License.

## Contact

For inquiries, contact Shatha Khaled ([email protected]) or Sharifah Malhan ([email protected]).