# ReviveAI
Restore your memories. AI-powered image deblurring, sharpening, and scratch removal.
## About ReviveAI
ReviveAI leverages the power of Artificial Intelligence to breathe new life into your old or degraded photographs. Whether it's blurriness from camera shake, general lack of sharpness, or physical damage like scratches, ReviveAI aims to restore clarity and detail, preserving your precious moments.
This project utilizes state-of-the-art deep learning models trained specifically for image restoration tasks. Our goal is to provide an accessible tool for enhancing image quality significantly.
## Key Features
- Image Sharpening (Completed): Enhances fine details and edges for a crisper look.
- Scratch Removal (Completed): Intelligently detects and inpaints scratches and minor damage on photographs.
- Image Colorization (Work in progress, coming soon): Adds realistic color to grayscale images.
## Before & After Showcase
See the results of ReviveAI in action!
| Examples | Task Performed |
|---|---|
| ![]() | Image Sharpening |
| ![]() | Image Sharpening |
| ![]() | Scratch Removal |
| ![]() | Scratch Removal |
## Tech Stack

Python, TensorFlow / Keras, OpenCV, NumPy, Matplotlib, and the Hugging Face Hub (model hosting).
## Implementation Status
Track the development progress of ReviveAI's key features and components:
| Feature / Component | Status | Notes / Remarks |
|---|---|---|
| Image Deblurring/Sharpening | Completed | Core model functional |
| Scratch Removal | Completed | Core model functional |
| Image Colorization | In Progress | Model integration underway |
## Getting Started
Follow these steps to get ReviveAI running on your local machine or in a Jupyter/Kaggle notebook.
### 1. Prerequisites
Ensure you have the following installed:
- Python 3.8 or above
- `pip` (Python package manager)
- Git (for cloning the repository)
- Hugging Face CLI (optional)
- Jupyter Notebook or run on Kaggle / Google Colab
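The Hugging Face CLI is optional: the models used below are downloaded programmatically with `hf_hub_download`, and public repositories need no login. If you do want to authenticate (for example, to avoid rate limits), a minimal setup sketch looks like this:

```bash
# Optional: install the Hugging Face Hub client with its CLI and log in.
pip install -U "huggingface_hub[cli]"
huggingface-cli login   # paste an access token from https://huggingface.co/settings/tokens
```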
### 2. Clone the Repository
```bash
git clone https://github.com/Zummya/ReviveAI.git
cd ReviveAI
```
### 3. Set Up the Environment
We recommend using a virtual environment:
```bash
python -m venv env
source env/bin/activate  # On Windows: env\Scripts\activate
pip install -r requirements.txt
```
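The authoritative dependency list is `requirements.txt` in the repository. Based on the imports used throughout this README, it corresponds to roughly the following packages (an approximation; versions are not pinned here):

```text
tensorflow
opencv-python
numpy
matplotlib
huggingface_hub
```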
## Load Pretrained Models
All models are hosted on the Hugging Face Hub for convenience and version control.
### Load Image Sharpening Model
```python
from huggingface_hub import hf_hub_download
from tensorflow.keras.models import load_model

model_path = hf_hub_download(
    repo_id="Sami-on-hugging-face/RevAI_Deblur_Model",
    filename="SharpeningModel_512_30Epochs.keras"
)
model = load_model(model_path, compile=False)
```
### Load Scratch Removal Model
```python
from huggingface_hub import hf_hub_download
from tensorflow.keras.models import load_model

model_path = hf_hub_download(
    repo_id="Sami-on-hugging-face/RevAI_Scratch_Removal_Model",
    filename="scratch_removal_test2.h5"
)
model = load_model(model_path, compile=False)
```
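Either snippet yields a plain Keras model object. Before running inference it can help to confirm the resolution the model expects; a minimal check using standard Keras calls (a suggestion, not part of the original instructions):

```python
# Inspect the loaded model and confirm its expected input/output shapes.
model.summary()
print("Input shape: ", model.input_shape)
print("Output shape:", model.output_shape)
```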
## Folder Structure
```
ReviveAI/
│
├── README.md
├── .gitignore
├── requirements.txt
│
├── models/
│   ├── sharpening_model.txt          # Hugging Face URL
│   └── scratch_removal_model.txt     # Hugging Face URL
│
├── notebooks/
│   ├── scratch_removal_notebook.ipynb
│   └── sharpening_model_notebook.ipynb
│
├── before_after_examples/
│   ├── sharpening/
│   └── scratch_removal/
│
└── assets/
    └── revive banner.png, showcase images, etc.
```
## Training & Running the Models
ReviveAI includes end-to-end Jupyter notebooks that allow you to both train the models from scratch and test them on custom images.
### Available Notebooks
| Notebook | Description |
|---|---|
| `sharpening_model_notebook.ipynb` | Train the sharpening (deblurring) model + run predictions |
| `scratch_removal_notebook.ipynb` | Train the scratch removal model + run predictions |
### Notebook Features
Each notebook includes:
- Model Architecture
- Data Loading & Preprocessing
- Training Pipeline (with adjustable hyperparameters)
- Saving & Exporting Weights
- Evaluation
- Visual Demo on Custom Images
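The notebooks define the actual architectures and training loops, which are not reproduced here. As a rough sketch of the workflow they implement, the example below loads paired degraded/clean images, normalizes them to [0, 1], and continues training from the pretrained sharpening weights. The folder layout, image size, loss, and hyperparameters are assumptions for illustration only, not the notebooks' exact settings.

```python
import glob

import cv2
import numpy as np
from huggingface_hub import hf_hub_download
from tensorflow.keras.models import load_model

IMG_SIZE = (256, 256)  # adjust to the model's expected input (check model.input_shape)


def load_pairs(degraded_dir, clean_dir, size=IMG_SIZE):
    """Load aligned (degraded, clean) image pairs as float32 arrays scaled to [0, 1]."""
    X, Y = [], []
    for deg_path, clean_path in zip(sorted(glob.glob(f"{degraded_dir}/*")),
                                    sorted(glob.glob(f"{clean_dir}/*"))):
        X.append(cv2.resize(cv2.imread(deg_path), size) / 255.0)
        Y.append(cv2.resize(cv2.imread(clean_path), size) / 255.0)
    return np.array(X, dtype="float32"), np.array(Y, dtype="float32")


# Hypothetical folder layout with paired blurry/sharp training images.
X, Y = load_pairs("data/train/blurry", "data/train/sharp")

# Start from the pretrained sharpening weights hosted on the Hub.
model_path = hf_hub_download(
    repo_id="Sami-on-hugging-face/RevAI_Deblur_Model",
    filename="SharpeningModel_512_30Epochs.keras",
)
model = load_model(model_path, compile=False)

# Loss and hyperparameters below are illustrative placeholders.
model.compile(optimizer="adam", loss="mae")
model.fit(X, Y, batch_size=8, epochs=5, validation_split=0.1)
model.save("sharpening_finetuned.keras")
```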
### Quick Test Function (for inference)
To run a prediction on a new image (after training or loading a model), use:
```python
import cv2
import matplotlib.pyplot as plt
import numpy as np


def display_prediction(image_path, model):
    # Load the image (BGR), resize to the model's input size, and scale to [0, 1]
    img = cv2.imread(image_path)
    img = cv2.resize(img, (256, 256)) / 255.0
    input_img = np.expand_dims(img, axis=0)  # add the batch dimension

    predicted = model.predict(input_img)[0]

    # Show the input (converted BGR -> RGB for matplotlib) next to the model output
    plt.figure(figsize=(10, 5))

    plt.subplot(1, 2, 1)
    plt.imshow(img[..., ::-1])
    plt.title("Original Input")
    plt.axis("off")

    plt.subplot(1, 2, 2)
    plt.imshow(predicted)
    plt.title("Model Output")
    plt.axis("off")

    plt.show()
```
Run the function like this:
```python
display_prediction("your_image_path.jpg", model)
```
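To keep the restored image rather than only display it, a small hypothetical helper (mirroring the preprocessing above; `save_prediction` is not part of the repository) could look like this:

```python
import cv2
import numpy as np


def save_prediction(image_path, model, out_path="restored.png"):
    """Run the model on one image and write the restored result to disk."""
    img = cv2.imread(image_path)
    img = cv2.resize(img, (256, 256)) / 255.0
    predicted = model.predict(np.expand_dims(img, axis=0))[0]
    # Clip to [0, 1] and convert back to 8-bit before writing.
    cv2.imwrite(out_path, (np.clip(predicted, 0.0, 1.0) * 255).astype("uint8"))


save_prediction("your_image_path.jpg", model, "restored.png")
```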
> Tip: If you don't want to train from scratch, you can load the pretrained weights directly from Hugging Face (see Load Pretrained Models above) and skip to the testing section.