High_Res-vs-Low_Res
High_Res-vs-Low_Res is an image classification model fine-tuned from the vision-language encoder google/siglip2-base-patch16-224 for a single-label classification task. It is designed to assess the resolution quality of images using the SiglipForImageClassification architecture.
The model categorizes images into two classes:
- Class 0: "High Resolution Image" – indicating that the image has a high resolution and appears sharp and detailed.
- Class 1: "Low Resolution Image" – indicating that the image has a low resolution and may appear pixelated or blurry.
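For context, a fine-tune like this is typically set up by attaching a fresh two-way classification head to the base encoder. The snippet below is a minimal, hypothetical sketch rather than the author's actual training script; it assumes the base checkpoint loads with the Siglip classes, as the architecture note above suggests.

```python
from transformers import AutoImageProcessor, SiglipForImageClassification

base_checkpoint = "google/siglip2-base-patch16-224"  # base encoder named in this card
id2label = {0: "High Resolution Image", 1: "Low Resolution Image"}
label2id = {v: k for k, v in id2label.items()}

processor = AutoImageProcessor.from_pretrained(base_checkpoint)
model = SiglipForImageClassification.from_pretrained(
    base_checkpoint,
    num_labels=2,
    id2label=id2label,
    label2id=label2id,
)
# The classification head is newly initialized here; training would then
# proceed on labeled (image, 0/1) pairs, e.g. with the Trainer API.
```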
Classification Report:
| Class | Precision | Recall | F1-Score | Support |
|---|---|---|---|---|
| high resolution image | 0.5697 | 0.5407 | 0.5548 | 1254 |
| low resolution image | 0.8495 | 0.8639 | 0.8566 | 3762 |
| accuracy | | | 0.7831 | 5016 |
| macro avg | 0.7096 | 0.7023 | 0.7057 | 5016 |
| weighted avg | 0.7795 | 0.7831 | 0.7812 | 5016 |
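A report in this format can be reproduced with scikit-learn's `classification_report` on a labeled evaluation set. The sketch below is hypothetical: `eval_images` and `eval_labels` are placeholder names for your own held-out data, not artifacts shipped with this model.

```python
import torch
from sklearn.metrics import classification_report
from transformers import AutoImageProcessor, SiglipForImageClassification

model_name = "prithivMLmods/High_Res-vs-Low_Res"
model = SiglipForImageClassification.from_pretrained(model_name).eval()
processor = AutoImageProcessor.from_pretrained(model_name)

def predict(pil_image):
    """Return the predicted class index (0 = high res, 1 = low res)."""
    inputs = processor(images=pil_image, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return int(logits.argmax(dim=-1))

# eval_images: list of PIL images; eval_labels: list of ints (0 or 1)
# preds = [predict(img) for img in eval_images]
# print(classification_report(
#     eval_labels, preds,
#     target_names=["high resolution image", "low resolution image"],
# ))
```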
Run with Transformers🤗
```python
!pip install -q transformers torch pillow gradio
```

```python
import gradio as gr
import torch
from PIL import Image
from transformers import AutoImageProcessor, SiglipForImageClassification

# Load model and processor
model_name = "prithivMLmods/High_Res-vs-Low_Res"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

def resolution_classification(image):
    """Predicts whether an image is high or low resolution."""
    image = Image.fromarray(image).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")

    with torch.no_grad():
        outputs = model(**inputs)
        logits = outputs.logits
        probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()

    labels = {"0": "High Resolution Image", "1": "Low Resolution Image"}
    predictions = {labels[str(i)]: round(probs[i], 3) for i in range(len(probs))}

    return predictions

# Create Gradio interface
iface = gr.Interface(
    fn=resolution_classification,
    inputs=gr.Image(type="numpy"),
    outputs=gr.Label(label="Prediction Scores"),
    title="Image Resolution Classification",
    description="Upload an image to classify its resolution quality."
)

# Launch the app
if __name__ == "__main__":
    iface.launch()
```
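If you don't need the Gradio UI, the checkpoint should also work with the high-level image-classification pipeline. This is a minimal sketch; `"sample.jpg"` is a placeholder path.

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="prithivMLmods/High_Res-vs-Low_Res")
print(classifier("sample.jpg"))
# Expected output shape: a list of {"label": ..., "score": ...} dicts.
```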
Intended Use:
The High_Res-vs-Low_Res model is designed to evaluate the resolution quality of images. It helps distinguish between high-resolution and low-resolution images. Potential use cases include:
- Image Quality Assessment: Identifying whether an image meets high-resolution standards or suffers from low-quality artifacts.
- Content Moderation: Assisting platforms in filtering low-resolution images for better user experience.
- Forensic Analysis: Supporting researchers and analysts in determining the clarity of images used in various applications.
- Image Processing Pipelines: Helping developers optimize image enhancement algorithms by assessing resolution quality (see the sketch after this list).
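As an example of the pipeline and moderation use cases, the following hypothetical sketch gates a folder of images on the model's "High Resolution Image" score. The directory name, file pattern, and 0.5 threshold are illustrative assumptions, not part of this card.

```python
from pathlib import Path

import torch
from PIL import Image
from transformers import AutoImageProcessor, SiglipForImageClassification

model_name = "prithivMLmods/High_Res-vs-Low_Res"
model = SiglipForImageClassification.from_pretrained(model_name).eval()
processor = AutoImageProcessor.from_pretrained(model_name)

def is_high_res(path, threshold=0.5):
    """Return True if the model scores the image as high resolution."""
    image = Image.open(path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        probs = model(**inputs).logits.softmax(dim=-1).squeeze()
    return probs[0].item() >= threshold  # index 0 = "High Resolution Image"

# Keep only the images that pass the resolution gate.
high_res_files = [p for p in Path("incoming_images").glob("*.jpg") if is_high_res(p)]
print(f"Kept {len(high_res_files)} high-resolution images")
```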