---
license: apache-2.0
datasets:
- Bingsu/Gameplay_Images
language:
- en
base_model:
- google/siglip2-so400m-patch14-384
pipeline_tag: image-classification
library_name: transformers
tags:
- Gameplay
- Classcode
- '10'
---

# **Gameplay-Classcode-10**
> **Gameplay-Classcode-10** is an image classification model fine-tuned from the **google/siglip2-so400m-patch14-384** vision-language backbone using the **SiglipForImageClassification** architecture. It classifies gameplay screenshots or thumbnails into one of ten popular video game titles.
```py
Classification Report:
                 precision    recall  f1-score   support

       Among Us     0.9990    0.9920    0.9955      1000
   Apex Legends     0.9737    0.9990    0.9862      1000
       Fortnite     0.9960    0.9910    0.9935      1000
  Forza Horizon     0.9990    0.9820    0.9904      1000
      Free Fire     0.9930    0.9860    0.9895      1000
 Genshin Impact     0.9831    0.9890    0.9860      1000
     God of War     0.9930    0.9930    0.9930      1000
      Minecraft     0.9990    0.9990    0.9990      1000
         Roblox     0.9832    0.9960    0.9896      1000
       Terraria     1.0000    0.9910    0.9955      1000

       accuracy                         0.9918     10000
      macro avg     0.9919    0.9918    0.9918     10000
   weighted avg     0.9919    0.9918    0.9918     10000
```
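
A report in this format can be reproduced with scikit-learn's `classification_report`. The sketch below is illustrative only: `eval_images` and `y_true` stand in for a held-out split of gameplay images and their integer labels, and are not provided by this card.

```python
# Minimal evaluation sketch; `eval_images` (PIL images) and `y_true` (int labels)
# are placeholders for your own held-out split.
import torch
from sklearn.metrics import classification_report
from transformers import AutoImageProcessor, SiglipForImageClassification

model_name = "prithivMLmods/Gameplay-Classcode-10"
model = SiglipForImageClassification.from_pretrained(model_name).eval()
processor = AutoImageProcessor.from_pretrained(model_name)

y_pred = []
for image in eval_images:
    inputs = processor(images=image.convert("RGB"), return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    y_pred.append(int(logits.argmax(dim=-1)))

label_names = [model.config.id2label[i] for i in range(model.config.num_labels)]
print(classification_report(y_true, y_pred, target_names=label_names, digits=4))
```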

The model predicts one of the following **game categories** (a sketch for reading this mapping straight from the checkpoint follows the list):
- **0:** Among Us
- **1:** Apex Legends
- **2:** Fortnite
- **3:** Forza Horizon
- **4:** Free Fire
- **5:** Genshin Impact
- **6:** God of War
- **7:** Minecraft
- **8:** Roblox
- **9:** Terraria
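
The same mapping ships with the checkpoint, so it does not have to be hard-coded. A minimal sketch, assuming the published `config.json` carries these `id2label` entries:

```python
from transformers import SiglipForImageClassification

# Assumes the checkpoint's config.json stores the id2label mapping shown above.
model = SiglipForImageClassification.from_pretrained("prithivMLmods/Gameplay-Classcode-10")
print(model.config.id2label)  # e.g. {0: "Among Us", 1: "Apex Legends", ..., 9: "Terraria"}
```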
---
# **Run with Transformers 🤗**
```python
!pip install -q transformers torch pillow gradio
```
```python
import gradio as gr
import torch
from PIL import Image
from transformers import AutoImageProcessor, SiglipForImageClassification

# Load model and processor
model_name = "prithivMLmods/Gameplay-Classcode-10"  # Replace with your actual model path
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

# Label mapping
id2label = {
    0: "Among Us",
    1: "Apex Legends",
    2: "Fortnite",
    3: "Forza Horizon",
    4: "Free Fire",
    5: "Genshin Impact",
    6: "God of War",
    7: "Minecraft",
    8: "Roblox",
    9: "Terraria"
}

def classify_game(image):
    """Predicts the game title based on the gameplay image."""
    image = Image.fromarray(image).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")

    with torch.no_grad():
        outputs = model(**inputs)
        logits = outputs.logits
        probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()

    predictions = {id2label[i]: round(probs[i], 3) for i in range(len(probs))}
    predictions = dict(sorted(predictions.items(), key=lambda item: item[1], reverse=True))
    return predictions

# Gradio interface
iface = gr.Interface(
    fn=classify_game,
    inputs=gr.Image(type="numpy"),
    outputs=gr.Label(label="Game Prediction Scores"),
    title="Gameplay-Classcode-10",
    description="Upload a gameplay screenshot or thumbnail to identify the game title (Among Us, Fortnite, Minecraft, etc.)."
)

# Launch the app
if __name__ == "__main__":
    iface.launch()
```
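
For programmatic use without the Gradio UI, the processor and model can also be called directly. A minimal sketch; `"screenshot.png"` is a placeholder for your own image file:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SiglipForImageClassification

model_name = "prithivMLmods/Gameplay-Classcode-10"
model = SiglipForImageClassification.from_pretrained(model_name).eval()
processor = AutoImageProcessor.from_pretrained(model_name)

# "screenshot.png" is a placeholder path for a gameplay screenshot or thumbnail.
image = Image.open("screenshot.png").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

pred_id = int(logits.argmax(dim=-1))
print(model.config.id2label[pred_id])
```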
---
# **Intended Use**
This model can be used for the following; a bulk-tagging sketch follows the list:
- **Automatic tagging of gameplay content for streamers and creators**
- **Organizing gaming datasets**
- **Enhancing searchability in gameplay video repositories**
- **Training AI systems for game-related content moderation or recommendations**
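
For the tagging and dataset-organization use cases above, the `image-classification` pipeline keeps the loop short. A minimal sketch; the folder `gameplay_frames/` and the output CSV name are illustrative, not part of this card:

```python
# Bulk-tagging sketch: "gameplay_frames/" and "gameplay_tags.csv" are placeholder names.
import csv
from pathlib import Path
from transformers import pipeline

clf = pipeline("image-classification", model="prithivMLmods/Gameplay-Classcode-10")

rows = [("file", "game", "score")]
for path in sorted(Path("gameplay_frames").glob("*.png")):
    top = clf(str(path), top_k=1)[0]  # highest-scoring label for this frame
    rows.append((path.name, top["label"], round(top["score"], 3)))

with open("gameplay_tags.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)
```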