tags:
- fashion
- product
---

# **Fashion-Product-subCategory**

> **Fashion-Product-subCategory** is a vision model fine-tuned from **google/siglip2-base-patch16-224** using the **SiglipForImageClassification** architecture. It classifies fashion product images into 45 fine-grained subcategories for retail and e-commerce applications.

```py
Classification Report:
...
Loungewear and Nightwear     0.7604    0.6703    0.7125       464
...

                accuracy                         0.9568     44072
               macro avg     0.7091    0.6270    0.6412     44072
            weighted avg     0.9535    0.9568    0.9540     44072
```
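The gap between the macro average (0.6412) and the weighted average (0.9540) reflects class imbalance: rare subcategories with weak scores drag the macro figure down while barely moving the weighted one, since weighting follows each class's support. A minimal sketch with made-up per-class numbers (not the model's real scores):

```python
# Hypothetical per-class F1 scores and supports, chosen to illustrate the effect
f1 = {"Topwear": 0.98, "Free Gifts": 0.10}
support = {"Topwear": 15000, "Free Gifts": 50}

# Macro average: every class counts equally, regardless of size
macro = sum(f1.values()) / len(f1)

# Weighted average: each class contributes in proportion to its support
weighted = sum(f1[c] * support[c] for c in f1) / sum(support.values())

print(round(macro, 4), round(weighted, 4))  # 0.54 0.9771
```

A tiny class with poor F1 halves the macro score here while the weighted score stays near the dominant class's F1 — the same pattern as in the report above.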
|

The model predicts one of the following product subcategories:

```json
"id2label": {
    "0": "Accessories",
    "1": "Apparel Set",
    "2": "Bags",
    "3": "Bath and Body",
    "4": "Beauty Accessories",
    "5": "Belts",
    "6": "Bottomwear",
    "7": "Cufflinks",
    "8": "Dress",
    "9": "Eyes",
    "10": "Eyewear",
    "11": "Flip Flops",
    "12": "Fragrance",
    "13": "Free Gifts",
    "14": "Gloves",
    "15": "Hair",
    "16": "Headwear",
    "17": "Home Furnishing",
    "18": "Innerwear",
    "19": "Jewellery",
    "20": "Lips",
    "21": "Loungewear and Nightwear",
    "22": "Makeup",
    "23": "Mufflers",
    "24": "Nails",
    "25": "Perfumes",
    "26": "Sandal",
    "27": "Saree",
    "28": "Scarves",
    "29": "Shoe Accessories",
    "30": "Shoes",
    "31": "Skin",
    "32": "Skin Care",
    "33": "Socks",
    "34": "Sports Accessories",
    "35": "Sports Equipment",
    "36": "Stoles",
    "37": "Ties",
    "38": "Topwear",
    "39": "Umbrellas",
    "40": "Vouchers",
    "41": "Wallets",
    "42": "Watches",
    "43": "Water Bottle",
    "44": "Wristbands"
}
```
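Note that `id2label` is stored with string keys in `config.json` (as shown above), while `argmax` over the logits yields an integer id, so the keys need coercing before lookup. After `from_pretrained`, the mapping is also available as `model.config.id2label`. A sketch using an excerpt of the mapping:

```python
# Excerpt of the raw JSON mapping above; keys arrive as strings
raw_id2label = {"0": "Accessories", "21": "Loungewear and Nightwear", "44": "Wristbands"}

# Coerce keys to int so an argmax result can index the dict directly
id2label = {int(k): v for k, v in raw_id2label.items()}

predicted_id = 21  # e.g. logits.argmax(-1).item()
print(id2label[predicted_id])  # Loungewear and Nightwear
```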
|

---

# **Run with Transformers 🤗**

```python
!pip install -q transformers torch pillow gradio
```
|

```python
import gradio as gr
from transformers import AutoImageProcessor, SiglipForImageClassification
from PIL import Image
import torch

# Load model and processor
model_name = "prithivMLmods/Fashion-Product-subCategory"  # Replace with your actual model path
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

# Label mapping
id2label = {
    0: "Accessories", 1: "Apparel Set", 2: "Bags", 3: "Bath and Body", 4: "Beauty Accessories",
    5: "Belts", 6: "Bottomwear", 7: "Cufflinks", 8: "Dress", 9: "Eyes", 10: "Eyewear",
    11: "Flip Flops", 12: "Fragrance", 13: "Free Gifts", 14: "Gloves", 15: "Hair", 16: "Headwear",
    17: "Home Furnishing", 18: "Innerwear", 19: "Jewellery", 20: "Lips", 21: "Loungewear and Nightwear",
    22: "Makeup", 23: "Mufflers", 24: "Nails", 25: "Perfumes", 26: "Sandal", 27: "Saree",
    28: "Scarves", 29: "Shoe Accessories", 30: "Shoes", 31: "Skin", 32: "Skin Care", 33: "Socks",
    34: "Sports Accessories", 35: "Sports Equipment", 36: "Stoles", 37: "Ties", 38: "Topwear",
    39: "Umbrellas", 40: "Vouchers", 41: "Wallets", 42: "Watches", 43: "Water Bottle", 44: "Wristbands"
}

def classify_subcategory(image):
    """Predicts the subcategory of a fashion product."""
    image = Image.fromarray(image).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")

    with torch.no_grad():
        outputs = model(**inputs)
        logits = outputs.logits
        probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()

    predictions = {id2label[i]: round(probs[i], 3) for i in range(len(probs))}
    return predictions

# Gradio interface
iface = gr.Interface(
    fn=classify_subcategory,
    inputs=gr.Image(type="numpy"),
    outputs=gr.Label(label="Subcategory Prediction Scores"),
    title="Fashion-Product-subCategory",
    description="Upload a fashion product image to predict its subcategory (e.g., Dress, Shoes, Accessories)."
)

# Launch the app
if __name__ == "__main__":
    iface.launch()
```
|

---

# **Intended Use**

This model is best suited for:

- **Product Subcategory Tagging**: Automatically assign fine-grained subcategories to fashion product listings.
- **Improved Search & Filters**: Enhance customer experience by enabling better filtering and browsing.
- **Catalog Structuring**: Streamline fashion catalog organization at scale for large e-commerce platforms.
- **Automated Inventory Insights**: Identify trends in product categories for sales, inventory, and marketing analysis.
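
For tagging and filtering use cases, a simple post-processing step over the score dictionary returned by `classify_subcategory` keeps only confident labels. A minimal sketch with made-up scores (not real model output):

```python
# Hypothetical prediction scores in the {label: probability} format used above
predictions = {"Shoes": 0.91, "Sandal": 0.05, "Flip Flops": 0.02, "Socks": 0.01}

# Keep the top 3 labels by score, then drop anything under a confidence threshold
top3 = sorted(predictions.items(), key=lambda kv: kv[1], reverse=True)[:3]
tags = [label for label, score in top3 if score >= 0.05]
print(tags)  # ['Shoes', 'Sandal']
```

The threshold trades recall for precision: a stricter cutoff yields fewer but more reliable catalog tags.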
|