---
license: apache-2.0
datasets:
- prithivMLmods/Human-vs-NonHuman
---
# **Human-vs-NonHuman-Detection**

> **Human-vs-NonHuman-Detection** is an image classification model fine-tuned from **google/siglip2-base-patch16-224**, a vision-language encoder, for a single-label classification task: deciding whether an image shows a human or a non-human subject. It uses the **SiglipForImageClassification** architecture.

```py
Classification Report:
              precision    recall  f1-score   support

weighted avg     0.9863    0.9862    0.9862     15635
```

![download (1).png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/ToGf2iWUKacTCQQn9hRPD.png)
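
The report above matches the format of scikit-learn's `classification_report`. A minimal sketch of how such a report is produced; the arrays here are hypothetical placeholders, not the actual test split:

```python
from sklearn.metrics import classification_report

# y_true / y_pred are hypothetical stand-ins for the test-set labels
# and the model's predictions (0 = Human, 1 = Non Human)
y_true = [0, 0, 1, 1]
y_pred = [0, 1, 1, 1]

print(classification_report(
    y_true, y_pred, target_names=["Human", "Non Human"], digits=4
))
```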

The model categorizes images into two classes:
- **Class 0:** "Human 𖨆"
- **Class 1:** "Non Human メ"

# **Run with Transformers🤗**

```python
!pip install -q transformers torch pillow gradio
```

```python
import gradio as gr
import torch
from PIL import Image
from transformers import AutoImageProcessor, SiglipForImageClassification

# Load the fine-tuned model and its image processor
model_name = "prithivMLmods/Human-vs-NonHuman-Detection"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

def human_detection(image):
    """Predicts whether the image contains a human or non-human entity."""
    # Gradio passes the image as a NumPy array; convert it to RGB PIL
    image = Image.fromarray(image).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")

    with torch.no_grad():
        outputs = model(**inputs)
        logits = outputs.logits
        probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()

    labels = {
        "0": "Human 𖨆",
        "1": "Non Human メ"
    }
    predictions = {labels[str(i)]: round(probs[i], 3) for i in range(len(probs))}

    return predictions

# Create the Gradio interface
iface = gr.Interface(
    fn=human_detection,
    inputs=gr.Image(type="numpy"),
    outputs=gr.Label(label="Prediction Scores"),
    title="Human vs Non-Human Detection",
    description="Upload an image to classify whether it contains a human or non-human entity."
)

# Launch the app
if __name__ == "__main__":
    iface.launch()
```
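
For scripted use without the Gradio UI, the same model and processor can be called directly. A minimal sketch, assuming a local image at the placeholder path `example.jpg`:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SiglipForImageClassification

model_name = "prithivMLmods/Human-vs-NonHuman-Detection"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

# "example.jpg" is a placeholder path for a local image
image = Image.open("example.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=1).squeeze()

# Same label mapping as in the Gradio app above
labels = ["Human 𖨆", "Non Human メ"]
for label, p in zip(labels, probs.tolist()):
    print(f"{label}: {p:.3f}")
```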

# **Intended Use:**

The **Human-vs-NonHuman-Detection** model is designed to distinguish between human and non-human entities. Potential use cases include:

- **Surveillance & Security:** Enhancing monitoring systems to detect human presence.
- **Autonomous Systems:** Helping robots and AI systems identify humans.
- **Image Filtering:** Automatically categorizing human vs. non-human images (see the sketch after this list).
- **Smart Access Control:** Identifying human presence for secure authentication.
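
As one illustration of the image-filtering use case, the sketch below labels every image in a folder. It is a minimal sketch, not part of the original model card: the `images/` folder and the `*.jpg` pattern are hypothetical placeholders.

```python
from pathlib import Path

import torch
from PIL import Image
from transformers import AutoImageProcessor, SiglipForImageClassification

model_name = "prithivMLmods/Human-vs-NonHuman-Detection"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

image_dir = Path("images")  # hypothetical input folder

for path in sorted(image_dir.glob("*.jpg")):
    image = Image.open(path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        pred = model(**inputs).logits.argmax(-1).item()
    # Class 0 = human, class 1 = non-human, per the mapping above
    print(f"{path.name}: {'Human' if pred == 0 else 'Non Human'}")
```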