---
license: apache-2.0
datasets:
- alecsharpie/nailbiting_classification
language:
- en
base_model:
- google/siglip2-base-patch16-224
pipeline_tag: image-classification
library_name: transformers
tags:
- Nailbiting
- Human
- Behaviour
- siglip2
---

![NB.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/rziUroDd0QVnPpXbys6zv.png)

# **NailbitingNet**

> **NailbitingNet** is a binary image-classification model based on `google/siglip2-base-patch16-224`, designed to detect **nail-biting behavior** in images. Built on the **SiglipForImageClassification** architecture, it is well suited to behavior monitoring, wellness applications, and human activity recognition.

```
Classification Report:
              precision    recall  f1-score   support

      biting     0.8412    0.9076    0.8731      2824
   no biting     0.9271    0.8728    0.8991      3805

    accuracy                         0.8876      6629
   macro avg     0.8841    0.8902    0.8861      6629
weighted avg     0.8905    0.8876    0.8881      6629
```

![download.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/SW6xjZzA7eViFsAmrmxdR.png)

---

## **Label Classes**

The model distinguishes between:

```
Class 0: "biting"    → The person appears to be biting their nails
Class 1: "no biting" → No nail-biting behavior detected
```

---

## **Installation**

```bash
pip install transformers torch pillow gradio
```

---

## **Example Inference Code**

```python
import gradio as gr
from transformers import AutoImageProcessor, SiglipForImageClassification
from PIL import Image
import torch

# Load model and processor
model_name = "prithivMLmods/NailbitingNet"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

# ID to label mapping
id2label = {
    "0": "biting",
    "1": "no biting"
}

def detect_nailbiting(image):
    # Convert the incoming NumPy array to an RGB PIL image
    image = Image.fromarray(image).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")

    # Forward pass; convert logits to class probabilities
    with torch.no_grad():
        outputs = model(**inputs)
        logits = outputs.logits
        probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()

    prediction = {id2label[str(i)]: round(probs[i], 3) for i in range(len(probs))}
    return prediction

# Gradio interface
iface = gr.Interface(
    fn=detect_nailbiting,
    inputs=gr.Image(type="numpy"),
    outputs=gr.Label(num_top_classes=2, label="Nail-Biting Detection"),
    title="NailbitingNet",
    description="Upload an image to classify whether the person is biting their nails or not."
)

if __name__ == "__main__":
    iface.launch()
```

---

## **Use Cases**

* **Wellness & Habit Monitoring**
* **Behavioral AI Applications**
* **Mental Health Tools**
* **Dataset Filtering for Behavior Recognition** (see the sketch below)
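
---

## **Example: Batch Filtering a Folder of Images**

For the dataset-filtering use case, a minimal batch-scoring sketch is shown below. It is illustrative only: the `images/` folder, the `*.jpg` glob pattern, and the `0.5` probability threshold are assumptions for this example rather than part of the model release, and should be adapted to your own data.

```python
from pathlib import Path

import torch
from PIL import Image
from transformers import AutoImageProcessor, SiglipForImageClassification

# Load the model and processor once, then reuse them for every image
model_name = "prithivMLmods/NailbitingNet"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)
model.eval()

image_dir = Path("images")  # hypothetical folder of images to screen
threshold = 0.5             # assumed cutoff on the "biting" probability

flagged = []
for path in sorted(image_dir.glob("*.jpg")):
    image = Image.open(path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits

    # Class index 0 corresponds to "biting" per the label mapping above
    probs = torch.nn.functional.softmax(logits, dim=1).squeeze()
    biting_prob = probs[0].item()

    if biting_prob >= threshold:
        flagged.append((path.name, round(biting_prob, 3)))

for name, prob in flagged:
    print(f"{name}: biting probability {prob}")
```

Flagged images can then be routed for manual review or excluded from a training set; lowering the threshold favors recall over precision, in line with the per-class trade-off shown in the classification report above.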