---
license: apache-2.0
datasets:
- blanchon/FireRisk
language:
- en
base_model:
- google/siglip2-base-patch16-224
pipeline_tag: image-classification
library_name: transformers
tags:
- fire-risk
- detection
- siglip2
---

![zdfbdzf.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/rrxsJzH4HCNufCCb9duMQ.png)

# **Fire-Risk-Detection**

> **Fire-Risk-Detection** is a multi-class image classification model based on `google/siglip2-base-patch16-224`, fine-tuned to classify **fire risk levels** in geographical and environmental imagery. It can be used for **wildfire monitoring**, **forest management**, and **environmental safety**.

---

```
Classification Report:
              precision    recall  f1-score   support

        high     0.4430    0.3382    0.3835      6296
         low     0.3666    0.2296    0.2824     10705
    moderate     0.3807    0.3757    0.3782      8617
non-burnable     0.8429    0.8385    0.8407     17959
   very_high     0.3920    0.3400    0.3641      3268
    very_low     0.6068    0.7856    0.6847     21757
       water     0.9241    0.7744    0.8427      1729

    accuracy                         0.6032     70331
   macro avg     0.5652    0.5260    0.5395     70331
weighted avg     0.5860    0.6032    0.5878     70331
```

![download.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/ZFECguZt7jRW7mF5ZjlH1.png)

## **Label Classes**

The model distinguishes between the following fire risk levels:

```
0: high
1: low
2: moderate
3: non-burnable
4: very_high
5: very_low
6: water
```

---

## **Installation**

```bash
pip install transformers torch pillow gradio
```

---

## **Example Inference Code**

```python
import gradio as gr
from transformers import AutoImageProcessor, SiglipForImageClassification
from PIL import Image
import torch

# Load model and processor
model_name = "prithivMLmods/Fire-Risk-Detection"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

# ID to label mapping
id2label = {
    "0": "high",
    "1": "low",
    "2": "moderate",
    "3": "non-burnable",
    "4": "very_high",
    "5": "very_low",
    "6": "water"
}

def detect_fire_risk(image):
    # Convert the NumPy array from Gradio to an RGB PIL image
    image = Image.fromarray(image).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")

    # Forward pass without gradient tracking
    with torch.no_grad():
        outputs = model(**inputs)
        logits = outputs.logits
        probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()

    # Map each class index to its label with the predicted probability
    prediction = {id2label[str(i)]: round(probs[i], 3) for i in range(len(probs))}
    return prediction

# Gradio interface
iface = gr.Interface(
    fn=detect_fire_risk,
    inputs=gr.Image(type="numpy"),
    outputs=gr.Label(num_top_classes=7, label="Fire Risk Level"),
    title="Fire-Risk-Detection",
    description="Upload an image to classify the fire risk level: very_low, low, moderate, high, very_high, non-burnable, or water."
)

if __name__ == "__main__":
    iface.launch()
```

---

## **Applications**

* **Wildfire Early Warning Systems**
* **Environmental Monitoring**
* **Land Use Assessment**
* **Disaster Preparedness and Mitigation**
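
---

## **Quick Inference with the `pipeline` API**

For a quick check without the Gradio app, the generic `transformers` image-classification pipeline can also load this checkpoint. This is a minimal sketch, assuming the hosted config carries the label mapping shown above; `test_image.jpg` is a placeholder path for an image you supply.

```python
from transformers import pipeline

# Load the checkpoint through the generic image-classification pipeline
fire_risk = pipeline("image-classification", model="prithivMLmods/Fire-Risk-Detection")

# Classify a local image (replace "test_image.jpg" with your own file or URL)
results = fire_risk("test_image.jpg", top_k=7)
for result in results:
    print(f"{result['label']}: {result['score']:.3f}")
```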