---
tags:
- vision
- image-classification
- vit
- facial-expression
- emotion
- sentiment-analysis
- education
- student
- engagement
- understanding
- course-evaluation
- online-learning
- confusion-detection
base_model: trpakov/vit-face-expression
license: unlicense
language:
- en
pipeline_tag: image-classification
---

# 🧠 gowdaman-student-emotion-detection

## Model description

This is a **fine-tuned version of [`trpakov/vit-face-expression`](https://huggingface.co/trpakov/vit-face-expression)** specifically adapted for analyzing student engagement and understanding during online or in-person educational sessions.

## 📌 Model Overview

- **Model Type**: Vision Transformer (ViT)
- **Base Model**: `trpakov/vit-face-expression`
- **Task**: Binary emotion classification for educational sentiment analysis
- **Input**: Student face image (RGB)
- **Output Classes**:
  - `Understand`: The student appears to understand the concept.
  - `Not Understand`: The student appears to be confused or not following the lesson.

## 🎯 Use Case

This model is designed for **automatic sentiment detection in educational environments**, helping instructors evaluate real-time or post-session student engagement by analyzing facial expressions. It is well suited for:

- Online course engagement monitoring
- Intelligent Learning Management Systems (LMS)
- Post-lecture video analysis for feedback and insights

## 🧪 Training Details

- Fine-tuned on a **custom dataset** of student facial expressions captured during course sessions
- Labels were manually annotated as `Understand` or `Not Understand` based on visual indicators of comprehension

## 📥 Input Format

- **Image**: A single student facial image (preferably frontal and well-lit)
- **Size**: Automatically resized to match the input size expected by the ViT model

## 📤 Output

```json
{
  "label": "Understand",
  "confidence": 0.87
}
```
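As a sketch of how the output format above could be produced, the snippet below converts raw two-class logits into a `label`/`confidence` pair via softmax. The `ID2LABEL` mapping and the class order are assumptions for illustration; the actual mapping comes from the model's `config.json` and may differ.

```python
import math

# Hypothetical id2label mapping; the real order is defined in the
# model's config.json and may differ from this assumption.
ID2LABEL = {0: "Not Understand", 1: "Understand"}

def to_prediction(logits):
    """Convert raw two-class logits into the card's output format."""
    # Numerically stable softmax over the logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Pick the most probable class and report its probability.
    best = max(range(len(probs)), key=probs.__getitem__)
    return {"label": ID2LABEL[best], "confidence": round(probs[best], 2)}

print(to_prediction([-0.5, 1.4]))
# → {'label': 'Understand', 'confidence': 0.87}
```

In practice the same shape of result is returned by a Hugging Face `image-classification` pipeline loaded with this model, which handles resizing and preprocessing automatically.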