Update README.md
README.md CHANGED
@@ -1,43 +1,40 @@
 ---
 tags:
-
-
-
-
-
-
-
-
+- vision
+- image-classification
+- vit
+- facial-expression
+- emotion
+- sentiment-analysis
+- education
+- student
+- engagement
+- understanding
+- course-evaluation
+- online-learning
+- confusion-detection
 base_model: trpakov/vit-face-expression
-instance_prompt: >-
-  student, education, emotion, engagement, understanding, facial-expression,
-  sentiment-analysis, vision, vit, image-classification, course-evaluation,
-  online-learning, confusion-detection
 license: unlicense
+language:
+- en
+pipeline_tag: image-classification
 ---
-# gowdaman-student-emotion-detection
-
-## Model description
-
----
-
-# 🧠 Student Course Understanding Detection (Fine-tuned ViT)
-
-This is a **fine-tuned version of [`trpakov/vit-face-expression`](https://huggingface.co/trpakov/vit-face-expression)** specifically adapted for analyzing student engagement and understanding during online or in-person educational sessions.
+
+# 🧠 gowdaman-student-emotion-detection
+
+## Model description
+
+This is a **fine-tuned version of [`trpakov/vit-face-expression`](https://huggingface.co/trpakov/vit-face-expression)** specifically adapted for analyzing student engagement and understanding during online or in-person educational sessions.
 
 ## 📊 Model Overview
 
 - **Model Type**: Vision Transformer (ViT)
-- **Base Model**:
+- **Base Model**: `trpakov/vit-face-expression`
 - **Task**: Binary emotion classification for educational sentiment analysis
 - **Input**: Student face image (RGB)
 - **Output Classes**:
-
-
+  - `Understand`: The student appears to understand the concept.
+  - `Not Understand`: The student appears to be confused or not following the lesson.
 
 ## 🎯 Use Case
@@ -51,77 +48,17 @@ It is ideal for:
 ## 🧪 Training Details
 
 - Fine-tuned on a **custom dataset** of student facial expressions captured during course sessions
-- Labels were manually annotated as
+- Labels were manually annotated as `Understand` or `Not Understand` based on visual indicators of comprehension
 
 ## 📥 Input Format
 
 - **Image**: A single student facial image (preferably frontal and well-lit)
-- **Size**:
+- **Size**: Automatically resized to match the input size expected by the ViT model
 
 ## 📤 Output
 
-```
+```json
 {
-
-
-}
+  "label": "Understand",
+  "confidence": 0.87
+}
-```
-
-## ⚠️ Limitations
-
-- Performance may vary based on image quality, lighting, and camera angle
-- Cultural and individual facial expression variations are not fully accounted for
-- Not intended for high-stakes decisions without human review
-
-## 📚 Citation
-
-If you use this model in your research or application, please cite the base model:
-
-```
-@misc{trpakov_vit_face_expression,
-  author = {Trpakov},
-  title = {ViT Face Expression Recognition},
-  year = {2023},
-  publisher = {Hugging Face},
-  howpublished = {\url{https://huggingface.co/trpakov/vit-face-expression}}
-}
-```
-
----
-
-Would you like me to generate a `README.md` file for this or help you write a model card JSON metadata?
-
-## Trigger words
-
-You should use `student` to trigger the image generation.
-
-You should use `education` to trigger the image generation.
-
-You should use `emotion` to trigger the image generation.
-
-You should use `engagement` to trigger the image generation.
-
-You should use `understanding` to trigger the image generation.
-
-You should use `facial-expression` to trigger the image generation.
-
-You should use `sentiment-analysis` to trigger the image generation.
-
-You should use `vision` to trigger the image generation.
-
-You should use `vit` to trigger the image generation.
-
-You should use `image-classification` to trigger the image generation.
-
-You should use `course-evaluation` to trigger the image generation.
-
-You should use `online-learning` to trigger the image generation.
-
-You should use `confusion-detection` to trigger the image generation.
-
-## Download model
-
-Weights for this model are available in Safetensors format.
-
-[Download](/gowdaman/gowdaman-emotion-detection/tree/main) them in the Files & versions tab.
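For anyone reading the updated card, the Input Format and Output sections describe a standard image-classification contract, so the checkpoint should load through the usual `transformers` pipeline. A minimal inference sketch follows; the repo id is an assumption taken from the card's own download link, and the `{"label", "confidence"}` shape is reconstructed here from the `label`/`score` pairs that `pipeline` actually returns:

```python
# Minimal sketch of the card's described input/output contract.
# Assumption: the checkpoint id comes from the card's download link
# (/gowdaman/gowdaman-emotion-detection); substitute the real repo id if it differs.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="gowdaman/gowdaman-emotion-detection",
)

# A single RGB face image (local path, URL, or PIL.Image); the bundled image
# processor resizes it to the ViT input resolution, typically 224x224.
results = classifier("student_face.jpg")

# pipeline() returns classes ranked by score, e.g.
# [{"label": "Understand", "score": 0.87}, {"label": "Not Understand", ...}];
# reshape the top entry into the {"label", "confidence"} form shown in the card.
top = results[0]
print({"label": top["label"], "confidence": round(top["score"], 2)})
```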