gowdaman committed
Commit d5a1095 · verified
1 Parent(s): 4cfee3c

Update README.md

Files changed (1)
  1. README.md +28 -91
README.md CHANGED
@@ -1,43 +1,40 @@
  ---
  tags:
- - text-to-image
- - lora
- - diffusers
- - template:diffusion-lora
- widget:
- - text: '-'
- output:
- url: images/banner_github.png
+ - vision
+ - image-classification
+ - vit
+ - facial-expression
+ - emotion
+ - sentiment-analysis
+ - education
+ - student
+ - engagement
+ - understanding
+ - course-evaluation
+ - online-learning
+ - confusion-detection
  base_model: trpakov/vit-face-expression
- instance_prompt: >-
- student, education, emotion, engagement, understanding, facial-expression,
- sentiment-analysis, vision, vit, image-classification, course-evaluation,
- online-learning, confusion-detection
  license: unlicense
+ language:
+ - en
+ pipeline_tag: image-classification
  ---
- # gowdaman-student-emotion-detection

- <Gallery />
+ # 🧠 gowdaman-student-emotion-detection

- ## Model description
+ ## Model description

- Here's a clear and professional description you can use for your Hugging Face model card:
-
- ---
-
- # 🧠 Student Course Understanding Detection (Fine-tuned ViT)
-
- This is a **fine-tuned version of [`trpakov/vit-face-expression`](https://huggingface.co/trpakov/vit-face-expression)** specifically adapted for analyzing student engagement and understanding during online or in-person educational sessions.
+ This is a **fine-tuned version of [`trpakov/vit-face-expression`](https://huggingface.co/trpakov/vit-face-expression)** specifically adapted for analyzing student engagement and understanding during online or in-person educational sessions.

  ## πŸ“Œ Model Overview

  - **Model Type**: Vision Transformer (ViT)
- - **Base Model**: `trpakov/vit-face-expression`
+ - **Base Model**: `trpakov/vit-face-expression`
  - **Task**: Binary emotion classification for educational sentiment analysis
  - **Input**: Student face image (RGB)
  - **Output Classes**:
- - `Understand`: The student appears to understand the concept.
- - `Not Understand`: The student appears to be confused or not following the lesson.
+ - `Understand`: The student appears to understand the concept.
+ - `Not Understand`: The student appears to be confused or not following the lesson.

  ## 🎯 Use Case

@@ -51,77 +48,17 @@ It is ideal for:
  ## πŸ§ͺ Training Details

  - Fine-tuned on a **custom dataset** of student facial expressions captured during course sessions
- - Labels were manually annotated as `Understand` or `Not Understand` based on visual indicators of comprehension
+ - Labels were manually annotated as `Understand` or `Not Understand` based on visual indicators of comprehension

  ## πŸ“₯ Input Format

  - **Image**: A single student facial image (preferably frontal and well-lit)
- - **Size**: Resized automatically to match the input size expected by ViT
+ - **Size**: Automatically resized to match the input size expected by the ViT model

  ## πŸ“€ Output

- ```json
+ ```json
  {
- "label": "Understand",
- "confidence": 0.87
- }
- ```
-
- ## ⚠️ Limitations
-
- - Performance may vary based on image quality, lighting, and camera angle
- - Cultural and individual facial expression variations are not fully accounted for
- - Not intended for high-stakes decisions without human review
-
- ## πŸ“š Citation
-
- If you use this model in your research or application, please cite the base model:
-
- ```
- @misc{trpakov_vit_face_expression,
- author = {Trpakov},
- title = {ViT Face Expression Recognition},
- year = {2023},
- publisher = {Hugging Face},
- howpublished = {\url{https://huggingface.co/trpakov/vit-face-expression}}
- }
- ```
-
- ---
-
- Would you like me to generate a `README.md` file for this or help you write a model card JSON metadata?
-
- ## Trigger words
-
- You should use `student` to trigger the image generation.
-
- You should use `education` to trigger the image generation.
-
- You should use `emotion` to trigger the image generation.
-
- You should use `engagement` to trigger the image generation.
-
- You should use `understanding` to trigger the image generation.
-
- You should use `facial-expression` to trigger the image generation.
-
- You should use `sentiment-analysis` to trigger the image generation.
-
- You should use `vision` to trigger the image generation.
-
- You should use `vit` to trigger the image generation.
-
- You should use `image-classification` to trigger the image generation.
-
- You should use `course-evaluation` to trigger the image generation.
-
- You should use `online-learning` to trigger the image generation.
-
- You should use `confusion-detection` to trigger the image generation.
-
-
- ## Download model
-
- Weights for this model are available in Safetensors format.
-
- [Download](/gowdaman/gowdaman-emotion-detection/tree/main) them in the Files & versions tab.
+ "label": "Understand",
+ "confidence": 0.87
+ }
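
For reference, a minimal inference sketch against the updated card's input/output format. It assumes the repository id `gowdaman/gowdaman-emotion-detection` (taken from the old card's download link), a local file `student_face.jpg`, and the standard `transformers` image-classification pipeline; none of these are part of the commit itself.

```python
# Minimal sketch: classify a single student face image with the fine-tuned ViT.
# Assumptions: repo id "gowdaman/gowdaman-emotion-detection" and a local image
# "student_face.jpg"; adjust both to your setup.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="gowdaman/gowdaman-emotion-detection",
)

# The pipeline resizes the image to the ViT input size via the model's
# image processor, so any reasonably sized RGB face crop works.
predictions = classifier("student_face.jpg")

# predictions is a list of {"label": ..., "score": ...} dicts sorted by score,
# e.g. [{"label": "Understand", "score": 0.87}, ...]
top = predictions[0]
print({"label": top["label"], "confidence": round(top["score"], 2)})
```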