FER+ Emotion Recognition
Description
This model is a deep convolutional neural network for emotion recognition in faces.
Model
Model | Download | Download (with sample test data) | ONNX version | Opset version |
---|---|---|---|---|
Emotion FERPlus | 34 MB | 31 MB | 1.0 | 2 |
Emotion FERPlus | 34 MB | 31 MB | 1.2 | 7 |
Emotion FERPlus | 34 MB | 31 MB | 1.3 | 8 |
Emotion FERPlus int8 | 19 MB | 18 MB | 1.14 | 12 |
Paper
"Training Deep Networks for Facial Expression Recognition with Crowd-Sourced Label Distribution" arXiv:1608.01041
Dataset
The model is trained on the FER+ annotations for the standard Emotion FER dataset, as described in the above paper.
Source
The model is trained in CNTK, using the cross-entropy training mode. You can find the source code here.
Demo
Run Emotion_FERPlus in the browser - implemented with ONNX.js using Emotion_FERPlus version 1.2
Inference
Input
The model expects input of shape (Nx1x64x64), where N is the batch size. Each input is a single-channel (grayscale) 64x64 image given as a float32 tensor.
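To confirm the expected input signature directly from the model file, you can inspect its input metadata with onnxruntime. This is a minimal sketch, not part of the original instructions; it assumes onnxruntime is installed and that the opset-8 model has been downloaded as emotion-ferplus-8.onnx (see the download command in the Quantization section below).

import onnxruntime as ort

session = ort.InferenceSession('emotion-ferplus-8.onnx', providers=['CPUExecutionProvider'])
inp = session.get_inputs()[0]
# Expected to report a 1x1x64x64 float tensor; the input name depends on the export (e.g. 'Input3').
print(inp.name, inp.shape, inp.type)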
Preprocessing
Given a path image_path
to the image you would like to score:
import numpy as np
from PIL import Image

def preprocess(image_path):
    # The model expects a single-channel (grayscale) 64x64 image in NCHW layout.
    input_shape = (1, 1, 64, 64)
    img = Image.open(image_path).convert('L')   # force a single grayscale channel
    img = img.resize((64, 64), Image.LANCZOS)   # Image.ANTIALIAS was removed in Pillow 10
    img_data = np.array(img).astype(np.float32)
    img_data = np.reshape(img_data, input_shape)
    return img_data
Output
The model outputs a (1x8) array of scores corresponding to the 8 emotion classes, where the labels map as follows:
emotion_table = {'neutral':0, 'happiness':1, 'surprise':2, 'sadness':3, 'anger':4, 'disgust':5, 'fear':6, 'contempt':7}
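If you need the label name for a predicted class index, you can invert emotion_table. A small sketch, using the table defined above:

index_to_emotion = {index: emotion for emotion, index in emotion_table.items()}
print(index_to_emotion[1])  # 'happiness'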
Postprocessing
Route the model output through a softmax function to map the raw network scores to probabilities across the 8 classes.
import numpy as np

def softmax(scores):
    # Numerically stable softmax over the last axis.
    exp_scores = np.exp(scores - np.max(scores, axis=-1, keepdims=True))
    return exp_scores / np.sum(exp_scores, axis=-1, keepdims=True)

def postprocess(scores):
    '''
    This function takes the scores generated by the network and returns the class IDs in
    decreasing order of probability.
    '''
    prob = softmax(scores)
    prob = np.squeeze(prob)
    classes = np.argsort(prob)[::-1]
    return classes
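Putting the pieces together, the following sketch runs a single image through the model with onnxruntime. It assumes onnxruntime is installed, emotion-ferplus-8.onnx has been downloaded, the preprocess and postprocess helpers above are in scope, and face.png is a placeholder path to your own image.

import onnxruntime as ort

session = ort.InferenceSession('emotion-ferplus-8.onnx', providers=['CPUExecutionProvider'])
input_name = session.get_inputs()[0].name

input_data = preprocess('face.png')                       # (1, 1, 64, 64) float32
scores = session.run(None, {input_name: input_data})[0]   # (1, 8) raw scores
classes = postprocess(scores)                             # class IDs, most probable first

labels = ['neutral', 'happiness', 'surprise', 'sadness', 'anger', 'disgust', 'fear', 'contempt']
print('Predicted emotion:', labels[classes[0]])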
Sample test data
Sets of sample input and output files are provided in serialized protobuf TensorProtos (.pb), which are stored in the folders test_data_set_*/.
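The protobuf files can be loaded back into NumPy arrays with onnx.numpy_helper. A minimal sketch, assuming the usual model-zoo layout of test_data_set_0/input_0.pb and test_data_set_0/output_0.pb:

import onnx
from onnx import numpy_helper

def load_pb(path):
    # Parse a serialized TensorProto and convert it to a NumPy array.
    tensor = onnx.TensorProto()
    with open(path, 'rb') as f:
        tensor.ParseFromString(f.read())
    return numpy_helper.to_array(tensor)

sample_input = load_pb('test_data_set_0/input_0.pb')       # (1, 1, 64, 64)
expected_output = load_pb('test_data_set_0/output_0.pb')   # (1, 8)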
Quantization
Emotion FERPlus int8 is obtained by quantizing the fp32 Emotion FERPlus model. We use Intel® Neural Compressor with the onnxruntime backend to perform quantization. View the instructions to understand how to use Intel® Neural Compressor for quantization.
Prepare Model
Download model from ONNX Model Zoo.
wget https://github.com/onnx/models/raw/main/vision/body_analysis/emotion_ferplus/model/emotion-ferplus-8.onnx
Convert the opset version to 12 for broader quantization support.
import onnx
from onnx import version_converter

# Upgrade the model's default-domain opset from 8 to 12, then save the result.
model = onnx.load('emotion-ferplus-8.onnx')
model = version_converter.convert_version(model, 12)
onnx.save_model(model, 'emotion-ferplus-12.onnx')
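As an optional sanity check (not part of the original instructions), you can verify that the converted model is structurally valid and reports opset 12:

import onnx

converted = onnx.load('emotion-ferplus-12.onnx')
onnx.checker.check_model(converted)
print([opset.version for opset in converted.opset_import])  # the default domain should report 12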
Model quantize
cd neural-compressor/examples/onnxrt/body_analysis/onnx_model_zoo/emotion_ferplus/quantization/ptq_static
# --input_model: model path as *.onnx
bash run_tuning.sh --input_model=path/to/model \
                   --dataset_location=/path/to/data \
                   --output_model=path/to/save
License
MIT