Implementing Face Emotion Recognition On-Device AI with ZETIC.MLange

The Future of Serverless Computer Vision-4

Oct 4, 2024

Introduction

Face Emotion Recognition (EMO-AffectNet) is an advanced deep learning model designed to detect and classify human emotions from facial expressions in images or video streams. In this blog post, we'll explore how to implement real-time face emotion recognition on various mobile devices using ZETIC.MLange, a powerful framework for on-device AI applications. After reading this post, you will be able to build your own on-device face emotion recognition app that utilizes mobile NPUs.

What is EMO-AffectNet?

EMO-AffectNet is a facial emotion recognition model built on ResNet-50, a deep convolutional neural network architecture widely used for computer vision tasks such as image classification.

EMO-AffectNet on Hugging Face: link

What is ZETIC.MLange? Bringing AI to Mobile Devices

ZETIC.MLange is an on-device AI framework that enables developers to deploy complex AI models on mobile devices while making full use of the target hardware. It leverages on-device NPU (Neural Processing Unit) capabilities for efficient inference.

Github Repository

We provide the Face Emotion Recognition demo application source code for both Android and iOS: repository

Model pipelining

For the face emotion recognition model to work accurately, it must receive an image cropped to the correct facial area. To accomplish this, we construct a pipeline with a face detection model.

  1. Face Detection: We use the face detection model to detect the face regions in the image, then use the detected region's coordinates to extract that part of the original image (a rough sketch of this cropping step follows this list).

  2. Face Emotion Recognition: We feed the extracted face image into the face emotion recognition model to analyze emotions.
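
To make the hand-off between the two stages concrete, here is a minimal NumPy sketch of the cropping step. The relative (x, y, w, h) bounding-box format and the crop_face helper name are assumptions for illustration only; they are not part of the ZETIC.MLange API, which handles this inside the feature classes shown later.

import numpy as np

# Hypothetical helper: crop a detected face region out of an H x W x 3 image.
# bbox is assumed to be (x, y, w, h) in relative [0, 1] coordinates; adjust
# to whatever format your face detector actually returns.
def crop_face(image: np.ndarray, bbox: tuple) -> np.ndarray:
    h, w = image.shape[:2]
    x0, y0 = int(bbox[0] * w), int(bbox[1] * h)
    x1, y1 = int((bbox[0] + bbox[2]) * w), int((bbox[1] + bbox[3]) * h)
    # Clamp to the image bounds before slicing.
    x0, y0 = max(x0, 0), max(y0, 0)
    x1, y1 = min(x1, w), min(y1, h)
    return image[y0:y1, x0:x1]

# Usage: cur_face = crop_face(frame, detection_bbox), then resize and
# normalize to the emotion model's expected input (e.g. 224 x 224 for ResNet-50).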

Implementation Guide

0. Prerequisites

Prepare the models and sample inputs for Face Emotion Recognition and Face Detection from Hugging Face.

  1. Face Detection model

$ pip install tf2onnx
$ python -m tf2onnx.convert --tflite face_detection_short_range.tflite --output face_detection_short_range.onnx --opset 13
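
As an optional sanity check, you can load the converted ONNX model with onnxruntime and run a dummy input through it before generating a model key. The 1 x 128 x 128 x 3 input shape below is an assumption based on MediaPipe's short-range face detector; rely on the shape the session actually reports.

import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("face_detection_short_range.onnx")
input_meta = session.get_inputs()[0]
print(input_meta.name, input_meta.shape)  # inspect the expected input shape

# Dummy input; replace with a real preprocessed frame (and the reported shape).
dummy = np.random.rand(1, 128, 128, 3).astype(np.float32)
outputs = session.run(None, {input_meta.name: dummy})
print([o.shape for o in outputs])
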
  2. Face Emotion Recognition model
    You can find the ResNet50 class here.

import torch
import torch.nn as nn
import numpy as np

# Load the pretrained EMO-AffectNet weights into the ResNet50 class
# (7 emotion classes, 3 input channels).
emo_affectnet = ResNet50(7, channels=3)
emo_affectnet.load_state_dict(torch.load('FER_static_ResNet50_AffectNet.pt'))
emo_affectnet.eval()

# Trace the model on CPU with a sample input.
# cur_face is a cropped face image as a torch tensor (e.g. shape [1, 3, 224, 224]).
model_cpu = emo_affectnet.cpu()
model_traced = torch.jit.trace(model_cpu, (cur_face,))

# Save the sample input as .npy for model key generation below.
np_cur_face = cur_face.detach().numpy()
np.save("data/cur_face.npy", np_cur_face)

# Save the traced model.
output_model_path = "models/FER_static_ResNet50_AffectNet_traced.pt"
torch.jit.save(model_traced, output_model_path)
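
Before generating a model key, it is worth verifying that the traced model reproduces the eager model's outputs, and sanity-checking how the 7 logits map to emotions. The label ordering below follows the common AffectNet 7-class convention; it is an assumption here, so confirm it against the EMO-AffectNet model card.

import torch

# The traced module should agree with the original eager module.
with torch.no_grad():
    eager_out = emo_affectnet(cur_face)
    traced_out = model_traced(cur_face)
assert torch.allclose(eager_out, traced_out, atol=1e-5)

# Assumed AffectNet 7-class ordering; verify with the model card.
EMOTIONS = ["Neutral", "Happiness", "Sadness", "Surprise", "Fear", "Disgust", "Anger"]
probs = torch.softmax(eager_out, dim=1)
print(EMOTIONS[probs.argmax(dim=1).item()], probs.max().item())
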
  3. ZETIC.MLange module file

Step 1. Generate ZETIC.MLange Model Key

Generate MLange Model Keys for the two models with mlange_gen.

# (1) Get mlange_gen
$ wget https://github.com/zetic-ai/ZETIC_MLange_document/raw/main/bin/mlange_gen && chmod 755 mlange_gen

# (2) Run mlange_gen for the two models
#    - Face detection model
$ ./mlange_gen -m face_detection_short_range.onnx -i input.npy

#    - Face emotion recognition model
$ ./mlange_gen -m FER_static_ResNet50_AffectNet_traced.pt -i cur_face.npy

Expected output

...
MLange Model Key : {YOUR_FACE_DETECTION_MODEL_KEY}
...

...
MLange Model Key : {YOUR_FACE_EMOTION_RECOGNITION_MODEL_KEY}
...

Step 2. Implement ZeticMLangeModel with your model key

Android (Kotlin):
For the detailed application setup, please follow the deploy to Android Studio page

val faceEmotionRecognitionModel = ZeticMLangeModel(this, "face_emotion_recognition")

faceEmotionRecognitionModel.run(inputs)

val outputs = faceEmotionRecognitionModel.outputBuffers

iOS (Swift):
For the detailed application setup, please follow the deploy to Xcode page

let faceEmotionRecognitionModel = ZeticMLangeModel("face_emotion_recognition")

faceEmotionRecognitionModel.run(inputs)

var outputs = faceEmotionRecognitionModel.getOutputDataArray()

Step 3. Prepare the Face Emotion Recognition image feature extractor for Android and iOS

Android (Kotlin)

// (0) Initialize ZeticMLangeFeatureFaceEmotionRecognition
val feature = ZeticMLangeFeatureFaceEmotionRecognition()

// (1) Preprocess bitmap and get processed float array
val inputs = feature.preprocess(bitmap)

...

// (2) Postprocess to bitmap
val resultBitmap = feature.postprocess(outputs)

iOS (Swift)

import ZeticMLange

// (0) Initialize ZeticMLangeFeatureFaceEmotionRecognition
let feature = ZeticMLangeFeatureFaceEmotionRecognition()

// (1) Preprocess UIImage and get processed float array
let inputs = feature.preprocess(image)

...

// (2) Postprocess to UIImage
let resultBitmap = feature.postprocess(&outputs)

Step 4. Putting It All Together

Android (Kotlin)

  1. Face Detection Model

// (0) Initialization Models
val faceDetectionModel = ZeticMLangeModel(this, "face_detection")

// (1) Initialization Feature
val faceDetectionFeature = ZeticMLangeFeatureFaceDetection()

// (2) Preprocess Image
val faceDetectionInputs = faceDetectionFeature.preprocess(bitmap)

// (3) Process Model
faceDetectionModel.run(faceDetectionInputs)
val faceDetectionOutputs = faceDetectionModel.outputBuffers

// (4) Postprocess model run result
val faceDetectionPostprocessed = faceDetectionFeature.postprocess(faceDetectionOutputs)

  2. Face Emotion Recognition Model: Pass the result of the face detection model as an input.

// (0) Initialization Models
val faceEmotionRecognitionModel = ZeticMLangeModel(this, "face_emotion_recognition")

// (1) Initialization Feature
val faceEmotionRecognitionFeature = ZeticMLangeFeatureFaceEmotionRecognition()

// (2) Preprocess Image
val faceEmotionRecognitionInputs = faceEmotionRecognitionFeature.preprocess(bitmap, faceDetectionPostprocessed)

// (3) Process Model
faceEmotionRecognitionModel.run(faceEmotionRecognitionInputs)
val faceEmotionRecognitionOutputs = faceEmotionRecognitionModel.outputBuffers

// (4) Postprocess model run result
val faceEmotionRecognitionPostprocessed = faceEmotionRecognitionFeature.postprocess(faceEmotionRecognitionOutputs)

iOS (Swift)

  1. Face Detection Model

// (0) Initialization Models
let faceDetectionModel = ZeticMLangeModel("face_detection")

// (1) Initialization Feature
let faceDetectionFeature = ZeticMLangeFeatureFaceDetection()

// (2) Preprocess Image
let faceDetectionInputs = faceDetectionFeature.preprocess(image)

// (3) Process Model
faceDetectionModel.run(faceDetectionInputs)
var faceDetectionOutputs = faceDetectionModel.getOutputDataArray()

// (4) Postprocess model run result
let faceDetectionPostprocessed = faceDetectionFeature.postprocess(&faceDetectionOutputs)

  2. Face Emotion Recognition Model: Pass the result of the face detection model as an input.

// (0) Initialization Models
let faceEmotionRecognitionModel = ZeticMLangeModel("face_emotion_recognition")

// (1) Initialization Feature
let faceEmotionRecognitionFeature = ZeticMLangeFeatureFaceEmotionRecognition()

// (2) Preprocess Image
let faceEmotionRecognitionInputs = faceEmotionRecognitionFeature.preprocess(image, faceDetectionPostprocessed)

// (3) Process Model
faceEmotionRecognitionModel.run(faceEmotionRecognitionInputs)
var faceEmotionRecognitionOutputs = faceEmotionRecognitionModel.getOutputDataArray()

// (4) Postprocess model run result
let faceEmotionRecognitionPostprocessed = faceEmotionRecognitionFeature.postprocess(&faceEmotionRecognitionOutputs)

Conclusion: Face Emotion Recognition and On-Device AI - Innovation at the Edge and Limitless Potential

Face emotion recognition combined with On-Device AI represents a powerful leap toward smarter, more responsive technologies. By harnessing the power of neural processing units (NPUs) within mobile and edge devices, we unlock new possibilities for real-time, privacy-preserving, and efficient emotion analysis. These solutions promise to revolutionize fields such as healthcare, security, personalized marketing, and human-computer interaction.

The key advantage of On-Device AI is its ability to process data locally without reliance on cloud infrastructure, which enhances both speed and security while reducing operational costs. This shift toward decentralized computing reduces latency and provides users with seamless experiences, even in connectivity-constrained environments.

Do you have more questions? We welcome your thoughts and inquiries!

  • For More Information: If you need further details, please don't hesitate to reach out through ZETIC.ai's Contact Us.

  • Join Our Community: Want to share ideas with other developers? Join our Discord community and feel free to leave your comments!

Your participation can help shape the future of on-device AI. We look forward to meeting you in the exciting world of AI!

Let’s keep in touch

Interested in us? Receive our latest news and updates.

© 2024 ZETIC.ai All rights reserved.
