Oct 4, 2024
Introduction
Face Emotion Recognition (EMO-AffectNet) is an advanced deep learning model designed to detect and classify human emotions from facial expressions in images or video streams. In this blog post, we'll explore how to implement real-time face emotion recognition on mobile devices using ZETIC.MLange, a powerful framework for on-device AI applications. By the end of this post, you will be able to build your own on-device face emotion recognition app that utilizes mobile NPUs.
What is EMO-AffectNet?
EMO-AffectNet is a deep convolutional neural network built on a ResNet-50 backbone, an architecture widely used for computer vision tasks such as image classification and facial emotion recognition.
EMO-AffectNet on Hugging Face: link
What is ZETIC.MLange? Bringing AI to Mobile Devices
ZETIC.MLange is an on-device AI framework that enables developers to deploy complex AI models on mobile devices while fully utilizing the target hardware. It leverages the on-device NPU (Neural Processing Unit) for efficient inference.
Github Repository
We provide the Face Emotion Recognition demo application source code for both Android and iOS: repository
Model pipelining
For accurate results, the face emotion recognition model must receive an image of just the facial area. To accomplish this, we construct a pipeline with a face detection model.
Face Detection: We use the face detection model to locate the face regions in the image, then use the detected region to crop that part out of the original image.
Face Emotion Recognition: We feed the cropped face image into the face emotion recognition model to analyze emotions.
Implementation Guide
0. Prerequisites
Prepare the models and input samples for Face Emotion Recognition and Face Detection from Hugging Face.
Face Detection model
Face Emotion Recognition model
You can find the ResNet50 class here
ZETIC.MLange module file
Step 1. Generate ZETIC.MLange Model Key
Generate MLange Model Key with mlange_gen
Expected output
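As a rough sketch of this step, you pass the ONNX model together with a sample input to `mlange_gen` and receive a model key in return. The flag names and file names below are assumptions; check the MLange documentation for the exact CLI of your version:

```shell
# Hypothetical invocation: the model/input paths are placeholders,
# and the exact flags may differ between mlange_gen versions.
./mlange_gen -m face_emotion_recognition.onnx -i sample_input.bin
```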
Step 2. Implement ZeticMLangeModel with your model key
Android (Kotlin):
For the detailed application setup, please follow the deploy to Android Studio page
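A minimal sketch of loading the model by its key on Android. This assumes the Kotlin SDK exposes a `ZeticMLangeModel(context, key)` constructor plus `run`/`outputBuffers` members; the package path and member names here are assumptions to confirm against the actual SDK:

```kotlin
import android.content.Context
import java.nio.ByteBuffer
// Package path is an assumption; adjust to the actual ZETIC.MLange SDK.
import com.zeticai.mlange.core.model.ZeticMLangeModel

class EmotionModelWrapper(context: Context) {
    // The model key generated in Step 1 identifies your uploaded model.
    private val model = ZeticMLangeModel(context, "YOUR_MODEL_KEY")

    fun infer(inputs: Array<ByteBuffer>): Array<ByteBuffer> {
        model.run(inputs)          // dispatched to the NPU when available
        return model.outputBuffers // raw output tensors
    }
}
```

The same wrapper pattern is used for both the face detection model and the emotion model, each with its own model key.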
iOS (Swift):
For the detailed application setup, please follow the deploy to Xcode page
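The iOS counterpart looks roughly like this. The module name and the `run`/`getOutputDataArray` members are assumptions based on the model-key pattern above; verify them against the Swift SDK:

```swift
import Foundation
// Module and member names are assumptions; adjust to the actual SDK.
import ZeticMLange

final class EmotionModelWrapper {
    private let model: ZeticMLangeModel

    init() throws {
        // The model key generated in Step 1 identifies your uploaded model.
        model = try ZeticMLangeModel("YOUR_MODEL_KEY")
    }

    func infer(_ inputs: [Data]) throws -> [Data] {
        try model.run(inputs)          // dispatched to the NPU when available
        return model.getOutputDataArray()
    }
}
```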
Step 3. Prepare Face Emotion Recognition image feature extractor for Android and iOS
Android (Kotlin)
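A sketch of the Android feature extractor, assuming the ResNet-50 backbone expects a 224x224 RGB input normalized with the ImageNet mean/std (verify the input size and normalization constants against the EMO-AffectNet model card):

```kotlin
import android.graphics.Bitmap
import java.nio.ByteBuffer
import java.nio.ByteOrder

// Assumed preprocessing for the ResNet-50 backbone: 224x224 RGB,
// ImageNet mean/std normalization, interleaved RGB float layout.
fun preprocess(face: Bitmap): ByteBuffer {
    val size = 224
    val mean = floatArrayOf(0.485f, 0.456f, 0.406f)
    val std = floatArrayOf(0.229f, 0.224f, 0.225f)
    val resized = Bitmap.createScaledBitmap(face, size, size, true)
    val buffer = ByteBuffer.allocateDirect(4 * size * size * 3)
        .order(ByteOrder.nativeOrder())
    val pixels = IntArray(size * size)
    resized.getPixels(pixels, 0, size, 0, 0, size, size)
    for (p in pixels) {
        // Unpack ARGB, scale to [0, 1], then normalize per channel.
        val r = ((p shr 16) and 0xFF) / 255f
        val g = ((p shr 8) and 0xFF) / 255f
        val b = (p and 0xFF) / 255f
        buffer.putFloat((r - mean[0]) / std[0])
        buffer.putFloat((g - mean[1]) / std[1])
        buffer.putFloat((b - mean[2]) / std[2])
    }
    buffer.rewind()
    return buffer
}
```

If the model expects channel-planar (NCHW) input rather than interleaved pixels, reorder the loop accordingly.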
iOS (Swift)
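The equivalent extractor on iOS, under the same assumptions (224x224 RGB, ImageNet normalization; verify against the model card):

```swift
import CoreGraphics
import Foundation

// Assumed preprocessing: 224x224 RGB with ImageNet mean/std normalization.
func preprocess(face: CGImage) -> [Float] {
    let size = 224
    let mean: [Float] = [0.485, 0.456, 0.406]
    let std: [Float] = [0.229, 0.224, 0.225]
    // Draw the face crop into a fixed-size RGBA buffer to resize it.
    var raw = [UInt8](repeating: 0, count: size * size * 4)
    let ctx = CGContext(data: &raw, width: size, height: size,
                        bitsPerComponent: 8, bytesPerRow: size * 4,
                        space: CGColorSpaceCreateDeviceRGB(),
                        bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)!
    ctx.draw(face, in: CGRect(x: 0, y: 0, width: size, height: size))
    var out = [Float]()
    out.reserveCapacity(size * size * 3)
    for i in stride(from: 0, to: raw.count, by: 4) {
        for c in 0..<3 {
            // Scale each RGB channel to [0, 1], then normalize.
            out.append((Float(raw[i + c]) / 255.0 - mean[c]) / std[c])
        }
    }
    return out
}
```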
Step 4. Putting It All Together
Android (Kotlin)
Face Detection Model
Face Emotion Recognition Model: Pass the result of the face detection model as an input.
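The two stages above can be glued together roughly as follows. Here `faceDetector`, `emotionModel`, and `preprocess` stand for the pieces built in Steps 2 and 3, and the emotion label set and its ordering are assumptions to verify against the EMO-AffectNet model card:

```kotlin
import android.graphics.Bitmap

// Hypothetical glue code: faceDetector and emotionModel are the
// model wrappers from Step 2; preprocess is the Step 3 extractor.
fun recognizeEmotion(frame: Bitmap): String {
    // Stage 1: detect the face region in the frame.
    val box = faceDetector.detect(frame) ?: return "no face"
    // Crop the detected region out of the original frame.
    val face = Bitmap.createBitmap(
        frame, box.left, box.top, box.width(), box.height())
    // Stage 2: run emotion recognition on the cropped face.
    emotionModel.run(arrayOf(preprocess(face)))
    val logits = emotionModel.outputBuffers[0].asFloatBuffer()
    // AffectNet-style label set; the exact classes and order are an
    // assumption -- check the model card.
    val labels = listOf("Neutral", "Happiness", "Sadness", "Surprise",
                        "Fear", "Disgust", "Anger", "Contempt")
    var best = 0
    for (i in 1 until labels.size) {
        if (logits[i] > logits[best]) best = i
    }
    return labels[best]
}
```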
iOS (Swift)
Face Detection Model
Face Emotion Recognition Model: Pass the result of the face detection model as an input.
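The same pipeline on iOS, again with hypothetical glue names (`detector`, `emotionModel`, `preprocess` refer to the Step 2/3 pieces) and an assumed label set:

```swift
import CoreGraphics
import Foundation

// Hypothetical glue code tying detection and emotion recognition together.
func recognizeEmotion(frame: CGImage) -> String {
    // Stage 1: detect the face region, then crop it from the frame.
    guard let box = detector.detect(frame),
          let face = frame.cropping(to: box) else {
        return "no face"
    }
    // Stage 2: preprocess the crop and run emotion inference.
    let input = preprocess(face: face)
    let data = input.withUnsafeBufferPointer { Data(buffer: $0) }
    guard let output = try? emotionModel.infer([data]).first else {
        return "inference failed"
    }
    let logits = output.withUnsafeBytes {
        Array($0.bindMemory(to: Float.self))
    }
    // AffectNet-style labels; exact classes and order are an assumption.
    let labels = ["Neutral", "Happiness", "Sadness", "Surprise",
                  "Fear", "Disgust", "Anger", "Contempt"]
    guard let best = logits.prefix(labels.count).enumerated()
        .max(by: { $0.element < $1.element })?.offset else {
        return "no output"
    }
    return labels[best]
}
```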
Conclusion: Face Emotion Recognition and On-Device AI - Innovation at the Edge
Face emotion recognition combined with On-Device AI represents a powerful leap toward smarter, more responsive technologies. By harnessing the power of neural processing units (NPUs) within mobile and edge devices, we unlock new possibilities for real-time, privacy-preserving, and efficient emotion analysis. These solutions promise to revolutionize fields such as healthcare, security, personalized marketing, and human-computer interaction.
The key advantage of On-Device AI is its ability to process data locally without reliance on cloud infrastructure, which enhances both speed and security while reducing operational costs. This shift toward decentralized computing reduces latency and provides users with seamless experiences, even in connectivity-constrained environments.
Do you have more questions? We welcome your thoughts and inquiries!
For More Information: If you need further details, please don't hesitate to reach out through ZETIC.ai's Contact Us.
Join Our Community: Want to share ideas with other developers? Join our Discord community and feel free to leave your comments!
Your participation can help shape the future of on-device AI. We look forward to meeting you in the exciting world of AI!