eMotion-GAN

A Motion-based GAN for Photorealistic and Facial Expression Preserving Frontal View Synthesis

Omar Ikne, Benjamin Allaert, Ioan Marius Bilasco, Hazem Wannous

IMT Nord Europe, Institut Mines-Télécom, Univ. Lille, Centre for Digital Systems, F-59000 Lille, France.

Paper · Code · Colab Demo · Citation

Overview

eMotion-GAN is a novel motion-based Generative Adversarial Network for photorealistic frontal view synthesis that preserves facial expressions. Our approach disentangles facial motion from identity and appearance, enabling high-quality frontalization while maintaining the original expression dynamics. The method achieves state-of-the-art performance on both frontal view synthesis and cross-subject facial motion transfer.
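
At a high level, the pipeline operates in the motion domain rather than the appearance domain: facial dynamics are captured as dense optical flow, frontalized by a first generator, and then rendered back to appearance by a second one. The sketch below illustrates this flow of data, assuming a PyTorch setting; `motion_g`, `synthesis_g`, and the Farneback flow estimator are illustrative stand-ins, not the repository's actual interfaces.

```python
# Minimal sketch of the two-stage motion-domain pipeline (illustrative only).
import cv2
import numpy as np
import torch

def farneback_flow(prev_gray: np.ndarray, next_gray: np.ndarray) -> torch.Tensor:
    """Dense optical flow between two grayscale frames, as a (2, H, W) tensor."""
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, next_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    return torch.from_numpy(flow).permute(2, 0, 1).float()

@torch.no_grad()
def frontalize_sequence(frames, motion_g, synthesis_g, neutral_frontal):
    """frames: list of (H, W) uint8 grayscale frames of a non-frontal sequence.
    neutral_frontal: (1, 3, H, W) neutral frontal face of the same subject."""
    outputs = []
    for prev, nxt in zip(frames, frames[1:]):
        # 1) Represent facial dynamics as dense optical flow between frames.
        flow = farneback_flow(prev, nxt).unsqueeze(0)  # (1, 2, H, W)
        # 2) Stage 1: filter out pose-related motion and map the
        #    expression-related motion to the frontal plane.
        frontal_flow = motion_g(flow)
        # 3) Stage 2: warp the neutral frontal face with the frontalized
        #    motion to synthesize a photorealistic frontal frame.
        outputs.append(synthesis_g(neutral_frontal, frontal_flow))
    return torch.cat(outputs)
```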

Figure: Overview of the eMotion-GAN framework

Abstract

Facial expression recognition (FER) systems often suffer significant performance degradation under head pose variations, a pervasive challenge in real-world applications ranging from healthcare monitoring to human–computer interaction. While existing frontal view synthesis (FVS) methods attempt to address this issue, they operate predominantly in the appearance domain and often introduce artifacts that distort the subtle motion patterns crucial for accurate expression analysis. We present eMotion-GAN, a two-stage generative framework that rethinks frontalization in the motion domain by decomposing facial dynamics into two distinct components: (1) expression-related motion stemming from muscle activity, and (2) pose-related motion, treated as noise. We conducted extensive evaluations on several widely used dynamic FER datasets containing sequences with head pose variations of varying intensity and orientation. Our results show that our approach significantly narrows the FER performance gap between frontal and non-frontal faces: FER accuracy improves by up to +5% for small pose variations and by up to +20% for larger ones.
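
For intuition, here is a minimal sketch of how the stage-1 motion frontalizer could be trained, assuming paired non-frontal/frontal flow fields: an adversarial term pushes the predicted flow toward realistic frontal motion, while an L1 term anchors it to the ground-truth frontal flow so that expression dynamics are preserved and pose-induced motion is discarded. The loss mix and the weight `lam` are our assumptions, not the paper's exact objective.

```python
# Illustrative generator loss for the stage-1 motion frontalizer
# (hypothetical objective; the paper's exact losses and weights may differ).
import torch
import torch.nn.functional as F

def motion_g_loss(motion_g, disc, flow_nonfrontal, flow_frontal_gt, lam=10.0):
    pred = motion_g(flow_nonfrontal)   # predicted frontal expression flow
    logits = disc(pred)
    # Adversarial term: the predicted flow should look like real frontal motion.
    adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    # Reconstruction term: stay close to the ground-truth frontal flow,
    # preserving expression motion while suppressing pose motion.
    rec = F.l1_loss(pred, flow_frontal_gt)
    return adv + lam * rec
```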

Explainer Video

Watch our explainer video for a detailed walkthrough and additional results:

Interactive Demo

Try our interactive visualization demo using Google Colab (no GPU needed):

Open Colab Demo

Frontal View Synthesis & Expression Preservation

eMotion-GAN generates photorealistic frontal views while preserving the original facial expressions, even under extreme poses and lighting conditions.

Figure: Comparison of frontalization results with state-of-the-art methods

Animation: Frontal view synthesis

Motion Frontalization & Expression Embedding

Cross-subject Facial Motion Transfer

Our model can transfer facial motion from a source subject to a target subject while maintaining the target's identity and the source's expression dynamics.
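
In practice this amounts to driving the target's neutral frontal face with motion extracted from the source sequence. A hypothetical usage sketch follows; the function names are ours rather than the repository's public API, and `farneback_flow` is the helper from the overview sketch above.

```python
import torch

@torch.no_grad()
def transfer_motion(source_frames, target_neutral, motion_g, synthesis_g):
    """source_frames: list of (H, W) grayscale frames of the *source* subject.
    target_neutral:  (1, 3, H, W) neutral frontal face of the *target* subject."""
    animated = []
    for prev, nxt in zip(source_frames, source_frames[1:]):
        # Source expression motion (farneback_flow: see the overview sketch).
        flow = farneback_flow(prev, nxt).unsqueeze(0)
        frontal_flow = motion_g(flow)  # frontalized, pose-free motion
        # Identity comes from the target's neutral face; expression dynamics
        # come entirely from the source's motion field.
        animated.append(synthesis_g(target_neutral, frontal_flow))
    return torch.cat(animated)
```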

Figure: Motion transfer examples

Cross-Category Animation Examples

Our approach generalizes across diverse face categories, animating them while preserving expression dynamics:

Artworks: Expression transfer to classical portrait paintings
Celebrities: Motion transfer on real-world celebrity faces
Cartoons: Realistic expression application to animated characters
Drawings: Animating sketch and illustration faces

Citation

If you find this work useful, please cite our paper:

```bibtex
@article{ikne2025emotion,
  title={eMotion-GAN: A motion-based GAN for photorealistic and facial expression preserving frontal view synthesis},
  author={Ikne, Omar and Allaert, Benjamin and Bilasco, Ioan Marius and Wannous, Hazem},
  journal={Computer Vision and Image Understanding},
  pages={104555},
  year={2025},
  publisher={Elsevier}
}
```