[Defense] 3D Facial Modeling with Geometry Wrinkles from Images
Thursday, March 30, 2023
10:30 am - 12:00 pm
In Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

Qixin Deng
will defend his dissertation

3D Facial Modeling with Geometry Wrinkles from Images
Abstract
Realistic 3D facial modeling and reconstruction are increasingly used in graphics, animation, and virtual reality applications. However, many existing face models cannot present rich details while deforming; that is, they lack wrinkles when the face shows different expressions. Moreover, creating a realistic face model for an individual requires a complex setup and sophisticated work by experienced artists. The goal of this dissertation is to build an end-to-end system that augments coarse-scale 3D face models and reconstructs realistic faces from in-the-wild images. In this dissertation, I first propose an end-to-end system to automatically augment coarse-scale 3D faces with synthesized fine-scale geometric wrinkles. I define a wrinkle as the displacement value along the vertex normal direction and store it in a displacement map. The distribution of wrinkles has spatial characteristics, and deep convolutional neural networks (DCNNs) excel at learning spatial information from image-format data. I label the wrinkle data with its identity vector and expression vector. By formulating wrinkle generation as a supervised generation task, I implicitly model the continuous space of face wrinkles via a compact generative model, so that plausible face wrinkles can be generated through effective sampling and interpolation in that space. I then introduce a complete pipeline to transfer the synthesized wrinkles between faces with different shapes and topologies. This first work can augment an existing 3D face model with fine-scale details, but creating a realistic human face model remains unsolved. The lighting environment is among the most important information in face images; in particular, the highlights of the face (specular albedo) are sensitive to the shape of the face.
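The idea of storing wrinkles as per-vertex displacements along the normal can be sketched as follows. This is a minimal illustration, not the dissertation's actual code; the function name and nearest-neighbor UV sampling are assumptions for clarity.

```python
import numpy as np

def apply_displacement(vertices, normals, uvs, disp_map):
    """Offset each vertex along its unit normal by the displacement value
    sampled from a single-channel displacement map at the vertex's UV
    coordinate (nearest-neighbor sampling; a real pipeline would use
    bilinear interpolation)."""
    h, w = disp_map.shape
    # Map UV coordinates in [0, 1] to integer pixel indices.
    px = np.clip((uvs[:, 0] * (w - 1)).astype(int), 0, w - 1)
    py = np.clip((uvs[:, 1] * (h - 1)).astype(int), 0, h - 1)
    d = disp_map[py, px]                    # per-vertex displacement value
    return vertices + normals * d[:, None]  # move each vertex along its normal
```

Because the displacement map lives in the shared UV image domain, a DCNN can generate it like any other image, and the same map can be re-applied whenever the coarse mesh deforms.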
Properly modeling complex real-world lighting effects, including specular lighting, shadows, and occlusions, from a single in-the-wild face image is still widely considered an open research challenge. To reconstruct a realistic face model from an in-the-wild image, I propose a convolutional neural network based framework that regresses the face model from a single image in the wild. I designed novel hybrid loss functions to disentangle face shape identity, expression, pose, albedo, and lighting. The output face model includes a dense 3D shape, head pose, expression, diffuse albedo, specular albedo, and the corresponding lighting conditions. Besides a carefully designed ablation study, I also conduct direct comparison experiments showing that our method outperforms state-of-the-art methods both quantitatively and qualitatively.
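A hybrid loss of this kind is typically a weighted sum of per-factor terms. The sketch below is an illustrative assumption only: the term names, components, and weights are hypothetical and are not the dissertation's actual formulation.

```python
import numpy as np

def hybrid_loss(rendered, target, pred_lmk, gt_lmk, params, weights):
    """Illustrative weighted sum of loss terms for single-image face
    regression: a photometric image term, a 2D landmark term, and a
    regularizer on the regressed model coefficients."""
    photo = np.mean((rendered - target) ** 2)                  # image reconstruction
    lmk = np.mean(np.sum((pred_lmk - gt_lmk) ** 2, axis=-1))   # landmark alignment
    reg = sum(np.sum(p ** 2) for p in params.values())         # keep coefficients small
    return (weights["photo"] * photo
            + weights["lmk"] * lmk
            + weights["reg"] * reg)
```

Separate terms (and separate parameter groups for identity, expression, pose, albedo, and lighting) are what allow the network to disentangle these factors during training.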
Thursday, March 30, 2023
10:30 AM - 12:00 PM CT
PGH 550
Dr. Zhigang Deng, Faculty Advisor
Faculty, students, and the general public are invited.

- Location
- Philip Guthrie Hoffman Hall (PGH), 3551 Cullen Blvd, Houston, TX 77204, USA