BFM 2017: Advanced 3D Face Model
- BFM-2017 is a statistically rigorous 3D Morphable Model representing facial shape, texture, and expression with PCA and Gaussian Process registration.
- The model corrects age distribution bias and incorporates a continuous expression subspace for realistic face reconstruction and inverse rendering.
- Its open-source pipeline and extensive benchmark evaluations demonstrate improved landmark accuracy and robust face recognition over earlier models.
The Basel Face Model 2017 (BFM-2017) is a fully open, reproducible, and statistically rigorous 3D Morphable Model (3DMM) of human faces. It builds upon the original Basel Face Model by incorporating a more representative age distribution, high-quality controlled scans, and a statistical facial expression subspace. The model provides a PCA-based framework for representing shape, color, and expression variations in facial geometry and texture, and is constructed via a principled Gaussian Process Morphable Model (GPMM) registration pipeline. BFM-2017 is supported by an open-source software suite enabling end-to-end workflows from mesh registration to analysis-by-synthesis fitting of 2D images, validated on face analysis benchmarks such as BU-3DFE, Multi-PIE, and LFW (Gerig et al., 2017).
1. Statistical Formulation of BFM-2017
BFM-2017 implements a classical principal component analysis (PCA) 3DMM formulation in which both the shape and the color of a (neutral) face are modeled as linear stochastic subspaces:
- Vectorized 3D vertex coordinates: $\mathbf{s} = (x_1, y_1, z_1, \ldots, x_N, y_N, z_N)^\top \in \mathbb{R}^{3N}$
- Vertex RGB colors: $\mathbf{c} = (r_1, g_1, b_1, \ldots, r_N, g_N, b_N)^\top \in \mathbb{R}^{3N}$
The models assume:
$$\mathbf{s} = \boldsymbol{\mu}_s + \mathbf{U}_s \boldsymbol{\alpha}, \qquad \mathbf{c} = \boldsymbol{\mu}_c + \mathbf{U}_c \boldsymbol{\beta},$$
where $\boldsymbol{\mu}_s$ and $\boldsymbol{\mu}_c$ are the mean shape and mean color, $\mathbf{U}_s$ and $\mathbf{U}_c$ are matrices whose columns are principal directions, and $\boldsymbol{\alpha}$ and $\boldsymbol{\beta}$ are the shape and color coefficients, modeled as standard normal after scaling by the per-component standard deviations.
For expressive faces, a third subspace is incorporated as an additive offset on the neutral shape, resulting in
$$\mathbf{s} = \boldsymbol{\mu}_s + \mathbf{U}_s \boldsymbol{\alpha} + \mathbf{U}_e \boldsymbol{\delta}.$$
In the released BFM-2017, the shape and color models each comprise 199 principal components and the expression model 100.
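The linear model above can be sketched numerically. The snippet below is a minimal toy illustration with random stand-in matrices, not the released BFM-2017 data; `mu_s`, `U_s`, and `sigma_s` are hypothetical placeholders for the mean shape, principal directions, and per-component standard deviations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the PCA shape model: N vertices, r principal components.
# (The released BFM-2017 uses 199 shape/color and 100 expression components.)
N, r = 5, 3
mu_s = rng.normal(size=3 * N)                        # mean shape (x1, y1, z1, ...)
U_s, _ = np.linalg.qr(rng.normal(size=(3 * N, r)))   # orthonormal principal directions
sigma_s = np.array([3.0, 2.0, 1.0])                  # per-component std. deviations

def sample_shape(alpha):
    """s = mu + U diag(sigma) alpha -- the linear PCA face model."""
    return mu_s + U_s @ (sigma_s * alpha)

s_mean = sample_shape(np.zeros(r))      # zero coefficients recover the mean face
s_rand = sample_shape(rng.normal(size=r))
```

Setting all coefficients to zero recovers the mean face; sampling coefficients from a standard normal draws random plausible shapes under the model's Gaussian prior.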
2. Gaussian Process-Based Registration Framework
To establish dense correspondence across all training meshes, BFM-2017 utilizes the Gaussian Process Morphable Model (GPMM) registration framework. Here, a reference mesh $\Gamma_R \subset \mathbb{R}^3$ is deformed to match a target scan $\Gamma_T$ via a vector-valued deformation field $u: \Gamma_R \to \mathbb{R}^3$ sampled from a Gaussian process $u \sim \mathcal{GP}(\mu, k)$ with mean $\mu: \Gamma_R \to \mathbb{R}^3$ and matrix-valued covariance function $k: \Gamma_R \times \Gamma_R \to \mathbb{R}^{3 \times 3}$.
A low-rank Karhunen–Loève expansion parameterizes the deformation:
$$u_{\boldsymbol{\alpha}} = \mu + \sum_{i=1}^{r} \alpha_i \sqrt{\lambda_i}\, \phi_i, \qquad \alpha_i \sim \mathcal{N}(0, 1),$$
where $(\lambda_i, \phi_i)$ are the leading eigenvalue–eigenfunction pairs of $k$.
Registration solves a MAP estimation problem of the form
$$\hat{\boldsymbol{\alpha}} = \arg\min_{\boldsymbol{\alpha}} \sum_{x \in \Gamma_R} \ell\big(\,\| \mathrm{CP}_{\Gamma_T}(x + u_{\boldsymbol{\alpha}}(x)) - (x + u_{\boldsymbol{\alpha}}(x)) \|\,\big) + \eta\, \|\boldsymbol{\alpha}\|^2,$$
where $\mathrm{CP}_{\Gamma_T}(\cdot)$ denotes the closest point on the target, $\ell$ is a robust Huber loss, and $\eta$ weights the Gaussian process prior.
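This MAP objective can be prototyped with a nearest-neighbor closest-point lookup and a Huber loss. The following is a sketch over toy data, not the pipeline's actual implementation; `X`, `Phi`, and `lam` are placeholders for the reference points, eigenfunctions, and eigenvalues of the low-rank expansion:

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Toy low-rank deformation model: reference points X (n, 3),
# basis Phi (n, 3, r) with eigenvalue scales lam (r,).
n, r = 40, 4
X = rng.normal(size=(n, 3))
Phi = rng.normal(size=(n, 3, r)) / np.sqrt(n)
lam = np.array([4.0, 2.0, 1.0, 0.5])
target = X + 0.1 * rng.normal(size=(n, 3))   # synthetic "target scan"
tree = cKDTree(target)                       # fast closest-point queries

def huber(d, delta=1.0):
    """Robust Huber loss: quadratic near zero, linear in the tails."""
    return np.where(d <= delta, 0.5 * d**2, delta * (d - 0.5 * delta))

def objective(alpha, eta=0.1):
    deformed = X + Phi @ (np.sqrt(lam) * alpha)   # x + u_alpha(x)
    d, _ = tree.query(deformed)                   # closest-point distances
    return huber(d).sum() + eta * alpha @ alpha   # data term + GP prior

res = minimize(objective, np.zeros(r), method="BFGS")
```

In the real pipeline this inner problem is re-solved while lowering the regularization weight, so coarse alignment precedes fine detail.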
The covariance is assembled from positive-definite kernel building blocks:
- Multi-scale B-spline kernel (coarse-to-fine deformations)
- Spatially-varying scale kernel to localize fine detail
- Mirror symmetry kernel across the sagittal plane
- Core expression subspace kernel based on empirical expression differences
Combined, these endow the registration with multi-scale, region-specific, symmetry-respecting, and expressive deformation capacities.
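A minimal sketch of how such kernel building blocks compose, using a 1-D surrogate domain instead of a mesh surface and Gaussian kernels instead of the paper's B-spline and vector-valued constructions (all names and constants here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(30, 1))             # 1-D surrogate for mesh points
scales = [(1.0, 0.8), (0.5, 0.3), (0.25, 0.1)]   # (weight, length-scale) pairs

def gauss(A, B, s):
    """Gaussian kernel matrix between point sets A and B."""
    d2 = (A[:, None, 0] - B[None, :, 0]) ** 2
    return np.exp(-d2 / s**2)

def multiscale(A, B):
    """Coarse-to-fine sum of kernels (analogue of the multi-scale block)."""
    return sum(w * gauss(A, B, s) for w, s in scales)

# Symmetry: averaging the kernel with its mirrored version (x -> -x here)
# preserves positive semi-definiteness and favors symmetric deformations.
K = 0.5 * (multiscale(X, X) + multiscale(-X, X))

# Low-rank Karhunen-Loeve expansion: the top eigenpairs of K parameterize
# deformations u_alpha; one random sample is drawn below.
lam, phi = np.linalg.eigh(K)
lam, phi = lam[::-1], phi[:, ::-1]
u = phi[:, :5] @ (np.sqrt(np.clip(lam[:5], 0, None)) * rng.normal(size=5))
```

Because sums and mirror-averages of positive-definite kernels remain positive semi-definite, such blocks can be freely combined into a single prior.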
3. Model Construction and Open-Source Pipeline
The BFM-2017 pipeline is fully open-source, supporting end-to-end workflows:
- Preprocessing and Non-Rigid Registration: The mean of the original Basel scans serves as the reference mesh. Each scan is annotated with 23 manual landmarks and optional lines. Landmarks are integrated into the GP prior via Gaussian process regression. The registration MAP problem is solved iteratively with a successively lowered regularization weight η, discarding outliers and employing a robust Huber loss with BFGS-style optimization.
- BU-3DFE Demonstration: Registration is validated on BU-3DFE (100 subjects, 6 expressions at level 4), using ICP to transfer the richer 83-point F3D landmark sets and measuring fit accuracy with respect to the supplied annotations.
- Model Building and 2D Analysis-by-Synthesis: The PCA subspaces $\mathbf{U}_s$, $\mathbf{U}_c$, and $\mathbf{U}_e$ are computed from 200 neutral scans (100 male, 100 female) and 200 high-quality textures. Missing data in the textures is addressed via masking and a small-scale color prior kernel. The finalized model is integrated into a probabilistic analysis-by-synthesis framework, enabling joint estimation of $\boldsymbol{\alpha}$, $\boldsymbol{\beta}$, and $\boldsymbol{\delta}$, as well as illumination (via spherical harmonics) and pose, from 2D images with MCMC optimization.
The pipeline, benchmarked on Multi-PIE and LFW, demonstrates generality and robust 2D-to-3D reconstructions.
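The sampling-based fitting idea can be caricatured with a random-walk Metropolis sampler over the coefficients of a linear "renderer". Everything below is a toy stand-in for the paper's rendering-based likelihood; the structure (propose, score, accept or reject) is what matters:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy analysis-by-synthesis: "rendering" is a linear map, the observed
# image is a noisy rendering of some unknown true coefficients.
r = 3
U = rng.normal(size=(10, r))
alpha_true = np.array([1.0, -0.5, 0.25])
image = U @ alpha_true + 0.05 * rng.normal(size=10)

def log_post(alpha, s2=0.05**2):
    """Gaussian image likelihood plus a standard-normal coefficient prior."""
    resid = image - U @ alpha
    return -0.5 * (resid @ resid) / s2 - 0.5 * alpha @ alpha

# Random-walk Metropolis over the coefficients.
alpha = np.zeros(r)
lp = log_post(alpha)
for _ in range(5000):
    prop = alpha + 0.05 * rng.normal(size=r)      # symmetric proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:      # Metropolis acceptance
        alpha, lp = prop, lp_prop
```

In the actual framework the likelihood compares a full 3D rendering against the input image, and the state additionally includes color, expression, pose, and spherical-harmonics illumination parameters.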
4. Population Age Distribution and Expression Subspace
BFM-2017 corrects the age distribution bias of earlier BFM-2009 by resampling to match the EU-28 demographics (Eurostat 2013), especially increasing representation of the 40–80 age bracket. This adjustment results in better modeling of age-associated facial phenomena such as wrinkles and sagging.
For expressions, BFM-2017 transitions from hard-coded, discrete templates to a continuous statistical expression subspace $\mathbf{U}_e$, derived from 160 expression scans sampled equally from six basic expression classes. The covariance kernel learned from these data yields a continuous expression subspace (100 dimensions in the released model), within which arbitrary linear combinations produce plausible deformations.
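A plausible sketch of how such an expression basis is obtained: PCA on per-subject deltas (expressive scan minus the same subject's neutral scan). The data here are synthetic placeholders, and the component counts are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic expression deltas: expressive minus neutral, one row per scan.
n_scans, dim = 160, 90
deltas = rng.normal(size=(n_scans, dim)) @ np.diag(np.linspace(2, 0.1, dim))

mu_e = deltas.mean(axis=0)
D = deltas - mu_e

# PCA of the centered deltas via SVD: rows of Vt are principal directions.
_, s, Vt = np.linalg.svd(D, full_matrices=False)
U_e = Vt[:50].T                       # keep the leading expression components
sigma_e = s[:50] / np.sqrt(n_scans - 1)

def add_expression(neutral, delta_coeffs):
    """Apply an expression offset to a neutral shape vector."""
    return neutral + mu_e + U_e @ (sigma_e * delta_coeffs)
```

Because the basis is continuous, interpolating or mixing coefficients yields intermediate expressions rather than switching between discrete templates.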
5. Quantitative Evaluation
BFM-2017 demonstrates enhanced performance on standard benchmarks relative to previous models:
| Region | BFM-2017 (mm) | Salazar et al. (mm) |
|---|---|---|
| Left eyebrow | 4.69 ± 4.64 | 6.25 ± 1.84 |
| Right eyebrow | 5.35 ± 4.69 | 6.75 ± 3.51 |
| Left eye | 3.10 ± 3.43 | 3.25 ± 1.84 |
| Right eye | 3.33 ± 3.53 | 3.81 ± 2.06 |
| Nose | 3.94 ± 2.58 | 3.96 ± 2.22 |
| Mouth | 3.66 ± 3.13 | 5.69 ± 4.45 |
| Chin | 11.37 ± 5.85 | 7.22 ± 4.73 |
| Left contour | 12.52 ± 6.04 | 18.48 ± 8.52 |
| Right contour | 10.76 ± 5.34 | 17.36 ± 9.17 |
Face recognition via inverse rendering on Multi-PIE yields the following rank-1 identification rates (%):
| Model | 15° | 30° | 45° | Smile |
|---|---|---|---|---|
| BFM ’17 | 98.8 | 98.0 | 90.0 | 87.6 |
| BFM ’09 | 97.6 | 95.2 | 89.6 | — |
| BU-3DFE | 90.4 | 82.7 | 68.7 | 59.4 |
These results demonstrate BFM-2017's reduced landmark registration error in most facial regions (the chin being a notable exception) and its improved generalization in inverse rendering-based face recognition compared to antecedent models.
6. Applications and Availability
BFM-2017 supports a spectrum of research and commercial use cases, including:
- 3D face reconstruction from single images (joint estimation of shape, texture, expression, lighting, pose)
- Face recognition tolerant to pose and expression variations
- Synthesis of age and expression for graphics and animation
- Regularization or supervision for deep learning pipelines in 3D face analysis
The complete registration and model-building pipeline, along with all source code and released BFM-2017 meshes, are publicly accessible at https://github.com/unibas-gravis/basel-face-pipeline and http://gravis.dmi.unibas.ch/pmm/ (Gerig et al., 2017).