
BFM 2017: Advanced 3D Face Model

Updated 7 February 2026
  • BFM-2017 is a statistically rigorous 3D Morphable Model representing facial shape, texture, and expression with PCA and Gaussian Process registration.
  • The model corrects age distribution bias and incorporates a continuous expression subspace for realistic face reconstruction and inverse rendering.
  • Its open-source pipeline and extensive benchmark evaluations demonstrate improved landmark accuracy and robust face recognition over earlier models.

The Basel Face Model 2017 (BFM-2017) is a fully open, reproducible, and statistically rigorous 3D Morphable Model (3DMM) of human faces, building upon the original Basel Face Model by incorporating a more representative age distribution, high-quality controlled scans, and a statistical facial expression subspace. The model provides a PCA-based framework for representing shape, color, and expression variations in facial geometry and texture, and is constructed via a principled Gaussian Process Morphable Model (GPMM) registration pipeline. BFM-2017 is supported by an open-source software suite enabling end-to-end workflows from mesh registration to analysis-by-synthesis fitting of 2D images, validated on face analysis benchmarks such as BU-3DFE, Multi-PIE, and LFW (Gerig et al., 2017).

1. Statistical Formulation of BFM-2017

BFM-2017 implements a classical principal component analysis (PCA) 3DMM formulation in which both the shape and the color of a (neutral) face are modeled as linear stochastic subspaces:

  • Vectorized 3D vertex coordinates: $x \in \mathbb{R}^{3m}$
  • Vertex RGB colors: $c \in \mathbb{R}^{3m}$

The models assume:

shape: $x = \mu_s + U_s \alpha,\quad \alpha \sim \mathcal{N}(0, I_n)$

color: $c = \mu_c + U_c \beta,\quad \beta \sim \mathcal{N}(0, I_{n'})$

where $\mu_s$ and $\mu_c$ are the mean shape and mean color, $U_s$ and $U_c$ are matrices whose columns are principal directions, and $\alpha$ and $\beta$ are the shape and color coefficients.

For expressive faces, a third subspace is incorporated:

expression-difference: $\Delta x = U_e \gamma,\quad \gamma \sim \mathcal{N}(0, I_{n_e})$

resulting in

expressive face: $x = \mu_s + U_s \alpha + U_e \gamma$

In BFM-2017, typical dimensions are $n \approx 199$, $n' \approx 199$, and $n_e \approx 159$.
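
This linear formulation can be sketched in a few lines of NumPy. The dimensions and random matrices below are placeholders (the released model ships its learned means and principal directions in an HDF5 file); only the sampling logic follows the equations above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder dimensions matching the text: m vertices, n / n' / n_e components.
m, n, n_prime, n_e = 1000, 199, 199, 159

# Stand-ins for the learned model; in BFM-2017 these are loaded from the model file.
mu_s = rng.standard_normal(3 * m)            # mean shape
U_s = rng.standard_normal((3 * m, n))        # shape principal directions
mu_c = rng.random(3 * m)                     # mean color
U_c = rng.standard_normal((3 * m, n_prime))  # color principal directions
U_e = rng.standard_normal((3 * m, n_e))      # expression-difference directions

def sample_face(rng):
    """Draw a random expressive face: x = mu_s + U_s a + U_e g, c = mu_c + U_c b,
    with all coefficients drawn from a standard normal prior."""
    alpha = rng.standard_normal(n)
    beta = rng.standard_normal(n_prime)
    gamma = rng.standard_normal(n_e)
    x = mu_s + U_s @ alpha + U_e @ gamma     # vertex coordinates, length 3m
    c = mu_c + U_c @ beta                    # vertex colors, length 3m
    return x.reshape(m, 3), c.reshape(m, 3)

verts, colors = sample_face(rng)
```

Because the coefficient priors are standard normal, truncating any coefficient vector to zero recovers the mean face, and scaling it trades plausibility against variability.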

2. Gaussian Process-Based Registration Framework

To establish dense correspondence across all training meshes, BFM-2017 utilizes the Gaussian Process Morphable Model (GPMM) registration framework. Here, a reference mesh $\Gamma_R$ is deformed to match a target scan $\Gamma_T$ via a vector-valued deformation field $u: \Gamma_R \to \mathbb{R}^3$ sampled from a Gaussian process $u \sim \mathcal{GP}(\mu, k)$ with $\mu(x) \equiv 0$ and covariance function $k: \Gamma_R \times \Gamma_R \to \mathbb{R}^{3 \times 3}$.

A low-rank Karhunen–Loève expansion parameterizes the deformation:

$\hat{u}(\alpha, x) = \mu(x) + \sum_{i=1}^{r} \alpha_i \sqrt{\lambda_i}\, \phi_i(x),\quad \alpha_i \sim \mathcal{N}(0, 1)$

Registration solves a MAP estimation problem:

$\min_{\alpha} \|\alpha\|^2 + \frac{1}{\sigma^2} \sum_{x_i \in \Gamma_R} \rho\!\left(\left\| CP_{\Gamma_T}(x_i + \hat{u}(\alpha, x_i)) - (x_i + \hat{u}(\alpha, x_i)) \right\|^2\right)$

where $CP_{\Gamma_T}$ denotes the closest point on the target, and $\rho$ is a robust Huber loss.
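
As an illustration, the low-rank expansion and the MAP energy can be prototyped on toy point sets. The eigenpairs, point sets, and $\sigma$ below are made-up stand-ins, and the closest-point operator is a brute-force nearest-neighbor search rather than a true mesh projection.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy point sets standing in for the reference and target meshes.
ref = rng.standard_normal((50, 3))
tgt = ref + 0.05 * rng.standard_normal((50, 3))

# Hypothetical low-rank GP basis: r eigenpairs (lambda_i, phi_i) at the reference points.
r = 10
lam = np.linspace(1.0, 0.1, r)                 # eigenvalues
phi = rng.standard_normal((r, 50, 3)) / 10.0   # phi_i evaluated at each reference point

def deform(alpha):
    """KL expansion u_hat(alpha, x) = sum_i alpha_i * sqrt(lambda_i) * phi_i(x) (zero mean)."""
    return np.tensordot(alpha * np.sqrt(lam), phi, axes=1)

def huber(t, delta=1.0):
    """Robust loss rho applied to squared residuals t = ||.||^2."""
    return np.where(t <= delta**2, t, 2 * delta * np.sqrt(t) - delta**2)

def objective(alpha, sigma=0.1):
    """MAP energy: ||alpha||^2 + (1/sigma^2) * sum_i rho(||CP(y_i) - y_i||^2)."""
    y = ref + deform(alpha)                    # deformed reference points
    # Brute-force closest-point squared distances to the target set.
    d2 = ((y[:, None, :] - tgt[None, :, :]) ** 2).sum(-1).min(axis=1)
    return alpha @ alpha + huber(d2).sum() / sigma**2
```

In the real pipeline this energy is minimized over $\alpha$ with a quasi-Newton method; here it is only evaluated.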

The covariance kk is assembled from positive-definite kernel building blocks:

  • Multi-scale B-spline kernel (coarse-to-fine deformations)
  • Spatially-varying scale kernel to localize fine detail
  • Mirror symmetry kernel across the sagittal plane
  • Core expression subspace kernel based on empirical expression differences

Combined, these endow the registration with multi-scale, region-specific, symmetry-respecting, and expressive deformation capacities.
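
A minimal sketch of this kernel composition, using scalar Gaussian kernels for brevity: the actual GPMM kernels are matrix-valued, and the real symmetry construction also mirrors the x-component of the deformation, which is omitted here. All names and scales are illustrative.

```python
import numpy as np

def gauss_kernel(s, sigma):
    """Factory for a scalar Gaussian kernel k(x, y) = s * exp(-||x - y||^2 / sigma^2)."""
    def k(x, y):
        return s * np.exp(-np.sum((x - y) ** 2) / sigma**2)
    return k

def add_kernels(*ks):
    """Sums of positive-definite kernels are positive definite (multi-scale prior)."""
    def k(x, y):
        return sum(ki(x, y) for ki in ks)
    return k

def symmetrize(k):
    """Simplified mirror-symmetry kernel across the sagittal (x = 0) plane:
    k_sym(x, y) = k(x, y) + k(mirror(x), y) couples the two face halves."""
    mirror = np.array([-1.0, 1.0, 1.0])
    def k_sym(x, y):
        return k(x, y) + k(mirror * x, y)
    return k_sym

# Coarse + fine Gaussian components, made left/right symmetric.
k_multi = add_kernels(gauss_kernel(1.0, 100.0), gauss_kernel(0.1, 10.0))
k_prior = symmetrize(k_multi)
```

The spatially-varying scale and expression kernels would be added the same way, as further summands in `add_kernels`.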

3. Model Construction and Open-Source Pipeline

The BFM-2017 pipeline is fully open-source, supporting end-to-end workflows:

  1. Preprocessing and Non-Rigid Registration: The mean of the original Basel scans serves as the reference mesh. Each scan is annotated with 23 manual landmarks and optional lines. Landmarks are integrated into the GP prior via Gaussian process regression. The registration MAP problem is solved iteratively with successively lowered regularization ($\eta$ from $10^{-1}$ to $10^{-5}$), discarding outliers and employing a robust Huber loss with BFGS-style optimization.
  2. BU-3DFE Demonstration: Registration is validated on BU-3DFE (100 subjects, 6 expressions at level 4), using ICP to transfer the richer 83-point F3D landmark set and measuring fit accuracy with respect to the supplied annotations.
  3. Model Building and 2D Analysis-by-Synthesis: The PCA subspaces $U_s$, $U_c$, and $U_e$ are computed from 200 neutral scans (100 male, 100 female) and 200 high-quality textures. Missing data in the textures is addressed via masking and a small-scale color prior kernel $k_{cs}$. The finalized model is integrated into a probabilistic analysis-by-synthesis framework, enabling joint estimation of $\alpha$, $\gamma$, and $\beta$ as well as illumination (via spherical harmonics) and pose from 2D images with MCMC optimization.
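
The regularization annealing in step 1 can be sketched as a warm-started loop over decreasing $\eta$, with each solution initializing the next, less-regularized solve. The quadratic data term below is a hypothetical stand-in for the robust closest-point energy.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Toy stand-in for the data term of the registration energy (in the real
# pipeline: a sum of robust closest-point residuals over the reference mesh).
target = rng.standard_normal(20)

def data_term(alpha):
    return np.sum((alpha - target) ** 2)

def register(etas=(1e-1, 1e-2, 1e-3, 1e-4, 1e-5)):
    """Coarse-to-fine MAP registration: solve min_a eta*||a||^2 + data(a),
    re-using each solution to warm-start the next regularization level."""
    alpha = np.zeros_like(target)
    for eta in etas:
        res = minimize(lambda a: eta * (a @ a) + data_term(a), alpha, method="BFGS")
        alpha = res.x
    return alpha

alpha_hat = register()
```

High early regularization keeps the deformation close to the prior; the final low $\eta$ lets the data term dominate, analogous to the $10^{-1} \to 10^{-5}$ schedule described above.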

The pipeline, benchmarked on Multi-PIE and LFW, demonstrates generality and robust 2D-to-3D reconstructions.

4. Population Age Distribution and Expression Subspace

BFM-2017 corrects the age-distribution bias of the earlier BFM-2009 by resampling to match EU-28 demographics (Eurostat 2013), in particular increasing representation of the 40–80 age bracket. This adjustment yields better modeling of age-associated facial phenomena such as wrinkles and sagging.

For expressions, BFM-2017 transitions from hard-coded, discrete templates to a continuous statistical expression subspace $U_e$, derived from 160 expression scans sampled equally from six basic expression classes. The covariance kernel $k_{sm}(\cdot, \cdot)$ learned from these data yields a continuous (dimension $\approx 159$) expression subspace, within which arbitrary linear combinations produce plausible deformations.
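
Building such a subspace from expression-minus-neutral differences amounts to a PCA of the difference vectors. A sketch with synthetic stand-in scans (the real pipeline uses registered BFM meshes, and the component count 159 follows the text):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-ins: 160 expression scans paired with their subjects' neutral scans,
# each vectorized to length 3m.
m = 200
neutral = rng.standard_normal((160, 3 * m))
express = neutral + 0.1 * rng.standard_normal((160, 3 * m))

# Expression subspace = PCA of the expression-minus-neutral difference vectors.
D = express - neutral
D = D - D.mean(axis=0)                   # center the difference samples
U, S, Vt = np.linalg.svd(D, full_matrices=False)

k = 159                                  # components retained, as in BFM-2017
# Scale columns by per-component standard deviations so gamma ~ N(0, I).
U_e = Vt[:k].T * (S[:k] / np.sqrt(len(D) - 1))

def expressive(x_neutral, gamma):
    """x = x_neutral + U_e @ gamma: add a statistical expression offset."""
    return x_neutral + U_e @ gamma
```

Because the subspace is continuous, interpolating or mixing coefficient vectors yields in-between expressions rather than switching between discrete templates.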

5. Quantitative Evaluation

BFM-2017 demonstrates enhanced performance on standard benchmarks relative to previous models:

Landmark registration error (mean ± std):

| Region | BFM-2017 (mm) | Salazar et al. (mm) |
| --- | --- | --- |
| Left eyebrow | 4.69 ± 4.64 | 6.25 ± 1.84 |
| Right eyebrow | 5.35 ± 4.69 | 6.75 ± 3.51 |
| Left eye | 3.10 ± 3.43 | 3.25 ± 1.84 |
| Right eye | 3.33 ± 3.53 | 3.81 ± 2.06 |
| Nose | 3.94 ± 2.58 | 3.96 ± 2.22 |
| Mouth | 3.66 ± 3.13 | 5.69 ± 4.45 |
| Chin | 11.37 ± 5.85 | 7.22 ± 4.73 |
| Left contour | 12.52 ± 6.04 | 18.48 ± 8.52 |
| Right contour | 10.76 ± 5.34 | 17.36 ± 9.17 |

Face recognition via inverse rendering on Multi-PIE reports rank-1 rates (%):

| Model | 15° | 30° | 45° | Smile |
| --- | --- | --- | --- | --- |
| BFM ’17 | 98.8 | 98.0 | 90.0 | 87.6 |
| BFM ’09 | 97.6 | 95.2 | 89.6 | — |
| BU-3DFE | 90.4 | 82.7 | 68.7 | 59.4 |

These results demonstrate that BFM-2017 reduces landmark registration error and generalizes better in inverse rendering-based face recognition than its predecessors.

6. Applications and Availability

BFM-2017 supports a spectrum of research and commercial use cases, including:

  • 3D face reconstruction from single images (joint estimation of shape, texture, expression, lighting, pose)
  • Face recognition tolerant to pose and expression variations
  • Synthesis of age and expression for graphics and animation
  • Regularization or supervision for deep learning pipelines in 3D face analysis

The complete registration and model-building pipeline, along with all source code and released BFM-2017 meshes, are publicly accessible at https://github.com/unibas-gravis/basel-face-pipeline and http://gravis.dmi.unibas.ch/pmm/ (Gerig et al., 2017).

References

Gerig, T., Morel-Forster, A., Blumer, C., Egger, B., Lüthi, M., Schönborn, S., and Vetter, T. (2017). Morphable Face Models - An Open Framework. arXiv:1709.08398.
