
Three-Phase Gradient Fusion (TPGF)

Updated 12 January 2026
  • TPGF is an optimization mechanism that fuses client and server gradients to overcome heterogeneous encoder depths and intermittent connectivity in federated learning.
  • The method employs a three-phase process—local update, server-side computation, and gradient fusion—that dynamically weights gradient signals to accelerate convergence.
  • Empirical results demonstrate 2–5× faster convergence, up to 20× lower communication costs, and improved robustness in resource-variable, distributed training environments.

Three-Phase Gradient Fusion (TPGF) is an optimization mechanism introduced in the SuperSFL federated split learning framework to address critical bottlenecks encountered in distributed training across heterogeneous edge devices. TPGF coordinates local updates, server-side computation, and gradient fusion to accelerate convergence and enhance fault tolerance. The mechanism is specifically designed to mitigate issues arising from heterogeneous encoder depths and intermittent client-server connectivity, two pervasive challenges in real-world federated split learning deployments (Asif et al., 5 Jan 2026).

1. Problem Setting and Motivations

SuperSFL targets scenarios in which distributed clients possess varied computational capacities and network conditions, resulting in heterogeneous encoder depths (different numbers of layers trained locally per client) and unreliable connectivity. In standard split learning, shallow clients, which only train a small prefix of the global network, receive limited supervision from deep layers, leading to slow and unstable convergence. Additionally, when client-server connections fail, client training is stalled, causing resource wastage.

The Three-Phase Gradient Fusion mechanism was designed to address:

  • Heterogeneous encoder depths: Ensures every client, regardless of local depth, benefits from both local and deep-layer supervision.
  • Intermittent connectivity: Enables continuous encoder training through local supervision when server-side gradients are unavailable, with seamless integration upon reconnection.

TPGF achieves robust client optimization by producing, fusing, and applying two complementary gradient signals—one from client-local supervision, the other from server-computed deep-layer gradients. This suggests the approach is particularly advantageous in highly variable edge environments where uniform resource allocation and stable connectivity cannot be assumed.

2. Algorithmic Breakdown of the Three Phases

The TPGF workflow for each client comprises three distinct computational phases per batch:

Phase 1: Client-Side Local Update

  • The client computes a forward pass through its local encoder:

z_i^c = f_{\theta_i}(x_i)

  • A local classifier predicts labels:

\hat{y}_i = h_{\phi_i}(z_i^c)

  • The client-side loss (cross-entropy):

\mathcal{L}_{\text{client}} = \mathrm{CE}(\hat{y}_i, y_i)

  • Client classifier parameters are updated:

\phi_i \leftarrow \phi_i - \eta \nabla_{\phi_i}\mathcal{L}_{\text{client}}

  • The gradient w.r.t. encoder parameters is computed and clipped:

g_{\text{client}} = \mathrm{clip}_{\ell_2}(\nabla_{\theta_i}\mathcal{L}_{\text{client}},\,\tau)
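The clipping step above is a standard ℓ2-norm projection. A minimal NumPy sketch (the function name and array representation are illustrative, not from the paper):

```python
import numpy as np

def clip_l2(g, tau):
    """Scale gradient g down so its l2 norm is at most tau (Phase 1 clipping)."""
    norm = np.linalg.norm(g)
    if norm > tau:
        return g * (tau / norm)
    return g
```

Gradients within the threshold pass through unchanged; larger ones are rescaled to norm exactly τ, which bounds the local update magnitude without changing its direction.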

Phase 2: Server-Side Computation

  • The client sends z_i^c to the server.
  • The server performs further forward passes and produces predictions:

z_i^s = f_{\theta_s}(z_i^c),\quad \hat{y}_s = h_{\phi_s}(z_i^s)

  • Server-side loss computation:

\mathcal{L}_{\text{server}} = \mathrm{CE}(\hat{y}_s, y_i)

  • Server-side model updates:

\theta_s \leftarrow \theta_s - \eta \nabla_{\theta_s}\mathcal{L}_{\text{server}}, \quad \phi_s \leftarrow \phi_s - \eta \nabla_{\phi_s}\mathcal{L}_{\text{server}}

  • The server returns the gradient on smashed data:

g_z = \nabla_{z_i^c}\mathcal{L}_{\text{server}}

  • The client backpropagates g_z through its encoder:

g_{\text{server}} = \nabla_{\theta_i}\mathcal{L}_{\text{server}}
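The smashed-data exchange in Phase 2 can be illustrated with a deliberately tiny linear model: one linear layer per side and a squared loss in place of cross-entropy, purely for brevity. All names and shapes here are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3)           # client input
W_c = rng.normal(size=(4, 3))    # client "encoder": one linear layer
w_s = rng.normal(size=4)         # server "head": scalar output for simplicity
y = 1.0                          # target

z = W_c @ x                      # smashed data sent to the server
y_hat = w_s @ z                  # server forward pass
L_server = 0.5 * (y_hat - y) ** 2  # squared loss stands in for CE

g_z = (y_hat - y) * w_s          # server returns dL/dz to the client
g_server = np.outer(g_z, x)      # client chain rule: dL/dW_c = g_z x^T
```

The key point is that the server never sees x or W_c: it returns only g_z, and the client completes backpropagation through its own encoder locally.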

Phase 3: Gradient Fusion and Encoder Update

  • Fusion weights are computed based on encoder depth and inverse loss values:

w_{\text{client}} = \frac{d_i}{d_i + d_s}\cdot\frac{(\mathcal{L}_{\text{client}} + \epsilon)^{-1}}{(\mathcal{L}_{\text{client}} + \epsilon)^{-1} + (\mathcal{L}_{\text{server}} + \epsilon)^{-1}}

w_{\text{server}} = 1 - w_{\text{client}}

  • Gradients are fused:

\nabla_{\theta_i} = w_{\text{client}}\,g_{\text{client}} + w_{\text{server}}\,g_{\text{server}}

  • Encoder parameters are updated:

\theta_i \leftarrow \theta_i - \eta \nabla_{\theta_i}
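The weight formula above combines a depth ratio with an inverse-loss share. A direct transcription (function name and defaults are illustrative):

```python
def fusion_weights(d_i, d_s, L_client, L_server, eps=1e-8):
    """Phase 3 fusion weights: depth ratio times inverse-loss share.

    d_i, d_s: client and server encoder depths
    L_client, L_server: current loss values; eps guards division by zero
    """
    inv_c = 1.0 / (L_client + eps)
    inv_s = 1.0 / (L_server + eps)
    w_client = (d_i / (d_i + d_s)) * inv_c / (inv_c + inv_s)
    return w_client, 1.0 - w_client
```

Note that under this formula w_client can never exceed the depth ratio d_i/(d_i + d_s), so a shallow client (small d_i) always assigns most of the fused gradient to server-side supervision, while a lower client loss shifts weight back toward the local signal.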

3. Optimization Objective and Convergence

SuperSFL globally optimizes the sum of client- and server-side losses across all clients:

\min_{\{\theta_i, \phi_i\},\, \theta_s, \phi_s}\sum_{i=1}^N \Bigl[ \mathcal{L}_{\text{client}}^i(\theta_i, \phi_i) + \mathcal{L}_{\text{server}}^i(\theta_i, \theta_s, \phi_s)\Bigr]

Each local encoder update uses the fused gradient produced by TPGF, modulated by adaptive weights that reflect both structural depth and supervision quality. The empirical evaluation reported 2–5× faster convergence in terms of communication rounds, together with higher accuracy than conventional SFL. The reduction in global communication rounds yields up to 20× lower total communication cost and up to 13× shorter training time (Asif et al., 5 Jan 2026).

4. Robustness to Heterogeneity and Connectivity Failures

TPGF facilitates uninterrupted and efficient model training under device and network heterogeneity by:

  • leveraging both local and server gradients, which stabilizes updates for shallow clients that would otherwise have weak supervision;
  • enabling local encoder and classifier updates when the server is intermittently unreachable (Phase 1 fallback), thus utilizing client computation without delay;
  • seamlessly re-integrating server gradients upon reconnection, by fusing accumulated local updates and fresh remote supervision.
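The fallback behavior described above amounts to a simple branch in the encoder update. A scalar sketch under stated assumptions (the function name, scalar parameters, and the convention that an unreachable server is signaled by g_server = None are all illustrative):

```python
def fused_encoder_step(theta, g_client, g_server, w_client, w_server, eta):
    """One encoder update; falls back to pure local supervision when the
    server-side gradient is unavailable (g_server is None)."""
    if g_server is None:
        g = g_client                              # offline: Phase 1 only
    else:
        g = w_client * g_client + w_server * g_server  # Phase 3 fusion
    return theta - eta * g
```

Because the offline branch reuses the same update rule with the local gradient alone, reconnection requires no special handling: the next batch with a reachable server simply takes the fused branch again.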

A plausible implication is that TPGF constitutes a generalized strategy for mitigating convergence bottlenecks posed by dynamic resource allocation in federated settings.

5. Integration with Weight-Sharing Super-Networks

SuperSFL’s use of weight-sharing super-networks provides clients with dynamically allocated, resource-aware subnetworks, preserving structural alignment across the federation. TPGF updates only those encoder layers shared among clients, enabling smooth coordination of parameter updates despite non-uniform client model structures.

The collaborative client-server aggregation and subnetwork allocation synergistically enhance both the data efficiency and training stability afforded by TPGF.

6. Empirical Performance and Practical Implications

Experiments reported in (Asif et al., 5 Jan 2026) evaluated TPGF on CIFAR-10 and CIFAR-100 datasets with up to 100 heterogeneous clients, showing notable improvements:

Metric                      SuperSFL (with TPGF) vs. baseline SFL
Communication rounds        2–5× fewer
Total communication cost    up to 20× lower
Training time               up to 13× shorter
Energy efficiency           improved

These results demonstrate TPGF’s effectiveness for federated split learning in resource-constrained and communication-variable edge environments, with applicability to settings where device heterogeneity is critical.

7. Limitations and Directions for Future Research

The paper did not furnish a formal convergence proof for TPGF, though empirical evidence indicates accelerated convergence and improved generalization. Potential limitations stem from the weight computation’s dependence on loss values and model depth, which may require tuning in regimes with extreme client-server disparity.

A plausible implication is that extensions of TPGF could explore more sophisticated fusion strategies, dynamic weighting schemes, or integration with privacy-preserving techniques to further generalize its robustness across broader federated learning domains.
