
SecureDyn-FL: Secure Federated Learning for IoT

Updated 13 January 2026
  • SecureDyn-FL is a federated learning framework that enables robust and privacy-preserving intrusion detection in heterogeneous IoT networks.
  • It integrates dynamic temporal gradient auditing, transformed additive ElGamal encryption, and dual-objective personalized learning to counter poisoning attacks and non-IID challenges.
  • Empirical results demonstrate detection accuracy above 99% and resilience against up to 50% adversarial clients, optimizing efficiency for resource-constrained deployments.

SecureDyn-FL is a federated learning (FL) framework tailored for robust, privacy-preserving intrusion detection in heterogeneous Internet of Things (IoT) networks. It addresses key limitations of conventional FL-based intrusion detection systems (IDS), particularly privacy leakage, vulnerability to poisoning, non-IID data distributions, and communication efficiency. SecureDyn-FL integrates dynamic temporal gradient auditing based on Gaussian mixture models, a transformed additive ElGamal encryption protocol for secure aggregation, and dual-objective personalized learning utilizing logit-adjusted losses. Empirical evaluations on major IoT intrusion datasets demonstrate state-of-the-art detection accuracy and resilience against up to 50% adversarial clients, underscoring its efficacy for practical deployment in large-scale, resource-constrained environments (Soomro et al., 10 Jan 2026).

1. IoT Intrusion-Detection Challenges and SecureDyn-FL Objectives

Deployment of IDS in IoT environments presents distinct challenges:

  • Privacy: Transmission of raw traffic or model updates exposes sensitive device activity to inference and eavesdropping attacks.
  • Scalability: The communication and computation demands must fit resource-constrained devices and bandwidth-limited networks.
  • Robustness: FL aggregation is susceptible to a wide spectrum of poisoning (e.g., label flipping, model scaling, backdoor) from malicious clients.
  • Non-IID Data: IoT devices frequently generate highly skewed, heterogeneous data, leading to “client drift” and degraded convergence in vanilla FL.

SecureDyn-FL explicitly targets these demands through five design goals:

  1. Detection Accuracy: Achieve high accuracy (>99%) and F1 (>0.98) across multiple attack types.
  2. Robustness: Detect and filter both overt and stealthy poisoning attacks using temporal auditing.
  3. Privacy: Prevent inference and eavesdropping attacks via gradient encryption.
  4. Adaptability: Enable robust adaptation to heterogeneous, non-IID client datasets through personalized learning.
  5. Efficiency: Lower communication and compute overhead with sparsification and quantization (Soomro et al., 10 Jan 2026).

2. System Architecture and Federated Workflow

SecureDyn-FL comprises the following principal entities:

  • Clients: IoT devices holding private, non-IID data, responsible for local model training and update encryption.
  • Central Auditor (CA): Trusted node handling key distribution, temporal auditing of client updates, and assurance of aggregation integrity.
  • FL Server: Aggregates verified, encrypted updates and synchronizes the global model.
  • Audit Table Repository: Stores per-client tag-IDs, public keys, and audit metadata.

Federated Workflow per Round $t$:

  1. Registration: Each client $i$ receives a tag-ID $\mathrm{TID}_i$ and key pair $(pk_i, sk_i)$. The CA manages key-to-ID mappings.
  2. Local Model Training: Each client uses a split model: a shared feature extractor $f$ and two classifiers (global $h_{\rm glob}$, personalized $h_{\rm pers}$). Outputs:
    • $z = f(x)$
    • $\hat y^{\rm glob} = h_{\rm glob}(z)$
    • $\hat y^{\rm pers} = h_{\rm pers}(z)$
    • Total loss: $\mathcal{L}_{\text{total}} = \mathcal{L}_{\rm CE}(y, \hat y^{\rm glob}) + \lambda\,\mathcal{L}_{\rm LA}(y, \hat y^{\rm pers})$
  3. Pruning and Quantization: Gradients $\Delta w$ are sparsified via unstructured L1 pruning at round-dependent rate $p_t$, then quantized adaptively (reducing numerical precision to $N$ levels).
  4. Encryption and Upload: Each client encrypts $\Delta w_i$ using transformed ElGamal and uploads $\langle \mathrm{TID}_i, \llbracket \Delta w_i \rrbracket \rangle$ to the CA.
  5. Dynamic Temporal Gradient Auditing: The CA evaluates each update for poisoning via multi-threshold tests on norm, Mahalanobis distance, and trajectory drift (see Section 3).
  6. Secure Aggregation and Global Update: The auditor forwards only accepted (or down-weighted) encrypted updates to the server. Aggregation occurs homomorphically, followed by collective decryption and a global model update.
  7. Broadcast: Updated shared components ($f^{t+1}$, $h_{\rm glob}^{t+1}$) are encrypted and dispatched to clients (Soomro et al., 10 Jan 2026).
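As a toy illustration of the round structure above, the following Python sketch wires the local-update, audit, and aggregation steps together on plain vectors. Encryption, personalization, and the full temporal audit are omitted; `local_update`, `audit_ok`, and the norm threshold are illustrative stand-ins, not the paper's implementation.

```python
import numpy as np

def local_update(model, data_bias):
    """Step 2 stand-in: a pretend gradient step pulling the model
    toward the client's local optimum `data_bias`."""
    return -0.1 * (model - data_bias)

def audit_ok(delta, T_norm=1.0):
    """Step 5 stand-in: accept only updates within the norm threshold
    (the full policy in Section 3 also checks MD and trajectory)."""
    return np.linalg.norm(delta) <= T_norm

def federated_round(model, client_biases):
    """One round: collect updates, audit them, average the accepted
    ones, and apply the aggregate to the global model."""
    updates = [local_update(model, b) for b in client_biases]
    accepted = [d for d in updates if audit_ok(d)]
    return model + np.mean(accepted, axis=0)
```

A client submitting an inflated update (e.g., one scaled toward a distant point) fails the norm check, so the averaged step reflects only the benign clients.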

3. Dynamic Temporal Gradient Auditing Mechanism

The central innovation in SecureDyn-FL's defense against poisoning is dynamic temporal gradient auditing, which utilizes probabilistic modeling to differentiate benign from suspicious client updates over time.

Methodological Steps

  • Distribution Modeling: Gradients at each round are characterized as samples from a $K$-component Gaussian Mixture Model (GMM):

$$p(x \mid \theta) = \sum_{j=1}^{K} \pi_j\, \mathcal{N}(x; \mu_j, \Sigma_j)$$

  • Statistical Distance: For each client $i$, the Mahalanobis distance (MD) from its assigned cluster $j^*$ is computed:

$$\mathrm{MD}_i = \sqrt{(x_i - \mu_{j^*})^{\top} \Sigma_{j^*}^{-1} (x_i - \mu_{j^*})}$$

  • Temporal Dynamics: A forgetting factor $\alpha$ exponentially averages GMM parameters over rounds ($\theta_t = \alpha \theta_{t-1} + (1-\alpha)\theta_{\text{new}}$). The trajectory difference $\Delta\mathrm{MD}_i = |\mathrm{MD}_i(t) - \mathrm{MD}_i(t-1)|$ captures evolution between rounds.
  • Multi-Threshold Policy: Updates are classified according to:
    • Norm threshold: $\Vert \Delta w_i \Vert \leq T_{\text{norm}}$
    • MD threshold: $\mathrm{MD}_i \leq k\,\sigma_{\text{normal}}$
    • Trajectory threshold: $\Delta\mathrm{MD}_i \leq T_{\Delta}$

Updates violating thresholds are down-weighted or rejected, mitigating both sudden and slow-evolving (“stealthy”) poisoning (Soomro et al., 10 Jan 2026).
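The audit policy can be sketched in Python with a single Gaussian cluster standing in for the full GMM; the threshold values and the $\sigma_{\text{normal}}$ proxy below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def mahalanobis(x, mu, cov):
    """Mahalanobis distance of update x from its cluster (mu, Sigma)."""
    d = x - mu
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

def ema(theta_old, theta_new, alpha=0.9):
    """Forgetting-factor update across rounds:
    theta_t = alpha * theta_{t-1} + (1 - alpha) * theta_new."""
    return alpha * theta_old + (1 - alpha) * theta_new

def audit(update, mu, cov, prev_md, T_norm=5.0, k=3.0, T_delta=1.0):
    """Multi-threshold test on norm, Mahalanobis distance, and
    trajectory drift. Returns (accepted, md); md is carried to the
    next round to form Delta MD."""
    md = mahalanobis(update, mu, cov)
    sigma_normal = np.sqrt(np.trace(cov) / cov.shape[0])  # illustrative proxy
    accepted = (np.linalg.norm(update) <= T_norm
                and md <= k * sigma_normal
                and abs(md - prev_md) <= T_delta)
    return accepted, md
```

A scaled ("model scaling") update trips the norm test even when its direction looks benign, while a slow-drifting one eventually fails the trajectory check across rounds.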

4. Secure Aggregation Using Transformed Additive ElGamal

SecureDyn-FL incorporates a transformed additive ElGamal scheme for privacy-preserving aggregation of model updates.

Protocol Details

  • Message Encoding: Each quantized gradient value $m$ is mapped using the Cramer transform: $m' = g_p^m \bmod n^2$.
  • Encryption: For random $r \in \mathbb{Z}_{p-1}$:

$$c_1 = g^r \bmod p, \quad c_2 = m' \cdot y^r \bmod p$$

where $m' = g_p^m \bmod n^2$.

  • Homomorphic Addition:

$$\llbracket m_1 \rrbracket \cdot \llbracket m_2 \rrbracket = \llbracket m_1 + m_2 \rrbracket$$

  • Decryption: The sum is recovered via discrete log resolution on decrypted ciphertexts.
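For intuition, here is a minimal exponential (additive) ElGamal sketch in Python. The paper's transformed variant with the $g_p^m \bmod n^2$ encoding differs; the tiny prime, generator, and brute-force discrete-log bound below are illustrative only and far from secure.

```python
import random

# Toy parameters: p must be prime; these sizes are NOT secure.
p, g = 467, 2
x = random.randrange(1, p - 1)   # private key
y = pow(g, x, p)                 # public key y = g^x mod p

def enc(m):
    """Encrypt m in the exponent: (c1, c2) = (g^r, g^m * y^r) mod p."""
    r = random.randrange(1, p - 1)
    return pow(g, r, p), (pow(g, m, p) * pow(y, r, p)) % p

def add(ca, cb):
    """Component-wise product of ciphertexts encrypts the sum."""
    return (ca[0] * cb[0]) % p, (ca[1] * cb[1]) % p

def dec(c, max_m=200):
    """Recover g^m via c2 * c1^(-x), then solve the small discrete log."""
    gm = (c[1] * pow(c[0], p - 1 - x, p)) % p
    for m in range(max_m):
        if pow(g, m, p) == gm:
            return m
    raise ValueError("sum outside brute-force range")
```

Here `dec(add(enc(7), enc(5)))` recovers 12 without ever decrypting the individual summands, which is the property the secure aggregation step relies on.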

Compression Strategies

  • Pruning: Dimensionality is reduced at rate $p_t$, minimizing overhead.
  • Quantization: Each encrypted value uses $\lceil \log_2 N \rceil$ bits for $N$ quantization levels instead of 32 bits.
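A minimal sketch of the two compression steps, assuming magnitude-based unstructured pruning and uniform min-max quantization; the function names and the quantization grid are illustrative choices, not the paper's exact procedure.

```python
import numpy as np

def prune_l1(grad, p_t):
    """Unstructured L1 pruning: zero out the fraction p_t of entries
    with smallest absolute value (p_t is the round-dependent rate)."""
    k = int(p_t * grad.size)
    if k == 0:
        return grad.copy()
    thresh = np.sort(np.abs(grad).ravel())[k - 1]
    out = grad.copy()
    out[np.abs(out) <= thresh] = 0.0
    return out

def quantize(grad, N):
    """Uniform quantization to N levels over [min, max]; each value
    then needs ceil(log2 N) bits on the wire instead of 32."""
    lo, hi = float(grad.min()), float(grad.max())
    if hi == lo:
        return grad.copy()
    step = (hi - lo) / (N - 1)
    return lo + np.round((grad - lo) / step) * step
```

With $N = 16$, for example, each transmitted value needs only $\lceil \log_2 16 \rceil = 4$ bits, an 8x reduction before encryption.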

Security Guarantees

  • Honest-but-curious servers observe only masked ciphertexts.
  • Eavesdroppers, lacking private keys, are unable to invert the encryption.
  • Pruning and quantization further restrict information leakage (Soomro et al., 10 Jan 2026).

5. Dual-Objective Personalized Learning for Non-IID Adaptation

SecureDyn-FL addresses non-IID heterogeneity using a split model and composite loss formulation.

Model Structure

  • Feature Extractor $f$: Shared across all clients.
  • Global Classifier $h_{\rm glob}$: Trained to align with the global objective.
  • Personalized Classifier $h_{\rm pers}$: Tailors predictions to local client data.

Loss Functions

  • Global Cross-Entropy:

$$\mathcal{L}_{\rm CE}(y, \hat{y}^{\rm glob}) = -\log \frac{\exp(\hat{y}^{\rm glob}_{y})}{\sum_{y'} \exp(\hat{y}^{\rm glob}_{y'})}$$

  • Logit-Adjusted Loss:

$$\mathcal{L}_{\rm LA}(y, \hat{y}^{\rm pers}) = -\log \frac{\exp(\hat{y}^{\rm pers}_{y} + \tau \log \alpha^{k}_{y})}{\sum_{y'} \exp(\hat{y}^{\rm pers}_{y'} + \tau \log \alpha^{k}_{y'})}$$

where $\alpha^{k}_{y}$ is the frequency of class $y$ on client $k$, and $\tau$ calibrates the correction for class imbalance.

  • Total Client Loss:

$$\mathcal{L}_{\text{total}} = \mathcal{L}_{\rm CE}(y, \hat y^{\rm glob}) + \lambda\,\mathcal{L}_{\rm LA}(y, \hat y^{\rm pers})$$

Parameters $\lambda$ (local/global weighting) and $\tau$ (imbalance control) modulate the adaptation (Soomro et al., 10 Jan 2026).
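The composite objective can be sketched in NumPy as follows; the `log_softmax` helper and single-example signatures are illustrative, and $\alpha^k$ is passed in as a precomputed class-frequency vector.

```python
import numpy as np

def log_softmax(z):
    """Numerically stable log-softmax."""
    z = z - z.max()
    return z - np.log(np.exp(z).sum())

def ce_loss(glob_logits, y):
    """Global head: standard cross-entropy for true class y."""
    return -log_softmax(glob_logits)[y]

def la_loss(pers_logits, y, class_freq, tau=1.0):
    """Personalized head: logit-adjusted loss, shifting each logit by
    tau * log(alpha_y^k) (the local class frequency) before softmax."""
    return -log_softmax(pers_logits + tau * np.log(class_freq))[y]

def total_loss(glob_logits, pers_logits, y, class_freq, lam=0.5, tau=1.0):
    """L_total = L_CE + lambda * L_LA."""
    return ce_loss(glob_logits, y) + lam * la_loss(pers_logits, y, class_freq, tau)
```

With uniform class frequencies, the adjustment adds the same constant to every logit and $\mathcal{L}_{\rm LA}$ reduces to plain cross-entropy; under a skewed local distribution, rare classes incur larger loss, counteracting the imbalance.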

6. Empirical Evaluation and Comparative Results

Datasets and Settings

  • N-BaIoT: 115-dimensional, binary/multiclass attacks (Mirai/BASHLITE).
  • TON_IoT: 46-dimensional, multi-class.
  • Clients: 20, simulated non-IID/IID partitions (including Dirichlet splits, extreme heterogeneity).

Adversarial Scenarios

  • Up to 50% malicious clients using label-flipping, model scaling, or backdoor attacks.
  • Metrics: Overall accuracy, F1, target-class accuracy, attack success rate, malicious alarm ROC, detection delay, communication overhead.

Results

| Metric (Scenario) | Baseline | SecureDyn-FL |
|---|---|---|
| Target-class acc. (Non-IID #2, 50% malicious) | 0.015 | 0.995 |
| Overall acc. (Non-IID #2, 50% malicious) | 0.94 | 0.992 |
| F1 (targeted attack) | 0.04 | 0.89 |
| ROC AUC (IID) | 0.7–0.85 | ~0.98 |
| ROC AUC (Non-IID) | 0.7–0.85 | ~0.97 |
| SSIM (gradient inversion) | 0.78 | 0.07 |
| Membership inference acc. | 0.82 | 0.51 |
| Detection delay (s) | 4.82 | 2.14 |
| Communication reduction | — | ~60% |

Under both IID and non-IID partitions, SecureDyn-FL achieves accuracy and F1 scores within 1–2% of clean (attack-free) runs and malicious-alarm ROC AUC of roughly 0.97–0.98, with privacy gains indicated by substantially lowered gradient-inversion and membership-inference success (Soomro et al., 10 Jan 2026).

7. Trade-offs, Limitations, and Future Directions

Trade-offs

  • Cryptographic Cost: Larger keys increase privacy but add encryption/decryption latency.
  • Pruning Aggressiveness: Heightened pruning accelerates communication but may degrade model accuracy if excessive.
  • Auditing Sensitivity: Stricter auditing (smaller $k$, $\alpha$) reduces missed attacks at the possible expense of false positives (benign-update rejection).

Limitations

  • Full Participation Assumption: All clients synchronize each round; does not yet address client selection or dropped participants.
  • Single Auditor Trust Model: Relies on an honest CA; trust distribution via multiparty computation or DLT is prospective.
  • GMM Auditing Scalability: Computational costs of GMM EM scale poorly with the number of clients; lightweight clustering may be required for massive deployments.

Prospective Enhancements

  • Client Selection/Straggler Handling: Sampling or timeout mechanisms for large fleets.
  • Model Architecture Advances: Hybrid CNN+LSTM or GNN architectures for richer feature learning.
  • Differential Privacy Hybridization: Integrate DP with encryption to mitigate risks from colluding auditors.
  • Distributed Trust: Auditing via blockchain or MPC to remove reliance on a single CA.
  • Approximate/Scalable Auditing: Employ mini-batch or online clustering schemes for large-scale deployments (Soomro et al., 10 Jan 2026).