SecureDyn-FL: Secure Federated Learning for IoT
- SecureDyn-FL is a federated learning framework that enables robust and privacy-preserving intrusion detection in heterogeneous IoT networks.
- It integrates dynamic temporal gradient auditing, transformed additive ElGamal encryption, and dual-objective personalized learning to counter poisoning attacks and non-IID challenges.
- Empirical results demonstrate detection accuracy above 99% and resilience against up to 50% adversarial clients, while remaining efficient for resource-constrained deployments.
SecureDyn-FL is a federated learning (FL) framework tailored for robust, privacy-preserving intrusion detection in heterogeneous Internet of Things (IoT) networks. It addresses key limitations of conventional FL-based intrusion detection systems (IDS), particularly privacy leakage, vulnerability to poisoning, non-IID data distributions, and communication efficiency. SecureDyn-FL integrates dynamic temporal gradient auditing based on Gaussian mixture models, a transformed additive ElGamal encryption protocol for secure aggregation, and dual-objective personalized learning utilizing logit-adjusted losses. Empirical evaluations on major IoT intrusion datasets demonstrate state-of-the-art detection accuracy and resilience against up to 50% adversarial clients, underscoring its efficacy for practical deployment in large-scale, resource-constrained environments (Soomro et al., 10 Jan 2026).
1. IoT Intrusion-Detection Challenges and SecureDyn-FL Objectives
Deployment of IDS in IoT environments presents distinct challenges:
- Privacy: Transmission of raw traffic or model updates exposes sensitive device activity to inference and eavesdropping attacks.
- Scalability: The communication and computation demands must fit resource-constrained devices and bandwidth-limited networks.
- Robustness: FL aggregation is susceptible to a wide spectrum of poisoning attacks (e.g., label flipping, model scaling, backdoors) from malicious clients.
- Non-IID Data: IoT devices frequently generate highly skewed, heterogeneous data, leading to “client drift” and degraded convergence in vanilla FL.
SecureDyn-FL explicitly targets these demands through five design goals:
- Detection Accuracy: Achieve high accuracy (99%) and F1 (0.98) for multiple attack types.
- Robustness: Detect and filter both overt and stealthy poisoning attacks using temporal auditing.
- Privacy: Obviate inference and eavesdropping via gradient encryption.
- Adaptability: Enable robust adaptation to heterogeneous, non-IID client datasets through personalized learning.
- Efficiency: Lower communication and compute overhead with sparsification and quantization (Soomro et al., 10 Jan 2026).
2. System Architecture and Federated Workflow
SecureDyn-FL comprises the following principal entities:
- Clients: IoT devices holding private, non-IID data, responsible for local model training and update encryption.
- Central Auditor (CA): Trusted node handling key distribution, temporal auditing of client updates, and assurance of aggregation integrity.
- FL Server: Aggregates verified, encrypted updates and synchronizes the global model.
- Audit Table Repository: Stores per-client tag-IDs, public keys, and audit metadata.
Federated Workflow per Round:
- Registration: Each client $i$ receives a tag-ID and an ElGamal key pair $(pk_i, sk_i)$; the CA manages key-to-ID mappings.
- Local Model Training: Each client uses a split model: a shared feature extractor $f$ and two classifiers (global $C_g$, personalized $C_p$), producing logits $z_g = C_g(f(x))$ and $z_p = C_p(f(x))$.
- Total loss: $\mathcal{L}_{\text{total}} = \lambda\,\mathcal{L}_{CE} + (1-\lambda)\,\mathcal{L}_{LA}$ (see Section 5).
- Pruning and Quantization: Gradients are sparsified via unstructured L1 pruning at a round-dependent rate $p_t$, then quantized via adaptive quantization (reducing numerical precision to $2^b$ levels).
- Encryption and Upload: Each client encrypts its compressed update using transformed additive ElGamal and uploads the ciphertext to the CA.
- Dynamic Temporal Gradient Auditing: The CA evaluates each update for poisoning via multi-threshold tests on norm, Mahalanobis distance, and trajectory drift (see Section 3).
- Secure Aggregation and Global Update: Auditor forwards only accepted (or down-weighted) encrypted updates to the server. Aggregation occurs homomorphically, followed by collective decryption and global model update.
- Broadcast: Updated shared components ($f$, $C_g$) are encrypted and dispatched to clients (Soomro et al., 10 Jan 2026).
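The round logic above can be sketched as a plaintext simulation. This is a stand-in only: encryption, the multi-threshold audit, and the split model are simplified, and `client_update` with its least-squares objective is a hypothetical placeholder for the paper's local training, not its actual model.

```python
import numpy as np

def client_update(model, data, lr=0.1):
    # Hypothetical local step: one gradient step on a least-squares objective.
    X, y = data
    grad = X.T @ (X @ model - y) / len(y)
    return -lr * grad                          # model delta to upload

def run_round(model, clients, norm_thresh=10.0):
    """One federated round (plaintext stand-in for the encrypted pipeline):
    local training -> audit by update norm -> average the accepted deltas."""
    deltas = [client_update(model, d) for d in clients]
    accepted = [d for d in deltas if np.linalg.norm(d) <= norm_thresh]
    if accepted:
        model = model + np.mean(accepted, axis=0)
    return model
```

Repeating `run_round` drives the shared model toward the clients' common optimum; an oversized (e.g., scaled-poisoning) delta would simply fail the norm check and be dropped from the average.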
3. Dynamic Temporal Gradient Auditing Mechanism
The central innovation in SecureDyn-FL's defense against poisoning is dynamic temporal gradient auditing, which utilizes probabilistic modeling to differentiate benign from suspicious client updates over time.
Methodological Steps
- Distribution Modeling: Gradients at each round are characterized as samples from a $K$-component Gaussian Mixture Model (GMM): $p(g) = \sum_{k=1}^{K} \pi_k\,\mathcal{N}(g \mid \mu_k, \Sigma_k)$.
- Statistical Distance: For each client $i$, the Mahalanobis distance (MD) from its assigned cluster $k$ is computed: $\mathrm{MD}_i = \sqrt{(g_i - \mu_k)^\top \Sigma_k^{-1} (g_i - \mu_k)}$.
- Temporal Dynamics: A forgetting factor $\rho \in (0,1)$ exponentially averages GMM parameters over rounds (e.g., $\mu_k^{(t)} = \rho\,\mu_k^{(t-1)} + (1-\rho)\,\hat{\mu}_k^{(t)}$). The trajectory difference $\Delta_i^{(t)} = \lVert g_i^{(t)} - g_i^{(t-1)} \rVert$ captures the evolution of a client's updates between rounds.
- Multi-Threshold Policy: Updates are classified according to:
- Norm threshold: reject if $\lVert g_i \rVert > \tau_n$
- MD threshold: reject if $\mathrm{MD}_i > \tau_m$
- Trajectory threshold: reject if $\Delta_i^{(t)} > \tau_d$
Updates violating thresholds are down-weighted or rejected, mitigating both sudden and slow-evolving (“stealthy”) poisoning (Soomro et al., 10 Jan 2026).
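A minimal sketch of the multi-threshold audit, assuming a single diagonal-covariance Gaussian fitted per round in place of the paper's $K$-component GMM with EM and temporal forgetting (the threshold values are illustrative, not the paper's):

```python
import numpy as np

def mahalanobis(g, mu, var):
    # Diagonal-covariance Mahalanobis distance of gradient g from (mu, var).
    return np.sqrt(np.sum((g - mu) ** 2 / var))

def audit(updates, prev_updates, tau_n=10.0, tau_m=3.5, tau_d=5.0):
    """Accept/reject each client update via norm, Mahalanobis-distance,
    and trajectory-drift thresholds."""
    G = np.stack(updates)
    mu, var = G.mean(axis=0), G.var(axis=0) + 1e-8   # single-component fit
    accepted = []
    for i, g in enumerate(updates):
        norm_ok = np.linalg.norm(g) <= tau_n          # overt scaling attacks
        md_ok = mahalanobis(g, mu, var) <= tau_m      # statistical outliers
        drift_ok = np.linalg.norm(g - prev_updates[i]) <= tau_d  # stealthy drift
        accepted.append(bool(norm_ok and md_ok and drift_ok))
    return accepted
```

A benign population of small gradients passes all three tests, while a scaled malicious update violates both the norm and the Mahalanobis thresholds and is rejected.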
4. Secure Aggregation Using Transformed Additive ElGamal
SecureDyn-FL incorporates a transformed additive ElGamal scheme for privacy-preserving aggregation of model updates.
Protocol Details
- Message Encoding: Each quantized gradient value $m$ is mapped into the exponent, $m \mapsto g^m$, following the Cramer-style additive transform.
- Encryption: For random $r \in \mathbb{Z}_q$: $c = (c_1, c_2) = (g^r,\; g^m \cdot y^r)$, where $y = g^x$ is the public key and $x$ the private key.
- Homomorphic Addition: $c \cdot c' = (g^{r+r'},\; g^{m+m'} \cdot y^{r+r'})$, so component-wise ciphertext multiplication adds the underlying plaintexts.
- Decryption: The aggregate $g^{\sum_i m_i}$ is recovered from the combined ciphertext, and the small discrete log is resolved by exhaustive search, feasible because quantized values are bounded.
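The additive (exponential) ElGamal operations can be sketched over a toy safe-prime group. The parameters here are deliberately tiny and insecure, purely to make the homomorphic-addition mechanics concrete; a real deployment would use a standardized large group.

```python
import random

P = 2039          # toy safe prime, P = 2*Q + 1 (insecure, illustration only)
Q = 1019          # prime order of the subgroup of squares mod P
G = 4             # generator of that order-Q subgroup

def keygen():
    x = random.randrange(1, Q)           # private key
    return x, pow(G, x, P)               # (sk, pk) with pk = G^x

def encrypt(pk, m):
    # Exponential (additive) ElGamal: the message goes into the exponent.
    r = random.randrange(1, Q)
    return pow(G, r, P), (pow(G, m, P) * pow(pk, r, P)) % P

def add(c1, c2):
    # Component-wise multiplication adds the underlying plaintexts.
    return (c1[0] * c2[0]) % P, (c1[1] * c2[1]) % P

def decrypt(sk, c, max_m=200):
    # Recover G^m = c2 / c1^sk, then solve the small discrete log by search.
    gm = (c[1] * pow(c[0], Q - sk, P)) % P   # c1^(Q-sk) = c1^(-sk) in the group
    for m in range(max_m + 1):
        if pow(G, m, P) == gm:
            return m
    raise ValueError("plaintext out of range")
```

The server can multiply ciphertexts from different clients and hand the product to the decrypting party, which sees only the sum of the quantized values, never any individual update.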
Compression Strategies
- Pruning: Dimensionality is reduced at rate $p_t$, minimizing ciphertext count and bandwidth.
- Quantization: Each encrypted value uses $b$ bits ($2^b$ quantization levels) instead of 32-bit floating point.
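A sketch of the compression pipeline, assuming plain magnitude (L1) pruning and symmetric uniform quantization; the paper's adaptive schedules for $p_t$ and $b$ are not reproduced here.

```python
import numpy as np

def compress(grad, prune_rate=0.6, bits=8):
    """Unstructured L1 pruning followed by uniform quantization.
    Returns integer codes plus one scale factor to transmit."""
    g = grad.copy()
    k = int(prune_rate * g.size)                  # number of entries to zero
    if k > 0:
        idx = np.argsort(np.abs(g))[:k]           # smallest-magnitude entries
        g[idx] = 0.0
    levels = 2 ** bits
    scale = np.abs(g).max() or 1.0                # avoid divide-by-zero
    q = np.round((g / scale) * (levels // 2 - 1)).astype(np.int32)
    return q, scale

def decompress(q, scale, bits=8):
    # Inverse map: integer codes back to (approximate) gradient values.
    return q.astype(np.float64) * scale / (2 ** bits // 2 - 1)
```

Only the surviving integer codes need encryption and transmission, which is where the reported communication reduction comes from; the large-magnitude entries that dominate the update are reconstructed almost exactly.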
Security Guarantees
- Honest-but-curious servers observe only masked ciphertexts.
- Eavesdroppers, lacking private keys, are unable to invert the encryption.
- Pruning and quantization further restrict information leakage (Soomro et al., 10 Jan 2026).
5. Dual-Objective Personalized Learning for Non-IID Adaptation
SecureDyn-FL addresses non-IID heterogeneity using a split model and composite loss formulation.
Model Structure
- Feature Extractor $f$: Shared across all clients.
- Global Classifier $C_g$: Trained to align with the global objective.
- Personalized Classifier $C_p$: Tailors predictions to local client data.
Loss Functions
- Global Cross-Entropy: $\mathcal{L}_{CE} = -\sum_{c} y_c \log \mathrm{softmax}(z_g)_c$
- Logit-Adjusted Loss: $\mathcal{L}_{LA} = -\log \dfrac{\exp(z_{p,y} + \tau \log \pi_y)}{\sum_{c} \exp(z_{p,c} + \tau \log \pi_c)}$,
where $\pi_c$ is the class frequency on client $i$ and $\tau$ calibrates class imbalance.
- Total Client Loss: $\mathcal{L}_{\text{total}} = \lambda\,\mathcal{L}_{CE} + (1-\lambda)\,\mathcal{L}_{LA}$
Parameters $\lambda$ (local/global weighting) and $\tau$ (imbalance control) modulate the adaptation (Soomro et al., 10 Jan 2026).
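The composite loss can be sketched in NumPy. The $\tau \log \pi_c$ shift is the standard logit-adjusted cross-entropy; how exactly SecureDyn-FL wires the two heads into the weighting is simplified here to a direct convex combination.

```python
import numpy as np

def cross_entropy(logits, y):
    # Mean cross-entropy over a batch, computed via a stable log-softmax.
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(y)), y].mean()

def logit_adjusted_loss(logits, y, class_freq, tau=1.0):
    # Shift each logit by tau * log(class prior), then take cross-entropy.
    adj = logits + tau * np.log(class_freq + 1e-12)
    return cross_entropy(adj, y)

def total_loss(z_g, z_p, y, class_freq, lam=0.5, tau=1.0):
    # L_total = lam * L_CE(global head) + (1 - lam) * L_LA(personalized head)
    return lam * cross_entropy(z_g, y) + \
           (1 - lam) * logit_adjusted_loss(z_p, y, class_freq, tau)
```

With $\tau = 0$ the adjustment vanishes and $\mathcal{L}_{LA}$ reduces to plain cross-entropy; increasing $\tau$ penalizes over-confident predictions on locally frequent classes, which is what lets the personalized head absorb each client's skew.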
6. Empirical Evaluation and Comparative Results
Datasets and Settings
- N-BaIoT: 115-dimensional, binary/multiclass attacks (Mirai/BASHLITE).
- TON_IoT: 46-dimensional, multi-class.
- Clients: 20, simulated non-IID/IID partitions (including Dirichlet splits, extreme heterogeneity).
Adversarial Scenarios
- Up to 50% malicious clients using label-flipping, model scaling, or backdoor attacks.
- Metrics: Overall accuracy, F1, target-class accuracy, attack success rate, malicious alarm ROC, detection delay, communication overhead.
Results
| Scenario | Baseline | SecureDyn-FL |
|---|---|---|
| Target-class acc. T_acc (Non-IID #2, 50% adversaries) | 0.015 | 0.995 |
| Overall acc. O_acc (Non-IID #2, 50% adversaries) | 0.94 | 0.992 |
| F1 (Targeted) | 0.04 | 0.89 |
| ROC AUC (IID) | 0.7–0.85 | ~0.98 |
| ROC AUC (Non-IID) | 0.7–0.85 | ~0.97 |
| SSIM (Grad inversion) | 0.78 | 0.07 |
| Membership inf. acc. | 0.82 | 0.51 |
| Detection delay (s) | 4.82 | 2.14 |
| Comm. reduction | — | ~60% |
SecureDyn-FL under both IID and non-IID achieves accuracy and F1 scores within 1–2% of clean (attack-free) runs, AUC 0.97–0.98 in malicious alarm ROC, and privacy gains indicated by substantially lowered gradient inversion and membership inference success (Soomro et al., 10 Jan 2026).
7. Trade-offs, Limitations, and Future Directions
Trade-offs
- Cryptographic Cost: Larger keys increase privacy but add encryption/decryption latency.
- Pruning Aggressiveness: Heightened pruning accelerates communication but may degrade model accuracy if excessive.
- Auditing Sensitivity: Tighter thresholds (smaller $\tau_m$, $\tau_d$) reduce missed attacks at the possible expense of false positives (benign-update rejection).
Limitations
- Full Participation Assumption: All clients synchronize each round; does not yet address client selection or dropped participants.
- Single Auditor Trust Model: Relies on an honest CA; trust distribution via multiparty computation or DLT is prospective.
- GMM Auditing Scalability: Computational costs of GMM EM scale poorly with the number of clients; lightweight clustering may be required for massive deployments.
Prospective Enhancements
- Client Selection/Straggler Handling: Sampling or timeout mechanisms for large fleets.
- Model Architecture Advances: Hybrid CNN+LSTM or GNN architectures for richer feature learning.
- Differential Privacy Hybridization: Integrate DP with encryption to mitigate risks from colluding auditors.
- Distributed Trust: Auditing via blockchain or MPC to remove reliance on a single CA.
- Approximate/Scalable Auditing: Employ mini-batch or online clustering schemes for large-scale deployments (Soomro et al., 10 Jan 2026).