Fog Architecture: Edge-Cloud Paradigm

Updated 5 February 2026
  • Fog architecture is a multi-tier system that bridges resource-constrained IoT devices and elastic cloud datacenters for low-latency processing and efficient data management.
  • It employs virtualization, SDN/NFV, and dynamic orchestration to allocate resources and guarantee QoS across heterogeneous network layers.
  • Real-world applications include smart cities, industrial IoT, and health monitoring, benefiting from reduced latency, optimized bandwidth, and enhanced security.

Fog architecture is the design paradigm that situates distributed computing, storage, and networking capabilities physically close to data sources and end-users, thus bridging the gap between resource-constrained IoT devices and elastic cloud datacenters. Its purpose is to enable low-latency processing, bandwidth-efficient data management, improved scalability, and, increasingly, advanced programmability and security guarantees for diverse applications such as smart cities, industrial IoT, health monitoring, and next-generation networks. Fog architectures are highly heterogeneous, spanning multiple logical and physical layers, and incorporate virtualization, SDN/NFV, orchestration, and robust interfaces for cross-layer coordination and QoS control.

1. Architectural Models and Layered Structure

Fog architectures are generally defined as multi-tier models, interposing a distributed fog (edge) layer between IoT endpoints and the centralized cloud. Several canonical decompositions appear in the literature:

  • Three-Tier Model:
    • Perception/IoT layer: Raw sensor/actuator nodes, edge gateways (e.g., BLE, ZigBee, Wi-Fi interfaces, protocol translation) (Gupta et al., 2023, Moustafa, 2019).
    • Fog layer: Micro-data centers (“cloudlets”), enhanced routers, small servers with compute/storage/virtualization—deployed at various granularities (building, street, vehicle, room, etc.). Supports latency-sensitive computation, pre-processing, stream analytics, short-term storage, and local control-loops. Fog nodes often run containers/VMs and expose APIs for local and regional orchestration (Varshney et al., 2017, Naha et al., 2018, Abuseta, 2019).
    • Cloud layer: Hierarchical, centralized clusters that perform batch analytics, global data integration, model training, and high-volume long-term storage (Dastjerdi et al., 2016).
  • Fine-Grained Layer Models:
    • Certain reference designs add management, monitoring, and software-defined orchestration layers atop the physical/networking substrate (Dastjerdi et al., 2016, Naha et al., 2018).
    • Industrial reference architectures (e.g., FORA FCP) define explicit hardware, deterministic virtualization, middleware, orchestration, security, and application layers, with strong support for real-time and safety-critical workloads (Pop et al., 2020).
  • Hierarchical/Clustering Models:
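The three-tier decomposition above can be sketched as a minimal data model. All node names, latency figures, and capacity units below are illustrative assumptions, not taken from any of the cited reference architectures:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    tier: str              # "iot", "fog", or "cloud"
    latency_ms: float      # assumed round-trip latency to this node
    capacity: int = 0      # abstract compute units (0 for bare sensors)

@dataclass
class Topology:
    nodes: list = field(default_factory=list)

    def nearest(self, tier: str):
        """Return the lowest-latency node of the given tier."""
        candidates = [n for n in self.nodes if n.tier == tier]
        return min(candidates, key=lambda n: n.latency_ms)

topo = Topology([
    Node("temp-sensor", "iot", 1.0),
    Node("street-cloudlet", "fog", 5.0, capacity=8),
    Node("regional-dc", "cloud", 60.0, capacity=1024),
])

# Latency-sensitive work targets the nearest fog node; batch analytics
# and long-term storage target the cloud tier.
assert topo.nearest("fog").name == "street-cloudlet"
```

A real topology would also model links, mobility, and per-tier granularity (building, street, vehicle), but the tier/latency/capacity triple is the core of the layered model.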

2. Key Mechanisms: Virtualization, Orchestration, and QoS

Virtualization and Abstraction

Fog nodes typically run hypervisor or container engines (Docker, KVM, LXC) to support multi-tenancy and hardware abstraction (Varshney et al., 2017, Pop et al., 2020, Moustafa, 2019). Among other benefits, virtualization enables workload isolation and portability of services across heterogeneous fog hardware.

Orchestration and Control

  • SDN/NFV Integration:
  • Distributed Service Orchestration Engines (DSOEs):
    • Map high-level application requests (modeled as service graphs) onto actual service endpoints, including handling discovery, overlay construction, deployment commands, and VNF placement, using local peer-to-peer or centralized coordination (Gupta et al., 2016).
  • Task Placement and Resource Management:
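The DSOE's core job of mapping a service chain onto concrete endpoints can be illustrated with a greedy placement sketch. This is a deliberately simple stand-in, not the algorithm of Gupta et al. (2016); service names, demands, and node capacities are assumptions:

```python
# Greedy sketch of service-graph placement: each service in the chain is
# mapped to the first node with enough spare capacity.
def place_chain(services, nodes):
    """services: list of (name, demand); nodes: dict name -> free capacity.
    Returns mapping service -> node, or None if the chain cannot be placed."""
    placement = {}
    free = dict(nodes)                     # work on a copy
    for svc, demand in services:
        target = next((n for n, cap in free.items() if cap >= demand), None)
        if target is None:
            return None                    # no feasible placement
        placement[svc] = target
        free[target] -= demand
    return placement

chain = [("ingest", 2), ("filter", 1), ("analytics", 4)]
nodes = {"fog-gw": 3, "cloudlet": 4, "cloud": 100}
print(place_chain(chain, nodes))
# {'ingest': 'fog-gw', 'filter': 'fog-gw', 'analytics': 'cloudlet'}
```

A production orchestrator would additionally handle discovery, overlay construction, VNF instantiation, and re-placement on failure; the point here is only the service-graph-to-endpoint mapping step.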

Quality of Service (QoS) and SLA Management

QoS provisioning in fog requires joint optimization over:

  • Bandwidth and Latency Constraints: Paths and placements must meet per-flow constraints such as $\sum_e x_{e,f} \cdot \ell(e) \leq L_{req}(f)$, where $x_{e,f}$ encodes link-path assignment (Gupta et al., 2016).
  • End-to-End Resource Minimization: Common objectives minimize $\sum_{f,e} x_{e,f} \cdot B_{req}(f)$, subject to flow-specific and network-wide constraints (Gupta et al., 2016).
  • Adaptive VNF Placement: If direct overlay paths cannot satisfy constraints, the system instantiates VNFs for traffic shaping or acceleration on-the-fly.
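The per-flow latency constraint above reduces to a simple feasibility check over candidate paths. A minimal sketch, with made-up link names and latencies:

```python
# Feasibility check for one flow against the per-flow constraint
# sum_e x_{e,f} * l(e) <= L_req(f): the chosen path's total link latency
# must not exceed the flow's latency requirement.
def path_feasible(path_links, link_latency, l_req):
    return sum(link_latency[e] for e in path_links) <= l_req

# Illustrative link latencies in milliseconds (assumed values).
link_latency = {"gw-fog": 2.0, "fog-fog2": 3.0, "fog2-cloud": 40.0}

# A fog-local path meets a 10 ms requirement; a cloud detour does not.
assert path_feasible(["gw-fog", "fog-fog2"], link_latency, l_req=10.0)
assert not path_feasible(["gw-fog", "fog2-cloud"], link_latency, l_req=10.0)
```

An optimizer would evaluate this check inside a multi-commodity flow formulation; when no direct path is feasible, that is the trigger for the adaptive VNF placement described above.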

3. Data, Control, and Service Flows

  • Data Flows:
    • Downstream: IoT device readings are batch-collected by edge gateways, undergo pre-processing/aggregation in fog nodes, and may trigger local actuation (e.g., control-message back to actuator) (Gupta et al., 2023).
    • Upstream: Pre-processed, compressed, or feature-extracted data are forwarded for further analysis or storage to higher fog tiers or the cloud (Dubey et al., 2016, Abuseta, 2019, Wang et al., 2018).
  • Control Flows:
    • Orchestration, scheduling, configuration updates, software deployment, and key management typically propagate top-down from global orchestrator/cloud towards fog gateways and IoT devices.
    • Microservices or tasks may migrate between fog nodes for load balancing or mobility support, using explicit migration hooks and stateless API layers (Wang et al., 2018).
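The fog node's role in these data flows — aggregate locally, actuate locally, forward only summaries upstream — can be sketched as follows. Threshold values and field names are assumptions for illustration:

```python
# Sketch of fog-side pre-processing: raw readings are aggregated at a fog
# node; only the compact summary is forwarded to higher tiers, and a local
# control message is triggered without a cloud round-trip.
def fog_aggregate(readings, alarm_threshold=80.0):
    """Aggregate one batch of sensor readings.
    Returns (summary to forward upstream, whether to actuate locally)."""
    summary = {
        "count": len(readings),
        "mean": sum(readings) / len(readings),
        "max": max(readings),
    }
    actuate = summary["max"] > alarm_threshold   # local control loop
    return summary, actuate

summary, actuate = fog_aggregate([21.5, 22.0, 95.3])
assert actuate                 # local actuation decision, made at the fog tier
assert summary["count"] == 3   # upstream sees 1 summary, not 3 raw readings
```

The bandwidth benefit quantified in the next section comes precisely from this substitution of summaries for raw streams.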

4. Performance Models, Metrics, and Optimization

Fog architectures are quantitatively analyzed with models that explicitly capture energy use, latency, and bandwidth consumption:

  • Latency Reduction:
    • $R_{latency} = (L_{cloud} - L_{fog})/L_{cloud}$, where $L_{fog}$ is the round-trip time to the nearest fog node.
    • Multiple architectural studies report reductions in average response time of 20–70%, owing to fog placement of analytics and service loops (Abuseta, 2019, Wang et al., 2018, Dastjerdi et al., 2016).
  • Bandwidth Optimization:
    • $R_{bw} = 1 - (D_{fog}/D_{raw})$, where $D_{raw}$ is the raw data volume and $D_{fog}$ the volume forwarded after fog-side filtering and aggregation.
  • Energy Efficiency:
    • MILP models for energy minimization consider both processing and networking, using device- and link-specific power profiles (Fadlelmula et al., 2023, Yosuf et al., 2020, Fadlelmula et al., 2022).
    • Passive optical and VLC fog architectures demonstrate up to 80–91% power savings compared to spine-and-leaf or pure cloud-centric solutions.
  • Resource Allocation Sketch:
    • Assign service $i$ to node $j$ only if capacity $C_j$ allows: $\sum_i x_{ij} \leq C_j$.
    • Joint objective: $\min\; \alpha \sum T_i + \beta \sum E_j + \gamma \sum F_j$ (latency, energy, and cost weights) (Naha et al., 2018, Fadlelmula et al., 2023).
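The metrics and the weighted objective above compute directly; all numeric inputs below are illustrative:

```python
# Direct computation of the section's metrics.
def latency_reduction(l_cloud, l_fog):
    """R_latency = (L_cloud - L_fog) / L_cloud."""
    return (l_cloud - l_fog) / l_cloud

def bandwidth_saving(d_fog, d_raw):
    """R_bw = 1 - D_fog / D_raw."""
    return 1.0 - d_fog / d_raw

def weighted_cost(latencies, energies, costs, alpha, beta, gamma):
    """Joint objective alpha*sum(T_i) + beta*sum(E_j) + gamma*sum(F_j)."""
    return alpha * sum(latencies) + beta * sum(energies) + gamma * sum(costs)

# A 100 ms cloud round-trip replaced by a 30 ms fog round-trip: 70% reduction.
assert latency_reduction(l_cloud=100.0, l_fog=30.0) == 0.7
# Forwarding 2 MB of summaries instead of 10 MB of raw data: 80% saving.
assert abs(bandwidth_saving(d_fog=2.0, d_raw=10.0) - 0.8) < 1e-9
```

In practice the weights $\alpha, \beta, \gamma$ are tuned per deployment; the MILP formulations cited above solve for the $x_{ij}$ assignment under the capacity constraint rather than evaluating a fixed placement.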

5. Security, Reliability, and Management

  • Authentication and Trust:
  • Virtualization and Attack Protection:
  • Statistical Analytics and Federated Learning:
    • Local ML (e.g., federated CNN-LSTM) for predictive analytics is supported in modern fog architecture, with privacy preserved by only exchanging model weights (Sobati-M, 22 Jul 2025).
    • Digital twin simulations pre-validate any action before deployment, using edge-tier and macro-grid twins, further reducing error rates and energy waste.
  • Fault Tolerance and Autonomy:
    • Decentralized control loops (local MAPEaaS) allow continued operation during network splits or upstream cloud outages.
    • Migration APIs and distributed orchestration primitives enable mobile and dynamic fog deployments, supporting variable real-world conditions (Wang et al., 2018).
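The privacy property of the federated setup above rests on aggregating only model weights. A minimal FedAvg-style averaging step, in pure Python as a sketch (real deployments would use a framework and full model updates):

```python
# Minimal federated-averaging step: fog clients contribute weight vectors
# and their local dataset sizes; raw data never leaves the client.
def fed_avg(client_weights, client_sizes):
    """Dataset-size-weighted average of per-client weight vectors."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Two fog clients; the second holds 3x more data, so it dominates.
global_w = fed_avg([[1.0, 2.0], [3.0, 4.0]], client_sizes=[1, 3])
assert global_w == [2.5, 3.5]
```

The aggregation can itself run at a fog tier rather than in the cloud, which keeps the weight traffic regional and pairs naturally with the decentralized control loops described above.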

6. Specialized and Emerging Fog Architectural Extensions

  • PON-Enabled and Passive Optical Fog:
  • Industrial/Real-Time Systems:
    • FORA FCP architecture supports deterministic virtualization, time-sensitive networking (TSN), safety-critical partitioning, and compositional scheduling, with constraint-programming for end-to-end scheduling (Pop et al., 2020).
  • Satellite-Terrestrial Fog:
    • LEO satellites equipped with virtualized fog nodes (FSNs) provide on-orbit computation, edge AI, and cooperative handovers, orchestrated with terrestrial 6G fog/cloud via integrated waveform design and federated learning (Yuan et al., 23 Mar 2025).
  • QoS-Aware Software-Defined Fog:
    • SDFog uses service graphs, SDN/NFV, global resource monitoring, and QoS-driven multi-commodity flow optimization to guarantee video quality (tested via SSIM in smart home) under intense background load (Gupta et al., 2016).

7. Service Decomposition and Programmability

  • Linked-Microservices (LMS) Model:
    • Decomposes monolithic applications into microservices that can be deployed flexibly across the fog–cloud continuum, respecting resource and data-dependency constraints (Alturki et al., 2019).
    • Experimental results show bandwidth reductions (10%–70%) for hybrid fog–cloud decomposed pipelines, with modest or dataset-dependent impact on accuracy and end-to-end latency.
  • Programming Abstractions:
    • Distributed data-flow frameworks (Node-RED, uFlow) enable rapid deployment of per-node or cross-cluster flows, supporting real-time migration and dynamic placement (Wang et al., 2018).
    • Resource-constraint, placement, and migration APIs are essential for responsive and adaptive fog application engineering.
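The fog-cloud split decision for a linked-microservices pipeline can be sketched as choosing the cut point that minimizes upstream traffic. Stage names and data volumes are illustrative assumptions, not figures from Alturki et al. (2019):

```python
# Choose where to cut a linear microservice pipeline between fog and cloud:
# stages 0..cut run fog-side, the rest cloud-side, and the upstream traffic
# equals the output volume of the last fog-side stage.
def best_cut(stage_outputs):
    """stage_outputs[i] = data volume (MB) emitted by stage i.
    Returns (cut_index, upstream_traffic_mb) minimizing upstream traffic."""
    return min(enumerate(stage_outputs), key=lambda p: p[1])

pipeline = [("capture", 100.0), ("filter", 30.0), ("features", 5.0)]
idx, traffic = best_cut([vol for _, vol in pipeline])
assert pipeline[idx][0] == "features" and traffic == 5.0
```

This unconstrained version always pushes the whole pipeline into the fog; realistic placements also weigh fog compute capacity and per-stage resource demands, which can force the cut earlier and yields the dataset-dependent 10%–70% savings reported above.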

Fog architecture thus provides a highly adaptive, layered, and programmable substrate, supporting computation, storage, service orchestration, and enhanced networking close to data sources. Continued advances in hardware abstraction, energy optimization, QoS enforcement, real-time control, distributed ML, and domain-specific security are pushing the boundary of what is possible in distributed and edge-centric systems (Varshney et al., 2017, Dastjerdi et al., 2016, Pop et al., 2020, Sobati-M, 22 Jul 2025).
