Confidential Containers (CoCo)
- Confidential Containers are cloud-native constructs that employ hardware-backed TEEs to establish strong trust boundaries for containerized workloads on potentially untrusted infrastructure.
- They utilize a Container-in-TEE model that encapsulates the full container runtime, resulting in increased TCB size and measurable performance overheads across various hardware platforms.
- Deployments must navigate trade-offs between enhanced security and operational complexity, prompting design innovations such as per-container TEEs and rigorous attestation protocols.
Confidential Containers (CoCo) are cloud-native security constructs that employ Trusted Execution Environments (TEEs) to ensure the confidentiality and integrity of containerized workloads deployed on potentially untrusted infrastructure. CoCo frameworks interpose hardware-backed TEE boundaries between the sensitive application runtime and the host operating system, hypervisor, and orchestration stack, allowing tenants to protect code, data, and execution flows even against privileged adversaries residing in the cloud provider or host stack. Multiple architectural paradigms exist for realizing CoCo, differentiated by isolation granularity, trusted computing base (TCB) size, attestation coverage, and performance overheads. This article synthesizes deeply technical perspectives from the state-of-the-art research literature, focusing on the canonical “Container-in-TEE” model, its quantitative properties, security impact, and operational trade-offs (Lu et al., 3 Jan 2026).
1. Architectural Foundations of Container-in-TEE CoCo
The archetypal CoCo implementation encapsulates the entire container runtime environment inside a hardware TEE, such as Intel SGX, Intel TDX, or AMD SEV, to create a robust cryptographic trust boundary for containers (Lu et al., 3 Jan 2026). The architecture is bifurcated into untrusted host-side components—a container orchestrator (e.g., Kubernetes kubelet or Docker daemon), shim, and interface agents—and trusted enclave-side logic comprising:
- The full container engine (e.g., Docker-in-Enclave, containerd-in-Enclave)
- LibOS or enclave shim (providing POSIX-like APIs)
- Container runtime libraries (runc, networking plugins, OS services)
- Guest application, language runtimes, and dependencies
The typical lifecycle is as follows:
- The orchestrator signals container start via the host agent/shim.
- The host agent issues an ECALL into the Agent Enclave.
- The Agent Enclave instantiates an App Enclave (or loads containerd/LibOS).
- In-enclave runc/containerd unpacks images and configures namespaces and networking.
- The application inside the enclave performs disk/network I/O via OCALLs to the host kernel.
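The lifecycle above can be sketched as a minimal control flow. The class and method names below are illustrative stand-ins, not part of any CoCo or Kata/Kubernetes API:

```python
from dataclasses import dataclass, field

@dataclass
class AgentEnclave:
    """Trusted agent running inside the TEE (illustrative stand-in)."""
    app_enclaves: list = field(default_factory=list)

    def ecall_create_container(self, image: str) -> dict:
        # Step 3: instantiate an App Enclave (or load containerd/LibOS).
        enclave = {"image": image, "state": "created"}
        # Step 4: the in-enclave runtime unpacks the image and
        # configures namespaces/networking.
        enclave["state"] = "configured"
        self.app_enclaves.append(enclave)
        return enclave

class HostAgent:
    """Untrusted host-side shim; forwards orchestrator requests."""
    def __init__(self, agent: AgentEnclave):
        self.agent = agent

    def start_container(self, image: str) -> dict:
        # Steps 1-2: the orchestrator's signal crosses the trust
        # boundary as an ECALL into the Agent Enclave.
        return self.agent.ecall_create_container(image)

shim = HostAgent(AgentEnclave())
container = shim.start_container("nginx:latest")
print(container["state"])  # configured
```

Note that only steps 1–2 run in the untrusted domain; everything from the ECALL onward executes inside the TEE boundary.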
Letting HostOS denote the untrusted domain and TEE denote the enclave, the TCB is formally defined as:
$TCB_\text{CoCo} = \{\text{TEE runtime/SDK}\} \cup \{\text{LibOS}\} \cup \{\text{containerd, runc, shim}\} \cup \{\text{Agent Enclave code}\}$
Inter-domain communication crosses the enclave boundary through ECALL/OCALL or vsock channels, with AEAD protection when supported.
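The AEAD protection on boundary crossings can be illustrated with a simplified encrypt-then-MAC framing built from the standard library alone. This is a sketch of the idea, not a production construction; real channels use a hardware-accelerated AEAD such as AES-GCM or ChaCha20-Poly1305:

```python
import hashlib
import hmac
import os
import struct

def _keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    # Hash-counter keystream (sketch only; real channels use AES-GCM).
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + struct.pack(">Q", counter)).digest()
        counter += 1
    return out[:n]

def seal(key: bytes, plaintext: bytes) -> bytes:
    """Frame an ECALL/OCALL or vsock message: nonce || ciphertext || tag."""
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, _keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def open_(key: bytes, msg: bytes) -> bytes:
    """Verify integrity, then decrypt; reject any tampered frame."""
    nonce, ct, tag = msg[:16], msg[16:-32], msg[-32:]
    expected = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("boundary message failed authentication")
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, nonce, len(ct))))
```

The per-message nonce, per-byte encryption, and tag verification are precisely the cryptographic costs that later show up as cross-boundary I/O overhead in Section 3.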
2. Quantitative Characterization of Trusted Computing Base
The inflation of the TCB in CoCo deployments is driven by the inclusion of multiple layers of complex code. Empirical lines-of-code (LoC) tallies, consistent with open-source references, are:
| Component | Approx. LoC Contribution |
|---|---|
| TEE SDK/runtime (SGX/TDX/SEV) | ~20,000 |
| LibOS (e.g., Occlum) | ~100,000 |
| Container runtime & daemon | ~200,000 |
| In-enclave OS services | ~50,000 |
| Application dependencies | variable |
Total estimated: roughly 370,000 LoC for the fixed components, i.e., on the order of 300–400 KLoC once variable application dependencies are included.
This TCB size exceeds the commonly recommended ≤10,000 LoC ceiling required for formal verification by an order of magnitude. The vast inter-module call graph and cross-layer invariants, such as namespace correctness and policy enforcement, make exhaustive analysis practically infeasible.
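The order-of-magnitude gap can be checked directly from the table above; the component names and ceiling constant below simply restate those figures:

```python
# Approximate per-component TCB contributions (LoC), from the table above.
tcb_loc = {
    "TEE SDK/runtime": 20_000,
    "LibOS (e.g., Occlum)": 100_000,
    "Container runtime & daemon": 200_000,
    "In-enclave OS services": 50_000,
}

# Commonly recommended ceiling for formally verifiable TCBs.
VERIFICATION_CEILING = 10_000

total = sum(tcb_loc.values())
print(total)                          # 370000
print(total // VERIFICATION_CEILING)  # 37 -> ~an order of magnitude over
```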
3. Performance Analysis: Empirical Overheads
CoCo frameworks deployed on Intel TDX, AMD SEV, and SGX have been quantitatively benchmarked against lightweight architectures such as Arca (Lu et al., 3 Jan 2026), providing precise overhead breakdowns. Representative results from UnixBench are summarized below (Arca normalized to 1.00):
| Subtest | CoCo on TDX | Overhead (%) | CoCo on SEV | Overhead (%) |
|---|---|---|---|---|
| context1 | 0.78 | 22 | 0.92 | 8 |
| syscall | 0.85 | 15 | 0.95 | 5 |
| spawn | 0.75 | 25 | – | – |
| fstime_write | 0.88 | 12 | 0.94 | 6 |
| pipe | 0.98 | 2 | 0.97 | 3 |
I/O in SGX-based CoCo traverses a lengthened path: application → LibOS → OCALL → host kernel (and back), penalizing syscall-intensive and context-switch-heavy workloads. Cross-boundary cryptography (AEAD sealing of transferred buffers) adds a further per-byte cost on top of each enclave transition.
Aggregate slowdowns range from 5–8% (SEV) to 25% (TDX) for context- and syscall-dominated microbenchmarks.
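The overhead columns relate to the normalized scores as a simple percent slowdown relative to the Arca baseline; the helper below (illustrative, inferred from the table) makes the relation explicit for rows that can be spot-checked:

```python
def overhead_pct(normalized_score: float) -> float:
    """Percent slowdown relative to the Arca baseline (score 1.00)."""
    return round((1.0 - normalized_score) * 100.0, 1)

# Spot-check against UnixBench rows above (CoCo-on-TDX column):
print(overhead_pct(0.85))  # syscall -> 15.0
print(overhead_pct(0.98))  # pipe    -> 2.0
```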
4. Security Implications and Attack Surface Considerations
The CoCo paradigm provides robust defense against host-level root adversaries, preventing direct inspection or tampering with enclave memory. However, several subtle security challenges are introduced:
- Cross-layer dependency: A flaw in in-enclave services (containerd, init, plugins, LibOS) potentially compromises the entire TEE boundary, undermining all colocated containers.
- OCALL/ECALL frequency: Marshaling logic at enclave boundaries increases the exploitable surface for Iago-style or covert channel attacks.
- Blast radius: Shared enclave per VM enables lateral movement—once a shared TCB is subverted, all workloads in that instance are vulnerable.
- Attestation complexity: Verification of the enclave's integrity requires attestation not only for the application, but also for LibOS, containerd, plugins, agent code, complicating trust guarantees.
A plausible implication is that, while CoCo isolates workloads from host root, it does not intrinsically guarantee isolation between compromised or buggy container runtime components within the same TEE instance.
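The attestation-complexity point can be made concrete: a verifier of a Container-in-TEE deployment must pin a reference measurement for every in-enclave component, and evidence fails if any one deviates. The component names and blobs below are hypothetical placeholders for real image digests:

```python
import hashlib

def measure(blob: bytes) -> str:
    # Stand-in for a TEE measurement (an MRENCLAVE-like digest).
    return hashlib.sha256(blob).hexdigest()

# The verifier pins one reference value per in-enclave component --
# not just the tenant application (illustrative blobs).
components = {
    "libos": b"libos-image-v1",
    "containerd": b"containerd-bundle-v1",
    "plugins": b"cni-plugins-v1",
    "agent": b"agent-enclave-v1",
    "app": b"tenant-app-v1",
}
reference = {name: measure(blob) for name, blob in components.items()}

def verify(evidence: dict) -> bool:
    # Attestation succeeds only if EVERY component matches its pin.
    return all(evidence.get(n) == ref for n, ref in reference.items())

good = {n: measure(b) for n, b in components.items()}
bad = dict(components, containerd=b"containerd-bundle-tampered")
print(verify(good))                                    # True
print(verify({n: measure(b) for n, b in bad.items()}))  # False
```

A single tampered runtime component, not just the application, is enough to invalidate the whole enclave's evidence, which is exactly the multi-component trust burden described above.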
5. Comparative Evaluation: Strengths and Weaknesses
Key strengths of Container-in-TEE CoCo include broad hardware compatibility (SGX, SEV, TDX), the ability to run unmodified container images within existing orchestration (Kubernetes CRI plugins), and mature integration with upstream cloud tooling. Weaknesses, however, are considerable:
- TCB inflation: 300–400 KLoC vs. alternative minimal TCB architectures (~10–20 KLoC).
- Performance: 5–25% slowdowns depending on workload profile and hardware platform.
- Isolation semantics: Single TEE per VM exposes tenants to non-trivial blast radius in case of compromise.
- Operational overhead: Attestation and trust verification workflows are more complex due to the enlarged and multi-component TCB.
In comparison, architectures like Arca (Lu et al., 3 Jan 2026) invert the model (“TEE-in-Container”), achieving per-workload isolation, dramatically reduced TCB, and simplified attestation, with 0–5% overhead on SEV and 0–20% on TDX.
6. Lessons Learned, Best Practices, and Evolving Directions
Empirical and formal analysis indicates several best practices for secure and performant CoCo deployments:
- TCB minimization: Limit trusted code to necessary application logic and cryptographic glue.
- Boundary reduction: Minimize cross-enclave calls and surface area of syscalls exposed to the host.
- Per-container TEEs: Deploy isolation at the most granular level practicable to contain damage and simplify attestation.
- Formal verification: Pursue design patterns that facilitate verification (≤10 KLoC), limiting reliance on opaque or third-party libraries.
- Transparent attestation: Prefer simple, single-point measurements (e.g., MRENCLAVE per-container) for scalable, audit-friendly trust management.
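Under the per-container-TEE pattern, the verifier's policy collapses to one pinned digest per workload. The scheme below is a hypothetical sketch of such single-point measurement, not an actual MRENCLAVE computation:

```python
import hashlib

def container_measurement(image: bytes, config: bytes) -> str:
    """One MRENCLAVE-like digest per container (illustrative scheme)."""
    return hashlib.sha256(image + b"\x00" + config).hexdigest()

# One pinned value per workload, instead of one per in-enclave component.
policy = {
    "web-frontend": container_measurement(b"frontend-v3", b'{"cpus": 1}'),
    "payments": container_measurement(b"payments-v7", b'{"cpus": 2}'),
}

def admit(name: str, evidence: str) -> bool:
    # Audit-friendly: a single comparison decides admission.
    return policy.get(name) == evidence

print(admit("payments", container_measurement(b"payments-v7", b'{"cpus": 2}')))  # True
print(admit("payments", container_measurement(b"payments-v8", b'{"cpus": 2}')))  # False
```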
This suggests an ongoing transition from monolithic Container-in-TEE deployments toward minimal TCB, per-workload TEE models that align more closely with the original minimal-trust principle of TEE architectures. Emerging alternative patterns, such as TEE-in-Container (Arca), decentralized code management (dstack), and shared-enclave primitives (TEEMATE), offer promising directions for scalable, auditable, and resilient confidential container frameworks (Lu et al., 3 Jan 2026, Lee et al., 2024, Zhou et al., 15 Sep 2025).