
Deep Fingerprinting: 1D CNN Attack on Tor

Updated 11 January 2026
  • Deep Fingerprinting (DF) is a website fingerprinting method that leverages a one-dimensional CNN to analyze timing and directional metadata in Tor traffic, breaching anonymity defenses.
  • It employs fixed-length vector representations and a modular deep convolutional architecture with block-wise feature extraction and aggressive dropout to maintain high classification accuracy.
  • DF's effectiveness varies by defense: it achieves up to 90.7% accuracy against WTF-PAD-protected traffic, but drops significantly against countermeasures such as Walkie-Talkie.

Deep Fingerprinting (DF) is a website fingerprinting attack that uses a one-dimensional convolutional neural network (1D CNN) to classify encrypted Tor traffic by website identity. DF assumes an adversarial model in which a passive observer intercepts only the timing and direction of packet flows, exploiting low-level traffic features that resist traditional defenses. DF was introduced as a principal method for breaching even advanced lightweight padding defenses such as WTF-PAD, while retaining high accuracy in both closed-world and open-world scenarios (Sirinam et al., 2018).

1. Problem Formulation and Threat Model

The DF attack formalizes the website fingerprinting task, in which an adversary (e.g., an ISP, a Wi-Fi sniffer, or a compromised Tor guard) infers website visits over Tor based solely on observable packet directions and timing. Tor encrypts all payloads and transmits them in fixed-size 512-byte cells, so attacks must rely on the sequence and timing of ingress/egress packets rather than on packet size as a discriminative feature. The adversarial setup comprises:

  • Observation point: Local, passive eavesdropping between the Tor client and its guard, observing only TLS channel metadata (timing and direction, never payloads).
  • Attack paradigms:
    • Closed-world: User activity is restricted to $N$ monitored sites, with the attacker training and evaluating on this same set.
    • Open-world: User activity spans the $N$ monitored sites and $M \gg N$ unmonitored sites; the attacker trains on the $N$ monitored sites plus sampled unmonitored traffic, testing generalization against arbitrary background traffic.
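In the open-world setting, a common decision rule is to threshold the classifier's confidence so that low-confidence traces are labeled unmonitored; the DF evaluation tunes such thresholds to trade precision against recall. A minimal sketch (the function name, the 0.9 threshold, and the use of the maximum softmax probability are illustrative assumptions, not the paper's exact procedure):

```python
import numpy as np

def open_world_decision(probs, threshold=0.9):
    """Return a monitored-site index, or -1 for 'unmonitored'.

    probs: softmax output over the N monitored classes. If the most
    probable class falls below the confidence threshold, the trace is
    treated as background (unmonitored) traffic.
    """
    c = int(np.argmax(probs))
    return c if probs[c] >= threshold else -1

# Confident prediction: attributed to monitored site 2.
print(open_world_decision(np.array([0.02, 0.03, 0.95])))   # 2
# Low confidence: rejected as unmonitored.
print(open_world_decision(np.array([0.40, 0.35, 0.25])))   # -1
```

Raising the threshold increases precision (fewer false alarms on background traffic) at the cost of recall, which is the trade-off reflected in the open-world results below.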

2. Data Representation and Preprocessing

Website visits are converted into traces, each represented as a sequence $[(t_1, +1), (t_2, -1), \ldots]$, where $+1$ denotes outgoing and $-1$ incoming Tor cells. As all Tor cell sizes are fixed, only directionality and order encode discriminative information. Each sequence is preprocessed as follows:

  • Traces are truncated or zero-padded to a fixed length $L = 5000$, yielding input vectors $x \in \mathbb{R}^{5000}$ with $x_i \in \{-1, 0, +1\}$.
  • This fixed input structure standardizes the batch processing pipeline for CNN ingestion, removing variable-length ambiguities and aligning with GPU-based parallelization requirements.
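The truncate-or-pad step above fits in a few lines; `to_fixed_length` is a hypothetical helper name and the toy trace is illustrative:

```python
import numpy as np

def to_fixed_length(directions, L=5000):
    """Truncate or zero-pad a +/-1 direction sequence to length L.

    Timing is discarded: only cell direction and order are kept,
    matching the DF input representation described above.
    """
    x = np.zeros(L, dtype=np.int8)
    d = np.asarray(directions[:L], dtype=np.int8)
    x[:len(d)] = d
    return x

trace = [+1, +1, -1, -1, -1, +1]   # toy trace: 6 cells
x = to_fixed_length(trace)
print(x.shape)                      # (5000,)
```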

3. Deep Fingerprinting CNN Architecture

DF employs a deep 1D convolutional architecture structured as follows:

Feature Extraction (Blocks 1–4):

  • Each block consists of two consecutive 1D convolutional layers, each with $F_k$ filters of kernel size $K = 8$ and stride $s = 4$.
  • Filters per block: $F_1 = 32$, $F_2 = 64$, $F_3 = 128$, $F_4 = 256$.
  • Activation: Exponential Linear Unit (ELU) for block 1, Rectified Linear Unit (ReLU) for the others.
  • Batch normalization follows every convolution.
  • Max pooling with window $P = 8$ and stride $8$, followed by dropout ($p = 0.1$) for regularization after each block.

Classification Head:

  • Flattening yields a feature vector (≈1024 elements).
  • Two fully connected (FC) layers with 512 units each, batch normalization, and dropout ($p = 0.7$ and $p = 0.5$).
  • Output via a softmax layer over $N$ classes (i.e., the monitored sites).

Mathematical Layer Operations:

  • 1D convolution:

$$y_j^{(f)} = \sum_{i=1}^{K} w_{i,f} \cdot x_{j \cdot s + i - 1} + b_f$$

  • ELU activation: $\operatorname{ELU}(z) = z$ if $z > 0$, else $\alpha(e^z - 1)$ with $\alpha = 1$; $\operatorname{ReLU}(z) = \max(0, z)$.
  • Max pooling: $y_j^{(f)} = \max_{0 \leq m < P} x_{j \cdot P + m}^{(f)}$.
  • Dropout (train time): $d_i \sim \operatorname{Bernoulli}(1 - p)$, $y_i = d_i \cdot x_i$.
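The layer operations above can be checked against minimal NumPy implementations. This is a sketch for intuition, not DF's implementation: function names are illustrative, and the dropout sketch follows the formula exactly as written, without the $1/(1-p)$ rescaling that most frameworks add ("inverted" dropout):

```python
import numpy as np

def conv1d(x, w, b, stride):
    """Single-filter 1D convolution, no padding:
    y_j = sum_i w_i * x[j*stride + i] + b."""
    K = len(w)
    n_out = (len(x) - K) // stride + 1
    return np.array([np.dot(w, x[j * stride : j * stride + K]) + b
                     for j in range(n_out)])

def elu(z, alpha=1.0):
    """ELU(z) = z if z > 0, else alpha * (e^z - 1)."""
    return np.where(z > 0, z, alpha * (np.exp(z) - 1))

def max_pool(x, P):
    """Non-overlapping max pooling with window and stride P."""
    n_out = len(x) // P
    return x[:n_out * P].reshape(n_out, P).max(axis=1)

def dropout(x, p, rng):
    """Train-time dropout per the formula above: d ~ Bernoulli(1-p)."""
    d = rng.binomial(1, 1 - p, size=len(x))
    return d * x

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
w = np.array([1.0, 0.0, -1.0])          # edge-detector-like filter
print(conv1d(x, w, 0.0, stride=1))      # [-2. -2. -2. -2.]
print(max_pool(np.array([1, 3, 2, 5, 4, 0]), P=2))  # [3 5 4]
```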

4. Training Methodology and Regularization

DF is optimized using categorical cross-entropy loss:

$$L = -\sum_{c=1}^{N} y_c \log \hat{y}_c$$

where $y$ is the one-hot true label and $\hat{y}$ is the network output.
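With a one-hot label, the sum collapses to a single term: the loss is the negative log-probability the network assigns to the true class. A minimal numeric sketch (names illustrative):

```python
import numpy as np

def cross_entropy(true_idx, y_hat, eps=1e-12):
    """Categorical cross-entropy with a one-hot label: -log(y_hat[true]).
    eps guards against log(0)."""
    return -np.log(y_hat[true_idx] + eps)

y_hat = np.array([0.1, 0.7, 0.2])   # toy softmax output over N = 3 sites
loss = cross_entropy(1, y_hat)      # true class is index 1
print(round(float(loss), 4))        # -log(0.7) ~= 0.3567
```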

  • Optimizer: Adamax (a variant of Adam) with parameters $\alpha = 0.002$, $\beta_1 = 0.9$, $\beta_2 = 0.999$, $\epsilon = 10^{-7}$.
  • Batch size: 128; epochs: 30.
  • Regularization: Dropout (tuned per layer, up to $p = 0.7$) and batch normalization.
  • Training/Validation/Test splits: For closed-world, 80%/10%/10% per site. For open-world, 900 traces per monitored site plus up to 20,000 one-off unmonitored, with testing on 100 traces per monitored and 20,000 unmonitored.
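The closed-world 80%/10%/10% per-site split can be sketched as follows; `split_per_site` is a hypothetical helper, as the exact pipeline code is not specified here:

```python
import numpy as np

def split_per_site(traces_by_site, rng, fracs=(0.8, 0.1, 0.1)):
    """Apply an 80/10/10 split independently within each site, so every
    site is represented proportionally in train/validation/test."""
    train, val, test = [], [], []
    for site, traces in traces_by_site.items():
        n = len(traces)
        idx = rng.permutation(n)
        a = int(fracs[0] * n)
        b = int((fracs[0] + fracs[1]) * n)
        train += [(site, traces[i]) for i in idx[:a]]
        val   += [(site, traces[i]) for i in idx[a:b]]
        test  += [(site, traces[i]) for i in idx[b:]]
    return train, val, test

rng = np.random.default_rng(0)
data = {"site_a": list(range(10)), "site_b": list(range(10))}
tr, va, te = split_per_site(data, rng)
print(len(tr), len(va), len(te))   # 16 2 2
```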

The architecture and hyperparameters were selected using block-wise search transferred across datasets. Batch normalization, though doubling per-epoch computation, accelerated convergence, typically stabilizing in 30 epochs.

5. Evaluation Metrics and Results

Performance is evaluated under both defended and undefended Tor traffic, using accuracy for closed-world and precision/recall/FPR/TPR for open-world.

Closed-World Results (95 monitored sites)

  • Undefended traffic:
    • DF: 98.3%
    • CUMUL (SVM): 97.3%
    • k-NN: 95.0%
    • k-FP (random forest → k-NN): 95.5%
    • AWF (CNN): 94.9%
    • SDAE: 92.3%
  • Defended traffic (accuracy in %):
| Defense | Bandwidth Overhead | Latency Overhead | DF | AWF | CUMUL | k-FP | k-NN | SDAE |
|---|---|---|---|---|---|---|---|---|
| BuFLO | 246% | 137% | 12.6 | 11.7 | 13.5 | 13.1 | 10.4 | 9.2 |
| Tamaraw | 328% | 242% | 11.8 | 12.9 | 16.8 | 11.0 | 9.7 | 11.8 |
| WTF-PAD | 64% | 0% | 90.7 | 60.8 | 60.3 | 69.0 | 16.0 | 36.9 |
| Walkie-Talkie | 31% | 34% | 49.7 | 45.8 | 38.4 | 7.0 | 20.2 | 23.1 |

Open-World Results (95 monitored, 20,000 unmonitored)

  • Undefended: DF achieves up to $(P = 0.99, R = 0.94)$; at $R = 0.94$, FPR ≈ 0.004.
  • WTF-PAD: DF reaches precision $P = 0.96$, recall $R = 0.68$ (tuned for precision), or $P = 0.67$, $R = 0.96$ (tuned for recall). Competing attacks perform near random.
  • Walkie-Talkie: All attacks, including DF, degrade to $P < 0.36$ for any non-trivial recall.

Further, as the amount of unmonitored training traffic scales to 20,000 traces, DF reaches TPR ≈ 0.957 and FPR ≈ 0.007. Against a mis-implemented Walkie-Talkie (with asymmetric collisions), DF accuracy rises to ≈87.2% in the closed world, with open-world TPR = 0.85 and FPR = 0.23.

Top-2 accuracy for DF against Walkie-Talkie in the closed world is 98.44%, indicating that DF reliably narrows each trace down to the true site and its decoy, the very pair the defense is designed to conflate.
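Top-2 accuracy can be computed directly from the softmax outputs; a minimal sketch (names illustrative):

```python
import numpy as np

def top2_accuracy(probs, labels):
    """Fraction of traces whose true label is among the two
    highest-probability classes, the metric quoted above for
    Walkie-Talkie, where true site and decoy are both plausible."""
    top2 = np.argsort(probs, axis=1)[:, -2:]   # indices of 2 best classes
    hits = [labels[i] in top2[i] for i in range(len(labels))]
    return float(np.mean(hits))

probs = np.array([[0.5, 0.3, 0.2],    # true label 1 is 2nd best: hit
                  [0.1, 0.2, 0.7]])   # true label 0 is 3rd best: miss
print(top2_accuracy(probs, [1, 0]))   # 0.5
```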

6. Design Rationale and Countermeasures Against Overfitting

Critical architectural decisions in DF are empirically motivated:

  • ELU activation in block 1 passes negative values (unlike ReLU), preserving directional sign information, which matters because incoming and outgoing cells are encoded as $\pm 1$.
  • Two-layer FC classifier with batch normalization plus aggressive dropout (up to p=0.7p=0.7) counteracts overfitting even with high-capacity models.
  • Modular block design (2×Conv + BN + Act → Pool → Dropout), inspired by VGG/ResNet, facilitates hierarchical temporal feature extraction.
  • Batch normalization increases per-epoch compute time, but effectively halves the number of epochs required for convergence.
  • Hyperparameter search is systematic, with block-wise tuning generalized across datasets.

This suggests that effective regularization and architectural innovations in deep learning can yield large advances in practical website fingerprinting capabilities even under strong defense mechanisms and unbalanced open-world class distributions.

7. Implications and Open Challenges

The DF attack shows that sophisticated deep learning architectures, specifically 1D CNNs, can outperform both handcrafted feature-based and prior deep-learning-based attacks on Tor traffic, including in the presence of adaptive padding defenses such as WTF-PAD. While BuFLO and Tamaraw induce overheads prohibitive for practical deployment (e.g., up to 328% bandwidth, 242% latency), lighter-weight defenses like Walkie-Talkie are more effective at resisting DF, limiting accuracy to approximately 49.7% in the closed world and rendering precision $< 0.36$ in the open world.

A plausible implication is that future defenses must be explicitly constructed with the expressiveness of deep neural attackers in mind, as existing lightweight countermeasures can be decisively broken by architectures like DF. DF’s high precision-recall and ROC performance in open-world settings highlight the outstanding threat posed by these attacks. Evaluation and deployment of defenses must therefore account for both classical and deep learning-based adversaries, balancing practical overheads with resilience to sophisticated attacks (Sirinam et al., 2018).
