
Partitioned Quantum Neural Network (PQNN)

Updated 23 January 2026
  • Partitioned Quantum Neural Networks (PQNNs) are hybrid architectures that divide quantum and classical computations across subcomponents to enhance scalability and privacy.
  • They employ parallel hybrid, partitioned-feature, and split-learning approaches to reduce quantum resource demands and mitigate bottlenecks in contemporary QNNs.
  • Empirical evaluations demonstrate that PQNNs achieve lower error rates, improved robustness, and effective privacy guarantees in tasks like regression, image classification, and anomaly detection.

A Partitioned Quantum Neural Network (PQNN) is a class of hybrid or distributed quantum neural network architectures in which either the input features, the circuit model, or the computational workflow is explicitly divided across parallel quantum and/or classical subcomponents. These subcomponents process disjoint or redundant partitions of the data or model, and their outputs are subsequently aggregated. This architectural paradigm addresses key scalability, expressiveness, resource, and privacy bottlenecks inherent in contemporary quantum neural network (QNN) methodologies. The PQNN concept spans parallel hybrid quantum-classical networks, partitioned-feature distributed QNNs, federated or split quantum learning with compressed communication channels, and privacy-preserving anomaly detection under adversarial and industrial constraints.

1. Definitions and Architectural Paradigms

Three principal instantiations of PQNN have been formalized in current literature:

  • Parallel Hybrid PQNN (quantum–classical): The input vector is broadcast without alteration to two or more independent computational branches: a variational quantum circuit (VQC) and a classical multilayer perceptron (MLP). The outputs are linearly combined with trainable weights, yielding a final prediction with branch-specific interpretability (Kordzanganeh et al., 2023).
  • Partitioned-Feature PQNN (distributed QNN): The high-dimensional input vector is segmented into $m$ disjoint blocks. Each block is processed by an independent small quantum neural sub-network. The outputs (typically expectation vectors of observables) from each subnetwork are summed or otherwise aggregated to form the overall network prediction (Kawase, 2023, Ngo et al., 16 Jan 2026).
  • Split-Learning PQNN (federated/split quantum): The global QNN is split between client and server. Clients apply local QNNs and then project classical or quantum-compressed features (via cross-channel pooling) upstream to a global quantum/classical head, optimizing for both privacy and communication cost (Yun et al., 2022, Ngo et al., 16 Jan 2026).

All instantiations exploit the ability to distribute quantum circuit depth, width, or processing responsibility, enabling more favorable scaling properties and practical compatibility with hardware-constrained near-term (NISQ) quantum processing units.

2. Formal Model and Mathematical Structure

The precise mathematical formalizations differ between PQNN variants:

Parallel hybrid PQNN. Given an input $x \in \mathbb{R}^n$:

  • Quantum branch ($f_q$): A $K$-qubit VQC alternates $L$ layers of angle-embedded feature encoding and variational parameter unitaries. Measurement yields an $M$-dimensional output vector.

$$f_q(x; \theta) = \left[\langle \psi(x; \theta) | M_m | \psi(x; \theta) \rangle\right]_{m=1}^{M}$$

  • Classical branch ($f_c$): A single-hidden-layer MLP with $F$ neurons, ReLU activations, and an output mapping to $\mathbb{R}^M$.
  • Aggregation: Trained affine mixing with learnable vectors $\alpha, \beta \in \mathbb{R}^M$:

$$f(x; \theta, \phi, \alpha, \beta) = \alpha \odot f_q(x; \theta) + \beta \odot f_c(x; \phi)$$
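
A minimal sketch of this parallel hybrid architecture in PennyLane with a PyTorch classical branch. The circuit template, layer counts, and branch widths below are illustrative assumptions, not the exact configuration of Kordzanganeh et al. (2023):

```python
import pennylane as qml
import torch

n_qubits, n_layers, M = 4, 2, 4  # K qubits, L variational layers, M observables

dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch")
def quantum_branch(x, weights):
    qml.AngleEmbedding(x, wires=range(n_qubits))             # feature encoding
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(w)) for w in range(M)]     # f_q(x; theta)

class ParallelHybridPQNN(torch.nn.Module):
    def __init__(self, n_features, hidden=32):
        super().__init__()
        shape = qml.StronglyEntanglingLayers.shape(n_layers=n_layers, n_wires=n_qubits)
        self.theta = torch.nn.Parameter(0.1 * torch.randn(shape))  # quantum params
        self.mlp = torch.nn.Sequential(                            # f_c(x; phi)
            torch.nn.Linear(n_features, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, M))
        self.alpha = torch.nn.Parameter(torch.ones(M))  # trainable branch mixers
        self.beta = torch.nn.Parameter(torch.ones(M))

    def forward(self, x):
        f_q = torch.stack(quantum_branch(x, self.theta))
        f_c = self.mlp(x)
        return self.alpha * f_q + self.beta * f_c       # elementwise affine mix

model = ParallelHybridPQNN(n_features=n_qubits)
print(model(torch.rand(n_qubits)))                      # M-dimensional prediction
```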

Partitioned-feature PQNN. For $x \in \mathbb{R}^d$, partition the input into $m$ contiguous blocks $x_{i,j} \in \mathbb{R}^{d_j}$. Each subnetwork then operates as follows:

  • Quantum circuit per block: Each block uses an $n_j$-qubit angle-encoding circuit interleaved with parameterized variational layers. The output is an expectation vector $y_{i,j}$ over $d_{\mathrm{out}}$ observables.
  • Aggregation: Ensemble (sum) output across all blocks:

$$\hat{y}_i = \mathrm{Softmax}\left( c \sum_{j=1}^{m} y_{i,j} \right)$$
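
A toy sketch of the partitioned-feature variant, assuming equal block sizes and a StronglyEntanglingLayers ansatz per block (both are illustrative choices rather than the exact setup of Kawase, 2023):

```python
import pennylane as qml
import numpy as np

d, m, d_out, c = 8, 2, 3, 1.0   # input dim, blocks, output observables, scale
n_block = d // m                # n_j qubits per block (equal sizes assumed)

dev = qml.device("default.qubit", wires=n_block)

@qml.qnode(dev)
def sub_qnn(x_block, weights):
    qml.AngleEmbedding(x_block, wires=range(n_block))         # per-block encoding
    qml.StronglyEntanglingLayers(weights, wires=range(n_block))
    return [qml.expval(qml.PauliZ(w)) for w in range(d_out)]  # y_{i,j}

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

shape = qml.StronglyEntanglingLayers.shape(n_layers=2, n_wires=n_block)
params = [0.1 * np.random.randn(*shape) for _ in range(m)]  # independent sub-nets

x = np.random.rand(d)            # one input x_i
blocks = np.split(x, m)          # disjoint blocks x_{i,j}
y = sum(np.array(sub_qnn(b, w)) for b, w in zip(blocks, params))
print(softmax(c * y))            # ensemble prediction over d_out classes
```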

Split-learning PQNN. The model is divided between client and server:

  • Local side: A quantum CNN (QCNN) encodes image patches; outputs are compressed via cross-channel pooling (sketched in code after this list):

$$\tilde{f}^k = f^k \cdot \alpha^k = \sum_{i=1}^{c_{\mathrm{out}}} f^k_i \alpha^k_i$$

  • Server side: Processes the pooled features $\tilde{F}^k$ with another QCNN and a global quantum classifier.
  • Communication: Only compressed features and label tuples are transmitted.
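
The pooling step above can be illustrated with plain NumPy; the shapes, channel count, and softmax normalization of the pooling weights are hypothetical choices for the sketch:

```python
import numpy as np

c_out, n_maps, h, w = 8, 4, 4, 4                 # channels, maps k, spatial dims
features = np.random.randn(n_maps, c_out, h, w)  # client-side QCNN outputs f^k_i
alpha = np.random.randn(n_maps, c_out)           # trainable pooling weights a^k_i

# Normalize pooling weights per map (an illustrative convention)
alpha = np.exp(alpha) / np.exp(alpha).sum(axis=1, keepdims=True)

# f~^k = sum_i f^k_i * a^k_i : one h x w map per k instead of c_out maps
pooled = np.einsum("kchw,kc->khw", features, alpha)
print(features.nbytes, "->", pooled.nbytes)      # communication compression
```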

3. Training Protocols and Gradient Estimation

PQNN training is inherently hybrid, supporting both classical and quantum optimization loops:

  • Loss Functions: Standard mean squared error (MSE) for regression or cross-entropy for classification tasks, sometimes regularized (e.g., with $\ell_2$ penalties).
  • Optimizer: Gradient-based methods (Adam, SGD) with separate learning rates for quantum and classical parameters.
  • Quantum Gradients: Parameter-shift rule for all variational circuit angles (see the sketch after this list):

$$\frac{\partial E}{\partial \theta_\ell} = \frac{1}{2}\left[ E\left(\theta_\ell + \frac{\pi}{2}\right) - E\left(\theta_\ell - \frac{\pi}{2}\right) \right]$$

  • Classical Gradients: Standard backpropagation for MLP weights and affine combiner parameters.
  • Distributed/Split Protocols: In PQNN split learning, gradient messages are communicated between clients and a central server. Privacy is enforced as raw inputs are never transmitted.
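
A minimal check of the parameter-shift rule against PennyLane's automatic differentiation, on a toy single-qubit circuit (the shift convention is standard; the circuit and angle are illustrative):

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def E(theta):
    qml.RY(theta, wires=0)
    return qml.expval(qml.PauliZ(0))   # E(theta) = cos(theta)

theta = np.array(0.3, requires_grad=True)

# dE/dtheta = (1/2) [E(theta + pi/2) - E(theta - pi/2)]
shift_grad = 0.5 * (E(theta + np.pi / 2) - E(theta - np.pi / 2))
auto_grad = qml.grad(E)(theta)
print(shift_grad, auto_grad)           # both equal -sin(0.3)
```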

4. Interpretability, Expressivity, and Theoretical Analysis

  • Fourier Structure: Angle-embedded VQCs generate truncated multivariate Fourier series expansions in the input. This structure limits the expressivity of quantum branches to band-limited, smooth (sinusoidal) functions (Kordzanganeh et al., 2023). Classical MLP branches, as universal approximators in position space, complement this with the ability to fit high-frequency, localized, or non-harmonic features.
  • Resource Scaling: Partitioning input features or circuits reduces required quantum resources per partition block, alleviates barren-plateau issues, and matches the operational constraints of NISQ devices (Kawase, 2023, Ngo et al., 16 Jan 2026).
  • Privacy Amplification: In split-learning PQNNs, both empirical obfuscation and formal $(\epsilon, \delta)$-differential privacy bounds can be attained by integrating quantum noise (e.g., depolarizing channels) and classical Gaussian mechanisms. Composition rules scale the privacy budget linearly in the number of parallel blocks (Ngo et al., 16 Jan 2026); a toy sketch follows this list.
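
A toy numerical sketch of the classical Gaussian mechanism with linear budget composition across $m$ parallel blocks; the budget values, sensitivity, and feature shapes are illustrative assumptions, not settings from the cited work:

```python
import numpy as np

def gaussian_mechanism(v, sensitivity, eps, delta):
    # Standard calibration: sigma >= sensitivity * sqrt(2 ln(1.25/delta)) / eps
    sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / eps
    return v + np.random.normal(0.0, sigma, size=v.shape)

m, eps_block, delta_block = 4, 0.5, 1e-5         # hypothetical per-block budget
features = [np.random.rand(8) for _ in range(m)]  # per-block transmitted outputs
noisy = [gaussian_mechanism(f, 1.0, eps_block, delta_block) for f in features]

# Basic composition: releasing all m blocks costs (m*eps, m*delta) in total
print("total budget:", m * eps_block, m * delta_block)
```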

5. Empirical Performance and Scalability

  • Parallel hybrid PQNN achieves MSE an order of magnitude lower than either pure VQC or pure MLP on periodic-plus-noise regression tasks. Example: 1D task, PQNN MSE $\approx 2 \times 10^{-4}$, versus VQC $\approx 4 \times 10^{-2}$ and MLP $\approx 1 \times 10^{-3}$.
  • Partitioned QNN outperforms single QNN in multi-class classification with lower required qubit count:
    • Semeion: Distributed $m = 2$, accuracy $= 0.94787 \pm 0.01499$.
    • MNIST: Distributed $m = 14$, accuracy $= 0.96140$.
  • Quantum split learning (QSL): On $14 \times 14$ MNIST, split PQNN achieves $64.96\%$ accuracy (vs. $63.32\%$ QFL, $58.13\%$ standalone QNN). Cross-channel pooling reduces communication 16× and increases accuracy by $\approx 6\%$ over naive pooling (Yun et al., 2022).
  • On the ICS Smart Grid dataset, PQNN (QUPID) attains accuracy up to $0.940$ with ROC-AUC $0.975$ and F1 $0.925$, outperforming FTTransformer and deep classical baselines.
  • Robustness: Under adversarial FGSM/PGD attacks ($\|\alpha\|_\infty = 0.05$), R-PQNN sustains $\approx 80\%$ accuracy versus $70\%$ for classical models.
  • Scalability: Qubit requirement for each block is $\propto \log_2(tm)$. Parallelization across $K$ blocks yields linear speedup; hardware requirements per block fit NISQ devices with $\sim 100$ qubits.

6. Privacy, Communication, and Robustness

  • Communication Efficiency: PQNN split/federated architectures (e.g., with cross-channel pooling) transmit compressed feature maps ($\sim 64$ bytes), not raw data or model weights, minimizing client-server bandwidth (Yun et al., 2022).
  • Privacy Guarantees: Empirical visualization reveals transmitted features are highly obfuscated. With quantum and classical noise, theoretical differential privacy is amplified, and model robustness to adversarial manipulations is certified (Ngo et al., 16 Jan 2026).
  • Trade-offs: Privacy and communication efficiency are achieved at the cost of potentially reduced expressivity if feature partitioning is excessive, as global correlations across feature blocks may be harder to recover (Kawase, 2023).

7. Limitations, Open Directions, and Extensions

  • Partitioning Sensitivity: Excessive partitioning (large $m$) can degrade performance due to insufficient modeling of global feature correlations.
  • Quantum Hardware Limitations: No PQNN variant has yet been demonstrated on NISQ hardware at full model scale; all published results are based on classical simulation.
  • Data/Deployment Constraints: Experiments so far have used downsampled datasets or spatially homogeneous data splits; extensions to heterogeneous non-IID settings are open.
  • Architectural Extensions: Dynamic routing, adaptive block weighting (learned or input-dependent $\alpha(x), \beta(x)$), multi-view ensembles, and integration of quantum differential privacy primitives are active topics.

PQNN architectures represent a flexible family of hybrid and distributed QNN models that provide clear benefits in terms of scalability, robustness, privacy, and accuracy across regression, classification, and anomaly detection tasks. The paradigm leverages partitioning at the architectural, computational, or communication levels to circumvent key obstacles of monolithic quantum models and offers promising routes for high-dimensional, resource-constrained, and privacy-sensitive machine learning applications (Kordzanganeh et al., 2023, Yun et al., 2022, Ngo et al., 16 Jan 2026, Kawase, 2023).
