Partitioned Quantum Neural Network (PQNN)
- Partitioned Quantum Neural Networks (PQNNs) are hybrid architectures that divide quantum and classical computations across subcomponents to enhance scalability and privacy.
- They employ parallel hybrid, partitioned-feature, and split-learning approaches to reduce quantum resource demands and mitigate bottlenecks in contemporary QNNs.
- Empirical evaluations demonstrate that PQNNs achieve lower error rates, improved robustness, and effective privacy guarantees in tasks like regression, image classification, and anomaly detection.
A Partitioned Quantum Neural Network (PQNN) is a class of hybrid or distributed quantum neural network architectures in which either the input features, the circuit model, or the computational workflow is explicitly divided across parallel quantum and/or classical subcomponents. These subcomponents process disjoint or redundant partitions of the data or model, and their outputs are subsequently aggregated. This architectural paradigm addresses key scalability, expressiveness, resource, and privacy bottlenecks inherent in contemporary quantum neural network (QNN) methodologies. The PQNN concept spans parallel hybrid quantum-classical networks, partitioned-feature distributed QNNs, federated or split quantum learning with compressed communication channels, and privacy-preserving anomaly detection under adversarial and industrial constraints.
1. Definitions and Architectural Paradigms
Three principal instantiations of PQNN have been formalized in current literature:
- Parallel Hybrid PQNN (quantum–classical): The input vector is broadcast without alteration to two or more independent computational branches: a quantum variational circuit (VQC) and a classical multilayer perceptron (MLP). The outputs are linearly combined with trainable weights, yielding a final prediction with branch-specific interpretability (Kordzanganeh et al., 2023).
- Partitioned-Feature PQNN (distributed QNN): The input vector of high dimension is segmented into disjoint blocks. Each block is processed by an independent small quantum neural sub-network. The outputs (typically expectation vectors of observables) from each subnetwork are summed or otherwise aggregated to form the overall network prediction (Kawase, 2023, Ngo et al., 16 Jan 2026).
- Split-Learning PQNN (federated/split quantum): The global QNN is split between client and server. Clients apply local QNNs and then project classical or quantum-compressed features (via cross-channel pooling) upstream to a global quantum/classical head, optimizing for both privacy and communication cost (Yun et al., 2022, Ngo et al., 16 Jan 2026).
All instantiations exploit the ability to distribute quantum circuit depth, width, or processing responsibility, enabling more favorable scaling properties and practical compatibility with hardware-constrained, noisy intermediate-scale quantum (NISQ) processors.
2. Formal Model and Mathematical Structure
The precise mathematical formalizations differ between PQNN variants:
2.1 Parallel Hybrid PQNN (Kordzanganeh et al., 2023)
Given an input $x \in \mathbb{R}^d$:
- Quantum branch ($f_Q$): An $n$-qubit VQC alternates layers of angle-embedded feature encoding and variational parameter unitaries. Measurement yields an $m$-dimensional output vector $f_Q(x)$.
- Classical branch ($f_C$): A single hidden-layer MLP with $h$ neurons, ReLU activations, and an output mapping to $\mathbb{R}^m$.
- Aggregation: Trained affine mixing with learnable vectors $\alpha, \beta, b \in \mathbb{R}^m$:
$f(x) = \alpha \odot f_Q(x) + \beta \odot f_C(x) + b$
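The parallel hybrid forward pass can be sketched in a few lines. This is a minimal illustration, not the cited implementation: the quantum branch is mocked as a band-limited sinusoidal feature map (consistent with the Fourier-structure analysis in Section 4), the classical branch is a one-hidden-layer ReLU MLP, and all sizes and parameter values are arbitrary.

```python
import math

def quantum_branch(x, thetas):
    # Stand-in for an n-qubit VQC: angle-embedded circuits produce
    # band-limited sinusoidal features of the input.
    return [math.cos(x + t) for t in thetas]

def classical_branch(x, W1, b1, W2, b2):
    # One-hidden-layer MLP with ReLU activations.
    hidden = [max(0.0, w * x + b) for w, b in zip(W1, b1)]
    return [sum(w * h for w, h in zip(row, hidden)) + b
            for row, b in zip(W2, b2)]

def parallel_hybrid(x, thetas, mlp_params, alpha, beta, bias):
    # Trainable affine mixing of the two branch outputs:
    #   f(x) = alpha * q(x) + beta * c(x) + bias   (elementwise)
    q = quantum_branch(x, thetas)
    c = classical_branch(x, *mlp_params)
    return [a * qi + b * ci + bi
            for a, qi, b, ci, bi in zip(alpha, q, beta, c, bias)]

# Toy instantiation: 2-dimensional output, 3 hidden neurons.
thetas = [0.1, 0.7]
W1, b1 = [0.5, -0.3, 0.9], [0.0, 0.1, -0.2]
W2 = [[0.2, 0.4, -0.1], [0.3, -0.2, 0.5]]
b2 = [0.0, 0.0]
y = parallel_hybrid(0.5, thetas, (W1, b1, W2, b2),
                    alpha=[1.0, 1.0], beta=[1.0, 1.0], bias=[0.0, 0.0])
print(y)
```

Because the mixing weights are trained, the model can learn per-output-dimension how much to trust the quantum versus classical branch, which is the source of the branch-specific interpretability noted above.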
2.2 Partitioned-Feature PQNN (Kawase, 2023, Ngo et al., 16 Jan 2026)
For $x \in \mathbb{R}^d$, partition $x$ into $K$ contiguous blocks $x^{(1)}, \dots, x^{(K)}$. Each subnetwork:
- Quantum circuit per block: Each block uses a $(d/K)$-qubit angle-encoding circuit interleaved with parameterized variational layers. The output is an expectation vector $f_k(x^{(k)}) \in \mathbb{R}^m$ across $m$ observables.
- Aggregation: Ensemble (sum) output across all blocks:
$f(x) = \sum_{k=1}^{K} f_k(x^{(k)})$
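The partition-and-sum scheme can be sketched as follows. The subnetwork here is a placeholder returning expectation-style values in [-1, 1]; the block sizes, parameters, and aggregation by summation follow the structure above, but everything else is illustrative.

```python
import math

def partition(x, K):
    # Split a d-dimensional input into K contiguous blocks
    # (d is assumed divisible by K for simplicity).
    size = len(x) // K
    return [x[i * size:(i + 1) * size] for i in range(K)]

def subnetwork(block, theta):
    # Stand-in for a small QNN on one feature block: returns an
    # expectation-style vector with entries in [-1, 1].
    s = sum(block) + theta
    return [math.cos(s), math.sin(s)]

def partitioned_qnn(x, K, thetas):
    # Ensemble (sum) aggregation over the K subnetwork outputs.
    blocks = partition(x, K)
    outputs = [subnetwork(b, t) for b, t in zip(blocks, thetas)]
    return [sum(col) for col in zip(*outputs)]

x = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]          # d = 6 features
y = partitioned_qnn(x, K=3, thetas=[0.0, 0.1, 0.2])
print(y)
```

Each block's circuit only ever sees its own slice of the input, which is what reduces the per-circuit qubit count, and also why global cross-block correlations are harder to capture (Section 7).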
2.3 Split-Learning PQNN (Yun et al., 2022)
- Local side: Quantum CNN (QCNN) encodes image patches; the resulting $C$-channel feature maps $z_1, \dots, z_C$ are compressed via cross-channel pooling into a single map $\bar{z}$.
- Server side: Processes pooled features with another QCNN and a global quantum classifier.
- Communication: Only compressed features and label tuples are transmitted.
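A minimal sketch of the cross-channel pooling step, written here as channel-wise averaging for illustration (the exact pooling operator is specified in Yun et al., 2022). Pooling C channel maps into one before transmission gives a C-fold reduction in upstream traffic:

```python
def cross_channel_pool(feature_maps):
    # feature_maps: list of C channel maps, each a flat list of H*W values.
    # Pool across the channel axis so only ONE map is transmitted,
    # a C-fold reduction in client-to-server communication.
    C = len(feature_maps)
    return [sum(ch[i] for ch in feature_maps) / C
            for i in range(len(feature_maps[0]))]

# 16 channels of a 2x2 map -> a single 2x2 map (16x less traffic).
maps = [[float(c)] * 4 for c in range(16)]
pooled = cross_channel_pool(maps)
print(pooled)  # each entry is the mean over channels 0..15, i.e. 7.5
```

The 16-channel example mirrors the 16× communication reduction reported in Section 5.2.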
3. Training Protocols and Gradient Estimation
PQNN training is inherently hybrid, supporting both classical and quantum optimization loops:
- Loss Functions: Standard mean squared error (MSE) for regression or cross-entropy for classification tasks, sometimes regularized (e.g., with $\ell_2$ penalties).
- Optimizer: Gradient-based methods (Adam, SGD) with separate learning rates for quantum and classical parameters.
- Quantum Gradients: Parameter-shift rule for all variational circuit angles $\theta$:
$\frac{\partial \langle O \rangle}{\partial \theta} = \frac{1}{2}\left[\langle O \rangle_{\theta + \pi/2} - \langle O \rangle_{\theta - \pi/2}\right]$
- Classical Gradients: Standard backpropagation for MLP weights and affine combiner parameters.
- Distributed/Split Protocols: In PQNN split learning, gradient messages are communicated between clients and a central server. Privacy is enforced as raw inputs are never transmitted.
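The parameter-shift rule is easy to verify on a toy circuit whose expectation value is known in closed form. Here the "circuit" is the textbook case RX(θ)|0⟩ measured in Z, giving ⟨Z⟩ = cos(θ), so the shift rule must reproduce the analytic derivative −sin(θ) exactly:

```python
import math

def expectation(theta):
    # Toy circuit expectation: RX(theta)|0> measured in Z gives cos(theta).
    return math.cos(theta)

def parameter_shift_grad(f, theta, shift=math.pi / 2):
    # Parameter-shift rule: d<O>/dtheta = (f(theta+pi/2) - f(theta-pi/2)) / 2.
    # Exact (not a finite difference) for gates generated by Pauli operators.
    return 0.5 * (f(theta + shift) - f(theta - shift))

theta = 0.3
g = parameter_shift_grad(expectation, theta)
print(abs(g - (-math.sin(theta))) < 1e-12)  # prints True
```

In a full PQNN training loop these quantum gradients are fed into the same Adam/SGD step as the backpropagated classical gradients, with branch-specific learning rates as noted above.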
4. Interpretability, Expressivity, and Theoretical Analysis
- Fourier Structure: Angle-embedded VQCs generate truncated multivariate Fourier series expansions in the input. This structure limits the expressivity of quantum branches to band-limited, smooth (sinusoidal) functions (Kordzanganeh et al., 2023). Classical MLP branches, as universal approximators in position space, complement this with the ability to fit high-frequency, localized, or non-harmonic features.
- Resource Scaling: Partitioning input features or circuits reduces required quantum resources per partition block, alleviates barren-plateau issues, and matches the operational constraints of NISQ devices (Kawase, 2023, Ngo et al., 16 Jan 2026).
- Privacy Amplification: In split-learning PQNNs, both empirical obfuscation and formal $(\epsilon, \delta)$-differential privacy bounds can be attained by integrating quantum noise (e.g., depolarizing channels) and classical Gaussian mechanisms. Composition rules scale the privacy budget linearly in the number of parallel blocks (Ngo et al., 16 Jan 2026).
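The Fourier-structure claim can be checked directly on the smallest possible instance, using a hand-rolled 2×2 simulation (no quantum library assumed): a single-qubit RX(x) angle encoding followed by a Z measurement yields exactly cos(x), i.e., a one-frequency truncated Fourier series of the input.

```python
import math

def rx(x):
    # Single-qubit RX(x) rotation matrix.
    c, s = math.cos(x / 2), math.sin(x / 2)
    return [[c, -1j * s], [-1j * s, c]]

def z_expectation(x):
    # <0| RX(x)^dagger Z RX(x) |0>: apply RX to |0>, then measure Z,
    # i.e., P(|0>) - P(|1>).
    U = rx(x)
    psi = [U[0][0], U[1][0]]            # first column = U|0>
    return abs(psi[0]) ** 2 - abs(psi[1]) ** 2

# The model class is exactly the band-limited function cos(x):
for x in [0.0, 0.4, 1.3]:
    assert abs(z_expectation(x) - math.cos(x)) < 1e-12
print("single-frequency Fourier model confirmed")
```

Re-uploading the same feature across more encoding layers adds higher harmonics, but the series always stays band-limited, which is precisely the gap the classical MLP branch fills in the parallel hybrid design.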
5. Empirical Performance and Scalability
5.1. Synthetic Regression (Kordzanganeh et al., 2023)
- The parallel hybrid PQNN achieves MSE roughly an order of magnitude lower than either a pure VQC or a pure MLP on periodic-plus-noise regression tasks; on the reported 1D task, both standalone branches are clearly outperformed.
5.2. Image Classification (Kawase, 2023, Yun et al., 2022)
- The partitioned QNN outperforms a single monolithic QNN in multi-class classification while requiring fewer qubits per circuit:
- Semeion: the distributed configuration attains higher accuracy than the single-QNN baseline.
- MNIST: the distributed configuration reaches accuracy $0.96140$.
- Quantum split learning (QSL): On MNIST, the split PQNN attains higher accuracy than both quantum federated learning (QFL) and a standalone QNN. Cross-channel pooling reduces communication 16× and increases accuracy by ≈6% over naive pooling (Yun et al., 2022).
5.3. Smart Grid Anomaly Detection (Ngo et al., 16 Jan 2026)
- On the ICS Smart Grid dataset, PQNN (QUPID) attains accuracy up to $0.940$ with ROC-AUC $0.975$ and F1 $0.925$, outperforming FTTransformer and deep classical baselines.
- Robustness: Under FGSM/PGD adversarial perturbations, the robust variant (R-PQNN) sustains markedly higher accuracy than classical models, whose accuracy degrades sharply.
- Scalability: The qubit requirement per block scales as $O(d/K)$ for a $d$-dimensional input split into $K$ blocks. Parallelization across blocks yields linear speedup, and the per-block hardware requirement fits NISQ devices with modest qubit counts.
6. Privacy, Communication, and Robustness
- Communication Efficiency: PQNN split/federated architectures (e.g., with cross-channel pooling) transmit compressed feature maps rather than raw data or model weights, minimizing client-server bandwidth (Yun et al., 2022).
- Privacy Guarantees: Empirical visualization reveals transmitted features are highly obfuscated. With quantum and classical noise, theoretical differential privacy is amplified, and model robustness to adversarial manipulations is certified (Ngo et al., 16 Jan 2026).
- Trade-offs: Privacy and communication efficiency are achieved at the cost of potentially reduced expressivity if feature partitioning is excessive, as global correlations across feature blocks may be harder to recover (Kawase, 2023).
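The classical half of the privacy mechanism can be sketched with the standard Gaussian mechanism and basic composition. Everything here is illustrative: the calibration formula is the textbook $(\epsilon, \delta)$ Gaussian-mechanism bound (valid for $\epsilon < 1$), and the linear budget scaling across $K$ blocks mirrors the composition rule noted in Section 4, not the exact accounting of the cited work.

```python
import math
import random

def gaussian_sigma(sensitivity, eps, delta):
    # Textbook Gaussian-mechanism calibration (requires eps < 1):
    #   sigma >= sensitivity * sqrt(2 ln(1.25/delta)) / eps
    return sensitivity * math.sqrt(2 * math.log(1.25 / delta)) / eps

def privatize(features, sensitivity, eps, delta, rng):
    # Add calibrated Gaussian noise to each transmitted feature.
    sigma = gaussian_sigma(sensitivity, eps, delta)
    return [f + rng.gauss(0.0, sigma) for f in features]

# Basic composition: K parallel blocks each spending (eps, delta)
# consume a total budget of (K*eps, K*delta).
K, eps, delta = 4, 0.5, 1e-5
total_eps, total_delta = K * eps, K * delta
rng = random.Random(0)
noisy = privatize([0.2, -0.1, 0.7], sensitivity=1.0,
                  eps=eps, delta=delta, rng=rng)
print(total_eps, total_delta, len(noisy))
```

Quantum-side noise (e.g., depolarizing channels acting on each block's circuit) composes with this classical mechanism to amplify the overall privacy guarantee, at the cost of some utility per block.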
7. Limitations, Open Directions, and Extensions
- Partitioning Sensitivity: Excessive partitioning (a large number of blocks $K$) can degrade performance because global correlations across feature blocks are no longer modeled.
- Quantum Hardware Limitations: No PQNN variant has yet been demonstrated on NISQ hardware at full model scale; all published results are based on classical simulation.
- Data/Deployment Constraints: Experiments so far have used downsampled datasets or spatially homogeneous data splits; extensions to heterogeneous non-IID settings are open.
- Architectural Extensions: Dynamic routing, adaptive block weighting (learned or input-dependent aggregation weights), multi-view ensembles, and integration of quantum differential privacy primitives are active topics.
PQNN architectures represent a flexible family of hybrid and distributed QNN models that provide clear benefits in terms of scalability, robustness, privacy, and accuracy across regression, classification, and anomaly detection tasks. The paradigm leverages partitioning at the architectural, computational, or communication levels to circumvent key obstacles of monolithic quantum models and offers promising routes for high-dimensional, resource-constrained, and privacy-sensitive machine learning applications (Kordzanganeh et al., 2023, Yun et al., 2022, Ngo et al., 16 Jan 2026, Kawase, 2023).