
Domain-aware priors enable vertical federated learning in data-scarce coral multi-omics

Published 31 Dec 2025 in q-bio.QM (arXiv:2601.00050v1)

Abstract: Vertical federated learning (VFL) enables multi-laboratory collaboration on distributed multi-omics datasets without sharing raw data, but exhibits severe instability under extreme data scarcity (P ≫ N) when applied generically. Here, we investigate how domain-aware design choices, specifically gradient-saliency-guided feature selection with biologically motivated priors, affect the stability and interpretability of VFL architectures in small-sample coral stress classification (N = 13 samples, P = 90,579 features across transcriptomics, proteomics, metabolomics, and microbiome data). We benchmark a domain-aware VFL framework against two baselines on the Montipora capitata thermal stress dataset: (i) a standard NVFlare-based VFL and (ii) LASER, a label-aware VFL method. Domain-aware VFL achieves an AUROC of 0.833 ± 0.030 after reducing dimensionality by 98.6%, significantly outperforming NVFlare VFL, which performs at chance level (AUROC 0.500 ± 0.125, p = 0.0058). LASER shows modest improvement (AUROC 0.600 ± 0.215) but exhibits higher variance and does not reach statistical significance. Domain-aware feature selection yields stable top-feature sets across analysis parameters. Negative-control experiments using permuted labels produce AUROC values below chance (0.262), confirming the absence of data leakage and indicating that the observed performance arises from genuine biological signal. These results motivate design principles for VFL in extreme P ≫ N regimes, emphasizing domain-informed dimensionality reduction and stability-focused evaluation.
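The gradient-saliency-guided feature selection the abstract describes can be sketched as follows. This is a minimal illustrative sketch under stated assumptions, not the authors' implementation: it uses a plain logistic model trained in pure Python on tiny synthetic data, and all function names, hyperparameters, and data are hypothetical. The core idea is the same, though: train a model, score each input feature by the mean magnitude of the output's gradient with respect to that feature, and keep only the top-scoring features.

```python
# Hedged sketch (not the paper's code): gradient-saliency feature selection
# with a pure-Python logistic model on hypothetical toy data.
import math
import random


def train_logistic(X, y, lr=0.5, epochs=200):
    """Plain logistic regression fit by batch gradient descent."""
    n, p = len(X), len(X[0])
    w, b = [0.0] * p, 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * p, 0.0
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            pred = 1.0 / (1.0 + math.exp(-z))
            err = pred - yi
            for j in range(p):
                gw[j] += err * xi[j]
            gb += err
        w = [wj - lr * gwj / n for wj, gwj in zip(w, gw)]
        b -= lr * gb / n
    return w, b


def saliency_scores(w, b, X):
    """Mean |d output / d input_j| over samples.

    For a logistic model the input gradient is sigmoid'(z) * w_j, so the
    per-feature saliency is the mean of sigmoid'(z) * |w_j|.
    """
    n, p = len(X), len(X[0])
    scores = [0.0] * p
    for xi in X:
        z = sum(wj * xj for wj, xj in zip(w, xi)) + b
        s = 1.0 / (1.0 + math.exp(-z))
        g = s * (1.0 - s)  # sigmoid derivative at this sample
        for j in range(p):
            scores[j] += g * abs(w[j])
    return [sc / n for sc in scores]


def select_top_k(scores, k):
    """Indices of the k highest-saliency features."""
    return sorted(range(len(scores)), key=lambda j: -scores[j])[:k]


# Toy data (hypothetical): feature 0 tracks the label, features 1-4 are noise.
random.seed(0)
y = [0, 1] * 8
X = [[2.0 * yi + random.gauss(0, 0.2)] + [random.gauss(0, 1) for _ in range(4)]
     for yi in y]

w, b = train_logistic(X, y)
top = select_top_k(saliency_scores(w, b, X), 2)
print(top)  # the informative feature (index 0) should rank first
```

In the actual paper this selection step reportedly cuts dimensionality by 98.6% (from 90,579 features) before federated training; the permuted-label negative control mentioned in the abstract would amount to rerunning the same pipeline with `random.shuffle(y)` and verifying that AUROC collapses to chance or below.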

Authors (1)
