
S-MDMA: Sensitivity-Aware MDMA for Satellite SemCom

Updated 1 February 2026
  • Sensitivity-Aware MDMA is a framework that integrates sensitivity-based dimensionality reduction with orthogonal embedding to preserve key semantic features.
  • It leverages fixed, orthogonal carrier vectors to eliminate multi-user interference, ensuring robust performance under adverse channel conditions.
  • The end-to-end architecture employs a geometric mean loss for balanced multi-user reconstruction, achieving up to 20% SSIM gains over random pruning.

Sensitivity-Aware Model Division Multiple Access (S-MDMA) is a framework developed for semantic communication in satellite-ground systems, targeting bandwidth-limited multi-user environments with a rigorous emphasis on semantic fidelity and transmission robustness. Built upon the Model Division Multiple Access (MDMA) architecture and incorporating sensitivity-guided dimensionality reduction, orthogonal subspace embedding, and a multi-user fairness loss, S-MDMA addresses the dual challenges of bandwidth compression and inter-user interference in satellite semantic communications (Cao et al., 25 Jan 2026).

1. Framework Architecture and Processing Pipeline

S-MDMA employs a modular end-to-end architecture that systematically extracts, compresses, separates, and reconstructs semantic features for transmission between a Low-Earth-Orbit (LEO) satellite and multiple ground terminals. The pipeline consists of the following sequence:

  1. Semantic Extraction Module: Each source image $s_i \in \mathbb{R}^{H \times W \times C}$ is encoded independently with a Swin-Transformer-based encoder:

$x_i = f_{\mathrm{se}}^{(i)}(s_i) \in \mathbb{R}^d, \quad i = 1, 2$

  2. Semantic Merging & Sorting Module:
    • Shared vs. Differential Decomposition: The shared semantic feature is assigned as $F_s = x_1$, while the difference feature is thresholded via

    $F_d[i] = \begin{cases} (x_2 - x_1)_i, & |(x_2 - x_1)_i| > \tau \\ 0, & \text{otherwise} \end{cases}$

    • Sensitivity-Aware Sorting and Pruning: Sensitivity scores $\gamma_i$ are calculated for each dimension (see §2); only the top $K = \lfloor r d \rfloor$ dimensions are retained under bandwidth ratio $r$.
  3. Orthogonal Embedding Module:
    • Fixed orthonormal 'carrier' vectors $u_1, u_2 \in \mathbb{R}^p$ are designated, satisfying $u_1^\top u_2 = 0$.
    • Semantic vectors are embedded using Kronecker products:

    $F_{s\text{-emb}} = F_{s\text{-sort}} \otimes u_1, \quad F_{d\text{-emb}} = F_{d\text{-sort}} \otimes u_2$

    • The resulting embeddings are strictly orthogonal ($F_{s\text{-emb}}^\top F_{d\text{-emb}} = 0$) and are superposed for transmission:

    $F_{\mathrm{mix}} = F_{s\text{-emb}} + F_{d\text{-emb}}$

  4. Transmission Module:
    • The combined semantic representation is encoded for the physical channel: $y = f_{\mathrm{ce}}(F_{\mathrm{mix}})$.
    • Transmission occurs over a Shadowed-Rician fading channel: $y_i = h_i y + n_i, \; n_i \sim \mathcal{N}(0, \sigma_i^2)$.
    • At each ground receiver, channel decoding, orthogonal projection, unsorting, and semantic decoding reconstruct $\hat{s}_i$.

This systematic design enables competitive multi-user semantic communications under fundamental physical-layer constraints.
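The shared/differential decomposition in step 2 can be sketched in a few lines of numpy. This is an illustrative toy, not the paper's implementation; the function name `decompose`, the threshold value, and the example vectors are all assumptions.

```python
import numpy as np

def decompose(x1, x2, tau):
    """Split two users' semantic vectors into a shared feature F_s = x_1
    and a thresholded difference feature F_d (illustrative sketch)."""
    f_s = x1.copy()
    diff = x2 - x1
    # Keep only difference entries whose magnitude exceeds the threshold tau
    f_d = np.where(np.abs(diff) > tau, diff, 0.0)
    return f_s, f_d

x1 = np.array([0.2, 1.0, -0.5, 0.8])
x2 = np.array([0.3, 2.0, -0.4, -1.2])
f_s, f_d = decompose(x1, x2, tau=0.5)
# f_s equals x1; f_d keeps only the large differences (indices 1 and 3)
```

Only the entries of $x_2 - x_1$ that carry a meaningful per-user difference survive the threshold, which is what makes the difference feature cheap to transmit alongside the shared feature.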

2. Semantic Sensitivity Sorting and Dimensional Pruning

A key innovation of S-MDMA is the semantic sensitivity sorting algorithm, which enables judicious allocation of limited bandwidth to the most meaningful semantic dimensions:

  • For each element $i$ of the semantic vector $F = (f_1, \ldots, f_d)^\top$, sensitivity is quantified via a perturbation-based metric:

$\gamma_i = \mathcal{L}(s, \hat{s}^{(i)}) - \mathcal{L}(s, \hat{s})$

where $\mathcal{L}$ is the base loss (e.g., MSE) and $\hat{s}^{(i)}$ is the reconstruction obtained from the perturbed code $F^{(i)} = F + \epsilon e_i$.

  • The optimal set of semantic dimensions under a cardinality constraint $B$ is obtained by solving

$\max_{\mathcal{S} \subseteq \{1, \dots, d\}} \sum_{i \in \mathcal{S}} \gamma_i \quad \text{s.t.} \quad |\mathcal{S}| \le B$

This is implemented efficiently by sorting $\{\gamma_i\}$ and retaining the top-$B$ entries.

This sensitivity-aware subset selection preserves maximal semantic information under stringent bandwidth ratios, with empirical gains up to 20% in SSIM over random pruning in low-compression regimes.

3. Orthogonal Embedding and Multi-User Interference Mitigation

S-MDMA eliminates inter-user interference via orthogonal subspace embedding:

  • In the two-user scenario, carrier vectors $u_1, u_2$ are selected such that $u_1^\top u_2 = 0$ and $\|u_1\| = \|u_2\| = 1$.
  • Embedding the compressed semantic features via the Kronecker product produces block-diagonal projection matrices, yielding

$W_s = F_{s\text{-sort}} \otimes u_1, \quad W_d = F_{d\text{-sort}} \otimes u_2, \quad W_s^\top W_d = 0$

  • For $K$-user cases, projection matrices $\{W_k\}_{k=1}^K$ are constructed to satisfy $W_k^\top W_j = 0$ for $k \ne j$.
  • If projection matrices are learned rather than fixed, an orthogonality penalty

$R_{\mathrm{orth}} = \sum_{k \ne j} \| W_k^\top W_j \|_F^2$

can be incorporated to enforce subspace separation.

Strict subspace orthogonality allows each ground terminal to recover its designated semantic features via linear projection, thus nullifying interference even under adverse channel conditions.
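The embed-superpose-project cycle can be verified numerically. A minimal sketch, assuming standard-basis carriers with $p = 2$ and small made-up feature vectors; the variable names are illustrative:

```python
import numpy as np

# Fixed orthonormal carriers u1, u2 in R^p (p = 2 here for brevity)
u1 = np.array([1.0, 0.0])
u2 = np.array([0.0, 1.0])

f_s = np.array([0.5, -1.0, 2.0])   # compressed shared feature (toy values)
f_d = np.array([1.5, 0.0, -0.5])   # compressed difference feature (toy values)

# Kronecker-product embedding into orthogonal subspaces
emb_s = np.kron(f_s, u1)
emb_d = np.kron(f_d, u2)
assert abs(emb_s @ emb_d) < 1e-12  # strict orthogonality of the embeddings

f_mix = emb_s + emb_d              # superposed transmission vector

# Each receiver recovers its feature by projecting onto its own carrier:
# reshape to (d, p) and take the inner product with u_k row by row.
rec_s = f_mix.reshape(-1, 2) @ u1
rec_d = f_mix.reshape(-1, 2) @ u2
```

Because the carriers are orthonormal, each projection recovers its feature exactly from the superposition, with no residual contribution from the other user's subspace.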

4. Multi-User Reconstruction Loss and Optimization

To foster balanced reconstruction quality across users and suppress severe quality disparities, S-MDMA introduces a geometric mean–based joint reconstruction loss:

  • The per-user loss is

$L_k = \mathbb{E}_{s \sim \mathcal{D}}\bigl[\ell(z, \hat{z}_k)\bigr]$

where $\ell$ may be MSE, $z$ is the ground-truth semantic feature, and $\hat{z}_k$ is the reconstruction for user $k$.

  • For two users, the global objective is

$\mathcal{L}_{\mathrm{all}} = \sqrt{L_1 L_2}$

This loss is dominated by the worse-performing user and is scale-invariant, which ensures fairness and prevents overfitting to a single user's channel conditions.

  • The $K$-user extension generalizes to

$\mathcal{L}_{\mathrm{all}} = (L_1 L_2 \cdots L_K)^{1/K}$

  • With learnable projection matrices, a sum of per-user losses plus an orthogonality regularizer is permissible:

$\mathcal{L} = \sum_{k=1}^K L_k + \lambda \sum_{k \ne j} \|W_k^\top W_j\|_F^2$

where $\lambda$ modulates the trade-off between reconstruction accuracy and strict orthogonality.

A plausible implication is that this geometric mean loss acts as a max-min fairness criterion in practice, strongly disincentivizing models that would sacrifice weak-user fidelity for aggregate performance.
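The geometric-mean objective is a one-liner in practice. A minimal sketch, computed in log space for numerical stability; the epsilon guard and the example loss values are assumptions, not from the paper:

```python
import numpy as np

def geometric_mean_loss(per_user_losses, eps=1e-12):
    """K-user geometric-mean objective L_all = (L_1 * ... * L_K)^(1/K).

    Sketch only: log-space computation avoids overflow/underflow for
    large K, and eps guards against exactly-zero per-user losses.
    """
    L = np.asarray(per_user_losses, dtype=float) + eps
    return float(np.exp(np.mean(np.log(L))))

# Equal per-user losses: the objective equals the common value.
balanced = geometric_mean_loss([0.2, 0.2])    # = 0.2

# A strong imbalance is pulled toward the smaller loss, unlike the
# arithmetic mean of the same pair (which would be 0.505).
skewed = geometric_mean_loss([0.01, 1.0])     # = sqrt(0.01) = 0.1
```

Scaling every per-user loss by a common constant scales the objective by that same constant, which is the scale-invariance property noted above: no single user's loss magnitude can dominate training merely by being measured on a different scale.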

5. Empirical Evaluation and Comparative Performance

Comprehensive experiments validate S-MDMA's gains over prior semantic communications schemes under a representative set of satellite-ground scenarios:

| Evaluation Aspect | S-MDMA Results | Baselines |
|---|---|---|
| PSNR at SNR = −10 dB | >28 dB (maintained across users) | 2–5 dB lower |
| SSIM at all SNRs | >0.95 | <0.85 (low SNR) |
| Cross-dataset (DLRSD→NWPU) | ~2 dB PSNR, ~0.1 SSIM over baselines | Lower generalization |
| $r = 0.5$ compression | ≈35 dB PSNR | MDMA: ~28 dB |
| Sensitivity sorting gain | Up to 20% SSIM over random pruning (low $r$) | – |
  • Datasets: DLRSD (2,100 images, 256×256, 17 classes); NWPU VHR-10 (800 images).
  • Channel Model: Shadowed-Rician fading with Nakagami parameter $m = 19.4$.
  • Bandwidth Regimes: Bandwidth ratio $r \in (0, 1]$, with ablations at multiple settings.
  • Baselines: Deep JSCC, WITT, MDMA.

Visual inspection under severe bandwidth compression ($r = 0.3$) and low SNR (−5 dB) confirms that S-MDMA uniquely reconstructs key spatial structures and semantic textures lost by the comparators. Ablations of the orthogonal embedding further show that, without this mechanism, color distortions and blurred reconstructions emerge, confirming the necessity of strict subspace separation.

6. Synthesis and Impact

S-MDMA advances satellite-ground semantic communication by combining sensitivity-aware dimension selection, provably orthogonal embedding, and balanced multi-user optimization. The design provides:

  1. Retention of the most semantically critical information per bit budget.
  2. Suppression of multi-user interference via orthogonal encoding.
  3. Robust performance across wide SNR ranges, different compression ratios, and out-of-distribution datasets.

These strengths position S-MDMA as a state-of-the-art approach for satellite-ground semantic communication tasks in which reliability, efficiency, and fairness under adverse physical-layer conditions are paramount (Cao et al., 25 Jan 2026).
