
Liquid-Graph Time-Constant Network (LGTC)

Updated 4 February 2026
  • LGTC is a continuous-time graph neural network that adaptively modulates each agent’s dynamics using input-driven, liquid time constants.
  • Its closed-form update approximates stiff ODE dynamics in a single communication round, ensuring computational efficiency and stability.
  • LGTC achieves communication efficiency by selectively broadcasting only critical hidden features, leading to improved performance in large-scale flocking control.

The Liquid-Graph Time-Constant (LGTC) Network is a continuous-time graph neural network (GNN) architecture designed for distributed control of multi-agent systems. Extending the single-agent Liquid Time-Constant (LTC) network to communication graphs, LGTC introduces agent-specific, input-driven time constants via graph-filtered gating, and provides both ODE-based and closed-form state evolution. Its core innovations include a stable, contractive update rule, communication-efficient message-passing, and empirical validation in large-scale flocking control tasks.

1. Mathematical Formulation of the LGTC Layer

Given an undirected graph $\mathcal{G}=(\mathcal{V},\mathcal{E})$ of $N$ agents, each agent maintains a hidden state $x_i(t)\in\mathbb{R}^F$ and receives a local input $u_i(t)\in\mathbb{R}^G$. The support matrix $S\in\mathbb{R}^{N\times N}$ encodes the (possibly time-varying) communication topology.

The LGTC layer is governed by the following continuous-time ODE:

$\begin{cases} f(x,u,S) = \rho(\hat{A}_S(x) + b_x) + \rho(\hat{B}_S(u) + b_u), \\[6pt] \dot{x} = -\left(b + f(x,u,S)\right)\circ x - \sum_{k=1}^K S^k x A_k + f(x,u,S)\circ \sigma_c(B_S(u)), \end{cases} \tag{1} \label{LGTC-ODE}$

where:

  • $\rho(\cdot)$ is the pointwise ReLU,
  • $\sigma_c(\cdot)$ is the pointwise $\tanh$,
  • "$\circ$" denotes the Hadamard product,
  • $\hat{A}_S(x) = \sum_{k=0}^K S^k x \hat{A}_k$, $\hat{B}_S(u) = \sum_{k=0}^K S^k u \hat{B}_k$, and $B_S(u) = \sum_{k=0}^K S^k u B_k$ are graph filters of length $K$ with learned weight matrices,
  • $b, b_x, b_u\in\mathbb{R}^{N\times F}$ are positive agent-wise bias maps.

Every hidden component $x_i^j$ is modulated by an adaptive "liquid time constant" $b_i^j + f_i^j(t)$ determined by graph-aggregated states and inputs.
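
To make the dynamics concrete, here is a minimal NumPy sketch of the right-hand side of Eq. (1); the function and weight names are illustrative, not the authors' reference implementation:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def graph_filter(S, z, weights):
    """sum_k S^k z W_k over a list of per-tap weight matrices W_k."""
    out = np.zeros((z.shape[0], weights[0].shape[1]))
    Skz = z
    for k, Wk in enumerate(weights):
        if k > 0:
            Skz = S @ Skz            # one extra communication hop per tap
        out += Skz @ Wk
    return out

def lgtc_rhs(x, u, S, A_hat, B_hat, B, A, b, b_x, b_u):
    """Right-hand side of the LGTC ODE, Eq. (1)."""
    f = relu(graph_filter(S, x, A_hat) + b_x) + relu(graph_filter(S, u, B_hat) + b_u)
    coupling = graph_filter(S, x, [np.zeros_like(A[0])] + list(A))  # k = 1..K only
    return -(b + f) * x - coupling + f * np.tanh(graph_filter(S, u, B))
```

Note that the liquid gate `f` multiplies both the state decay and the input injection, which is what makes the effective time constant input-driven.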

2. Closed-Form LGTC Update

Integrating the stiff ODE over $[0,T]$ at each step is computationally expensive. A closed-form, single-step approximation that preserves the contraction rate is constructed as follows:

$\begin{aligned} f_\sigma &= \rho(\hat{B}_S(u)+b_u) + \rho(\hat{A}_S(x)+b_x), \\ f_x &= \rho(\hat{A}_S(x)+b_x), \\ f &= \bigl(-D_x f_x \circ \hat{A}_S(x) + \textstyle\sum_{k=1}^K S^k x A_k\bigr) / (x+\epsilon), \\ x^+ &= \bigl(x \circ \sigma\bigl(-[b+f_x+f]\,T+\pi\bigr) - \sigma_c(B_S(u))\bigr) \circ \sigma(2\,f_\sigma) + \sigma_c(B_S(u)), \end{aligned} \tag{2} \label{LGTC-CF}$

where:

  • $\sigma$ denotes the logistic sigmoid,
  • $D_x f_x$ is the derivative of the ReLU term $\rho(\hat{A}_S(x)+b_x)$,
  • $\epsilon\ll 1$ prevents division by zero in the elementwise quotient,
  • $\pi\approx 1$ is a stabilizing offset constant.

This update yields $x^+\approx x(T)$ in a single communication round, with a contraction rate matching that of the ODE.

3. Stability via Contraction Analysis

The stability of the LGTC system is grounded in contraction theory. The induced $\infty$-log-norm is defined as

$\mu_\infty(M) = \max_i\Bigl\{M_{ii} + \sum_{j\ne i} |M_{ij}|\Bigr\}.$

A vector field $F(x,u,S)$ is $c$-contractive if

$\mu_\infty(D_x F(x,u,S)) < -c,\quad c>0. \tag{3} \label{contract}$
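
As a quick numerical illustration (not taken from the paper), the $\infty$-log-norm and the contractivity test can be written as:

```python
import numpy as np

def mu_inf(M):
    """Induced infinity-log-norm: max_i ( M_ii + sum_{j != i} |M_ij| )."""
    M = np.asarray(M, dtype=float)
    row_off = np.abs(M).sum(axis=1) - np.abs(np.diagonal(M))
    return float(np.max(np.diagonal(M) + row_off))

# A Jacobian with a strongly negative diagonal is contractive:
J = np.array([[-3.0, 1.0],
              [0.5, -2.5]])
print(mu_inf(J))          # -2.0, so the field is c-contractive for any c < 2
```

A negative $\infty$-log-norm of the Jacobian everywhere implies exponential convergence of any two trajectories toward each other.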

Theorem (δISS of LGTC–ODE):

Under a bounded $S$-norm and $b\ge 0$, if

$c = \|b\|_\infty + \|A_{1:K}\|_\infty\,\|\tilde{S}_{1:K}\|_\infty + \|b_x\|_\infty - \|\hat{A}_{0:K}\|_\infty\,\|\bar{S}_{0:K}\|_\infty > 0,$

then, for all solutions $x_1(t)$, $x_2(t)$ with the same $S$ but different inputs or initial states,

$\|x_1(t)-x_2(t)\|_\infty \le e^{-ct}\|x_1(0)-x_2(0)\|_\infty + \frac{\ell_u}{c}(1-e^{-ct})\sup_{\tau\le t}\|u_1(\tau)-u_2(\tau)\|_\infty + \frac{\ell_S}{c}(1-e^{-ct})\sup_{\tau\le t}\|S_1(\tau)-S_2(\tau)\|_\infty. \tag{4}$

Thus, the LGTC dynamics are incrementally input-to-state stable (δISS) under suitable norm bounds on graph filter weights and biases.

A supporting lemma states: if $b>0$ and all $A_k^T\otimes S^k\ge 0$, then $\|x(t)\|_\infty\le 1$ whenever $\|x(0)\|_\infty\le 1$.
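
The theorem's condition can be checked numerically from the learned weights. The sketch below uses placeholder matrices and support bounds (the stacked $A_{1:K}$, $\hat{A}_{0:K}$, $\tilde{S}$, $\bar{S}$ are stand-ins, not computed as in the paper):

```python
import numpy as np

def inf_norm(M):
    return float(np.max(np.abs(M).sum(axis=1)))   # induced infinity-norm

# Placeholder weight stacks and support bounds, for illustration only:
b_norm, bx_norm = 0.5, 0.3                        # ||b||, ||b_x|| bounds
A_norm  = inf_norm(np.eye(4) * 0.1)               # ||A_{1:K}||, placeholder
Ah_norm = inf_norm(np.eye(4) * 0.2)               # ||A_hat_{0:K}||, placeholder
S_tilde, S_bar = 1.0, 1.0                         # support-norm bounds

c = b_norm + A_norm * S_tilde + bx_norm - Ah_norm * S_bar
print(c > 0)    # True -> the delta-ISS bound (4) holds with rate c
```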

4. Communication-Efficient Message Passing

LGTC achieves communication efficiency through selective message broadcasting: each agent communicates only a subset $F'\ll F$ of its hidden features, and only $G'\ll G$ input channels if required. The graph filter $\sum_{k=0}^K S^k x H_k$ is computed with $K$ successive 1-hop exchanges; lowering $F'$ reduces the per-edge payload. The adaptive time-constant term $f_i(t)$ is locally computable without exchanging additional gating variables, unlike in standard gated GNNs (e.g., GGNN), where all hidden and gate vectors are broadcast. This design constrains per-step communication to $\mathcal{O}(|\mathcal{E}|\,F')$.
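
A sketch of the $K$-hop filter computed with truncated broadcasts (the names `H` and `Fp` are illustrative; each multiplication by `S` stands for one 1-hop exchange of $F'$ features):

```python
import numpy as np

def khop_filter_truncated(S, x, H, Fp):
    """sum_{k=0..K} S^k x' H_k, where x' keeps only the first Fp features.

    Per-edge payload is Fp floats per exchange instead of the full F.
    """
    z = x[:, :Fp]                  # broadcast only F' of the F hidden features
    out = z @ H[0]
    for k in range(1, len(H)):
        z = S @ z                  # k-th 1-hop neighbour exchange
        out += z @ H[k]
    return out

# N = 5 agents, F = 8 hidden dims, F' = 2 broadcast dims, K = 2:
rng = np.random.default_rng(1)
S = (rng.random((5, 5)) < 0.4).astype(float)
x = rng.standard_normal((5, 8))
H = [rng.standard_normal((2, 8)) for _ in range(3)]
y = khop_filter_truncated(S, x, H, Fp=2)   # shape (5, 8)
```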

5. Empirical Evaluation in Flocking Control

The LGTC network is evaluated in decentralized flocking, modeling agents as double integrators in $\mathbb{R}^2$, updated discretely as $r(t+1)=r(t)+T v(t)$, $v(t+1)=v(t)+T u(t)$ with $T=0.05$ s. Communication links are determined by proximity ($\|r_i-r_j\|_2\le R$), with $R$ and team size $N$ varied across experiments. The centralized expert implements a leader-follower control policy based on global velocity averaging and collision avoidance.
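
The simulation setup can be sketched as follows (a minimal version; the expert policy and the learned controls are omitted):

```python
import numpy as np

T = 0.05  # discretization step [s]

def step_dynamics(r, v, u):
    """Discrete double-integrator update in R^2: positions r, velocities v."""
    return r + T * v, v + T * u

def proximity_support(r, R):
    """Support matrix S: S_ij = 1 iff ||r_i - r_j||_2 <= R and i != j."""
    d = np.linalg.norm(r[:, None, :] - r[None, :, :], axis=-1)
    S = (d <= R).astype(float)
    np.fill_diagonal(S, 0.0)
    return S

rng = np.random.default_rng(2)
r = rng.random((10, 2)) * 5.0            # 10 agents in a 5 m x 5 m region
v = np.zeros((10, 2))
r, v = step_dynamics(r, v, u=np.zeros((10, 2)))
S = proximity_support(r, R=4.0)          # communication graph at R = 4 m
```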

Each agent receives a 10-dimensional input vector, processes it through a single LGTC (or alternative) layer (hidden size $F=50$, filter length $K=2$, $F'=4$ communicated dimensions), and outputs control via a readout MLP. All models are regularized for contraction using a Softplus penalty.

Training follows the DAGGER paradigm over 60 expert trajectories, using the Adam optimizer and a mean-squared-error loss between predicted and expert controls.
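
Schematically, the DAGGER loop looks like the following (a sketch; `rollout`, `expert`, and `model.fit` are placeholders for the simulator, the expert controller, and an MSE regression step):

```python
def dagger_train(model, expert, rollout, n_iters=60, mix_decay=0.9):
    """Sketch of DAGGER: aggregate expert-relabelled states, refit, repeat."""
    dataset = []
    beta = 1.0                                       # expert mixing weight
    for _ in range(n_iters):
        states = rollout(model, expert, beta)        # states visited by the mixture
        dataset += [(s, expert(s)) for s in states]  # relabel with expert controls
        model.fit(dataset)                           # minimize MSE to expert actions
        beta *= mix_decay                            # rely more on the learner
    return model
```

Aggregating states visited by the learner (not just the expert) is what lets the policy recover from its own distribution shift.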

Results:

  • Scalability: For $R=4$ m, LGTC and its closed-form variant (CfGC) reduce flocking error by ~30–40% and leader error by ~10% compared to GGNN, while GraphODE performs worst. LGTC and CfGC performance is near-identical, confirming closed-form fidelity.
  • Communication-range robustness: All methods degrade at $R=2$ m; flocking improves at $R=8$ m but leader tracking worsens. LGTC/CfGC maintain the closest adherence to the expert policy under range variation.
  • Communication efficiency: LGTC/CfGC outperform or match baselines with dramatically fewer exchanged features.
| Model | Mean Flocking Error | Leader Tracking Error | Comm. Dims per Edge |
|---|---|---|---|
| LGTC/CfGC | Lowest | Lowest | $F'=4$ |
| GGNN | Higher | Higher | $F\gg 4$ |
| GraphODE | Highest | Highest | $F\gg 4$ |

6. Implementation Details and Hyperparameters

One step of the discrete (closed-form) LGTC update is:

Ax_hat  = sum_{k=0..K} S^k x A_hat_k
Bu_hat  = sum_{k=0..K} S^k u B_hat_k
Bu      = sum_{k=0..K} S^k u B_k
Ax      = sum_{k=1..K} S^k x A_k
f_sigma = ReLU(Bu_hat + b_u) + ReLU(Ax_hat + b_x)
f_x     = ReLU(Ax_hat + b_x)
Df      = 1[Ax_hat + b_x > 0]                 # derivative of the ReLU term
f       = (-Df ∘ Ax_hat + Ax) / (x + ε)      # elementwise quotient
tau     = -(b + f_x + f) * Δt + π
x_plus  = (x ∘ sigmoid(tau) - tanh(Bu)) ∘ sigmoid(2 * f_sigma) + tanh(Bu)
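
A self-contained NumPy version of this update (a sketch; the weight names are illustrative and this is not the authors' reference code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gfilter(S, z, W, k0=0):
    """sum_{k=k0..K} S^k z W_k over a list of per-tap weight matrices W."""
    out, Skz = None, z
    for k, Wk in enumerate(W):
        if k > 0:
            Skz = S @ Skz
        if k >= k0:
            out = Skz @ Wk if out is None else out + Skz @ Wk
    return out

def cf_lgtc_step(x, u, S, A_hat, B_hat, B, A, b, b_x, b_u,
                 dt=0.05, pi=1.0, eps=1e-6):
    """One closed-form LGTC update, Eq. (2)."""
    Ax_hat = gfilter(S, x, A_hat)
    Bu_hat = gfilter(S, u, B_hat)
    Bu     = gfilter(S, u, B)
    Ax     = gfilter(S, x, [np.zeros_like(A[0])] + list(A), k0=1)
    f_sig  = np.maximum(Bu_hat + b_u, 0.0) + np.maximum(Ax_hat + b_x, 0.0)
    f_x    = np.maximum(Ax_hat + b_x, 0.0)
    Df     = (Ax_hat + b_x > 0).astype(x.dtype)     # ReLU derivative
    f      = (-Df * Ax_hat + Ax) / (x + eps)        # elementwise quotient
    s      = np.tanh(Bu)
    return (x * sigmoid(-(b + f_x + f) * dt + pi) - s) * sigmoid(2.0 * f_sig) + s
```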

Key hyperparameters include the hidden size $F$, communicated dimensions $F'$, filter length $K$, step $\Delta t$, bias initializations $b$, $b_x$, $b_u$, and the contraction margin in the Softplus regularization.
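
The exact form of the regularizer is not spelled out here; one plausible form (an assumption, not the paper's definition) penalizes any shortfall of the estimated contraction rate below a margin via a Softplus:

```python
import numpy as np

def softplus(z):
    return np.logaddexp(0.0, z)     # numerically stable log(1 + e^z)

def contraction_penalty(c_estimate, margin=0.1):
    """Softplus penalty, near zero once c_estimate exceeds the margin (sketch)."""
    return softplus(margin - c_estimate)

print(contraction_penalty(1.0) < contraction_penalty(-1.0))  # True
```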

7. Significance and Context

LGTC advances multi-agent control by enabling each agent’s state evolution to depend adaptively on both local and graph-filtered signals, via liquid time constants. The closed-form update achieves the expressivity of continuous-time dynamics with the computational tractability and communication frugality needed for large-scale distributed deployment. LGTC consistently outperforms discrete models in challenging flocking control tasks, with strong theoretical stability guarantees grounded in contraction analysis. These properties position LGTC as a theoretically principled and practically scalable approach to distributed learning and control on graphs (Marino et al., 2024).
