Liquid-Graph Time-Constant Network (LGTC)
- LGTC is a continuous-time graph neural network that adaptively modulates each agent’s dynamics using input-driven, liquid time constants.
- Its closed-form update approximates stiff ODE dynamics in a single communication round, ensuring computational efficiency and stability.
- LGTC achieves communication efficiency by selectively broadcasting only critical hidden features, leading to improved performance in large-scale flocking control.
The Liquid-Graph Time-Constant (LGTC) Network is a continuous-time graph neural network (GNN) architecture designed for distributed control of multi-agent systems. Extending the single-agent Liquid Time-Constant (LTC) network to communication graphs, LGTC introduces agent-specific, input-driven time constants via graph-filtered gating, and provides both ODE-based and closed-form state evolution. Its core innovations include a stable, contractive update rule, communication-efficient message-passing, and empirical validation in large-scale flocking control tasks.
1. Mathematical Formulation of the LGTC Layer
Given an undirected graph of agents, each agent maintains a hidden state $x_i$ and receives a local input $u_i$; stacked over agents, these form $x$ and $u$. The support matrix $S$ encodes the (possibly time-varying) communication topology.
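As an illustrative sketch (the radius, the row normalization, and zeroing the diagonal are assumptions here, not choices fixed by the text), a proximity-based support matrix can be built as:

```python
import numpy as np

def support_matrix(positions: np.ndarray, radius: float) -> np.ndarray:
    """Degree-normalized adjacency of a proximity graph (one common choice of S)."""
    n = len(positions)
    d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    A = ((d < radius) & ~np.eye(n, dtype=bool)).astype(float)  # undirected links
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)        # avoid divide-by-zero
    return A / deg                                             # row-normalized support

pos = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 0.0]])
S = support_matrix(pos, radius=2.0)   # agents 0 and 1 linked; agent 2 isolated
```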
The LGTC layer is governed by the following continuous-time ODE:
$\begin{cases} f(x,u,S) = \rho(\hat{A}_S(x) + b_x) + \rho(\hat{B}_S(u) + b_u), \\[6pt] \dot{x} = -\left(b + f(x,u,S)\right)\circ x - \sum_{k=1}^K S^k x A_k + f(x,u,S)\circ \sigma_c(B_S(u)), \end{cases} \tag{1} \label{LGTC-ODE}$
where:
- $\rho$ is the pointwise ReLU,
- $\sigma_c$ is the pointwise $\tanh$,
- "$\circ$" denotes the Hadamard product,
- $\hat{A}_S$, $\hat{B}_S$, and $B_S$ are graph filters of length $K$ with learned weight matrices,
- $b$, $b_x$, and $b_u$ are positive agent-wise bias maps.
Every hidden component is modulated by an adaptive "liquid time constant" $b + f(x,u,S)$ determined by graph-aggregated states and inputs.
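Assuming each graph filter is a polynomial $\sum_k S^k(\cdot)W_k$ in the support matrix, the right-hand side of (1) can be sketched in NumPy; all shapes, weight scales, and the small toy topology below are illustrative:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def graph_filter(S, X, W):
    """Polynomial graph filter sum_k S^k X W_k (W holds K+1 weight taps)."""
    return sum(np.linalg.matrix_power(S, k) @ X @ Wk for k, Wk in enumerate(W))

def lgtc_rhs(x, u, S, A_hat, B_hat, B, A, b, b_x, b_u):
    """Right-hand side of the LGTC ODE (1): (b + f) acts as a liquid time constant."""
    f = relu(graph_filter(S, x, A_hat) + b_x) + relu(graph_filter(S, u, B_hat) + b_u)
    diffusion = sum(np.linalg.matrix_power(S, k) @ x @ Ak
                    for k, Ak in enumerate(A, start=1))        # the k = 1..K sum
    return -(b + f) * x - diffusion + f * np.tanh(graph_filter(S, u, B))

rng = np.random.default_rng(0)
n, F, Fu, K = 3, 4, 2, 2                                       # toy sizes
S = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]]) / 2.0
x, u = rng.standard_normal((n, F)), rng.standard_normal((n, Fu))
A_hat = [0.1 * rng.standard_normal((F, F)) for _ in range(K + 1)]
B_hat = [0.1 * rng.standard_normal((Fu, F)) for _ in range(K + 1)]
Bw = [0.1 * rng.standard_normal((Fu, F)) for _ in range(K + 1)]
A = [0.1 * rng.standard_normal((F, F)) for _ in range(K)]
dx = lgtc_rhs(x, u, S, A_hat, B_hat, Bw, A, b=0.5, b_x=0.1, b_u=0.1)
```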
2. Closed-Form LGTC Update
Integrating the stiff ODE \eqref{LGTC-ODE} over each sampling interval $T$ is computationally expensive. A closed-form, single-step approximation that preserves the contraction rate is constructed as follows: $\begin{aligned} f_\sigma &= \rho(\hat{B}_S(u)+b_u) + \rho(\hat{A}_S(x)+b_x), \\ f_x &= \rho(\hat{A}_S(x)+b_x), \\ f &= -\frac{D_x f_x \circ x}{x+\epsilon}\circ\hat{A}_S(x) + \sum_{k=1}^K S^k x A_k, \\ x^+ &= \left(x \circ \sigma\left(-[b+f_x+f]\,T+\pi\right) - \sigma_c(B_S(u))\right) \circ \sigma(2\,f_\sigma) + \sigma_c(B_S(u)). \end{aligned} \tag{2} \label{LGTC-CF}$ where:
- $\sigma$ denotes the logistic sigmoid,
- $D_x f_x$ is the derivative of the ReLU term,
- $\epsilon$ prevents division by zero,
- $\pi$ is a small stabilizing constant.
This update yields $x^+$ in a single communication round, with a contraction rate matching that of the ODE.
3. Stability via Contraction Analysis
The stability of the LGTC system is grounded in contraction theory. For a matrix $A = [a_{ij}]$, the induced $\ell_\infty$ log-norm is defined as $\mu_\infty(A) = \max_i \big( a_{ii} + \sum_{j \neq i} |a_{ij}| \big)$.
A vector field $F$ is $c$-contractive if
$\mu_\infty(D_x F(x,u,S)) < -c,\quad c>0. \tag{3} \label{contract}$
Theorem (δISS of LGTC–ODE):
Under bounded $\ell_\infty$ norms on the graph-filter weights and inputs, if the contraction condition \eqref{contract} holds along trajectories, then any two solutions $x(t)$ and $x'(t)$ evolving under the same support matrix $S$ but with different inputs or initial states satisfy an incremental bound of the form
$\|x(t)-x'(t)\|_\infty \le e^{-ct}\,\|x(0)-x'(0)\|_\infty + \frac{L}{c}\,\sup_{s\le t}\|u(s)-u'(s)\|_\infty$
for some gain $L>0$.
Thus, the LGTC dynamics are incrementally input-to-state stable (δISS) under suitable norm bounds on graph filter weights and biases.
A supporting lemma establishes boundedness: under the same weight and bias conditions, the hidden state remains within a bounded $\ell_\infty$ ball whenever the initial state and inputs lie in corresponding bounded sets.
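The $\ell_\infty$ log-norm in condition (3) is straightforward to evaluate numerically; a minimal sketch follows (the Jacobian below is a stand-in for illustration, not the actual LGTC Jacobian):

```python
import numpy as np

def mu_inf(J: np.ndarray) -> float:
    """Induced l-infinity log-norm: max over rows of a_ii + sum_{j != i} |a_ij|."""
    off = np.abs(J) - np.diag(np.abs(np.diag(J)))   # off-diagonal absolute values
    return float(np.max(np.diag(J) + off.sum(axis=1)))

# A Jacobian with strongly negative diagonal dominance satisfies (3):
J = np.array([[-3.0, 1.0, 0.5],
              [0.5, -2.0, 0.5],
              [1.0, 0.0, -4.0]])
```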
4. Communication-Efficient Message Passing
LGTC achieves communication efficiency through selective message broadcasting: each agent communicates only a subset of $n_c$ of its hidden features, and input channels only when required. The length-$K$ graph filter is computed with $K$ successive 1-hop exchanges; lowering $K$ reduces the per-edge payload. The adaptive time-constant term is locally computable without exchanging additional gating variables, unlike in standard gated GNNs (e.g., GGNN), where all hidden and gate vectors are broadcast. This design constrains per-step communication to roughly $K\,n_c$ values per edge.
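The successive 1-hop exchanges behind a length-$K$ filter can be sketched as follows; the function name and the convention of broadcasting the first $n_c$ features are illustrative assumptions:

```python
import numpy as np

def filtered_messages(S, x, n_c, K):
    """Taps [x_c, S x_c, ..., S^K x_c] via K one-hop exchanges of n_c features."""
    x_c = x[:, :n_c]                 # only a subset of hidden features is broadcast
    hops = [x_c]
    for _ in range(K):
        hops.append(S @ hops[-1])    # one 1-hop neighbor exchange per filter tap
    return hops                      # each tap then meets its own weight matrix

S = np.array([[0.0, 1.0], [1.0, 0.0]])   # two agents exchanging messages
x = np.arange(10.0).reshape(2, 5)
hops = filtered_messages(S, x, n_c=3, K=2)
```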
5. Empirical Evaluation in Flocking Control
The LGTC network is evaluated in decentralized flocking, modeling agents as planar double-integrators updated at a fixed discrete sampling step. Communication links are determined by proximity within a radius $R$, with $R$ and the team size varied across experiments. The centralized expert implements a leader-follower control policy based on global velocity averaging and collision avoidance.
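The double-integrator agent model can be sketched with the standard discrete update (the step size and array shapes below are illustrative, not the paper's values):

```python
import numpy as np

def double_integrator_step(p, v, a, dt):
    """One discrete step of planar double-integrator dynamics."""
    p_next = p + v * dt + 0.5 * a * dt**2   # position update
    v_next = v + a * dt                     # velocity update under control a
    return p_next, v_next

p = np.zeros((4, 2))            # 4 agents in the plane
v = np.ones((4, 2))
a = np.full((4, 2), 2.0)        # constant control, for illustration
p, v = double_integrator_step(p, v, a, dt=0.1)
```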
Each agent receives a $10$-dimensional input vector, processes it through a single LGTC (or baseline) layer (hidden size $F$, filter length $K$, $n_c$ communicated dimensions), and outputs control via a readout MLP. All models are regularized for contraction using a Softplus penalty.
Training follows the DAgger paradigm over 60 expert trajectories, using the Adam optimizer and a mean-squared-error loss between predicted and expert controls.
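Schematically, the DAgger loop rolls out the learner while the expert relabels the visited states; `env`, `expert`, and `policy` below are placeholder interfaces, not the paper's implementation:

```python
def dagger_train(policy, expert, env, optimizer, n_iters, rollout_len):
    """DAgger: roll out the learner, relabel visited states with expert controls."""
    dataset = []
    for _ in range(n_iters):
        state = env.reset()
        for _ in range(rollout_len):
            dataset.append((state, expert.control(state)))  # expert relabels state
            state = env.step(policy.control(state))         # learner drives rollout
        policy.fit(dataset, optimizer)                      # MSE on expert controls
    return policy
```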
Results:
- Scalability: At the nominal communication radius, LGTC and its closed-form variant (CfGC) reduce flocking error by 30–40% and leader error by 10% compared to GGNN, while GraphODE performs worst. LGTC and CfGC performance is near-identical, confirming the fidelity of the closed-form update.
- Communication-range robustness: All methods degrade at small communication radii; at larger radii, flocking improves but leader tracking worsens. LGTC/CfGC maintain the closest adherence to the expert policy under range variation.
- Communication efficiency: LGTC/CfGC outperform or match baselines with dramatically fewer exchanged features.
| Model | Mean Flocking Error | Leader Tracking Error | Comm. Dims per Edge |
|---|---|---|---|
| LGTC/CfGC | Lowest | Lowest | $n_c$ (subset of hidden state) |
| GGNN | Higher | Higher | All hidden and gate features |
| GraphODE | Highest | Highest | |
6. Implementation Details and Hyperparameters
One step of the discrete (closed-form) LGTC update is:
```
Âx  = sum_k(S^k x Â_k)               # graph-filtered state
B̂u  = sum_k(S^k u B̂_k)               # graph-filtered input (gating path)
BSu = sum_k(S^k u B_k)
Ax  = sum_k(S^k x A_k),  k = 1..K
f_sigma = ReLU(B̂u + b_u) + ReLU(Âx + b_x)
f_x = ReLU(Âx + b_x)
Df  = ReLU'(Âx + b_x)                # derivative of the ReLU term
f   = -(Df ∘ x)/(x + ε) * Âx + Ax
tau  = -(b + f_x + f) * Δt + π
stau = sigmoid(tau)
s2   = sigmoid(2 * f_sigma)
s    = tanh(BSu)
x_plus = (x ∘ stau - s) ∘ s2 + s
```
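The listing above translates directly into a runnable NumPy sketch, under the assumption that each graph filter is a polynomial in $S$ with per-tap weight matrices; all shapes, scales, and the toy topology are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    return np.maximum(z, 0.0)

def gfilt(S, X, W):
    """Polynomial graph filter sum_k S^k X W_k."""
    return sum(np.linalg.matrix_power(S, k) @ X @ Wk for k, Wk in enumerate(W))

def lgtc_closed_form_step(x, u, S, A_hat, B_hat, B, A, b, b_x, b_u, dt, eps=1e-6):
    """One closed-form LGTC step, following (2) / the listing above."""
    Ax_hat = gfilt(S, x, A_hat)
    f_sigma = relu(gfilt(S, u, B_hat) + b_u) + relu(Ax_hat + b_x)
    f_x = relu(Ax_hat + b_x)
    Df = (Ax_hat + b_x > 0).astype(float)                 # ReLU derivative
    Ax = sum(np.linalg.matrix_power(S, k) @ x @ Ak for k, Ak in enumerate(A, 1))
    f = -(Df * x) / (x + eps) * Ax_hat + Ax
    s = np.tanh(gfilt(S, u, B))                           # the sigma_c term
    gate = sigmoid(-(b + f_x + f) * dt + np.pi)           # pi offset as in (2)
    return (x * gate - s) * sigmoid(2.0 * f_sigma) + s

rng = np.random.default_rng(1)
n, F, Fu, K = 3, 4, 2, 2
S = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]]) / 2.0
x, u = rng.standard_normal((n, F)), rng.standard_normal((n, Fu))
A_hat = [0.1 * rng.standard_normal((F, F)) for _ in range(K + 1)]
B_hat = [0.1 * rng.standard_normal((Fu, F)) for _ in range(K + 1)]
Bw = [0.1 * rng.standard_normal((Fu, F)) for _ in range(K + 1)]
A = [0.1 * rng.standard_normal((F, F)) for _ in range(K)]
x_plus = lgtc_closed_form_step(x, u, S, A_hat, B_hat, Bw, A, 0.5, 0.1, 0.1, dt=0.1)
```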
Key hyperparameters include the hidden size $F$, communicated dims $n_c$, filter length $K$, step $\Delta t$, bias initializations for $b$, $b_x$, $b_u$, and the contraction margin $c$ in the Softplus regularization.
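The Softplus contraction penalty can be sketched as penalizing any excess of the Jacobian's $\ell_\infty$ log-norm above $-c$; treating the Jacobian as a given matrix is a simplifying assumption:

```python
import numpy as np

def softplus(z):
    return np.log1p(np.exp(z))

def mu_inf(J):
    """Induced l-infinity log-norm of J."""
    off = np.abs(J) - np.diag(np.abs(np.diag(J)))
    return float(np.max(np.diag(J) + off.sum(axis=1)))

def contraction_penalty(J, c):
    """Near zero when mu_inf(J) < -c; grows linearly once (3) is violated."""
    return float(softplus(mu_inf(J) + c))
```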
7. Significance and Context
LGTC advances multi-agent control by enabling each agent’s state evolution to depend adaptively on both local and graph-filtered signals, via liquid time constants. The closed-form update achieves the expressivity of continuous-time dynamics with the computational tractability and communication frugality needed for large-scale distributed deployment. LGTC consistently outperforms discrete models in challenging flocking control tasks, with strong theoretical stability guarantees grounded in contraction analysis. These properties position LGTC as a theoretically principled and practically scalable approach to distributed learning and control on graphs (Marino et al., 2024).