Strengthen RNN bias-learning approximation from point-wise to L1 convergence

Establish L1 convergence (or stronger) for the approximation of finite-time trajectories of continuous dynamical systems by discrete-time recurrent neural networks with fixed random weights and learned biases, extending the current result, which guarantees only point-wise convergence over initial conditions within compact invariant sets.
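
To make the two notions concrete, the statements being compared can be sketched as follows (the flow $\phi_t$, the RNN readout $\hat{x}_\theta$, the compact invariant set $K$, and the horizon $T$ are assumed notation, not taken from the paper):

```latex
% Point-wise over initial conditions (the current guarantee): for each x_0 separately,
\[
  \forall x_0 \in K:\quad
    \sup_{0 \le t \le T}\,\bigl\|\hat{x}_\theta(t; x_0) - \phi_t(x_0)\bigr\| < \varepsilon,
\]
% versus L1 over initial conditions (the target): error integrated over x_0,
\[
  \int_K \sup_{0 \le t \le T}\,\bigl\|\hat{x}_\theta(t; x_0) - \phi_t(x_0)\bigr\|\,dx_0 < \varepsilon.
\]
```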

Background

The RNN result in the paper shows that random-weight RNNs with learned biases can approximate finite-time trajectories of a continuous dynamical system, but the proof provides point-wise convergence over initial conditions rather than convergence in an integral norm (e.g., L1), as in the feedforward case.
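
As a minimal sketch of this setup (not the paper's construction; the tanh nonlinearity, the $1/\sqrt{N}$ weight scaling, and the linear readout are assumptions), a discrete-time RNN whose recurrent and readout weights are fixed at random initialization and whose bias vector is the only trainable parameter:

```python
import numpy as np

rng = np.random.default_rng(0)

N, d = 256, 2                                        # hidden units, observed state dimension
W = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))   # fixed random recurrent weights
C = rng.normal(0.0, 1.0 / np.sqrt(N), size=(d, N))   # fixed random linear readout
b = np.zeros(N)                                      # learned bias: the only trainable parameter

def rnn_trajectory(h0, n_steps):
    """Roll out h_{n+1} = tanh(W h_n + b) and read out x_n = C h_n."""
    h, xs = h0, []
    for _ in range(n_steps):
        h = np.tanh(W @ h + b)
        xs.append(C @ h)
    return np.array(xs)                              # shape (n_steps, d)
```

In the bias-learning setting, training adjusts only b so that the readout trajectory tracks the target system over the finite horizon.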

Demonstrating L1 (or Lp) convergence would align the RNN guarantees with classical universal approximation results and offer stronger performance assurances for bias-learning in dynamical prediction tasks.
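
Operationally, the L1 guarantee bounds the error averaged over initial conditions rather than at each one separately. Continuing the sketch above, a Monte Carlo estimate of that averaged error (here flow and encode are hypothetical placeholders for the true dynamics and for the map from an initial condition to a hidden state, and K is taken to be the cube [-1, 1]^d purely for illustration):

```python
def l1_error(flow, encode, n_steps, n_samples=1000):
    """Monte Carlo estimate of the L1 error over initial conditions drawn from K."""
    errs = []
    for _ in range(n_samples):
        x0 = rng.uniform(-1.0, 1.0, size=d)           # sample x0 uniformly from K
        xs_hat = rnn_trajectory(encode(x0), n_steps)  # RNN rollout from the encoded x0
        xs_true = flow(x0, n_steps)                   # reference trajectory, shape (n_steps, d)
        errs.append(np.abs(xs_hat - xs_true).max())   # sup over time (and coordinates) for this x0
    return float(np.mean(errs))                       # average over x0: empirical L1 error
```

Point-wise convergence only requires the per-x0 error to vanish for each fixed x0; the L1 statement requires the returned average to vanish, which is what the extension would establish.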

References

This proof is weaker than the proof given in the previous section in that it shows point-wise convergence, rather than $L^1$ convergence. We conjecture that this can be extended straightforwardly but we leave this result to future work.

Expressivity of Neural Networks with Random Weights and Learned Biases (arXiv:2407.00957, Williams et al., 2024), Section 2.2 (Recurrent neural networks), after Theorem \ref{mainrnn}