Strengthen RNN bias-learning approximation from point-wise to L1 convergence
Establish $L^1$ convergence (or stronger) for the approximation of finite-time trajectories of continuous dynamical systems by discrete-time recurrent neural networks with fixed random weights and learned biases. This would extend the current result, which guarantees only point-wise convergence over initial conditions within compact invariant sets.
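To make the gap concrete, the two convergence notions can be contrasted in illustrative notation. The symbols here ($\varphi_t$ for the flow of the continuous system, $K$ for the compact invariant set of initial conditions, $[0,T]$ for the finite time horizon, and $\hat{x}^{(n)}_t$ for the trajectory produced by an $n$-unit RNN) are placeholders of ours, not the paper's notation:

```latex
% Point-wise over initial conditions (the current result): for every fixed
% initial condition x_0 in K, the worst-case trajectory error vanishes.
\forall x_0 \in K: \quad
  \sup_{0 \le t \le T}
  \bigl\| \varphi_t(x_0) - \hat{x}^{(n)}_t(x_0) \bigr\|
  \xrightarrow[n \to \infty]{} 0

% The desired L^1 statement integrates the same error over initial conditions,
% giving a uniform, measure-level guarantee on K.
\int_K \sup_{0 \le t \le T}
  \bigl\| \varphi_t(x_0) - \hat{x}^{(n)}_t(x_0) \bigr\| \, dx_0
  \xrightarrow[n \to \infty]{} 0
```

Point-wise convergence does not imply $L^1$ convergence in general, but on a compact set a uniform bound on the approximation error would allow the dominated convergence theorem to upgrade point-wise to $L^1$ convergence; this is presumably why the authors conjecture the extension is straightforward.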
References
This proof is weaker than the proof given in the previous section in that it shows point-wise convergence rather than $L^1$ convergence. We conjecture that this can be extended straightforwardly but we leave this result to future work.
— Expressivity of Neural Networks with Random Weights and Learned Biases
(2407.00957 - Williams et al., 2024) in Section 2.2 (Recurrent neural networks), after Theorem \ref{mainrnn}