The Kinetic Limit of Balanced Neural Networks

Published 24 May 2025 in math.PR and math.DS | (2505.18481v1)

Abstract: The theory of balanced neural networks is a popular explanation for the high degree of variability and stochasticity in the brain's activity. Roughly speaking, it posits that a typical neuron receives many excitatory and inhibitory inputs whose network-wide means cancel, leaving only the stochastic fluctuations about the mean. In this paper we determine kinetic equations that describe the population density. The intrinsic dynamics is nonlinear, with multiplicative noise perturbing the state of each neuron. The equations have a spatial dimension, so that the strength of connection between neurons is a function of their spatial positions. Our method of proof is to decompose the state variables into (i) the network-wide average activity and (ii) fluctuations about this mean. In the limit, we obtain two coupled equations. The requirement that the system be balanced yields implicit equations for the evolution of the average activity, and in the large-n limit the population density of the fluctuations evolves according to a Fokker-Planck equation. If one additionally assumes that the intrinsic dynamics is linear and the noise is additive rather than multiplicative, one obtains a spatially-distributed neural field equation.
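The mean/fluctuation decomposition described in the abstract can be illustrated with a toy simulation. The sketch below is not the paper's model: it omits the spatial dimension, uses additive rather than multiplicative noise, and picks a hypothetical intrinsic dynamics f(x) = -x + tanh(x); the i.i.d. O(1/sqrt(n)) zero-mean weights stand in for a balanced connectivity. It shows only the qualitative point that the network-wide mean input approximately cancels while O(1) fluctuations survive.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 2000      # number of neurons
dt = 1e-3     # Euler-Maruyama time step
steps = 500
sigma = 0.5   # noise intensity (additive here; the paper's setting is multiplicative)

# Balanced connectivity sketch: zero-mean weights of order 1/sqrt(n),
# so mean inputs cancel network-wide while fluctuations remain O(1).
J = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, n))

x = rng.normal(0.0, 1.0, size=n)  # neuron states


def f(x):
    """Hypothetical nonlinear intrinsic dynamics (leaky + saturating drive)."""
    return -x + np.tanh(x)


for _ in range(steps):
    drive = J @ np.tanh(x)  # recurrent input through the balanced weights
    x = x + dt * (f(x) + drive) + np.sqrt(dt) * sigma * rng.normal(size=n)

# Decompose the state into (i) network-wide average activity and
# (ii) fluctuations about that mean, mirroring the abstract's decomposition.
mean_activity = x.mean()
fluctuations = x - mean_activity

print(mean_activity)       # small: mean inputs approximately cancel
print(fluctuations.std())  # order-one fluctuations persist
```

In the paper's large-n limit, the empirical density of these fluctuation variables is what evolves according to a Fokker-Planck equation, coupled to the implicit balance equations for the mean.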
