Compiling molecular ultrastructure into neural dynamics
Abstract: High-resolution brain imaging can now capture not just synapse locations but their molecular composition, with the cost of such mapping falling exponentially. Yet such ultrastructural data has so far told us little about local neuronal physiology: specifically, the parameters (e.g., synaptic efficacies, local conductances) that govern neural dynamics. We propose to translate molecularly annotated ultrastructure into physiology by introducing an ultrastructure-to-dynamics compiler: a learned mapping from such ultrastructure to simulator-ready, uncertainty-aware physiological parameters. The key requirement is paired training data: ultrastructure acquired by imaging, together with dynamical responses to perturbations measured in physiological experiments on the same tissue. With these data we can train models that predict local physiology directly from structure. Such a compiler would support biophysical simulations by turning anatomical maps into models of circuit dynamics, shifting structure-to-function from a descriptive program to a predictive one and opening routes to understanding neural computation and forecasting intervention effects.
Explain it Like I'm 14
What is this paper about?
This paper suggests a new tool for neuroscience called an “ultrastructure-to-dynamics compiler” (UDC). In simple terms, it’s a smart translator that looks at super-detailed pictures of brain cells and their molecules (the “parts list” and how they’re arranged) and turns that into predictions about how those cells actually behave electrically (how they fire, connect, and respond). The goal is to go from what the brain looks like at the tiniest scales to what it does, so we can build realistic simulations of brain circuits and forecast what will happen if we change something, like giving a drug or shining light on certain cells.
The big questions the authors ask
The paper focuses on a few clear questions, restated in everyday language:
- If we can see the tiny parts of brain cells and which molecules are where, can we predict how those cells will act?
- Can we build a general “translator” that, given detailed images and labels of molecules, outputs the key knobs and dials needed to run realistic brain simulations (like how strong a synapse is, or how fast channels open and close)?
- Can this translator also say how confident it is, and how things might change across different brain states (like when a neuromodulator is present)?
- What kind of training data do we need to make this translator work well, and how do we test that it really generalizes to new situations?
How do they plan to do it?
Think of a computer compiler: it converts code you write into machine instructions the computer can run. The authors want a brain version of that. The “source language” is high-resolution images of brain tissue that show:
- Wiring (who connects to whom),
- Shapes of cells and synapses,
- And crucially, which molecules are present and how they’re organized, even at nanometer scales.
The “target language” is a set of numbers a simulator can use to reproduce neural behavior:
- Synapse strengths and timing,
- Channel conductances and kinetics,
- How likely synapses are to release chemicals,
- How all of this changes with state (and how uncertain we are).
To make this translator trustworthy, they propose training it with paired data: the exact same piece of tissue is both (1) imaged in great molecular detail and (2) tested for function. Function can be measured with tools like patch-clamp electrophysiology or optical methods that read out voltage and calcium. With enough of these “image + behavior” pairs, machine learning can learn the mapping from structure to dynamics.
Two complementary learning routes are described:
- End-to-end learning: feed the 3D images (with molecular labels) directly to a model that learns to predict the simulator parameters.
- Feature-guided learning: first extract meaningful features (like how many vesicles are ready to release, receptor types and densities, or synapse geometry), then infer the parameters using interpretable biophysical models.
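In code, the feature-guided route might look like the toy sketch below: interpretable structural features (PSD area, receptor density, docked-vesicle count) are mapped to a prior over synaptic conductance. The feature names, the linear receptor-count scaling, and all constants are illustrative assumptions, not values from the paper.

```python
# Sketch of the "feature-guided" route: hand-extracted structural features are
# mapped to a biophysical parameter through an interpretable model.

def synapse_prior_from_features(psd_area_um2, ampar_density_per_um2,
                                docked_vesicles):
    """Return a rough prior (mean, sd) over peak synaptic conductance in nS.

    Assumes conductance scales with receptor count (PSD area x density) via an
    assumed single-receptor unitary conductance, and that fewer docked
    vesicles mean more trial-to-trial variability (both are assumptions).
    """
    unitary_ns = 0.01                                  # assumed unitary conductance, nS
    n_receptors = psd_area_um2 * ampar_density_per_um2  # estimated receptor count
    mean_g = unitary_ns * n_receptors
    sd_g = mean_g / max(docked_vesicles, 1) ** 0.5      # wider prior for small pools
    return mean_g, sd_g

mean_g, sd_g = synapse_prior_from_features(
    psd_area_um2=0.08, ampar_density_per_um2=1000.0, docked_vesicles=4)
# mean_g = 0.01 * 80 = 0.8 nS; sd_g = 0.8 / 2 = 0.4 nS
```

The point of this route is that each intermediate quantity (receptor count, pool size) remains inspectable, unlike an end-to-end network's internal activations.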
A key idea is “forward vs. inverse” problems. Guessing hidden parameters just from activity (inverse) is like tasting a cake and trying to figure out the recipe—many different recipes might taste similar. Going from measured structure and molecules to parameters, then simulating (forward), is more like starting with the ingredient list and cooking method to predict the taste. The UDC aims to shrink the guesswork by using what we can now see.
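The "forward" direction is easy to make concrete: given the parameters, a simulator predicts the response. A minimal single-compartment sketch, with illustrative constants rather than anything from the paper:

```python
import math

def simulate_epsp(g_peak_ns=0.8, tau_syn_ms=2.0, e_syn_mv=0.0,
                  v_rest_mv=-70.0, r_m_mohm=100.0, tau_m_ms=20.0,
                  t_max_ms=50.0, dt_ms=0.1):
    """Euler-integrate membrane voltage after one synaptic event at t = 0.

    The synapse is an exponentially decaying conductance; the cell is a
    passive RC compartment. All parameter values are illustrative.
    """
    v = v_rest_mv
    trace = []
    t = 0.0
    while t < t_max_ms:
        g = g_peak_ns * math.exp(-t / tau_syn_ms)   # synaptic conductance, nS
        i_syn = g * (e_syn_mv - v) * 1e-3           # nS * mV = pA; *1e-3 -> nA
        dv = (-(v - v_rest_mv) + r_m_mohm * i_syn) / tau_m_ms
        v += dv * dt_ms
        trace.append(v)
        t += dt_ms
    return trace

trace = simulate_epsp()
peak_depolarization = max(trace) - (-70.0)   # small depolarization above rest, mV
```

Running this forward map is cheap and unambiguous; the compiler's job is the hard inverse-adjacent step of supplying `g_peak_ns`, `tau_syn_ms`, and their uncertainties from structure alone.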
To know it’s working, the authors propose community benchmarks at three levels, tested on held-out cases and perturbations:
- Synapse level: predict the size and timing of synaptic responses, including short-term changes and randomness.
- Cell level: predict how a neuron responds to inputs (its excitability, adaptation, etc.), and how that changes with drugs that affect channels.
- Circuit level: build up from the predicted synapses and cells to simulate networks, and predict how activity changes when we stimulate, block, or otherwise perturb the circuit.
Why now? Imaging technologies are getting much faster and cheaper at capturing ultrastructure with molecular detail, while in vivo physiology (recording a lot of neurons at once in a living brain) is improving more slowly and has physical limits. The authors argue we can produce enough paired training data at the component level (synapses, dendrites, somas) to train the compiler, and then apply it widely wherever structural maps exist.
What do the authors find or argue?
This is a vision paper, not a report of a finished system. The authors make a case that the approach is feasible and worth pursuing now:
- Evidence that structure carries signal: Features like synapse size and certain molecular markers can already explain a meaningful portion of the variation in synaptic strength and behavior, though far from all of it.
- Modeling is ready: We already have methods to fit detailed neuron models from recordings and morphology, and we know that a cell’s gene-expression type predicts aspects of its electrical behavior. So the “output side” (turning parameters into simulations) is viable.
- Imaging is scaling fast: New methods can label many molecules and map them in 3D at very fine resolution, and costs are dropping.
- A realistic plan for training data: They outline how to collect the needed paired datasets using combinations of optogenetics, pharmacology, voltage imaging, and post hoc molecular/ultrastructural imaging on the same cells or circuits.
- Clear success tests: They propose falsifiable first targets like the retina (a well-understood system with defined inputs and outputs) and C. elegans (a small nervous system with a known wiring diagram) to prove the idea—or learn why it fails.
They also emphasize uncertainty: because similar-looking synapses can behave differently depending on context (like neuromodulators), the compiler should output distributions (ranges with confidence), not just single numbers.
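A toy sketch of what state-conditioned, distributional output could look like; the structural scaling, the neuromodulation effect, and every constant here are invented for illustration:

```python
def predict_conductance(psd_area_um2, neuromodulated=False):
    """Return a (mean, sd) distribution over peak conductance in nS.

    A point estimate would hide that the same structure can behave
    differently across states; the distribution widens when the state
    (here, a hypothetical neuromodulator) is less constrained.
    """
    mean = 10.0 * psd_area_um2    # assumed structure-to-conductance scaling
    sd = 0.2 * mean               # baseline relative uncertainty (assumed)
    if neuromodulated:
        mean *= 1.3               # presumed potentiation under modulation
        sd = 0.5 * mean           # and more uncertainty about its magnitude
    return mean, sd

baseline = predict_conductance(0.08)             # (0.8, 0.16)
modulated = predict_conductance(0.08, neuromodulated=True)   # (1.04, 0.52)
```

Downstream simulators would then sample from these distributions rather than plugging in single numbers, so circuit-level predictions inherit the structural uncertainty.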
Why this matters
Here are the main reasons this idea could be transformative:
- Better predictions for treatments: If we can predict how changing a molecule affects circuit activity, we could forecast a drug’s effects and side effects earlier, potentially saving time and money in drug development.
- Structure-to-function that actually predicts: Instead of just drawing beautiful brain maps, we’d turn those maps into models that can be run and tested under new conditions—like changing a receptor or stimulating certain cells.
- Improved simulations: More realistic parameters would make cell, circuit, and even whole-brain simulations more faithful to biology, enabling better scientific tests and engineering applications.
- Community benchmarks and shared data: The paper pushes for standardized datasets and tests, which would help the field make steady, comparable progress.
A simple discussion of challenges and impact
The authors are honest about hurdles:
- Static pictures don’t show changing brain states: A snapshot can miss recent activity or neuromodulators that shift behavior. The plan is to handle this by training under different states and having the model express uncertainty.
- Data is the bottleneck: We urgently need shared, standardized datasets that link molecular ultrastructure to controlled physiological measurements. Without these, learning the translation is impossible.
- Generalization and scale: What works in one species or brain region might not transfer automatically. Computation could also be heavy for very large simulations.
- Ethics and goals: As simulations get more realistic, there are ethical questions (e.g., simulating animals) and practical questions about how to measure “success” fairly.
Still, the potential payoff is big: if the UDC works, we can move from “describing” brain structure to “using” it to predict function and intervention outcomes. That could reshape how we study brains, design experiments, and develop therapies. The authors argue this needs a coordinated, community-level effort—an organized push to build the datasets, benchmarks, and tools so our maps of the brain can finally “speak” in the language of dynamics.
Knowledge Gaps
Knowledge gaps, limitations, and open questions
Below is a single, consolidated list of concrete gaps that the paper leaves unresolved and that future researchers could act on:
- Standardized paired corpora: Define and build large, shared datasets that co-register molecularly annotated ultrastructure with perturbation-rich local physiology (synapse- and compartment-level), including explicit protocols for tissue handling, timing, and metadata capture.
- Coverage required for generalization: Quantify how many synapse classes, cell types, developmental stages, brain regions, and perturbation conditions are minimally necessary for a compiler to generalize out of distribution.
- Co-registration workflows: Develop robust, loss-minimizing pipelines to collect physiology first and then obtain high-fidelity ultrastructure and molecular readouts from the same cells and synapses at scale.
- Minimal molecular panel and resolution: Determine the smallest set of molecular markers and nanoscale organizational features (e.g., receptor nanodomains, pre–post alignment) and the spatial resolution needed to achieve target prediction accuracies for PSP amplitude/kinetics and intrinsic excitability.
- State inference from static tissue: Establish which aspects of neuromodulatory tone, recent activity history, phosphorylation/state-dependent channel gating, and subunit composition are inferable from fixed-tissue measurements, and design auxiliary readouts to capture them when they are not.
- Volume transmission and non-synaptic signaling: Define how to represent and learn parameters for diffusion-mediated signaling (neuromodulators), astrocytic and vascular coupling, and extracellular space properties from structural and molecular proxies.
- Measurement artifacts and bias correction: Quantify and correct distortions from fixation, expansion, clearing, antibody accessibility, labeling nonlinearity, shrinkage, and EM segmentation errors; release reference standards and phantoms for cross-modality calibration.
- Identifiability under partial observability: Perform formal identifiability analyses to determine which effective parameters are learnable from available measurements and what perturbation sets make them identifiable; specify parameter subsets that must be marginalized or constrained by priors.
- Output representation and abstraction level: Determine the most reliable target parameterization (e.g., conductance densities, kinetics, STP parameters, noise models) and model timescale choice (discrete vs continuous vs event-driven) that balance fidelity and learnability.
- Conditioning on state and context: Specify how compiler outputs should be conditioned on neuromodulatory/circuit state (e.g., via explicit covariates) and define standard perturbation protocols to map state-dependent parameter shifts.
- Domain shift and transfer: Develop and benchmark domain adaptation strategies to handle shifts across species, labs, preparations, temperatures, ion concentrations, and disease states; define when a universal vs region-specific compiler is warranted.
- Benchmark suite and metrics: Operationalize the proposed synapse/cell/circuit benchmarks with preregistered tasks, datasets, and metrics (prediction error; calibration metrics such as interval coverage or expected calibration error, ECE, for continuous variables) and establish third-party evaluation.
- Learning-curve quantification: Empirically map prediction error vs number of annotated molecules, imaging resolution, and sample size; perform power analyses to guide data collection.
- End-to-end vs feature-based strategies: Compare learning approaches (raw volumes vs hand-crafted biophysical features) under controlled conditions to determine sample efficiency, interpretability, and robustness.
- Integration of multi-omic priors: Specify how to incorporate transcriptomic/proteomic information (e.g., Patch-seq) as priors or covariates and quantify their incremental value over ultrastructure alone.
- Handling missing or incomplete labels: Develop imputation and uncertainty-aware methods for cases where some molecules/structures are not stained or are below detection thresholds.
- Stochastic synaptic parameters: Define methods and necessary measurements to infer stochastic release models (e.g., vesicle pool sizes, release probabilities, failure rates) from ultrastructural features and molecular markers.
- Plasticity rules and history dependence: Determine what additional measurements (e.g., activity reporters, post hoc plasticity assays) are needed to infer short- and long-term plasticity rules from structure, beyond static receptor and scaffold counts.
- Error propagation to circuit simulations: Build sensitivity and uncertainty-propagation analyses to quantify how local parameter estimation errors affect circuit- and population-level predictions; define acceptable error budgets.
- Computational tractability: Establish reduced-order model forms or surrogate simulators that retain predictive validity while remaining computationally feasible at circuit/brain scales.
- Spatial discretization and compartment mapping: Define standards for mapping nanoscale molecular and ultrastructural features onto multi-compartment neuron models (e.g., mapping nanodomains to effective compartment parameters).
- Pharmacology linkage: Specify how to map molecular interventions (pharmacologic/genetic) to parameter changes in the compiler outputs and evaluate prediction of intervention effects under held-out drugs and doses.
- PK/PD and clinical translation: Outline how compiler-based tissue-level predictions can be integrated with pharmacokinetics/pharmacodynamics and network compensation to forecast patient-level efficacy and side effects.
- Retinal and C. elegans test designs: Provide concrete experimental designs (imaging panels, perturbations, physiological readouts, sample sizes, evaluation metrics) for the proposed first falsifiable tests in retina and C. elegans.
- Uncertainty calibration standards: Define required calibration tests (e.g., posterior predictive checks, coverage diagnostics) and reporting norms for predictive distributions over parameters and responses.
- Cross-lab reproducibility: Organize inter-lab ring trials with shared tissues, protocols, and blind evaluations to quantify reproducibility and batch effects in both imaging and physiology.
- Data governance and incentives: Specify data formats, ontologies, QC pipelines, access policies, and incentive structures (e.g., credit, funding mechanisms) required to sustain a community calibration stack.
- Ethical and regulatory frameworks: Develop guidelines for data ownership of brain content, consent for post hoc molecular-ultrastructural mapping, and boundaries around simulations that could approximate sentient systems.
- Equivalence class definition: Operationally define the “equivalence class” of effective parameterizations (what constitutes “same” for prediction) and how to compare/merge parameter sets across models and labs.
- Robustness to heterogeneous reagents: Quantify the impact of antibody lots, labeling chemistries, and imaging hardware variability on parameter inference; create normalization strategies and shared controls.
- Cross-modal registration at scale: Improve automated, high-throughput registration between optical (e.g., ExM/LSFM) and EM volumes with molecule-level alignment, including confidence estimates for correspondence.
- Sample selection bias: Assess and mitigate biases introduced by focusing on accessible regions/cell types or well-labeled molecules; design sampling strategies that ensure representativeness.
- Open-source tooling: Deliver reference implementations for segmentation, feature extraction, parameter inference, and simulator interfacing with documented APIs and test datasets.
- Success/failure criteria: Set explicit, falsifiable performance thresholds (e.g., synaptic EPSP amplitude RMSE ≤ X%, cell f–I curve error ≤ Y%, circuit firing-rate change prediction within Z%) that define milestone success and termination conditions.
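Several of these gaps, notably error propagation and sensitivity analysis, are straightforward to prototype. A Monte Carlo sketch, using a generic two-population rate model (not the paper's models, and with invented weights and uncertainties), propagates compiler-level parameter uncertainty into a spread over a circuit-level prediction:

```python
import random

def circuit_rate(w_ee, w_ei, external=10.0, steps=200, dt=0.05):
    """Steady-state excitatory rate of a tiny E-I rate model (Euler relaxation)."""
    r_e, r_i = 0.0, 0.0
    for _ in range(steps):
        r_e += dt * (-r_e + max(0.0, external + w_ee * r_e - w_ei * r_i))
        r_i += dt * (-r_i + max(0.0, 0.8 * r_e))
    return r_e

random.seed(0)
samples = []
for _ in range(500):
    # Sample connection strengths from assumed compiler-predicted mean +/- sd.
    w_ee = random.gauss(0.4, 0.05)
    w_ei = random.gauss(0.6, 0.05)
    samples.append(circuit_rate(w_ee, w_ei))
samples.sort()
lo, hi = samples[12], samples[487]   # ~95% interval over circuit predictions
```

The width of `(lo, hi)` is exactly the kind of "error budget" quantity the gap above asks for: it tells you how much local parameter uncertainty is tolerable before circuit-level predictions become uninformative.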
Practical Applications
Overview
The paper proposes an ultrastructure-to-dynamics compiler (UDC): a learned system that converts molecularly annotated ultrastructure into simulator-ready, uncertainty-calibrated physiological parameters for synapses and neuronal compartments. Below are practical applications derived from its concepts, methods, and proposed infrastructure, grouped by deployment horizon. Each item includes sectors, potential tools/products/workflows, and key assumptions/dependencies.
Immediate Applications
- Build a “community calibration stack” for structure–function data integration
- Sectors: academia, funding agencies, policy, neuroinformatics/software
- Tools/products/workflows: standardized formats for co-registering EM/expansion microscopy with patch/optical physiology; QC pipelines; alignment services; public benchmark suites (synapse, cell, circuit levels) with preregistered tasks and uncertainty-calibration metrics; open APIs and data portals
- Assumptions/dependencies: multi-institutional consortia; FAIR data governance; agreed benchmarking criteria; sustained funding for shared infrastructure
- Launch tractable testbeds (retina, C. elegans, organoids/slices) to falsify/validate UDC principles
- Sectors: academia, core facilities, imaging vendors
- Tools/products/workflows: paired recordings (voltage/calcium imaging, automated patch) + post hoc molecularly annotated ultrastructure (ExA-SPIM, expansion microscopy, MultiSEM); perturbation libraries (optogenetic, pharmacological) executed under controlled conditions; train-and-test splits with held-out perturbations
- Assumptions/dependencies: robust co-registration; perturbation throughput; clear preregistration of success criteria
- Hybrid experimental pipelines that pair perturbation-rich physiology with multiplexed ultrastructure
- Sectors: core facilities, CROs, imaging/software vendors
- Tools/products/workflows: Patch2MAP-like workflows, all-optical electrophysiology with multiplexed molecular reporters, expansion-assisted light-sheet for nanoscale molecular organization, automated sample-to-archive pipelines
- Assumptions/dependencies: sample prep fidelity; antibody panels and labeling specificity; scalable alignment between modalities
- Feature-extraction libraries linking ultrastructural motifs to biophysical priors
- Sectors: software, academia
- Tools/products/workflows: open-source Python packages that quantify PSD size, receptor/ion-channel densities, vesicle dock states, cleft geometry, dendritic morphology, and map them to priors over conductances/kinetics/short-term plasticity (for use with differentiable simulators like Jaxley)
- Assumptions/dependencies: high-quality segmentations/annotations; validated mappings to parameter priors; uncertainty quantification modules
- Uncertainty-aware parameter-fitting workflows constrained by structure
- Sectors: software, academia
- Tools/products/workflows: Bayesian/differentiable simulation toolchains that incorporate structural priors to fit single-cell and microcircuit models (e.g., f–I curves, dendritic nonlinearities) and report calibrated posterior distributions
- Assumptions/dependencies: identifiability with partial observations; robust uncertainty calibration; cross-dataset validation
- Early-stage pharma use: hypothesis triage with molecularly informed circuit expectations
- Sectors: healthcare/pharma, biotech
- Tools/products/workflows: multiplexed imaging of receptor/nanodomain organization pre/post compound; qualitative-to-semiquantitative forecasts of circuit targets/risks; fast feedback loops to design targeted physiology assays
- Assumptions/dependencies: correlation (not yet causal prediction) between molecular organization and effective parameters; access to tissue and imaging; acceptance of uncertainty bounds
- Policy and ethics frameworks for brain-structure datasets and model use
- Sectors: policy/regulators, funders, ELSI experts
- Tools/products/workflows: guidelines for consent, ownership, data sharing; evaluation standards for domain shift; criteria for success/failure; pathways for regulatory-grade datasets
- Assumptions/dependencies: stakeholder engagement; alignment with existing biomedical regulations; international coordination
- Education and workforce training modules on structure-to-function modeling
- Sectors: education, academia
- Tools/products/workflows: interactive notebooks/demos showing how nanoscale features constrain dynamics; mini-benchmarks for student projects; best-practice guides for multimodal co-registration
- Assumptions/dependencies: release of small, well-annotated public datasets; simplified yet faithful models for teaching
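As a concrete illustration of the uncertainty-calibration metrics the benchmark suites above call for, here is a minimal coverage check on synthetic data; the "compiler" is a stand-in that happens to be perfectly calibrated by construction:

```python
import random

def empirical_coverage(predictions, truths, z=1.645):
    """Fraction of true values falling inside each predicted 90% interval.

    Each prediction is a (mean, sd) pair; z = 1.645 gives a two-sided 90%
    interval under a Gaussian assumption. A well-calibrated predictor
    should score close to 0.90 on held-out data.
    """
    hits = 0
    for (mean, sd), truth in zip(predictions, truths):
        if mean - z * sd <= truth <= mean + z * sd:
            hits += 1
    return hits / len(truths)

random.seed(1)
preds, truths = [], []
for _ in range(1000):
    mean, sd = random.uniform(0.5, 1.5), 0.2
    preds.append((mean, sd))
    truths.append(random.gauss(mean, sd))   # truth drawn from the prediction

coverage = empirical_coverage(preds, truths)   # near 0.90 for calibrated output
```

An overconfident compiler (sd too small) would score well below 0.90 here, which is the failure mode calibration benchmarks are meant to catch.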
Long-Term Applications
- Production UDC services and SDKs (compile images to simulator-ready parameters)
- Sectors: software, CROs, neurotech startups, academia
- Tools/products/workflows: cloud APIs/SDKs that accept volumes + metadata and output distributions over synaptic/compartment parameters with uncertainty; integration with NEURON/Brain Modeling Toolkits/Jaxley; automated domain-adaptation modules
- Assumptions/dependencies: large paired corpora across species/preparations; robust out-of-distribution performance; sustainable compute and data pipelines
- Mechanism-guided drug development and side-effect prediction
- Sectors: healthcare/pharma, regulators
- Tools/products/workflows: in silico evaluation of receptor-level interventions on circuit dynamics in tissue- or patient-typical models; prioritization of compounds/targets; adverse-effect risk maps across brain regions/states
- Assumptions/dependencies: validated translation from molecular organization to effective parameters across states; integration with PK/PD models and human variability; regulatory acceptance of model-informed decisions
- Patient-specific therapeutic planning (epilepsy, DBS, neurostimulation)
- Sectors: healthcare, medical devices, hospital systems
- Tools/products/workflows: compile patient tissue (biopsies, resected tissue, or high-res ex vivo imaging) to parameterize focal circuit models; simulate resection/stimulation/drug strategies; optimize DBS lead placement and waveforms
- Assumptions/dependencies: clinical-grade imaging/annotation; pipelines from sample to model within clinical timelines; prospective validation trials; handling of neuromodulatory/volume transmission effects
- Large-scale circuit and whole-brain simulations grounded in measured ultrastructure
- Sectors: academia/HPC, national labs
- Tools/products/workflows: supercomputer deployments (e.g., Fugaku-class) running circuits with compiled parameters; assimilation of meso-scale population data to re-calibrate; scenario testing (lesions, optogenetics, learning protocols)
- Assumptions/dependencies: compute and memory scaling; error-compounding management; benchmarking against out-of-distribution perturbations
- AI/ML innovation inspired by nanoscale computational motifs and plasticity rules
- Sectors: AI/software, robotics
- Tools/products/workflows: extraction of local motifs (e.g., nanodomain-aligned synaptic rules, dendritic nonlinearities, short-term plasticity distributions) and incorporation into differentiable architectures or learning rules; neuromodulation-conditioned dynamics for continual learning
- Assumptions/dependencies: reliable motif discovery across cell types; demonstrable gains over baselines; transferable abstractions from biology to engineering domains
- Organoid and disease-model forecasting for target discovery and toxicity
- Sectors: biotech, pharma, regulators
- Tools/products/workflows: compile organoid ultrastructure to predict maturation trajectories and intervention responses; screen gene edits/compounds for circuit-level effects; regulatory neurotoxicity assessments
- Assumptions/dependencies: representativeness of organoids for human circuits; standardized perturbation suites; cross-lab reproducibility
- Neurodevice and BCI design via in silico stimulation response prediction
- Sectors: medical devices, neurotech startups
- Tools/products/workflows: simulate how stimulation protocols propagate through compiled tissue; optimize electrode geometry, waveforms, and closed-loop policies before animal/human testing
- Assumptions/dependencies: accurate modeling of extracellular space and glia; integration of non-synaptic signaling; validation across states and subjects
- Precision neuropathology and computational diagnostics
- Sectors: healthcare, diagnostics, payers
- Tools/products/workflows: compile biopsy/archival tissue to quantify shifts in effective synaptic/ionic parameters characteristic of disease stages; support prognosis and therapy selection
- Assumptions/dependencies: clinic-compatible specimen workflows; normative parameter atlases; health-economic validation
- Policy frameworks for model-based evidence in neuroscience and medicine
- Sectors: regulators, standards bodies, funders
- Tools/products/workflows: pathways to qualify UDC outputs as adjunct evidence; standards for uncertainty reporting and domain-shift evaluation; incentives for shared calibration datasets (e.g., FROs/consortia)
- Assumptions/dependencies: consensus on metrics; ethical guidance on brain simulation; sustained funding and governance
Notes on feasibility across all long-term items: success hinges on assembling large, standardized, perturbation-rich paired corpora; solving identifiability under partial observation; calibrating uncertainty; managing distribution shift across species and preparations; and ensuring compute scalability. Ethical, regulatory, and data-ownership questions must be addressed as capabilities advance.
Glossary
- Active zone: The presynaptic region where neurotransmitter vesicles dock and release. "Active zone area/volume"
- Adaptation: A neuron's change in firing behavior over sustained stimulation. "e.g., f-I curve, subthreshold impedance, adaptation"
- Astrocytes: Star-shaped glial cells involved in neurotransmission, metabolism, and homeostasis. "Diffusion, volume transmission, astrocytes, vascular coupling, immune effects, and extracellular space properties may be essential"
- Automated optical electrophysiology: High-throughput, optics-based stimulation/recording paradigms for neuronal activity. "Current automated optical electrophysiology and multiplexed reporter platforms41,42 are approaching the throughput needed to produce corpora of this size"
- bdTEM: Beam-deflection transmission electron microscopy, a fast TEM modality for large areas. "bdTEM"
- Biophysical simulations: Mechanistic computational models grounded in physical/biological parameters. "Such a compiler would support biophysical simulations by turning anatomical maps into models of circuit dynamics"
- Biophysically interpretable features: Imaging-derived measurements mapped to mechanistic parameters (e.g., receptor density). "Biophysically Interpretable Features"
- Calcium imaging: Optical measurement of activity via calcium-sensitive indicators. "calcium imaging can supply lower-bandwidth constraints"
- Calibrated uncertainty: Uncertainty estimates that are statistically well-aligned with real error frequencies. "with calibrated uncertainty."
- Chemogenetic: Using engineered receptors/chemicals to control cellular signaling. "optogenetic or chemogenetic control of specific signaling pathways"
- Cleft geometry: The physical structure of the synaptic cleft affecting transmission. "Cleft geometry"
- C. elegans: A model organism with a compact nervous system used for whole-organism neural studies. "In systems such as C. elegans, increasingly complete simulations already combine connectome, morphology, body, and environment73."
- Compartment parameters: Local neuronal properties (e.g., conductances) for soma/dendrite/axon compartments. "effective synapse and compartment parameters"
- Conditional generative translation: A learned mapping that outputs parameter distributions conditioned on structural inputs. "We will train a conditional generative translation from molecular ultrastructure to model parameters and stochastic components, with calibrated uncertainty."
- Connectome: A complete wiring diagram of neural connections in a nervous system/tissue. "beyond what a known connectome alone provides."
- Connectomics: The science of mapping and analyzing neural connectivity at scale. "purely structural connectomics is insufficient"
- Conductance-equivalent parameters: Effective parameters representing ion channel-mediated currents in models. "distributions over conductance-equivalent parameters"
- Conductances: Measures of ion channel-mediated electrical conductance in membranes/synapses. "for example, synaptic conductances and kinetics"
- Dendritic nonlinearities: Nonlinear integrative properties of dendrites affecting synaptic summation. "dendritic nonlinearities"
- Distribution shift: Changes in data distribution across experiments/species that impair model generalization. "distribution shift across preparations and species"
- Electron microscopy (EM): High-resolution imaging using electron beams to visualize ultrastructure. "With EM, we can see synapses and their sizes, shapes, and connectivity."
- End-to-end machine learning: Learning a direct mapping from inputs to outputs without hand-crafted intermediates. "End-to-end Machine Learning"
- Equivalence class: A set of parameterizations that are functionally indistinguishable for predictions. "The inferred parameters define an equivalence class"
- EPSP (excitatory postsynaptic potential): Depolarization of the postsynaptic neuron following excitatory input. "somatic excitatory postsynaptic potential (EPSP)"
- Excitation–inhibition balance: The relative strengths of excitatory and inhibitory influences in a circuit. "how excitation-inhibition balance shifts"
- Expansion microscopy (ExM): A technique that physically expands tissue to enable nanoscale imaging with light microscopes. "With expansion microscopy, we can quantify molecular content and aspects of molecular organization"
- ExA-SPIM (LSFM): Expansion-assisted selective plane illumination microscopy, a light-sheet method for expanded tissues. "ExA-SPIM (LSFM) (2024)"
- ExLLSM: An expansion-light-sheet variant enabling large-volume, nanoscale-resolution imaging. "ExLLSM"
- f-I curve: The relationship between input current and output firing rate of a neuron. "e.g., f-I curve, subthreshold impedance, adaptation"
- Feature-based mechanistic inference: Inferring parameters via biologically interpretable features rather than fully end-to-end learning. "two complementary paths, end-to-end learning and feature-based mechanistic inference (see Fig. 3)."
- FlyWire: A large-scale Drosophila connectomics project providing dense wiring diagrams. "for example, FlyWire [33] provides extraordinary anatomy and connectivity"
- Forward problem: Simulating outputs from known parameters/structure (easier than inverse inference). "the much simpler forward problem of simulating a circuit"
- Identifiability: Whether model parameters can be uniquely determined from available data. "the goal is identifiability of effective parameters"
- Kinetics: The time course of ion channel/synaptic responses (e.g., rise/decay). "channel densities and kinetics set excitability"
- Multiplexed reporters: Sets of biosensors enabling simultaneous readout of multiple signals. "Increasing multiplexed reporters further allows simultaneous measurement of multiple signals"
- MultiSEM: Multi-beam scanning electron microscopy for high-throughput EM imaging. "Zeiss MultiSEM"
- Nanodomains: Nanoscale clusters of receptors or proteins within synapses that modulate transmission. "receptor nanodomains"
- Neuromodulatory state: The prevailing influence of neuromodulators that shifts neuronal/synaptic properties. "effective parameters are dynamically shifted by neuromodulatory state"
- Optogenetic: Light-driven control of genetically targeted cells via opsins. "optogenetic or chemogenetic control of specific signaling pathways"
- Optophysiology: Optical methods to probe physiological function in intact tissue. "e.g., optophysiology, patch-clamping, etc"
- Out-of-distribution benchmarks: Tests evaluating generalization to data outside the training distribution. "out-of-distribution benchmarks"
- Patch-clamping: Electrophysiological technique to record/control membrane currents/voltages via a pipette. "patch-clamping"
- PEEM: Photoemission electron microscopy, an EM-related modality for contrast and speed. "PEEM"
- Perturb-seq: A pooled genetic perturbation approach combined with single-cell RNA profiling. "Perturb-seq [43]"
- Postsynaptic density: Protein-dense region beneath the postsynaptic membrane at excitatory synapses. "Postsynaptic density size"
- Pre/post alignment: Spatial alignment of presynaptic release sites and postsynaptic receptors. "pre/post alignment"
- Receptor subtypes: Different molecular forms of a receptor with distinct functional properties. "for example via receptor subtypes, transporters, and synthesis machinery"
- Short-term plasticity: Rapid, transient changes in synaptic strength due to recent activity. "short-term plasticity"
- Simulator-ready: Parameters formatted for immediate use in mechanistic simulators. "simulator-ready parameters"
- Spike sorting: Computational process of assigning recorded spikes to individual neurons. "spike sorting"
- Stochastic release: Probabilistic neurotransmitter vesicle release at synapses. "stochastic release"
- Stochasticity: Random variability included in model parameters or outputs. "their stochasticity (not shown)"
- Supramolecular organization: Higher-order assembly of protein complexes affecting function. "the supramolecular organization of postsynaptic proteins"
- Subthreshold dynamics: Membrane potential fluctuations below spike threshold. "subthreshold dynamics"
- Subthreshold impedance: Frequency-dependent membrane impedance measured below spiking threshold. "e.g., f-I curve, subthreshold impedance, adaptation"
- Synaptic efficacies: Effective strengths of synaptic connections. "synaptic efficacies"
- Synchrotron micro-CT: High-energy X-ray micro–computed tomography using a synchrotron source. "Synchrotron micro-CT"
- Transcriptomic identity: A cell’s gene-expression profile used to predict function/connectivity. "transcriptomic identity predicts intrinsic electrophysiological phenotype"
- Ultrastructure: Nanoscale cellular architecture, including synapses and molecular organization. "ultrastructure does not mean geometry alone."
- Ultrastructure-to-dynamics compiler (UDC): A learned system mapping molecular ultrastructure to physiological parameters with uncertainty. "an 'ultrastructure-to-dynamics compiler' (or UDC for short; Box 1)"
- Uncertainty calibration: Procedures/metrics ensuring predicted uncertainties reflect true error frequencies. "uncertainty-calibration norms."
- Uncertainty-aware: Models that explicitly represent uncertainty in their predictions. "uncertainty-aware physiological parameters"
- Voltage imaging: Optical measurement of membrane potential using voltage-sensitive indicators. "Voltage imaging and targeted electrophysiology can provide synapse- and compartment-level constraints"
- Volume transmission: Diffuse, extrasynaptic signaling via neuromodulators in extracellular space. "The compiler therefore does not ignore volume transmission."
- Voxels: Volumetric pixels representing 3D imaging resolution units. "Voxels per dollar"
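To make the f-I curve entry concrete, here is a minimal sketch computing the input-current-to-firing-rate relation of a textbook leaky integrate-and-fire neuron. The model choice and all parameter values (membrane resistance, time constant, threshold) are illustrative assumptions, not parameters from the paper.

```python
import math

def lif_firing_rate(i_inj, r_m=100e6, tau_m=20e-3, v_th=20e-3, v_reset=0.0):
    """Steady-state firing rate (Hz) of a leaky integrate-and-fire neuron
    driven by a constant current i_inj (amperes). Hypothetical parameters:
    r_m = 100 MOhm, tau_m = 20 ms, threshold 20 mV above a 0 mV reset."""
    v_inf = i_inj * r_m  # asymptotic membrane voltage for this current
    if v_inf <= v_th:
        return 0.0  # below rheobase: the neuron never reaches threshold
    # time for the voltage to relax from v_reset up to v_th toward v_inf
    t_spike = tau_m * math.log((v_inf - v_reset) / (v_inf - v_th))
    return 1.0 / t_spike

# f-I curve: zero below rheobase (here 200 pA), then rising with current
rates = [lif_firing_rate(i * 1e-12) for i in (100, 200, 300, 400)]
```

The characteristic shape — a hard rheobase followed by a saturating rise — is exactly what phrases like "e.g., f-I curve, subthreshold impedance, adaptation" refer to as a physiological fingerprint of a neuron.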
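The stochastic release entry can likewise be illustrated with the standard binomial model of quantal transmission: each of N release sites releases a vesicle independently with probability p, and each released vesicle contributes a quantal EPSP of size q, so the mean response is N·p·q. The site count, release probability, and quantal size below are hypothetical.

```python
import random

def release_trial(n_sites=5, p_release=0.3, q_mv=0.2, rng=random):
    """Simulate one presynaptic spike: each of n_sites vesicle sites
    releases independently with probability p_release; each release
    adds a quantal EPSP of q_mv millivolts."""
    released = sum(rng.random() < p_release for _ in range(n_sites))
    return released * q_mv

random.seed(0)
trials = [release_trial() for _ in range(10_000)]
mean_epsp = sum(trials) / len(trials)  # approaches N * p * q = 0.3 mV
```

Trial-to-trial variability in `trials` is the "stochastic release" a compiler's predicted parameters would have to capture alongside mean efficacy.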
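Finally, the uncertainty-calibration entry has a simple operational reading: a model's nominal 90% prediction intervals are calibrated if roughly 90% of true values actually fall inside them. The sketch below checks empirical coverage on synthetic standard-normal "truths"; the data and interval width are illustrative assumptions, not a procedure from the paper.

```python
import random

def interval_coverage(truths, lowers, uppers):
    """Fraction of true values falling inside their predicted intervals.
    For calibrated 90% intervals this should be close to 0.90."""
    hits = sum(lo <= t <= hi for t, lo, hi in zip(truths, lowers, uppers))
    return hits / len(truths)

random.seed(1)
truths = [random.gauss(0.0, 1.0) for _ in range(5_000)]
# nominal 90% interval for a standard normal: 0 +/- 1.645 standard deviations
lowers = [-1.645] * len(truths)
uppers = [1.645] * len(truths)
coverage = interval_coverage(truths, lowers, uppers)  # near 0.90 if calibrated
```

Systematic deviation of `coverage` from the nominal level (e.g., 0.70 for claimed 90% intervals) is the kind of miscalibration that "uncertainty-calibration norms" are meant to catch.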