
Universal Functional Blocks for Neuromorphic Systems

Updated 4 February 2026
  • Universal functional blocks are reconfigurable circuit elements that integrate synaptic plasticity and neuronal thresholding to emulate biological neural computation.
  • They leverage diverse device technologies—such as SET junctions, memristor arrays, and photonic modulators—to achieve energy efficiency, rapid switching, and support for both short- and long-term plasticity.
  • Modular composition of these blocks enables large-scale neuromorphic architectures, though challenges remain in device variability and integration maturity.

Universal functional blocks are physical, circuit-level, and algorithmic primitives that enable the scalable construction of large-scale neuromorphic architectures. These blocks typically integrate both synaptic (memory/update) and neuronal (thresholding/nonlinearity) behavior in a single, reconfigurable element, supporting efficient emulation of biological neural computation. Key requirements include colocation of processing and memory, support for short- and long-term plasticity, energy and speed competitiveness, and compositionality across diverse neural and computational tasks. Recent research demonstrates that such blocks can be implemented with a variety of device technologies, including single-electron tunneling (SET) structures, hybrid memristor-CMOS arrays, programmable analog blocks, photonic media, emerging transistor platforms, and stochastic memristor networks.

1. Device- and Circuit-Level Universal Blocks

Universal functional blocks at the device level exploit physical phenomena to achieve nonlinear, history-dependent input/output mapping—faithfully mimicking both neuron and synapse dynamics.

Single-Electron Tunneling (SET) PbS/InP Junctions: Each PbS/InP SET junction consists of colloidal PbS nanocrystals on p-InP, separated by an amorphous oxide, forming a double-barrier Coulomb island (Jarschel et al., 2019). Physical parameters (barrier resistances R₁ ≈ 20 GΩ, R₂ ≈ 4 GΩ; capacitances C₁ ≈ 5×10⁻¹⁹ F, C₂ ≈ 2×10⁻¹⁹ F) yield room-temperature Coulomb blockade (E_c ≈ 110 meV), enabling both memory and threshold detection. These elements implement:

  • Short-term plasticity (STP): Pulsed voltage fills oxide traps with reversible dynamics (recovery τ_STP ≈ 1 s).
  • Long-term plasticity (LTP): Extended pulsing drives persistent conductance shifts (τ_LTP ≈ 10² s).
  • Nonlinear thresholding: Single-electron effects mediate cumulative, stepwise changes in conductance, providing both synaptic-weight updates and neuron-like threshold nonlinearity.
  • Energy and speed: Synaptic operation requires ≈1 fJ/event at MHz rates, surpassing both memristors and biological synapses in efficiency.
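The quoted blockade energy follows directly from the junction capacitances via the standard double-barrier charging-energy formula E_c = e²/2C_Σ. A quick numerical check (the 300 K comparison against kT is textbook physics, not a figure from the source):

```python
# Charging energy of a double-barrier Coulomb island: E_c = e^2 / (2 * C_sigma)
e = 1.602176634e-19      # elementary charge, C
kB = 1.380649e-23        # Boltzmann constant, J/K

C1, C2 = 5e-19, 2e-19    # junction capacitances quoted in the text, F
C_sigma = C1 + C2

E_c_J = e**2 / (2 * C_sigma)     # charging energy, J
E_c_meV = E_c_J / e * 1e3        # convert to meV

kT_meV = kB * 300 / e * 1e3      # thermal energy at 300 K, meV

# E_c comes out near the quoted ~110 meV, several times kT at 300 K,
# which is why the Coulomb blockade survives at room temperature.
print(f"E_c = {E_c_meV:.0f} meV vs kT(300 K) = {kT_meV:.1f} meV")
```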

Monolayer Graphene Electrochemical Transistors (EGTs): EGTs exhibit voltage-gated transitions between volatile (neuron-like/spiking) and non-volatile (synapse-like/memory) modes, with extreme on/off resistance ratios (10⁶–10⁸), ms-range switching, and >10⁶-cycle endurance (Yu et al., 2023). This is achieved by employing reversible electrochemical hydrogenation to modulate channel conductivity, with the gate bias serving as a switch between synaptic (V_G ≈ 1 V) and neuronal (V_G ≳ 2 V) operation.

Diffusive Memristor Convergence Blocks: A block comprising three diffusive Ag:SiOₓ memristors in a convergence architecture can realize universal thresholding, logic (AND/OR/NOT), multilevel classification, and coincidence detection via stochastic, noise-driven filament formation and dissolution, modeling biological channel noise (Otieno et al., 3 Feb 2026). The same minimal motif—each node a Pearson–Anson oscillator—enables implementation of analog comparators, Boolean logic, and temporal processing by varying applied voltages and exploiting the inherent stochasticity.
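The convergence motif's coincidence detection can be illustrated with a toy behavioral model: inputs drive a leaky internal state (a stand-in for filament growth) that fires on crossing a threshold. The sketch below is deterministic for clarity, whereas real Ag:SiOₓ devices are stochastic; all parameters are illustrative, not device-calibrated:

```python
def coincidence_detector(pulses_a, pulses_b, leak=0.5, gain=1.0, theta=1.5):
    """Behavioral sketch of a convergence block: two pulse trains drive a
    leaky state variable (a stand-in for Ag filament growth); the block
    'fires' when the state crosses theta.  Illustrative parameters only."""
    state, fired = 0.0, []
    for a, b in zip(pulses_a, pulses_b):
        state = leak * state + gain * (a + b)   # summation plus decay
        fired.append(state >= theta)            # threshold nonlinearity
        if fired[-1]:
            state = 0.0                         # filament dissolves after firing
    return fired

# A lone pulse on either input stays sub-threshold; coincident pulses fire.
alone = coincidence_detector([1, 0, 0, 0], [0, 0, 1, 0])
together = coincidence_detector([1, 0, 1, 0], [1, 0, 1, 0])
print(alone)     # [False, False, False, False]
print(together)  # [True, False, True, False]
```

With theta between one and two pulse heights, the same element acts as an AND gate on simultaneous inputs; lowering theta below a single pulse height turns it into OR, mirroring the voltage-tunable logic described above.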

2. Algorithmic Universality: Information Representation and Processing

At the algorithmic level, universal functional blocks are characterized by their capacity to realize arbitrary mappings (e.g., universal approximation) or general-purpose digital computation (Turing-completeness).

Trainable Analogue Block (TAB) for Population Coding: The TAB framework uses a population of nonlinear analog neurons (implemented as transistor differential pairs with systematic and mismatch-induced heterogeneity in their tuning) whose outputs are linearly pooled via digitally programmed weights to approximate any continuous function, achieving universal approximation (Thakur et al., 2015). The architecture exploits device mismatch to create a high-dimensional feature basis, with offline-learned least-squares weights mapping inputs to outputs. TABs can be modularly cascaded for deep or multi-output systems, forming a reprogrammable "analog neural fabric."

Logic and Arithmetic: Binary Operations and SNN Gates: Spiking neural networks can be configured to perform Boolean computation from minimal universal gate sets (e.g., NAND) and to build two's-complement binary adders and multipliers, supporting both logic and fixed-precision arithmetic (Iaroshenko et al., 2021, Ayuso-Martinez et al., 2022). For instance, compositions of LIF-neuron-based AND, OR, NOT, NAND, NOR, and XOR gates (via appropriate synaptic weights, delays, and spike timing) on SpiNNaker demonstrate deterministic sub-ms logic evaluation with neuromorphic efficiency (Ayuso-Martinez et al., 2022). Similarly, binary matrix–vector multiplication (using neuron types γ₀–γ₂; see full-adder topology and two's-complement encoding) underlies integer and numerical computation on neuromorphic hardware (Iaroshenko et al., 2021). This block composition suffices for deep networks and general stochastic simulations.
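The gate construction can be mimicked in software: choose synaptic weights and a bias so a threshold neuron spikes exactly when the Boolean condition holds. The weights below are illustrative choices, not the SpiNNaker parameters:

```python
def lif_gate(weights, bias, inputs, v_th=1.0):
    """Single-step LIF-style gate sketch: the membrane potential is the
    weighted sum of input spikes (0/1) plus a bias current; the neuron
    emits a spike when it reaches v_th.  Weights are illustrative."""
    v = sum(w * s for w, s in zip(weights, inputs)) + bias
    return int(v >= v_th)

AND  = lambda a, b: lif_gate([0.6, 0.6], 0.0, [a, b])   # needs both spikes
OR   = lambda a, b: lif_gate([1.0, 1.0], 0.0, [a, b])   # any spike suffices
NOT  = lambda a:    lif_gate([-1.0], 1.0, [a])          # tonic bias, inhibited by input
NAND = lambda a, b: NOT(AND(a, b))                      # universal gate by composition

# XOR from the gate basis, as in standard gate-level synthesis.
XOR = lambda a, b: AND(OR(a, b), NAND(a, b))

print([XOR(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 0]
```

A full adder follows from the same pieces (sum = XOR of three bits, carry = majority), which is the route to the two's-complement arithmetic described above.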

Turing-Completeness via Primitive Neuromorphic Blocks: It is proven that six neuromorphic circuit modules—constant, successor, projection functions, and the operators composition, primitive recursion, and minimization—implemented using LIF neurons with tunable threshold and leak parameters, suffice to realize all μ-recursive functions, thereby establishing Turing completeness (Date et al., 2021). Each module has explicit neuron/synapse topologies, spike-encoding rules, and compositional wiring, allowing systematic construction of arbitrary digital computation.
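The six primitives can be written down abstractly to see why they suffice. The sketch below is the mathematical definition in plain Python, not the paper's LIF circuit encoding:

```python
# The six primitives of mu-recursive function theory as higher-order functions.

def const(c):  return lambda *args: c          # constant function
def succ(n):   return n + 1                    # successor
def proj(i):   return lambda *args: args[i]    # projection

def compose(f, *gs):                           # composition operator
    return lambda *args: f(*(g(*args) for g in gs))

def prim_rec(base, step):                      # primitive recursion
    def h(n, *args):
        acc = base(*args)
        for k in range(n):
            acc = step(k, acc, *args)
        return acc
    return h

def minimize(p):                               # unbounded mu-minimization
    def mu(*args):
        n = 0
        while p(n, *args) != 0:                # smallest n with p(n, args) == 0
            n += 1
        return n
    return mu

# Addition built purely from the primitives: add(n, m) applies succ n times to m.
add = prim_rec(proj(0), lambda k, acc, m: succ(acc))
print(add(3, 4))  # 7
```

Any function definable this way is computable, and conversely, so a hardware realization of these six modules with compositional wiring inherits Turing completeness.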

3. Hybrid and Large-Scale Integration: Neuromorphic Arrays and Tiles

Universal blocks must tile readily and compose into high-density, large-scale circuits.

Hybrid Memristive–CMOS Tiles: Each neuromorphic tile consists of (1) CMOS-compatible event-driven sensory transducers (e.g., silicon retina/cochlea), (2) memristor-based crossbar synapse arrays (for in-memory weight storage and weighted sum), (3) LIF/adaptive neuron circuits, and (4) learning engines to coordinate local plasticity (STDP, STP, homeostasis) (Chicca et al., 2019). Tiles are modular, with standardized Address-Event Representation (AER) for inter-tile communication and support for both real-time inference and background learning. Nonvolatile RRAM/PCM/CBRAM elements enable persistent memory, while volatile modes support dynamic plasticity.
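The crossbar's in-memory weighted sum is just Ohm's and Kirchhoff's laws. A minimal idealized sketch (no wire resistance, sneak paths, or device variability; the conductance and voltage values are illustrative):

```python
import numpy as np

def crossbar_mvm(G, v_in):
    """In-memory weighted sum on a memristive crossbar (idealized sketch).
    Each column current is the dot product of the input voltages with the
    stored conductances, by Kirchhoff's current law: I_j = sum_i V_i * G_ij."""
    return v_in @ G          # output currents, one per column

# Illustrative values: a 3x2 conductance matrix (siemens) and input voltages (V).
G = np.array([[1e-6, 2e-6],
              [3e-6, 1e-6],
              [2e-6, 4e-6]])
v = np.array([0.1, 0.2, 0.1])
I = crossbar_mvm(G, v)
print(I)   # column currents in amperes
```

In a tile, these column currents feed the LIF/adaptive neuron circuits, and the learning engine updates G in place via local plasticity rules.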

FPGA-Based SNN Processors with Universal Interconnections: Arrays of parameterizable LIF neurons, programmable synaptic weights/delays, and all-to-all configurable multiplexed interconnect realize arbitrary SNN topologies. The system supports runtime reconfiguration (UART-driven parameter loads), sub-µs inference, and efficient resource scaling (Harlikar et al., 11 Dec 2025). Each block is instantiated in Verilog, enabling compositional construction of layered, recurrent, or feedforward spiking networks.
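A hardware-style LIF update (leak by arithmetic shift, integer accumulation, threshold compare) is easy to prototype in software before committing to Verilog. All constants below are illustrative, not taken from the cited design:

```python
def lif_step(v, spike_in, weight, leak_shift=4, v_th=1 << 12, v_reset=0):
    """One clock-cycle update of an integer (fixed-point) LIF neuron, in the
    style of an FPGA datapath: exponential leak via arithmetic right shift,
    weighted input accumulation, then a threshold compare."""
    v = v - (v >> leak_shift)          # leak: v *= (1 - 2^-leak_shift)
    if spike_in:
        v += weight                    # synaptic weight accumulation
    if v >= v_th:
        return v_reset, 1              # fire and reset
    return v, 0

# Drive the neuron with a steady spike train and watch it integrate to threshold.
v, spikes = 0, []
for t in range(40):
    v, s = lif_step(v, spike_in=1, weight=300)
    spikes.append(s)
print(sum(spikes), "output spike(s) in 40 cycles")
```

Shift-based leaks avoid multipliers entirely, which is one reason such blocks scale well on FPGA fabric.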

Content-Addressable Memory-Based Reference Frames: NeRTCAM implements reference-frame "Place Cell" and "Grid Cell" abstractions in CMOS using reverse-ternary CAM blocks for high-throughput, low-latency associative lookups, supporting fuzzy matching, hierarchical tiling, and biologically inspired inference (Nair et al., 2024).

4. Physical and Photonic Platforms for Universal Blocks

Non-electronic media also support universal neuromorphic blocks.

Percolation-with-Plasticity (PWP) Networks: A PWP device comprises a random network of microscopic plastic (nonvolatile-switchable) resistors. For N electrodes, PWP networks provide a combinatorially large number (N!) of high-dimensional, non-linear, multi-valued memory channels, each independently programmable (Karpov et al., 2020). Voltage-driven plasticity yields multilevel storage, random number generation, matrix–vector multiplication (via high-dimensional, nonlinear mapping), fading memory, and associative recall, forming a physical reservoir for neuromorphic computation.

Photonic Functional Blocks with Transparent Conductive Oxides (TCO): SiN waveguides clad with ITO, GZO, AZO, or In:CdO layers form sub-µm photonic synapses and neurons with femtosecond-scale, bistable optical/electrical switching (Gosciniak et al., 2023). Key blocks include TCO modulators (synapses storing analog weights via carrier accumulation), MMI/WDM-based spatial and temporal summation modules, and bistable activation (neurons) implementing integrate-and-fire behavior. Integration supports ultra-high-density, low-energy, THz-bandwidth neuromorphic photonics.

5. Practical Implementation and Compositionality

Universal functional blocks are integrated at the system level through modular composition, calibration routines, and digital/analog interface standards.

Mixed-Signal Universal Substrates: Chips such as Spikey comprise arrays of analog LIF neurons (adjustable C, g_leak, V_th, E_rev), conductance-based plastic synapses (weight and STP/D parameters), digital event routing, and precise calibration logic for tailoring time constants and compensating fixed-pattern noise (Pfeil et al., 2012). Parameter configurability allows emulation of a diverse spectrum of spiking dynamics, including feedforward, recurrent, winner-take-all, attractor dynamics, and liquid-state machines, using exactly the same physical primitives.

Middleware and Hardware Abstraction: Frameworks such as Fugu formalize functional blocks as parameterized software/hardware bricks (e.g., logic gates, convolution, graph search, random walks), automatically composing them into platform-neutral circuit graphs (Aimone et al., 2019). Metadata annotations (input/output dimensionality, timing, index mappings) standardize interface definitions and enable automated translation to disparate neuromorphic back-ends (Loihi, SpiNNaker, TrueNorth).

Analog and Digital Hybrid Blocks: Memristor bridge synapses, OTA summers, CMOS-based ReLU/tanh activators, and convolutional/pooling/softmax blocks can be combined into full analog hardware networks for both fully-connected and convolutional architectures with µJ-class inference energy and sub-µs per-layer latency (Agarwal et al., 2022).

6. Limitations, Variability, and Outlook

Dominant challenges include device-to-device variability (threshold dispersion, trap density, stochasticity), operational voltage compatibility with commercial CMOS, and integration maturity with large-scale foundry manufacturing (Jarschel et al., 2019, Yu et al., 2023). Inhomogeneities and single-electron or stochastic effects require calibration or algorithmic tolerance at the circuit and algorithm levels. For many nanoscale platforms (SET, EGT, diffusive memristor), robust large-array integration is still at the proof-of-concept stage.

From an architectural perspective, compositional universality has been established for several platforms (e.g., μ-recursive function implementation with LIF/synaptic delay blocks (Date et al., 2021)), general-purpose logic/arithmetic (Iaroshenko et al., 2021, Ayuso-Martinez et al., 2022), and function approximation with population coding (Thakur et al., 2015). However, trade-offs among energy, speed, precision, and scalability vary across device physics, and remain a focus of practical hardware research.


Universal Block Type | Physical Realization | Major Functions
SET PbS/InP | Coulomb island, oxide-tunnel junction | STP/LTP, thresholding, nonlinear I–V, MHz operation
Diffusive Memristor | Metal–insulator–metal stochastic cell | Coincidence, AND/OR/NOT, comparator, noise integration
EGT (Graphene) | Electrochemical gating, H⁺ doping | Synapse/neuron switching (by V_G), nonvolatile/volatile
Memristor–CMOS | 1T–1R crossbar arrays + LIF/AdEx | In-memory weighted sum, nonlinearity, STDP/STP engine
Photonic TCO | SiN waveguide + TCO | Analog weight, integrate–fire, sub-ps operation

Universal functional blocks constitute the foundation of physical, algorithmic, and system-level implementations in neuromorphic systems, enabling scalable, efficient, and flexible brain-inspired computation across diverse substrates and architectures (Jarschel et al., 2019, Yu et al., 2023, Thakur et al., 2015, Iaroshenko et al., 2021, Otieno et al., 3 Feb 2026, Chicca et al., 2019, Agarwal et al., 2022, Harlikar et al., 11 Dec 2025, Gosciniak et al., 2023, Nair et al., 2024, Pfeil et al., 2012, Karpov et al., 2020, Aimone et al., 2019, Date et al., 2021, Ayuso-Martinez et al., 2022).
