SODACER: Dual-Buffer Adaptive Clustering RL
- The paper introduces SODACER, a reinforcement learning framework that integrates dual-buffer experience replay, adaptive clustering, and CBF-based safety to optimize nonlinear control tasks.
- It employs a novel adaptive clustering mechanism to reduce memory redundancy and balance rapid adaptability with stable policy improvements.
- Empirical results show up to 40% faster convergence and zero safety violations, demonstrating significant gains in sample efficiency and robust control.
Self-Organizing Dual-buffer Adaptive Clustering Experience Replay (SODACER) is a reinforcement learning (RL) framework designed to enhance safety, sample efficiency, and scalability in the optimal control of nonlinear dynamical systems. It introduces a dual-buffer experience replay structure with an adaptive clustering mechanism to maintain both rapid adaptability to recent experiences and a compact, diverse archive of historical interactions. Integration with Control Barrier Functions (CBFs) enforces state and input constraints, ensuring safety throughout learning, while use of the Sophia optimizer accelerates and stabilizes policy improvement. SODACER achieves notable reductions in redundant memory usage, faster convergence rates, and improved safety performance in constrained optimal control settings, as empirically validated on a nonlinear Human Papillomavirus (HPV) transmission model (Amirabadi et al., 10 Jan 2026).
1. Dual-Buffer Experience Replay Architecture
SODACER employs a two-tiered experience replay memory:
- Fast-Buffer: A small, fixed-size FIFO buffer (capacity $M_1$) storing the most recent transitions $(x_t, u_t, r_t, x_{t+1})$. This buffer supplies “low-bias, high-variance” samples, facilitating rapid adaptation to policy changes.
- Slow-Buffer: A larger repository (capacity $M_2 > M_1$) maintaining “low-variance, high-relevance” samples drawn from the entire training history. Experiences transferred from the Fast-Buffer undergo clustering to enforce diversity and to prune redundant samples, optimizing memory usage and ensuring critical environmental patterns are retained.
The experience flow is governed by the routine:
```
On each new transition S_new = (x_t, u_t, r_t, x_{t+1}):
    FastBuffer.push(S_new)
    if FastBuffer.size > M1:
        S_old = FastBuffer.pop_oldest()
        SlowBuffer.cluster_and_insert(S_old)
```
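This routine can be sketched in Python; the class and method names below are illustrative assumptions, not the paper's reference implementation:

```python
from collections import deque

class DualBuffer:
    """Minimal dual-buffer replay: a FIFO fast buffer feeding a slow archive."""

    def __init__(self, fast_capacity, slow_insert):
        self.fast = deque()                  # recent transitions, FIFO order
        self.fast_capacity = fast_capacity   # M1 in the text
        self.slow_insert = slow_insert       # callback: cluster-and-insert into the slow buffer

    def push(self, transition):
        """Add (x_t, u_t, r_t, x_next); on overflow, move the oldest to the slow buffer."""
        self.fast.append(transition)
        if len(self.fast) > self.fast_capacity:
            oldest = self.fast.popleft()
            self.slow_insert(oldest)

slow = []
buf = DualBuffer(fast_capacity=3, slow_insert=slow.append)
for t in range(5):
    buf.push((t, 0.0, 1.0, t + 1))
# the fast buffer now holds the 3 newest transitions; the 2 oldest went to `slow`
```

Routing evictions through a callback keeps the fast buffer decoupled from the clustering logic of the slow buffer.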
2. Self-Organizing Adaptive Clustering for Redundancy Reduction
The Slow-Buffer implements a self-organizing clustering mechanism utilizing Gaussian membership:

$$\mu_j(x) = \exp\!\left(-\frac{\lVert x - c_j \rVert^2}{2\sigma_j^2}\right),$$

where $c_j$ is the $j$-th cluster centroid and $\sigma_j$ its standard deviation.
Key clustering operations:
- New Cluster Creation: If $\max_j \mu_j(x) < \mu_{\min}$ for a membership threshold $\mu_{\min}$, allocate a new cluster initialized at $c_{\text{new}} = x$.
- Centroid and Count Update: The closest cluster $j^*$ updates its centroid and count: $n_{j^*} \leftarrow n_{j^*} + 1$, $c_{j^*} \leftarrow c_{j^*} + (x - c_{j^*})/n_{j^*}$.
- Variance Amplification: Absorbs outliers via $\sigma_{j^*} \leftarrow \gamma_{+}\,\sigma_{j^*}$ with $\gamma_{+} > 1$.
- Variance Reduction: Implements a “forgetting” mechanism: $\sigma_j \leftarrow \gamma_{-}\,\sigma_j$ with $0 < \gamma_{-} < 1$ for clusters left unmatched.
- Pruning and Merging: Clusters with $n_j < n_{\min}$ are pruned; clusters within proximity $\lVert c_i - c_j \rVert < \epsilon$ are merged.
This adaptive mechanism dynamically regulates cluster population, maximizing experience diversity and minimizing storage overhead (Amirabadi et al., 10 Jan 2026).
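The membership test and incremental centroid update can be illustrated as follows; the threshold value, initial spread, and update rule are generic assumptions rather than the paper's exact parameters, and pruning, merging, and variance adaptation are omitted for brevity:

```python
import math

MU_NEW = 0.5    # membership threshold for spawning a new cluster (assumed value)
SIGMA_0 = 1.0   # initial cluster spread (assumed value)

class Cluster:
    def __init__(self, x):
        self.c = x           # centroid
        self.sigma = SIGMA_0
        self.n = 1           # sample count

def membership(cluster, x):
    """Gaussian membership of sample x in a cluster."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, cluster.c))
    return math.exp(-d2 / (2.0 * cluster.sigma ** 2))

def cluster_and_insert(clusters, x):
    """Assign x to its best-matching cluster, or create a new one if membership is low."""
    if clusters:
        best = max(clusters, key=lambda cl: membership(cl, x))
        if membership(best, x) >= MU_NEW:
            best.n += 1
            # incremental centroid update: c <- c + (x - c) / n
            best.c = tuple(c + (xi - c) / best.n for c, xi in zip(best.c, x))
            return best
    clusters.append(Cluster(x))
    return clusters[-1]

clusters = []
for x in [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0)]:
    cluster_and_insert(clusters, x)
# two nearby samples merge into one cluster; the distant sample spawns a second
```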
3. Safety Enforcement via Control Barrier Functions
To guarantee satisfaction of safety-critical state and input constraints, SODACER integrates CBFs. For a safe set $\mathcal{C} = \{x : h(x) \ge 0\}$, state constraints are enforced by ensuring

$$\dot{h}(x, u) + \alpha(h(x)) \ge 0,$$

where $\alpha$ is a class-$\mathcal{K}$ function gain.
At each action selection step, the policy action $u_t^{\mathrm{RL}}$ is projected onto the feasible set by solving the quadratic program:

$$u_t^{\mathrm{safe}} = \arg\min_{u \in \mathcal{U}} \,\lVert u - u_t^{\mathrm{RL}} \rVert^2 \quad \text{s.t.} \quad \dot{h}(x_t, u) + \alpha(h(x_t)) \ge 0.$$
This projection ensures that every action executed by the agent respects all encoded safety constraints, delivering robust operation in dynamic or safety-critical environments (Amirabadi et al., 10 Jan 2026).
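As a concrete illustration, consider the scalar system $\dot{x} = u$ with barrier $h(x) = x_{\max} - x$, for which the QP reduces to clipping against one linear constraint. This simplified single-input sketch (with assumed parameter values) stands in for a general QP solver:

```python
def cbf_project(u_rl, x, x_max=1.0, alpha=2.0, u_min=-1.0, u_max=1.0):
    """Closed-form CBF-QP for the scalar system xdot = u with h(x) = x_max - x.

    The condition hdot + alpha*h >= 0 becomes -u + alpha*(x_max - x) >= 0,
    i.e. u <= alpha*(x_max - x), so the QP projection is a simple clip of u_rl.
    """
    u_cbf_bound = alpha * (x_max - x)    # upper bound imposed by the barrier
    u = min(u_rl, u_cbf_bound)           # enforce the CBF constraint
    return max(u_min, min(u_max, u))     # enforce input limits

# near the safety boundary, an aggressive action is scaled back:
safe = cbf_project(u_rl=1.0, x=0.9)      # barrier bound: 2.0 * (1.0 - 0.9) ≈ 0.2
```

Far from the boundary the bound is loose and the RL action passes through unchanged, so the projection only intervenes when safety is actually at stake.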
4. Deep RL Optimization with Sophia
Policy and value function updates in SODACER employ the Sophia optimizer, a scalable stochastic second-order method:
- First Moment Estimate: $m_t = \beta_1 m_{t-1} + (1 - \beta_1)\, g_t$
- Second Moment (Diagonal Hessian Proxy): $h_t = \beta_2 h_{t-1} + (1 - \beta_2)\, \hat{h}_t$, where $\hat{h}_t$ is a stochastic estimate of the Hessian diagonal
- Bias Correction: $\tilde{m}_t = m_t / (1 - \beta_1^{t})$, $\tilde{h}_t = h_t / (1 - \beta_2^{t})$
- Parameter Update: $\theta_{t+1} = \theta_t - \eta\, \operatorname{clip}\!\left(\tilde{m}_t / \max(\tilde{h}_t, \epsilon),\, \rho\right)$
Sophia’s adaptive diagonal curvature estimation permits per-coordinate step sizes, mitigating ill-conditioning and enabling accelerated, stable convergence compared to purely first-order methods (Amirabadi et al., 10 Jan 2026).
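A single update step in this style can be sketched as follows; the hyperparameter values are assumptions, and the diagonal Hessian estimate is supplied externally (e.g., from a Hutchinson-style estimator) rather than computed here:

```python
def sophia_step(theta, grad, hess_diag, state, lr=1e-3,
                beta1=0.9, beta2=0.99, rho=0.04, eps=1e-12):
    """One Sophia-style update on a list of scalar parameters.

    grad: per-coordinate gradient g_t; hess_diag: diagonal Hessian estimate.
    state: dict carrying the EMAs m, h and a step counter t across calls.
    """
    state["t"] += 1
    t, m, h = state["t"], state["m"], state["h"]
    new_theta = []
    for i, (p, g, hd) in enumerate(zip(theta, grad, hess_diag)):
        m[i] = beta1 * m[i] + (1 - beta1) * g      # first-moment EMA
        h[i] = beta2 * h[i] + (1 - beta2) * hd     # diagonal-Hessian EMA
        m_hat = m[i] / (1 - beta1 ** t)            # bias correction
        h_hat = h[i] / (1 - beta2 ** t)
        step = m_hat / max(h_hat, eps)             # curvature-scaled step
        step = max(-rho, min(rho, step))           # elementwise clipping
        new_theta.append(p - lr * step)
    return new_theta

state = {"t": 0, "m": [0.0], "h": [0.0]}
theta = sophia_step([1.0], grad=[0.5], hess_diag=[2.0], state=state)
```

The elementwise clip bounds each coordinate's step at $\eta\rho$, which is what keeps the update stable when the curvature estimate is small or noisy.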
5. Reinforcement Learning Pipeline
The overall SODACER-Sophia RL algorithmic process comprises the following steps:
- Observe current state $x_t$.
- Generate unconstrained action $u_t^{\mathrm{RL}} = \pi_\theta(x_t)$ from the actor network.
- Apply CBF projection to compute safe action $u_t^{\mathrm{safe}}$.
- Execute action, observe reward and next state.
- Store interaction into Fast-Buffer; when full, insert oldest into Slow-Buffer.
- For learning, form a mini-batch from all Fast-Buffer content and one representative per Slow-Buffer cluster (with optional weighting).
- Compute critic loss and gradient.
- Parameter update via Sophia optimizer.
- Actor (policy) update using CBF-adjusted samples.
This loop repeats over the designated training horizon, ensuring both safety compliance and efficient learning (Amirabadi et al., 10 Jan 2026).
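The mini-batch assembly in step 6 might be sketched as below; the representative-selection and size-based weighting rules are assumptions, since the text only notes that weighting is optional:

```python
def build_minibatch(fast_buffer, clusters):
    """Combine all fast-buffer transitions with one representative per slow-buffer cluster."""
    batch = list(fast_buffer)                  # every recent transition, unweighted
    for cl in clusters:
        batch.append(cl["representative"])     # e.g. the transition nearest the centroid
    # optional importance weights: cluster representatives weighted by cluster size
    weights = [1.0] * len(fast_buffer) + [float(cl["n"]) for cl in clusters]
    return batch, weights

fast = [("s1",), ("s2",)]
clusters = [{"representative": ("c1",), "n": 4},
            {"representative": ("c2",), "n": 2}]
batch, weights = build_minibatch(fast, clusters)
```

Weighting each cluster representative by its count lets one stored sample stand in for the many redundant transitions it replaced.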
6. Empirical Validation and Comparative Results
SODACER-Sophia was empirically validated on a nonlinear five-compartment HPV transmission model with three independent control inputs and explicit state/input constraints. Its performance was benchmarked against Random Experience Replay (RER) and static Clustering-Based Experience Replay (CBER):
| Method | Epochs to Converge | Samples Used | Safety Violations |
|---|---|---|---|
| RER | 450 ± 30 | 8,000 ± 500 | 7% |
| CBER | 380 ± 25 | 7,200 ± 450 | 4% |
| SODACER | 310 ± 20 | 6,300 ± 300 | 0% |
The cost-minimization trajectory demonstrated that SODACER achieves convergence approximately 40% faster than RER and 20% faster than CBER. The variance across runs was also lowest for SODACER, indicating superior robustness.
Further, SODACER was ranked best (average rank 1.00) in a Friedman test over five control scenarios (CBER: 2.20, RER: 2.80). Redundancy reduction in experience storage was quantified at approximately 25–35%, with a convergence acceleration of 15–30%, and zero observed constraint violations (Amirabadi et al., 10 Jan 2026).
7. Significance and Applicability
SODACER offers a reproducible blueprint for off-policy actor–critic or value-based RL frameworks where sample-efficient and safe control is required. Its dual-buffer, clustering-enhanced replay mechanism, in tandem with enforced CBF safety and accelerated convergence from Sophia, makes it suitable for high-stakes domains such as robotics, healthcare, and large-scale optimization under constraints. The modular structure facilitates extension to alternative clustering schemes, safety certificate methods, or optimizers.
A plausible implication is that sophisticated experience management via adaptive clustering, as realized in SODACER, provides tangible advantages in both memory efficiency and the bias-variance trade-off in RL with formal safety requirements. Empirical findings suggest SODACER’s approach generalizes across diverse, constrained control applications (Amirabadi et al., 10 Jan 2026).