Real-Time Motion Focus Recognition
Updated 19 January 2026
Real-Time Motion Focus Recognition is a technique that employs dynamic batch reconfiguration and sliding-batch inference to handle variable-length queries in large language models.
It optimizes throughput and reduces idle computation by dynamically inserting new queries and synchronizing attention masks and KV-caches during inference.
Empirical results demonstrate significant speedups and reduced overhead, validating its effectiveness in maintaining output correctness even with early exits.
Real-Time Motion Focus Recognition encompasses a set of batchwise and token-level dynamic inference scheduling schemes for LLMs that preserve computational throughput and correctness during highly variable, interactive workloads. The field has focused on solving latency bottlenecks in autoregressive LLM deployment resulting from disparate query lengths, divergent early exit points, and non-uniform computational demand per token or hypothesis. The prevailing trend is the development of "sliding-batch" techniques and synchronization/focus restoration procedures that enable continuous, resource-efficient, and correct model decode even as queries arrive, complete, and yield in-flight early-exits at arbitrary times.
1. Dynamic Batch Reconfiguration and “Sliding-Batch” Inference
Traditional run-to-completion batching in LLM deployment produces substantial idle computation: queries that terminate early or decode short outputs remain in the batch, emitting end-of-sequence (EOS) tokens while waiting for other batch members to finish. BATON (Cong et al., 2024) introduces a dynamic re-batching mechanism that, at each decode iteration, recognizes early-finished queries and immediately inserts newly arriving queries into their slots. Instead of duplicating the self-attention layers for new queries (as Orca does), BATON performs explicit vector shaping and attention-mask updates to align dimensions and guarantee decoder correctness, sustaining batch-level efficiency without incurring extraneous resource consumption.
All batch slots remain independent. Whenever a slot emits EOS, its input-token, attention-mask, and KV-cache entries are re-initialized for the incoming query, the batch size is maintained at $B$, and tensor content is reshaped (padded) to ensure dimensional conformity. Newly inserted queries' KV-cache entries are embedded via a prefill/decode separation mechanism, so prefilling never causes batch idleness.
2. Mathematical Formulation: Vector Shaping, Masking, KV Caching
Let the batch size be $B$, the current sequence length for slot $b$ be $\ell_i^{(b)}$, and a newly arrived query have prompt length $\ell_q$. Define the padded input matrix $X' \in \mathbb{R}^{B \times L'}$ with $L' = \max(\max_{b<B} \ell_i^{(b)}, \ell_q)$ and per-slot padding $p_b = L' - \ell_i^{(b)}$. For each $b$:
- $X'[b, 0:\ell_i^{(b)}] = \text{old tokens}_b$
- $X'[b, \ell_i^{(b)}:L'] = \text{PAD}$
- $X'[\text{new}, 0:\ell_q] = \text{prompt}_{\text{new}}$
- $X'[\text{new}, \ell_q:L'] = \text{PAD}$
Correspondingly, the attention mask $M' \in \{0,1\}^{B \times L'}$ is defined by:
- $M'[b, j] = 1$ if $j < \ell_i^{(b)}$, else $0$
- $M'[\text{new}, j] = 1$ if $j < \ell_q$, else $0$
These updates guarantee that all padding columns are ignored in transformer computation and that slots remain isolated.
Prefill/decode separation allows embedding prefilled keys and values for new queries directly into the shared KV cache, circumventing repeated prefilling and thus removing idle computation.
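As a concrete illustration, the padding and masking rules above can be sketched in NumPy. The function name `slide_in`, the `PAD` token id, and the list-of-lists batch representation are illustrative assumptions, not BATON's actual API:

```python
import numpy as np

PAD = 0  # hypothetical padding token id

def slide_in(batch_tokens, finished_slot, new_prompt):
    """Replace a finished slot's tokens with a new prompt, then pad the
    whole batch to a common length L' and rebuild the attention mask M'.

    batch_tokens: list of per-slot token-id lists (variable lengths).
    finished_slot: index b of the slot that just emitted EOS.
    new_prompt: token-id list of the newly arriving query (length ell_q).
    """
    batch_tokens = list(batch_tokens)
    batch_tokens[finished_slot] = list(new_prompt)  # re-initialize the slot
    lengths = [len(t) for t in batch_tokens]
    L = max(lengths)                     # L' = max(max_b ell_i^(b), ell_q)
    B = len(batch_tokens)
    X = np.full((B, L), PAD, dtype=np.int64)  # padded input matrix X'
    M = np.zeros((B, L), dtype=np.int64)      # attention mask M'
    for b, toks in enumerate(batch_tokens):
        X[b, :len(toks)] = toks  # X'[b, 0:ell] = old tokens / new prompt
        M[b, :len(toks)] = 1     # M'[b, j] = 1 iff j < ell_i^(b)
    return X, M

# Example: slot 1 finished; a 4-token prompt slides into its place.
X, M = slide_in([[5, 6, 7], [9]], finished_slot=1, new_prompt=[2, 3, 4, 8])
```

Because the mask zeroes out every padding column, the surviving slot's context is unaffected by the reshape.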
3. Inference Algorithms: End-to-End Workflows
The real-time motion focus approach comprises several algorithmic instantiations:
- BATON Sliding-Batch: Each decode iteration starts by checking for finished batch slots. Finished slots are marked free; for each free slot, any queued query is assigned, its prompt prefilling is run to generate a temporary KV, and then this KV is embedded into the shared cache after the necessary reshaping.
- SkipDecode Columnwise Exit Scheduling: Rather than skipping computation per token, the algorithm synchronizes early exits across all batch hypotheses in a columnwise (per-generation-step) fashion (Corro et al., 2023). For generation step $t$, all hypotheses are processed through the same $e(t)$ layers, where $e(t)$ is scheduled via a monotonic linear-decay formula:
$$e(t) = \lceil (1-\alpha_t) L_{\max} + \alpha_t L_{\min} \rceil, \quad \alpha_t = \frac{t-\ell_0}{N-\ell_0}$$
This exit scheduling preserves cache validity and batch alignment.
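A minimal sketch of the linear-decay exit schedule, using the symbols from the formula above ($\ell_0$ as the starting offset, $N$ as the maximum generation length, $L_{\min}$/$L_{\max}$ as layer bounds); the function and parameter names are ours, not SkipDecode's actual API:

```python
import math

def exit_layer(t, l0, N, L_min, L_max):
    """Monotonic linear-decay exit schedule:
    e(t) = ceil((1 - a_t) * L_max + a_t * L_min),  a_t = (t - l0) / (N - l0).
    Every hypothesis in the batch exits at the same layer for step t, so a
    later step never needs KV-cache entries from deeper layers than an
    earlier step produced (this is what keeps the cache valid)."""
    a_t = (t - l0) / (N - l0)
    return math.ceil((1 - a_t) * L_max + a_t * L_min)

# The schedule is non-increasing in t: early tokens get more layers.
sched = [exit_layer(t, l0=0, N=10, L_min=8, L_max=32) for t in range(10)]
```

Monotonicity is the key property: because $e(t)$ never increases, no step ever reads a cache entry that a shallower earlier exit failed to write.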
- EXSPEC Sliding Pool for Batch Speculative Decoding: In speculative decoding with batch verification (Zhang et al., 26 Oct 2025), batch formation ensures same-length grouping to avoid ragged tensor realignment. Sequences of equal prefix length are batched for draft/verification rounds; when no such group exists, a fallback unpad–append–repad procedure is invoked. This schedule reduces realignment overhead by up to 65% relative to the prior EqSpec.
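The same-length grouping policy can be sketched as follows; the `(seq_id, length)` pool representation and the `form_batch` signature are illustrative assumptions, not EXSPEC's implementation:

```python
from collections import defaultdict

def form_batch(pool, B):
    """Sliding-pool batch formation sketch: prefer B sequences whose
    current lengths are equal (no ragged realignment needed); otherwise
    fall back to the B shortest sequences, for which the caller must run
    the unpad-append-repad procedure.

    pool: list of (seq_id, length) pairs in the sliding window.
    Returns (ids, needs_realign).
    """
    by_len = defaultdict(list)
    for seq_id, length in pool:
        by_len[length].append(seq_id)
    # Same-length group of size >= B: take it, no realignment required.
    for length in sorted(by_len):
        if len(by_len[length]) >= B:
            return by_len[length][:B], False
    # Fallback: mixed-length batch, realignment is required.
    ids = [seq_id for seq_id, _ in sorted(pool, key=lambda p: p[1])[:B]]
    return ids, True

# Three sequences share length 12, so they batch with no realignment.
ids, realign = form_batch([(0, 12), (1, 12), (2, 7), (3, 12)], B=3)
```

The scan over the window pool is what the complexity analysis below budgets as the batch-formation cost.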
4. Correctness, Synchronization, and Efficiency Guarantees
For batch speculative decoding, key invariants (contiguous position IDs, correct attention masks, and synchronized KV-cache rows) must be restored after each verification, especially as variable token acceptance yields a "ragged" batch shape.
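The restoration step can be sketched as follows, assuming per-row accepted-token counts from a verification round; `restore_invariants` and its arguments are our own illustrative names, and a real engine would also physically truncate the KV tensors:

```python
import numpy as np

def restore_invariants(kv_lens, accepted, draft_len):
    """After a verification round in which each row speculated `draft_len`
    tokens but row b kept only accepted[b] of them, restore the invariants
    named above: per-row KV-cache lengths, contiguous position IDs, and a
    left-aligned attention mask over the now-ragged batch."""
    for a in accepted:
        assert 0 <= a <= draft_len, "cannot accept more than was drafted"
    new_lens = [l + a for l, a in zip(kv_lens, accepted)]
    L = max(new_lens)
    B = len(new_lens)
    pos = np.zeros((B, L), dtype=np.int64)
    mask = np.zeros((B, L), dtype=np.int64)
    for b, n in enumerate(new_lens):
        pos[b, :n] = np.arange(n)  # contiguous position IDs 0..n-1
        mask[b, :n] = 1            # valid (non-padded) cache entries
    return new_lens, pos, mask

# Both rows had 10 cached tokens; row 0 accepted 3 drafts, row 1 only 1.
lens, pos, mask = restore_invariants(kv_lens=[10, 10], accepted=[3, 1],
                                     draft_len=4)
```

The ragged shape shows up as differing row lengths; the mask and position IDs are what keep subsequent attention steps correct despite it.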
EXSPEC's scheduling policy maintains output equivalence ($\hat{S}_i = S_i$ for every prompt $p_i$ and timestep $t$), with empirical exact-match rates $\geq 95\%$ for $B \leq 4$ and $\approx 94\%$ at $B = 8$. This is achieved because batch realignment (the primary overhead in EqSpec) is only incurred when same-length grouping fails.
In BATON, correctness is maintained since padding and masking updates fully decouple slot contents, and neither old padded tokens nor placeholders contribute to newly inserted queries' context.
Efficiency in BATON and SkipDecode is characterized by continuous, near-zero idle compute. Once a slot completes, a fresh prompt is inserted with no iterations spent generating idle tokens. In SkipDecode, the monotonic exit schedule eliminates cache invalidations and enables batch sliding with full reuse of all computation and memory.
5. Complexity Analysis and Empirical Results
Complexity in these frameworks is dominated by batching, memory management for KV caches, and scheduling overhead. For EXSPEC with batch size $B$ and window size $W$:
- Draft: $O(B\,c_d)$
- Verify: $O(B\,c_v)$
- Realign on $(1-g)$ of steps: $O\big((1-g)\,c_{\text{overhead}}^{Eq}(B)\big)$
- Batch-formation scan: $O(W \log W)$
This yields throughput scaling
$$S(B) = \frac{\alpha K}{c_{\text{draft}} + c_{\text{verify}} + c_{\text{overhead}}(B)}.$$
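The scaling law can be evaluated as a plug-in cost model; all constants below are illustrative placeholders, not measured values from the cited papers:

```python
def throughput(B, alpha, K, c_draft, c_verify, g, c_overhead_eq):
    """S(B) = alpha*K / (c_draft + c_verify + c_overhead(B)), where the
    EqSpec-style realignment cost c_overhead_eq(B) is paid only on the
    (1 - g) fraction of steps where same-length grouping fails."""
    c_over = (1 - g) * c_overhead_eq(B)
    return (alpha * K) / (c_draft + c_verify + c_over)

# A higher grouping hit-rate g amortizes away the realignment overhead.
lo = throughput(8, alpha=0.7, K=4, c_draft=1.0, c_verify=2.0, g=0.0,
                c_overhead_eq=lambda B: 0.5 * B)
hi = throughput(8, alpha=0.7, K=4, c_draft=1.0, c_verify=2.0, g=0.9,
                c_overhead_eq=lambda B: 0.5 * B)
```

With these placeholder constants, raising the grouping hit-rate from 0 to 0.9 roughly doubles modeled throughput, which is the qualitative effect the scheduler targets.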
Empirically on SpecBench (Zhang et al., 26 Oct 2025), for $B=8$:
- EXSPEC achieves $\approx 3\times$ speedup over $B=1$
- Realignment overhead drops from $40\%$ to $14\%$
- Exact-match equivalence remains $>93\%$ across sampled model pairs
In BATON (Cong et al., 2024), end-to-end throughput improves by up to $1.75\times$ compared to Orca, particularly when prompts are long and prefilling is dominant.
For SkipDecode (Corro et al., 2023), speedups of $2\times$ to $5\times$ are observed with negligible quality regression; e.g., Rouge-L degrades by $<0.2\%$ at $3\times$ and $1.5\%$ at $5\times$ speedup.
6. Integration, Trade-offs, and Practical Considerations
All described methods integrate directly with conventional batch inference stacks and KV-cache optimization frameworks found in PyTorch and TensorFlow. For BATON and EXSPEC, batch formation (and potential realignment) can be scheduled by lightweight pool managers. No duplication of modules, custom CUDA kernels, or modification of transformer attention logic is required.
Trade-offs include a small increase in masking and slot bookkeeping logic (compute-negligible), the need for accurate per-sequence tracking of batch index, and, in EXSPEC, the requirement for window-based sorting to maximize same-length group formation.
A plausible implication is that real-time motion focus recognition achieves near-optimal throughput in multi-server deployments by continuously reusing GPU resources and minimizing per-token latency across highly dynamic query mixes, subject to the underlying workload distribution.
7. Comparison to Prior Approaches and Algorithmic Innovations
EXSPEC improves upon prior batch speculative methods such as BSP [Su et al. ’23], DSD [Yan et al. ’25], and BASS [Qian et al. ’24] by preserving output equivalence without sacrificing integration or requiring custom kernels. BATON advances the state of sliding-batch inference by multi-dimensional alignment and cache separation, achieving continuous, non-idle batch decode. SkipDecode generalizes early-exit methods into batch-level, monotonic schedules, unlocking both parallel efficiency and cache integrity.
Collectively, these innovations delineate the contemporary direction of real-time motion focus recognition in production LLM inference: maintaining full resource utilization and consistent output integrity amid dynamic, non-uniform sequence progressions.