Codified Foreshadowing-Payoff Generation (CFPG)

Updated 18 January 2026
  • CFPG is a framework that formalizes long-range narrative dependencies by structuring stories into foreshadow, trigger, and payoff elements.
  • It employs a codify-gate mechanism and finite-state control to guide language models in resolving narrative promises with precise timing.
  • Empirical evaluations demonstrate improved payoff accuracy and narrative coherence over baseline models, validating its causal approach.

Codified Foreshadowing-Payoff Generation (CFPG) is a formal framework for story generation with LLMs that addresses the problem of long-range narrative dependencies—specifically, the explicit setup and fulfillment of foreshadowed narrative commitments ("foreshadowing" and "payoff"). By reframing narrative generation as the satisfaction of executable causal predicates, CFPG enforces the timely realization of narrative promises, overcoming the tendency of contemporary LLM-based systems to neglect or mishandle deferred resolutions even when context is available (Yun et al., 11 Jan 2026).

1. Formalization of Foreshadow–Trigger–Payoff Triples

At the core of CFPG is the formalization of each long-range narrative dependency as a Foreshadow–Trigger–Payoff triple $(F, T, P)$, where:

  • Foreshadow ($F$): A setup that introduces a causal debt—such as an object, intention, rule, or anomaly—requiring eventual resolution.
  • Trigger ($T$): The minimal narrative condition that activates the foreshadow, marking it as actionable.
  • Payoff ($P$): The concrete event or revelation that fulfills the commitment introduced by $F$ once $T$ is satisfied.

Given a narrative prefix $X = (s_1, \dots, s_t)$ up to time $t$, three mapping functions are defined:

$$F : \text{Narrative} \to \{f_1, \dots, f_n\}, \quad T : \text{Narrative} \to \{t_1, \dots, t_m\}, \quad P : \text{Narrative} \to \{p_1, \dots, p_k\}.$$

Extracted triples $(f_i, t_i, p_i)$ are recorded in the global foreshadow pool

$$\mathcal{C} = \{(f_i, t_i, p_i)\}_{i=1}^n$$

At each generation step $t$, CFPG samples the next segment $y$ conditioned on both the text prefix $X_t$ and the subset $S_t$ of foreshadows in $\mathcal{C}_t$ whose trigger condition is newly satisfied:

$$S_t = \{ f \in \mathcal{C}_t \mid \text{codify}(X_t, f) = \text{True} \}, \qquad y \sim p_\theta(y \mid X_t, S_t)$$

where $\text{codify}$ is a symbolic predicate function returning True when the trigger $T$ for foreshadow $f$ is entailed by $X_t$ (Yun et al., 11 Jan 2026).
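As a concrete illustration, the pool and codify gate above can be sketched in Python. The `FTPTriple` class, the lambda-based trigger, and the toy boot example are hypothetical stand-ins for the paper's symbolic entailment predicate, not its actual implementation:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class FTPTriple:
    foreshadow: str                 # F: the setup introducing a causal debt
    trigger: Callable[[str], bool]  # T: minimal condition over the prefix X_t
    payoff: str                     # P: gold resolution text

def eligible_set(pool: List[FTPTriple], prefix: str) -> List[FTPTriple]:
    """S_t = { f in C_t | codify(X_t, f) = True }."""
    return [f for f in pool if f.trigger(prefix)]

# Toy usage: a boot-theft foreshadow fires once the hound appears in the prefix.
pool = [FTPTriple(
    foreshadow="Sir Henry's boot disappears from the hotel.",
    trigger=lambda x: "hound" in x.lower(),
    payoff="Stapleton stole the boot to train his hound on Sir Henry's scent.",
)]
prefix = "On the moor, a monstrous hound bays in the darkness."
fired = eligible_set(pool, prefix)  # the boot foreshadow is now eligible
```

In the paper, the trigger check is a symbolic entailment predicate over the full prefix rather than a keyword match; the structure of the gate is the same.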

2. Extraction and Structuring of Foreshadow–Trigger–Payoff Supervision

CFPG's training corpus of Foreshadow–Trigger–Payoff (FTP) triples is mined from the hierarchical BookSum corpus in three stages:

  • Stage 1 (Candidate Identification): A GPT-4.1–based extractor processes each summary, identifying candidate sentence pairs $(s_{t_f}, s_{t_p})$ where $s_{t_f}$ resembles a foreshadow and $s_{t_p}$ a possible payoff.
  • Stage 2 (Payoff Alignment Verification): A symbolic verifier discards spurious pairs by checking that the context surrounding $s_{t_p}$ entails or fulfills the setup at $s_{t_f}$.
  • Stage 3 (Rubric-Based Filtering): Independent verification models score pairs for Setup Validity, Payoff Validity, Temporal Separation, and Foreshadow Justifiability, retaining only those passing all criteria.

The final dataset includes 629 FTP triples from 148 books, with a mean payoff separation of 20.9 sentences (median 13.0). Foreshadows vary in type: 48.2% object-based and 35.3% event-based, among others.
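The staged filtering can be sketched as a simple pipeline; here the GPT-4.1 extractor and the rubric-scoring models are replaced by stub callables (`verify_alignment`, `score`, and the threshold values are hypothetical names, not the paper's interface):

```python
def rubric_filter(pairs, verify_alignment, score, thresholds):
    """Keep only candidate (foreshadow, payoff) pairs that pass every stage."""
    kept = []
    for pair in pairs:
        if not verify_alignment(pair):          # Stage 2: payoff fulfills the setup
            continue
        scores = score(pair)                    # Stage 3: rubric criteria scores
        if all(scores[k] >= thresholds[k] for k in thresholds):
            kept.append(pair)
    return kept

# Toy usage with stub verifiers standing in for the model-based scorers.
pairs = [("f1", "p1"), ("f2", "p2"), ("f3", "p3")]
verify = lambda pr: pr[0] != "f2"                       # f2 fails alignment
score = lambda pr: {"setup": 1.0 if pr[0] == "f1" else 0.2, "payoff": 0.9}
kept = rubric_filter(pairs, verify, score, {"setup": 0.5, "payoff": 0.5})
```

The conjunctive `all(...)` check mirrors the "passing all criteria" requirement of Stage 3.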

Structured supervision is derived by converting each triple into training instances: at $t_f$ (foreshadow introduction), the instance is $(X_{t_f}, \emptyset, \text{“...continue the story...”})$, and at each $t$ where $\text{codify}(X_t, f_i)$ flips to True, the instance $(X_t, \{f_i\}, P_i)$ is emitted. The loss is the negative log-likelihood

$$\mathcal{L}(\theta) = -\sum_{(X, S, y)} \log p_\theta(y \mid X, S).$$

In this setup, the model is directly supervised to generate the gold payoff when its trigger is fired (Yun et al., 11 Jan 2026).
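The unrolling of one triple into supervision instances might look like the following sketch, where `codify` and the sentence-level prefixes are illustrative stand-ins for the paper's actual BookSum-based pipeline:

```python
def build_instances(sentences, t_f, payoff, codify):
    """Return (prefix, eligible_set, target) training tuples for one FTP triple."""
    instances = []
    # At the foreshadow's introduction: no eligible payoff, plain continuation.
    instances.append((sentences[:t_f + 1], set(), "...continue the story..."))
    # At the first later step where the codify predicate flips True: emit the payoff.
    for t in range(t_f + 1, len(sentences)):
        prefix = sentences[:t + 1]
        if codify(prefix):
            instances.append((prefix, {"f"}, payoff))
            break  # the commitment is resolved once fired
    return instances

# Toy usage: the trigger fires on the third sentence.
sentences = ["A boot vanishes.", "Days pass.", "A hound howls."]
instances = build_instances(
    sentences, t_f=0,
    payoff="The boot trained the hound.",
    codify=lambda prefix: "hound" in prefix[-1],
)
```

Each emitted tuple corresponds to one term of the negative log-likelihood sum above, with the payoff text serving as the supervised target $y$.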

3. Architecture and Generation Dynamics

CFPG is implemented as a finite-state controller over narrative generation, proceeding through:

  1. Foreshadow Pool Maintenance ($\mathcal{C}_t$): At any point, a stateful collection of all unfulfilled foreshadow commitments.
  2. Selection: For each $f \in \mathcal{C}_t$, apply $\text{codify}(X_t, f)$; if True, add $f$ to the eligible set $S_t$.
  3. Conditional Generation: Input $X_t$ and $S_t$ to the underlying LLM, prompting for explicit resolution of eligible payoffs: $y_t \sim p_\theta(y \mid X_t, S_t)$.
  4. State Update: After generating $y_t$, verify the realization of payoffs and remove satisfied triples. New foreshadows discovered in $y_t$ are added to $\mathcal{C}_{t+1}$.

Comparative baselines do not enforce temporal or logical precision; they lack symbolic gating and persistent state. In contrast, CFPG's codify-gate and stateful foreshadow pool deterministically track and fire payoffs only when narrative logic is satisfied (Yun et al., 11 Jan 2026).
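The controller loop can be sketched as follows. The LLM call, payoff verification, and foreshadow discovery are stubbed out as function parameters (hypothetical names), so only the pool and gate bookkeeping is concrete:

```python
def generate(prefix, pool, codify, llm, realized, discover, max_steps=10):
    """Finite-state control loop: gate, generate, update, repeat."""
    for _ in range(max_steps):
        eligible = [f for f in pool if codify(prefix, f)]     # Selection
        segment = llm(prefix, eligible)                       # Conditional generation
        prefix = prefix + segment
        pool = [f for f in pool if not realized(segment, f)]  # Remove satisfied triples
        pool = pool + discover(segment)                       # Add new foreshadows
        if not pool:
            break  # all narrative commitments resolved
    return prefix

# Toy usage: one foreshadow that fires once "hound" enters the prefix.
story = generate(
    "start.",
    pool=["boot"],
    codify=lambda p, f: "hound" in p,
    llm=lambda p, elig: " The payoff is revealed." if elig else " A hound appears.",
    realized=lambda seg, f: "payoff" in seg,
    discover=lambda seg: [],
)
```

The early exit when the pool empties reflects the deterministic tracking of commitments; a baseline prompt-only system has no analogue of this persistent state.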

4. Evaluation Protocols and Empirical Findings

Metrics

CFPG is evaluated against standard prompted LLM baselines on the following:

  • Payoff Accuracy under Oracle Timing:
    • Should-Payoff Rate: Fraction of cases generating the gold payoff when unblocked.
    • Average Continuation Score: Entailment-based alignment in $[0, 1]$.
  • Incremental Payoff Sensing:
    • Detection Rate: Payoff triggered within $\pm 3$ sentences of the gold location.
    • Early/Late Triggers: Counts of mistimed firings.
    • Localization Error: Average offset (in sentences).
    • Continuation Fidelity: Alignment of the post-trigger continuation.
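Assuming predicted and gold trigger locations are given as sentence indices, the sensing metrics could be computed as in this sketch (the tolerance of 3 sentences follows the Detection Rate definition; the function and field names are illustrative):

```python
def sensing_metrics(predicted, gold, tol=3):
    """predicted/gold: parallel lists of fired-sentence indices (None = never fired)."""
    detected = early = late = 0
    offsets = []
    for p, g in zip(predicted, gold):
        if p is None:
            continue  # missed payoff: counts against detection, no offset recorded
        offsets.append(abs(p - g))
        if abs(p - g) <= tol:
            detected += 1
        elif p < g:
            early += 1   # fired before the gold location
        else:
            late += 1    # fired after the gold location
    n = len(predicted)
    loc_err = sum(offsets) / len(offsets) if offsets else float("inf")
    return {"detection_rate": detected / n, "early": early, "late": late,
            "localization_error": loc_err}
```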

Quantitative Results

| Base Model | Method | Should-Payoff % | Avg. Score |
| --- | --- | --- | --- |
| GPT-4.1-mini | Prompt | 0.569 | — |
| GPT-4.1-mini | CFPG | 1.000 | 0.911 |
| Claude-Haiku-4.5 | Prompt | 0.657 | — |
| Claude-Haiku-4.5 | CFPG | 0.965 | 0.940 |
| Qwen2.5-14B | Prompt | 0.583 | — |
| Qwen2.5-14B | CFPG | 1.000 | 0.898 |

For GPT-4.1-mini, the detection rate rises from 58.0% (Prompt) to 69.8% (CFPG); early triggers decrease by 29.3%, localization error drops by 35%, and the continuation score improves by 43%.

A qualitative example on "The Hound of the Baskervilles" shows a baseline continuation leaving the disappearance of a boot unexplained, while CFPG produces: "It is then revealed that Stapleton stole the boot to train his hound on Sir Henry’s scent, finally resolving the mystery."

5. Conclusions, Limitations, and Implications

CFPG demonstrates that codifying foreshadow–trigger–payoff relations transforms narrative coherence from an emergent statistical feature into an executable property. Explicit causal gating yields near-perfect payoff realization under oracle timing and substantially reduces omitted or mistimed resolutions during incremental narrative continuation. Attention analyses reveal that the approach re-anchors model focus on the original foreshadow at payoff time (Yun et al., 11 Jan 2026).

CFPG can be extended beyond simple foreshadows to other long-range dependencies, such as character arcs or world-building commitments. Its design moves narrative generation from surface-level fluency toward true causal competence.

Current limitations include handling only explicit, textually grounded foreshadows; symbolic or thematic devices remain out of reach. The system is trained on literary summaries, not full novels, and extraction recall depends on automated pipeline quality—rare or subtle triples may not be detected.

6. Prospects for Future Research

Potential research directions include:

  • Extending codification to multi-modal domains (e.g., visual foreshadowing in graphic novels).
  • Generalizing from explicit foreshadow–payoff relations to state-machines over complex narrative commitments (e.g., moral codes, quest obligations).
  • Integrating CFPG with planning-based story outlines to coordinate both narrative content and timing.
  • Scaling extraction and codification methods to operate over raw, full-length prose.

A plausible implication is that standardized, symbolic control of narrative causal structure could prove foundational for further advances in automated long-form storytelling and interactive fiction generation (Yun et al., 11 Jan 2026).
