
Empowerment Gain and Causal Model Construction: Children and adults are sensitive to controllability and variability in their causal interventions

Published 9 Dec 2025 in cs.AI | (2512.08230v1)

Abstract: Learning about the causal structure of the world is a fundamental problem for human cognition. Causal models and especially causal learning have proved to be difficult for large pretrained models using standard techniques of deep learning. In contrast, cognitive scientists have applied advances in our formal understanding of causation in computer science, particularly within the Causal Bayes Net formalism, to understand human causal learning. In the very different tradition of reinforcement learning, researchers have described an intrinsic reward signal called "empowerment" which maximizes mutual information between actions and their outcomes. "Empowerment" may be an important bridge between classical Bayesian causal learning and reinforcement learning and may help to characterize causal learning in humans and enable it in machines. If an agent learns an accurate causal world model, they will necessarily increase their empowerment, and increasing empowerment will lead to a more accurate causal world model. Empowerment may also explain distinctive features of children's causal learning, as well as providing a more tractable computational account of how that learning is possible. In an empirical study, we systematically test how children and adults use cues to empowerment to infer causal relations, and design effective causal interventions.

Summary

  • The paper demonstrates that empowerment gain—maximizing action-outcome mutual information—provides a robust framework for causal model construction.
  • It integrates Bayesian causal inference with reinforcement learning, using empirical studies contrasting children’s and adults’ sensitivity to control and variability.
  • Results show that combining controllability and variability enhances generalization and shapes both goal-directed (work) and exploratory (play) preferences.

Empowerment Gain as a Framework for Human Causal Learning

Introduction

The paper "Empowerment Gain and Causal Model Construction: Children and adults are sensitive to controllability and variability in their causal interventions" (2512.08230) presents a comprehensive synthesis—both theoretical and empirical—of empowerment as a key intrinsic driver in human causal model acquisition. The study targets the intersection between Bayesian approaches to causal inference and empowerment-driven reinforcement learning, positing empowerment as a bridge that can account for humans' remarkable capacity for discovering, generalizing, and exploiting causal structure. Empirical evidence is provided through developmental studies contrasting how children and adults use controllability and variability as cues for causal inference and effective intervention.

Theoretical Synthesis: Causal Models, RL, and Empowerment

The paper critically reviews the limitations of current deep learning methods, especially LLMs, in discovering and capturing true causal mechanisms, underlining that their apparent causal reasoning derives from pattern detection within human-produced data. In contrast, humans, most notably children, can spontaneously derive novel causal models from limited evidence, an ability that remains fundamentally unsolved in AI.

Within Causal Bayes Net frameworks (in the tradition of Pearl, Spirtes, and Tenenbaum), causal relations are inferred via interventions and counterfactual reasoning. However, these formal models face severe scalability issues in hypothesis search because the number of candidate structures grows combinatorially. RL, particularly in the classic utility-maximizing (model-free or model-based) tradition, robustly associates interventions (actions) with outcomes (rewards), but it is blind to epistemic learning and agnostic about causal structure except insofar as structure serves reward.
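
To make the scale of that hypothesis-search problem concrete, the number of possible causal graphs (labeled DAGs) over n variables grows super-exponentially. The short sketch below uses Robinson's standard recurrence for counting labeled DAGs; it is an illustration, not code from the paper:

```python
from math import comb

def num_dags(n: int) -> int:
    """Count labeled DAGs on n nodes via Robinson's recurrence:
    a(m) = sum_{k=1..m} (-1)^(k-1) * C(m, k) * 2^(k*(m-k)) * a(m-k), with a(0) = 1."""
    a = [1]
    for m in range(1, n + 1):
        a.append(sum((-1) ** (k - 1) * comb(m, k) * 2 ** (k * (m - k)) * a[m - k]
                     for k in range(1, m + 1)))
    return a[n]

# The hypothesis space explodes long before the number of variables becomes realistic:
for n in range(1, 8):
    print(n, num_dags(n))   # 1, 3, 25, 543, 29281, 3781503, 1138779265
```

With only seven variables the space already exceeds a billion candidate structures, which is why exhaustive Bayesian structure search quickly becomes intractable.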

"Empowerment" is defined as the maximization of mutual information between an agent's actions and environmental outcomes. Importantly, empowerment is indifferent to extrinsic reward values; it rewards agents for discoverable relations whereby actions predictably manipulate outcomes and for sampling a wide range of actions. The authors establish a formal and conceptual equivalence: learning an accurate causal model is tantamount to empowerment gain, and acting to maximize empowerment implicitly constructs a more precise causal model. This conceptual unification is argued to overcome exploration-exploitation dilemmas endemic to both Bayesian causal and RL paradigms.

Empirical Studies: Controllability, Variability, and Causal Exploration

Study 1: Manipulating Controllability and Variability

Children (n=80, 5–10 years) and adults (n=120) were exposed to three classes of machines that manipulated objects in distinct regimes:

  • Purely Controllable: Deterministic output, no variability.
  • Variable but Not Controllable: Output varies, but the action-to-outcome mapping is stochastic, so outcomes cannot be controlled.
  • Controllable and Variable: Multiple outcomes, each reliably determined by a distinct input—maximizing mutual information.

Participants were required to generalize the underlying causal mappings to new slots, new object types, and new perceptual modalities (brightness), and express preferences for which machine to "keep" for work (goal-directed causal production) or play (exploration).
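
The sketch below uses hypothetical stand-ins for the three machine regimes (the study's actual stimuli, slots, and objects differ) to show why only the third regime carries high action-outcome mutual information:

```python
import numpy as np
from collections import Counter

def mutual_information(pairs):
    """Plug-in estimate of I(A; O) in bits from a list of (action, outcome) samples."""
    n = len(pairs)
    p_ao = Counter(pairs)
    p_a = Counter(a for a, _ in pairs)
    p_o = Counter(o for _, o in pairs)
    return sum((c / n) * np.log2((c / n) / ((p_a[a] / n) * (p_o[o] / n)))
               for (a, o), c in p_ao.items())

rng = np.random.default_rng(0)
actions = rng.integers(0, 3, size=5000)          # three input slots, sampled uniformly

# Hypothetical stand-ins for the three machine regimes:
controllable_only = [(a, 0) for a in actions]                      # one deterministic outcome
variable_only     = [(a, rng.integers(0, 3)) for a in actions]     # outcomes ignore the action
contr_and_var     = [(a, a) for a in actions]                      # each slot -> distinct outcome

for name, data in [("controllable only", controllable_only),
                   ("variable only", variable_only),
                   ("controllable and variable", contr_and_var)]:
    print(f"{name:28s} I(A;O) ≈ {mutual_information(data):.2f} bits")
# Only the controllable-and-variable machine approaches the maximum of log2(3) ≈ 1.58 bits.
```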

Results

  • Generalization: Both children and adults significantly preferred the controllable and variable machine across tasks, with adults performing at higher accuracy (e.g., 80.8% vs. 46.3% on novel value generalization).
  • Preferences: For work, most adults (75%) and a plurality of children (48.8%) preferred the controllable and variable machine. For play, children were less discriminative, whereas adults retained their preference but showed a modest shift toward the purely variable machine.
  • Causal Inference: Participants relied not on variability or controllability alone, but specifically on their combination, to select machines and interventions supporting both robust causal inference and generalization. Children showed somewhat heightened sensitivity to variability, possibly reflecting information-maximizing exploratory tendencies.

Study 2: Feature-Dimensional Empowerment

A second experiment required participants to integrate over machines in which only one perceptual feature (size or hue) was reliably controllable, while the other varied randomly. They had to identify which dimension supported reliable interventions, generalize to new requests (e.g., "make an extra-bright star"), and indicate which machine to keep depending on which feature was relevant to the task.
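
As a concrete illustration of feature-dimensional empowerment (the dial, size, and hue values below are hypothetical placeholders, not the study's stimuli), mutual information can be estimated separately for each perceptual dimension:

```python
import numpy as np
from collections import Counter

def mi_bits(xs, ys):
    """Plug-in estimate of I(X; Y) in bits from paired samples."""
    n = len(xs)
    pxy, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    return sum((c / n) * np.log2(c * n / (px[x] * py[y])) for (x, y), c in pxy.items())

rng = np.random.default_rng(1)
dial = rng.integers(0, 3, size=4000)           # the participant's input
size = dial                                    # size tracks the input exactly (controllable)
hue  = rng.integers(0, 3, size=4000)           # hue varies independently of the input

print(f"I(dial; size) ≈ {mi_bits(list(dial), list(size)):.2f} bits")  # ~1.58 bits: controllable dimension
print(f"I(dial; hue)  ≈ {mi_bits(list(dial), list(hue)):.2f} bits")   # ~0.00 bits: uncontrollable dimension
```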

Results

  • Participants overwhelmingly preferred the machine that maximized empowerment along the task-relevant feature dimension (e.g., 80% of adults selected the size-controllable machine for "make things bigger"), with strong above-chance performance across all mapping/generalization tasks.

Claims, Contradictions, and Numerical Performance

The authors assert a bidirectional bootstrapping between empowerment and causal model construction: gaining empowerment (discovering high action-outcome mutual information) yields better causal models, and improved causal models enhance empowerment. This feedback cycle is supported by empirical data, including above-chance generalization in complex transfer tasks with novel features, and systematic preference-behavior alignment.

Numerically, adults consistently exceeded 70% accuracy on constrained mapping tasks (empowerment-based generalization), with statistical tests returning p < 0.001 in most comparisons. Children, while noisier, showed far above-chance performance and a general sensitivity to empowerment cues, with significant shifts across play/exploration motivational contexts.

Implications and Future Directions

Theoretical

The empowerment framework offers a computationally tractable path (e.g., via approximate mutual information calculation [Zhao et al., 2020]) around the intractable hypothesis spaces faced in Bayesian inference. It provides a formal mechanism for intrinsically motivated, intervention-driven causal discovery—distinct from reward maximization (RL) or passive data consumption (LLMs)—and aligns with foundational findings that infants and children act to maximize empowerment before they generalize from correlation.

This work recasts human causal learning as an agent-centric, empowerment-driven process, potentially explaining developmental phenomena (e.g., imitation, directed play, early model bootstrapping) that elude both classical RL and deep learning approaches.

Practical

The findings recommend explicit modeling of empowerment in artificial agents intended for open-ended causal discovery, especially in high-dimensional, variable, and partially observable environments. Rather than hand-designing utility functions or priors, systems could be coded to seek empowerment gain, leading to more robust, actionable, and generalizable world models—opportunities for synergy with recent work in empowerment-guided RL [De Abril & Kanai, 2018; Du et al., 2020] and exploratory learning architectures (2512.08230).
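
A minimal sketch of how such an objective could be wired into an agent loop follows, assuming discrete actions and outcomes and a simple count-based estimator; the class name and interface are illustrative, not an implementation from the cited work:

```python
import numpy as np
from collections import Counter, deque

class EmpowermentBonus:
    """Count-based estimate of I(A; O) over a sliding window of recent transitions,
    used as an intrinsic reward: the bonus is the gain in estimated empowerment."""
    def __init__(self, window=500):
        self.buffer = deque(maxlen=window)
        self.last_mi = 0.0

    @staticmethod
    def _mi(pairs):
        # Plug-in mutual information estimate in bits over (action, outcome) pairs.
        n = len(pairs)
        if n == 0:
            return 0.0
        p_ao = Counter(pairs)
        p_a = Counter(a for a, _ in pairs)
        p_o = Counter(o for _, o in pairs)
        return sum((c / n) * np.log2(c * n / (p_a[a] * p_o[o])) for (a, o), c in p_ao.items())

    def __call__(self, action, outcome):
        self.buffer.append((action, outcome))
        mi = self._mi(list(self.buffer))
        gain, self.last_mi = mi - self.last_mi, mi
        return gain   # positive when recent actions become more predictive of outcomes
```

An agent would add this bonus to, or substitute it for, extrinsic reward at each step, biasing exploration toward actions whose consequences it can reliably produce, in line with the empowerment-gain objective described above.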

Speculation

Future AI systems integrating empowerment-based objectives could narrow the gap between human and machine causal learning, providing the scaffolding for flexible, robust generalization well beyond static domain data distributions. Further empirical work on empowerment-driven exploration in younger children, in non-human animals, and in agents operating in naturalistic, embodied settings will be critical to delineate the limits and breadth of this framework.

Conclusion

This paper provides a compelling synthesis of empowerment gain and causal model learning, positing empowerment as a critical, measurable, and developmentally instantiated driver of human conceptual growth. The empirical results demonstrate that both children and adults are sensitive to, and actively exploit, empowerment cues in novel contexts, and that the joint maximization of controllability and variability underlies adaptive causal learning and generalization. Theoretically, empowerment is advanced as a unifying principle linking epistemic Bayesian model acquisition with the agent-centric focus of RL, offering a plausible pathway for advancing computational models of both human and artificial intelligence (2512.08230).
