
Self-Efficacy-Driven Dependency

Updated 20 December 2025
  • Self-efficacy-driven dependency is defined as the interplay between individuals’ beliefs in their own abilities and their selective reliance on external systems, with implications for threat avoidance, search tasks, and AI interactions.
  • Empirical models, including path and latent growth analyses, reveal that lower self-efficacy increases overreliance on aids while higher self-efficacy fosters calibrated, independent decision-making.
  • In AI contexts, tailored prompt engineering that modulates self-efficacy signals has been shown to enhance large language model performance, particularly in tasks of moderate difficulty.

Self-efficacy-driven dependency denotes a phenomenon in which individuals’ or artificial agents’ confidence in their own abilities modulates their reliance on external aids, tools, and automated systems. This construct, grounded in Bandura’s social cognitive theory, has emerged as a key explanatory variable across contexts such as threat avoidance behaviors, cognitive offloading to search engines, student–AI interactions in education, and even the performance modulation of LLMs. Self-efficacy-driven dependency is characterized by the interdependent relationship between one’s perceived capability (self-efficacy), the motivation to independently perform a task, and the propensity to accept, offload, or delegate to external systems. This article traces its theoretical foundations, formal models, empirical findings, and practical implications across human and AI domains.

1. Theoretical Foundations and Definitions

Self-efficacy, defined as the judgment of one’s capacity to organize and execute actions required to attain specific goals (Bandura 1977, 1986), is operationalized in diverse domains. In “Building Confidence not to be Phished through a Gamified Approach,” self-efficacy is the belief that one can detect and avoid phishing threats (Baral et al., 2018). In cognitive HCI contexts, as explored by Akgun and Toker, “cognitive self-esteem” (CSE) extends self-efficacy to include self-perceived ability to think, remember, and locate information, with “search self-efficacy” capturing online search task confidence (Akgun et al., 17 Jan 2025). In higher education, programming self-efficacy structures the student’s approach to AI assistants (Pitts et al., 16 Jun 2025). In LLMs, Bandura’s sources are formalized, treating verbal persuasion, mastery, vicarious experience, and affect as contributing to a model’s self-efficacy estimate (Chen et al., 10 Feb 2025).

Self-efficacy-driven dependency describes the effect whereby elevated self-efficacy leads to more autonomous behavior and lower (or more selective) dependency on external aids, while low self-efficacy predisposes individuals (or models) to overreliance—uncritically accepting external recommendations or solutions.

2. Formal Models and Empirical Measurement

Human Domain Models

Baral & Arachchilage propose a structural path model for phishing threat avoidance with self-efficacy (E) as the central mediator between various knowledge constructs (procedural, conceptual, structural, heuristic, observational), avoidance motivation (M), and observed behavior (B):

  • E = f(K_p, K_c, K_p × K_c, K_s, K_h, K_o)
  • M = g(E)
  • B = h(M)

where K_p = procedural knowledge, K_c = conceptual, K_s = structural, K_h = heuristic, K_o = observational knowledge (Baral et al., 2018).
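The path structure above can be sketched as a minimal linear structural model. The path weights and simulated data below are illustrative placeholders, not the fitted values from Baral & Arachchilage:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Illustrative knowledge constructs (standardized scores)
K_p = rng.normal(size=n)   # procedural
K_c = rng.normal(size=n)   # conceptual
K_s = rng.normal(size=n)   # structural
K_h = rng.normal(size=n)   # heuristic
K_o = rng.normal(size=n)   # observational

# E = f(K_p, K_c, K_p*K_c, K_s, K_h, K_o): linear combination plus noise
# (hypothetical path weights; the paper estimates these from survey data)
E = (0.3*K_p + 0.3*K_c + 0.2*K_p*K_c + 0.1*K_s + 0.1*K_h + 0.1*K_o
     + 0.3*rng.normal(size=n))

# M = g(E), B = h(M): each downstream construct depends only on its mediator
M = 0.5*E + 0.3*rng.normal(size=n)
B = 0.6*M + 0.3*rng.normal(size=n)

# Recover the E -> M path coefficient by ordinary least squares
beta_EM = np.linalg.lstsq(np.c_[np.ones(n), E], M, rcond=None)[0][1]
print(f"estimated E -> M path: {beta_EM:.2f}")  # close to the true 0.5
```

The full mediation structure (knowledge → efficacy → motivation → behavior) means B is conditionally independent of the knowledge constructs given M, which is what the path model tests.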

Akgun and Toker use a latent growth change (LGC) model to capture CSE changes under controlled search access; mediational analysis establishes that prior search experience exerts its effect on baseline CSE through search self-efficacy, with strong indirect effects (77.5% of CSE variance explained; β = 0.414, p < .001) (Akgun et al., 17 Jan 2025). Subgroup analysis reveals that low-CSE individuals experience the greatest inflation of CSE upon gaining access to tools, indicating heightened tool dependency.
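An indirect effect of this kind is computed as a product of path coefficients. The individual path values below are hypothetical and chosen only so the product matches the reported standardized effect of 0.414; they are not the study's fitted paths:

```python
# Indirect (mediated) effect as a product of path coefficients:
# experience -> search self-efficacy (path a), self-efficacy -> CSE (path b).
a = 0.60   # hypothetical experience -> self-efficacy path
b = 0.69   # hypothetical self-efficacy -> CSE path
indirect = a * b                     # product-of-paths indirect effect

direct = 0.10                        # hypothetical direct path c'
total = indirect + direct
proportion_mediated = indirect / total

print(f"indirect effect = {indirect:.3f}")           # 0.414
print(f"proportion mediated = {proportion_mediated:.2%}")
```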

In educational settings, appropriate reliance on AI assistants is positively correlated with self-efficacy, programming literacy, and need for cognition, while underreliance is negatively correlated (Pearson’s r(SE, Appropriate Reliance) = 0.333, p = .018; r(SE, Underreliance) = −0.308, p = .030) (Pitts et al., 16 Jun 2025). Regression analyses quantify the incremental effect of self-efficacy on calibrated reliance, with each 1-point SE increase amplifying the odds of appropriate reliance by a factor of 1.30.
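The odds ratio of 1.30 has a standard logistic-regression reading: the log-odds of appropriate reliance are linear in SE, so odds multiply by 1.30 per point. A quick sketch (the baseline odds value is a hypothetical placeholder):

```python
import math

# Logistic-regression interpretation of the reported odds ratio:
# each 1-point rise in self-efficacy multiplies the odds of appropriate
# reliance by exp(beta) = 1.30, so beta = ln(1.30).
odds_ratio = 1.30
beta = math.log(odds_ratio)

def odds_after_increase(base_odds: float, se_increase: float) -> float:
    """Odds of appropriate reliance after an SE increase (log-odds linear)."""
    return base_odds * math.exp(beta * se_increase)

base = 1.0  # hypothetical baseline odds (a 50/50 chance)
print(f"beta = {beta:.3f}")
print(f"odds after +2 SE points: {odds_after_increase(base, 2):.3f}")  # 1.30^2 ≈ 1.69
```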

AI and LLM Domain Models

Chen et al. propose an extension of self-efficacy modeling to LLMs, where verbal efficacy stimulations (VES) modulate an internal SE state:

SE = ϕ(α_M M + α_V V + α_A A + α_P P)

with M (mastery), V (vicarious experience), A (affect), P (verbal persuasion), and a sigmoidal mapping ϕ. Task performance is modeled as a function of self-efficacy, task difficulty, and stimulation type: P = f(SE, D, s) (Chen et al., 10 Feb 2025).
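A minimal sketch of this model follows. The weights α and the shape of the performance function f are illustrative assumptions, not the formulation fitted by Chen et al.; the performance curve is simply chosen to peak when self-efficacy roughly matches task difficulty (the "stretch zone" noted below):

```python
import math

def sigmoid(x: float) -> float:
    """The sigmoidal mapping phi."""
    return 1.0 / (1.0 + math.exp(-x))

# SE = phi(a_M*M + a_V*V + a_A*A + a_P*P); the weights are placeholders.
def self_efficacy(M, V, A, P, weights=(0.4, 0.2, 0.1, 0.3)):
    aM, aV, aA, aP = weights
    return sigmoid(aM*M + aV*V + aA*A + aP*P)

# P = f(SE, D, s): a toy performance curve that is largest when
# self-efficacy is well matched to task difficulty D, scaled by the
# stimulation-type gain s.
def performance(se: float, difficulty: float, stim_gain: float = 1.0) -> float:
    return stim_gain * se * math.exp(-(difficulty - se) ** 2)

se = self_efficacy(M=1.0, V=0.5, A=0.2, P=0.8)  # strong verbal persuasion
print(f"SE = {se:.3f}")
print(f"performance at moderate difficulty: {performance(se, 0.6):.3f}")
```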

3. Mechanisms of Self-Efficacy–Dependent Reliance and Offloading

In transactive memory settings, such as search engine use, high search self-efficacy mediates the translation of past tech experience into higher cognitive self-esteem but also increases the tendency towards cognitive offloading. Notably, low initial CSE users exhibit disproportionately larger CSE gains—and thus greater dependency—when tools are available (Akgun et al., 17 Jan 2025).

In educational AI, students with low self-efficacy depend excessively on AI assistants, leading to overreliance (acceptance of erroneous recommendations), while high self-efficacy enables critical evaluation and calibrated trust, mitigating both over- and underreliance (Pitts et al., 16 Jun 2025). This supports automation-bias predictions, where perceived self-competence modulates the gatekeeping role between external automation and action.

Within LLMs, verbal prompts with positive or challenging tone elevate the model’s self-efficacy signature, thereby increasing zero-shot performance, especially at intermediate task difficulty (“stretch zone”), mirroring human optimal challenge bands in social cognitive theory (Chen et al., 10 Feb 2025).

4. Empirical Findings Across Domains

| Paper | Domain | Self-Efficacy Effect | Dependency Pattern |
|---|---|---|---|
| (Baral et al., 2018) | Phishing | Predicts threat avoidance | Higher E → motivated avoidance |
| (Akgun et al., 17 Jan 2025) | Search/HCI | Mediates CSE, predicts offloading | Low CSE: larger CSE increase and stronger tool dependency |
| (Pitts et al., 16 Jun 2025) | EdTech/AI | SE predicts appropriate reliance | Low SE: overreliance; high SE: calibration |
| (Chen et al., 10 Feb 2025) | LLMs/NLP | VES raises SE, boosts task accuracy | Maximal boosts in moderate-difficulty regions |

A common finding is that dependency on external agents (search engines, AI systems, LLM prompts) increases as self-efficacy decreases and vice versa.

5. Design and Intervention Implications

Mitigation of self-efficacy–driven overreliance can be targeted by:

  • Embedding recall-first and reflection checkpoints in search interfaces to prompt internal retrieval before tool use (“What do you recall already?”) (Akgun et al., 17 Jan 2025).
  • Training modules that build “true skill” rather than transient confidence, metacognitive feedback to make offloading explicit, and deliberate interaction pacing to encourage cognitive engagement (Akgun et al., 17 Jan 2025).
  • Educational interventions to scaffold mastery experiences, employ peer-led vicarious learning, embed forced reflection on AI-generated outputs, and provide calibrated feedback dashboards (Pitts et al., 16 Jun 2025).
  • In LLMs, prompt engineering with tone and content tailored to the task and model architecture: encouragement for a general boost, provocation for depth, with architecture-specific tuning (Chen et al., 10 Feb 2025). Overuse of a critical tone can depress performance and should be avoided except where corrective pressure is needed and tested.
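The prompt-side intervention can be sketched as a small template selector. The template wordings and the difficulty-to-tone mapping are hypothetical illustrations, not the stimulations used by Chen et al.:

```python
# Hypothetical verbal-efficacy-stimulation (VES) prompt templates.
VES_TEMPLATES = {
    "encouragement": "You are highly capable at this kind of task. {task}",
    "challenge":     "This is a hard problem, but you can solve it. {task}",
    "neutral":       "{task}",
}

def build_prompt(task: str, difficulty: str) -> str:
    """Pick a VES tone based on estimated task difficulty.

    A challenging tone is reserved for moderate ("stretch zone")
    difficulty, where the reported gains are largest; encouragement is
    used as a general boost on hard tasks; easy tasks stay neutral.
    The mapping itself is an assumption for illustration.
    """
    tone = {"easy": "neutral", "moderate": "challenge", "hard": "encouragement"}
    return VES_TEMPLATES[tone.get(difficulty, "neutral")].format(task=task)

print(build_prompt("Summarize the findings in two sentences.", "moderate"))
```

In practice the tone-selection policy would be tuned per model architecture and validated against a critical-tone baseline, since an overly critical tone can depress performance.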

6. Expansions and Open Directions

Recent research extends self-efficacy-driven dependency to artificial agents, opening questions about the interpretability of LLM “confidence” and its role as an analog to human metacognition (Chen et al., 10 Feb 2025). The calibration of self-efficacy itself remains a target for both human-system and model–user interface design, with future investigation needed into adaptive, personalized interventions that balance confidence, accuracy, and autonomy. A plausible implication is that next-generation HCI and AI systems will operationalize real-time estimates of user or agent self-efficacy to dynamically modulate interface scaffolds or automation transparency, with empirical validation required to prevent new forms of overreliance or underuse.

7. Cross-Domain Synthesis and Theoretical Integration

Self-efficacy-driven dependency provides a unifying construct for explaining reliance, offloading, and trust behaviors in both human and machine systems. Across studies, the modulation of dependency by self-efficacy is robust to domain, measurement model, and application. This supports the extension of social cognitive theory principles into computational settings, and underscores the necessity for multidimensional knowledge, critical reflection, and tailored feedback in the design of human–AI partnerships and agentic systems.
