Architecting Human-AI Cocreation for Technical Services -- Interaction Modes and Contingency Factors

Published 18 Jul 2025 in cs.HC | (2507.14034v1)

Abstract: Agentic AI systems, powered by LLMs, offer transformative potential for value co-creation in technical services. However, persistent challenges like hallucinations and operational brittleness limit their autonomous use, creating a critical need for robust frameworks to guide human-AI collaboration. Drawing on established Human-AI teaming research and analogies from fields like autonomous driving, this paper develops a structured taxonomy of human-agent interaction. Based on case study research within technical support platforms, we propose a six-mode taxonomy that organizes collaboration across a spectrum of AI autonomy. This spectrum is anchored by the Human-Out-of-the-Loop (HOOTL) model for full automation and the Human-Augmented Model (HAM) for passive AI assistance. Between these poles, the framework specifies four distinct intermediate structures. These include the Human-in-Command (HIC) model, where AI proposals require mandatory human approval, and the Human-in-the-Process (HITP) model for structured workflows with deterministic human tasks. The taxonomy further delineates the Human-in-the-Loop (HITL) model, which facilitates agent-initiated escalation upon uncertainty, and the Human-on-the-Loop (HOTL) model, which enables discretionary human oversight of an autonomous AI. The primary contribution of this work is a comprehensive framework that connects this taxonomy to key contingency factors -- such as task complexity, operational risk, and system reliability -- and their corresponding conceptual architectures. By providing a systematic method for selecting and designing an appropriate level of human oversight, our framework offers practitioners a crucial tool to navigate the trade-offs between automation and control, thereby fostering the development of safer, more effective, and context-aware technical service systems.

Summary

  • The paper introduces a six-mode taxonomy to systematically balance AI autonomy and human oversight by linking interaction modes to contingency factors.
  • It employs case studies across enterprise software to validate how task complexity and risk influence optimal human-AI collaboration.
  • The framework enhances technical service reliability by addressing AI operational brittleness and managing AI hallucinations through structured human involvement.

Architecting Human-AI Cocreation for Technical Services

Introduction to Human-AI Cocreation

The paper "Architecting Human-AI Cocreation for Technical Services -- Interaction Modes and Contingency Factors" proposes a systematic framework to manage the collaboration between human agents and AI systems within technical services. The research is motivated by the need for robust frameworks to address persistent challenges inherent in AI systems, such as hallucinations and operational brittleness, while maximizing their transformative potential in value co-creation. The authors present a structured taxonomy of human-agent interaction modes based on empirical findings from case studies of technical support platforms.

Taxonomy of Interaction Modes

The paper introduces a six-mode taxonomy that spans a spectrum from passive AI assistance to full AI autonomy. The interaction modes are:

  1. Human-Augmented Model (HAM): Characterizes scenarios where humans retain control, leveraging AI for assistance.
  2. Human-in-Command (HIC): AI systems propose solutions requiring human approval.
  3. Human-in-the-Process (HITP): Integrated workflows where humans perform specific deterministic tasks.
  4. Human-in-the-Loop (HITL): AI systems act autonomously until escalation conditions are met.
  5. Human-on-the-Loop (HOTL): Humans monitor AI systems and intervene as deemed necessary.
  6. Human-Out-of-the-Loop (HOOTL): Represents complete AI autonomy with no human intervention.

Each interaction mode addresses specific operational needs by balancing AI autonomy with human oversight.
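Because the six modes sit on an ordered autonomy spectrum, they lend themselves to a simple ordered encoding. The following sketch is illustrative only (the mode names come from the paper; the numeric ordering and its use in code are assumptions, not part of the authors' framework):

```python
from enum import IntEnum

class InteractionMode(IntEnum):
    """The paper's six interaction modes, ordered here from least
    to most AI autonomy (the ordering is an illustrative choice)."""
    HAM = 1    # Human-Augmented Model: human works, AI passively assists
    HIC = 2    # Human-in-Command: AI proposes, human must approve
    HITP = 3   # Human-in-the-Process: humans own fixed, deterministic steps
    HITL = 4   # Human-in-the-Loop: AI acts, escalates to a human on uncertainty
    HOTL = 5   # Human-on-the-Loop: AI acts, human monitors and may intervene
    HOOTL = 6  # Human-Out-of-the-Loop: fully autonomous, no human involvement

# An IntEnum lets client code compare autonomy levels directly,
# e.g. to enforce a policy cap on how autonomous a deployment may be:
max_allowed = InteractionMode.HITL
print(InteractionMode.HOOTL <= max_allowed)  # False: full autonomy not permitted
```

Encoding the spectrum as an ordered type makes policy checks ("never exceed HITL for this ticket class") one comparison rather than a lookup table.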

Key Contributions and Implications

The primary contribution of the taxonomy is that it connects interaction modes to contingency factors such as task complexity, risk, and reliability. This mapping gives practitioners a systematic way to select an appropriate level of human oversight, fostering safer and more context-aware systems. The taxonomy thus offers concrete guidance for architecting human-agent systems around the trade-offs between automation and control.

The paper underscores the practical implications of these modes in enhancing productivity and reliability in technical service systems. The careful handling of AI hallucinations and brittleness through structured human involvement addresses common industry concerns over AI reliability and security.
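One concrete form of such structured involvement is the HITL pattern's agent-initiated escalation: low-confidence AI output is routed to a human instead of being sent. A minimal sketch, assuming a scalar confidence score is available (the `0.75` threshold and the `escalate_to_human` helper are hypothetical, not from the paper):

```python
def escalate_to_human(draft: str) -> str:
    # Placeholder: a real system would route the draft to a support queue.
    return f"[ESCALATED FOR HUMAN REVIEW] {draft}"

def handle_reply(draft: str, confidence: float, threshold: float = 0.75) -> str:
    """HITL-style guard: the agent answers autonomously only above a
    confidence threshold; otherwise the draft is escalated to a human."""
    return draft if confidence >= threshold else escalate_to_human(draft)

print(handle_reply("Restart the indexing service.", 0.92))  # sent as-is
print(handle_reply("Reformat the disk.", 0.40))             # escalated
```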

Research Methodology

The study employs a case study methodology, focusing on leading providers of AI-empowered enterprise software. This approach permits a close examination of contemporary human-AI interactions in real-world settings, keeping the findings relevant and applicable. The researchers triangulated their findings across multiple cases to strengthen validity and generalizability.

Discussion of Contingency Factors

The choice of collaboration mode is contingent upon several interconnected factors:

  • Task Complexity and Novelty: Determines the required level of AI autonomy.
  • Safety, Criticality, and Risk: Influences the extent of human oversight needed.
  • System Reliability and Trust: Correlates with the degree of automation acceptable.
  • Human Operator State: Impacts cognitive workload and automation bias.

By linking these factors to interaction modes, the paper provides a structured process for decision-making that improves system design and deployment.
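To make the factor-to-mode mapping concrete, the decision process can be sketched as a selection function. This heuristic is purely illustrative: the 0-to-1 scales, the thresholds, and the specific routing rules are assumptions of this sketch, not values or rules given in the paper.

```python
from dataclasses import dataclass

@dataclass
class Contingency:
    """Contingency factors discussed in the paper, scored 0.0-1.0
    (the numeric scales are an assumption of this sketch)."""
    task_complexity: float    # complexity and novelty of the task
    operational_risk: float   # safety-criticality of a wrong action
    system_reliability: float # demonstrated reliability of / trust in the AI

def select_mode(c: Contingency) -> str:
    """Illustrative heuristic: high risk or low reliability pulls toward
    human control; routine, low-risk, well-proven tasks tolerate autonomy.
    Thresholds are placeholders, not values from the paper."""
    if c.operational_risk > 0.8 or c.system_reliability < 0.3:
        return "HIC"    # every AI proposal needs mandatory human approval
    if c.task_complexity > 0.7:
        return "HITL"   # AI acts but escalates on uncertainty
    if c.system_reliability > 0.9 and c.operational_risk < 0.2:
        return "HOOTL"  # routine, low-risk, proven: full automation
    return "HOTL"       # otherwise: autonomous AI, discretionary oversight

print(select_mode(Contingency(0.5, 0.9, 0.8)))  # HIC: high-risk task
```

In practice such a function would be one input to a design discussion rather than an automated router, but it shows how contingency factors can be made explicit and auditable.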

Conclusion

This paper offers a comprehensive framework for structuring human-AI collaboration in technical services, presenting a six-mode taxonomy that addresses vital contingency factors. This research serves as a foundational guide for designing effective human-agent systems, reducing cognitive load, and enhancing service delivery in technical domains. Future research can build upon these findings to validate the taxonomy across more diverse sectors and investigate emerging interaction modes.

Overall, the paper delivers a practical, insightful approach to managing AI-human cocreation, highlighting pathways for safely integrating AI into complex service environments.
