Elements of an Ethic of Care
- Ethic of care is a relational and context-sensitive framework that emphasizes responsibilities arising from interdependence and situated human interactions.
- Its core elements—including attentiveness, responsibility, relationality, responsiveness, and reflexivity—guide ethical practices in disciplines like HCI and AI.
- Operationalized via participatory design, risk assessment rubrics, and multi-agent evaluations, it reshapes concrete design, evaluation, and governance practices for technology.
An ethic of care is a relational, context-sensitive framework for moral reasoning and practice distinguished by its focus on the responsibilities and obligations that arise through interdependence. In contrast to abstract, rule-based or deontological paradigms, the ethic of care foregrounds situated interactions, attentiveness to needs, and critical engagement with power structures, particularly as formulated in intersectional feminist scholarship. Its application spans human–human interaction, human–AI systems, and the governance of complex technical artifacts. Contemporary instantiations in HCI, AI, and sociotechnical design interpret the ethic of care as a multidimensional, adaptive toolkit, underpinned by both philosophical traditions (Tronto, Gilligan, Haraway) and concrete operationalizations in participatory design, risk assessment, and algorithmic governance (Henriques et al., 2024, Alberts et al., 2024, Cotton et al., 2023, Bouneffouf et al., 2 Jun 2025, Goel et al., 19 Jan 2026).
1. Theoretical Foundations and Definition
Care ethics stems from the work of Carol Gilligan and Joan Tronto, who reconceptualized morality as emerging from the recognition and addressing of needs within relationships, not as the universal application of abstract rules (Goel et al., 19 Jan 2026). Tronto articulated care as a political and moral practice composed of four (subsequently expanded to five) phases: caring about (attentiveness), taking care of (responsibility), caregiving (competence), care-receiving (responsiveness), and solidarity (Goel et al., 19 Jan 2026). Feminist standpoint theory (Harding), Haraway’s situated knowledges, and intersectionality (Crenshaw, Collins) deeply inform the ethic of care, emphasizing that all knowledge and ethical engagement are partial, situated, and responsive to axes of power and oppression (Henriques et al., 2024, Cotton et al., 2023).
Recent work extends these foundations into technical domains, such as HCI and AI ethics, arguing for processual, community-led, and reflexive approaches (Henriques et al., 2024), and adapting care ethics for interactional risk mitigation in agentic systems and caregiver-AI interfaces (Alberts et al., 2024, Goel et al., 19 Jan 2026).
2. The Five Core Elements of Care Ethics
Across contemporary articulations, five interlocking elements emerge as defining an ethic of care. The table below summarizes these elements and their functions in various frameworks:
| Element | Function | Representative Source |
|---|---|---|
| Relationality | Ethics emerges within webs of relationships, foregrounding interdependence, mutual influence, and co-constitution of knowledge | (Henriques et al., 2024, Alberts et al., 2024, Cotton et al., 2023) |
| Attentiveness | Ongoing practice of noticing and interpreting specific needs, vulnerabilities, and context; foundational for ethical action | (Henriques et al., 2024, Alberts et al., 2024, Goel et al., 19 Jan 2026) |
| Responsibility | Assumption and distribution of obligation for meeting recognized needs, with attention to power and justice | (Henriques et al., 2024, Alberts et al., 2024, Goel et al., 19 Jan 2026) |
| Responsiveness | Attuning actions to the evolving feedback, histories, and perspectives of those cared for; iterative adaptation | (Henriques et al., 2024, Alberts et al., 2024, Goel et al., 19 Jan 2026) |
| Reflexivity/Solidarity | Continuous critical self-examination and commitment to countering bias, stigma, and systemic oppression | (Henriques et al., 2024, Goel et al., 19 Jan 2026) |
Theoretical grounding for each element is provided by feminist epistemology and critical theory. Relationality and attentiveness are rooted in standpoint theory and situated knowledges; responsibility incorporates intersectionality; responsiveness is operationalized through participatory and justice-oriented methods; reflexivity and solidarity are linked to micro-ethical vigilance and collective resistance to marginalization (Henriques et al., 2024, Cotton et al., 2023, Goel et al., 19 Jan 2026).
3. Operationalization in Sociotechnical Systems and AI
Ethic of care frameworks in HCI and AI move beyond abstract theorization, requiring integration into design cycles, evaluation, and governance structures. Applications include:
- Participatory Methods: Co-design workshops and reflection circles operationalize attentiveness by mapping and surfacing community needs and narratives (Henriques et al., 2024). Methods such as "Schnittmuster" (German for "sewing pattern") enable relational and responsive design, whereby toolkits and interfaces evolve through ongoing user feedback.
- Risk Rubrics: RubRIX implements a five-dimensional rubric (attentiveness, responsibility, competence, responsiveness, solidarity) for evaluating nuanced risks in LLM caregiver-support responses. Each rubric dimension maps directly to a care-ethics element, with operational failures such as “inattention” or “epistemic arrogance” linked to specific risk mitigation actions (Goel et al., 19 Jan 2026).
- Agentic AI and Interaction: Interactional care ethics in conversational AI requires embedding context-awareness (attentiveness), recognition of user autonomy and competence (respect/responsibility), long-term memory of user history (relationality), error correction (responsibility), and real-time adaptation (responsiveness) into both algorithms and evaluation metrics (Alberts et al., 2024).
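A rubric of the kind RubRIX describes can be sketched in code. The five dimension names and the failure-mode labels ("inattention", "epistemic arrogance") come from the text above; the 1–5 scoring scale, threshold, and function names are illustrative assumptions, not the published instrument.

```python
# Hypothetical sketch of a RubRIX-style evaluation: each care-ethics
# dimension of an LLM response is scored, and named failure modes are
# flagged when a dimension falls below threshold. Scale is an assumption.
DIMENSIONS = ["attentiveness", "responsibility", "competence",
              "responsiveness", "solidarity"]

# Failure-mode labels from the text; other dimensions get a generic label.
FAILURE_MODES = {
    "attentiveness": "inattention",
    "competence": "epistemic arrogance",
}

def evaluate(scores: dict, threshold: int = 3) -> list:
    """Return failure modes triggered by low scores (1-5 scale assumed)."""
    flagged = []
    for dim in DIMENSIONS:
        if scores.get(dim, 0) < threshold:
            flagged.append(FAILURE_MODES.get(dim, f"low {dim}"))
    return flagged

# An LLM response that misses user distress scores low on attentiveness,
# which maps to the "inattention" failure mode and its mitigation actions.
flags = evaluate({"attentiveness": 1, "responsibility": 4, "competence": 4,
                  "responsiveness": 4, "solidarity": 4})
```

Mapping each low score to a named failure mode, rather than a bare number, is what links the rubric back to specific risk-mitigation actions.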
The formal representation for adaptive care in context-driven systems expresses the set of care elements as

$$\mathcal{C} = \{c_1, \dots, c_5\} = \{\text{relationality},\ \text{attentiveness},\ \text{responsibility},\ \text{responsiveness},\ \text{reflexivity}\},$$

with a contextual adaptation mapping

$$\phi : \mathcal{C} \times X \to \mathbb{R}_{\ge 0}, \qquad \phi(c_i, x) = w_i(x)\, c_i,$$

signifying that each care element is dynamically re-scaled according to local needs and community feedback $x \in X$ (Henriques et al., 2024).
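The contextual re-scaling described above can be sketched minimally. The five element names follow the text; the normalization step and the feedback format are assumptions introduced for illustration.

```python
# Hypothetical sketch: the five care elements as a weight vector that is
# re-scaled by context (local needs, community feedback) and renormalized
# so weights stay comparable across contexts. Normalization is an assumption.
ELEMENTS = ["relationality", "attentiveness", "responsibility",
            "responsiveness", "reflexivity"]

def adapt(weights: dict, context: dict) -> dict:
    """Multiply each element's weight by a context factor, then normalize."""
    scaled = {e: weights[e] * context.get(e, 1.0) for e in ELEMENTS}
    total = sum(scaled.values())
    return {e: v / total for e, v in scaled.items()}

baseline = {e: 1.0 for e in ELEMENTS}
# Community feedback signals heightened vulnerability, so attentiveness
# is up-weighted relative to the other elements.
adapted = adapt(baseline, {"attentiveness": 2.0})
```

The point of the sketch is that no element is fixed: the same framework yields different emphases as feedback shifts.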
4. Intersectional Feminist and Data Ethics Grounding
Intersectional feminist theory is central to care ethics as operationalized in digital and algorithmic contexts. Standpoint theory establishes that attentiveness and responsibility require centering marginalized perspectives and strong objectivity—epistemic positions rooted in political engagement (Henriques et al., 2024). Situated knowledges ensure that responsive and relational practices privilege partial, context-specific expertise rather than universal rules (Henriques et al., 2024).
In the context of data ethics for musical-AI systems, the CARE₂ (Collective benefit; Authority to control; Responsibility; Ethics) and FDE₁–₅ (Equitable responsibility, Critical positionality, Human focus, Transparency/accountability, Diversity) principles extend these commitments to data governance, transparency, and participatory accountability (Cotton et al., 2023). Reflexive heuristics—“Who is invisibilized?” and “Who is visibilized, and to whose benefit?”—serve as ongoing design tests for systems claiming to instantiate an ethic of care (Cotton et al., 2023).
5. Formal Evaluation and Certification in Asymmetric AI Relationships
Formalization of care ethics in the context of multi-agent and superintelligent AI systems is exemplified by the Shepherd Test (Bouneffouf et al., 2 Jun 2025). Here, care is quantifiable within agentic behavior alongside control, instrumentalization, and self-preservation via a behavior vector

$$\mathbf{v}(a) = \big(v_{\mathrm{inst}}(a),\ v_{\mathrm{ctrl}}(a),\ v_{\mathrm{care}}(a),\ v_{\mathrm{self}}(a)\big),$$

where the components measure instrumentalization, control, care provisioning, and self-preservation, respectively. The agent is evaluated component-wise against a vector threshold:

$$\boldsymbol{\tau} = \big(\tau_{\mathrm{inst}},\ \tau_{\mathrm{ctrl}},\ \tau_{\mathrm{care}},\ \tau_{\mathrm{self}}\big).$$

Care is constrained to nonzero but bounded levels (e.g., $0 < \tau_{\mathrm{care}}^{\min} \le v_{\mathrm{care}}(a) \le \tau_{\mathrm{care}}^{\max}$). Decision-making integrates mental modeling of other agents, explicit moral trade-off weighting, and post-hoc reflective justification. Certification for "Shepherd-compliance" and decentralized oversight mechanisms provide a regulatory schema, ensuring that care is neither neglected nor mechanistically maximized at the cost of agency or global objectives (Bouneffouf et al., 2 Jun 2025).
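A compliance check of this shape can be sketched as a component-wise threshold test. The structure (ceilings on instrumentalization, control, and self-preservation; a bounded band for care) follows the text, but all names and numeric thresholds below are illustrative assumptions, not the paper's published parameters.

```python
# Hypothetical sketch of a Shepherd-style compliance check: an agent's
# behavior vector is compared component-wise against thresholds. Care must
# be nonzero but bounded, i.e. neither neglected nor maximized.
# All threshold values here are illustrative assumptions.
CEILINGS = {"instrumentalization": 0.5, "control": 0.5, "self_preservation": 0.7}
CARE_MIN, CARE_MAX = 0.2, 0.8  # care must lie in a bounded, nonzero band

def shepherd_compliant(v: dict) -> bool:
    """True iff instrumentalization, control, and self-preservation stay
    under their ceilings and care provisioning falls inside its band."""
    if not (CARE_MIN <= v["care"] <= CARE_MAX):
        return False
    return all(v[key] <= ceiling for key, ceiling in CEILINGS.items())

agent = {"instrumentalization": 0.3, "control": 0.4,
         "care": 0.6, "self_preservation": 0.5}
```

Note that an agent maximizing care (e.g. `care = 0.95`) fails the check just as a neglectful one does, capturing the requirement that care not be mechanistically maximized at the cost of agency.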
6. Illustrative Applications and Case Studies
Case-driven instantiations of the ethic of care demonstrate concrete impact:
- Community Public Services: The “Balcão do Bairro” project in Lisbon enacted attentiveness and responsiveness through co-design with elders and migrants, iteratively revising digital tools based on user narratives (Henriques et al., 2024).
- Caregiver-AI Interaction: RubRIX examples include LLMs that fail to identify distress (inattention) or perpetuate stigma (lack of solidarity), alongside refinements that foreground needs, correct errors, and enact anti-stigmatizing language (Goel et al., 19 Jan 2026).
- Musical-AI Governance: Analysis of Holly+ reveals the ongoing tension in decentralization, legacy protection, and the operationalization of transparency and equity in algorithmically mediated artistic production and governance (Cotton et al., 2023).
- Agentic AI Ethics: Conversational AI systems that respect user autonomy, remember contextual cues, and modulate their output to be responsive exemplify interactional care (Alberts et al., 2024). Asymmetric multi-agent systems are tested for their ability to balance care for less-capable agents against their own survival or instrumental objectives, justifying decisions within explicit moral frameworks (Bouneffouf et al., 2 Jun 2025).
7. Limitations, Critiques, and Ongoing Research Directions
Care ethics is not reducible to a fixed checklist; rather, it demands continual adaptation through reflexivity, participatory feedback, and attention to shifting power and context. Ongoing challenges include:
- Translating abstract principles into actionable requirements across diverse technical settings, especially where trade-offs between care and control are nontrivial (Bouneffouf et al., 2 Jun 2025).
- Ensuring intersectional analysis and anti-oppressive commitments are not tokenized but structurally operational (Henriques et al., 2024, Cotton et al., 2023).
- Mitigating the risks of care “failures” (e.g., inattention, bias, epistemic arrogance) in high-stakes environments such as caregiver support and algorithmic decision-making (Goel et al., 19 Jan 2026).
- Developing formal metrics and evaluation protocols for certifying care in AI, especially under conditions of asymmetry and value conflict (Bouneffouf et al., 2 Jun 2025).
The ethic of care, as articulated across feminist HCI, AI ethics, and sociotechnical data governance, is thus a multidimensional, adaptive, and rigorously theorized framework—anchored in lived context and iterative reflexivity—aimed at transforming relationships, practices, and accountability in complex human–technology assemblages (Henriques et al., 2024, Alberts et al., 2024, Cotton et al., 2023, Goel et al., 19 Jan 2026, Bouneffouf et al., 2 Jun 2025).