AI for Social Good Overview
- Artificial Intelligence for Social Good (AI4SG) is defined as the use of AI and data-driven methods to tackle complex societal challenges in domains such as healthcare, climate action, and justice.
- It emphasizes a balanced approach that integrates technical innovation with early community engagement, co-design, and ethically guided frameworks.
- Key methodologies include human-AI collaboration, rigorous fairness assessments, and mixed-method impact evaluations to ensure sustainable and equitable benefits.
Artificial Intelligence for Social Good (AI4SG) denotes the application of artificial intelligence and data-driven technologies to achieve positive and enduring social impact across complex domains such as public health, development, climate action, and justice. The field encompasses methodological research, practical deployments, and critical inquiry at the intersection of computing, social processes, and ethical frameworks. AI4SG initiatives span technical domains—machine learning, optimization, natural language processing, computer vision, and multi-agent systems—while targeting societal problems including healthcare disparities, environmental sustainability, public safety, education, and resource equity. Unlike purely techno-centric endeavors, current best practice in AI4SG requires cross-disciplinary, community-grounded, and ethically reflexive approaches to ensure that impact claims translate into lived, equitable benefits for the intended populations (Lin et al., 15 Sep 2025, Lin et al., 2024, Bondi et al., 2021, Shi et al., 2020).
1. Definitions, Foundations, and Scope
AI4SG is defined as the application of AI and data-driven technologies to foster “positive and long-lasting social impact” in domains marked by complexity, power asymmetries, and the need for sustainable, context-responsive solutions (Lin et al., 15 Sep 2025, Lin et al., 2024). Authors distinguish between:
- Techno-centric approaches, focused on technology deployment, accuracy, scalability, and technical achievements.
- Balanced, context-driven approaches, which integrate technical innovation with community engagement, needs assessment, and co-leadership throughout project lifecycles.
Operational definitions further circumscribe AI4SG projects as those that demonstrate measurable improvements in socially significant metrics—such as health outcomes, educational attainment, environmental resilience, and social equity—through systematic application of AI methodologies (Akula et al., 2021, Shi et al., 2020, Abbasi et al., 2023).
Table: Scope and Domains of AI4SG (based on Shi et al., 2020)
| Application Domain | Typical AI Methods | Example Metrics |
|---|---|---|
| Healthcare | ML, causal inference, RL | Diagnosis accuracy, health equity, lives saved |
| Environmental Sustainability | Computer vision, optimization | Species tracked, emissions reduced |
| Urban Mobility/Transportation | Predictive ML, game theory | Reduced delays, emissions, commuter equity |
| Education | Adaptive learning, NLP | Retention rates, proficiency gains |
| Public Welfare/Justice | Risk modeling, fairness metrics | Equity of service, false positive/negative rates |
AI4SG is methodologically driven by both descriptive/predictive analytics and prescriptive, intervention-oriented optimization, with emphasis evolving toward the latter as the field matures (Shi et al., 2020, Abbasi et al., 2023).
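The shift from predictive analytics to prescriptive, intervention-oriented optimization can be illustrated with a minimal sketch: a predictive step forecasts demand, and a prescriptive step turns those forecasts into a resource plan. The clinic names, data, and proportional-allocation rule below are illustrative, not drawn from any cited deployment.

```python
# Sketch: descriptive/predictive analytics feeding prescriptive optimization.
# All site names, demand histories, and the budget are hypothetical.

def predict_demand(history):
    """Predictive step: forecast next-period demand per site as the
    mean of its observed historical demand."""
    return {site: sum(obs) / len(obs) for site, obs in history.items()}

def allocate(predictions, budget):
    """Prescriptive step: split a fixed resource budget across sites in
    proportion to predicted demand, with largest-remainder rounding."""
    total = sum(predictions.values())
    shares = {s: budget * d / total for s, d in predictions.items()}
    alloc = {s: int(v) for s, v in shares.items()}  # truncate to integers
    leftover = budget - sum(alloc.values())
    # Hand leftover units to the sites with the largest fractional parts.
    for s in sorted(shares, key=lambda s: shares[s] - alloc[s], reverse=True)[:leftover]:
        alloc[s] += 1
    return alloc

history = {"clinic_a": [40, 44, 42], "clinic_b": [10, 12, 11], "clinic_c": [30, 28, 32]}
plan = allocate(predict_demand(history), budget=20)  # e.g. 20 vaccine doses
```

The point of the sketch is the division of labor: better forecasts alone change nothing on the ground until a prescriptive layer commits scarce resources against them.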
2. Funding Structures, Project Lifecycle, and Stakeholder Dynamics
Qualitative analyses of $410M in AI4SG funding illustrate a spectrum from techno-centric agendas to participatory, community-driven program design (Lin et al., 15 Sep 2025). Key findings include:
- Dominance of Technical Deliverables: Many funding documents privilege technical milestones (accuracy, scalability, code release) over genuine social process outcomes (capacity building, community benefit, sustained relationships).
- Consultation-Only Engagement: Community engagement is often limited to late-stage implementation, not early-stage co-design or leadership.
- Inadequate Sustainability Planning: Post-deployment maintenance, local capacity, and resource commitment for continued benefit are frequently under-addressed (Lin et al., 15 Sep 2025, Lin et al., 2024).
Effective AI4SG projects, by contrast, are characterized by:
- Collaborative Problem Scoping: Defining social challenges through partnership with intended beneficiaries (Emmerson et al., 28 Apr 2025, Kshirsagar et al., 2021).
- Co-development and Co-leadership: Including community representatives as funded co-applicants with authority over project direction (Lin et al., 2024).
- Iterative, Flexible Funding: Mandating phases for relationship building, participatory design, and post-deployment support.
Table: Key Criteria for AI4SG Project and Funding Evaluation (Lin et al., 15 Sep 2025)
| Criterion | Techno-centric Funding | Balanced/Community-grounded Funding |
|---|---|---|
| Community engagement | Consultation (late-stage) | Co-leadership (from inception) |
| Success metrics | Technical deliverables | Community benefit and capacity |
| Sustainability planning | Limited post-deployment | Explicit maintenance and ownership |
3. Conceptual, Ethical, and Participatory Frameworks
AI4SG research explicitly engages with frameworks from human-computer interaction (HCI), information and communication technologies for development (ICTD), and critical data studies. Notable paradigms include:
- Data Feminism: Critique of entrenched power structures in data science; principles include examining power, embracing pluralism, and making invisible labor visible. “Data co-liberation” is proposed as a necessary principle—sharing control and benefit with community partners throughout the project lifecycle (Lin et al., 2024).
- Capabilities Approach: Shift from utilitarian aggregative metrics to the distribution, expansion, and equalization of substantive freedoms among the least advantaged. The PACT framework operationalizes this via community-steered problem definition, co-design, and capability-centered evaluation, moving beyond accuracy to actual empowerment and equity (Bondi et al., 2021).
- Responsible Norms (RAIN) Framework: Systematic translation of high-level ethical values (privacy, fairness, transparency) into actionable technical norms and assessment layers, allowing policy-compliant, SDG-aligned design and deployment (Brännström et al., 2022).
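What translating high-level values into actionable, checkable norms might look like in practice can be sketched as a layered table of predicates evaluated against a project configuration. The value names, norm checks, and configuration fields below are hypothetical illustrations, not the published RAIN schema.

```python
# Hypothetical sketch of mapping ethical values to checkable technical
# norms, in the spirit of layered frameworks such as RAIN. The norms and
# config fields are illustrative only.

NORMS = {
    "privacy": [
        ("data minimization", lambda cfg: cfg["collects_only_required_fields"]),
        ("anonymization", lambda cfg: cfg["pii_anonymized"]),
    ],
    "fairness": [
        ("group metrics reported", lambda cfg: cfg["fairness_metrics_reported"]),
    ],
    "transparency": [
        ("model card published", lambda cfg: cfg["model_card"]),
    ],
}

def assess(cfg):
    """Return, per high-level value, the norms this project config fails."""
    return {value: [name for name, check in checks if not check(cfg)]
            for value, checks in NORMS.items()}

project = {
    "collects_only_required_fields": True,
    "pii_anonymized": False,
    "fairness_metrics_reported": True,
    "model_card": True,
}
gaps = assess(project)  # flags the unmet "anonymization" norm under privacy
```

The design point is that each abstract value decomposes into concrete, auditable predicates, so compliance gaps surface as a checklist rather than a vague aspiration.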
4. Methodologies, Domain Strategies, and Technical Challenges
AI4SG initiatives typically proceed along the following methodological pillars:
- Collaborative Scoping and Data Audit: Joint definition of socially relevant problems with domain experts and community partner organizations (Kshirsagar et al., 2021, Emmerson et al., 28 Apr 2025).
- Human–AI Collaboration in Design and Evaluation: Implementation of active learning, human-in-the-loop annotation, and iterative model updates aligning with community-set objectives (Hsu et al., 2021).
- Algorithmic Fairness and Power-Aware Data Practices: Rigorous use of group fairness (statistical parity, equalized odds), calibration within and across groups, and participatory debiasing (Leavy et al., 2020, Luccioni et al., 2019).
- Participatory Evaluation and Mixed-Methods Impact Assessment: Quantitative monitoring (precision, recall, engagement, service delivery metrics) coupled with qualitative assessments (capability indices, partnership quality, narratives of community outcomes) (Bondi et al., 2021, Lin et al., 15 Sep 2025, Lin et al., 2024).
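The group fairness checks named above can be made concrete with a small sketch that computes a statistical parity difference and equalized-odds gaps (TPR and FPR differences) between two groups. The labels, predictions, and group assignments below are synthetic and illustrative.

```python
# Minimal sketch of two group fairness checks: statistical parity
# difference and equalized-odds gaps between groups "A" and "B".
# All data is synthetic.

def selection_rate(y_pred, mask):
    """Fraction of positive predictions among instances where mask is True."""
    picked = [p for p, m in zip(y_pred, mask) if m]
    return sum(picked) / len(picked) if picked or any(mask) else 0.0

def fairness_report(y_true, y_pred, group):
    a = [g == "A" for g in group]
    b = [g == "B" for g in group]
    # Statistical parity: difference in positive-prediction rates.
    spd = selection_rate(y_pred, a) - selection_rate(y_pred, b)
    # Equalized odds: differences in true/false positive rates across groups.
    def tpr(mask):
        return selection_rate(y_pred, [m and t == 1 for m, t in zip(mask, y_true)])
    def fpr(mask):
        return selection_rate(y_pred, [m and t == 0 for m, t in zip(mask, y_true)])
    return {"statistical_parity_diff": spd,
            "tpr_gap": tpr(a) - tpr(b),
            "fpr_gap": fpr(a) - fpr(b)}

y_true = [1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 1, 0, 1, 1, 0, 0, 0]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]
report = fairness_report(y_true, y_pred, group)
```

In a participatory setting, which of these gaps matters most, and what threshold counts as acceptable, is itself a question for community partners rather than a purely technical choice.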
Critical technical challenges persist:
- Data scarcity and limited infrastructure in target domains
- Sampling and measurement bias reflecting historic inequalities
- Deployment and maintenance in resource-constrained or adversarial settings
- Translation of high-level values to operational requirements
- Trade-offs among fairness, representativeness, transparency, and accuracy (Leavy et al., 2020, Brännström et al., 2022, Hsu et al., 2021, Akula et al., 2021)
5. Impact Domains, Case Studies, and Empirical Findings
AI4SG applications span all UN SDGs, with documented impact in health, education, climate, transportation, and justice (Gosselink et al., 2024, Goh, 2021, Shi et al., 2020, Hager et al., 2019).
- Healthcare: Clinical prediction models, digital decision support (TREWScore, >50% reduction in septic shock mortality), equity-aware risk adjustment (mitigating bias in resource allocation) (Shi et al., 2020, Gosselink et al., 2024).
- Environment: Wildlife security via Stackelberg games (PAWS), sensor-driven air-quality monitoring co-created with communities (Smell Pittsburgh; model precision of 92%) (Shi et al., 2020, Hsu et al., 2021).
- Education: Adaptive learning systems in low-resource settings, automated feedback for student writing (Quill.org reaching 8.9M students; 64% reading proficiency gains for Read Along) (Gosselink et al., 2024).
- Climate and Infrastructure: ML for urban transport optimization, energy scheduling, emissions tracking (Climate TRACE monitoring 99.9% of global sources) (Gosselink et al., 2024, Goh, 2021).
- Public Welfare and Justice: Early-warning systems for child lead poisoning, officer risk assessment, fair resource allocation in public service delivery (Hager et al., 2019, Shi et al., 2020).
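The Stackelberg game formulation behind patrol planners like PAWS can be illustrated with a toy example: the defender commits to coverage probabilities over targets, and the attacker best-responds by attacking the target with the highest expected payoff. The targets, rewards, zero-sum objective, and coarse coverage grid below are illustrative simplifications, not the actual PAWS model.

```python
from itertools import product

# Toy Stackelberg security game in the spirit of PAWS-style patrol
# planning. Target names, rewards, and the 0.25-step coverage grid are
# hypothetical; real deployments solve much larger mixed-integer programs.

TARGETS = {"waterhole": 10, "ridge": 6, "forest_edge": 4}  # reward if uncovered

def attacker_best_response(coverage):
    """Attacker picks the target maximizing reward * P(target uncovered)."""
    return max(TARGETS, key=lambda t: TARGETS[t] * (1 - coverage[t]))

def plan_patrols(step=0.25, budget=1.0):
    """Enumerate coverage vectors on a grid; keep the one minimizing the
    attacker's best-response payoff (zero-sum defender objective)."""
    names = list(TARGETS)
    grid = [i * step for i in range(int(budget / step) + 1)]
    best = None
    for vec in product(grid, repeat=len(names)):
        if sum(vec) > budget + 1e-9:  # respect total patrol budget
            continue
        cov = dict(zip(names, vec))
        t = attacker_best_response(cov)
        loss = TARGETS[t] * (1 - cov[t])
        if best is None or loss < best[0]:
            best = (loss, cov)
    return best

loss, coverage = plan_patrols()  # concentrates coverage on the high-value target
```

Even this toy version shows the key Stackelberg structure: the defender optimizes against the attacker's anticipated best response, so coverage skews toward high-reward targets rather than spreading uniformly.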
Deployment case studies uniformly point to the necessity of long-term maintenance planning, local organizational capacity, and careful mapping of AI outputs to community-defined utility functions.
6. Best Practices, Limitations, and Future Directions
Best practices synthesized from recent analyses recommend:
- Problem-First Orientation: Begin with social problems identified by communities, not predefined AI solutions (Lin et al., 2024, Bondi et al., 2021).
- Participatory and Reflexive Design: Embed deliberative stakeholder engagement, co-leadership, and iterative feedback in every project lifecycle phase (Shi et al., 2020, Lin et al., 15 Sep 2025, Hsu et al., 2021).
- Balanced Metrics: Evaluate both technical deliverables and social impact, including community capacity and relationship quality (Lin et al., 15 Sep 2025).
- Structural Support for Engagement: Extend project timelines and budget explicitly for relationship building, capacity development, and post-deployment transitions (Lin et al., 15 Sep 2025).
- Scalable Open Platforms: Develop reusable, open-source platform components to reduce duplication and accelerate validated impact across similar organizations (Varshney et al., 2019, Go et al., 2024).
- Transparent, Accountable, and Inclusive Governance: Anchor every intervention in transparent processes and multi-stakeholder oversight (Brännström et al., 2022, Zhang et al., 2024).
Documented limitations include:
- Short funding horizons and lack of post-deployment resources
- Imbalanced power and credit, with community organizations often reduced to data providers
- Overemphasis on technical novelty for academic publication at the expense of contextual, equity-driven outcomes
- Insufficient longitudinal impact evaluation and sustainable maintenance pathways
Ongoing research directions call for robust common-good frameworks, shared data and benchmarking resources, incentive realignment for sustained impact, and field studies validating data co-liberation and participatory governance models (Abbasi et al., 2023, Lin et al., 15 Sep 2025, Lin et al., 2024, Bondi et al., 2021). Cross-disciplinary and community-grounded scholarship is necessary to advance AI4SG from aspiration to equitably realized, sustainable practice.
7. Recommendations for Practitioners and Funders
Guidance emerging from meta-analyses and qualitative studies includes:
- Mandate community-defined problem and metric co-creation in funding criteria (Lin et al., 15 Sep 2025).
- Support training, toolkit provision, and extended community relationship phases prior to technical deployment.
- Require funded projects to include community co-leadership and fair compensation for diverse forms of labor (Lin et al., 15 Sep 2025, Lin et al., 2024).
- Balance technical innovation with evidence of social and policy relevance, local sustainability, and reflexive learning loops (Lin et al., 15 Sep 2025).
- Anchor evaluations in mixed-methods assessment: technical and social impact metrics, field study documentation, co-authored outputs, and capacity transitions.
By codifying these design principles and confronting enduring power asymmetries in AI application, AI4SG can deliver not only technical progress but enduring, community-anchored social benefit (Lin et al., 15 Sep 2025, Lin et al., 2024, Bondi et al., 2021, Brännström et al., 2022, Abbasi et al., 2023).