Artificial Intelligence for Global Health: But Whose AI, and Whose Health?
Kyoung Kyun Oh (Social and Public Health Sciences Unit, School of Health & Wellbeing, University of Glasgow, Glasgow, UK)
Ⅰ. Introduction: The Neutrality Myth in the Digital Health Era
Artificial intelligence (AI) is increasingly positioned as a transformative tool for global health (Fahim et al., 2025). From predictive diagnostics to health system optimisation, AI is often framed as a neutral enabler, a plug-and-play solution that can be deployed across low- and middle-income countries (LMICs) to accelerate progress toward Universal Health Coverage (UHC) (WHO, 2021). Techno-optimists argue that AI can leapfrog infrastructural deficits, offering high-level expertise in settings where quality of care remains limited.
However, this framing obscures a fundamental political question: who defines what AI in global health should be, and whose interests does it ultimately serve? The assumption of technological neutrality ignores the fact that algorithms are embedded with the epistemological assumptions, priorities, and institutional logics of their creators. AI is not merely a technical artefact but a site of power structured through control over data, problem definition, and decision-making authority.
This paper argues that AI in global health should be understood as a form of epistemic governance in which inequalities are reproduced not only through resource gaps but through asymmetries in knowledge production and technological authority. By examining the political economy of AI development, the paper highlights how LMICs are positioned as data providers rather than decision-makers, and how these dynamics risk reinforcing a new form of digital colonialism. Drawing on the failure of IBM Watson for Oncology and the emerging trajectory of Korea’s “K-ODA”, this paper critically interrogates the assumption that AI-driven innovation is inherently beneficial. It concludes by proposing a shift from technological deployment to co-architected governance, where LMICs are not passive recipients but active shapers of AI systems.
Ⅱ. Epistemic Asymmetry and Data Extractivism
AI in global health does not emerge in a vacuum. It is shaped by a small set of actors such as technology firms, elite research institutions, and philanthropic funders, predominantly located in high-income countries (HICs) (Birhane, 2020; Herzog & Branford, 2025). These actors do not merely develop tools; they define the very parameters of ‘the problem’. They set research agendas, determine what counts as “evidence” and shape the standards by which success is measured (Birhane, 2020).
In this process, LMICs are often treated not as equal partners, but ‘as sites of implementation and evidence generation’. This reflects a profound epistemic asymmetry: knowledge about health systems in LMIC contexts is extracted, codified, and fed into AI systems, yet the authority to interpret and operationalise that knowledge remains external (Birhane, 2020; Herzog & Branford, 2025). This ‘foreign gaze’ in global health means that data flow outward to HICs for processing and profit, while decision-making flows inward to LMICs as pre-packaged technical instruction (Abimbola, 2019). Consequently, LMICs become hyper-visible as sources of raw data for algorithmic training, yet structurally invisible in the governance and design of those very systems.
Ⅲ. The Technological Fix Fallacy: From IBM Watson to Korea’s Digital ODA
The history of AI in global health is marked by recurring technological fixes that fail to account for the social, economic, and institutional realities of health systems. The case of IBM Watson for Oncology illustrates this pattern clearly (Strickland, 2019). Trained on clinical data and treatment protocols from high-resource American cancer centres, Watson was deployed in hospitals in India with the expectation that it could standardise and improve care (Strickland, 2019). It failed not because of technical malfunction, but because of structural mismatch: its recommendations were frequently economically unaffordable, clinically misaligned, or irrelevant to local epidemiological contexts. In many instances, Watson suggested costly immunotherapies in settings where even basic chemotherapy posed a financial burden. Rather than bridging gaps in care, it exposed them. This case demonstrates a broader principle: AI does not generalise across contexts; it reproduces the clinical, economic, and epistemic conditions of its origin (Strickland, 2019; Kelly et al., 2019). When systems developed in high-resource environments are exported without deep socio-contextual adaptation, they risk reinforcing the very inequities they are intended to address.
Crucially, this dynamic is not confined to past failures. It is being reproduced in contemporary development strategies, including those of emerging donors such as the Republic of Korea. The 4th Basic Plan for International Development Cooperation (2026-2030) (Korean: 제4차 국제개발협력 종합기본계획 (2026~2030)) identifies digitalisation and AI as central pillars of Korea’s ODA strategy (Government of the Republic of Korea, 2026), promoting the export of Korean digital health platforms and AI-based diagnostic tools under the banner of K-ODA. While framed as innovation-driven solidarity, this approach may reflect a supply-side orientation in development cooperation, in which technological solutions are shaped by the capabilities and commercial incentives of donor countries. As with IBM Watson, there is a risk that these systems are designed in one context and deployed in another without sufficient alignment to local health system realities. This orientation is further reinforced by financing and procurement structures that prioritise scalable, exportable technologies linked to domestic industrial policy.
The implications for health systems are significant. For instance, Korean AI firms such as Lunit have developed highly accurate tools for tuberculosis diagnosis. Yet diagnostic accuracy alone does not translate into improved health outcomes. In contexts with weak referral systems, fragmented treatment pathways, and limited access to medicines, enhanced detection may generate demand that health systems are not equipped to meet. In this sense, AI does not resolve systemic constraints; it exposes and, in some cases, intensifies them.
Moreover, the export-oriented nature of K-Digital ODA raises concerns about technological lock-in. If funding and procurement mechanisms favour proprietary platforms, partner countries may become dependent on externally developed systems, limiting their capacity for adaptation, regulation, and local innovation. This creates path dependency, in which technological trajectories are shaped externally rather than by domestic priorities. The lesson from IBM Watson is therefore not historical but structural. The same assumptions that underpinned its failure, namely the universality of technical solutions and the marginalisation of local context, continue to shape contemporary AI-driven development models. Without a fundamental shift in how technologies are designed and governed, AI risks becoming not a tool for equity, but a mechanism for reproducing asymmetry in global health.
Ⅳ. Digital Colonialism and Algorithmic Dependency
The expansion of AI in global health is not only a question of innovation, but of political economy. It raises fundamental concerns about who captures value from data and who remains dependent on its outputs. When health data generated in LMICs, such as radiological images from community clinics, are used to train proprietary algorithms without equitable benefit-sharing, this dynamic reflects what has been described as digital colonialism (Reba et al., 2026). In this model, data function as extractive resources. They are collected in resource-constrained settings, processed and valorised in high-income contexts, and reintroduced as commercial products. The value chain is structurally asymmetrical: LMICs supply the raw material, while economic and epistemic gains are concentrated elsewhere.
This asymmetry extends beyond economics into clinical practice. The growing reliance on externally developed black-box systems risks producing a form of algorithmic dependency, in which healthcare providers become end-users of decision-support tools rather than active knowledge producers. Over time, this can erode clinical judgement, particularly in settings where training systems are already constrained. Such dependency has systemic consequences for the resilience and autonomy of health systems. When diagnostic reasoning is outsourced to opaque systems, the resilience of health systems becomes contingent on technologies that are externally governed and potentially inaccessible. In cases where AI systems fail or encounter unfamiliar conditions, local capacity to respond may be diminished.
Digital colonialism, therefore, is not only about the extraction of data, but about the restructuring of authority in knowledge and practice. It creates a condition in which LMICs are simultaneously data providers and dependent users, reinforcing a cycle of technological reliance that undermines long-term health system autonomy (Reba et al., 2026). If the problem is structural rather than technical, then technical solutions alone cannot resolve it. What is required is not better AI, but a reconfiguration of how AI is governed, designed, and embedded within global health systems.
Ⅴ. Reconfiguring Power: From Beneficiaries to Co-Architects
Moving beyond critique requires a fundamental redistribution of power in how AI is governed, designed, and embedded within health systems. Rather than treating LMICs as sites of implementation, a meaningful alternative must address three interconnected dimensions: governance, design, and political economy.
First, governance must shift toward data sovereignty and shared authority. Current AI ecosystems are characterised by asymmetrical data flows, where health data from LMICs are extracted, processed, and commercialised externally. Addressing this requires institutional mechanisms that ensure LMIC actors retain control over how data are collected, used, and monetised. This includes frameworks for data ownership, equitable data-sharing agreements, and meaningful participation in algorithmic governance. Without such structures, calls for “ethical AI” risk remaining rhetorical rather than transformative.
Second, AI design must move from deployment to co-development. The dominant model of exporting pre-built AI systems assumes that health problems are universal and transferable across contexts. In reality, disease patterns, clinical practices, and resource constraints are deeply contextual. Co-development approaches, such as engaging local clinicians, policymakers, and technologists from the outset, can produce systems that are not only technically effective but socially and institutionally appropriate. This requires rethinking innovation pipelines to prioritise contextual adaptation over scalability, and usability over technological sophistication.
Third, the political economy of AI must incorporate benefit-sharing and local innovation ecosystems. If LMICs’ data contribute to the development of commercially valuable AI systems, reciprocal benefits must follow. These may include preferential access to technologies, reinvestment in local health systems, or support for domestic digital health industries. Strengthening local innovation capacity through investment in education, infrastructure, and entrepreneurship is essential for enabling LMICs to transition from data suppliers to technology producers.
These shifts reframe AI from a delivered product to a negotiated system. The central question is no longer how to deploy AI efficiently, but how to structure the conditions under which it is governed and developed. Without such restructuring, AI may continue to reproduce inequality under the guise of innovation.
Ⅵ. Conclusion: Toward Epistemic Justice
The expansion of AI in global health is fundamentally a process of market formation. As Karl Polanyi argued, markets are constructed through political and institutional choices (Cangiani, 2011). The emerging digital ODA landscape is being shaped through procurement rules, donor funding structures, and regulatory frameworks that determine whose knowledge and priorities prevail. The question is not whether LMICs will participate in this system, but on what terms. If AI is governed primarily by private-sector incentives prioritising scalability and return on investment, it will reproduce and deepen inequality. Re-politicising AI governance is therefore essential.
Justice in this context requires more than access to technology. It requires authority over how technology is defined, developed, and deployed. This entails shifting from deploying AI in LMICs to designing it with them, from extracting data to building systems grounded in local knowledge, and from exporting models to supporting digital sovereignty.
Ultimately, AI in global health is not a question of innovation, but of power. The future of global health will depend on whether LMICs remain sites of extraction or become architects of their own digital futures. Only then can AI serve as a vehicle for epistemic justice, rather than a mechanism of its reproduction.
References
Abimbola, S. (2019). The foreign gaze: Authorship in academic global health. BMJ Global Health, 4, e002068. https://doi.org/10.1136/bmjgh-2019-002068
Birhane, A. (2020). Algorithmic colonization of Africa. SCRIPTed, 17(2), 389–409. https://doi.org/10.2966/scrip.170220.389
Cangiani, M. (2011). Karl Polanyi’s institutional theory: Market society and its “disembedded” economy. Journal of Economic Issues, 45(1), 177–197. https://www.jstor.org/stable/25800760
Fahim, Y. A., Hasani, I. W., Kabba, S., & Ragab, W. M. (2025). Artificial intelligence in healthcare and medicine: Clinical applications, therapeutic advances, and future perspectives. European Journal of Medical Research, 30(1), 848. https://doi.org/10.1186/s40001-025-03196-w
Government of the Republic of Korea. (2026). 제4차 국제개발협력 종합기본계획 (2026-2030) [The 4th Basic Plan for International Development Cooperation (2026-2030)]. https://www.odakorea.go.kr/kor/cont/ContShow?cont_seq=21
Herzog, C., & Branford, J. (2025). Relational ethics and structural epistemic injustice of AI in medicine. Philosophy & Technology, 38, 160. https://doi.org/10.1007/s13347-025-00987-1
Kelly, C. J., Karthikesalingam, A., Suleyman, M., Corrado, G., & King, D. (2019). Key challenges for delivering clinical impact with artificial intelligence. BMC Medicine, 17(1), 195. https://doi.org/10.1186/s12916-019-1426-2
Reba, Y. A., Arim, D. R., Wafumilena, R. A., Balriyanan, A. F., Mabikafola, F., & Djemani, T. S. G. (2026). Data colonialism and digital sovereignty in the Global South. Development in Practice. https://doi.org/10.1080/09614524.2025.2609155
Strickland, E. (2019). How IBM Watson overpromised and underdelivered on AI health care. IEEE Spectrum, 56(4), 32–38. https://doi.org/10.1109/MSPEC.2019.8678513
World Health Organization. (2021). Ethics and governance of artificial intelligence for health. https://www.who.int/publications/i/item/9789240029200
