AI for humanitarian action: Human rights

This article was written by Saakshi S. Rawat, a student of Bennett University.

The COVID-19 pandemic that has ravaged the world has been terrible on many levels. However, as the UN Secretary-General has stated, the pandemic has also provided an opportunity to learn about the importance of global response operations: the world is “watching directly how information devices help to fight the risk and keep people linked.”[1] Almost all of these data-driven responses rely on artificial intelligence (AI). Governments and international organisations have used the predictive capability, flexibility, and accessibility of AI systems to construct prediction models of the virus’s transmission and to facilitate molecular-level research.[2] From contact tracing and other forms of pandemic monitoring to clinical and genetic research, AI and other data-driven initiatives have proven critical to slowing the disease’s spread, advancing important medical research, and keeping the wider public informed. This article focuses on how AI may help advance the UN Sustainable Development Goals (SDGs). Its objective is to examine how a governance framework might help achieve the SDGs and other humanitarian goals. Accordingly, rather than focusing on malevolent applications of AI, it concentrates on risks and harms that may arise unintentionally or incidentally from uses intended to serve a lawful purpose.
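
To make “prediction models of the virus’s transmission” concrete in its simplest form, the sketch below implements a textbook SIR (susceptible-infected-recovered) model in Python. The parameter values and function are illustrative assumptions for this article, not those of any real COVID-19 forecasting system, which would be far more sophisticated.

```python
# Minimal SIR transmission model, a toy stand-in for the prediction models
# discussed above. All parameters are illustrative assumptions.
import numpy as np

def sir(beta, gamma, s0, i0, r0, days, dt=0.1):
    """Euler-integrate dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I, dR/dt = gamma*I."""
    s, i, r = s0, i0, r0
    history = []
    for _ in range(int(days / dt)):
        ds = -beta * s * i
        di = beta * s * i - gamma * i
        dr = gamma * i
        s, i, r = s + ds * dt, i + di * dt, r + dr * dt
        history.append((s, i, r))
    return np.array(history)

# Example run: 0.1% of the population initially infected, R0 = beta/gamma = 2.5.
trajectory = sir(beta=0.25, gamma=0.1, s0=0.999, i0=0.001, r0=0.0, days=160)
print(f"Peak infected fraction: {trajectory[:, 1].max():.1%}")
```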

Some of the risks and harms posed by AI are addressed by existing fields and bodies of law, such as data protection and privacy, but many appear to be genuinely new. AI ethics, also known as AI governance, is an emerging field that aims to address the novel concerns these systems raise. To date, it has been dominated by a profusion of AI “ethical principles” intended to guide the design and deployment of AI systems. In recent years, hundreds of organisations, including international organisations, national governments, private corporations, and non-governmental organisations (NGOs), have published their own sets of principles to guide the responsible use of AI, whether within their own organisations or beyond them.[3]

Opportunities in AI

Algorithmic systems are capable of “executing complicated tasks beyond human capabilities and speed, self-learning to better performance, and doing extensive analysis to forecast likely future consequences,” as one analysis puts it.[4] Natural language processing, computer vision, audio and speech recognition, predictive analytics, and advanced robotics are just a few of the technologies available today. These and other tools are already being used in novel ways to supplement development and humanitarian efforts. In humanitarian emergencies, computer vision has been used to automatically detect structures in satellite imagery, allowing for faster tracking of migratory flows and more effective delivery of aid. Several efforts in the developing world are using AI to provide farmers with predictive information, allowing them to avoid the risks of drought and other unfavourable weather and to increase crop yields by planting at the most opportune time.[5] In areas where medical resources are sparse, cutting-edge AI algorithms enable remote detection of medical disorders such as malnutrition. Day after day, the list becomes longer. The expansion of AI into these and many other sectors can be attributed to a number of factors. The data revolution, which has seen the exponential growth of data sets essential to development and humanitarian work, may be the most crucial catalyst.[6] Data is the lifeblood of AI: a model cannot learn unless it is trained on relevant data sets. Finding good data has always been more challenging in emerging markets, particularly in least developed countries (LDCs) and humanitarian situations, where technological infrastructure, funding, and expertise are typically lacking.
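
To make the computer-vision use case above more concrete, here is a minimal Python sketch of a generic semantic-segmentation pipeline of the kind that can flag built structures in satellite tiles. It is not the Pulse Satellite implementation cited in note [5]: the off-the-shelf torchvision model, the input file name, and the “structure” class index are all assumptions for illustration.

```python
# Illustrative sketch only: a generic semantic-segmentation pipeline of the
# kind used to pick out structures in satellite imagery. Model, input file,
# and class mapping are assumptions for the example.
import torch
from torchvision import models, transforms
from PIL import Image

# A general-purpose pretrained segmentation model, standing in for a model
# fine-tuned on labelled satellite tiles.
model = models.segmentation.deeplabv3_resnet50(weights="DEFAULT")
model.eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

tile = Image.open("satellite_tile.png").convert("RGB")  # hypothetical input
batch = preprocess(tile).unsqueeze(0)

with torch.no_grad():
    scores = model(batch)["out"][0]  # per-pixel class scores
mask = scores.argmax(0)              # predicted class per pixel

# Count pixels assigned to a hypothetical "structure" class to estimate
# built-up coverage in the tile.
STRUCTURE_CLASS = 15                 # placeholder class index
coverage = (mask == STRUCTURE_CLASS).float().mean().item()
print(f"Estimated structure coverage: {coverage:.1%}")
```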

Key challenges for rights-respecting AI

Lack of transparency and explainability

The “black box” problem refers to the fact that AI systems are often opaque even to the humans who deploy them. Unlike traditional algorithms, the judgments made by machine learning (ML) or deep learning (DL) processes can be difficult for humans to trace, and hence to inspect or otherwise explain to the public and to those in charge of overseeing their use. As a result, AI systems may be inscrutable to the individuals affected by them, making it difficult to ensure accountability when algorithms cause harm. Because of this opacity, individuals may be unable to recognise whether and why their rights have been infringed, and hence to seek redress for those infringements. Furthermore, even where understanding the system is possible, it may require a level of technical expertise that most individuals lack. This can stymie efforts to remedy the problems posed by AI systems.
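
One response to this opacity is post-hoc explainability tooling. The sketch below shows a simple example, permutation feature importance, applied to a toy model; the synthetic data and scikit-learn classifier are assumptions for illustration, and auditing a real deployed system without access to its internals is far harder, which is precisely the problem described above.

```python
# Minimal sketch of one common post-hoc explainability technique:
# permutation feature importance. The dataset and model are toy stand-ins.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data standing in for, e.g., an eligibility-screening model.
X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in accuracy: a large
# drop suggests the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```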

Accountability

At both a governance and an operational level, this lack of transparency and explainability can substantially hamper effective accountability for harms produced by automated judgments. The issue is two-fold. First, people are frequently unaware of when and how AI is employed to determine their rights. Individuals are rarely aware of the “range, intensity, or even presence of quantitative decision-making processes that could have an impact on their rights and dignity,” as former UN Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression David Kaye cautioned. As a result, individual notification about the use of AI systems is “nearly inherently unattainable.”

This is particularly true in humanitarian situations, where people are frequently unable to give informed consent to data collection and processing. Second, the data economy’s secrecy and lack of accountability for human rights can make it harder for people to learn about violations of their rights and to seek remedies when they occur. Even qualified professionals or fact-finders may find it challenging to audit these systems and detect flaws. Most development and humanitarian programmes also have a high level of organisational complexity, which can exacerbate these issues. When a single project is made up of a long chain of actors, such as funders, designers, developers, and implementing partners, who is ultimately liable if the system hands down a discriminatory judgment?

Erosion of privacy

The capacity of AI systems to evaluate and draw inferences from large amounts of private or publicly accessible data could have major ramifications for many aspects of the right to privacy as currently protected. AI systems can expose sensitive information about people’s location, social networks, political affiliations, sexual orientation, and more, all based on data that people voluntarily put online or that their digital devices emit inadvertently. These dangers are magnified in humanitarian situations, where the individuals affected by AI systems are likely to be the most disenfranchised. Data or analysis that would not normally be considered sensitive can thus become sensitive in context. Information like names, hometowns, and addresses is usually public, but for a refugee escaping oppression or persecution in their home country, that same information could imperil their safety and security were it to fall into the wrong hands.
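
The refugee example can be made concrete with a standard privacy measure. The sketch below computes a simple k-anonymity figure for a toy data release: how many records share each combination of ordinary-looking fields. The field names and records are invented for illustration; a k of 1 means that at least one person is uniquely identifiable from “harmless” attributes alone.

```python
# Illustrative check of how re-identifiable a "harmless" dataset is:
# count how many records share each combination of quasi-identifiers
# (a simple k-anonymity measure). Fields and values are hypothetical.
import pandas as pd

records = pd.DataFrame({
    "hometown":   ["Aleppo", "Aleppo", "Homs", "Homs", "Homs"],
    "birth_year": [1990, 1990, 1985, 1985, 1992],
    "language":   ["Arabic", "Arabic", "Arabic", "Kurdish", "Arabic"],
})

quasi_identifiers = ["hometown", "birth_year", "language"]
group_sizes = records.groupby(quasi_identifiers).size()

# k = size of the smallest group; k == 1 means at least one person is
# uniquely identifiable from these fields alone.
k = group_sizes.min()
print(f"k-anonymity of this release: k = {k}")
print(group_sizes[group_sizes == 1])  # the uniquely identifiable rows
```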

Human rights as a baseline

Human rights should be the foundation of any effective AI governance regime, according to dialogues convened by UN Global Pulse and UN Human Rights. First, international human rights law (IHRL) provides an internationally legitimate and comprehensive framework for anticipating, preventing, and redressing the risks and harms described above.

Second, States are bound by the international human rights regime. It requires them to create a framework that “prevents human rights breaches, implements monitoring and oversight procedures as safeguards, holds those responsible accountable, and provides a remedy to people and groups who feel their rights have been infringed.”[7]

Third, IHRL focuses its analytical lens on the rights holder and the duty bearer in a given setting, making its concepts much easier to apply in real-world circumstances. Instead of striving for general concepts like “fairness,” human rights law requires AI system developers and implementers to consider who will be affected by the technology and which of their fundamental rights may be violated. This is a highly practical exercise that entails converting higher-level principles into specific risks and harms.

Fourth, in defining specific rights, IHRL identifies the harms that must be avoided, reduced, or remedied. It does so by identifying the outcomes that States and other organisations, such as humanitarian and development actors, should strive for. The UN Committee on Economic, Social and Cultural Rights, for example, has created standards of “accessibility, adaptability, and acceptability” that States must pursue in their social welfare programmes.[8]

Finally, human rights law and jurisprudence provide a framework for balancing rights that are at odds with one another. This is critical when deciding whether to deploy a technical tool that carries both benefits and risks. Human rights law gives guidance in these situations on how and when certain fundamental rights may be limited, namely by applying the concepts of legality, legitimacy, necessity, and proportionality to the proposed AI-related interference.[9]

[1] UN General Assembly, Roadmap for Digital Cooperation: Implementation of the Recommendations of the High-Level Panel on Digital Cooperation, para. 6, available at: https://undocs.org/A/74/821.

[2] Joseph Bullock et al., “Mapping the Landscape of Artificial Intelligence Applications against COVID-19”, Journal of Artificial Intelligence Research, Vol. 69, 2020, www.jair.org/index.php/jair/article/view/12162.

[3] Jessica Fjeld, Nele Achten, Hannah Hilligoss, Adam Nagy and Madhulika Srikumar, Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-based Approaches to Principles for AI, Berkman Klein Center Research Publication No. 2020-1, 14 February 2020.

[4] L. McGregor, D. Murray and V. Ng, p. 310.

[5] UN Global Pulse’s Pulse Satellite project, www.unglobalpulse.org/microsite/pulsesatellite/.

[6] UN Secretary-General’s Independent Expert Advisory Group on a Data Revolution for Sustainable Development, A World That Counts: Mobilising the Data Revolution for Sustainable Development, 2014.

[7] L. McGregor, D. Murray and V. Ng, p. 311.

[8] “Standards of Accessibility, Adaptability, and Acceptability”, Social Protection and Human Rights, https://socialprotection-humanrights.org/framework/principles/standards-of-accessibility-adaptability-and-acceptability/.

[9] ESCR Report, pp. 10–11; N. A. Smuha, above note 88, observing that similar formulas for balancing competing rights are found in the EU Charter, the European Convention on Human Rights, and Article 29 of the UDHR.
