Ethical & Legal Considerations¶
This work is licensed under a Creative Commons Attribution 4.0 International License.
Foundations of the Ethical Principles for AI¶
This lesson focuses on the ethical principles that ground AI in a legal landscape.
Science Fiction or a Philosophical Theory?¶
In the early 1950s, Alan Turing, the father of modern computing, proposed a test for machine intelligence: a human being should be unable to distinguish the machine from another human being based on the replies to questions put to both.
Author Isaac Asimov wrote a series of popular science fiction novels from the 1950s through the 1980s. His work continues to be adapted into television series and movies. In his novels, Asimov developed the Three Laws of Robotics, which described how artificial intelligence interacted with humanity in his fictional universe.
The Three Laws
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Asimov later wrote of a 'zeroth' law, which superseded the first three:
0. A robot may not injure humanity or, through inaction, allow humanity to come to harm.
Asimov's Three Laws are difficult to interpret in a real-world setting, and Asimov himself spent much of his novels describing creative and unexpected ways in which the Three Laws were twisted yet not broken. The Three Laws are untenable as a legal framework, but they do represent a moral and ethical starting point from which we can think about AI and the legal rights of non-biological beings.
In 1978, another science-fiction author, Sir Arthur C. Clarke, offered an interesting perspective on how humanity would have to come to terms with AI once its capabilities surpass our own.
Recently, researchers published findings showing that current GPTs are capable of passing Turing tests. As our conception of intelligence shifts (Mitchell 2024), largely in reaction to the release of ChatGPT and its myriad competitors, new variants of the Turing Test are being proposed.
Importantly, current AI exposes the limits of Turing Tests based on imitation without comprehension.
The Turing Trap is a term coined by Stanford University professor Erik Brynjolfsson to describe the idea that focusing too much on developing human-like artificial intelligence (HLAI) can be detrimental.
Brynjolfsson argues that the real potential of AI lies in its ability to augment human abilities, rather than replacing them. He suggests that we should work on challenges that are easy for machines and hard for humans, rather than the other way around.
Beware the Turing Trap
Automation can replace humans. HLAI can replace humans in the workplace, which can lead to:

- **Lower wages**: As machines become better substitutes for human labor, wages can be driven down.
- **Loss of economic and political power**: Workers can lose economic and political bargaining power and become increasingly dependent on those who control the technology.
- **Decision-making processes that incentivize automation**: Companies may choose to automate tasks to do the same thing faster and cheaper.
- **Misaligned incentives**: The risks of the Turing Trap are increased by the misaligned incentives of technologists, businesspeople, and policy-makers.
This text was written by AI and then reviewed by a human. Do you still trust it?
Researchers have found disclosing the use of AI makes people trust you less.
Ethical AI¶
In "A Unified Framework of Five Principles for AI in Society" (Floridi & Cowls 2019) core principles for ethical AI are introduced (Table 1).
Table 1: Floridi & Cowls (2019) Five principles for AI in Society¶
Beneficence | Non-Maleficence | Autonomy | Justice | Explicability |
---|---|---|---|---|
Promoting Well-Being, Preserving Dignity, and Sustaining the Planet | Privacy, Security and ‘Capability Caution’ | The Power to Decide (Whether to Decide) | Promoting Prosperity, Preserving Solidarity, Avoiding Unfairness | Enabling the Other Principles through Intelligibility and Accountability |
International Agreements on AI¶
A milestone in the ethics of artificial intelligence occurred in January 2017 in Pacific Grove, California, at the historic Asilomar Hotel and Conference Grounds (Table 2). There, the Asilomar AI Principles were signed by leading AI researchers, ethicists, and thought leaders.
By 2021, UNESCO had created their own recommendations on AI, focused on human rights and sustainable development.
Table 2: International AI agreements¶
Agreement | Date | Signatories | Description |
---|---|---|---|
Asilomar AI Principles | January 2017 | AI researchers, ethicists, and thought leaders | A set of 23 principles designed to guide the development of beneficial AI, covering research, ethics, and long-term issues. |
Toronto Declaration | May 16, 2018 | Amnesty International, Access Now, Human Rights Watch, Wikimedia Foundation, and others | A declaration advocating for the protection of the rights to equality and non-discrimination in machine learning systems. |
OECD AI Principles | May 22, 2019 | OECD member countries and others | Principles to promote AI that is innovative and trustworthy and that respects human rights and democratic values. |
G20 AI Principles | June 9, 2019 | G20 member countries | A commitment to human-centered AI, building upon the OECD AI Principles, emphasizing inclusivity, transparency, and accountability. |
WHO Ethics and Governance of Artificial Intelligence for Health | June 2021 | Ministries of Health of WHO member states | Guidance resulting from eighteen months of deliberation among experts convened by WHO. |
UNESCO Recommendation on the Ethics of Artificial Intelligence | November 2021 | UNESCO member states | A global framework to ensure that digital transformations promote human rights and contribute to the achievement of the Sustainable Development Goals. |
European Union Artificial Intelligence Act | July 2024 | EU member countries | A risk-based regulatory framework that classifies AI systems by risk level and sets corresponding obligations, including rules for general-purpose AI. |
In response to the rapid rise of generative AI, specifically GPTs, new agreements on the military use of AI, on AI safety, and on AI adoption in business and industry were recently signed (Table 3).
Table 3: Declarations on AI¶
Agreement | Date | Signatories | Description | Source |
---|---|---|---|---|
Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy | February 16, 2023 | United States and 50 other countries | A declaration outlining principles for the responsible use of AI and autonomy in military applications. | U.S. Department of State |
International Network of AI Safety Institutes | May 2024 | United Kingdom, United States, Japan, France, Germany, Italy, Singapore, South Korea, Australia, Canada, European Union | A network formed to evaluate and ensure the safety of advanced AI models through international collaboration. | The Independent |
AI Safety Agreement between the UK and US | June 2024 | United Kingdom, United States | An agreement to collaborate on testing advanced AI models to ensure safety and manage risks. | BBC News |
Framework Convention on Artificial Intelligence | September 5, 2024 | United States, United Kingdom, European Union, Andorra, Georgia, Iceland, Norway, Republic of Moldova, San Marino, Israel | The first legally binding international treaty on AI, aiming to ensure AI activities are consistent with human rights, democracy, and the rule of law. | Council of Europe |
AI Alliance Network | December 11, 2024 | Russia, BRICS countries (Brazil, China, India, South Africa), Serbia, Indonesia, and others | An initiative to develop AI collaboratively, focusing on joint research, regulation, and commercialization of AI products among member countries. | Reuters |
Current Legislation¶
National Conference of State Legislatures (NCSL) Artificial Intelligence 2025 Legislation
The previous administration had proposed a "Blueprint for an AI Bill of Rights" and an executive order on the "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence", which has since been rescinded.
The current administration has instead focused most of its efforts on executive orders related to AI and federal agencies. Pending legislation would bar states from enforcing AI regulations.
2025 Executive Orders
As of this writing, no comprehensive federal laws or regulations have been enacted to specifically regulate AI in the United States of America.
AI Ethics¶
What are we talking about, the Ethics of AI, or Ethical AI? How are they different?
They are not the same thing
Siau and Wang (2020) delineate "Ethics of AI" and "Ethical AI" as:
Ethics of AI: studies the ethical principles, rules, guidelines, policies, and regulations related to AI.
Ethical AI: AI that performs or behaves ethically.
As consumers of GPTs and other AI platforms, we must consider how we can use AI both effectively and ethically.
When can you use a GPT for research and education?
graph TB
A((Start)) --> B("Does it matter if the outputs are true?");
B -->| No | F("Safe to use GPT");
B -->| Yes | C("Do you have the ability to verify output truth and accuracy?");
C -->| Yes | D("Understand legal and moral responsibility of your errors?");
C -->| No | E("Unsafe to use GPT");
D -->| Yes | F("Safe to use GPT");
D -->| No | E("Unsafe to use GPT");
style A fill:#2ECC71,stroke:#fff,stroke-width:2px,color:#fff
style B fill:#F7DC6F,stroke:#fff,stroke-width:2px,color:#000
style C fill:#F7DC6F,stroke:#fff,stroke-width:2px,color:#000
style D fill:#F7DC6F,stroke:#fff,stroke-width:2px,color:#000
style E fill:#C0392B,stroke:#fff,stroke-width:2px,color:#fff
style F fill:#2ECC71,stroke:#fff,stroke-width:2px,color:#fff
Figure credit: ChatGPT and Artificial Intelligence in Education, UNESCO 2023
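The decision logic in the flowchart above can be sketched as a small Python function. The function name and parameters are illustrative, not from the source; they simply encode the three questions in the diagram.

```python
def safe_to_use_gpt(truth_matters: bool,
                    can_verify: bool = False,
                    accepts_responsibility: bool = False) -> bool:
    """Return True if GPT use is 'safe' per the flowchart above.

    truth_matters: does it matter if the outputs are true?
    can_verify: can you verify output truth and accuracy?
    accepts_responsibility: do you understand the legal and moral
        responsibility for your errors?
    """
    if not truth_matters:
        return True          # outputs need not be true: safe to use
    if not can_verify:
        return False         # cannot verify: unsafe to use
    return accepts_responsibility  # verified AND responsibility understood
```

For example, brainstorming fictional character names (`truth_matters=False`) is safe, while citing unverified GPT output in a literature review (`truth_matters=True`, `can_verify=False`) is not.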
Recent Controversy¶
Maps of AI Copyright Lawsuits¶
Master list of current lawsuits against AI companies
Current AI models are overwhelmingly based on European and North American historical literature and language. Over half of the content on the internet is written in English. This creates a Eurocentric bias in AI training data, resulting in an erasure of global culture, experience, and language. Such asymmetries need to be addressed, but at present there is a lack of economic incentives for large tech companies and organizations (see The Imitation Game above).
The Bullshit Machines
Professors Carl T. Bergstrom and Jevin D. West teach a course at the University of Washington titled "Calling Bullshit". They have written an e-book on GPTs called:
"Modern-Day Oracles or Bullshit Machines?"
Their website provides online lesson vignettes and materials for instructors.
Negative consequences of GPTs' explosion into the public space include their misuse as well as their adoption for illegal activity.
There are deep ethical concerns about the use of AI like GPT and LLMs, particularly concerning their training data.
AI companies also effectively steal designs, visual art, and music styles to train their private models.
ChatGPT has effectively gamified higher education and is being used to spread disinformation and hate speech.
Recent Literature¶
Here are some recent papers that discuss the ethical concerns surrounding AI:
- "AI Safety and the Age of Convergences" (2024) - Schuett, J., & Korinek, A. https://doi.org/10.48550/arXiv.2401.06531
- "On the Opportunities and Risks of Foundation Models" (2023) - Bommasani et al. https://doi.org/10.48550/arXiv.2108.07258
- "Unraveling the Ethical Conundrum of Artificial Intelligence: A Synthesis of Literature and Case Studies" (2025) - Poli, P.K.R., Pamidi, S., & Poli, S.K.R. Augment Hum Res 10, 2. https://doi.org/10.1007/s41133-024-00077-5
- "The Ethics of Artificial Intelligence in Education: A Review of the Literature" (2023) - Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. https://doi.org/10.1007/s10639-019-09882-z
- "The Ethical Challenges of Algorithmic Bias in Artificial Intelligence: a scoping review" (2023) - Borenstein, J., Glikson, E., & Krishnamurthy, V. https://doi.org/10.1007/s43681-023-00313-z
- "Ethics of Artificial Intelligence" (2020) - Liao, S. M. https://doi.org/10.1093/oso/9780190905033.001.0001
- "The Ethics of AI Ethics: An Evaluation of Guidelines" (2020) - Hagendorff, T. Minds & Machines 30, 99–120. https://doi.org/10.1007/s11023-020-09517-8
Assessment¶
True or False: The "Turing Trap" primarily warns against the socio-economic disruptions and misaligned incentives that arise from an overemphasis on creating AI that imitates human intelligence.
True
The "Turing Trap", coined by Stanford University professor Erik Brynjolfsson, describes the idea that focusing too much on developing human-like artificial intelligence (HLAI) is detrimental.
Brynjolfsson further elaborates risks like lower wages, loss of economic power, and misaligned incentives due to automation replacing humans.
True or False: The concepts of "Ethics of AI" and "Ethical AI" are fundamentally distinct.
True
Siau and Wang (2020): "Ethics of AI: studies the ethical principles, rules, guidelines, policies, and regulations related to AI." and "Ethical AI: is AI that performs or behaves ethically."
Multiple Choice: According to Floridi & Cowls' (2019) "Unified Framework of Five Principles for AI in Society," which principle most directly underscores the importance of AI systems being designed to be understandable, traceable, and accountable for their operations and decisions?
- A) Beneficence
- B) Non-Maleficence
- C) Justice
- D) Explicability
Answer
D) Explicability
Table 1 from Floridi & Cowls (2019) describes Explicability as "Enabling the Other Principles through Intelligibility and Accountability." This directly relates to AI systems being understandable, traceable, and accountable.
Multiple Choice: The Asilomar AI Principles, established in 2017, are best characterized as:
- A) A legally binding international treaty mandating specific safety protocols for all AI development.
- B) A technical specification for building universally safe Artificial General Intelligence.
- C) A foundational set of guiding principles addressing research ethics, societal values, and long-term considerations for developing beneficial AI.
- D) A corporate social responsibility charter adopted exclusively by major technology companies.
Answer
C) A foundational set of guiding principles addressing research ethics, societal values, and long-term considerations for developing beneficial AI.
Table 2 describes the Asilomar AI Principles as "A set of 23 principles designed to guide the development of beneficial AI, covering research, ethics, and long-term issues." This aligns with option C and not with the descriptions of a legally binding treaty, a technical specification, or an exclusive corporate charter.
What recent international agreement is the "first legally binding international treaty on AI," specifically designed to ensure that AI activities are developed and applied in a manner consistent with human rights, democracy, and the rule of law?
Success
Framework Convention on Artificial Intelligence
Table 3 lists the Framework Convention on Artificial Intelligence (September 5, 2024) with the description: "The first legally binding international treaty on AI, aiming to ensure AI activities are consistent with human rights, democracy, and the rule of law."
True or False: The United States has the strongest regulations and most comprehensive federal laws specifically enacted to regulate AI.
False
To date, the US has no federal laws regulating AI. Current legislation around AI is happening at the state level, though pending federal legislation may bar states from enforcing AI regulations. Currently, the administration favors executive orders.
On the other side of the pond, the EU has enacted regulations through the European Union Artificial Intelligence Act, whose provisions phase in between 2024 and 2031.