AI Ethics:
The Dark Side of AI
Transparency statement: This presentation was built with the assistance of multiple generative AI tools, namely, Claude, ChatGPT, Gamma, and Flux. These tools were used to organize the presenter's notes, edit textual elements, divide the information into slides, and generate images.

The content is partly based on Leon Furze's posts on Teaching AI Ethics: https://leonfurze.com/ai-ethics/
Extraordinary Discovery:
Ethical use of AI tools results in better AI outputs!
The Ethics of AI in College: Your Choice
1
Choosing not to use AI for a school assignment knowing that the teacher expects you to have written the text and generated the ideas yourself.
2
Choosing not to use AI because you do not want to be caught and suffer the consequences of academic dishonesty.
3
Choosing not to use AI to avoid gaining an unfair advantage over students who did not use AI.
Unethical AI User Profile
Personal Gain Focus
An unethical AI user primarily employs AI tools for personal convenience or advantage, without considering broader implications.
Disregard for Consequences
They knowingly ignore or fail to adequately consider the potential negative impacts of their AI use on others, society, or the environment.
Ethical AI User Profile
Thoughtful Consideration
An ethical AI user carefully considers the wider impact of their AI use, reflecting on potential consequences on others, society, and the environment.
Adaptation of Behavior
They adapt their AI use to limit potential negative consequences for others, society, and the environment.
Value-Driven Choices
Ethical users modify their behavior based on core values, choosing alternatives that align with their personal beliefs as to what is right and what is wrong.
Continuous Learning
They stay informed about AI developments and continuously reassess their usage patterns.
Positive Values Associated with AI Use?
Positive Values Associated with AI Use
Productivity
AI tools can significantly enhance work speed and output in various tasks.
Enjoyment
Using AI can make certain tasks more engaging, interactive, and enjoyable for users.
Quality
AI assistance can potentially improve the overall quality of work in certain contexts.
The Dark Side of AI: What values are potentially compromised by AI use?
Some Troubling Facts about AI:
There are plenty of reasons for choosing not to use AI tools
1
Broad Societal and Environmental Ethical Concerns
2
Interpersonal and Professional Ethical Concerns
Broad Societal and Environmental Ethical Concerns
Environmental Impact of AI
Energy / Water Consumption
Carbon Footprint
Mitigation Efforts
  • Data centers and Cloud-based AI consume enormous amounts of electricity
  • Cooling systems for server farms require additional energy and water (e.g., one AI-generated email ≈ two cups of water)
  • AI model training emits substantial carbon (e.g., training GPT-2 ≈ the lifetime emissions of five cars)
  • Generating content adds to the impact (e.g., one image ≈ fully charging a phone)
  • AI hardware requires rare earth minerals, the extraction of which has significant impacts
  • Efficient, smaller (mini) models
  • Model optimization for reduced environmental impact
  • Improved training efficiency and renewable energy use
Violation of Intellectual Property and Copyright
1
Training Data Issues
Most currently available generative AI models have been trained in part on content (texts and images) from the internet, without the original creators' consent.
2
Legal Challenges
The New York Times lawsuit against OpenAI (Dec. 2023) is the first major legal test of copyright claims over AI training data.
3
Industry Response
While being sued, AI companies like Apple and OpenAI are now striking deals with content publishers to acquire the rights to use their content.
Data Labeling Exploitation
Low Wages
OpenAI used workers in Kenya paid between $1.32 and $2 per hour to label harmful content for its AI safety system.
Global Inequality
The disparity between the value created by AI and the compensation for those involved in its development highlights issues of global economic inequality.
Disturbing Content
These workers were exposed to graphic situations of abuse, murder, and self-harm as part of their labeling tasks.
Privacy Concerns in AI
Training Issues
OpenAI faced regulatory issues in Italy for using personal information of millions of Italians in ChatGPT's training data without proper justification.
User Concerns
Many users worry that the information they provide to AI tools might be used to train future models or sold as personal data.
Data Protection
These cases highlight the need for stronger data protection measures and transparency in AI training processes.
Misinformation and AI
False Information Spread
AI tools like Midjourney and ChatGPT can be used to spread false information, as demonstrated by Eliot Higgins, who used Midjourney to create viral fake images of Donald Trump being arrested.
Incorrect Information ("hallucinations")
AI generative tools also generate incorrect information as if it were true, potentially deceiving users.
Legal Implications
Steven Schwartz, a lawyer, submitted six fake case precedents generated by ChatGPT to the court in a case against Avianca Airlines (2023), claiming he did not know AI tools could generate inaccurate information.
Biased Training Data >> Biased Outputs
1
Training Data Bias
Large language models are trained on vast amounts of web data, which includes racist, sexist, ableist, and otherwise discriminatory language, potentially perpetuating these biases.
2
WEIRD Bias
ChatGPT and similar models are biased towards WEIRD: views expressed in their outputs resemble those of people from Western, Educated, Industrialized, Rich, and Democratic societies.
3
English Language Dominance
English is overwhelmingly represented on the web compared to other world languages, skewing the data toward English-speaking populations.
Bias in AI-Generated Images
LLMs as Yes-Men: Input >> Output
Reflection of User Bias
Large language models tend to reflect and reinforce the biases or opinions in the user's input.
Garbage In >> Garbage Out
Continuation Mechanism
The AI's output is an extension of the user's input, naturally agreeing with or expanding on the given perspective.
Lack of Contradiction
Unless there is something unethical, potentially harmful, or glaringly wrong or exaggerated in the input, the AI will go along with and expand on the given perspective.
Anchoring Effect
AI may rely too heavily on an initial piece of information in the user's input—the “anchor.” This can lead to inaccuracies or biases as the output is pulled in the direction of the anchor.
Interpersonal and Professional Ethical Considerations
Lack of Transparency in AI Use
1
Deception Risks
AI-generated content can be passed off as human-created, leading to deception.
2
Trust Breach
Overreliance on AI without disclosing its use can undermine honesty and trust, leading to ethical breaches in academic, professional, and personal settings.
3
Skill Discrepancy
Regular undisclosed AI use can create a discrepancy between perceived and actual skills or knowledge.
Lack of Explainability
Black Box Problem
AI models function as "black boxes," making it difficult to understand how they arrive at specific outputs.
Bias Detection Difficulties
The lack of explainability in AI decisions can lead to unintended biases or errors that are hard to detect and correct.
Accountability Issues
In professional settings, the inability to explain AI-generated results can undermine accountability and decision-making processes.
Degradation of Human Communication
Efficiency vs. Connection
Efficiency gained through AI use may come at the cost of meaningful human connections, reducing the authenticity and depth of interactions.
AI-Mediated Interaction
AI-generated responses may lack the nuance and context-awareness necessary for sensitive conversations.
Skill Atrophy
Overuse of AI for communication may erode empathy and emotional intelligence, causing difficulties with spontaneous communication and real-time interactions.
Compromise of Human Autonomy
1
Loss of Personal Voice & Expertise
Overuse of AI can lead to a loss of personal voice and erosion of professional expertise, making it harder to stand out in professional settings.
2
Critical Thinking Decline
Constant reliance on AI can weaken critical thinking skills.
3
Devaluation of Human Labor
Widespread AI adoption may undervalue non-AI-assisted work, risking human job displacement in certain fields.
For all these reasons, opting out is a legitimate choice.
Ethical AI Prompting:
Six Actions
Protect Your Privacy
Be Transparent
Validate AI Outputs
Personalize Your Inputs
Practice Digital Sobriety
Prioritize Human Value
Basic prompts that lack personal insights and detailed instructions generate outputs based solely on generic patterns in the AI's training data. This raises ethical concerns, as much of this data has been used without the original creators' consent. By incorporating your unique perspective into your prompts, you reduce reliance on these patterns and contribute to more ethical and original content creation.
Following these ethical guidelines not only ensures responsible AI use but also leads to better prompting and higher quality AI outputs ≈ more original / less generic.
Three personal values & actions I care about:
Truth & Avoiding Bias:
Validate and cross-check AI outputs
Apply critical thinking to AI content. Verify it with reliable sources. Create neutral prompts. Ask open-ended questions. Ask for diverse perspectives. Provide step-by-step instructions to increase explainability.
Human Autonomy &
Intellectual Property:
Incorporate my own voice and ideas
Build complex prompts infused with personal insights to: reflect my own perspective, produce original content, ensure authentic communication, avoid generic patterns, and respect human creators.
Transparency & Trust:
Be transparent about my use of AI
Honestly disclose when I use AI assistance and clearly differentiate between AI-generated and human-authored content.
the end