title-long: "Framing the Impact of AI Systems on Individuals' Rights and Interests" | |
title-short: Impact of AI on Persons | |
document-id: individuals | |
tags: | |
- discrimination | |
- privacy | |
- safety | |
- consent | |
# abstract in text format | |
abstract: > | |
  The pervasive integration of AI into our daily lives is reshaping individual experiences,
  relationships, and societal norms in profound and often unsettling ways. This technological
  revolution, while offering numerous benefits, also presents a complex array of challenges that
  demand careful consideration and proactive measures to protect human well-being and autonomy.
# introduction and sections in HTML format
introduction: >
  <p>
  AI is increasingly integral to daily life, shaping individual experiences and societal dynamics
  through personalized recommendations, chat assistants, and care robots. While AI enhances
  convenience and efficiency, it also raises concerns about privacy, mental health, and bias. As AI
  becomes pervasive, questions emerge about individuals' right to understand the technology and its
  implications, underscoring the need to involve diverse groups in its development to avoid
  exacerbating discrimination or creating new inequalities. AI's impact extends beyond direct users
  to passive subjects of the technology, raising ethical concerns about consent and autonomy. AI
  tools like GitHub Copilot are changing work and creativity, prompting questions about the future
  of human skills and over-reliance on AI. Power dynamics are shifting in sectors like healthcare
  and home security as AI systems assume more responsibilities. Understanding AI's multifaceted
  impact is crucial to ensuring that its deployment promotes well-being, equity, and human agency,
  which requires critical examination and active shaping to align the technology with human values
  and rights.
  </p>
  <p>
  Biased and stereotypical AI systems pose a significant threat to mental health and social equity.
  When AI algorithms reflect and amplify societal biases present in their training data, they can
  exacerbate discrimination against marginalized groups. This can manifest in hiring algorithms
  that disadvantage certain demographics or facial recognition systems that perform poorly on
  non-white faces. The psychological impact of interacting with biased AI can be profound,
  reinforcing negative self-perceptions and contributing to mental health issues among affected
  individuals. AI's role in shaping beauty standards and self-perception is also concerning:
  AI-generated and manipulated images on social media and in advertising contribute to unrealistic
  beauty standards and negative body image. Campaigns like Dove's "Keep Beauty Real" emphasize the
  importance of promoting authentic representations of diverse body types and appearances in the
  face of increasingly sophisticated AI-driven image manipulation tools.
  </p>
sections:
  - section-title: Discrimination, Biases, and Accessibility
    section-text: >
      <p>
      Section text, HTML-formatted, TODO
      </p>
  - section-title: Harassment, NCII, and AI-enabled Abuse
    section-text: >
      <p>
      Section text, HTML-formatted, TODO
      </p>
  - section-title: Consent and Control of Personal Data and Work
    section-text: >
      <p>
      Section text, HTML-formatted, TODO
      </p>
  - section-title: Right to an Alternative and Recourse for Harms
    section-text: >
      <p>
      Section text, HTML-formatted, TODO
      </p>
resources:
  - resource-name: Google Doc Topic Card
    resource-url: https://docs.google.com/document/d/18DsFYeaoWcK8VOkl77E9AgdHA7vhyiqUhSmGm-vnIEE
contributions: >
  Avijit Ghosh and Yacine Jernite wrote this document.