To study the impact of responses generated by AI language models like ChatGPT and Google Bard on opinion formation through semantic and sentimental analysis and fact-checking
| Item type | Current library | Collection | Shelving location | Call number | Status | Date due | Barcode |
|---|---|---|---|---|---|---|---|
| Student Project | Vikram Sarabhai Library | Reference | Students Project | SP2023/3690 | e-Book - Digital Access | | SP003690 |
Submitted to: Prof. Sundaravalli Narayanaswami
Submitted by: Kabadi Gauravi Pramod, Sanya Nikita Kachhap
Introduction
The term "Artificial Intelligence (AI)" has become a household name in both corporate and academic spheres, owing to the widespread adoption of AI-based applications. These applications are no longer confined to mimicking human behavior; advances in Natural Language Processing (NLP) have enabled AI text generators such as ChatGPT (Generative Pre-trained Transformer), which have transformed human-computer interaction.
The primary objective of this study is to explore how AI-generated responses can inadvertently transmit societal biases. This investigation spans diverse domains including politics, science, medicine, sports, organizational culture, and environmental change.
In today’s digital age, machines increasingly generate content that humans consume. It is critical to acknowledge that AI-generated texts can exhibit biases, as these models learn from massive datasets that may contain imperfections. While deep technical understanding of NLP is not necessary to detect such biases, awareness of their potential existence is vital. This study proposes to identify and validate these biases through structured methodologies: semantic analysis, sentiment analysis, and fact-checking.
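As a minimal illustration of what a sentiment check on generated text might look like, the sketch below scores a passage with a tiny hand-made word lexicon. The word lists and function are invented for demonstration; the study itself does not specify its tooling, and a real analysis would use a validated lexicon or model.

```python
# Toy lexicon-based sentiment scorer (illustrative only; the word lists
# below are invented and far too small for real analysis).
POSITIVE = {"good", "great", "beneficial", "accurate", "fair"}
NEGATIVE = {"bad", "harmful", "biased", "unfair", "misleading"}

def sentiment_score(text: str) -> float:
    """Return a score in [-1, 1]: share of positive minus negative words
    among all lexicon hits; 0.0 when no lexicon word appears."""
    words = [w.strip(".,!?;:").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(sentiment_score("The model gave a fair and accurate answer."))  # 1.0
print(sentiment_score("The response was biased and misleading."))     # -1.0
```

Comparing such scores across AI- and human-written answers to the same prompt is one simple way to make a claimed sentiment skew measurable rather than anecdotal.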
It is essential to be cautious and critical of information consumed online, especially when the source is an AI model. Unlike humans, who infuse emotion and experience into their writing, AI systems may lack these personal nuances. Recognizing this distinction enables readers to make more informed judgments.
Language is a fundamental vehicle for conveying ideas, perspectives, and beliefs, thus shaping societal narratives. AI linguistic models introduce new dimensions to discourse analysis, necessitating a thorough assessment of biases that may be unintentionally reinforced through AI-generated content. Given the increasing presence of AI across various sectors, understanding these biases has become imperative.
Furthermore, comparing AI-generated texts with those written by humans allows for an evaluation of how well AI captures the complexity, ambiguity, and subtlety inherent in human expression. As AI models are trained on vast datasets, they often mimic human-like behaviors. This study seeks to explore the structural, contextual, and emotional differences between AI- and human-generated texts to inform the development of more accurate and ethical AI communication systems.
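One concrete structural difference that can be measured when comparing AI- and human-generated texts is lexical diversity. The sketch below computes a type-token ratio (unique words over total words); the sample sentences are invented for illustration, and the study does not prescribe this particular metric.

```python
# Illustrative lexical-diversity comparison via type-token ratio (TTR).
def type_token_ratio(text: str) -> float:
    """Unique words divided by total words (1.0 means no word repeats)."""
    words = [w.strip(".,!?;:").lower() for w in text.split()]
    return len(set(words)) / len(words) if words else 0.0

# Invented sample texts, purely for demonstration.
human_text = "I loved the rain today; it reminded me of home, of my childhood."
ai_text = "The rain is beneficial. The rain supports crops. The rain is frequent."

print(round(type_token_ratio(human_text), 2))
print(round(type_token_ratio(ai_text), 2))
```

A lower ratio signals more repetition; aggregated over many paired samples, metrics like this give the structural comparison a quantitative footing alongside contextual and emotional analysis.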
A comprehensive investigation into AI-induced bias can contribute significantly to content moderation frameworks, enhance algorithmic transparency, and promote responsible technological practices. Ultimately, this study aims to deepen our understanding of the intricate relationship between language, technology, and society.