MARC details
000 - LEADER |
Fixed length control field |
03654nmm a22001937a 4500 |
008 - FIXED-LENGTH DATA ELEMENTS--GENERAL INFORMATION |
Fixed length control field |
250210b2023 |||||||| |||| 00| 0 eng d |
082 ## - DEWEY DECIMAL CLASSIFICATION NUMBER |
Classification number |
SP2023/3690 |
100 ## - MAIN ENTRY--PERSONAL NAME |
Personal name |
Pramod, Kabadi Gauravi |
245 ## - TITLE STATEMENT |
Title |
To study the impact of responses generated by AI language models like ChatGPT and Google Bard on opinion formation through semantic and sentimental analysis and fact-checking |
260 ## - PUBLICATION, DISTRIBUTION, ETC. (IMPRINT) |
Place of publication, distribution, etc |
Ahmedabad |
Name of publisher, distributor, etc |
Indian Institute of Management |
Date of publication, distribution, etc |
2023 |
300 ## - PHYSICAL DESCRIPTION |
Extent |
19 p. : ill. |
500 ## - GENERAL NOTE |
General note |
Submitted to Prof. Sundaravalli Narayanaswami; Submitted by: Kabadi Gauravi Pramod, Sanya Nikita Kachhap |
520 ## - SUMMARY, ETC. |
Summary, etc |
Introduction

The term "Artificial Intelligence (AI)" has become a household name in both corporate and academic spheres, owing to the widespread adoption of AI-based applications. These applications are no longer confined to mimicking human behavior; advanced linguistic processes like Natural Language Processing (NLP) have enabled the development of AI text generators such as ChatGPT (Generative Pre-trained Transformer), which have revolutionized human-computer interaction.

The primary objective of this study is to explore how AI-generated responses can inadvertently transmit societal biases. This investigation spans diverse domains including politics, science, medicine, sports, organizational culture, and environmental change.

In today's digital age, machines increasingly generate content that humans consume. It is critical to acknowledge that AI-generated texts can exhibit biases, as these models learn from massive datasets that may contain imperfections. While deep technical understanding of NLP is not necessary to detect such biases, awareness of their potential existence is vital. This study proposes to identify and validate these biases through structured methodologies.

It is essential to be cautious and critical of information consumed online, especially when the source is an AI model. Unlike humans, who infuse emotion and experience into their writing, AI systems may lack these personal nuances. Recognizing this distinction enables readers to make more informed judgments.

Language is a fundamental vehicle for conveying ideas, perspectives, and beliefs, thus shaping societal narratives. AI linguistic models introduce new dimensions to discourse analysis, necessitating a thorough assessment of biases that may be unintentionally reinforced through AI-generated content. Given the increasing presence of AI across various sectors, understanding these biases has become imperative.

Furthermore, comparing AI-generated texts with those written by humans allows for an evaluation of how well AI captures the complexity, ambiguity, and subtlety inherent in human expression. As AI models are trained on vast datasets, they often mimic human-like behaviors. This study seeks to explore the structural, contextual, and emotional differences between AI- and human-generated texts to inform the development of more accurate and ethical AI communication systems.

A comprehensive investigation into AI-induced bias can contribute significantly to content moderation frameworks, enhance algorithmic transparency, and promote responsible technological practices. Ultimately, this study aims to deepen our understanding of the intricate relationship between language, technology, and society. |
650 ## - SUBJECT ADDED ENTRY--TOPICAL TERM |
Topical term or geographic name as entry element |
Artificial intelligence -- Social aspects |
650 ## - SUBJECT ADDED ENTRY--TOPICAL TERM |
Topical term or geographic name as entry element |
Natural language processing (Computer science) -- Moral and ethical aspects |
650 ## - SUBJECT ADDED ENTRY--TOPICAL TERM |
Topical term or geographic name as entry element |
Bias (Prejudice) -- In mass media |
700 ## - ADDED ENTRY--PERSONAL NAME |
Personal name |
Kachhap, Sanya Nikita |
Relator term |
Co-author |
942 ## - ADDED ENTRY ELEMENTS (KOHA) |
Koha item type |
Student Project |