{"id":148050,"date":"2024-01-09T09:43:41","date_gmt":"2024-01-09T09:43:41","guid":{"rendered":"https:\/\/www.techopedia.com"},"modified":"2024-01-09T09:43:41","modified_gmt":"2024-01-09T09:43:41","slug":"can-discriminative-ai-help-us-in-the-fight-for-the-truth","status":"publish","type":"post","link":"https:\/\/www.techopedia.com\/generative-ai-vs-discriminative-ai","title":{"rendered":"Can ‘Discriminative AI’ Help Us in the Fight For the Truth?"},"content":{"rendered":"
In the domain of AI, generative AI and discriminative AI stand as two distinct approaches to AI development. The former is dedicated to creating new content, while the latter specializes in classifying existing data. These two approaches have long been regarded as fundamental in shaping AI systems.

However, the recent surge in generative AI’s prowess, particularly in producing text and images closely resembling human creations, has ushered in a new era in which generative AI is becoming a significant source of misleading information.

In response, discriminative AI is evolving as a defensive strategy.

This article explores the nuances of this evolving frontier, examining the interplay between generative and discriminative AI and shedding light on the challenges posed by generative AI’s growing potential to produce deceptive content.

Generative vs. Discriminative AI: Two Unique Paths

Generative and discriminative AI represent divergent philosophies and applications within the field.

Generative models aim to understand and simulate the underlying structure of the data, learning the probability distribution of the entire dataset. This makes them adept at generating new data points that resemble the training set, which proves valuable in tasks such as image and text generation.

Discriminative models, on the other hand, concentrate on delineating boundaries between different classes in the data, excelling in tasks like image classification and natural language processing (NLP). In probabilistic terms, a generative classifier models the joint distribution of inputs and labels, while a discriminative classifier models only the conditional probability of a label given an input. The choice between the two approaches hinges on the task at hand, with generative models fostering creativity and diversity and discriminative models optimizing classification accuracy.
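To make the distinction concrete, here is a minimal sketch, assuming scikit-learn is available, that trains both kinds of model on the same toy data: Gaussian Naive Bayes as the generative classifier (it models how each class generates its features) and logistic regression as the discriminative classifier (it models the class boundary directly). The dataset and model choices are illustrative assumptions, not something prescribed by either approach.

```python
# Minimal sketch (assumes scikit-learn): one task, two philosophies.
# Generative: Gaussian Naive Bayes learns P(x | y) and P(y) for each class.
# Discriminative: logistic regression learns P(y | x), i.e., the boundary itself.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Toy two-class feature vectors, standing in for any real classification task.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

generative = GaussianNB().fit(X_train, y_train)
discriminative = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("Generative (Naive Bayes) test accuracy:", generative.score(X_test, y_test))
print("Discriminative (logistic) test accuracy:", discriminative.score(X_test, y_test))
```

Because the generative model carries a full per-class description of the data, it can also be used to score or sample new points; the discriminative model only separates the classes, which is why it often has the edge on pure classification accuracy.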
Generative AI: Unveiling the Pandora’s Box of Misinformation

Recent advancements in generative AI, exemplified by models like ChatGPT, LLaMA, Google Bard, Stable Diffusion, and DALL-E, showcase an unprecedented ability to create diverse data instances.

Following human instructions, these systems produce outputs that can be indistinguishable from human-generated content, opening new frontiers in healthcare, law, education, and science.

However, this creative potential harbors a significant risk: the generation of convincingly misleading content on a large scale. This misinformation can be categorized as model-driven or human-driven.

Model-Driven Misinformation: The Hallucination Effect

Large language models (LLMs) trained on vast internet datasets may inadvertently generate responses based on inaccuracies, biases, or misinformation present in the training data, a phenomenon known as model hallucination.

A notorious example occurred during Bard’s debut, when it falsely claimed that the James Webb Space Telescope had captured the “very first pictures” of an exoplanet. The fallout, a roughly $100 billion drop in the market value of Google’s parent company, Alphabet, underscored the real-world consequences of model-driven misinformation.

While strides have been made in addressing model hallucination, this article focuses primarily on the emerging issue of human-driven misinformation.

Human-Driven Misinformation: A Growing Threat

In the initial weeks of January 2023, OpenAI, the company behind ChatGPT, undertook a research initiative to assess the potential of large language models to generate misinformation. Its findings indicated that these models could become instrumental for propagandists and reshape the landscape of online influence operations.

Later that year, Freedom House released a report revealing that governments and political entities worldwide, in democracies and autocracies alike, are leveraging AI to generate texts, images, and videos that manipulate public opinion in their favor. The report documented the use of generative AI in 16 countries, illustrating its deployment to “sow doubt, smear opponents, or influence public debate.”

Another significant generative AI technology fueling the spread of misinformation is deepfake material. Deepfakes are authentic-looking fabricated content: manipulated videos, audio recordings, or images portraying individuals doing or saying things they never actually did. Many examples of deepfake videos are already circulating on the internet.

Discriminative AI: A Shield Against Misinformation

As generative AI advances and misinformation surges, discriminative AI emerges as a crucial line of defense.

Leveraging its ability to distinguish authentic from deceptive content, discriminative AI relies on machine learning algorithms. These algorithms are trained to detect the patterns that separate true from false information, to run fact-checks against reputable sources, and to scrutinize user behavior for potential instances of misinformation.
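In practice, a discriminative defense often starts with exactly this kind of classifier. The sketch below, assuming scikit-learn and using invented placeholder texts and labels, trains a simple TF-IDF plus logistic regression pipeline that scores how likely a claim is to be misleading; real systems would add much larger labeled corpora, fact-checking against trusted sources, and behavioral signals on top of this.

```python
# Minimal sketch of a discriminative misinformation detector (assumes scikit-learn).
# TF-IDF features + logistic regression learn a boundary between texts labeled
# reliable (0) and misleading (1). The tiny dataset below is a placeholder.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Peer-reviewed study finds moderate exercise improves sleep quality.",
    "Official statistics agency reports inflation slowed last quarter.",
    "Miracle pill cures every known disease overnight, doctors stunned.",
    "Leaked documents prove the moon landing was filmed in a studio.",
]
labels = [0, 0, 1, 1]  # 0 = reliable, 1 = misleading (hypothetical labels)

detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
detector.fit(texts, labels)

claim = "One weird trick reverses aging instantly, scientists hate it."
prob_misleading = detector.predict_proba([claim])[0][1]
print(f"Estimated probability the claim is misleading: {prob_misleading:.2f}")
```

The same discriminative recipe, with text features swapped for image or audio features, underlies many deepfake detectors as well.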