{"id":138559,"date":"2023-12-12T20:20:26","date_gmt":"2023-12-12T20:20:26","guid":{"rendered":"https:\/\/www.techopedia.com"},"modified":"2023-12-12T20:40:19","modified_gmt":"2023-12-12T20:40:19","slug":"whats-in-a-name-we-need-to-get-ai-terminology-right","status":"publish","type":"post","link":"https:\/\/www.techopedia.com\/we-need-to-get-ai-terminology-right","title":{"rendered":"What’s in a Name? We Need to Get AI Terminology Right"},"content":{"rendered":"
One of the chief contributors to misunderstanding and conflict is improperly defined terminology. Words matter; when the wrong word is used to describe an object or event, the resulting misconception breeds confusion and inflated or deflated expectations.

A case in point these days is the term "Artificial Intelligence." Its origin has been traced to the mid-1950s, when computer scientist John McCarthy, later of Stanford, organized the first academic conference on the subject.

Since then, it has become a marketer's dream: it invokes an entirely new level of computing technology, well beyond simple processing, yet remains vague enough to escape any firm definition.

Is AI Simply an Illusion of Intelligence?

Now that AI is finally making its way from the test bed into everyday life, and seems capable of comprehending the world and expressing itself, the question is more relevant than ever: is AI actually intelligent, or are we simply engaging in a digital form of anthropomorphism?

According to Bradley Efron and Trevor Hastie, a pair of statisticians also from Stanford, there is a big difference between algorithms and inference. Algorithms are what statisticians do, while inference is why they do them.

When you get right down to it, everything an intelligent algorithm does, from regression analytics to neural networks, is based on mathematical formulas that use one set of variables to predict the behavior of other sets.
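To make that point concrete, here is a minimal sketch, not taken from the article, of what such a formula looks like in practice: an ordinary least-squares fit in which one set of variables (invented study and sleep hours) is used to predict another (an invented exam score). Every name and number below is made up for illustration.

```python
# A toy "one set of variables predicts another" example: ordinary least squares.
# All data and variable names are invented for illustration only.
import numpy as np

# Inputs: hours studied and hours slept for four hypothetical students.
X = np.array([
    [2.0, 6.0],
    [4.0, 7.0],
    [6.0, 5.0],
    [8.0, 8.0],
])
# Output to predict: their exam scores.
y = np.array([55.0, 68.0, 70.0, 90.0])

# Add an intercept column and solve the least-squares problem.
X1 = np.hstack([np.ones((X.shape[0], 1)), X])
coeffs, *_ = np.linalg.lstsq(X1, y, rcond=None)

# "Prediction" is nothing more than applying the fitted formula to new inputs.
new_student = np.array([1.0, 5.0, 7.0])  # intercept term, hours studied, hours slept
print("fitted coefficients:", coeffs)
print("predicted score:", new_student @ coeffs)
```

Whatever the model, from a two-variable regression like this to a deep neural network, the underlying operation is this same kind of fitted arithmetic.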
If this is intelligence, Efron and Hastie argue, it is intelligence without understanding, which is a contradiction in terms. If a mind cannot understand what it is doing and why, it simply does not meet the intelligence threshold.

This is part of why AI has invoked such fear among the populace, and why it could become such a letdown when people finally start to engage with it meaningfully.

AI: An Intentional Misnomer?

Moneycontrol.com's Parmy Olson argues that the term "AI" is a mirage, along with "metaverse" and "Web3," designed more to generate revenue than to produce a better understanding of the technology. Terms like "neural networking" and "deep learning" aren't helping, either.

The problem is more than just academic, Olson says. The terminology allows companies to shift the blame, and perhaps the liability, for bias and other flaws in their models away from themselves and onto these supposedly independent-thinking creations.

At the same time, it fuels both the fear of AI annihilation and the expectation of AI utopia, neither of which is likely to materialize.

AI Can Offer Misplaced Trust

To be sure, what AI does is impressive. It can quickly ingest vast amounts of data, far more than even the most intelligent human mind, and then produce results in plain, accurate, and insightful language. But this is just mimicry, says author Peter Cawdron, and mimicry is not intelligence. Parrots can speak as well; that doesn't make them intelligent.
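As a crude illustration of mimicry without understanding, and again not something taken from the article, the toy sketch below generates text purely from word-to-word statistics over an invented snippet of text. Real generative models are incomparably more sophisticated, but the sketch shows how fluent-looking output can emerge from pattern replay alone.

```python
# A toy bigram "parrot": it produces text by replaying which word tended to
# follow which in a tiny corpus, with no representation of meaning at all.
# The corpus and every parameter here are invented for illustration.
import random
from collections import defaultdict

corpus = (
    "the model reads the data and the model writes the report "
    "the report looks smart and the report sounds confident"
).split()

# Count which words follow which.
followers = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word].append(next_word)

# Generate text by repeatedly sampling a plausible next word.
random.seed(0)
word = "the"
output = [word]
for _ in range(12):
    word = random.choice(followers.get(word, corpus))
    output.append(word)

print(" ".join(output))
```

The output is just recombined fragments of the input; nothing in the program represents what a "model" or a "report" actually is.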
The worst thing that could happen with AI is for humans to start surrendering their own intelligence to the algorithms. That tendency is already beginning to surface at home and in the office, as people simply accept the results of any AI-driven process without acknowledging that it can get things wrong just as easily as, or more easily than, humans can.

Are We Intelligent?

But if AI is not really intelligent, can we so easily declare that humans are? Sure, we can speak and opine and pontificate about all kinds of things, but do we really understand them?

To quote the Scarecrow in The Wizard of Oz: "Some people without brains do an awful lot of talking, don't you think?" Is it possible that we merely have an arbitrary definition of intelligence because that is how our own minds work?