{"id":49906,"date":"2023-02-01T00:00:00","date_gmt":"2023-02-01T00:00:00","guid":{"rendered":"https:\/\/www.techopedia.com\/ais-got-some-explaining-to-do\/"},"modified":"2024-02-13T07:35:58","modified_gmt":"2024-02-13T07:35:58","slug":"ais-got-some-explaining-to-do","status":"publish","type":"post","link":"https:\/\/www.techopedia.com\/ais-got-some-explaining-to-do\/2\/33468","title":{"rendered":"AI’s Got Some Explaining to Do"},"content":{"rendered":"
Can you trust AI? Should you accept its findings as objectively valid without question?

The problem is, even if you did want to question AI, your questions won’t yield clear answers.

AI systems have generally operated like a black box: data is input, and data is output, but the processes that transform that data are a mystery. That creates a twofold problem.

For one, it is unclear which algorithms’ performance is most reliable. Second, the AI’s seemingly objective results can be skewed by the values and biases of the humans who program the systems.

This is why there is a need for “explainable AI,” which refers to transparency in the virtual thought processes such systems use.

The Black Box Problem

The way AI analyzes information and makes recommendations is not always straightforward. There’s also a distinct disconnect between how AI operates and how most people understand it to operate.

That makes explaining it a daunting task. As a recent McKinsey article on explainable AI pointed out:

“Modeling techniques that today power many AI applications, such as deep learning and neural networks, are inherently more difficult for humans to understand. For all the predictive insights AI can deliver, advanced machine learning engines often remain a black box.”

The imperative to make AI explainable calls for shedding light on the process and then translating it into terms people can understand. It’s no longer acceptable to tell people they have to regard AI output as infallible. (Also read: Explainable AI Isn’t Enough; We Need Understandable AI.)

“Principally, it is not infallible – its outputs are only as good as the data it uses and the people who create it,” noted Natalie Cramp, CEO of data science consultancy Profusion, in an interview with Silicon Republic.

As Fallible as Humans

Experts in the field who understand the impact algorithmic decision-making can have on people’s lives have been clamoring about the problem for years. As humans are the ones who set up the learning systems for AI, their biases get reinforced in algorithmic programming and conclusions.

People are often not aware of their biases, or even of how a data sample can promote racist and sexist outcomes. Such was the case with an automated rating system Amazon used for job candidates.

As men dominate the tech industry, the algorithm learned to associate gender with successful outcomes and was biased against women. Though Amazon dropped that tech back in 2018, the problem of biases manifesting themselves in AI still persists in 2023.

“All organizations have biased data,” proclaims an IBM blog intriguingly titled “How the Titanic helped us think about Explainable AI.”

That’s because many are operating the same way: taking a sample of the majority to represent the whole. Though, in some respects, we have greatly reduced stereotypes related to sex and race, a study by Tidio found that this level of enlightenment eludes some advanced tech. (Also read: Can AI Have Biases?)

Biased AI Output

The gap between real-life gender distribution and the representation offered by AI in Tidio’s study was stark. For example, AI asked to generate an image of a CEO didn’t turn out a single image of a woman, when in reality around 15% of CEOs are female. Likewise, the AI programs underrepresented people of color in most positions.
As was the case with the Amazon algorithm, the AI here falls into an error about women’s roles, assuming they are completely absent from the category of CEO just because they make up the minority there. Where women actually make up a full half, in the category of doctor, AI represented them at only 11%. The AI also ignored that 14% of nurses are male, turning out only images of women and falling back on the stereotype of female nurses.

What about ChatGPT?

Over the past couple of months, the world has grown obsessed with ChatGPT from OpenAI, which can offer everything from cover letters to programming code. But Bloomberg warns that it, too, is susceptible to the biases that slip in through programming. (Also read: When Will AI Replace Writers?)

Bloomberg references Steven Piantadosi of the University of California, Berkeley’s Computation and Language Lab, who tweeted this on December 4, 2022:

“Yes, ChatGPT is amazing and impressive. No, @OpenAI has not come close to addressing the problem of bias. Filters appear to be bypassed with simple tricks, and superficially masked.

And what is lurking inside is egregious”

Attached to the tweet was code that resulted in ChatGPT’s conclusion that “only White or Asian men would make good scientists.”

Bloomberg acknowledges that OpenAI has since taught the AI to respond to such questions with “It is not appropriate to use a person’s race or gender as a determinant of whether they would be a good scientist.” However, it doesn’t have a fix in place to avert additional biased responses.

Why it Matters

Eliciting biased responses when playing around with ChatGPT doesn’t have an immediate impact on people’s lives. However, when biases determine serious financial outcomes like hiring decisions and insurance payouts, the consequences are immediate and serious.

That could range from being denied a fair shot at a job, as was the case with Amazon’s candidate ranking, to being considered a higher risk for insurance. That’s why, in 2022, the California Insurance Commissioner, Ricardo Lara, issued a bulletin in response to allegations of data misuse for discriminatory purposes.

He referred to “flagging claims from certain inner-city ZIP codes,” which makes those claims more likely to be denied or given much lower settlements than comparable claims elsewhere. He also pointed to the problem of predictive algorithms that assess “risk of loss based on arbitrary factors,” including “geographic location tracking, the condition or type of an applicant’s electronic devices, or based on how the consumer appears in a photograph.”

Any of those opens up the possibility of a decision that has “an unfairly discriminatory impact on consumers.” Lara went on to say that “discrimination against protected classes of individuals is categorically and unconditionally prohibited.”

Fixing the Problem

The question is: what has to be done to fix these biases?
For OpenAI’s product, the solution offered is the feedback loop of interacting with users. According to the Bloomberg report, its chief executive officer, Sam Altman, recommended that people thumb down such responses to point the tech in the right direction.

Piantadosi told Bloomberg he didn’t consider that adequate. He told the reporter, “What’s required is a serious look at the architecture, training data and goals.”

To Piantadosi, relying on user feedback to put results on the right track reflects a lack of concern about “these kinds of ethical issues.”

Companies are not always motivated to dive into what is causing biased outputs, but they may be forced to do so in the case of algorithmic decisions that have a direct impact on individuals. For insurance businesses in California, Lara’s bulletin now demands that level of transparency on behalf of consumers.

Lara insists that any policyholder who suffers an “adverse action” attributed to algorithmic calculations must be granted a full explanation:

“When the reason is based upon a complex algorithm or is otherwise obscured by the technology used, a consumer cannot be confident that the actual basis for the adverse decision is lawful and justified.”

Outlook for Explainability

Those are laudable aspirations, and definitely long overdue, particularly for organizations that hide behind the computer to shut down any questions the humans affected have about its decisions. However, despite the pursuit of explainable AI by companies like IBM, we’re not quite there yet.

The conclusion IBM comes to, after months of struggling with the challenge of assuring the trustworthiness of AI, is that “there is no easy way to implement explainability and, therefore, trustworthy AI systems.”

So the problem remains unsolved. But that doesn’t mean there has been no progress.

As Cramp said, “What needs to happen is a better understanding of how the data that is used for algorithms can itself be biased and the danger of poorly designed algorithms magnifying these biases.”

We have to work to improve our own understanding of algorithmic functions and keep checking for the influence of biases. While we have yet to arrive at objective AI, remaining vigilant about what feeds it and how it is used is the way forward.
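As one concrete illustration of the kind of check Cramp calls for, the minimal sketch below compares a hypothetical model’s approval rates across groups and applies the widely used (and much-debated) four-fifths rule of thumb. The dataset, column names, and threshold are illustrative assumptions, not anything described in this article or used by the companies it mentions.

import pandas as pd

# Hypothetical model decisions on a hiring dataset (illustrative values only).
decisions = pd.DataFrame({
    "gender":   ["F", "M", "M", "F", "M", "F", "M", "M", "F", "M"],
    "approved": [0, 1, 1, 0, 1, 1, 0, 1, 0, 1],
})

# Approval rate per group.
rates = decisions.groupby("gender")["approved"].mean()
print(rates)

# Disparate-impact ratio: the least-favored group's rate over the most-favored group's.
# The "four-fifths" rule of thumb flags ratios below 0.8 for closer review.
ratio = rates.min() / rates.max()
if ratio < 0.8:
    print(f"Possible bias worth investigating: disparate-impact ratio = {ratio:.2f}")

A real audit would look at far more than one attribute and one metric, but even a comparison this simple can surface a skew that a single aggregate accuracy figure would hide.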