{"id":50684,"date":"2022-02-16T00:00:00","date_gmt":"2022-02-16T00:00:00","guid":{"rendered":"https:\/\/www.techopedia.com\/explainable-ai-isnt-enough-we-need-understandable-ai\/"},"modified":"2022-07-25T18:46:49","modified_gmt":"2022-07-25T18:46:49","slug":"explainable-ai-isnt-enough-we-need-understandable-ai","status":"publish","type":"post","link":"https:\/\/www.techopedia.com\/explainable-ai-isnt-enough-we-need-understandable-ai\/2\/34671","title":{"rendered":"Explainable AI Isn’t Enough; We Need Understandable AI"},"content":{"rendered":"
In the world of artificial intelligence (AI), explainable AI (XAI) has gained an incredible amount of attention in the past few years. Many are emphasizing how important it is to the future of artificial intelligence and machine learning. (Also read: Why Does Explainable AI Matter Anyway?)

And it is important, but it is not the solution. The desire to explain black box systems' decisions is a good one; XAI tools and methods alone, however, will never be enough. If we want to provide full assurance for these systems' decisions, we should be discussing how to deliver "understandable AI" instead.

XAI Is Hot Right Now for the Right Reasons

More and more, AI systems are making important decisions that impact our daily lives.

From insurance claims and loans to medical diagnoses and employment, enterprises are using AI and machine learning (ML) systems with increasing frequency. However, consumers have grown increasingly wary of artificial intelligence. In insurance, for instance, a mere 17% of consumers trust AI to review their claims, because they cannot comprehend how these black box systems reach their decisions. (Also read: Has a Global Pandemic Changed the World's View of AI?)

Explainability for AI systems is practically as old as the field itself. In recent years, academic research has produced many promising XAI techniques, and a number of software companies have emerged to bring XAI tools to market. The issue, though, is that all of these approaches treat explainability as a purely technical problem. In reality, the need for explainability and interpretability in AI is a much larger business and social problem, one that requires a more comprehensive solution than XAI can offer.

XAI Only Approximates the Black Box

It is perhaps easiest to understand how XAI works through an analogy. So, consider another black box: the human mind.

We all make decisions, and we are more or less aware of the reasons behind them (even when we're asked to explain them!). Now imagine yourself (the XAI) observing another person's (the original AI model's) actions and inferring the rationale behind those actions. How well does that generally work for you?

With XAI, you are using a second model to interpret the original model. The "explainer" model is a best guess at the inner workings of the original model's black box. It might approximate what is happening inside that black box; it might not. And how well should we expect it to approximate and "explain" non-human decisions? We can't really know. Compounding the problem, different model types require different explainers, which makes the explainers more burdensome to manage alongside their respective models.
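To make the "second model" idea concrete, here is a minimal sketch of a global surrogate explainer: a shallow decision tree trained to mimic a black box classifier's predictions. The synthetic dataset, the random forest standing in for the black box and the scikit-learn calls are illustrative assumptions, not details from this article.

```python
# Illustrative sketch only (assumptions: scikit-learn, synthetic data, a random
# forest standing in for the production black box model).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)

# The opaque model whose decisions we would like to explain.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The surrogate is trained on the black box's predictions, not the true labels,
# so it approximates the black box's behavior rather than the underlying task.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# "Fidelity" measures how often the surrogate agrees with the black box;
# it is the explainer's best guess, never a guarantee.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"Surrogate fidelity to the black box: {fidelity:.1%}")
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(8)]))
```

The fidelity score is the crux of the problem described above: the explanation is only as trustworthy as the surrogate's agreement with the original model, and that agreement is never guaranteed. It also hints at why each new model type tends to need its own explainer.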
An attractive alternative is to design so-called "interpretable" models that provide visibility into the decision logic by design. Some excellent recent research suggests that such "white box" models may perform just as well as black box ones in some domains. But even these models have a significant downside: they are still often not understandable to non-technical people.

Explainable to Whom?

Another quick thought experiment: Imagine the imperfect explanations of XAI were, instead, perfect. Now invite someone who isn't a data scientist to review the model's decisions, say, an executive in charge of a billion-dollar line of business who needs to decide whether to greenlight a high-impact ML model. (Also read: The Top 6 Ways AI Is Improving Business Productivity in 2021.)

The model could create an enormous competitive advantage and generate massive top-line revenue. It could also permanently damage the company's brand or hurt its stock price if the model runs amok. So it's safe to say that executive would want some proof before the model goes live.

Looking at the outputs of some explainer models, what that executive would find is basically gobbledygook: unreadable, decontextualized data with none of the attributes or logic they would expect when they hear the word "explanation."

Herein lies the biggest issue with XAI as a field for enterprise use. And interpretable models have the same issue for everyday people: the explanations require translation by technologists. The business executive, the risk organization, the compliance manager, the internal auditor, the chief counsel's office and the board of directors cannot understand these explanations independently. And what about the end user the model impacts?

Because of this, achieving trust and confidence becomes hard. External parties like regulators, consumer advocates and customers will find even less comfort.

The fact is, most "explainable" AI tools are only explainable to a person with a strong technical background and deep familiarity with how that model operates. XAI is an important piece of the technologist's toolkit, but it is not a practical or scalable way to "explain" AI and ML systems' decisions.

Understandable AI: Transparency and Accessibility

The only way we're going to reach the promised land of trust and confidence in decisions made by black box AI and ML is by enriching the explanatory domain and broadening its audience. What we need is "Understandable AI": AI that satisfies non-technical stakeholders' needs in addition to providing XAI tools for technical teams.

The foundation for understandability is transparency. Non-technical people should have access to every decision made by the models they oversee. They should be able to search a system of record, based on key parameters, to evaluate those decisions individually and in aggregate. And they should be able to perform a counterfactual analysis on individual decisions, changing specific variables to test whether the results are what they would expect. (Also read: AI's Got Some Explaining to Do.)
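As one illustration of what such a counterfactual check could look like in practice, here is a minimal sketch. The synthetic data, the made-up feature names and the logistic regression standing in for the production model are all hypothetical assumptions for the example, not details from the article.

```python
# Minimal counterfactual probe on a single model decision.
# Everything here is illustrative: synthetic data, made-up feature names and a
# logistic regression standing in for the real (possibly black box) model.
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "debt_ratio", "years_employed", "open_accounts"]
X, y = make_classification(n_samples=500, n_features=4, random_state=1)
data = pd.DataFrame(X, columns=feature_names)
model = LogisticRegression().fit(data, y)

def counterfactual_probe(model, decision: pd.DataFrame, feature: str, new_value) -> tuple:
    """Re-score one decision with exactly one input changed."""
    original = model.predict_proba(decision)[0, 1]
    altered = decision.copy()
    altered[feature] = new_value
    changed = model.predict_proba(altered)[0, 1]
    return original, changed

# Pick one recorded decision and ask: would a higher income have changed the outcome?
decision = data.iloc[[0]]
before, after = counterfactual_probe(
    model, decision, "income", decision["income"].iloc[0] + 2.0  # +2 in synthetic units
)
print(f"Approval score before: {before:.3f}, after: {after:.3f}")
```

A reviewer does not need to understand the model's internals to run this kind of check; they only need searchable access to individual decisions and a way to re-score them with specific variables changed, which is exactly the system-of-record capability described above.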
But we shouldn't stop there. Understandable AI also needs to include the larger context in which the models operate. To build trust, business owners should have visibility into the human decision-making that preceded and accompanied the model throughout its life cycle, and everyone around a model should be asking themselves vital questions about those decisions at every stage.

Conclusion: XAI Is One Piece of the Understandable AI Solution

Explainability alone will not solve the problem of understanding how an AI or ML model is behaving. However, it can, and should, be an important piece of the larger Understandable AI picture.

With careful selection and design, these tools provide invaluable insight for expert modelers and technical teams, particularly before a model is put into production. But if the companies innovating with these intelligent models today do not consider their non-technical stakeholders' needs, they will almost certainly endanger the success of many important projects, projects that could benefit both the public and the companies developing them.