{"id":53752,"date":"2023-03-01T09:04:42","date_gmt":"2023-03-01T09:04:42","guid":{"rendered":"https:\/\/www.techopedia.com\/?p=53752"},"modified":"2023-03-01T11:07:17","modified_gmt":"2023-03-01T11:07:17","slug":"4-principles-of-responsible-artificial-intelligence-systems","status":"publish","type":"post","link":"https:\/\/www.techopedia.com\/4-principles-of-responsible-artificial-intelligence-systems\/2\/34938","title":{"rendered":"4 Principles of Responsible Artificial Intelligence Systems"},"content":{"rendered":"
As AI becomes all-pervading, AI systems need to be more transparent about how they arrive at their decisions. Without a standard governance framework, however, the task of supporting explainable AI<\/a> is not easy. (Also read: <\/strong>Why Does Explainable AI Matter Anyway?<\/strong><\/a>)<\/strong><\/p>\n Recently, Techopedia brought together the following leaders<\/a> to discuss how and why organizations are adopting Responsible AI<\/a> as a governance framework:<\/p>\n <\/p>\n The panel discussion produced some great talking points that you can use to inspire discussions about AI governance in your organization. They include the ideas that:<\/p>\n Here is a discussion of each of these talking points in more depth:<\/p>\n One concern that came up at Techopedia’s recent webinar is that the concepts of Responsible AI and ethical AI are often treated as if they were the same thing. The two are not the same, and treating them as synonyms can create misunderstandings among project stakeholders.<\/p>\n So, what’s the difference?<\/p>\n According to our panelists, ethical AI focuses on aspirational values such as producing fair outcomes and recognizing the human right to keep one’s personally identifiable information (PII<\/a>) private.<\/p>\n In contrast, Responsible AI focuses on the technological and organizational measures that enable organizations to achieve those aspirational objectives. Together, the two can be called trustworthy AI<\/a>.<\/p>\n Next, our experts touched on how organizations need to balance the interests of the company’s shareholders, customers, community, financiers, suppliers, government and management. 
This can make incorporating and executing Responsible AI systems difficult because a broad mix of stakeholders can have competing priorities.<\/p>\n That’s why it’s important for organizations to align the principles of Responsible AI with their corporate governance policies to provide the following:<\/p>\n A Responsible AI system must be equipped to handle conflicts of interest between shareholders and customers. The Volkswagen incident our experts discussed<\/a> is an instructive case study: When corporate leadership wanted to reward shareholders at their customers’ expense, it didn’t go well.<\/p>\n It’s important that AI systems be transparent about conflicts of interest in both the corporate and government sectors. (Also read: <\/strong>Explainable AI Isn’t Enough; We Need Understandable AI<\/strong><\/a>.)<\/strong><\/p>\n An AI system, irrespective of industry, must accommodate disparate stakeholders, and an organization’s reputation and public perception can suffer when black box AI systems<\/a> cannot explain their decisions.<\/p>\n For example, it’s important that the AI systems used to automate loan approvals be transparent and not weighed down by demographic or socio-economic biases. Many fintech<\/a> institutions use AI to evaluate applications for loans or mortgages. However, an AI system trained only on historical data can end up turning down individuals in demographic groups whose Fair Isaac Corporation (FICO) credit scores have been low in the past.<\/p>\n The ecological and environmental impact of AI systems must also be discussed. Some research shows that training a single AI system can emit as much as 150,000 pounds of carbon dioxide. When choosing a governance framework for Responsible AI, it’s important for organizations to balance AI development with its impact on the environment.<\/p>\n Lastly, don’t forget security! 
Corporate deep neural networks<\/a> are often trained with proprietary data as well as huge volumes of data scraped from the internet<\/a>. The proprietary data can be a goldmine for hackers, so it’s important to discuss how your AI system will be protected from malicious actors. (Also read: <\/strong>AI in Cybersecurity: The Future of Hacking is Here<\/strong><\/a>.)<\/strong><\/p>\n Organizations such as the Institute of Electrical and Electronics Engineers (IEEE), the European Commission (EC) and the Partnership on AI have already been developing frameworks for developing and maintaining<\/a> Responsible AI systems. These frameworks are based on the following principles:<\/p>\n The importance of Responsible AI is beyond debate, but ensuring that all AI systems are transparent and explainable is not an easy task. The more complex the deep learning model<\/a>, the harder it becomes to understand how its decisions are made.<\/p>\n Responsible AI frameworks are still a nascent idea, but they are developing quickly in response to real-world problems. Our experts predict that AI frameworks for ensuring confidentiality, fairness and transparency will soon be common across every industry. (Also read: <\/strong>Experts Share 5 AI Predictions for 2023<\/strong><\/a>.)<\/strong><\/p>\n","protected":false},"excerpt":{"rendered":" As AI becomes all-pervading, AI systems need to be more transparent about how they arrive at their decisions. Without a standard governance framework, however, the task of supporting explainable AI is not easy. (Also read: Why Does Explainable AI Matter Anyway?) 
Recently, Techopedia brought together the following leaders to discuss how and why organizations are […]<\/p>\n","protected":false},"author":7870,"featured_media":53755,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"_lmt_disableupdate":"no","_lmt_disable":"","om_disable_all_campaigns":false,"footnotes":""},"categories":[573,599],"tags":[],"category_partsoff":[],"class_list":["post-53752","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-artificial-intelligence","category-machine-learning"],"acf":[],"yoast_head":"\n\n
1. Define the Scope of Responsible and Ethical AI<\/span><\/h2>\n
2. Expect to Balance Responsible AI With Established Corporate Governance Policies<\/span><\/h2>\n
\n
3. Debate the Ethical Issues That Affect AI Systems<\/span><\/h2>\n
4. Follow a Mature Framework for Responsible AI<\/span><\/h2>\n
\n
Conclusion<\/span><\/h2>\n