{"id":121528,"date":"2023-11-01T08:26:43","date_gmt":"2023-11-01T08:26:43","guid":{"rendered":"https:\/\/www.techopedia.com"},"modified":"2023-11-01T08:26:43","modified_gmt":"2023-11-01T08:26:43","slug":"white-house-executive-order-ai","status":"publish","type":"post","link":"https:\/\/www.techopedia.com\/white-house-executive-order-ai","title":{"rendered":"Inside the White House’s Executive Order to Rein in AI"},"content":{"rendered":"
The White House has issued an executive order<\/a> to manage the risks of artificial intelligence<\/a> (AI), requiring the biggest AI developers to share information with the government before releasing their algorithms<\/a>\u00a0to the public.<\/p>\n The executive order is among the first government regulations on AI as authorities worldwide scramble to control the rapid advancement of the technology. It:<\/p>\n \u201c…establishes new standards for AI safety and security, protects Americans\u2019 privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more.\u201d<\/p><\/blockquote>\n US Government Orders Safe, Innovative AI Implementation<\/span><\/h2>\n The US government’s sweeping order aims to address the broad potential impact of AI technologies:<\/p>\n Among the key provisions, the Biden administration requires that companies developing the most powerful AI systems share the results of safety tests and other critical information with the US government.<\/p>\n Any AI model that potentially poses a severe risk to national security, economic stability, or public health and safety will be governed by the Defense Production Act. This means that the company must notify the federal government when training the model. 
The National Institute of Standards and Technology<\/a> will set standards for extensive testing to ensure that such models are safe before public release.<\/p>\n Addressing Potential AI Risks<\/span><\/h2>\n Acknowledging the risks posed by the use of AI in various systems, the White House ordered:<\/p>\n The Biden-Harris Administration has previously published a Blueprint for an AI Bill of Rights<\/a> and issued an Executive Order directing agencies to combat algorithmic discrimination<\/a> while enforcing existing authority to protect citizens’ rights and safety.<\/p>\n As concerns grow about the potential for AI to reinforce discrimination and displace jobs, the White House is directing additional actions to provide guidance to keep AI algorithms from being used to exacerbate discrimination, including throughout the justice system; advance the responsible use of AI in healthcare<\/a> and the development of pharmaceutical drugs<\/a>; create resources to support educators deploying AI-enabled tools; and develop best practices to mitigate the harms and maximize the benefits of AI for workers.<\/p>\n The Challenge of Government Regulation of AI<\/span><\/h2>\n The broad extent of the executive order and the issues it raises about the impact of AI on national security, privacy, civil rights, healthcare, education, and the workforce illustrate the scope of the challenge governments face in regulating the proliferation of AI algorithms and models. It also raises questions about the extent to which governments will and can control the widespread adoption of AI.<\/p>\n The Biden Administration consulted a range of other governments on AI frameworks before releasing the order, including Australia, Brazil, Canada, Chile, the European Union, India, Israel, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE, and the UK. 
The principles outlined support Japan’s leadership of the G-7 Hiroshima Process, the UK Summit on AI Safety, India’s leadership as Chair of the Global Partnership on AI, and ongoing discussions at the UN, the statement noted.<\/p>\n But can governments balance the impact on job security in various industries<\/a> with the desire to support home-grown companies in the global innovation race?<\/p>\n Several AI startups welcomed the announcement, but some CEOs expressed concerns<\/a> over whether the regulations could hinder smaller companies from developing AI technologies, stifling innovation.<\/p>\n Leaders of advocacy groups responded positively but noted the challenges of implementation.<\/p>\n “It’s notable to see the Administration focus on both the emergent risks of sophisticated foundation models and the many ways AI systems are already impacting people’s rights. The Administration rightly underscores that US innovation must also include pioneering safeguards to deploy technology responsibly,” said<\/a> Alexandra Reeve Givens, President and CEO of the Center for Democracy and Technology.<\/p>\n “Of course, the EO’s success will rely on its effective implementation. We urge the Administration to move quickly to meet relevant deadlines and to ensure that any guidance or mandates issued under the EO are sufficiently specific and actionable to drive meaningful change,” Givens said.<\/p>\n “Today’s executive order is a vital step by the Biden administration to begin the long process of regulating rapidly advancing AI technology \u2013 but it’s only a first step. Along with establishing a framework for the government’s use of AI, the EO recognizes the power of the government to establish norms and standards as a major purchaser of technology. 
That’s an important policy lever,” said Robert Weissman, President of consumer advocacy group Public Citizen, in a statement<\/a>.<\/p>\n “However, as much as the White House can do independently, those measures are no substitute for agency regulation and legislative action. Preventing the foreseeable and unforeseeable threats from AI requires agencies and Congress to take the baton from the White House and act now to shape the future of AI \u2014 rather than letting a handful of corporations determine our future, at potentially great peril,” Weissman added.<\/p>\n Public Citizen has been pushing for the regulation of AI technology, recently petitioning the Federal Election Commission (FEC) to introduce a new rule banning political deepfakes in election campaign advertising.<\/p>\n “The new executive order strikes the right tone by recognizing both the promise and perils of AI,” said Sarah Kreps, professor of government and director of the Tech Policy Institute at Cornell University. “What’s missing is an enforcement and implementation mechanism. It’s calling for a lot of action that’s not likely to receive a response.”<\/p>\n The Bottom Line<\/span><\/h2>\n The executive order from the Biden Administration addresses concerns shared by governments worldwide about the impact of rapidly developing and unregulated AI technologies on many aspects of public life, from national security to individual privacy rights, healthcare, education, and employment.<\/p>\n This raises questions about how much governments can and will be able to control the impact of AI. Any effective regulation will require support from lawmakers and businesses to realize the promise of the technology while limiting its negative impact.<\/p>\n","protected":false},"excerpt":{"rendered":" The White House has issued an executive order to manage the risks of artificial intelligence (AI), requiring the biggest AI developers to share information with the government before releasing their algorithms\u00a0to the public. 
The executive order is among the first government regulations on AI as authorities worldwide scramble to control the rapid advancement of the […]<\/p>\n","protected":false},"author":286558,"featured_media":93772,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"_lmt_disableupdate":"","_lmt_disable":"","om_disable_all_campaigns":false,"footnotes":""},"categories":[573,599],"tags":[],"category_partsoff":[],"class_list":["post-121528","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-artificial-intelligence","category-machine-learning"],"acf":[]}