U.S. Leadership on AI Global Governance

05/22/2024 | Dr. Orit Frenkel & Rebecca Karnak | American Leadership Initiative

Artificial Intelligence (AI) is poised to become one of the most consequential societal revolutions of the 21st century, with enormous benefits and risks that the world is just beginning to understand.

This paper outlines the AI global governance landscape and the associated policy gaps in order to establish the importance of U.S. leadership in developing a global AI regulatory framework – one that protects U.S. economic and national security interests and advances U.S. competitiveness, while ensuring transparency, protecting human rights, and promoting essential democratic values – as societies attempt to set the AI rules of the road for decades to come.

The United States, the European Union (EU), and China have each embraced different AI regulatory models. The U.S. has pursued a more limited-government model that fosters innovation, encourages private sector initiatives, and limits risks. The recently passed EU AI Act takes a more restrictive regulatory approach to the use and design of the technology, protecting consumers and privacy while restricting various AI applications.

Meanwhile, China is advancing a state-control model with extensive censorship and surveillance capabilities. This is especially concerning as China spreads that model by selling its technology across the Global South.

In the U.S., the Biden Administration released its Executive Order on AI (EO) in October 2023, which builds on earlier efforts to establish guardrails for industry and directs funding for R&D and talent development, while launching a government-wide effort to deploy AI through federal agencies.

For now, the U.S. leads the world in total private AI investment. However, AI is being deployed across China at a much faster pace than in the U.S., and the Chinese government is investing considerable funding and political capital to close the remaining gap.

Multilateral efforts to create common governance structures and guardrails for AI have accelerated through the UN, the OECD, the G7, and other forums. U.S. leadership and its governance model are reflected in many of these early efforts.

In order to lead the way on global governance for AI, the U.S. needs to build on its current efforts in several important ways. 

First, Congress must pass a comprehensive federal privacy law. Strong privacy legislation would contribute to a healthy regulatory ecosystem and build greater credibility for U.S. positions on AI globally.

The U.S. will also need a comprehensive effort to prepare its workforce for the new AI economy. This must include a focus on job training and transition to address the significant job losses that will occur as AI replaces many low-income and low-skill jobs. Creating a pipeline of AI talent will likewise be key to U.S. global competitiveness.

Globally, the U.S. should prioritize its participation in the international standards bodies that have been working to create a common set of standards for AI. China prioritizes its participation and leadership in these organizations, and the U.S. needs to do the same to advance its approach to ethical AI and transparent standards. Similarly, agreements among national AI safety institutes are critical to allowing countries to work together on measurement science and the harmonization of testing and other guardrails.

The U.S. must rejoin the digital trade negotiations from which it withdrew in late 2023. U.S. absence opens the door to other countries advancing their own governance models, which could compromise U.S. economic security and put U.S. AI developers at a disadvantage. It could also usher in more punitive rules that impede innovation and hamper democracy. 

The EO calls on federal agencies to increase their AI capacity and workforce and to responsibly adopt AI into their operations while managing the risks of its use. The U.S. must use this government-wide deployment as a testing ground for AI regulation, deepening its regulatory infrastructure and establishing guidelines and regulations that other countries can adopt.

Inclusivity among diverse stakeholders is essential to get AI governance right. Transparency in the U.S. policy process, together with a tradition of facilitating stakeholder input into regulations and laws, positions the U.S. to be a leader in developing inclusive regulations that reflect diverse interests. 

Perhaps the most acute short-term threat AI poses, and which its governance must address, is its impact on democracy. With elections this year in at least 64 countries, including the U.S., the increased sophistication of AI in facilitating misinformation and disinformation is of grave concern. We have already seen deepfakes and misinformation used in political campaigns, including in the U.S.

The U.S. must take steps to ensure election integrity at home and abroad. It must position itself to lead the effort to regulate misinformation, minimize the spread of false information by bad actors, and protect democratic values.

U.S. economic and national security depend on getting AI global governance right. Working with diverse stakeholders to harness the power of AI, investing in training and reskilling, and deploying AI in an inclusive, transparent, humane, and democratic way are all essential to ensuring AI achieves its full promise. The U.S. should use its domestic initiatives to lead the way on global AI governance, ensuring that AI does not become a tool of authoritarians but is instead used to make societies more equitable and to secure a brighter future for all the world's citizens.


The full white paper is available as published by the American Leadership Initiative.