AI Regulation Efforts Are Picking Up: Key Takeaways for Businesses

Bradley Intelligence Report

Client Alert

Author(s): William Samir Simpson (Bradley, Analyst)

Several significant developments in national and multilateral efforts to regulate artificial intelligence (AI) occurred in the past week, with immediate effects for the private and public sectors. The most notable were President Biden’s Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, issued on October 30, and the United Kingdom’s 2023 AI Safety Summit, held on November 1-2, which produced the groundbreaking Bletchley Declaration, the first clear international agreement on AI. As the pace of regulatory efforts picks up in the United States and beyond, it will be essential for businesses to understand what is being covered and how their operations may be affected going forward as AI continues to permeate and reshape virtually every facet of economic activity.

Emerging Regulatory Structures

Biden’s executive order represents the most substantial effort yet to regulate AI at the U.S. federal level. Building on voluntary commitments that leading technology companies made earlier this year regarding security testing and the identification of AI-generated content, it establishes guidelines and best practices for safely developing and deploying AI systems. The order gives the Department of Commerce authority to monitor the training, development, and production of highly advanced AI models, requiring the entities producing them to report regularly on those activities. Notably, the executive order invokes the Defense Production Act to explicitly cover AI technologies and to justify requirements for private companies to share information with the government. The national security emphasis and the power allotted to the president over AI technologies mark a considerable shift from the U.S. government’s previous laissez-faire approach to AI and data privacy regulation, which relied largely on private firms to self-regulate and lead the way on best practices.

Meanwhile, the 2023 AI Safety Summit, a key global AI coordination initiative of UK Prime Minister Rishi Sunak, and the resulting Bletchley Declaration similarly highlight a paradigm shift in how governments approach AI. The U.S., China, UK, European Union, Japan, India, and 23 other countries jointly pledged to contain the risks posed by AI, paying special attention to the potential of advanced AI systems to cause “serious, even catastrophic, harm, either deliberate or unintentional.” The declaration also calls for “internationally inclusive” research on future advanced AI models that complements existing international structures, such as the UN, G7, OECD, and Global Partnership on AI, as well as “other international initiatives,” a phrase that seemingly alludes to the competing AI safety institutes announced by both the U.S. and the UK this week. The next AI Safety Summit is planned for South Korea in mid-2024, with another to follow in France later next year.

Competing National Priorities

Although national regulatory authorities recognize that the risks posed by AI are inherently international in nature, their approaches continue to differ significantly. The EU AI Act, which has been in the works since 2021 and is expected to be finalized by the end of the year, establishes a range of transparency and safety regulations for AI systems used in EU member states, including mandatory disclosure of AI-generated content, requirements to publish summaries of copyrighted data used for training, and directives for human oversight in the design and testing phases to prevent AI models from generating harmful or illegal content.

Biden’s executive order goes even further than the EU, considering a broader array of risks and, in line with its security emphasis, establishing a cybersecurity program to develop AI tools. The UK, on the other hand, takes a “pro-innovation” approach to regulating AI within its borders; Sunak, despite hosting the first global AI summit, has warned against rushing to regulate the technology before fully understanding its risks. Other countries active in the global technology industry, such as India and Japan, are likewise taking a wait-and-see approach, though Japan has reportedly been in talks with the EU to align its efforts with the anticipated EU AI Act, albeit leaning toward less restrictive rules for businesses. Meanwhile, China is already ahead of the curve, having passed a law in August that restricts the training data and output of companies providing generative AI services to consumers. However, the government’s recent crackdowns on the country’s technology giants, and its prohibition on content deemed to violate the interests of the ruling Communist Party, raise concerns that such efforts aim to strengthen political control as much as to mitigate AI safety risks.

Biden’s executive order, for its part, reflects the administration’s desire to cement U.S. leadership in global AI policy, particularly as it recognizes growing AI competition from China. Commentators noted that the order and Sunak’s AI Safety Summit fell in the same week, suggesting that national ambitions to win the AI regulation race are overshadowing the spirit of international cooperation. The order also specifically requires reporting of foreign access to U.S. cloud services, in line with recent rules barring U.S. companies and investors from supporting China’s domestic development of advanced AI technology; geopolitical motivations, shaped by an increasingly adversarial relationship with China over access to critical technologies, are thus clearly part of what is driving U.S. actions. With the U.S. now positioned as a flag bearer for AI rules in the democratic world, we may expect Biden’s regulatory roadmap to converge with those of like-minded partners in the EU, UK, Japan, and India, setting up one camp on AI adoption in contrast to another led by China.

Effects on the Private Sector

Political considerations aside, the developments of this past week will have critical implications for businesses. Despite setting innovation as a priority, Biden’s order is already facing pushback from the private sector over the use of the Defense Production Act to obtain data from AI model producers; critics argue that forcefully applying a federal power traditionally reserved for national emergencies will inherently stifle companies’ ability to innovate in developing AI systems. In turn, companies in industries likely to adopt AI at scale (such as legal, finance, education, and healthcare) could see the usefulness of their AI services plateau, leading to weaker-than-expected returns on investment. Additionally, the reporting requirements appear to target only the major U.S. AI players for now, while reserving room for future actions affecting all AI systems. The concern is that this provision could lead to regulatory capture, in which the current major AI firms, which have contributed heavily to discussions on U.S. AI regulation, influence and soften the restrictions placed on them as the rules take hold. This could disadvantage competitors not already in the space and create challenges for businesses across industries adopting AI in their operations, reducing their leverage to negotiate lower pricing and liability assurances from AI providers.

As for the AI Safety Summit, while it produced a breakthrough global agreement on AI safety, national approaches remain unaligned, presenting issues for companies operating across multiple jurisdictions: AI practices that comply with U.S. protocols may not satisfy rules set by other regulators, such as the EU. The divergence between EU and UK regulations poses further hurdles for companies accustomed to operating in European markets. Complying with regulations worldwide could impose increased costs and difficulties on global businesses, which may choose either to meet the strictest requirements across the board, limiting AI use in all jurisdictions, or to split AI use rules between different offices, inhibiting technology cooperation across regions.

These short-term concerns notwithstanding, there are plenty of reasons for businesses to be optimistic. The U.S. government’s roadmap for a comprehensive AI regulation system can streamline the adoption of AI systems to improve workflows across key industries. Given AI’s ability to reduce the time needed for research, analysis, report drafting, and data management, among other routine tasks, the medium- to long-term benefits for labor productivity are clear. The threat of job losses from further automation remains pertinent, though the executive order contains specific provisions requiring the Department of Labor to report to the president on how agencies can support workers displaced by AI systems, while focusing existing AI adoption efforts on occupations that the department has concluded lack sufficient available workers (termed “Schedule A” occupations), such as nurses and physical therapists. The safety requirements under this regulation could increase labor demand for skilled professionals outside the technology sector to assist with training and developing new AI models, yet may negatively affect the firms and industries from which these workers migrate.

While the UK’s AI Safety Summit did not result in any concrete alignment of national regulatory structures, the Bletchley Declaration’s stated intent to foster international cooperation on AI risks, along with the plans for further summits next year, offers hope that at least some regulatory convergence between the U.S. and like-minded partners will take place soon. All in all, the key takeaway for businesses is that governments are scrambling to address the risks of AI while harnessing its benefits, and it will be vital to track ongoing developments as regulatory frameworks shape the adoption of AI into the economy and society at large.