Microsoft, Google and OpenAI just became charter members of what may be the first true A.I. lobby. Up next: lawmakers write the rules

Microsoft, Alphabet, OpenAI, and Anthropic—four of the most important A.I. developers in the world—announced Wednesday that they would form a trade organization, the Frontier Model Forum, in an effort to help craft new regulations and policies for the emerging technology.

This isn’t the first time A.I. developers have agreed to some form of self-regulation. OpenAI cofounder and CEO Sam Altman invited “regulatory interventions” by governments when he testified before Congress in May, and earlier this month, the White House announced a voluntary agreement on A.I. guardrails with the four companies above, plus Meta, Amazon, and the startup Inflection.

As developers of the technology, these companies are well positioned to lend their technical expertise in a still poorly understood field. However, as is often the case when rival companies form a trade association, questions inevitably arise about whether they might exert undue influence over any future policies. Anytime an organization attempts to influence policymaking, it can be considered lobbying, says Mark Fagan, a lecturer at the Harvard Kennedy School and the author of Lobbying: Business, Law and Public Policy, Why and How 12,000 People Spend $3+ Billion Impacting Our Government.

“I start from the premise that everyone who walks into a policymaker’s office is not an altruist,” Fagan says. “They are there because they are putting forward a position for their supporters. In the case of a corporation, we call those supporters shareholders.”

The underlying technology in A.I. is so new that policymakers will have no choice but to rely on the tech industry’s expertise when crafting eventual laws. Fagan told Fortune that he believes lawmakers will view the Frontier Model Forum “cautiously, but also take advantage of it.”

“There’s a difference between looking from the outside and being on the inside and knowing exactly how that algorithm was built, what the training data was, and what emerged out of it,” Fagan says. “There’s an information asymmetry that exists, and it’s always going to exist. Regulators will always be behind.”

However, regulators are not entirely powerless. It may seem banal to say, but they make the laws and therefore wield a decision-making power that the Googles and Microsofts of the world could never have, according to Fagan.

“You have this interesting tension, where each of the parties has a different asset they are using to help the outcome,” Fagan says. “The asset of the corporation is detailed knowledge and information and money they can use for research. The regulator’s asset, at the end of the day, is that they put a rule in place.” 

This ultimately creates a far more symbiotic relationship than either policymakers or companies would like to admit when it comes to regulating brand-new technologies. Fagan said he believes the onus is on policymakers to ensure that industry input doesn’t result in lax policies virtually co-written by the players who stand to gain from them.

“Where is the burden to ensure that undue influence is not exerted?” Fagan says. “I don’t believe it sits with the organization, whether it’s a nonprofit or a corporation, it is the burden of the policymaker.”

The Frontier Model Forum says it will establish an advisory board in the coming months. The founding members will also draft a charter and governance framework, and consider additional members.

When reached for comment, OpenAI, Google, and Microsoft referred Fortune to the press release announcing the forum. Anthropic did not respond.