America leads the world in AI–but we could fall behind on AI regulation by the end of 2023

Nine months ago, ChatGPT was released, capturing the public’s attention unlike any innovation in recent memory. The excitement around AI’s opportunities also came with legitimate concerns over its potential negative impacts, as well as calls by industry and government for significant and enforceable rules. However, the window for the U.S. to influence the global debate on how to regulate AI is rapidly closing.

While much AI innovation is happening in the U.S., other governments around the world are moving more quickly to shape future rules. In November, the U.K. is hosting a global AI summit. By the end of 2023, the European Union will have finalized its AI Act, the most comprehensive AI law enacted to date. Japan will have finalized its AI policy approach while simultaneously leading a G7 effort to establish common standards for AI governance.

Enterprise software companies, the makers of AI systems, began calling for U.S. legislation nearly two years ago, and the need to act has only grown more urgent.

There are fundamental objectives on which everyone should agree: AI, in any form, should not be used to commit illegal acts. It should not be used to compromise privacy, facilitate cyberattacks, exacerbate discrimination, or create physical harm. AI that is developed and deployed responsibly, improves our lives, and makes us safer should flourish.

Congress should take advantage of the considerable work that governmental organizations, civil society advocates, and industry groups have already put into identifying the risks of using AI in various contexts and the concrete steps organizations can take to mitigate those risks. Although these proposals have important differences, they collectively form a basis for action.

There are signs that U.S. lawmakers want to act. Several members of Congress are drafting or have already introduced AI-related legislation, and Senators Schumer, Young, Heinrich, and Rounds have launched a process intended to develop bipartisan AI legislation within “months.”

However, other leaders suggest that Congress may never be able to pass meaningful AI legislation, given the complexity of the technology and lawmakers' limited understanding of it.

It is important to study the issue before acting, but careful study should not become a reason for inaction on a major tech policy issue. Nobody fully knows what all of AI's positive and negative effects will be, including the new implications of generative AI, but we do not need to wait to set basic rules that guard against the risks that are clear today.

Legislation should require companies that develop or use AI systems in high-risk contexts to identify and mitigate the potential harms of those systems. Specifically, legislation should require companies to conduct impact assessments for high-risk AI systems, so that those who develop and deploy AI find and address potential risks. Impact assessments are already used widely in a range of other fields–from environmental protection to data security–as an accountability mechanism. The same approach can work for AI.

Setting thoughtful rules for AI is central to the vitality of our economy. Industries of all kinds and businesses of all sizes are looking for ways to use AI to grow. Countries that best facilitate responsible and broad-based AI adoption will see the greatest economic and job growth in the coming years. But first, governments must establish strong laws that build trust and elevate standards for how AI is used.

Passing any new law is no simple feat. Shaping a meaningful global discussion is time-consuming and difficult. The U.S. must not squander this rapidly closing window of opportunity to lead on AI legislation.

Victoria Espinel is the CEO of BSA – The Software Alliance, a global trade organization representing the enterprise software industry.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.