One doesn’t have to look far to find nefarious examples of artificial intelligence. OpenAI’s newest A.I. language model, GPT-3, was quickly co-opted by users to tell them how to shoplift and make explosives, and it took just one weekend for Meta’s new A.I. chatbot to reply to users with anti-Semitic comments.
As A.I. becomes increasingly advanced, companies working to explore this world have to tread deliberately and carefully. James Manyika, senior vice president of technology and society at Google, said there’s a “whole range” of misuses that the search giant has to be wary of as it builds out its own A.I. ambitions.
Manyika addressed the pitfalls of the trendy technology on stage at Fortune‘s Brainstorm A.I. conference on Monday, covering the impact on labor markets, toxicity, and bias. He said he wondered “when is it going to be appropriate to use” this technology, and “quite frankly, how to regulate” it.
The regulatory and policy landscape for A.I. still has a long way to go. Some suggest that the technology is too new for heavy regulation to be introduced, while others (like Tesla CEO Elon Musk) say we need preventive government intervention.
“I actually am recruiting many of us to embrace regulation because we have to be thoughtful about ‘What is the proper way to use these technologies?’” Manyika said, adding that we need to make sure we’re using A.I. in the most useful and appropriate ways with sufficient oversight.
Manyika started as Google’s first SVP of technology and society in January, reporting directly to CEO Sundar Pichai. His role is to advance the company’s understanding of how technology affects society, the economy, and the environment.
“My job is not so much to monitor, but to work with our teams to make sure we are building the most useful technologies and doing it responsibly,” Manyika said.
His role comes with a lot of baggage, too, as Google seeks to improve its image after the departure of the firm’s technical co-lead of the Ethical Artificial Intelligence team, Timnit Gebru, who was critical of natural language processing models at the firm.
On stage, Manyika didn’t address the controversies surrounding Google’s A.I. ventures, but instead focused on the road ahead for the firm.
“You’re gonna see a whole range of new products that are only possible through A.I. from Google,” Manyika said.