Apple CEO Tim Cook says he uses OpenAI’s ChatGPT

OpenAI’s buzzy chatbot, ChatGPT, has attracted a lot of attention in recent months with its simple interface and wide range of uses. Following its November launch, the artificial intelligence tool gained 100 million monthly active users in just two months.

It turns out one of those millions of users is Apple CEO Tim Cook. 

“Of course I use it,” Cook said in an interview with Good Morning America that aired Tuesday. “I’m excited about it. I think there’s some unique applications for it and you can bet that it’s something that we’re looking at closely.”

The CEO didn’t elaborate on how specifically the company was looking to use the new technology, but he did acknowledge that Apple’s approach to anything A.I. differs from that of its tech peers.

“We do integrate it into our products today—people don’t necessarily think of it as A.I.,” Cook said. 

Apple has stayed relatively quiet on the subject of A.I. even as the technology has moved front and center in the tech conversation. In February, Google introduced its Bard chatbot and Microsoft announced a partnership with ChatGPT maker OpenAI that would power its Bing search engine, but Apple had no sweeping new products or A.I. breakthroughs to tout at the time.

Apple once had a lead in A.I. with its voice assistant Siri, launched in 2011. But much of what the company does with A.I. now focuses on smaller-scale additions to app features that improve the user experience. For instance, the Cupertino, Calif.-based company announced its new mixed-reality headset on Monday, along with a slew of new features for its devices, some of which are powered by A.I.

Apple has also reportedly clamped down on some of its employees’ use of ChatGPT and similar tools, to prevent sensitive data from being leaked to the chatbot and later used to train its underlying model.

Apple did not immediately return Fortune’s request for comment.

A.I. regulation

Companies working with A.I. are also concerned about how the new technology will be regulated. A.I. tools have shared factually inaccurate information, become “unhinged,” and been misused by users. Experts worry that such incidents will happen more frequently as these tools become widely available.

Regulations are crucial in the A.I. space, Cook said.

“You worry about things like bias, things like misinformation, maybe worse in some cases,” Cook said. “Regulation is something that’s needed in this space. I think guardrails are needed.”

But despite the concerns, he still sees a lot of opportunity in the A.I. developments that tech companies are pouring resources into.

“What people are now talking about are these large language models, and I think they have great promise,” the Apple chief said in the Good Morning America interview, echoing a point he made on the company’s earnings call last month. “I do think that it’s so important to be very deliberate and very thoughtful in the development and deployment of these.”

He added that because the technology is so powerful, the onus is on companies to regulate themselves as well as to comply with any guardrails that are put in place.

Experts on A.I. fears

Cook isn’t the only tech executive to urge the sector to be deliberate as it considers what the large-scale adoption of A.I. could do. 

Last week, a group of technologists and A.I. experts, including OpenAI CEO Sam Altman, signed a letter warning that A.I. posed a “risk of extinction” akin to pandemics and nuclear war. The signatories wanted to “open up discussions” about the technology’s threats.

But that wasn’t the first letter warning of how dangerous A.I. could be. In March, Tesla CEO Elon Musk, Apple cofounder Steve Wozniak, and thousands of other tech experts and academics called for a six-month pause on developing advanced A.I. that was becoming “human-competitive at general tasks” so that a more robust system of rules could be put in place.

Work on A.I. regulation began in the U.S. in April, but little has come of it in terms of firm guidelines or safeguards. Other parts of the world are making more headway, such as Europe’s A.I. Act, which would categorize uses of the technology by risk level.