OpenAI did not immediately respond to questions about Willner’s exit.
Trust and safety departments have taken on a high-profile role in technology companies such as OpenAI, Twitter, Alphabet (NASDAQ:GOOGL) and Meta as they seek to limit the spread of hate speech, misinformation and other harmful content on their platforms.
At the same time, fears that AI will run out of control have risen.
Willner took over his role at OpenAI in February last year, after working at Airbnb and Facebook (NASDAQ:META). He attributed his decision to quit to the growing demands of the job and their impact on his family life.
“Anyone with young children and a super intense job can relate to that tension, I think, and these past few months have really crystallised for me that I was going to have to prioritise one or the other,” he said in the post.
“I’ve moved teaching the kids to swim and ride their bikes to the top of my OKRs (objectives and key results) this summer.”
Microsoft-backed OpenAI, whose AI chatbot ChatGPT has taken the world by storm, has said it depends on its trust and safety team to build “the processes and capabilities to prevent misuse and abuse of AI technologies”.