
The director of trust and safety at OpenAI is resigning.

On Thursday, OpenAI’s head of trust and safety announced his resignation.

In a LinkedIn post, Dave Willner, who has led the trust and safety team at the AI company since February 2022, announced that he was “leaving OpenAI as an employee and transitioning into an advisory role” so that he could spend more time with his family.

The timing of Willner’s departure couldn’t be worse for OpenAI. The company has faced increasing scrutiny from lawmakers, regulators, and the public over the safety of its products and their possible repercussions since its AI chatbot, ChatGPT, surged in popularity late last year.

In testimony before a Senate panel in March, OpenAI CEO Sam Altman advocated for regulation of artificial intelligence. One of “my areas of greatest concern,” he told lawmakers, is the possibility that AI will be used to manipulate voters and spread disinformation, a risk heightened by the upcoming election and the rapid improvement of these models.

Willner, who previously worked at Facebook and Airbnb, wrote on Thursday that “OpenAI is going through a high-intensity phase in its development” and that his position has “grown dramatically in scope and scale since I first joined.”

OpenAI released a statement announcing Willner’s departure, calling his contributions “fundamental” to establishing the company’s commitment to the safe and responsible use of its technology. According to OpenAI, Chief Technology Officer Mira Murati will manage the trust and safety team on an interim basis, and Willner will continue to advise the group through the end of the year.

The statement continued: “To help us accomplish our goals, we need a technically savvy leader who can coordinate the creation of safeguards for our product’s operation and its potential for expansion.”

Willner’s departure comes as OpenAI continues to work with U.S. and international regulators to set boundaries for rapidly evolving AI. On Friday, the White House announced an agreement with seven major AI companies, including OpenAI, on voluntary pledges to improve the safety and trustworthiness of AI systems and products. Among the commitments, the companies pledged to clearly label AI-generated content and to subject their systems to independent testing before releasing them to the public.
