A major personnel change is underway at OpenAI, the AI juggernaut that almost single-handedly introduced the concept of generative AI into the global public discourse with the launch of ChatGPT. Dave Willner, an industry veteran who served as the startup's head of trust and safety, announced in a post on LinkedIn last night that he has left the job and transitioned to an advisory role. He plans to spend more time with his young family, he said. He had been in the role for a year and a half.
His departure comes at a critical time for the world of AI.

Image credits: LinkedIn under a CC BY 2.0 license.
Alongside all the excitement about the capabilities of generative AI platforms – which are based on large language models and can very quickly produce free-form text, images, music and more from simple user prompts – there has been a growing list of questions. How best to regulate activity and companies in this brave new world? How best to mitigate harmful impacts across a whole spectrum of issues? Trust and safety are foundational parts of those conversations.
Just today, OpenAI president Greg Brockman is due to appear at the White House alongside executives from Anthropic, Google, Inflection, Microsoft, Meta and Amazon to endorse voluntary commitments to pursue shared safety and transparency goals ahead of an AI executive order that is in the works. That follows a lot of noise in Europe related to AI regulation, as well as shifting sentiments elsewhere.
The importance of all this is not lost on OpenAI, which has sought to position itself as a conscientious and responsible player in the field.
Willner makes no reference to any of that specifically in his LinkedIn post. Instead, he keeps things high-level, noting only that the demands of his OpenAI job shifted into a “high-intensity phase” after the launch of ChatGPT.
“I’m proud of everything our team has accomplished during my time at OpenAI, and while my job was one of the coolest and most interesting jobs it is possible to have today, it has also grown tremendously in scope and scale since I joined,” he wrote. While he and his wife – Charlotte Willner, who is also a trust and safety specialist – are both committed to always putting family first, he said, “In the months since launching ChatGPT, I’ve found it increasingly difficult to hold up my end of the bargain.”
Willner had been in his OpenAI role for only a year and a half, but he came to it after a long career in the field that included leading trust and safety teams at Facebook and Airbnb.
The Facebook work is especially interesting. There, he was an early employee who helped define the company’s first Community Standards, which are still used as the basis of its approach today.
That was a very formative period for the company, and arguably – given the influence Facebook has had on how social media has developed globally – for the internet and society at large. Some of those years were marked by very outspoken stances on free speech, and on how Facebook needed to resist calls to rein in controversial groups and controversial posts.
One case in point was a very big controversy, played out in public in 2009, over how Facebook handled the accounts and posts of Holocaust deniers. Some employees and outside observers felt Facebook had a duty to take a stand and ban those posts. Others believed doing so amounted to censorship and sent the wrong message about free speech.
Willner was in the latter camp, believing that “hate speech” was not the same as “direct harm” and therefore should not be moderated the same way. “I do not believe that Holocaust denial, as an idea in itself (sic), inherently poses a threat to the safety of others,” he wrote at the time. (For a look back into the TechCrunch archives, see the full article about it here.)
In retrospect, given how everything else has played out, that was a pretty naïve and short-sighted position. But it seems that at least some of those views evolved. By 2019, no longer employed by the social network, he was speaking out against the company’s plans to grant politicians and public figures weaker content moderation exceptions.
But if building the right groundwork at Facebook mattered more than anyone anticipated at the time, that is arguably even more true now for the new wave of technology. According to a New York Times story from less than a month ago, Willner was initially brought on at OpenAI to help it figure out how to keep Dall-E, the startup’s image generator, from being misused for things like the creation of generative AI child pornography.
But as the saying goes, OpenAI (and the industry) needs that policy yesterday. “A year from now, we’re going to reach a very problematic state in this area,” David Thiel, chief technologist of the Stanford Internet Observatory, told the NYT.
Now, with Willner gone, who will lead OpenAI’s charge to tackle that problem?
(We have reached out to OpenAI for comment and will update this post with any responses.)