The announcement of OpenAI’s new Safety and Security Committee, tasked with making critical safety and security decisions for OpenAI projects and operations, got the internet buzzing, particularly because CEO Sam Altman is a member of it too.
The discussions revolved around a likely early arrival of GPT-5 and how the committee serves as a safety bunker for OpenAI. However, the most interesting aspect of the announcement is the committee’s membership.
In addition to being led by OpenAI board directors, the group will be guided by technical and policy experts. With Altman in the lead, here’s the team spearheading OpenAI’s new Safety and Security Committee.
Bret Taylor
American entrepreneur and computer programmer Bret Taylor joined the board after Altman was reinstated as CEO following his brief ousting. A former co-CEO of Salesforce, Taylor brings vast experience, having also served on the boards of tech companies such as Twitter and Shopify. He was also a co-creator of Google Maps.
Taylor is a close friend of Altman’s who stood by him during last year’s ousting episode. Recently, Taylor and fellow board member Larry Summers responded sharply to Helen Toner, the former board member removed after Altman’s reinstatement as CEO, who accused Altman of lying to the board multiple times and withholding information, citing these among the reasons for his ousting.
Taylor and Summers rejected Toner’s claims and expressed disappointment that she continued to discuss these issues.
Adam D’Angelo
Adam D’Angelo, co-founder and CEO of Quora and former CTO of Facebook, joined the board as an independent director in 2018. He was the only board member whose position remained unaffected through Altman’s ousting and reinstatement as CEO.
D’Angelo is also the founder of Poe, a multi-chatbot platform that lets users interact with many of the LLMs available on the market.
Jakub Pachocki
OpenAI’s new chief scientist, Jakub Pachocki, took over Ilya Sutskever’s role upon his exit. Leading OpenAI’s research efforts, Pachocki is one of the technical experts on the new safety committee. In Sutskever’s exit announcement on X, Pachocki was credited with ‘excellent research leadership’.
Born in Poland, Pachocki excelled in programming contests during his studies and even won $10,000 at the Google Code Jam in 2012. After completing his computer science degree at the University of Warsaw in 2013, he earned a PhD in the same subject from Carnegie Mellon University.
Interestingly, Pachocki took up the role of director of research in October last year, a month before Altman’s sacking.
John Schulman
A co-founder of OpenAI and its head of alignment science, John Schulman is a prominent researcher. At OpenAI, he is focussed on creating and improving algorithms that allow machines to learn from interactions with their environment.
Schulman pursued his undergraduate studies in physics at Caltech and began graduate studies in neuroscience at UC Berkeley before switching fields to complete his PhD in electrical engineering and computer sciences. His academic work laid the foundation for his future research in reinforcement learning and deep learning.
In a recent podcast with Dwarkesh Patel, Schulman spoke about his views on AGI safety. “If AGI came way sooner than expected, we would definitely want to be careful about it. We might want to slow down a little bit on training and deployment until we’re pretty sure we know we can deal with it safely,” he said.
Matthew Knight
The head of security at OpenAI, Matthew Knight, joined the company in 2020. With a strong background in hardware, software, and wireless security, Knight leads the efforts to ensure the safety and security of OpenAI’s AI models and systems. This also includes ensuring the robustness of AI models against adversarial attacks.
Prior to joining OpenAI, Knight co-founded Agitator, a startup that developed secure and resilient dynamic radio frequency spectrum management technologies.
Lilian Weng
The head of safety systems at OpenAI, Lilian Weng, joined the company in 2018 as a research scientist. Her work at OpenAI has largely focused on developing algorithms that enable machines to learn, adapt, and perform complex tasks autonomously.
Weng has contributed to the development of advanced reinforcement learning techniques, which are used to train AI agents to make decisions by interacting with their environment and learning from the outcomes of their actions.
She earned her PhD in electrical engineering and computer science from the Massachusetts Institute of Technology.
Aleksander Madry
The head of preparedness at OpenAI, Aleksander Madry, is a professor at MIT in the department of electrical engineering and computer science. He earned his PhD in computer science from MIT and has since become a leading figure in AI research, particularly focusing on machine learning, optimisation, and algorithmic robustness.
Nicole Seligman
A member of the board of directors at OpenAI, Nicole Seligman is a corporate and civic leader and lawyer. A former EVP and general counsel at Sony Corporation, Seligman currently serves on three public company boards – Paramount Global, MeiraGTx Holdings PLC, and Intuitive Machines Inc. She has made significant contributions to the fields of law and corporate governance.