
Bigtech Gets an AI Safety Guru

Anthropic, Google, Microsoft, and OpenAI have jointly revealed the appointment of the executive director of the Frontier Model Forum


After uniting in July to announce the formation of the Frontier Model Forum, Anthropic, Google, Microsoft, and OpenAI have jointly revealed the appointment of Chris Meserole as the forum’s inaugural executive director. Alongside the appointment, they have introduced an AI Safety Fund, committing over $10 million to support research in AI safety.

Chris Meserole brings a wealth of experience in technology policy, particularly in governing and securing emerging technologies and their future applications. In his new role, Meserole will advance AI safety research to ensure the responsible development of frontier models and mitigate potential risks. He will also oversee the identification of best safety practices for these advanced AI models.

Meserole expressed his enthusiasm for the challenges ahead, emphasising the need to safely develop and evaluate powerful AI models. “The most powerful AI models hold enormous promise for society, but to realise their potential we need to better understand how to safely develop and evaluate them. I’m excited to take on that challenge with the Frontier Model Forum,” said Chris Meserole.

Who is Chris Meserole?

Before joining the Frontier Model Forum, Meserole served as the director of the AI and Emerging Technology Initiative at the Brookings Institution, where he was also a fellow in the Foreign Policy program.

The Initiative, founded in 2018, sought to advance responsible AI governance by supporting a diverse array of influential projects within the Brookings Institution. These initiatives encompassed research on the impact of AI on issues like bias and discrimination, its consequences for global inequality, and its implications for democratic legitimacy.

Throughout his career, Meserole has concentrated on safeguarding large-scale AI systems from the risks arising from accidental or malicious use. His work includes co-leading the first global multi-stakeholder group on recommendation algorithms and violent extremism for the Global Internet Forum to Counter Terrorism. He has also published and provided testimony on the challenges associated with AI-enabled surveillance and repression.

Additionally, Meserole organised a US-China dialogue on AI and national security, with a specific focus on AI safety and testing and evaluation. He’s a member of the Christchurch Call Advisory Network and played a pivotal role in the session on algorithmic transparency at the 2022 Christchurch Call Leadership Summit, presided over by President Macron and Prime Minister Ardern.

Meserole’s background lies in interpretable machine learning and computational social science. His extensive knowledge has made him a trusted advisor to prominent figures in government, industry, and civil society. His research has been featured in notable publications such as The New Yorker, The New York Times, Foreign Affairs, Foreign Policy, Wired, and more.

What’s next for the forum?

The Frontier Model Forum was established to share knowledge with policymakers, academics, civil society, and other stakeholders, to promote responsible AI development, and to support efforts to leverage AI in addressing major societal challenges.

The announcement says that as AI capabilities continue to advance, there is a growing need for academic research on AI safety. In response, Anthropic, Google, Microsoft, and OpenAI, along with philanthropic partners including the Patrick J. McGovern Foundation, the David and Lucile Packard Foundation, Eric Schmidt, and Jaan Tallinn, have launched the AI Safety Fund with an initial funding commitment exceeding $10 million.

The AI Safety Fund aims to support independent researchers affiliated with academic institutions, research centres, and startups globally. The focus will be on developing model evaluations and red teaming techniques to assess and test the potentially dangerous capabilities of frontier AI systems.

This funding is expected to elevate safety and security standards while providing insights for industry, governments, and civil society to address AI challenges.

Additionally, a responsible disclosure process is being developed, allowing frontier AI labs to share information regarding vulnerabilities or potentially dangerous capabilities within frontier AI models, along with their mitigations. This collective research will serve as a case study for refining and implementing responsible disclosure processes.

In the near future, the Frontier Model Forum aims to establish an advisory board to guide its strategy and priorities, drawing from a diverse range of perspectives and expertise.

The AI Safety Fund will issue its first call for proposals in the coming months, with grants expected to follow soon after.

The forum will continue to release technical findings as they become available. It also aims to deepen its engagement with the broader research community and collaborate with organisations such as the Partnership on AI, MLCommons, and other leading NGOs, government entities, and multinational organisations to ensure the responsible development and safe use of AI for the benefit of society.



Mohit Pandey

Mohit dives deep into the AI world to bring out information in simple, explainable, and sometimes funny words.