
OpenAI Doesn’t Need Safety Lessons from Ilya’s Safe Superintelligence 


Mira Murati, chief technology officer of OpenAI, said during a recent interview at the AI Everywhere event at Dartmouth College that OpenAI gives the government early access to new AI models, and that the company has been in favour of more regulation.

“We’ve been advocating for more regulation on the frontier models, which will have these amazing capabilities and also have a downside because of misuse. We’ve been very open with policymakers and working with regulators on that,” she said.

The discussion, moderated by Dartmouth trustee Jeffrey Blackburn, covered both the potential benefits and the inherent challenges of AI advancements. 

“In terms of safety, security, and the societal aspects of this work, I think these things are not an afterthought. It can’t be that you sort of develop the technology and then you have to figure out how to deal with these issues,” said Murati. 

“You have to build them alongside the technology and actually in a deeply embedded way to get it right. And for capabilities and safety, they’re actually not separate domains. They go hand in hand,” she added. 

Notably, OpenAI recently appointed retired US Army General Paul M Nakasone to its board of directors. As a priority, Nakasone will join the board’s Safety and Security Committee, which is responsible for making recommendations to the board on critical safety and security decisions for all OpenAI projects and operations.

Murati’s optimism about AI stems from the belief that smarter AI can lead to safer and more beneficial outcomes. She emphasised that the future of AI lies in creating systems that are not only more intelligent but also more secure. This dual focus on capability and safety is crucial as AI becomes increasingly integrated into various aspects of society.

Murati’s Perspective 

According to Murati, OpenAI prioritises safety, usability, and reducing biases, aiming to democratise creativity and free up humans for higher-level tasks.

In a recent post on X, she said that to make sure these technologies are developed and used in a way that does the most good and the least harm, they work closely with red-teaming experts from early stages of research. 

“We also use an iterative approach, gradually releasing tools and carefully studying how they impact the real world to guide future development. Protecting and strengthening the most valuable aspects of creativity is fundamental to our human experience,” she said.

Giving governments early access is a positive step towards ensuring the responsible use of AI. It allows them to better understand the capabilities and limitations of the technology, and to develop appropriate regulations that minimise potential risks.

Meanwhile, OpenAI’s former chief scientist, Ilya Sutskever, recently started his own company called Safe Superintelligence. He left the company in May 2024 amid reports of tension with CEO Sam Altman over AGI safety and the rapid pace of advancements at OpenAI. 

Seemingly in response to this, and to address safety concerns, OpenAI formed its Safety and Security Committee, led by directors Bret Taylor (chair), Adam D’Angelo, Nicole Seligman, and Altman.

This committee makes recommendations on critical safety and security decisions for OpenAI projects and operations as the company trains its next frontier model, which is expected to advance AGI capabilities.

Exploring AI’s Potential

When asked how OpenAI’s safety work aligns with its development efforts, and whether she believes it falls within the company’s domain or requires external regulation, Murati candidly replied, “My perspective on this is that this is our technology. So it’s our responsibility [to see] how it’s used.” 

She added that it’s also a shared responsibility across society, including civil society, government, content makers, the media, and so on, to figure out how the technology is used. “But in order to make it a shared responsibility, you need to bring people along. You need to give them access, and tools to understand and to provide guardrails,” she said. 

Furthermore, the discussion highlighted the transformative impact of ChatGPT in bringing AI into the public consciousness. By providing people with a tangible, interactive experience of AI, ChatGPT has simplified the technology and made its capabilities and risks more comprehensible.  

Moreover, when people are aware of the potential and limitations of AI, they are better equipped to advocate for appropriate uses and safeguards. 

There is a need for a comprehensive and collaborative approach to AI regulation and safety. By focusing on risk minimisation, involving governments and fostering public awareness, we can better prepare for the transformative impact of AI on society.

This balanced approach can help ensure that AI is developed and used responsibly, benefiting individuals, businesses, and society as a whole.


Tarunya S

As a passionate enthusiast of caffeine and journalism, I transform tech into words. I enjoy mountain hikes as much as binge-watching new Netflix series.