After wooing consumers and enterprises with its latest model, Claude 3.5 Sonnet, Anthropic is extending its services to the US government and public sector in partnership with Amazon Web Services (AWS).
The company also plans to soon make Claude 3 Haiku and Claude 3 Sonnet available in the AWS Marketplace, specifically for the US Intelligence Community (IC), and in AWS GovCloud.
“We are making Claude available for applications like combating human trafficking, rooting out international corruption, identifying covert influence campaigns, and issuing warnings of potential military activities,” said Anthropic’s chief executive Dario Amodei in an exclusive interview with AIM on the sidelines of the AWS Summit 2024 in Washington, DC.
“I think it’s just really important that we provide these services well. It makes democracy as a whole more effective, and if we provide them poorly, it undermines the notion of democracy,” he said.
Amodei believes that what distinguishes Anthropic from OpenAI and other companies is the “concept of Constitutional AI (CAI)”.
Anthropic’s CAI trains AI systems to align with human values and ethics, drawing on high-level principles from sources like the UN Declaration of Human Rights and ethical guidelines. In the near future, the company plans to offer custom constitutions tailored to specific constituencies or to services that require specific information.
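Conceptually, a constitutional approach works by having a model critique and then revise its own drafts against a written list of principles. The Python sketch below is an illustrative outline of that loop only, not Anthropic’s actual implementation; the `generate` function and the single example principle are hypothetical stand-ins.

```python
# Conceptual sketch of a constitutional-AI-style critique/revision loop.
# `generate` is a hypothetical placeholder for any language-model call;
# the principle below is illustrative, not Anthropic's actual constitution.

CONSTITUTION = [
    "Choose the response that is most supportive of human rights, "
    "echoing principles such as the UN Declaration of Human Rights.",
]

def generate(prompt: str) -> str:
    """Placeholder for a call to a text-generation model."""
    raise NotImplementedError("Plug in a real model call here.")

def constitutional_revision(user_prompt: str) -> str:
    draft = generate(user_prompt)
    for principle in CONSTITUTION:
        # Ask the model to critique its own draft against the principle...
        critique = generate(
            f"Principle: {principle}\nResponse: {draft}\n"
            "Identify any way this response violates the principle."
        )
        # ...then to rewrite the draft so it complies.
        draft = generate(
            f"Principle: {principle}\nResponse: {draft}\nCritique: {critique}\n"
            "Rewrite the response so it fully complies with the principle."
        )
    return draft
```

In practice, the revised outputs produced by such a loop are typically used as training data, so the final model internalises the principles rather than running the critique step at inference time.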
Amodei added that Anthropic wants to help the US government and its citizens by providing them with a tool to easily access information related to voting or healthcare services. “Anthropic, AWS and Accenture recently worked with the DC Department of Health to power a chatbot that allows residents to ask natural language questions about things like nutrition services, vaccinations, schedules, and other types of simple health information,” he said.
When discussing cloud security, he emphasised that AWS has a proven track record of providing government customers with world-class security solutions. “AI needs to empower democracy and allow it to be both better and remain competitive at all stages,” he said, adding that the government can use Claude to improve citizen services, enhance policymaking with data-driven insights, create realistic training scenarios, and streamline document review and preparation.
Responsible AI Matters
Amodei, who co-founded Anthropic, has long been in favour of regulating AI. “AI is a very powerful technology, and our democratic governments do need to step in and set some basic rules of the road. We’re getting to a point where the amount of concentration of power can be greater than that of national economies and national governments, and we don’t want that to happen,” he said in a recent podcast.
With the US elections due later this year, Anthropic has introduced an Acceptable Use Policy (AUP) that prohibits the use of its tools for political campaigning and lobbying. This means candidates cannot use Claude to build chatbots that pretend to be them, and the company does not allow anyone to use Claude for targeted political campaigns.
Anthropic has also been working with government bodies such as the UK’s AI Safety Institute (AISI) to conduct pre-deployment testing of its models.
OpenAI Lobbying the US Government
OpenAI’s chief technology officer Mira Murati said in a recent interview that the company gives the government early access to new AI models and that it has been in favour of more regulation.
“We’ve been advocating for more regulation on the frontier, which will have these amazing capabilities but also have a downside because of misuse. We’ve been very open with policymakers and working with regulators on that,” she said.
Notably, OpenAI has been withholding the release of its video generation model Sora, as well as its Voice Engine tool and GPT-4o’s voice mode. OpenAI is also likely to release GPT-5 only after the elections.
Earlier this year, Murati confirmed that the elections were a major factor in the timing of GPT-5’s release. “We will not be releasing anything that we don’t feel confident on when it comes to how it might affect the global elections or other issues,” she said.
Meanwhile, OpenAI recently appointed retired US Army General Paul M Nakasone to its board of directors. As a first priority, Nakasone has joined the board’s Safety and Security Committee, which is responsible for making recommendations to the board on critical safety and security decisions for all OpenAI projects and operations.
OpenAI has also been working with the US Defense Department on open-source cybersecurity software, collaborating with the Defense Advanced Research Projects Agency (DARPA) on its AI Cyber Challenge, announced last year.
In April, OpenAI CEO Sam Altman, along with tech leaders from Google and Microsoft, joined a Department of Homeland Security (DHS) panel on AI safety to advise on the responsible use of AI in critical sectors like telecommunications and utilities.
Altman has actively engaged with US lawmakers, including testifying before the Senate Judiciary Committee. He proposed a three-point plan for AI regulation, which includes establishing safety standards, requiring independent audits, and creating a federal agency to license high-capability AI models.
There is no denying that both OpenAI and Anthropic are vying for the US government’s favour and contracts. The outcome of these efforts could significantly shape not only their own standing but also the broader adoption and regulation of AI in the public sector.