Anthropic recently launched Claude in the European Union and updated its terms of service (ToS). The company highlighted policy refinements, high-risk use cases and certain disclosure requirements within its usage policy, possibly to align with EU regulations.
Interestingly, the policy changes applied to users worldwide. Soon after, complaints about the model’s performance began surfacing from across the globe.
Why the Change?
Users noticed a marked change in how Claude responded to certain prompts and lines of questioning. While several theories have circulated as to why the company decided to shake things up, the most plausible is that Anthropic is trying to anticipate the upcoming EU AI Act, given its recent deployment in the region.
As one Reddit user put it, the rest “is just a cheap conspiracy. The new ToS is because they are finally deploying to the EU, and therefore need to comply with this,” pointing to the EU’s Artificial Intelligence Act (AIA).
Anthropic has gone all-in on creating a more holistic policy ahead of its launch in the EU and, more recently, in Canada. However, other big tech companies have faced similar problems in the EU.
OpenAI, Meta and Others Follow
Now, Anthropic making sweeping policy changes to fit EU standards isn’t unwarranted. The region is notorious for cracking down on companies that fail to follow through on its regulations.
Case in point: OpenAI was recently in hot water when an Italian regulatory body accused the company of violating EU privacy laws. In January this year, the company was subjected to a fact-finding initiative by Italy’s Data Protection Authority (DPA), which alleged that user data had been used to train OpenAI’s ChatGPT.
This, the regulator said, was in violation of the EU’s General Data Protection Regulation (GDPR).
Similarly, Meta updated its privacy policy, stating, “To properly serve our European communities, the models that power AI at Meta need to be trained on relevant information that reflects the diverse languages, geography and cultural references of the people in Europe who use them.”
However, this too was flagged by the Austrian privacy organisation NOYB, which said that it also violated the GDPR.
With EU countries closely watching how AI companies implement their policies, Anthropic’s need for such a drastic change makes sense. But whether the change is doing good overall is up for debate.
How Bad is the Change?
As per the updated usage policy, Anthropic prohibits the use of its services to compromise child safety, critical infrastructure and personal identities. It has also barred the use of its products to create emotionally and psychologically harmful content, as well as misinformation, including in the context of elections.
Several other changes have been made to the usage policy, as well as to the ToS and privacy policy, including the right to request deletion of personal data and the option to opt out of the sale of one’s data to third parties.
While most would welcome stricter data privacy policies, users have reported that Claude is performing significantly worse this year, particularly with respect to the use cases covered in the updated usage policy.
“Some stuff that’s very open to interpretation or just outright dumb. Want to write some facts about the well-documented health risks of obesity? You’d be violating the “body shaming” rule. You can’t create anything that could be considered ‘emotionally harmful’,” one Reddit user said.
Further, they said such violations would be harder to adjudicate, considering there is no guarantee that those reviewing them would be unbiased or neutral when it comes to political misinformation.
Additionally, sexually explicit content generation has also been significantly restricted. One user said that a story they had been working on with Claude stopped progressing because the model refused to continue, stating that it was uncomfortable with the prompt.
This was backed up by several users who reported the same issue, including one who said that Claude refused to provide quotes from certain fictional characters, citing copyright infringement.
“You can’t ‘promote or advocate for a particular political candidate, party, issue or position’. Want to write a persuasive essay about an issue that can be construed as political? Better not use Claude,” they said.
What’s the Damage?
At the moment, users are willing to give both Claude and Anthropic the benefit of the doubt. With the updated policies, seemingly also prompted by the EU AI Act, Anthropic has made it easier to flag issues with its products and to raise data privacy concerns.
These include two email addresses, one of them for Anthropic’s Data Protection Officer (DPO), through which users can raise complaints or offer feedback; neither was present in the previous iteration of the policy.
Similarly, users believe that while Claude seems to have been handicapped by the new ToS, this could be reversed given enough time and if users keep raising the issues. “Anthropic does seem willing to listen to user feedback – and we’ve seen with the release of the Claude 3 models the dialling back of the refusals. So I think, at some point in the future, Anthropic will loosen up on things like that,” another user said.
Whether this will actually happen, or whether Anthropic will stick to its guns to preserve its user base in the EU and Canada, remains to be seen.
It’s no surprise that the regulatory noose is only tightening around big tech companies, and Claude seems to be the first in a long line of victims of over-regulation.