Amazon has announced a global university competition focused on responsible AI for LLM coding security. The “Amazon Trusted AI Challenge” is offering $250,000 in sponsorship and monthly AWS credits to each of the 10 teams selected for the competition, which begins in November 2024. The winning team stands to receive $700,000 in cash prizes.
The students will participate in a tournament-style competition in which they either develop AI models or red-team them, with the aim of improving the AI user experience, preventing misuse, and helping users write safer code.
Model developers will focus on adding security features to AI models that generate code, while testers will create automated methods to probe these models. Each round of the competition will also involve multiple interactions, allowing teams to refine their models and techniques by identifying strengths and weaknesses.
Improving AI Through Red Teaming
“We are focusing on advancing the capabilities of coding LLMs, exploring new techniques to automatically identify possible vulnerabilities and effectively secure these models,” said Rohit Prasad, senior vice president and head scientist, Amazon AGI.
“The goal of the Amazon Trusted AI Challenge is to see how students’ innovations can help forge a future where generative AI is consistently developed in a way that maintains trust, while highlighting effective methods for safeguarding LLMs against misuse to enhance their security,” said Prasad.
Amazon’s AI challenge is a promising way to build more robust and secure coding systems by collaborating with some of the brightest young minds in the field. Similar approaches have been adopted by other companies, including OpenAI, which runs cybersecurity and bug-bounty challenges; its most recent competition invited participants to help frame ways to deploy responsible AI models.