Enterprises love to RAG, but not everyone is great at it. The much-touted solution for hallucinations and for bringing new information into LLM systems is often difficult to maintain, and harder still to evaluate for whether it is even getting the right answers. This is where Ragas comes into the picture.
With over 6,000 stars on GitHub and an active Discord community over 1,300 members strong, Ragas was co-founded by Shahul ES and his college friend Jithin James. The YC-backed, Bengaluru-based AI startup is building an open-source standard for the evaluation of RAG-based applications.
Several engineering teams from companies such as Microsoft, IBM, Baidu, Cisco, AWS, Databricks, and Adobe rely on Ragas' offerings to keep their pipelines pristine. Ragas already processes 20 million evaluations monthly for companies and is growing at 70% month over month.
The team has partnered with companies such as LlamaIndex and LangChain to provide its solutions and has been widely appreciated by the community. But what makes them special?
Not Everyone Can RAG
The idea started when they were building LLM applications and noticed a glaring gap in the market: there was no effective way to evaluate the performance of these systems. “We realised there were no standardised evaluation methods for these systems. So, we put out a small open-source project, and the response was overwhelming,” Shahul explained while speaking with AIM.
By mid-2023, Ragas had gained significant traction, even catching the attention of OpenAI, which featured their product during a DevDay event. The startup continued to iterate on their product, receiving positive feedback from major players in the tech industry. “We started getting more attention, and we applied to the Y Combinator (YC) Fall 2023 batch and got selected,” Shahul said, reflecting on their rapid growth.
Ragas’s core offering is its open-source engine for automated evaluation of RAG systems, but the startup is also exploring additional features that cater specifically to enterprise needs. “We are focusing on bringing more automation into the evaluation process,” Shahul said. The goal is to save developers’ time by automating the boring parts of the job. That’s why enterprises use Ragas.
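To get a sense of what that automated evaluation looks like in practice, here is a minimal sketch of a Ragas run, roughly following the library's v0.1-style Python API. The sample data, the choice of metrics, and the exact dataset field names are illustrative and may differ between versions of the library.

```python
# Minimal sketch of a Ragas evaluation run (ragas v0.1-style API; details may vary by version).
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import faithfulness, answer_relevancy, context_precision

# A tiny hand-built sample standing in for the outputs of a real RAG pipeline.
samples = {
    "question": ["When was Ragas open-sourced?"],
    "answer": ["Ragas was released as an open-source project in 2023."],
    "contexts": [[
        "Ragas is an open-source library for evaluating RAG pipelines, first released in 2023."
    ]],
    "ground_truth": ["2023"],
}

dataset = Dataset.from_dict(samples)

# Scores each sample with LLM-assisted metrics; an LLM API key must be configured for ragas to call.
result = evaluate(dataset, metrics=[faithfulness, answer_relevancy, context_precision])
print(result)  # dict-like scores, e.g. faithfulness, answer_relevancy, context_precision
```

Each metric produces a score between 0 and 1 per sample, so a team can track regressions in retrieval or generation quality across pipeline changes instead of eyeballing outputs by hand.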
As Ragas continues to evolve, Shahul emphasised the importance of their open-source strategy. “We want to build something that becomes the standard for evaluation in LLM applications. Our vision is that when someone thinks about evaluation, they think of Ragas.”
While speaking with AIM in 2022, Kochi-based Shahul, who happens to be a Kaggle Grandmaster, revealed that he used to miss classes and spend his time Kaggling.
The Love for Developers
“YC was a game-changer for us,” Shahul noted. “Being in San Francisco allowed us to learn from some of the best in the industry. We understood what it takes to build a successful product and the frameworks needed to scale.”
Despite their global ambitions, Ragas remains deeply rooted in India. “Most of our hires are in Bengaluru,” Shahul said. “We have a strong network here and are committed to providing opportunities to high-quality engineers who may not have access to state-of-the-art projects.”
“We have been working on AI and ML since college,” Shahul said. “After graduating, we worked in remote startups for three years, contributing to open source projects. In 2023, we decided to experiment and see if we could build something of our own. That’s when we quit our jobs and started Ragas.”
Looking ahead, Ragas is planning to expand its product offerings to cover a broader range of LLM applications. “We’re very excited about our upcoming release, v0.2. It’s about expanding beyond RAG systems to include more complex applications, like agent tool-based calls,” Shahul shared.
Shahul and his team are focused on building a solution that not only meets the current needs of developers and enterprises, but also anticipates the challenges of the future. “We are building something that developers love, and that’s our core philosophy,” Shahul concluded.