Mohit Pandey, Author at AIM https://analyticsindiamag.com/author/mohit-pandey/ — Artificial Intelligence, and its Commercial, Social and Political Impact. Tue, 03 Sep 2024 10:09:18 +0000

Will AI Coding Tools Mark the End of IDEs? https://analyticsindiamag.com/developers-corner/will-ai-coding-tools-mark-the-end-of-ides/ Tue, 03 Sep 2024 09:47:42 +0000

All IDEs will soon be AI assisted.

The post Will AI Coding Tools Mark the End of IDEs? appeared first on AIM.


Do we even need to learn coding anymore? The question sounds timely as tools such as Cursor and Claude Artifacts let people build apps without writing a single line of code. Cursor, essentially a fork of the VS Code IDE, is making developers wonder if this shift to building apps in natural language marks the end of traditional IDEs.


An IDE, or integrated development environment, is a code editor that lets developers write, test, and debug code, combining multiple tools and features into a single environment. IDEs are central to how most professional software gets written.

VS Code, the most popular IDE, already offers assistive features such as IntelliSense code completion. However, with AI editors like Cursor, Zed, Magic, Codeium, and most recently Melty, keeping a traditional IDE in a developer's workflow can start to feel redundant.

Or Has It?

There was a recent surge of developers uninstalling VS Code in favour of Cursor. But VS Code could easily ship an update adding AI-assisted coding to its platform, which could eventually mark the end of Cursor. Some also predict that Microsoft might simply acquire Cursor in the future.

John Lindquist, creator of egghead.io, said he recently chatted with Harald Kirschner, product manager on VS Code, about Cursor: the team is keenly aware of Cursor's capabilities, and there may be several things in the pipeline. "I think we'll all be pleasantly surprised," he said.

Other IDEs, such as JetBrains' IntelliJ IDEA and PyCharm, or even IDLE, face a similar crisis as AI-assisted coding gains traction.

Regardless, modern AI coding tools can plug in several open-source LLMs, something a traditional IDE like VS Code was not built for, which makes them handy for many AI developers. "You can select code and ask questions based on that piece of code. So you don't have to keep switching between the IDE and browser," explained a developer on X.
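Under the hood, the "select code and ask a question" flow boils down to packing the editor selection and the user's question into a single prompt for whichever LLM backend is configured. A minimal sketch, with illustrative names (`build_code_question` is not any editor's actual API):

```python
def build_code_question(selected_code: str, question: str,
                        language: str = "python") -> list[dict]:
    """Pack an editor selection and a user question into chat messages."""
    return [
        # System message sets the assistant's role inside the editor.
        {"role": "system",
         "content": "You are a coding assistant embedded in an editor."},
        # User message carries the selected snippet plus the question.
        {"role": "user",
         "content": f"Question about this {language} snippet:\n"
                    f"```{language}\n{selected_code}\n```\n{question}"},
    ]

messages = build_code_question("def add(a, b):\n    return a + b",
                               "What happens if a is a string?")
print(len(messages))  # 2: one system message, one user message
```

The resulting list would then be sent to the configured model; the editor never has to leave the buffer, which is exactly the convenience the developer above describes.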

What this really suggests is that most IDEs in the future will ship with generative AI integration. It will become the default for IDEs, as integrating no-code and low-code platforms once did.

Moreover, though it is becoming easier for non-developers to build apps, high-end software is still far beyond these auto-coding platforms. The tools let non-developers, or those with minimal coding experience, create apps without ever touching an IDE, but that convenience does not extend to every kind of software.

For experienced developers, AI tools can speed up prototyping by generating sample code from prompts to quickly test ideas. But in the long run, for complex, customised, large-scale projects, traditional IDEs are still better at debugging and handling critical features.

The Future of IDEs is AI-Assisted

Before talking about the end of IDEs, it is essential to understand what they stand for. IDEs give developers a standardised environment and a defined codebase for meeting specific requirements. That consistency cannot be fully guaranteed by an automated code-generation platform like Cursor.

Future IDEs could take this further by generating larger chunks of code, or even entire modules, from brief descriptions or prompts. The future of software development is likely to see AI-integrated IDEs become the norm, analysing code in real time and correcting it automatically.

Furthermore, with generative AI built in, these editors could suggest context-aware code that goes beyond syntax, and even optimise it. They could also adapt to a developer's personal coding style, supporting natural-language prompting alongside code, much like Cursor and Claude.

AI-integrated IDEs will likely combine traditional development tools with generative coding, making software development more efficient, intuitive, and accessible while still providing the depth needed for complex projects. This would enable developers who know coding to take generative tools to the next level. 

At the same time, generative AI code inside IDEs would be harder to manage, as many people remain hesitant to trust generative AI even for boilerplate code. It could also erode the coding skills of experienced developers.

Meet Melty, Open Source Alternative to Cursor  https://analyticsindiamag.com/ai-news-updates/meet-melty-open-source-alternative-to-cursor/ Tue, 03 Sep 2024 04:08:20 +0000 https://analyticsindiamag.com/?p=10134274

Started by Charlie Holtz and Jackson de Campos, Melty is backed by Y Combinator and is part of the S24 batch.

The post Meet Melty, Open Source Alternative to Cursor  appeared first on AIM.


With AI code editors like Cursor, Zed, and Codeium gathering all the attention, an open-source alternative has just launched. Meet Melty, an open-source AI code editor designed specifically for 10x engineers.


Started by Charlie Holtz and Jackson de Campos, Melty is backed by Y Combinator as part of the S24 batch. The founders tout it as the first editor that understands what developers are doing, from the terminal to GitHub, and collaborates with them to write production-ready code.


In a thread on X, Holtz said that during the first weeks of YC they were still working through a bunch of ideas when, at Aaron Epstein's suggestion, they started building AI developer tools. "We're big fans of the tools that are out there. But we still find ourselves copy-pasting from Claude, juggling ten chats for the same task, and committing buggy code that comes back to bite us later. Nothing is quite 'it' yet," said Holtz.

After 28 days of work, Holtz said, Melty was already writing about half of its own code.

“Charlie and Jackson build so fast you’d think they’re a team of 10 stacked engineers, rather than just 2 co-founders. Melty’s going to give every engineer their superpowers,” said Epstein. 

The team's goal with Melty is to help people understand their code: it watches every change like a pair programmer and learns to adapt to the developer it is working with.

Wake Me Up When Companies Start Hiring Clueless Modern ‘Developers’ https://analyticsindiamag.com/ai-origins-evolution/wake-me-up-when-companies-start-hiring-clueless-modern-developers/ Mon, 02 Sep 2024 09:00:42 +0000 https://analyticsindiamag.com/?p=10134230

People who know how to drive are not all F1 racers.

The post Wake Me Up When Companies Start Hiring Clueless Modern ‘Developers’ appeared first on AIM.


"Programming is no longer hard" and "everyone's a developer" are phrases heard everywhere on LinkedIn and X, where everyone is talking about Cursor, Claude, or GitHub Copilot. The problem is that most of the people making these claims are not developers themselves. They are merely 'modern developers'.


Santiago Valdarrama, founder of Tideily and an ML teacher who has been actively asking developers whether they use Cursor, started another discussion: tools like Cursor can only assist existing developers in writing better code. "Wake me up when companies start hiring these clueless modern 'developers'," he added.

He gave an analogy of calling yourself an F1 racer after playing a racing game on an iPad.

In all honesty, the barrier to entry for becoming a developer has undeniably dropped since Cursor, and ChatGPT before it, arrived. People have built software for personal use, and even whole apps, in mere hours. But that does not change the fact that, for now, this extends only to such apps and small-scale software.

“You Can’t Correct Code if You Don’t Know How to Code”

Given all the hype around the end of software engineering roles, developers and programmers are worried about the future of their jobs. Software engineers do have to upskill faster than anyone else, but the fear of being replaced can be pushed off for at least a few years.

Tools such as Cursor and Claude are only useful if a developer actually knows how the code works. The real game-changer is that developers who use AI will outpace those who don't. "The right tools can turn a good developer into a great one. It's not about replacing talent; it's about enhancing it," said Eswar Bhageerath, SWE at Microsoft.

AI only takes care of the easy part of a developer's job: writing code. The real skills, reasoning, problem solving, and fixing bugs in the code itself, cannot be replaced by any AI tool, at least not anytime soon. Cursor can speed up the process and write the code, but correcting it is something only developers can do.

Moreover, bugs in AI-generated code are hard for developers to trace without yet another AI bug-detection tool. Andrej Karpathy, who has been vocal in preferring Cursor over GitHub Copilot, observed something similar while working: "it's slightly too convenient to just have it do things and move on when it seems to work." He admitted this has led him to introduce bugs when coding too fast and tabbing through big chunks of code.

These bugs cannot be fixed by modern 'developers', who were once famously called 'prompt engineers'. To put it simply, someone still has to write the code behind no-code software.

Speaking of prompt engineers, the future will include many AI agents that can write code themselves. The software engineer's job will then be to manage a team of these coding agents, something beyond developers who entered the field by learning to build apps on Cursor or Claude. Team sizes may well shrink soon, since there would be little need for entry-level developers.

Upskilling is the Need of the Hour

That is why existing developers should focus on developing engineering skills, not just coding skills. Eric Gregori, adjunct professor at Southern New Hampshire University, said this is why he teaches his students to focus more on engineering than on programming. "AI is too powerful of a tool to ignore," he said, adding that the old limitations of coding platforms have now been removed completely.

"Hopefully, AI will allow software engineers to spend more time engineering and less time programming." It may be time to bring back the old way of learning to code, since modern developers are tempted to simply copy and paste from AI tools rather than do the real thinking.

The F1 driver analogy fits perfectly here. Most people can learn to drive, but very few will ever become race drivers. The same goes for coding tools. Still, if all that is needed is a prototype or an initial cut of the code, AI-driven developers can do a decent enough job.

That is why pioneers of the AI field such as Karpathy, Yann LeCun, Francois Chollet, and even Sam Altman say there will be 10 million coding jobs in the future, the kind that require skills in Python, C++, and other languages, even as everyone becomes a 'modern developer' in some way and most of the coding is done by AI agents.

Much of the coding in the future may well happen in English, but most of the work will be debugging and managing code generated by AI, which is impossible for someone who never learned to code from scratch.

The Operationalisation of GenAI https://analyticsindiamag.com/ai-highlights/the-operationalisation-of-genai/ Mon, 02 Sep 2024 08:28:25 +0000 https://analyticsindiamag.com/?p=10134232

Organisations are now pledging substantial investments toward GenAI, indicating a shift from conservative avoidance to careful consideration.

The post The Operationalisation of GenAI appeared first on AIM.


The operationalisation of GenAI is becoming significant across various industries. Vinoj Radhakrishnan and Smriti Sharma, Principal Consultants, Financial Services at Fractal, shared insights into this transformative journey, shedding light on how GenAI is being integrated into organisational frameworks, particularly in the banking sector, addressing scepticism and challenges along the way.

"GenAI has undergone an expedited evolution in the last 2 years. Organisations are moving towards reaping the benefits that GenAI can bring to their ecosystem, including in the banking sector. Initial scepticism surrounding confidentiality and privacy has diminished with more available information on these aspects," said Sharma.

She noted that many organisations are now pledging substantial investments toward GenAI, indicating a shift from conservative avoidance to careful consideration.

Radhakrishnan added, “Organisations are now more open to exploring various use cases within their internal structures, especially those in the internal operations space that are not customer facing. 

“This internal focus allows for exploring GenAI’s potential without the regulatory scrutiny and reputational risk that customer facing applications might invite. Key areas like conversational BI, knowledge management, and KYC ops are seeing substantial investment and interest.”

Challenges in Operationalisation

Operationalising GenAI involves scaling applications, which introduces complexities. “When we talk about scaling, it’s not about two or three POC use cases; it’s about numerous use cases to be set up at scale with all data pipelines in place,” Sharma explained. 

“Ensuring performance, accuracy, and reliability at scale remains a challenge. Organisations are still figuring out the best frameworks to implement these solutions effectively,” she said.

Radhakrishnan emphasised the importance of backend development, data ingestion processes, and user feedback mechanisms. 

"Operationalising GenAI at scale requires robust backend-to-frontend API links and contextualised responses. Moreover, adoption rates play a crucial role. If only a fraction of employees use the new system and provide feedback, the initiative can be deemed a failure," he said.

The Shift in Industry Perspective

The industry has seen a paradigm shift from questioning the need for GenAI to actively showing intent for agile implementations. “However,” Sharma pointed out, “only a small percentage of organisations, especially in banking, have a proper framework to measure the impact of GenAI. Defining KPIs and assessing the success of GenAI implementations remain critical yet challenging tasks.”

The landscape is evolving rapidly. From data storage to LLM updates, continuous improvements are necessary. Traditional models had a certain refresh frequency, but GenAI requires a more dynamic approach due to the ever-changing environment.

Addressing employee adoption, Radhakrishnan stated, “The fear that AI will take away jobs is largely behind us. Most organisations view GenAI as an enabler rather than a replacement. The design and engineering principles we adopt should focus on seamless integration into employees’ workflows.”

Sharma illustrated with an example: "We are encouraged to use tools like Microsoft Copilot, but the adoption depends on how seamlessly these tools integrate into our daily tasks. Employees who find them cumbersome are less likely to use them, regardless of their potential benefits."

Data Privacy and Security

Data privacy and security are paramount in GenAI implementations, especially in sensitive sectors like banking. 

Radhakrishnan explained, “Most GenAI use cases in banks are not customer-facing, minimising the risk of exposing confidential data. However, there are stringent guardrails and updated algorithms for use cases involving sensitive information to ensure data protection.”

Radhakrishnan explained that cloud providers like Microsoft and AWS offer robust security measures. For on-premises implementations, organisations need to establish specific rules to compartmentalise data access. 

“Proprietary data also requires special handling, often involving masking or encryption before it leaves the organisation’s environment,” Sharma added.

Best Practices for Performance Monitoring

Maintaining the performance of GenAI solutions involves continuous integration and continuous deployment (CI/CD). 

“LLMOps frameworks are being developed to automate these processes,” Radhakrishnan noted. “Ensuring consistent performance and accuracy, especially in handling unstructured data, is crucial. Defining a ‘golden dataset’ for accuracy measurement, though complex, is essential.”

Sharma added that the framework for monitoring and measuring GenAI performance is still developing. Accuracy involves addressing hallucinations and ensuring data quality. Proper data management is fundamental to achieving reliable outputs.
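To make the "golden dataset" idea concrete, here is a minimal sketch of how accuracy against such a dataset might be computed: expert-curated (input, expected answer) pairs, scored with simple exact matching. The stand-in model and data below are invented for illustration; they are not Fractal's framework.

```python
def exact_match_accuracy(model, golden: list[tuple[str, str]]) -> float:
    """Fraction of golden examples the model answers exactly right
    (case- and whitespace-insensitive exact match)."""
    hits = sum(1 for question, expected in golden
               if model(question).strip().lower() == expected.strip().lower())
    return hits / len(golden)

# Stand-in "model": a lookup table that gets one answer wrong.
canned = {"capital of france?": "Paris", "2 + 2?": "5"}
model = lambda q: canned.get(q, "")

golden = [("capital of france?", "paris"), ("2 + 2?", "4")]
print(exact_match_accuracy(model, golden))  # 0.5
```

In practice, LLMOps frameworks replace exact match with fuzzier checks (semantic similarity, LLM-as-judge) precisely because unstructured outputs rarely match a reference string verbatim, which is why defining the golden dataset is described above as complex yet essential.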

CI/CD pipelines play a critical role in the operationalisation of GenAI solutions. "The CI/CD framework ensures that as underlying algorithms and data evolve, the models and frameworks are continuously improved and deployed," Radhakrishnan explained. "This is vital for maintaining scalable and efficient applications."

CI/CD frameworks help monitor performance and address anomalies promptly. As GenAI applications scale, these frameworks become increasingly important for maintaining accuracy and cost-efficiency.

Measuring ROI is Not So Easy

Measuring the ROI of GenAI implementations is complex. “ROI in GenAI is not immediately apparent,” Sharma stated. “It’s a long-term investment, similar to moving data to the cloud. The benefits, such as significant time savings and reduction in fines due to accurate information dissemination, manifest over time.”

Radhakrishnan said, “Assigning a monetary value to saved person-hours or reduced fines can provide a tangible measure of ROI. However, the true value lies in the enhanced efficiency and accuracy that GenAI brings to organisational processes.”

“We know the small wins—saving half a day here, improving efficiency there—but quantifying these benefits across organisations is challenging. At present, only a small portion of banks have even started the journey on a roadmap for that,” added Sharma.

Sharma explained that investment in GenAI is booming, but there is a paradox. “If you go to any quarterly earnings call, everybody will say we are investing X number of dollars in GenAI. Very good. But on the ground, everything is a POC (proof of concept), and everything seems successful at POC stage. The real challenge comes after that, when a successful POC needs to be deployed at production level. There are very few organisations scaling from POC to production as of now. One of the key reasons for that is uneasiness on the returns from such an exercise – taking us back to the point on ROI.”

“Operational scaling is critical,” Radhakrishnan noted. “Normally, when you do a POC, you have a good sample, and you test the solution’s value. But when it comes to operational scaling, many aspects come into play. It must be faster, more accurate, and cost-effective.” 

Deploying and scaling the solution shouldn’t involve enormous investments. The solution must be resilient, with the right infrastructure. When organisations move from POC to scalable solutions, they often face trade-offs in terms of speed, cost, and continuous maintenance.

The Human Element

Human judgement and GenAI must work in harmony. “There must be synergy between the human and what GenAI suggests. For example, in an investment scenario, despite accurate responses from GenAI, the human in the loop might disagree based on their gut feeling or client knowledge,” said Radhakrishnan.

This additional angle is valuable and needs to be incorporated into the GenAI algorithm’s context. A clash between human judgement and algorithmic suggestions can lead to breakdowns, especially in banking, where a single mistake can result in hefty fines.

Data accuracy is obviously crucial, especially for banks that rely heavily on on-premises solutions to secure customer data. 

“Data accuracy is paramount, and most banks are still on-premises to secure customer data. This creates resistance to moving to the cloud. However, open-source LLMs can be fine-tuned for on-premises use, although initial investments are higher,” added Sharma.

The trade-off is between accuracy and contextualisation. Fine-tuning open-source models is often better than relying solely on larger, generic models.

Radhakrishnan and Sharma both noted that the future of GenAI in banking is moving towards a multi-LLM setup and small language models. “We are moving towards a multi-LLM setup where no one wants to depend on a single LLM for cost-effectiveness and accuracy,” said Sharma. 

Another trend she predicted is the development of small language models specific to domains like banking, which handle nuances and jargon better than generalised models.

Moreover, increased regulatory scrutiny is on the horizon. “There’s going to be a lot more regulatory scrutiny, if not outright regulation, on GenAI,” predicted Radhakrishnan.

“Additionally, organisations currently implementing GenAI will soon need to start showing returns. There’s no clear KPI to measure GenAI’s impact yet, but this will become crucial,” he added.

"All the cool stuff, especially in AI, will only remain cool if the data is sound. The more GenAI gathers steam, the more data tends to lose attention," said Sharma, adding that data is the foundation, and without fixing it, no benefits from GenAI can be realised. "Banks, with their fragmented data, need to consolidate and rein in this space to reap any benefits from GenAI," she concluded.

Software Engineers Have to Upskill Faster Than Anyone Else https://analyticsindiamag.com/ai-origins-evolution/software-engineers-have-to-upskill-faster-than-anyone-else/ Sat, 31 Aug 2024 04:30:00 +0000 https://analyticsindiamag.com/?p=10134157

But “upskill to what?” is what people ask.

The post Software Engineers Have to Upskill Faster Than Anyone Else appeared first on AIM.


The barrier to entry for becoming a developer is dropping every day. The most recent phenomenon everyone is still talking about is Anysphere's Cursor AI coding tool, which has basically made everyone a developer. Now more tools are emerging in the same category, such as Codeium, Magic, and Zed AI, all chasing the same formula.

This raises the obvious question: what happens to the software developers of today? For fresh computer science graduates competing with people who become 'software engineers' through AI tools, the turmoil is real.

The solution is easier said than done: upskill and focus on higher-order work such as building foundational AI. After all, even 8-year-olds are building apps with Cursor in 45 minutes.

A Profession Like Never Before

With no barrier to entry, no degree requirements, and no regulation of who can join the market, software engineering has become a profession unlike any other in history. There are plenty of opportunities for developers to upskill.

But “upskill to what?” is what people ask. 

The conversation keeps shifting so swiftly, from LLMs to SLMs to coding assistants to AI agents, that it can be challenging to determine which new skills are worth acquiring. This question reflects a broader uncertainty about how to prioritise learning in a field where the next big thing always seems just around the corner.

Saket Agrawal, a developer from IIT Guwahati, said it is not so much a technological shift as the advancement of automation tools that cut the time and effort needed for the same skills. "I don't see any big threat to existing software skills suddenly and software has been the field all the time which needs continuous skills updation based on requirement without leaving your old skills instantly," he said.

Another user on X put it in a funny way. “Software engineers need more updates than my grandma’s Windows 95. Ever tried explaining AI to her? It’s like defining gluten-free bread to a caveman!”

It is widely discussed that many software engineering jobs are dying. "Winter is coming for software engineering," said Debarghya Das of Menlo Ventures, adding that many current software engineering jobs will become a distant memory.

Scott Stouffer added another layer to this conversation, suggesting that some people are being 'upgraded' faster than others, implying a divide between those who adapt quickly to technological advancements and those who struggle to keep up.

LLMs to Upskill?

There is, however, an interesting caveat to this conversation around upskilling. Highly skilled developers believe that leveraging tools such as Cursor can take them to a level new developers will never reach. Yann LeCun has already advised developers entering the AI field not to work on LLMs.

Andrej Karpathy recently said that the future of coding is 'tab tab tab', referring to auto code completion tools such as Cursor. Further in the thread, he added that with LLM capabilities shifting so rapidly, developers must continually adapt to the current capabilities.

Some people are sceptical about getting into computer science at all anymore. "…if I was new to programming I would be too tempted to skip actual learning in favour of more LLM usage, resulting in many knowledge gaps," said a user replying to Karpathy. That dilemma feels real for many developers.

This is similar to what Francois Chollet, the creator of Keras, said a few months ago. “There will be more software engineers (the kind that write code, e.g. Python, C or JavaScript code) in five years than there are today.” He added that the estimated number of professional software engineers today is 26 million, which would jump to 30-35 million in five years.

This is because developers who can code without generators can never be replaced. The people who built programming languages and foundational tools remain far better versed in coding than those who merely use Cursor to build apps. There may be an abundance of app builders in the future, but their scope will be limited to just that.

Meanwhile, highly skilled 10x developers will focus on leveraging such tools, or finding flaws in them, to create even better software: creating, so to speak, the next Cursor or ChatGPT.

There is an abundance of work to be done. Enhancing hardware or building the infrastructure to run future workloads, for instance, is something only experts in the field can take on. Companies such as Pipeshift AI, Groq, and Jarvis Labs, among many others, are working on problems well beyond code generation.

The truth is that such AI tools can never replace human intelligence or jobs, only augment them. "Generating working code is only a part of the responsibility," said VJ's Insights in a post on X, though "Yes, if you are someone who *just* writes code, you need to start thinking differently."

There are predictions that software engineering will soon be about managing a team of AI agent engineers and telling them how to code. This would make every engineer akin to an engineering manager, delegating basic tasks to coding agents while focusing on higher-level work: understanding requirements, architecting systems, and deciding what to build.

It is high time software engineers start upskilling, and right now the best way forward looks to be with generative AI tools, not without them. Who knows, you might even become a solo entrepreneur building a billion-dollar company alone.

This Bengaluru-based AI Startup Knows How to Make Your Videos Viral https://analyticsindiamag.com/ai-breakthroughs/this-bengaluru-based-ai-startup-knows-how-to-make-your-videos-viral/ Fri, 30 Aug 2024 10:30:00 +0000 https://analyticsindiamag.com/?p=10134152

The team has built a metric called "virality score", which is derived from a dataset of 100,000 social media videos.

The post This Bengaluru-based AI Startup Knows How to Make Your Videos Viral appeared first on AIM.


Editing videos can be tedious and time-consuming, taking editors hours or even days to get footage ready for release. And hiring a team of video editors is not something every content creator or small company wants to invest in. This is where vidyo.ai comes into the picture.

Launched two years ago by Vedant Maheshwari and Kushagra Pandya, the platform has experienced remarkable growth, scaling from zero to approximately 300,000 monthly active users, and achieved a revenue milestone of $2 million. Notably, a significant portion of vidyo.ai’s revenue, about 85%, comes from the US market.

Most recently, the company was part of the Google for Startups Accelerator programme. The company hasn’t raised any funding since its seed round of $1.1 million in 2022.

The team has made significant strides in addressing one of the industry’s most persistent challenges: video editing.

Maheshwari's journey into video content and social media began over eight years ago, during which he collaborated with creators and influencers to refine their content strategies across platforms like YouTube, TikTok, and Instagram.

It was during this period that Maheshwari identified a major pain point: the time-consuming and complex nature of video editing.

This insight led to the creation of vidyo.ai, a platform designed to streamline the video editing process. The vision was to leverage AI to handle 80-90% of the editing, leaving users with the flexibility to make final adjustments before sharing their content on social media.

The platform caters to a diverse user base, including content creators, podcasters, and businesses seeking to generate short-form content with minimal effort. “We essentially enable them to let the AI edit their videos, and then they can publish directly to all social media platforms using our software,” Maheshwari added.

How vidyo.ai Works

vidyo.ai combines OpenAI’s models with proprietary algorithms to transform raw video footage into polished content. Users upload their videos to the platform, which then processes the content through a series of OpenAI prompts and proprietary data. This includes analysing what kind of videos perform well online, identifying potential hooks, and determining effective calls-to-action (CTAs).

“We run the video through multiple pipelines, identifying key hooks and combining them to create a final video. Our algorithms then score these videos based on their potential virality,” Maheshwari elaborated on the process. This “virality score” is derived from a dataset of 100,000 social media videos, allowing the platform to suggest the most promising clips for engagement.
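The scoring flow Maheshwari describes can be sketched roughly as follows. This is a hypothetical illustration only: the signal names, weights, and thresholds are invented for the example and are not vidyo.ai's actual model, which is trained on its proprietary dataset.

```python
# Hypothetical sketch of a clip "virality scoring" step: combine simple
# content signals (hook, CTA, duration) into a score, then rank clips.
# All feature names and weights here are invented for illustration.

def score_clip(clip: dict) -> float:
    """Combine toy content signals into a 0-100 virality score."""
    score = 0.0
    score += 40.0 if clip.get("has_hook") else 0.0   # strong opening hook
    score += 25.0 if clip.get("has_cta") else 0.0    # call-to-action present
    duration = clip.get("duration_sec", 60)
    # Shorter clips score higher, tapering off at 90 seconds.
    score += max(0.0, 35.0 * (1 - duration / 90))
    return round(min(score, 100.0), 1)

def rank_clips(clips: list[dict]) -> list[dict]:
    """Return clips sorted by descending virality score."""
    return sorted(clips, key=score_clip, reverse=True)

clips = [
    {"name": "intro", "has_hook": False, "has_cta": False, "duration_sec": 80},
    {"name": "key moment", "has_hook": True, "has_cta": True, "duration_sec": 30},
]
best = rank_clips(clips)[0]
```

In a real pipeline the scoring function would be a model trained on labelled engagement data rather than hand-picked weights, but the rank-and-suggest step works the same way.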

When compared to other video editing tools like GoPro’s QuikApp and Magisto, vidyo.ai distinguishes itself with its frame-by-frame analysis of both video and audio content. Unlike these platforms, which often edit videos based on mood or music, vidyo.ai dives deeper into the content to optimise for social media performance.

“We do a comprehensive analysis of the content, ensuring that every aspect is optimised for virality,” Maheshwari said. This level of detail, combined with the ability to publish directly across multiple platforms, provides users with a unique advantage.

Challenges and Opportunities

Despite its success, vidyo.ai faces challenges common to Indian startups, particularly in securing funding. Maheshwari noted that while Indian VCs are cautious about investing in AI, preferring application-layer solutions over foundational work, US VCs often have a more aggressive approach.

“We’ve gone from zero to $2Mn in ARR in less than two years, which is remarkable. However, raising subsequent rounds of funding in India remains challenging due to a lack of clarity on how AI investments will pay off,” Maheshwari explained, adding that US VCs would be ready to invest on this metric alone.

He also reflected on the possibility of starting the company in the US instead of India, citing potential benefits in terms of ease of operations and investor interest. “It often feels like running a company in India comes with more challenges compared to the US,” he admitted.

When it comes to finding a moat, vidyo.ai still faces the risk that Instagram, LinkedIn or TikTok could release a similar feature in their base apps. “There is definitely a little bit of platform risk,” Maheshwari acknowledged, but he explained that customers are unlikely to shift, since they don’t want to restrict themselves to the workflow of a creation platform.

Comparing it to building something like Canva, Maheshwari said that vidyo.ai plans to expand its offerings, including the potential integration of generative AI features like deepfakes and avatars. Currently, the team is also working on an AI-based social media calendar, which would suggest the content likely to work best for users in the coming week.

Maheshwari envisions building a comprehensive suite of tools for social media creation and publishing. “Our goal is to develop a full-stack solution that encompasses every aspect of social media content creation,” he said.

The post This Bengaluru-based AI Startup Knows How to Make Your Videos Viral appeared first on AIM.

]]>
Indian Engineers Say Most College Professor Don’t Know Programming https://analyticsindiamag.com/ai-origins-evolution/indian-engineers-say-most-college-professor-dont-know-programming/ Fri, 30 Aug 2024 06:30:00 +0000 https://analyticsindiamag.com/?p=10134131

“What's the point of paying the teachers then?” ask engineers.

The post Indian Engineers Say Most College Professor Don’t Know Programming appeared first on AIM.

]]>

According to a recent study, only 2.5% of engineers in India possess any AI skills, and only 5.5% have basic programming knowledge. Though Indian IT companies are looking to change that by actively upskilling their employees, the report did not sit well with Indian engineers.

In a recent Reddit discussion, a significant number of Indian engineers expressed concerns over the programming skills of college professors, with many claiming that most professors in Indian colleges lack the necessary expertise to teach programming effectively. 

“Most people learn what college teaches them and nothing more. Most college professors themselves don’t know programming,” said a user in the discussion, a sentiment that resonated strongly among college students.

This discussion highlighted a growing discontent among students and professionals who feel that the quality of education in computer science and related fields is being compromised due to the inadequacies of teaching staff.

One of the key points raised in the discussion was the tendency of professors to rely heavily on external resources like YouTube channels and online courses rather than teaching from their own knowledge and experience. “My college professors used to watch GateSmashers videos for teaching Computer Networks to us. They even wrote the same examples from the video, and when asked questions, they told us to watch the videos,” one user recalled.

The situation is particularly dire in Tier-3 colleges, where the quality of teaching is perceived to be the lowest. A user said that their professor even copy-pasted the same screenshots from the video in his PowerPoint presentation. “I am not sure for what reason professors take salaries if they can’t even make their own original PPTs,” he added. 

Who to Blame?

Earlier, AIM wrote that there is a dire need for Indian researchers and professors to move beyond PhDs and the pursuit of publishing papers. “Though Indian universities produce some very good engineers, they are very successful in the West,” Amit Sheth, the chair and founding director of the AIISC, told AIM earlier.

Similarly, Adarsh Shirawalmath, the founder of Tensoic, told AIM that though his college has been helpful, it is still years behind. “We are lagging a bit in terms of where the SOTA is and what we are doing because some of the professors still might be researching on CNN whereas the SOTA is really ahead,” he said. 

The issue is critical. These days, colleges make it mandatory for students to complete online courses on subjects already included in the syllabus. “What’s the point of paying the teachers then?” asked a user, aptly capturing the core issue.

The root of the problem seems to be the significant gap between the salaries offered to college professors and what they could earn in the private sector. “Let’s talk about 95% of teachers working in the rest of the colleges in India. Why would someone work as a teacher getting paid at most 10 to 12 LPA if they were any good?” This is a valid question that engineers should be asking.

“The market will pay 30 LPA to anyone decent.” This discrepancy in pay makes it difficult for colleges to attract and retain talented professionals, leading to a situation where those who do become professors are often not the best in the field.

This sentiment was echoed by others, who argued that the incentive structure is fundamentally flawed: nobody half good wants to be a computer science teacher when they can get paid at least double that in an IT job requiring equivalent experience.

Still Hope

This is a critical issue, as it suggests that the teaching profession in India is not seen as a viable career option for skilled professionals, which in turn affects the quality of education that students receive. On the other hand, becoming a university professor at IITs and NITs is still something engineers strive for, which is why the faculty at these premier institutes are good enough to train world-class talent.

That is also why India’s obsession with STEM degrees is creating a generation of jobless graduates, as universities outside these premier institutes struggle to impart the skills the market demands. Moreover, not all students are capable of self-learning, which further complicates the issue.

Despite the general dissatisfaction, some users suggested potential solutions. One was to hire more professors with industry experience, with experienced professionals taking up teaching after retiring from the corporate sector. “We need more professors with industry experience. I would readily learn from some professor with good industry experience of 5 years rather than a PhD professor,” said an engineer.

The over-reliance on external resources, the disparity in pay between professors and private sector employees, and the lack of practical industry experience among teaching staff are all contributing factors to this issue. 

The ideal way forward is to raise the barrier to entry for becoming a computer science professor, which would push professors to upskill themselves. At the same time, the compensation offered to them needs to be substantially increased to compete with the market’s high-paying jobs.

The post Indian Engineers Say Most College Professor Don’t Know Programming appeared first on AIM.

]]>
Cursor Rival Codeium Raises $150 Mn, Becomes Unicorn with $1.25 Bn Valuation https://analyticsindiamag.com/ai-news-updates/cursor-rival-codeium-raises-150-mn-becomes-unicorn-with-1-25-bn-valuation/ Fri, 30 Aug 2024 04:29:16 +0000 https://analyticsindiamag.com/?p=10134113

Magic, another rival, closed a round of $320 million, including participation from Eric Schmidt, Atlassian, Jane Street, and Sequoia.

The post Cursor Rival Codeium Raises $150 Mn, Becomes Unicorn with $1.25 Bn Valuation appeared first on AIM.

]]>

While the debate around GitHub Copilot and Cursor AI intensifies, Codeium, an AI-powered code acceleration platform, announced that it has raised $150 million in a Series C funding round, propelling its valuation to $1.25 billion and making it a unicorn.

The round was led by General Catalyst, with continued participation from existing investors Kleiner Perkins and Greenoaks. This funding milestone marks Codeium’s ascent to Unicorn status in less than two years since its inception.

https://twitter.com/codeiumdev/status/1829191944943350214

“The future of coding isn’t just about writing lines of code faster—it’s about enabling developers to think bigger, push boundaries, and achieve the extraordinary,” said Varun Mohan, CEO of Codeium. “This fresh funding means we’re more equipped to help developers turn those ‘what ifs’ into ‘what’s next,’ having the freedom to innovate without limits and turn challenges into opportunities for growth.”

Codeium’s platform leverages proprietary code-based LLMs to streamline software development and enhance developer productivity. With the newly secured funds, the company plans to accelerate the development of new features, expand its product offerings, and increase its workforce. The focus will also be on strengthening partnerships to maximise AI strategies and broaden its market impact.

Quentin Clark, Managing Director of General Catalyst, emphasised Codeium’s rapid adoption in real-world production environments, stating, “Codeium isn’t just an idea—it’s a fully scaling business with widespread enterprise adoption. Their genAI tools for software development are proving their worth in real production environments, where reliability is key.”

The funding announcement follows a period of significant growth for Codeium. The company has expanded its team to 80 professionals and grown its user base to over 700,000 active developers. In 2024, the enterprise product achieved eight figures in annual recurring revenue, with ARR increasing by over 500%. Additionally, Codeium now processes more than 100 billion tokens daily and has been integrated into production workflows at major companies such as Zillow, Dell, and Anduril.

Codeium has proactively removed “non-permissively” licensed code, such as copyrighted code, from the datasets used to train its AI models. This step addresses a common issue with some code-generating tools that, when trained on restrictively licensed or copyrighted code, can inadvertently reproduce that code, leading to potential legal risks for developers. According to Mohan, Codeium avoids this problem through its careful preparation and filtering of training data.

Recent technological advancements from Codeium include the launch of Cortex, an AI-powered reasoning engine for managing complex coding tasks, and Forge, an AI-assisted tool that enhances code review efficiency and culture. These innovations are part of Codeium’s mission to transform software development by making coding faster, smarter, and more intuitive.

In similar news, another AI coding assistant platform, Magic, closed a round of $320 million, including participation from Eric Schmidt, Atlassian, Jane Street, and Sequoia.

The post Cursor Rival Codeium Raises $150 Mn, Becomes Unicorn with $1.25 Bn Valuation appeared first on AIM.

]]>
CoRover, IRCTC, NPCI, Launch Conversational Voice Payments for UPI with BharatGPT https://analyticsindiamag.com/ai-news-updates/corover-irctc-npci-launch-conversational-voice-payments-for-upi-with-bharatgpt/ Fri, 30 Aug 2024 04:12:25 +0000 https://analyticsindiamag.com/?p=10134110

When a mobile number is provided, the system automatically retrieves the corresponding UPI ID and initiates the payment request via the user’s default UPI app.

The post CoRover, IRCTC, NPCI, Launch Conversational Voice Payments for UPI with BharatGPT appeared first on AIM.

]]>

NPCI, IRCTC, and CoRover have introduced “Conversational Voice Payments” for UPI at the Global Fintech Fest 2024, marking a significant leap in the ease of digital transactions. This new feature, integrated with payment gateways and powered by BharatGPT, allows customers to complete transactions using their voice or by typing their UPI ID or mobile number. 

When a mobile number is provided, the system automatically retrieves the corresponding UPI ID and initiates the payment request via the user’s default UPI app.

The feature offers flexibility, enabling users to update their mobile number or UPI ID within the transaction time limit. This groundbreaking technology, the first of its kind for UPI payments, aims to break down language barriers and make transactions more human-centric and accessible. CoRover’s voice-enabled BharatGPT enhances the multilingual system, supporting Hindi, Gujarati, and other languages, ensuring inclusivity in the payment process.

Vishal Anand Kanvaty, CTO of NPCI, expressed his enthusiasm, stating, “I am really excited for this launch because it addresses India and Bharat. Also, I think it helps all the citizens of this country to make the payment very seamless. I am really excited, we launched UPI123 last year, but the real use cases have started now, and I think I am really excited about this one because we will see this scale happening on this.”

Sanjay Kumar Jain, CMD of IRCTC, added, “As I speak, your payment will not get deducted. You have to speak; you have to raise your voice to get your payment deducted and get the tickets. That’s what we are offering.”

Ankush Sabharwal, Founder and CEO of CoRover.ai, highlighted the natural evolution of technology, saying, “What’s the natural way to interact? The natural way is that we see and talk, so why should technology not talk? In this pursuit, with the help of NPCI and IRCTC, you will now be able to talk to Disha and ask her to get your ticket booked, including payment. It is just a start; you will see everything, even hardware products talking.”

The integration of Conversational Voice Payments into AskDISHA, the AI virtual assistant for IRCTC and Indian Railways, allows users to book tickets and make payments using just their voice. This launch represents a pivotal moment in the evolution of AI, digital payments, and eCommerce, ushering in a more intuitive and convenient booking experience.

The post CoRover, IRCTC, NPCI, Launch Conversational Voice Payments for UPI with BharatGPT appeared first on AIM.

]]>
This AI Startup is an Iron Man Suit for the IT Guys https://analyticsindiamag.com/ai-origins-evolution/this-ai-startup-is-an-iron-man-suit-for-the-it-guys/ Wed, 28 Aug 2024 07:39:15 +0000 https://analyticsindiamag.com/?p=10133908

Vayu integrates deeply with the Salesforce developer platform, giving AI agents access to the same tools as the human developers.

The post This AI Startup is an Iron Man Suit for the IT Guys appeared first on AIM.

]]>

With so many companies developing automation tools and AI agents, it has become increasingly challenging to identify the differentiator or the moat of your company. 

Three industry veterans—Sidu Ponnappa, Aakash Dharmadhikari, and Steve Sule—found themselves at a crossroads. Originally setting out to build an IT services company, they saw their journey take an unexpected turn with the arrival of ChatGPT in November 2022, which would redefine their approach to business and ultimately lead to the founding of realfast.ai.

Backed by PeakXV, RTP Global and DeVC, the leading project of realfast.ai is its Vayu platform, which emerged from the team’s real-world experience. The team believes that the only way to develop a meaningful AI assistant is on top of real work—commercial work that customers are willing to pay for because it has value. 

“Real work is messy, with many variables, and it’s a completely different dynamic from what you might expect in a lab environment,” Dharmadhikari told AIM.

The Vayu platform currently integrates deeply with the Salesforce developer platform, giving AI agents access to the same tools that human developers use but through a different interface. This approach allows AI to assist in tasks that traditionally require human intuition and experience.

One of the most innovative aspects of realfast’s approach lies in the way they train the AI agents. Unlike traditional AI models that rely on vast amounts of static data, realfast.ai’s models learn from observing human developers at work. 

“We realised that building an applied AI product is like teaching a child,” Ponnappa explained. “You need to break down your own thinking and create exercises that teach the AI the principles you’ve learned over years of experience.”

This method has led to the development of AI agents capable of handling over 700 specific tasks related to unit testing, all based on how human developers approach these tasks. “It’s a combination of natural language processing, fine-tuning, and generating large quantities of data,” Sule added.

The Moat

realfast.ai’s first major success came when they started working with design partners to implement their AI-driven processes. “We’ve been in production with a hybrid team—human developers assisted by AI agents—on tasks like unit testing,” Dharmadhikari shared. “We’ve seen up to a 3x speed-up in these processes.”

Sidu Ponnappa

As realfast.ai continues to grow, the founders remain focused on their mission to revolutionise the IT services industry through AI. “The largest challenge we face is that there’s no existing data for this type of work,” Ponnappa noted. “The process and craft by which a finished product is created aren’t tracked or recorded anywhere because there’s been no reason to do so until now.”

The founders illustrated this with an example: the company was able to streamline 3,000 lines of dirty, messy legacy code for a client, a tedious task that wouldn’t have been feasible with human engineers alone.

Currently, the platform uses models like ChatGPT and Anthropic’s Claude, but does not rely on open-source models, as the team believes their reasoning capabilities are nowhere close to those of the proprietary ones. Moreover, unlike others building open-source AI agents, the realfast.ai platform is SOC 2 and ISO compliant.

“We think of this platform as an Iron Man suit for the IT guys,” Ponnappa joked, noting that this is how they often pitch it to the investors. 

The Pivot They Needed

In the early days, Dharmadhikari and Ponnappa, both with extensive experience in services, especially from their time at Infosys and ThoughtWorks, respectively, decided to start a boutique IT company focusing on complex systems integration, “a space we believed had significant untapped potential”.

Sule had been an advisor for the team long before he came on board as a co-founder. 

This plan, however, took a drastic turn with the release of ChatGPT. The founders, already bullish on IT services, quickly recognised the transformative potential of AI. “Our strategy was always tool-oriented because value to customers comes from reliable delivery,” Ponnappa explained. 

“When ChatGPT dropped, it was unlike any tool we had ever seen. It wasn’t just a 10% improvement here or there; it was driving unprecedented changes across the entire lifecycle, from sales to coding.”

As they continued to integrate ChatGPT into their processes after the pivot, the founders realised AI’s potential to revolutionise the IT services sector.

“We started to see that the way IT services will look in five years post-AI adoption will be completely different from today,” Ponnappa noted. “It’s a platform problem. You need a platform where AI tools, assistants, and humans can work together seamlessly.”

This insight became the foundation of realfast.ai, a company dedicated to building an AI transformation platform that could support the unique needs of IT services companies. The goal was not just to enhance existing processes but to fundamentally change how services are delivered.

The post This AI Startup is an Iron Man Suit for the IT Guys appeared first on AIM.

]]>
Unlocking the Potential of RAG Models in Enterprise AI: Insights from NVIDIA’s Workshop https://analyticsindiamag.com/ai-highlights/unlocking-the-potential-of-rag-models-in-enterprise-ai-insights-from-nvidias-workshop/ Wed, 28 Aug 2024 06:03:35 +0000 https://analyticsindiamag.com/?p=10133902

Retrieval-augmented generation (RAG) models combine the power of large language models (LLMs) with external knowledge retrieval to produce more accurate and contextually relevant responses.  NVIDIA, in collaboration with AIM, recently hosted an in-depth online workshop led by Sagar Desai, a senior solutions architect specialising in LLMs and production deployment using NVIDIA’s stack. This workshop, part […]

The post Unlocking the Potential of RAG Models in Enterprise AI: Insights from NVIDIA’s Workshop appeared first on AIM.

]]>

Retrieval-augmented generation (RAG) models combine the power of large language models (LLMs) with external knowledge retrieval to produce more accurate and contextually relevant responses. 

NVIDIA, in collaboration with AIM, recently hosted an in-depth online workshop led by Sagar Desai, a senior solutions architect specialising in LLMs and production deployment using NVIDIA’s stack. The workshop, part of the NVIDIA AI Forum Community, explored advanced techniques and best practices for optimising RAG models to achieve enterprise-grade accuracy.

You can now view the workshop recording here

The Growing Relevance of Generative AI in Enterprises

The workshop started with Desai explaining the key points about generative AI, the technology that allows machines to generate content, and how it has been making significant inroads across various industries.

From language processing to computer vision and beyond, generative AI is becoming a default skill for developers and is expected to become even more pervasive in the coming years. 

While LLMs like GPT-4 are powerful, they face challenges in handling domain-specific or real-time information. RAG offers a solution by integrating external data sources into AI outputs, making responses more accurate and contextually relevant. 

This capability is crucial for enterprises that need to incorporate proprietary or up-to-date information into their AI-driven processes, ensuring the content generated is both precise and relevant to their specific needs.

Key Techniques for Optimising RAG Models

During the workshop, Desai delved into several advanced techniques for optimising RAG models to achieve enterprise-grade accuracy. These techniques include:

  1. Query Writing: Crafting effective queries is essential for guiding the retrieval system in fetching the most relevant information. Desai emphasised the importance of understanding the context and intent behind a query to ensure that the retrieved data aligns with the user’s needs.
  2. Embedding Fine-Tuning: Embeddings, which are vector representations of text, play a crucial role in how LLMs and retrieval systems understand and process information. Fine-tuning these embeddings on domain-specific data can significantly improve the relevance and accuracy of the retrieved information.
  3. Re-ranking Strategies: After retrieving a set of potential responses, re-ranking techniques can be applied to prioritise the most relevant and accurate answers. Desai discussed various re-ranking methods that can be used to enhance the quality of the final output.
  4. Chunking Strategy: The workshop highlighted the importance of an effective chunking strategy when dealing with large documents or datasets. Splitting data into appropriately sized chunks ensures that the model can process and retrieve information more efficiently.
  5. Hybrid Embedding Models: Desai introduced the concept of hybrid embedding models, which combine semantic embeddings with keyword-based retrieval techniques (such as BM25) to improve the accuracy of the retrieval process. This approach allows for a more nuanced understanding of the query and better alignment with the user’s intent.
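The hybrid approach in point 5 can be sketched as a weighted blend of a keyword score and a semantic score per document. In this minimal sketch both scorers are deliberately simple stand-ins (real systems would use BM25 and a dense embedding model); the point illustrated is only the blending and re-ranking step, and the `alpha` weight is an assumed tuning parameter.

```python
# Minimal sketch of hybrid retrieval: blend a keyword-match score
# (toy stand-in for BM25) with a "semantic" score (toy stand-in for
# embedding cosine similarity), then rank documents by the blend.

def keyword_score(query: str, doc: str) -> float:
    """Fraction of query terms that appear in the document."""
    q_terms = set(query.lower().split())
    d_terms = set(doc.lower().split())
    return len(q_terms & d_terms) / len(q_terms)

def semantic_score(query: str, doc: str) -> float:
    """Jaccard overlap as a crude proxy for embedding similarity."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q | d)

def hybrid_rank(query: str, docs: list[str], alpha: float = 0.5) -> list[str]:
    """Blend the two scores (alpha weights the keyword side) and sort."""
    scored = [(alpha * keyword_score(query, d)
               + (1 - alpha) * semantic_score(query, d), d) for d in docs]
    return [d for _, d in sorted(scored, reverse=True)]

docs = [
    "retrieval augmented generation combines retrieval with llms",
    "kubernetes scales container workloads in production",
]
ranked = hybrid_rank("retrieval with llms", docs)
```

In production the keyword side would come from an inverted index (BM25) and the semantic side from a fine-tuned embedding model, with `alpha` tuned on evaluation queries.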


Scaling RAG Models for Enterprise Applications

One of the critical challenges in deploying RAG models at the enterprise level is ensuring that they can scale effectively while maintaining high accuracy and reliability. Desai discussed several strategies for scaling RAG models, including:

  • Utilising Advanced Hardware: NVIDIA’s stack, including the DGX systems and cloud solutions, provides the computational power needed to handle the massive demands of training and deploying RAG models. These systems are optimised for parallel computation, making them ideal for scaling LLMs and RAG models.
  • Leveraging Kubernetes for Inference Scaling: Kubernetes, a popular container orchestration platform, was highlighted as a powerful tool for managing and scaling AI inference workloads. By deploying RAG models within a Kubernetes environment, enterprises can achieve greater flexibility and efficiency in handling large-scale AI deployments.
  • Implementing Effective Indexing Mechanisms: Efficient indexing is crucial for the performance of RAG models, particularly when dealing with large datasets. Desai emphasised the importance of using vector databases, such as FAISS and Milvus, to streamline the retrieval process and improve the speed and accuracy of the system.
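The role of a vector database in that last point can be illustrated with a toy in-memory index: store each chunk's embedding alongside its payload, and answer queries by nearest-neighbour similarity. Libraries like FAISS and Milvus provide the same operation with approximate search that scales to billions of vectors; the hand-written vectors below are stand-ins for real embeddings.

```python
import math

# Toy in-memory vector index illustrating what FAISS/Milvus provide at
# scale: store chunk embeddings, return the nearest payloads by cosine
# similarity. Vectors here are hand-written stand-ins for embeddings.

class VectorIndex:
    def __init__(self):
        self._entries = []  # list of (vector, payload) pairs

    def add(self, vector: list[float], payload: str) -> None:
        self._entries.append((vector, payload))

    @staticmethod
    def _cosine(a: list[float], b: list[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        norm = (math.sqrt(sum(x * x for x in a))
                * math.sqrt(sum(y * y for y in b)))
        return dot / norm if norm else 0.0

    def search(self, query: list[float], k: int = 1) -> list[str]:
        """Return payloads of the k vectors most similar to the query."""
        ranked = sorted(self._entries,
                        key=lambda e: self._cosine(query, e[0]),
                        reverse=True)
        return [payload for _, payload in ranked[:k]]

index = VectorIndex()
index.add([1.0, 0.0, 0.1], "chunk about pricing")
index.add([0.0, 1.0, 0.2], "chunk about security")
top = index.search([0.9, 0.1, 0.0], k=1)
```

The brute-force scan here is O(n) per query; the indexing mechanisms Desai mentioned exist precisely to replace that scan with sub-linear approximate lookups.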

Best Practices for Deploying RAG Models in Production

Deploying RAG models in production environments requires careful planning and adherence to best practices to ensure that the models perform reliably and securely. 

Key takeaways from the workshop included:

  • Ensuring Data Security and Privacy: Enterprises must implement robust security measures to protect sensitive data used in training and deploying RAG models. This includes using secure APIs, encryption, and access controls to safeguard proprietary information.
  • Continuous Monitoring and Evaluation: Desai underscored the importance of ongoing monitoring and evaluation of RAG models in production. This involves regularly assessing the model’s performance, accuracy, and relevance to ensure that it continues to meet enterprise requirements.
  • Fine-Tuning and Domain Adaptation: To maintain the accuracy and relevance of RAG models, enterprises should continuously fine-tune and adapt the models to their specific domains. This includes retraining the models on new data and adjusting the retrieval system to reflect changes in the knowledge base.

The Future of RAG Models in Enterprise AI

The NVIDIA workshop provided valuable insights into the advanced techniques and best practices for optimising RAG models to achieve enterprise-grade accuracy. As enterprises continue to adopt AI at scale, RAG models offer a powerful solution for overcoming the limitations of traditional LLMs by integrating real-time, domain-specific, and proprietary information into AI-generated content. 

By leveraging the techniques and strategies discussed in the workshop, enterprises can unlock the full potential of RAG models, enabling them to deliver more accurate, reliable, and contextually relevant AI solutions.

For AI professionals and enterprises looking to stay ahead of the curve, attending workshops and training sessions like NVIDIA’s is essential. These events provide a platform for learning, networking, and exploring the latest advancements in AI technology, ensuring that participants are well-equipped to tackle the challenges and opportunities in this rapidly evolving field.


Master the latest AI advancements with comprehensive, full-day workshops and earn technical certifications from NVIDIA experts. To learn more, attend the NVIDIA AI Summit India, scheduled for October 23–25, 2024, at Jio World Convention Centre — Mumbai. Register here.

The post Unlocking the Potential of RAG Models in Enterprise AI: Insights from NVIDIA’s Workshop appeared first on AIM.

]]>
Cracking YC: A Guide for AI Startups https://analyticsindiamag.com/ai-origins-evolution/cracking-yc-a-guide-for-ai-startups/ Tue, 27 Aug 2024 11:30:00 +0000 https://analyticsindiamag.com/?p=10133856

Since most of the current startups are tech and AI based, it is important for the founder to be a 'technical founder'.

The post Cracking YC: A Guide for AI Startups appeared first on AIM.

]]>

Today is the deadline for submitting applications to Y Combinator Fall Batch 2024. For the YC W24 program, 260 companies were selected from over 27,000 applications. “With an acceptance rate under 1%, this was one of the most selective cohorts in YC history,” said Garry Tan, President & CEO of Y Combinator.

Despite this, the startup accelerator is still processing hundreds of applications every hour, and this might be the best time to apply. But there are definitely a few things to keep in mind before you get selected and earn the chance to take a photo with the famous YC signboard.

To start with, YC has its own list of Dos and Don’ts for getting started. This includes getting to the point, staying away from buzzwords, and showing that your team won’t break apart within a year.

As of early 2024, YC has funded over 4,500 startups with a combined valuation of more than $600 billion, making it an attractive lifeline for many emerging startup founders.

Interestingly, this year, YC has also taken a lot of interest in Indian AI startups, as its focus shifts towards founders and application-based generative AI. What can startups learn from all the announcements so far? 

Focusing on a Technical Team with a Technical Founder

While YC stands to gain by investing in startups that could potentially be worth millions of dollars, its programs are equally enticing for founders: there is no requirement for a formal degree or existing revenue, and YC simply bets on a founder and their idea.

One of the most important aspects that investors look at is the founding team of a startup. And since most current startups are tech and AI based, it is important for the founder to be technical. So much so that they should be able to build the product themselves; if not, “you are not a technical founder,” said Isabella Reed, founder of CamelAI, which is part of YC W24.

Though some might disagree, since tools like Cursor AI are changing the definition of what an engineer is, it remains true that a startup building a tool is a tech startup. For Indian founders, this is also a wake-up call to start adopting tools like Cursor, since they are now the default for the future of tech products. 

One interesting strategy founders have discovered is pitching their ideas to VCs to surface the flaws and what can be improved; some are still refining their pitch till 3 AM. One founder who pitched to VCs ended up finding another co-founder for his startup, creating a go-to-market strategy, and sending his product to beta users for feedback. 

Those who can’t get to VCs are asking people on X to help them with their YC applications and pitches, which in many cases turns into a roast show.

Meanwhile, one of the most important things for founders to remember is not to become a generative AI influencer on the way to YC. Though it can be fun to watch and preach, it does not help the pitch or the selection process. 

The most important things to keep in mind are showing traction, picking a specific metric to back, and demonstrating passion for the startup you are building, while also presenting the honest flaws of the product.

Don’t Get Blacklisted

It is not easy to get into YC. Amartya Jha, the co-founder and CEO of CodeAnt AI, which recently got into YC, narrated in a post how his team got rejected the first time because they couldn’t explain their product well to the investors. Despite this and the fear of getting blacklisted, they made a 45-minute video explaining their product and sent it to YC again. The following week, they managed to get another interview with YC and were finally selected!

Though an inspirational story, the lesson is to explain your product well in the first go.

Another Indian startup, Asterisk by Mufeed VH, the creator of Devika, also got into YC. The team started out as Stition.ai, building cybersecurity products, and has now become the first startup from Kerala to get into YC.

Given the small acceptance rate, founders keep refreshing their email inboxes looking for the interview invitation. Though the acceptance letter might be difficult to get, it is definitely worth at least filling out the form. 

“It’s the smallest set of questions you can ask to get a clear signal about a startup, so it’s also the smallest set of questions you can answer to get a clear signal about your own startup,” said Pete Koomen, group partner at YC.

Moreover, consistency is king. Kashif Ali, the founder of TaxGPT, which is part of the YC S24 batch, said that he applied seven times with three different companies in the last four years before finally getting in on the eighth attempt. 

At the same time, not everyone needs YC. If you are a true entrepreneur, you can be successful without YC. Keep applying though.

P.S. No sweat equity.

https://twitter.com/CCgong/status/1827441029114736739

[This is an opinion and not a promotional article]

The post Cracking YC: A Guide for AI Startups appeared first on AIM.

]]>
What Black Myth: Wukong’s Success Tells us About Investing in AI Startups https://analyticsindiamag.com/ai-insights-analysis/what-black-myth-wukongs-success-tells-us-about-investing-in-ai-startups/ Tue, 27 Aug 2024 06:30:00 +0000 https://analyticsindiamag.com/?p=10133809

VCs should be patient while investing in AI startups.

The post What Black Myth: Wukong’s Success Tells us About Investing in AI Startups appeared first on AIM.

]]>

Much like the AI industry, the gaming industry, sceptics might say, is also kept alive by hyped products. And currently, the ‘hype’ is around Black Myth: Wukong.

The game, developed by Game Science, which is backed by Chinese tech giant Tencent and Hero Interactive, is nothing short of a visual masterpiece powered by NVIDIA GPUs. At the same time, its success story bears comparison to OpenAI’s ChatGPT and teaches a lot about how investment should work in the startup landscape.

To set the record straight, Black Myth: Wukong sold over 10 million copies in just three days, becoming the fastest-selling game ever, beating Elden Ring, Hogwarts Legacy, and even Red Dead Redemption 2. This is highly reminiscent of ChatGPT acquiring 1 million users in just five days, compared to the 2.5 months Instagram took to reach 1 million downloads.

But apart from the users, the game also tells us about what investing in AI startups means. 

The Word is ‘Patient Capital’

Black Myth: Wukong took the company five years to make. Game Science was established in 2014 in Shenzhen by seven former Tencent Games employees. Before shifting focus to Black Myth: Wukong in 2018, the startup released mobile games; the move to making a AAA game only happened because of the rise of Steam users in China. 

At that time, the company had 13 employees. In August 2020, when Game Science unveiled the first trailer for the game to attract talent, it received over 10,000 resumes, including applications from AAA gaming companies and international candidates willing to apply for Chinese work visas. 

The development team eventually grew to 140 employees.

In March 2021, Tencent acquired a 5% minority stake in Game Science, emphasising that its role would be limited to technical support, without influencing the company’s operations or decisions. Though the financial backing was there, the company faced several controversies around the game’s content, as well as technical problems due to the shift from Unreal Engine 4 to Unreal Engine 5.

Before Tencent and Hero Interactive’s investment, Game Science’s financial performance was largely dependent on the success of their mobile games. While these games were commercially successful, the studio’s primary goal was to develop high-quality console games, which required significant financial backing. 

Similarly, AI Takes Time

This is what patient capital stands for: believing in what a startup is building and giving it years to develop its products. What Game Science did right along its journey was developing smaller revenue-generating games to make up for the cost of building Wukong, which is something AI startups should also focus on. 

Or maybe they need investors like Microsoft and YC, who believe in OpenAI.

Arjun Rao, partner at Speciale Invest, also told AIM a similar strategy when investing in R&D startups. He said that it is essential to be patient when investing as these startups are still in the budding stage, and placing the bet on the right founders is paramount. “Founders do not need to worry about the current downturn and keep a long-term mindset,” said Rao.

AI startups, just like games, require extensive patient capital from investors, as they need extended periods of R&D before their innovations can be brought to market. That is what happened with Microsoft backing OpenAI, eventually resulting in the release of ChatGPT.

OpenAI started laying the foundation for ChatGPT in 2020 when it released GPT-3. Though a great technological marvel, the company wasn’t generating profits, and still isn’t, even two years after ChatGPT’s release in 2022. 

India is expected to have around 100 AI unicorns in the next decade. This wouldn’t be possible if investors do not trust the founders and pour money into R&D. Prayank Swaroop, partner at Accel, said that VCs are increasingly expecting AI startups to demonstrate rapid revenue growth.

“Even to the pre-seed companies, we say, ‘Hey, you need to show whatever money you have before your next fundraiser. You need to start showing proof that customers are using you.’ Because so many other AI companies exist,” said Swaroop. 

This is what investing in AI startups should look like. The game that took more than five years to make wouldn’t have been possible if the investors were just looking for immediate profitability instead of betting on the founders. Maybe Indian investors need to rethink their investment strategy.

The post What Black Myth: Wukong’s Success Tells us About Investing in AI Startups appeared first on AIM.

]]>
Why Developers are Uninstalling VS Code https://analyticsindiamag.com/developers-corner/why-developers-are-uninstalling-vs-code/ Tue, 27 Aug 2024 04:30:00 +0000 https://analyticsindiamag.com/?p=10133761

And why it might be a bad idea.

The post Why Developers are Uninstalling VS Code appeared first on AIM.

]]>

Microsoft is not shy about giving developers what they want. After Excel, and possibly Windows as well, the company introduced Copilot with GitHub. But somewhere along the way, VS Code, which has one of the largest footprints of any developer software in the world today, got lost without recognition for its contribution.

“Not sure if Microsoft can truly comprehend the true footprint of VSCode,” said Jaana Dogan, principal engineer at Google. At the same time, there is an apparent spree of people who do not like what VS Code has become: stale. Though the abundance of GitHub repositories that are forks of VS Code is undeniable, new tools in the market are making VS Code struggle.

“Just uninstalled VS Code,” said a developer on X. Why? Probably because the release of Cursor AI, touted as the ChatGPT moment in coding, feels like the final nail in the coffin of VS Code. But is it really the end of VS Code?

What are the Problems?

There are several problems with VS Code, which is undeniable. Mohamed Yamani, a front-end software engineer, said that VS Code for Python development sucks, explaining that it was not highlighting the problems with his code. Developers in the thread agreed that they often end up using other IDEs, such as JetBrains’ offerings or even Vim, for Python. 

Even for C# and other languages, developers are not happy, though VS Code has been shipping several updates. It is also very easy to install malicious extensions through VS Code.

Cursor, which is basically a glorified fork or extension of VS Code, can integrate several open-source LLMs, unlike a traditional IDE, making it handy for many AI developers. “You can select code and ask questions based on that piece of code. So you don’t have to keep switching between the IDE and browser,” explained a developer on X.

VS Code works well, but when it comes to AI development, it falls well behind Cursor and its integration of Claude 3.5 Sonnet and other LLMs. Though VS Code allows integration of models such as Phi-3.5 or GPT-4, Cursor is just far more flexible in the LLMs it supports, including Llama 3.1.

Alex Sidorenko, a React developer, made a video comparing VS Code with Copilot against Cursor with Claude 3.5 Sonnet on the Next.js App Router, in which Cursor felt more context-aware and easier to use.

Cursor’s coding and AI capabilities should be a wake-up call for Microsoft to make VS Code’s integration with GitHub Copilot a lot easier. “My assumption is that Cursor’s sudden surge in popularity will wake Microsoft up and they’ll make VS Code/GitHub Copilot integration a lot better, landing in a few months,” said Jamon Holmgren, the founder of Infinite Red.

Is it too late for Microsoft’s VS Code or GitHub Copilot to catch up? 

Cursor is Just One VS Code Update Away from Extinction

Exactly a year ago, Cursor was touted as what you would get if VS Code and ChatGPT had a baby. Cut to today, it is being called a replacement for VS Code and GitHub Copilot. Conversations around dropping VS Code started exactly a year ago with the introduction of Cursor, but VS Code is still going strong. 

In a Reddit discussion about the capabilities of Cursor and VS Code, developers were willing to bet big on VS Code in the long run, arguing that Cursor, built by a small startup, would struggle to compete with Microsoft. “Even if Cursor survives through the next year, I would go with VS Code just to have something standard and mainstream to compare it to,” said a user.

John Lindquist, creator of egghead.io, said that he recently had a chat with Harald Kirschner, project manager at VS Code, about Cursor and VS Code; the team is keenly aware of Cursor’s capabilities and there might be several things in the pipeline. “I think we’ll all be pleasantly surprised,” he said. 

It seems like a Cursor-like update for VS Code is coming soon, which might make Cursor obsolete. Or, to survive, Cursor might end up getting acquired by someone like OpenAI or Anthropic. 

But given the massive footprint VS Code has, and the plans in its pipeline, it is hard to just uninstall it and move on. The concept of making everyone a developer using natural language is not unknown to Microsoft. 

The post Why Developers are Uninstalling VS Code appeared first on AIM.

]]>
The Future of Coding is ‘Tab Tab Tab’ https://analyticsindiamag.com/ai-origins-evolution/the-future-of-coding-is-tab-tab-tab/ Mon, 26 Aug 2024 10:30:00 +0000 https://analyticsindiamag.com/?p=10133743

Coding is having a ChatGPT moment with Cursor AI.

The post The Future of Coding is ‘Tab Tab Tab’ appeared first on AIM.

]]>

Claude + Cursor = AGI. This is basically what the conversation around coding sounds like right now everywhere on X. As if the hype around Cursor AI was not enough, Andrej Karpathy added that he is choosing to use it instead of GitHub Copilot from now on. Is the future of coding in natural language already here?

Karpathy is the one who said, more than a year back, that English is the hottest programming language. Now, with Cursor, he says the future is ‘tab tab tab’ and that “the tool is now a complex living thing.” His support for Cursor even made people question whether he is working with, or supporting, the team at Anysphere, the creators of Cursor, in some way.

Further in the thread, he added that with the capabilities of LLMs shifting so rapidly, it is important for developers to continually adapt to their current capabilities. This argument resonated with many developers, who find it increasingly difficult to catch up with the new tools in coding.

Is it now time to bring back the old days of learning how to code? A user replied to Karpathy saying that they are very grateful that they had time to learn computer science prior to the advent of AI tools. “…if I was new to programming I would be too tempted to skip actual learning in favour of more LLM usage, resulting in many knowledge gaps,” the user added.

Karpathy agreed that it is a very valid concern and he feels that “it’s slightly too convenient to just have it do things and move on when it seems to work.” This has also led to the introduction of a few bugs when he is coding too fast and tapping through big chunks of code.

But, Cursor AI is Here to Stay

Overall, coding assistant tools have delivered productivity gains for organisations. Before Cursor, developers in companies relied on GitHub Copilot to code faster, which was reported to have increased team productivity. But it definitely raises the question of how people will learn coding from scratch from now on.

It is almost as if Cursor is bringing a ChatGPT moment for coding. Earlier, Ricky Robinett, VP of developer relations at Cloudflare, posted a video of his eight-year-old daughter building a chatbot on the Cloudflare Developer Platform in just 45 minutes using Cursor AI, documenting the whole process, even the spelling mistakes while giving prompts! Even Jeff Dean, chief scientist at Google DeepMind, was fascinated by it.

https://twitter.com/JeffDean/status/1827533480106062220

“It is the future,” said several developers who reposted the video. People who started using Cursor have begun implementing their own tricks. The most common one currently is using Claude 3.5 Sonnet with Cursor AI, alongside other tools and frameworks such as Replit, Tailwind, React, Vercel, Firebase and many more, in their workflows.

Developers have built complex projects using Cursor in just hours, without even writing a single line of code. Plus, using LLMs, the code generator could also explain the use cases and meaning of the code, which in the end assists in learning how to code as well. “If you’re a technical founder, Cursor + Claude 3.5 Sonnet is a legit team of senior software engineers,” said Sahil Lavingia, founder of Gumroad.

Wake Up Call for Others?

Francois Chollet, the deep learning guru and creator of Keras, said that he would be interested in watching one developer stream programming with Cursor and Claude alongside another developer who is really good at unassisted coding, to compare how much generative AI helps the former. 

Chollet had earlier also said that it would be great if someone could fully automate software engineering, as he could then move on to working on higher-level things, which is what Cursor AI is slowly ushering in. 

Meanwhile, there is another tool in the market called Zed, released in partnership with Anthropic, the creator of Claude, which several developers claim is better than both Cursor and VS Code. 

The same was said of GitHub Copilot and even Cognition Labs’ Devin. Cursor’s capabilities should be a wake-up call for Microsoft to make VS Code’s integration with GitHub Copilot a lot easier. Basically, Cursor too is a glorified VS Code extension. 

Devin, on the other hand, is yet to be released, and might usher in a new era of programming as well, probably replacing Cursor AI, or an entire software engineering team. 

It is clear that most of the upcoming generation of developers would want their code to be generated completely by AI, which GitHub Copilot in many instances failed to do. But the issue of buggy generated code still exists with Cursor, though it might get fixed in future iterations. 

The post The Future of Coding is ‘Tab Tab Tab’ appeared first on AIM.

]]>
This AI Startup Wants to Fix Your Code Bugs. And Got YC Backing for It https://analyticsindiamag.com/ai-breakthroughs/this-ai-startup-wants-to-fix-your-code-bugs-and-got-yc-backing-for-it/ Mon, 26 Aug 2024 04:49:46 +0000 https://analyticsindiamag.com/?p=10133723

Tata 1mg has written hundreds of custom policies on CodeAnt platform.

The post This AI Startup Wants to Fix Your Code Bugs. And Got YC Backing for It appeared first on AIM.

]]>

Getting into YC takes work. Amartya Jha, the co-founder and CEO of CodeAnt AI, narrated in a post how he and his co-founder Chinmay Bharti got rejected the first time because they couldn’t explain their product well to the investors. Despite this and the fear of getting blacklisted, Jha and Bharti made a 45-minute video explaining their product and sent it to YC again. 

The following week, they managed to get another interview with YC and were finally selected! But what is so good about what they are building? 

“Developers spend 20 to 30% of their time just reviewing someone else’s code. Most of the time, they simply say, ‘It looks good, just merge it,’ without delving deeper,” Jha explained while speaking with AIM. This leads to bugs and security vulnerabilities making their way into production. 

With generative AI, coding is undeniably getting easier. At the same time, the quality of code produced by these AI generators is still far from that of human coders. This makes code review crucial for every organisation, and this is where CodeAnt AI comes into the picture.

CodeAnt’s AI-driven code review system can drastically reduce these vulnerabilities by automating the review process and identifying potential issues before they reach the customer.

Founded in October 2023, CodeAnt AI is already making waves in the industry by automating code reviews with AI. The journey began at Entrepreneur First, where Jha met Bharti. The company quickly gained traction, securing a spot in YC by November 2023. 

By February 2024, they had launched their first product, attracting major clients like Tata 1mg, India’s largest online pharmacy, and Cipla, one of the country’s biggest pharmaceutical companies. “These companies were amazed that such a solution even existed,” Jha recalled. “And both contracts were paid, not just trials.”

What’s the Moat?

What sets CodeAnt AI apart from other players in the market such as CodeRabbit and SonarSource, which joined the market before CodeAnt, is its unique approach to code review. The company has developed its own dataset of 30,000+ checks, meticulously created to address every possible code commit scenario. 

“This is our pure IP,” Jha said. “We wrote our own algorithms to analyse code, understand its flow, and identify areas that need improvement. Our AI, combined with our custom engineering solutions, runs these checks on every code commit, offering unprecedented accuracy.” 

The platform also supports more than 30 programming languages.

While competition in the AI-driven code review space is growing, Jha remains confident in CodeAnt AI’s unique value proposition. Its biggest competitor is SonarSource, but interestingly, SonarSource’s lead investor is also one of the investors in CodeAnt. 

CodeRabbit relies solely on AI, which leads to a lot of hallucinations and false positives. “Our approach, which combines AI with deterministic policies, gives us a significant edge,” said Jha.

“The demand for tools like ours is only going to grow as AI-generated code becomes more prevalent,” Jha said, adding that tools like GitHub Copilot or Cursor are far from generating accurate code anytime soon.

Jha further elaborated on the limitations of competitors that use AI exclusively: when developers feed a large codebase to AI, it tends to hallucinate. 

To mitigate this, CodeAnt built a foundation of hard-coded checks, which are further enhanced by AI. This reduces false positives and ensures that the code is not only correct but also secure and compliant with industry standards.
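The layered design Jha describes, hard-coded rules first with AI on top, can be sketched in a few lines of Python. The specific rules and the `ai_reviewer` hook below are illustrative assumptions for this article, not CodeAnt's actual checks:

```python
import ast
import re

def run_deterministic_checks(source: str) -> list:
    """Cheap, hard-coded rules that never hallucinate (illustrative examples)."""
    findings = []
    if re.search(r"\beval\(", source):
        findings.append("use of eval() is a possible code-injection risk")
    for node in ast.walk(ast.parse(source)):
        # A bare `except:` silently swallows every error.
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            findings.append(f"bare except at line {node.lineno}")
    return findings

def review(source: str, ai_reviewer=None) -> list:
    """Run the deterministic layer first; layer the (optional) AI pass on top."""
    findings = run_deterministic_checks(source)
    if ai_reviewer is not None:
        findings.extend(ai_reviewer(source))  # an LLM call in a real system
    return findings

print(review("try:\n    x = eval(data)\nexcept:\n    pass\n"))
```

In this arrangement, the deterministic layer is the source of truth and the AI pass can only add findings on top of it, which is one way a hallucinated suggestion is prevented from silencing a hard-coded rule.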

Amartya Jha speaking at Bangalore Python Meetup

Stands Out with Customisation

One of CodeAnt AI’s standout features is its ability to let enterprises input their own data and create custom policies. “For example, Tata 1mg has written hundreds of custom policies on our platform. Before CodeAnt AI, they would have had to build a similar platform themselves to enforce these policies,” Jha said, adding that Tata 1mg has been writing code in Python for the last eight years.

Now, they can simply use the CodeAnt platform to ensure that their code complies with their specific guidelines, which is especially valuable for large organisations with complex codebases.

This level of customisation is not only beneficial for code quality but also for compliance with security frameworks. “Imagine a tool that knows exactly how code should be written in your organisation and can flag issues related to SOC 2 compliance, GDPR, or data privacy. It makes the process of maintaining compliance much easier and more efficient,” Jha added.

In the future, Jha will focus on expanding CodeAnt AI’s capabilities, particularly in the areas of security and code quality. “We’re currently working with some of the largest security and backup companies in the world, helping them review their code for vulnerabilities. Our team may be small, but we’re making a big impact,” he said.

Despite its San Francisco roots, CodeAnt AI maintains a strong presence in India, with Jha overseeing operations from Bangalore. The company is poised for further growth, with plans to expand its product offerings and deepen its market presence. 

“We are going deeper into security and code quality. We will soon be making announcements about new products and partnerships, so stay tuned,” Jha hinted.

The post This AI Startup Wants to Fix Your Code Bugs. And Got YC Backing for It appeared first on AIM.

]]>
xAI’s Grok-2 Ranks Second on the Chatbot Arena Leaderboard, Competing with Gemini 1.5 and GPT-4o https://analyticsindiamag.com/ai-news-updates/xais-grok-2-ranks-second-on-the-chatbot-arena-leaderboard-competing-with-gemini-1-5-and-gpt-4o/ Sat, 24 Aug 2024 05:11:57 +0000 https://analyticsindiamag.com/?p=10133668

Grok-2-Mini has earned the #5 position.

The post xAI’s Grok-2 Ranks Second on the Chatbot Arena Leaderboard, Competing with Gemini 1.5 and GPT-4o appeared first on AIM.

]]>

In an exciting development from the xAI team, Grok-2 and Grok-2-Mini have officially secured positions on the LMSys Chatbot Arena leaderboard. Grok-2 has taken the #2 spot, surpassing GPT-4o (May) and tying with the latest Gemini model, driven by over 6,000 community votes. 

Meanwhile, Grok-2-Mini has earned the #5 position.

https://twitter.com/lmsysorg/status/1827041269534879784

Grok-2 has excelled particularly in mathematical tasks, ranking #1 in this category, and secured the #2 position across various other tasks, including hard prompts, coding, and instruction-following. 

Additionally, Grok-2-Mini has received significant speed enhancements, now performing twice as fast as before. This boost came after xAI’s inference team completely rewrote the inference stack using SGLang, enabling more efficient multi-host inference and improved accuracy.

The team also introduced new algorithms for computation and communication kernels, alongside better batch scheduling and quantisation, further enhancing the models’ performance.

Several people are still sceptical about the rankings: OpenAI’s GPT-4o, which claims the top spot, does not, in their view, perform as well as Claude 3.5, which sits in 5th place. Still, people have started experimenting with Grok-2 and claim that the model is actually brilliant in coding and maths-related tasks.

Released in beta this month, the Grok-2 family of models is also available for testing on X. The models also allow users to generate images using the FLUX.1 image generation model.

The post xAI’s Grok-2 Ranks Second on the Chatbot Arena Leaderboard, Competing with Gemini 1.5 and GPT-4o appeared first on AIM.

]]>
Andrej Karpathy Praises Cursor Over GitHub Copilot https://analyticsindiamag.com/ai-news-updates/andrej-karpathy-praises-cursor-over-github-copilot/ Sat, 24 Aug 2024 03:19:33 +0000 https://analyticsindiamag.com/?p=10133665

“The tool is now a complex living thing,” Karpathy added.

The post Andrej Karpathy Praises Cursor Over GitHub Copilot appeared first on AIM.

]]>

Cursor is currently the hottest AI developer tool, and its prowess is further reinforced when Andrej Karpathy decides to praise it. In a recent post on X, Karpathy said that he is trying Cursor along with Claude 3.5 Sonnet instead of GitHub Copilot, and called it a net win.

“Just empirically, over the last few days most of my ‘programming’ is now writing English (prompting and then reviewing and editing the generated diffs), and doing a bit of ‘half-coding’ where you write the first chunk of the code you’d like, maybe comment it a bit so the LLM knows what the plan is, and then tab tab tab through completions,” said Karpathy, adding that he is able to do coding a lot faster with the help of the tool.

Further in the thread, he added that with the capabilities of LLM shifting so rapidly, it is important for developers to continually adapt the current capabilities. “The tool is now a complex living thing,” he added.

“All this talk of Claude + Cursor and becoming capable of building anything you put your mind to (no matter your skill set) is warranted. If this is the future, I want to live in it,” said Jordan Singer from Figma.

Sebastian Raschka, LLM research engineer at Lightning AI, said that though using the tool is fun, he sometimes enjoys unassisted coding as well. “It’s like driving a manual car (/stick). Not the most practical but fun,” he explained with an analogy. 

Pratik Desai, CEO of KissanAI, added, “Once you start using it regularly, you are going to develop your own tricks suitable for your style, that may not be over there in their onboarding tutorials.”

Anysphere, the company building the tool, recently raised $60 million in a round led by Andreessen Horowitz, with contributions from Google’s Jeff Dean, OpenAI’s John Schulman and Noam Brown, and Nat Friedman, valuing the company at $400 million. 

The hype around Cursor AI is real. To give an example, Ricky Robinett, VP of developer relations at Cloudflare, posted a video of his eight-year-old daughter building a chatbot on the Cloudflare Developer Platform in just 45 minutes using Cursor AI, documenting the whole process, even the spelling mistakes while giving prompts!

Read: Cursor AI is Better at Coding Than GitHub Copilot Will Ever Be

“The hottest new programming language is English,” Karpathy said more than a year back. Now for him, it is coming true in the purest sense.

The post Andrej Karpathy Praises Cursor Over GitHub Copilot appeared first on AIM.

]]>
Can Gen AI Reduce the Technical Debt of Supply Chain Platforms https://analyticsindiamag.com/ai-highlights/can-gen-ai-reduce-the-technical-debt-of-supply-chain-platforms/ Fri, 23 Aug 2024 13:03:46 +0000 https://analyticsindiamag.com/?p=10133651

Madhumita Banerjee sheds light on how technical debt accumulates for enterprises, and how generative AI can play a pivotal role in addressing these challenges.

The post Can Gen AI Reduce the Technical Debt of Supply Chain Platforms appeared first on AIM.

]]>

Technical debt, like financial debt, is a concept in information technology where shortcuts, quick fixes or immature solutions used to meet immediate needs burden enterprises with future costs. This debt can significantly impact supply chain efficiency, especially as businesses face the pressures of staying competitive and agile in a post-pandemic world. 

Madhumita Banerjee, associate manager, supply chain and manufacturing at Tredence, sheds light on how technical debt accumulates for enterprises, and how generative AI can play a pivotal role in addressing these challenges.

Banerjee explained that in the context of supply chains, technical debt accumulates when outdated systems, fragmented processes, and manual, siloed workflows are used. Over time, these inefficiencies lead to increased operational costs, reduced responsiveness, and heightened exposure to risks, making it harder for companies to remain competitive.

One of the primary contributors to technical debt, according to Banerjee, is the reliance on legacy systems. “Many supply chains rely on outdated systems that are difficult to integrate with modern technologies, leading to increased maintenance costs and inefficiencies,” she noted. 

These legacy systems, coupled with data silos where information is stored in disparate systems, create significant barriers to seamless information flow, which is critical for supply chain efficiency.

Manual processes also play a role in accumulating technical debt. Tasks requiring human intervention are prone to errors and delays, contributing to inefficiencies and higher operational costs. 

As companies transitioned to digitalisation, the rushed adoption of custom solutions and cloud migrations, often driven by the need to keep pace with technological advancements, introduced added complexity and heightened system maintenance burdens. Generative AI emerges as a pivotal new factor in this scenario. Although early adopters face new risks and the possibility of future debt with each generative AI deployment, the technology shows significant promise in addressing these challenges.

The Role of Generative AI in Addressing Technical Debt

Banerjee emphasised that while analytics has historically helped connect data and enhance visibility, the emergence of generative AI, especially LLMs, marked a significant shift. 

“Conversational AI and LLM-powered agents make it easier for functional partners—both technical and non-technical—to understand and act on complex data,” she explained. This is especially crucial in supply chains, where not all stakeholders, such as warehouse partners and freight workers, are tech-savvy.

One of the most significant advantages of generative AI in supply chain management is its ability to enhance data integration and visibility. For instance, in order processing, which traditionally involves many manual steps prone to errors, generative AI can automate the entire workflow—from order intake and validation to order confirmation—ensuring seamless communication across departments and reducing the need for manual intervention.
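As a toy illustration of that intake-to-confirmation flow, the sketch below validates a structured order and either confirms it or routes errors back. The fields, rules, and messages are hypothetical; a production system would add an LLM step for unstructured inputs such as emailed orders:

```python
from dataclasses import dataclass

@dataclass
class Order:
    sku: str
    quantity: int
    customer_email: str

def validate(order: Order, catalog: set) -> list:
    """Deterministic validation rules (illustrative)."""
    errors = []
    if order.sku not in catalog:
        errors.append(f"unknown SKU {order.sku!r}")
    if order.quantity <= 0:
        errors.append("quantity must be positive")
    if "@" not in order.customer_email:
        errors.append("invalid email address")
    return errors

def process(order: Order, catalog: set) -> str:
    """Intake -> validation -> confirmation, with no manual touchpoints."""
    errors = validate(order, catalog)
    if errors:
        return "rejected: " + "; ".join(errors)  # routed back to the sender
    return f"confirmed: {order.quantity} x {order.sku}"

catalog = {"PARA-500", "IBU-200"}
print(process(Order("PARA-500", 3, "buyer@example.com"), catalog))
# confirmed: 3 x PARA-500
```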

Generative AI also holds promise in optimising decision-making processes within supply chain platforms. However, Banerjee noted that the effectiveness of generative AI in this area depends on the maturity of the supply chain itself. 

“For instance, if we have an LLM-powered event listener that detects market sentiments and links this information to the forecast engine, it can significantly narrow down the information demand planners need,” she said. 

This level of optimisation requires a robust and connected data model where all data parts communicate effectively, enabling real-time insights and more accurate demand forecasts.

Predictive Analytics, Real-time Data Processing, and Compliance

Banerjee said that predictive analytics is another area where generative AI can revolutionise supply chain management. She recalled the evolution from traditional “what-if” analyses to more advanced machine learning algorithms that predict outcomes over time. 

However, she pointed out that decision-making has now evolved to require not just predictions but also a deeper understanding of cause-and-effect relationships. “With GenAI, we can weave in causal discovery algorithms that translate complex data into actionable insights presented in simple English for all stakeholders to understand,” she added.

This capability is particularly valuable in areas like inventory forecasting, where understanding the root causes of forecast errors and deviations can lead to more accurate and reliable predictions. By translating these insights into easily digestible information, generative AI empowers supply chain managers to make more informed decisions, ultimately improving efficiency and reducing costs.

Speaking about real-time data processing being critical for the effectiveness of generative AI, Banerjee clarified that it’s not the AI that contributes to real-time processing but the other way around. 

“We need to have real-time data to make sure we can analyse scenarios and use generative AI to its maximum potential,” she explained. For instance, ensuring that data entered into legacy systems is immediately available on the cloud allows LLMs to process and convert this data into actionable insights without delay.

In terms of compliance and risk management, generative AI can bolster efforts by removing manual interventions. Banerjee highlighted procurement and transportation as key areas where GenAI can enhance compliance. In transportation, where contracts are reviewed annually, GenAI-powered systems can query specific contracts, compare terms, and ensure adherence to key metrics like freight utilisation and carrier compliance.

Challenges and Future Outlook

Although generative AI offers numerous benefits, challenges still persist. Banerjee stressed the importance of properly vetting the fitment and maturity of a GenAI strategy. “Embarking on the GenAI journey may appear simple, but without a thorough assessment of need and fitment, along with strong investments in data quality, integration, and governance, companies are likely to deepen their technical debt,” she added.

One of the most significant concerns is “hallucination”, where AI models generate incorrect or misleading information. Mitigating it requires validating the data on which AI models are trained to avoid garbage-in, garbage-out scenarios.

In summary, Banerjee tied the discussion back to the central theme of technical debt. By addressing the key contributors to technical debt—legacy systems, data silos, and manual processes—generative AI can help reduce future costs and risks, enabling companies to pursue digital initiatives with greater confidence.

“If we can successfully integrate GenAI into our systems, we can revolutionise the entire supply chain platform, making it more efficient, responsive, and competitive,” she concluded.

The post Can Gen AI Reduce the Technical Debt of Supply Chain Platforms appeared first on AIM.

Toss a Stone in Bangalore and It will Land on a Generative AI Leader https://analyticsindiamag.com/ai-origins-evolution/toss-a-stone-in-bangalore-and-it-will-land-on-a-generative-ai-leader/ Fri, 23 Aug 2024 11:18:49 +0000 https://analyticsindiamag.com/?p=10133632

But then not everyone can be a good GenAI leader.

The post Toss a Stone in Bangalore and It will Land on a Generative AI Leader appeared first on AIM.

Are software engineering roles disappearing? Not exactly. The apparent decline in job postings might just be a shift in titles—think ‘Generative AI Leader’ instead of ‘Software Engineer’. And now, as if on cue, everyone has become a generative AI expert and leader. Not just Silicon Valley, it’s the same in Bengaluru too. 

Vaibhav Kumar, senior director of AI/ML at AdaSci, pointed out this phenomenon in a LinkedIn post. “In Bangalore these days, if you randomly toss a stone, odds are it will land on a Generative AI Leader—earlier, they used to be software engineers, but now they seem to be on the brink of extinction.”

It is true that there are a slew of new jobs that are being created because of generative AI such as AI entrepreneur, chief AI officer, AI ethicist, AI consultant, and at least 50 more. But it has also given rise to people who simply call themselves ‘generative AI leaders’.

Everyone’s an AI Engineer

Kumar’s point is that now everyone is calling themselves an expert in AI as the barrier to entry is significantly lower. But then not everyone can be a good GenAI leader. Vishnu Ramesh, the founder of Subtl.ai, said that the only way to find a good generative AI leader is to ask them how many generative AI projects they have driven and whether these projects actually benefited the organisation. 

“The number of chatbots built will soon overtake Bangalore traffic,” Shashank Hegde, data science and ML manager at HP, said in jest, implying that with every company experimenting with generative AI in use cases, most of them are coming up with chatbots on their systems, which, honestly, people are not very fond of. 

Ramesh and Hegde’s points found takers. More engineers in the discussion described how their team’s ‘generative AI leaders’ were unable to perform basic machine learning tasks, and were mostly experts in data science, not really in generative AI. “A Rs 499 course from Udemy or Coursera is delivering GenAI leaders very rapidly,” commented Chetan Badhe.

AIM had earlier reported that fancy courses from new organisations promise the jobs of the future, but also leave freshers searching for jobs that aren’t there in the market. “GenAI leaders don’t know what’s bidirectional in BERT,” added another user.

Meanwhile, a recent IBM study found that around 49% of Indian CEOs surveyed said they were hiring for GenAI roles that didn’t exist last year. Also, 58% of Indian CEO respondents said they were pushing their organisations to adopt generative AI more quickly than some people are comfortable with.

This makes it clear that generative AI is the leading focus for companies, although for some, it’s more about appearances than substance. 

Moreover, getting ‘generative AI’ on your profile can also boost your salary by up to 50%. AIM Research noted that the median salaries of generative AI developers and engineers ranged between INR 11.1 lakh and 12.5 lakh per annum.

It just makes sense to call yourself a generative AI leader if you are already working in the software engineering field. But getting upskilled in the field to be credible is also important. 

Bengaluru is All About AI

Just like everyone is “thrilled” on LinkedIn, everyone is doing something in AI in Bengaluru. Someone correctly pointed out that if LinkedIn were a city, it would definitely be Bengaluru. The city’s AI hub, HSR Layout, was recently looking for a chief everything officer, someone who can perform a plethora of tasks all alone.

And it is indeed true that most of software engineering is becoming all about generative AI because of the trend and hype. Earlier, Bangalore was filled with software engineers from IT companies and startups; now they have slowly turned into generative AI leaders. Some are even influencers on X or LinkedIn.

At the same time, Bengaluru’s AI culture is also giving rise to 10x engineers, who are able to do the task of 10 people using generative AI. Some even argue that there is no need for a computer science degree anymore to get into AI. It is definitely time to rewrite your resume and say you are a ‘generative AI leader’.

India will Have 100 AI Unicorns in the Next Decade https://analyticsindiamag.com/ai-insights-analysis/india-will-have-100-ai-unicorns-in-the-next-decade/ Thu, 22 Aug 2024 10:00:00 +0000 https://analyticsindiamag.com/?p=10133498

With $20 billion dry powder, India’s AI startup ecosystem is definitely foreseeing unicorns.

The post India will Have 100 AI Unicorns in the Next Decade appeared first on AIM.

India’s AI ecosystem is on fire. This year alone, all our feeds were filled with announcements from AI startups raising funds or launching innovative products and solutions. Surprisingly, the number of AI unicorns in India is still just one – Krutrim.

Can that change? Rahul Agarwalla, managing partner at SenseAI Ventures, definitely believes so.

In a recent podcast, Agarwalla said that he is witnessing major growth in India’s AI ecosystem. “Over the past six months, we have seen over 500 AI startups. That’s a phenomenal amount of innovation that is going on in the country,” he said, adding that this gives him the belief that India will give birth to around 100 AI unicorns in the next decade.

The message sounds motivating, but should be taken with a grain of scepticism. Agarwalla was talking about all AI startups, including ones using existing infrastructure or barely building applications on top of others. But India, often dubbed the AI use case capital of the world, is largely struggling with innovation in the AI realm, at least at the moment.

Similarly, Arjun Rao, partner at Speciale Invest, the firm which recently made three investments in the space, told AIM, “We are bullish about many successful AI companies being built from India, but making number predictions is fraught with risks. It is fairly clear that we’re in the very nascent stages of this AI revolution. Each year we’ll make progress and will need to iterate and solve problems along the way.”

Apart from names like Sarvam AI, Krutrim, TWO.AI, Project Indus, SML, and a few others, Indian AI startups are largely powered by jugaad, not VC money. This is something that needs to change fairly quickly.

Speaking with AIM, Prayank Swaroop, partner at Accel India, said that the 27 AI startups his firm has backed in the past couple of years will be worth at least five to ten billion dollars in the future, which also includes wrapper-based startups. “A majority of people can start with a wrapper and then, over a period of time, build the complexity of having their own model. You don’t need to do it on day one,” said Swaroop.

AI Soonicorns on the Rise 

According to AIM Research data, Indian AI startups raised a total of $560 million across 25 rounds in 2024, a decline of 49.4% compared to the previous year. Globally, AI startups raised around $27 billion in April-June this year alone.

Despite the slowdown in India, AI remains the hottest theme for investors as demand from enterprises increases. “The numbers around AI investing need to be taken with a pinch of salt… AI today is like the cloud – almost every new startup is utilising it to varying degrees,” said Alok Goyal, partner at Stellaris Venture Partners. About 50% of the firm’s investments over the last 12 months were in AI-first startups.

As several Indian AI startups slowly move towards building foundational AI solutions, it is necessary for them to raise funds in the billions to keep up with infrastructure costs and acquire the necessary talent.

Apart from that, for VCs, the opportunity also lies in the several AI application startups mentioned above. But betting on AI startups becoming unicorns within a decade is still risky, as the generative AI landscape keeps changing with the release of new products from the big-tech companies.

“Deals are also being announced later as startups are building in stealth. We are aware of at least 15 AI investments which are yet to be announced,” said Agarwalla from SenseAI.

Dry Powder is All You Need

The dream of 100 AI unicorns over the next decade is somewhat achievable given the amount of money the investors are sitting on. Rajan Anandan, the managing partner at Peak XV Partners, recently spoke about this. He sees potential for growth and innovation, and foresees India making a significant impact on the global stage. 

At the Global IndiaAI Summit, Anandan said, “Our firm has over INR 16,000 crore of dry powder. We just want more people starting up in AI.” 

He added that Peak XV has invested in 25 AI startups so far. Emphasising that capital availability is not a problem, Anandan mentioned that the VC ecosystem has $20 billion ready to be invested in Indian startups. He also noted that AI is currently the most significant theme, with investors showing strong enthusiasm for startups in this field. 

Antler, a VC firm, recently announced a $10 million investment in early-stage Indian AI startups. Rajiv Srivatsa, partner at Antler, shared the news on X, emphasising: “NOW is the best time to build a startup in India!”

With incubators and startup programmes from Google, NVIDIA, and JioGenNext, along with the government procuring GPUs from NVIDIA specifically for Indian AI startups, it seems the prediction might come true.

At the same time, investors have also become wary about who they are investing in. Earlier, it was said that many VCs didn’t have a thesis for AI startups, but now the questions during pitches are extremely detailed. Though they remain risk-averse, they now have a much firmer grasp of AI.

Ex-Tech Mahindra Veteran Jagdish Mitra Unveils GenAI SaaS Startup Humanize, Plans to Hire 1,000 Employees in 5 Years https://analyticsindiamag.com/ai-news-updates/ex-tech-mahindra-veteran-jagdish-mitra-unveils-genai-saas-startup-humanize-plans-to-hire-1000-employees-in-5-years/ Thu, 22 Aug 2024 03:51:07 +0000 https://analyticsindiamag.com/?p=10133440

Mitra aims to expand into Southeast Asia, Germany, the Nordic countries, the United Kingdom, Australia, and Japan, with a focus on customer experience, supply chain, and sustainability.

The post Ex-Tech Mahindra Veteran Jagdish Mitra Unveils GenAI SaaS Startup Humanize, Plans to Hire 1,000 Employees in 5 Years appeared first on AIM.

Former Tech Mahindra India business head Jagdish Mitra has announced the launch of Humanize, a generative AI IP-powered SaaS startup, just four months after leaving the IT services company. 

The new venture, which combines generative AI with low code/no code capabilities, aims to reduce the time it takes for customers to develop and bring solutions to market.

Mitra, now the founder and CEO of Humanize, explained, “One of the big problems businesses have is they don’t seem to have all the data structured and organised in a manner where they can take advantage of AI. So that’s our first job at Humanize. We are creating our own GenAI toolkit, which will help deploy the same SaaS platform at 15 – 25 percent faster pace.”

The launch of Humanize follows former Tech Mahindra CEO CP Gurnani’s AI venture AionOS, which was founded in partnership with InterGlobe founder Rahul Bhatia.

According to the report, Humanize’s product is currently in development and is expected to hit the market within the next five to six months. The startup plans to initially target India and the US, focusing on GCCs within enterprises. 

The startup’s hiring strategy includes expanding its team, currently composed of 14-15 senior executives, to 800-1,000 employees over the next five years. Humanize also plans to onboard around 80-100 customers during this period. 

Leveraging his position on the board of the National Skill Development Corporation (NSDC) and the government’s youth internship program, Mitra is committed to hiring about 100 skilled individuals from the internship program annually.

In the long term, Mitra aims to expand into Southeast Asia, Germany, the Nordic countries, the United Kingdom, Australia, and Japan, with a focus on customer experience, supply chain, and sustainability.

The US-headquartered startup will begin with two monetisation strategies: a project-oriented, services-based model, which will later transition into an outcome-based model. Mitra explained, “I’m trying to change the good old [revenue] model that already got applied many times over to something which is a lot more outcome-oriented rather than manpower-oriented. I don’t want to go to the manpower-oriented models.”

Humanize has also formed a strategic partnership with global technology and digital talent solutions provider NLB Services, which has invested in the startup. Mitra, confident in the current funding, does not plan to seek additional capital for product development and market launch.

“NLB’s biggest advantage is that they are a big talent company. They have built their whole business around being able to source the right talent at speed,” Mitra added.

GSTN Announces Analytics Hackathon to Create AI/ML Predictive Models for GST with Prizes up to INR 50 Lakh https://analyticsindiamag.com/ai-news-updates/gstn-announces-analytics-hackathon-to-create-ai-ml-predictive-models-for-gst-with-prizes-up-to-inr-50-lakh/ Wed, 21 Aug 2024 14:40:09 +0000 https://analyticsindiamag.com/?p=10133422

Prizes will be awarded to the top three teams whose models are deemed viable by the jury, with a special prize for an all-women team if eligible.

The post GSTN Announces Analytics Hackathon to Create AI/ML Predictive Models for GST with Prizes up to INR 50 Lakh appeared first on AIM.

The Goods and Services Tax Network (GSTN) has launched an Analytics Hackathon focused on developing a predictive model for GST, open to students and researchers. 

Participants have the chance to create AI and ML solutions using GST data, competing for a substantial prize pool exceeding INR 50 lakhs, with INR 25 lakhs awarded to the top entry. 

Click here to register

The online challenge runs from 12:00 PM on 12th August 2024 until 12:00 PM on 26th September 2024.

Participants will work with a dataset of roughly 900,000 anonymised records, each featuring around 21 attributes and target variables. This well-labeled dataset includes training, testing, and a non-validated subset reserved for final evaluations. The objective is to develop and implement innovative AI and ML algorithms to tackle the challenges set forth in the hackathon.

The GSTN Analytics Hackathon will be conducted online, encompassing participant registration, dataset access, and prototype submission. An offline event will be held for shortlisted participants in the final round. 

Teams can consist of up to five members, with at least one team lead. The hackathon will last 45 days, from the start of registration to the final submission date.

Participants must upload their code to a GitHub repository and may optionally provide a demo or product video on YouTube before submitting their solution prototypes. Required fields for online submissions include the project idea, description, source code URL, video URL, GitHub unique source code checksum, and a project report.

Prizes will be awarded to the top three teams whose models are deemed viable by the jury, with a special prize for an all-women team if eligible. Consolation prizes will be awarded at the jury’s discretion if no solution meets the required standards. 

Is Sarvam 2B Good Enough for Indian AI Developers? https://analyticsindiamag.com/ai-breakthroughs/is-sarvam-2b-good-enough-for-indian-ai-developers/ Wed, 21 Aug 2024 11:42:56 +0000 https://analyticsindiamag.com/?p=10133383

One good thing is that this will mark the end of ‘Indic Llama Garbaggio’.

The post Is Sarvam 2B Good Enough for Indian AI Developers? appeared first on AIM.

Sarvam AI, or what we call the OpenAI of India, is probably one of the most celebrated startups by Indian developers. Its focus on Indian languages and open source philosophy has been the cornerstone for AI development in India. 

At Sarvam’s recent launch event, the company announced Sarvam 2B, its open source foundational model trained on 4 trillion tokens of data, 50% of it in 10 Indic languages.

According to co-founder Vivek Raghavan, Sarvam 2B joins other small language models in the market, such as Microsoft’s Phi-3, Meta’s Llama 3, and Google’s Gemma models, all of which were only decent enough for Indic AI development.

But is Sarvam 2B really good enough for Indic AI developers?

First, to clear the air, the model uses a slightly modified Llama architecture, but is trained from scratch. This means there is no continued pre-training or distillation, making it the first foundational model built entirely for Indic languages. 

We can also call it an ‘Indic Llama trained from scratch’. This is several steps ahead of the company’s earlier approach of building OpenHathi, which was a model fine-tuned on top of Llama 2.

When compared to other models, Sarvam 2B outperforms GPT-4o and gives tough competition to Llama 3.1-8B and Gemma-2-9B in speed and accuracy for Indic languages.

The End of Indic Llama?

Models from Google, Meta, or OpenAI, have been increasingly trying to do better at Indic language translation and inference, but have been failing big time. Indian developers have been fine-tuning these models on billions of tokens of Indian language data, but the quality hasn’t necessarily improved. 

Plus, these models have rarely been adopted in production. “The best thing about Sarvam’s 2B model is that it puts an end to the Indic Llama garbage,” said Raj Dabre, a prominent researcher at NICT in Kyoto and a visiting professor at IIT Bombay.

Pratik Desai, the founder of KissanAI, agreed with Dabre, saying that none of those models were useful in production. He still has the same doubts about Sarvam 2B, even though he sees it as a great stepping stone towards better Indic models.

“I doubt it will be used in production. Actually none of the 2B models are good enough to serve paid customers,” said Desai, adding that even models such as Gemma 2B are not useful – not even for English use cases.

Desai further explained that LLMs with 2 billion parameters struggle with consistency and instruction-following even for summarisation, and are only good enough as helper models. But since Sarvam is also planning to release more versions in the future, these issues might get resolved eventually.

Regardless, the small size of these models makes them useful for edge use cases and on-device LLMs. This, along with Shuka, an audio model released by Sarvam, would be helpful for the developer community to experiment with. But taking these models into production is still a hard decision, as the output can become unreliable for several use cases, as is the case with English models of the same size.

Along with this, the company also released the Samvaad-Hi-v1 dataset, a meticulously curated collection of 100,000 high-quality English, Hindi, and Hinglish conversations, an invaluable resource for researchers and developers.

A Good Start

One of the key aspects of Sarvam 2B’s brilliant performance is that it is trained entirely on synthetic data, similar to Microsoft’s Phi series of models. Even though the debate around using synthetic data is still on, Raghavan said that synthetic data is essential for Indian languages, as such data is not available on the open web.

Furthermore, using synthetic data generated by the Sarvam 2B model itself could either help improve the model’s capabilities in the future or make it run in circles.

Raghavan also said that the cost of running Sarvam 2B would be a tenth of that of other models in the industry. With APIs and Sarvam Agents in the pipeline, it is clear that Sarvam’s sovereign AI approach is helping it build great models for Indian developers.

Cursor AI is Better at Coding Than GitHub Copilot Will Ever Be https://analyticsindiamag.com/developers-corner/cursor-ai-is-better-at-coding-than-github-copilot-will-ever-be/ Tue, 20 Aug 2024 10:18:45 +0000 https://analyticsindiamag.com/?p=10133241

You won’t get replaced by AI. You will get replaced by an eight-year-old using Cursor AI.

The post Cursor AI is Better at Coding Than GitHub Copilot Will Ever Be appeared first on AIM.

It has been crazy how AI tools are making everyone a developer. Earlier it was GitHub Copilot, though its allure among developers has slowly been fading. Now it is Cursor AI, which is being celebrated as the best AI developer tool right now.

To give an example, Ricky Robinett, VP of developer relations at Cloudflare, posted a video of his eight-year-old daughter building a chatbot on the Cloudflare Developer Platform in just 45 minutes using Cursor AI, documenting the whole process, even the spelling mistakes while giving prompts!

https://twitter.com/rickyrobinett/status/1825581674870055189

With its latest updates, Cursor AI includes an alternate composer window that gives suggestions for the next step based on queries. People have been building apps in just a couple of days without even writing a single line of code.

According to developers, the code completion quality and speed of Cursor AI are oftentimes better than those of GitHub Copilot and Supermaven AI, as it comes with Copilot++. For flexibility, the AI editor also allows users to switch from Microsoft’s VS Code into Cursor anytime, as it’s built on top of it. 

It has been only a few months since Cursor emerged out of nowhere, and people are still trying to figure out how it works so fast. Taking to X, Jack Zerby, a designer for Gumroad, said he was able to build an equity calculator with the help of Cursor AI, which would have taken weeks using other tools.

GitHub Copilot charges $10 per month after a one-month free trial, while Cursor AI is free for most use cases. Its Pro version, however, costs $20 per month with unlimited completions and 500 fast premium requests using GPT-4 or Claude 3.5 Sonnet, which is expensive compared to GitHub Copilot’s unlimited requests to OpenAI’s Codex model.

Taking the Crown of the Fastest Copilot?

Cursor AI is built by Anysphere, a team of developers that started only two years ago. It raised $8 million in seed funding from the OpenAI Startup Fund, with participation from former GitHub CEO Nat Friedman and Dropbox co-founder Arash Ferdowsi. Cut to the present, Cursor AI is emerging as the strongest competitor to GitHub Copilot. 

According to reports, the team, led by Michael Truell, Sualeh Asif, Arvid Lunnemark, and Aman Sanger, is also in talks to raise over $60 million in its Series A round, co-led by a16z and Thrive Capital, with Stripe co-founder Patrick Collison also participating. This would give Anysphere a $400 million post-money valuation.

The Cursor AI tool has been adopted by engineers at companies such as Samsung, Replicate, Midjourney, Shopify, and Perplexity. It can also access a team’s existing codebase, which helps it retrieve accurate code in a few clicks, or just by hitting tab, along with natural language prompts and instructions.

People have been experimenting with the AI tool in different ways, and found that it works best when paired with Anthropic’s Claude 3.5. Jordan Singer from Figma said that Cursor is definitely showing what the future of development will be. “All this talk of Claude + Cursor and becoming capable of building anything you put your mind to (no matter your skill set) is warranted. If this is the future, I want to live in it,” he said. 

When Santiago Valdarrama, founder of Tideily, posed the question, “If you aren’t using GitHub Copilot to write code, can you tell me why?”, one of the most common reasons given was that developers have started using Cursor AI, which Valdarrama said is a very genuine reason. 

Is Everyone Finally a Developer?

When Satya Nadella announced that everyone’s a developer with the rise of coding tools that help code in natural language, he was mostly talking about GitHub Copilot and ChatGPT. But Cursor AI has taken it a step further because of its flexibility, even giving tough competition to Cognition Labs’ Devin.

But it’s not just Cursor AI that is changing the landscape of coding. In April, Eric Schmidt-backed Augment also came out of stealth with $252 million, gaining a valuation of $977 million. Built by ex-Microsoft software developer Igor Ostrovsky, Augment is similar to Cursor, utilising open tools available in the market. 

And apart from this, there is no dearth of coding copilots in the market, from Magic, Tabnine, Codegen, and TabbyML to Amazon CodeWhisperer. But right now, people are in love with Cursor AI, even after using it for just 10 minutes.

Flutter Entertainment Expands with New GCC in Hyderabad https://analyticsindiamag.com/ai-news-updates/flutter-entertainment-expands-with-new-gcc-in-hyderabad/ Tue, 20 Aug 2024 08:16:38 +0000 https://analyticsindiamag.com/?p=10133183

Flutter aims to increase the headcount at this centre to 900 by 2024, with a target of 40% women in leadership positions by 2026, focusing on data analytics and engineering roles.

The post Flutter Entertainment Expands with New GCC in Hyderabad appeared first on AIM.

Flutter Entertainment, a global leader in online sports betting and iGaming, has inaugurated a new Global Capability Centre (GCC) in Hyderabad, India, marking a significant investment in the country’s skills market.

Located at RMZ Spire in Knowledge City, Hyderabad’s premier technology park, the new facility represents a $3.5 million investment, spanning over 80,000 square feet across three floors. This centre of innovation and excellence currently employs over 700 professionals specialising in Data Engineering, Game Integrity Services, HR Operations, Procurement, Safety and Security, and Customer Operations.

The Hyderabad hub will play a crucial role in supporting Flutter’s global brands, including Paddy Power, Sisal, Sky Betting & Gaming, PokerStars, Sportsbet, and Betfair, leveraging the company’s unique competitive advantage, the Flutter Edge.

Flutter aims to increase the headcount at this centre to 900 by 2024, with a target of 40% women in leadership positions by 2026, focusing on data analytics and engineering roles. 

“Our expansion in India marks a huge milestone in Flutter’s growth and shows our continued commitment to investing in top talent to drive business growth globally,” said Phil Bishop, Chief Operating Officer at Flutter Entertainment. “The Hyderabad Global Capability Centre is designed to foster creativity and growth, further strengthening our presence in the Indian employment and skills market.”

Ashish Sinha, Managing Director of Flutter Entertainment India, expressed enthusiasm about the expansion, stating, “We are excited to expand our facility in Hyderabad as we continue to strengthen our presence in India. This expansion aligns with our global vision of Changing the Game by enabling continuous improvement in product and technology across iGaming and sports betting across our portfolio of brands.”

In line with Flutter’s global sustainability agenda, the Positive Impact Plan, the new workspace has received Gold certification from the US Green Building Council for its Leadership in Energy and Environmental Design. 

Additionally, Flutter is contributing GBP 30,000 in 2024 to support local initiatives in Hyderabad, emphasising the company’s commitment to making a positive impact in the communities where it operates.

The post Flutter Entertainment Expands with New GCC in Hyderabad appeared first on AIM.

]]>
This YC-Backed Bengaluru AI Startup is Powering AWS, Microsoft, Databricks, and Moody’s with 5 Mn Monthly Evaluations https://analyticsindiamag.com/ai-breakthroughs/this-yc-backed-bengaluru-ai-startup-is-powering-aws-microsoft-databricks-and-moodys-with-5-mn-monthly-evaluations/ Tue, 20 Aug 2024 04:40:40 +0000 https://analyticsindiamag.com/?p=10133091

By mid-2023, Ragas gained significant traction, even catching the attention of OpenAI, which featured their product during a DevDay event.

The post This YC-Backed Bengaluru AI Startup is Powering AWS, Microsoft, Databricks, and Moody’s with 5 Mn Monthly Evaluations appeared first on AIM.

]]>

Enterprises love RAG, but not everyone is great at it. The much-touted solution for hallucinations and for bringing new information into LLM systems is often difficult to maintain, and harder still to evaluate: is it even getting the right answers? This is where Ragas comes into the picture.

With over 6,000 stars on GitHub and an active Discord community over 1,300 members strong, Ragas was co-founded by Shahul ES and his college friend Jithin James. The YC-backed, Bengaluru-based AI startup is building an open-source standard for the evaluation of RAG-based applications. 

Engineering teams from companies such as Microsoft, IBM, Baidu, Cisco, AWS, Databricks, and Adobe rely on Ragas’ offerings to keep their pipelines pristine. Ragas already processes 20 million evaluations monthly for companies and is growing at 70% month over month.

The team has partnered with companies such as LlamaIndex and LangChain to deliver its solutions, and its work has been widely appreciated by the community. But what makes them special?

Not Everyone Can RAG

The idea started when they were building LLM applications and noticed a glaring gap in the market: there was no effective way to evaluate the performance of these systems. “We realised there were no standardised evaluation methods for these systems. So, we put out a small open-source project, and the response was overwhelming,” Shahul explained while speaking with AIM.

By mid-2023, Ragas had gained significant traction, even catching the attention of OpenAI, which featured their product during a DevDay event. The startup continued to iterate on their product, receiving positive feedback from major players in the tech industry. “We started getting more attention, and we applied to the Y Combinator (YC) Fall 2023 batch and got selected,” Shahul said, reflecting on their rapid growth.

Ragas’s core offering is its open-source engine for automated evaluation of RAG systems, but the startup is also exploring additional features that cater specifically to enterprise needs. “We are focusing on bringing more automation into the evaluation process,” Shahul said. The goal is to save developers’ time by automating the boring parts of the job. That’s why enterprises use Ragas.
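Ragas’ actual metrics are LLM-judged (faithfulness, answer relevancy, context precision), but the gist of automated RAG evaluation can be sketched with a toy check: score how well an answer’s sentences are supported by the retrieved context. The function below is a hypothetical illustration of the idea, not Ragas’ implementation.

```python
# Toy illustration of one kind of RAG evaluation check: what fraction of
# an answer's sentences have word-level support in the retrieved context?
# Real evaluators like Ragas use LLM-judged metrics rather than token overlap.

def toy_faithfulness(answer: str, contexts: list[str]) -> float:
    """Fraction of answer sentences whose words mostly appear in the context."""
    context_words = set()
    for ctx in contexts:
        context_words.update(ctx.lower().split())

    sentences = [s.strip() for s in answer.split(".") if s.strip()]
    if not sentences:
        return 0.0

    supported = 0
    for sentence in sentences:
        words = set(sentence.lower().split())
        # A sentence counts as "supported" if at least half its words
        # appear somewhere in the retrieved context.
        if words and len(words & context_words) / len(words) >= 0.5:
            supported += 1
    return supported / len(sentences)


contexts = ["Ragas is an open-source library for evaluating RAG pipelines."]
good = toy_faithfulness("Ragas is an open-source library.", contexts)
bad = toy_faithfulness("The moon orbits Jupiter every week.", contexts)
print(good > bad)  # prints True: the grounded answer scores higher
```

A production pipeline would run checks like this automatically on every release, which is the “boring part of the job” that Ragas automates for its users.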

As Ragas continues to evolve, Shahul emphasised the importance of their open-source strategy. “We want to build something that becomes the standard for evaluation in LLM applications. Our vision is that when someone thinks about evaluation, they think of Ragas.”

While speaking with AIM in 2022, Kochi-based Shahul, who happens to be a Kaggle Grandmaster, revealed that he used to miss classes and spend his time Kaggling. 

The Love for Developers

“YC was a game-changer for us,” Shahul noted. “Being in San Francisco allowed us to learn from some of the best in the industry. We understood what it takes to build a successful product and the frameworks needed to scale.”

Despite their global ambitions, Ragas remains deeply rooted in India. “Most of our hires are in Bengaluru,” Shahul said. “We have a strong network here and are committed to providing opportunities to high-quality engineers who may not have access to state-of-the-art projects.”

“We have been working on AI and ML since college,” Shahul said. “After graduating, we worked in remote startups for three years, contributing to open source projects. In 2023, we decided to experiment and see if we could build something of our own. That’s when we quit our jobs and started Ragas.”

Looking ahead, Ragas is planning to expand its product offerings to cover a broader range of LLM applications. “We’re very excited about our upcoming release, v0.2. It’s about expanding beyond RAG systems to include more complex applications, like agent tool-based calls,” Shahul shared.

Shahul and his team are focused on building a solution that not only meets the current needs of developers and enterprises, but also anticipates the challenges of the future. “We are building something that developers love, and that’s our core philosophy,” Shahul concluded.

The post This YC-Backed Bengaluru AI Startup is Powering AWS, Microsoft, Databricks, and Moody’s with 5 Mn Monthly Evaluations appeared first on AIM.

]]>
AMD to Acquire ZT Systems for $4.9B for Expanding AI Data Centres Ecosystem https://analyticsindiamag.com/ai-news-updates/amd-to-acquire-zt-systems-for-4-9b-for-expanding-ai-data-centres-ecosystem/ Mon, 19 Aug 2024 12:03:54 +0000 https://analyticsindiamag.com/?p=10132928

AMD plans to sell ZT Systems’ data center infrastructure manufacturing arm to a “strategic partner.”

The post AMD to Acquire ZT Systems for $4.9B for Expanding AI Data Centres Ecosystem appeared first on AIM.

]]>

AMD, in a strategic move to strengthen its AI ecosystem, announced the acquisition of ZT Systems for $4.9 billion. The deal, involving both cash and stock, includes an additional contingent payment of up to $400 million based on performance metrics.

ZT Systems, a New Jersey-based company specialising in compute design and infrastructure for AI, cloud, and general-purpose computing, will be integrated into AMD’s computing infrastructure design business. AMD plans to sell ZT Systems’ data centre infrastructure manufacturing arm to a “strategic partner.”

ZT Systems also works closely with NVIDIA and Intel.

AMD, which has already invested $1 billion in its broader ecosystem, sees this acquisition as pivotal in enhancing its expertise in AI systems design, encompassing silicon, software, and systems. “Our acquisition of ZT Systems is the next major step in our long-term AI strategy to deliver leadership training and inferencing solutions that can be rapidly deployed at scale across cloud and enterprise customers,” stated AMD chair and CEO, Dr. Lisa Su.

ZT Systems’ CEO, Frank Zhang, will lead AMD’s manufacturing business, while President Doug Huang will oversee design and customer enablement teams, reporting to AMD’s executive vice president, Forrest Norrod. The deal is expected to be finalised in the first half of 2025.

“We are excited to join AMD and together play an even larger role designing the AI infrastructure that is defining the future of computing,” said Frank Zhang, CEO of ZT Systems. “For almost 30 years we have evolved our business to become a leading provider of critical computing and storage infrastructure for the world’s largest cloud companies. AMD shares our vision for the important role our technology and our people play designing and building the computing infrastructure powering the largest data centers in the world.”

Last month, AMD announced that it had signed a definitive agreement to acquire Silo AI in an all-cash transaction valued at approximately $665 million. The acquisition, originally expected to close in the second half of 2024 subject to regulatory approvals, was completed just last week.

The acquisition of ZT Systems is the latest move by AMD to bolster its AI capabilities. Over the past year, alongside ramping up organic R&D efforts, AMD has invested over $1 billion to grow its AI ecosystem and enhance its AI software expertise.

The post AMD to Acquire ZT Systems for $4.9B for Expanding AI Data Centres Ecosystem appeared first on AIM.

]]>
Former Nutanix Founder’s AI Unicorn is Changing the World of CRM and Product Development https://analyticsindiamag.com/ai-breakthroughs/former-nutanix-founders-ai-unicorn-is-changing-the-world-of-crm-and-product-development/ Mon, 19 Aug 2024 10:56:33 +0000 https://analyticsindiamag.com/?p=10132923

Backed by Khosla Ventures, DevRev recently achieved unicorn status with a $100.8 million Series A funding round, bringing its valuation to $1.15 billion.

The post Former Nutanix Founder’s AI Unicorn is Changing the World of CRM and Product Development appeared first on AIM.

]]>

DevRev, founded by Nutanix co-founder Dheeraj Pandey and the company’s former SVP of engineering, Manoj Agarwal, is an AI-native platform unifying customer support and product development. It recently achieved unicorn status with a $100.8 million Series A funding round, bringing its valuation to $1.15 billion.

Backed by major investors such as Khosla Ventures, Mayfield Fund, and Param Hansa Values, the company is on the road to proving the ‘AI bubble’ conversation wrong. “Right now, there’s a lot of talk in the industry about AI and machine learning, but what we’re doing at DevRev isn’t something that can be easily replicated,” said Agarwal in an exclusive interview with AIM.

Agarwal emphasised the unique challenge of integrating AI into existing workflows, a problem that DevRev is tackling head-on. Databricks recently announced that LakeFlow Connect would be available in public preview for SQL Server, Salesforce, and Workday; DevRev is on a similar journey, but with AI at its core, it remains irreplaceable.

DevRev’s AgentOS platform is built around a powerful knowledge graph, which organises data from various sources—such as customer support, product development, and internal communications—into a single, unified system with automatic RAG pipelines. 

This allows users to visualise and interact with the data from multiple perspectives, whether they are looking at it from the product side, the customer side, or the people side.

The Knowledge Graph Approach

Machines don’t understand the boundaries between departments. The more data you provide, the better they perform. “Could you really bring the data into one system, and could you arrange this data in a way that people can visually do well?” asked Agarwal. 

The Knowledge Graph does precisely that – offering a comprehensive view of an organisation’s data, which can then be leveraged for search, analytics, and workflow automation.

Agarwal described the DevRev platform as being built on three foundational pillars: advanced search capabilities, seamless workflow automation, and robust analytics and reporting tools. “Search, not just keyword-based search, but also semantic search,” he noted.
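The distinction Agarwal draws can be sketched in a few lines of Python: keyword search matches literal tokens, while semantic search compares embedding vectors, so a paraphrased query still finds the right document. The vectors below are hand-made toys for illustration; a real system would use a learned embedding model and a vector index, and nothing here reflects DevRev’s actual implementation.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hand-made toy vectors standing in for learned sentence embeddings:
# similar meanings get nearby vectors.
docs = {
    "refund policy":  [0.95, 0.10, 0.05],
    "password reset": [0.05, 0.90, 0.10],
    "shipping times": [0.10, 0.05, 0.92],
}
query_keyword = "money back"        # no document title contains these words
query_vector = [0.90, 0.15, 0.08]   # but its embedding sits near "refund policy"

keyword_hits = [title for title in docs if query_keyword in title]
semantic_best = max(docs, key=lambda t: cosine(docs[t], query_vector))

print(keyword_hits)   # prints [] : keyword search finds nothing
print(semantic_best)  # prints refund policy : semantic search still matches
```

Layering this kind of retrieval over a unified knowledge graph is what lets one query surface results from support tickets, product specs, and internal chats alike.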

On top of these foundational elements, DevRev has developed a suite of applications tailored to specific use cases, such as customer support, software development, and product management. These apps are designed to work seamlessly with the platform’s AI agents, which can be programmed to perform end-to-end tasks, further enhancing productivity.

“The AI knowledge graph is the hardest thing to get right,” admitted Agarwal, pointing to the challenges of syncing data from multiple systems and keeping it updated in real-time. However, DevRev has managed to overcome these hurdles, enabling organisations to bring their data into a single platform where it can be organised, analysed, and acted upon.

The Open Approach

The company’s focus on AI is not new. In fact, DevRev’s journey began in 2020, long before the current wave of AI hype. “In 2020, when we wrote our first paper about DevRev, it had GPT all over it,” Agarwal recalled, referring to the early adoption of AI within the company. 

Even today, DevRev primarily uses OpenAI’s enterprise version but also works closely with other AI providers like AWS and Anthropic. In 2021, the platform organised a hackathon where OpenAI provided exclusive access to GPT-3 for all the participants. 

This forward-thinking approach allowed DevRev to build a tech stack that was ready to leverage the latest advancements in AI, including the use of vector databases, which were not widely available at the time.

One of the biggest challenges that DevRev addresses is the outdated nature of many systems of record in use today. Whether it’s in customer support, CRM, or product management, these legacy systems are often ill-equipped to handle the demands of modern businesses, particularly when it comes to integrating AI and machine learning.

Not a Bubble

DevRev’s architecture is designed with flexibility in mind, allowing enterprises to bring their own AI models or use the company’s built-in solutions. “One of the core philosophies we made from the very beginning is that everything we do inside DevRev will have API webhooks that we expose to the outside world,” Agarwal explained. 

As DevRev reaches its unicorn status, Agarwal acknowledges the growing concerns about an “AI bubble” similar to the dot-com bubble of the late 1990s. “There’s so many companies that just have a website and a company,” he said, drawing parallels between the two eras. 

However, he believes that while there may be some hype, the underlying technology is real and here to stay. “I don’t think that anybody is saying that after the internet, this thing is not real. This thing is real,” Agarwal asserted. 

The key, he argues, is to distinguish between companies that are merely riding the AI wave and those that are genuinely innovating and solving real problems. DevRev, with its deep investment in AI and its unique approach to integrating it into enterprise workflows, clearly falls into the latter category.

The post Former Nutanix Founder’s AI Unicorn is Changing the World of CRM and Product Development appeared first on AIM.

]]>
Why You Should Not Attend Cypher 2024 https://analyticsindiamag.com/ai-breakthroughs/why-you-should-not-attend-cypher-2024/ Sun, 18 Aug 2024 04:22:44 +0000 https://analyticsindiamag.com/?p=10132843

The last thing you want is to be bombarded with insights, to network with industry leaders, and to leave with your head buzzing with new ideas.

The post Why You Should Not Attend Cypher 2024 appeared first on AIM.

]]>

Cypher 2024 is just around the corner, and if you’re considering not attending, you might miss out on the biggest AI summit in India. After all, who would want to waste their time rubbing shoulders with the brightest minds and getting exclusive insights from top industry leaders?

When it comes to the speakers, the lineup is pretty fascinating. From a session by former Rajya Sabha member Subramanian Swamy to insightful AI discussions with Vivek Raghavan, co-founder of Sarvam AI, and Nikhil Malhotra, head of Project Indus, everything seems to just fall into place.

With around 2,000+ AI practitioners and data scientists attending each day, and decision-makers and C-level executives from Fortune 500 companies all gathering to share their wisdom, it’s almost as if you’d be overwhelmed with knowledge. And who doesn’t need that?

Cypher is notorious for its intense networking opportunities, from the rapid-fire FlashConnect sessions to the laid-back yet insightful VoxPops. To put this into perspective, Cypher 2023 had around 15% startups, 25% GCCs, 20% IT providers, 15% consulting firms, and 25% Indian companies. 

And Cypher 2024 is going to be at least twice as big as the last one.

But really, who needs more professional connections? Why expand your network with industry leaders and potential collaborators when you could just stay in your comfort zone? For those intent on staying there, Cypher 2024 will also host an Experience Zone where you can relax and enjoy your own AI-created music with Beatoven.ai.

With three jam-packed days of content, featuring talks from renowned speakers like Amit Kapur from Lowe’s, Santosh Hegde from Couchbase, and Shekar Sivasubramanian from Wadhwani AI, it’s almost as if Cypher is daring you to learn too much. The sheer volume of actionable insights, innovative strategies, and cutting-edge AI discussions might just fry your brain. 

Cypher began in 2015 with a straightforward mission: to bridge the gap between the AI community and various industries, both established and emerging. The idea quickly gained traction.

Over the years, Cypher has evolved into the largest AI conference in India, expanding at an unprecedented rate. But it’s not just about size; it is also considered the best AI conference in the country.

Then come The Minsky Awards for Excellence in AI. These highly esteemed accolades exclusively honour enterprise-level achievements, celebrating outstanding excellence in AI and highlighting the most innovative and impactful applications of artificial intelligence.

Cypher hosts people from all over the world. Within India, while most attendees come from Bengaluru, representatives of companies using generative AI across different cities are present to share insights and collaborate.

This also includes companies from diverse industries. Last year, Cypher 2023 had around 30% representation from IT companies, 20% from the financial sector, 15% from healthcare and life sciences, 10% each from manufacturing, consulting, and e-commerce, and the remaining 5% from education and government organisations.

Too Much AI Pressure?

Let’s face it: being inspired by success stories, groundbreaking innovations, and the future direction of AI might just get you motivated to push boundaries in your own work—and who needs that kind of pressure?

Especially this time, with the AI startup ecosystem rising in India, a dedicated track at Cypher where leading startups showcase their latest innovations might be too much to handle for people just getting into the field. Or maybe it is exactly the motivation needed to start your own venture.

Cypher 2024 might seem like the ultimate destination for anyone serious about AI and data science, but think carefully before you attend. The last thing you want is to be bombarded with insights, to network with industry leaders, and to leave with your head buzzing with new ideas. After all, staying in your comfort zone has its own charm, doesn’t it?

If you are not yet convinced to skip Cypher 2024, please register here.

The post Why You Should Not Attend Cypher 2024 appeared first on AIM.

]]>