Siddharth Jindal, Author at AIM
https://analyticsindiamag.com/author/siddharth-jindalanalyticsindiamag-com/

PhysicsWallah’s ‘Alakh AI’ is Making Education Accessible to Millions in India
https://analyticsindiamag.com/ai-origins-evolution/how-physicswallah-is-leveraging-openais-gpt-4o-to-make-education-accessible-to-millions-in-india/ | Tue, 03 Sep 2024

“Today, 85% of the doubts are solved in real time."

India’s ed-tech unicorn PhysicsWallah is using OpenAI’s GPT-4o to make education accessible to millions of students in India. Recently, the company launched a suite of AI products to ensure that students in Tier 2 & 3 cities can access high-quality education without depending solely on their enrolled institutions, as 85% of their enrollment comes from these areas.

Last year, AIM broke the news of PhysicsWallah introducing ‘Alakh AI’, its suite of generative AI tools, which was eventually launched at the end of December 2023. It quickly gained traction, amassing over 1.5 million users within two months of its release.

The suite comes with several products including AI Guru, Sahayak, and NCERT Pitara. “AI Guru is a 24/7 companion available to students, who can use it to ask about anything related to their academics, non-academic support, or more,” said Vineet Govil, CTPO of PhysicsWallah, in an exclusive interview with AIM.

He added that the tool is designed to assist students by acting as a tutor, helping with coursework, and providing personalised learning experiences. It also supports teachers by handling administrative tasks, allowing them to focus more on direct student interaction.

Govil further explained that students can ask questions in any form—voice or image—using a simple chat format. “It’s multimodal,” he said. Even if the lecture videos are long—about 30 minutes, one hour, or two hours—the AI tool can identify the exact timestamp of the student’s query.
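PhysicsWallah has not published how this timestamp lookup works, but the idea can be sketched as scoring timestamped transcript segments against the student's query and returning the best match. The transcript data and simple word-overlap scoring below are illustrative assumptions; a production system would more likely use embeddings.

```python
# Hedged sketch: map a student's question to the most relevant
# moment in a long lecture by scoring timestamped transcript
# segments on word overlap with the query.
def best_timestamp(query: str, transcript: list[tuple[int, str]]) -> int:
    query_words = set(query.lower().split())

    def score(segment: str) -> int:
        # Count how many query words appear in this segment.
        return len(query_words & set(segment.lower().split()))

    seconds, _ = max(transcript, key=lambda seg: score(seg[1]))
    return seconds

# Illustrative transcript: (seconds into the video, spoken text)
lecture = [
    (0, "welcome to the class on mechanics"),
    (630, "newton's second law relates force mass and acceleration"),
    (1890, "worked examples on friction and inclined planes"),
]
print(best_timestamp("explain newton's second law", lecture))  # 630
```

With a real lecture, the transcript would come from automatic speech recognition, and each segment would be embedded rather than matched on raw words.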

When discussing Sahayak, he explained that it offers adaptive practice, revision tools, and backlog clearance, enabling students to focus on specific subjects and chapters for a tailored learning experience.

“Think of Sahayak as a helper that assists students in creating study plans. Based on the student’s academic profile and the entrance exam they are preparing for, it offers suggestions on a possible plan to follow. It includes short and long videos, and a question bank,” said Govil.

On the other hand, NCERT Pitara uses generative AI to create questions from NCERT textbooks, including single choice, multiple choice, and fill-in-the-blank questions.

Moreover, PhysicsWallah has introduced a ‘Doubt Engine’ which can solve students’ doubts after class hours. These doubts can be either academic or non-academic. 

“Academic doubts can be further divided into contextual and non-contextual. Contextual doubts are those that our system can understand, analyse, and respond to effectively. Non-contextual doubts are the ones where we are uncertain about the student’s thought process,” explained Govil.

He said that with the help of the slides that the teacher uses to teach and the lecture videos, their model is also able to answer non-contextual doubts. “Today, 85% of the doubts are solved in real time. Previously, it used to take 10 hours for doubts to be resolved by human subject-matter experts.”

The company has also launched an AI Grader for UPSC and CA aspirants who write subjective answers. Govil said that grading these answers is challenging due to the varying handwriting styles, but the company has successfully developed a tool to address this issue.

“Over a few months, we have done a lot of fine-tuning. Today, we are able to understand what a student is writing. At the same time, some students may use diagrams, and we are able to identify those as well,” said Govil.

The Underlying Tech

Govil said that they use OpenAI’s GPT-4o. Regarding the fine-tuning of the model, he said the company has nearly a million questions in their question bank. “We have over 20,000 videos in our repository that are being actively used as data,” he added.

On the technology front, he said that the company has developed its own layer using the RAG architecture. “And we have a vector database that allows us to provide responses based on our own context,” he said.

PhysicsWallah built a multimodal AI bot powered by Astra DB Vector and LangChain in just 55 days. 

Talking about the data resources for RAG, Govil said, “Our subject matter experts (SMEs) regularly update the data, including real-time current affairs and question banks. This continuous updating has helped us build a question bank with over a million entries.”
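Govil does not disclose implementation specifics, but the RAG layer he describes (retrieve the most relevant stored content for a query, then ground the model's prompt in that context) can be sketched as follows. The bag-of-words "embedding" and in-memory store here are illustrative stand-ins for a real embedding model and vector database such as the Astra DB setup mentioned above.

```python
# Hedged sketch of a retrieval-augmented generation (RAG) layer:
# embed documents, retrieve the closest matches for a query, and
# build a context-grounded prompt for the LLM.
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy embedding: a word-count vector. A real system would call
    # an embedding model and persist vectors in a vector database.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

class VectorStore:
    def __init__(self):
        self.docs = []  # (embedding, original text) pairs

    def add(self, text: str):
        self.docs.append((embed(text), text))

    def retrieve(self, query: str, k: int = 2) -> list[str]:
        q = embed(query)
        ranked = sorted(self.docs, key=lambda d: cosine(q, d[0]), reverse=True)
        return [text for _, text in ranked[:k]]

def build_prompt(query: str, store: VectorStore) -> str:
    # Ground the LLM in retrieved context instead of its own memory.
    context = "\n".join(store.retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

store = VectorStore()
store.add("Newton's second law states F = ma.")
store.add("Ohm's law states V = IR.")
print(build_prompt("What does Newton's second law say?", store))
```

The final prompt, rather than the raw question, is what gets sent to the LLM, which is how the system answers "based on our own context" as Govil puts it.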

When asked about LLMs not being good at maths, Govil agreed and said, “It’s a known problem that all the LLMs available today are not doing a great job when it comes to reasoning, and we are aware of it.”

“We are working with partners leading in the LLM space. At the same time, this is really an issue only for high-end applications. For day-to-day algebra and mathematical operations, they are performing well,” he added. 

Alakh AI is Not Alone

Former OpenAI co-founder Andrej Karpathy recently launched his own AI startup, Eureka Labs, an AI-native ed-tech company. Meanwhile, Khan Academy, in partnership with OpenAI, has developed an AI-powered teaching assistant called Khanmigo, which utilises OpenAI’s GPT-4.

Speaking of its global competitors, Govil said, “I won’t really like to compare [ourselves] with the others, but I can tell you that the kind of models we have, and the kind of responses and the scale at which we are operating, are not seen elsewhere.”

Moreover, recent reports indicate that Lightspeed Venture Partners will lead a $150 million funding round for PhysicsWallah at a valuation of $2.6 billion. 

In conclusion, PhysicsWallah’s innovative suite of tools under the Alakh AI umbrella, which includes Sahayak, AI Guru, and the Doubt Engine, is set to reshape the ed-tech industry with its advanced features and real-time capabilities.

“When Will Mira Get Married?” OpenAI CTO’s Mother Asked ChatGPT
https://analyticsindiamag.com/ai-news-updates/when-will-mira-get-married-openai-ctos-mother-asked-chatgpt/ | Tue, 03 Sep 2024

Paytm founder Vijay Shekhar Sharma took to X and said, “#MomsEverywhere.”

OpenAI’s Mira Murati shared that the first time her mother used ChatGPT, she asked, “When will Mira get married?”

In a recent interview at Cannes Lions 2024, Murati recalled that when she first introduced her mother to ChatGPT in 2022, she was with her sister in Italy. She explained to her mother that she could ask any question she wanted, even in Albanian. Her mother’s first question to ChatGPT was about Mira’s marriage. To this, Mira’s sister humorously responded, “Mom, it’s not magic, it’s artificial intelligence.”

“She was having such a natural interaction with it that she thought she could ask anything,” said Murati. 

Paytm founder Vijay Shekhar Sharma took to X and said, “#MomsEverywhere.”

“didn’t know she was Indian,” quipped a user on X. 

Another user humorously remarked, “The most Albanian mother interaction ever.” Recalling OpenAI’s leadership transition, when Mira Murati was appointed interim CEO during the Sam Altman kerfuffle, the same user shared, “I told my mom, ‘Look, an Albanian woman is the CEO of one of the most innovative companies in the world.’”

His mother’s response? “Message her, ask her on a date, and for a job.”

Since joining OpenAI in 2018, Murati has been instrumental in leading the development of ChatGPT, DALL-E, and the GPT series. Her leadership has been recognised by industry leaders, including Microsoft CEO Satya Nadella, for her ability to blend technical expertise with a deep appreciation for OpenAI’s mission of responsible AI development.

Murati briefly served as interim CEO of OpenAI in November 2023, following the unexpected removal of Sam Altman. Her tenure as interim CEO was short-lived, as Emmett Shear replaced her three days later, and Altman was reinstated five days after his removal. 

Microsoft Appoints Former AWS Head Vaishali Kasture as GM for India and South Asia
https://analyticsindiamag.com/ai-news-updates/microsoft-appoints-former-aws-head-vaishali-kasture-as-gm-for-india-and-south-asia/ | Mon, 02 Sep 2024

With over 25 years of leadership experience, Kasture will oversee operations at a time when technology adoption is accelerating across both corporate and SMB sectors. 

Microsoft has announced the appointment of Vaishali Kasture as the General Manager for its Small, Medium, and Corporate (SMC) business in India and South Asia. With over 25 years of leadership experience, Kasture will oversee operations at a time when technology adoption is accelerating across both corporate and SMB sectors. 

Kasture previously served as the Head of AWS India and South Asia. She will be the second high-ranking executive to leave AWS for Microsoft within the past year, as the companies vie for a larger share of the cloud market in India. Her predecessor at AWS, Puneet Chandok, became the head of Microsoft India and South Asia in September last year.

“I cannot think of a better time to take up the role as GM for the SMC business in India and South Asia,” said Kasture on her appointment to the new role. She highlighted Microsoft’s comprehensive approach to technology, focusing on areas such as cloud computing, artificial intelligence, data, and security.

“Microsoft is singularly positioned as a full-service technology company, focusing on an end-to-end technology stack,” Kasture added, citing the company’s ability to enhance productivity through generative AI.

Kasture also referenced Microsoft CEO Satya Nadella’s vision, quoting him: “Microsoft is like the United Nations of Software. Helping people and organizations to do more, to achieve more.”

India, described by Kasture as “the fastest-growing large economy in the world,” presents significant opportunities for Microsoft, which has bolstered its presence in the country over the past three decades. 

Kasture will collaborate with key figures including Chandok and Rachel Bondi to advance Microsoft’s vision in the region. She expressed gratitude towards Ahmed Mazhari and Kevin Peesker for their support and partnership during her transition into the role.

The appointment underscores Microsoft’s commitment to expanding its footprint in India and South Asia, aligning with the growing demand for technology-driven solutions in the region.

Anthropic Claude Artifacts to Kill App Store Soon
https://analyticsindiamag.com/ai-origins-evolution/anthropic-claude-artifacts-to-kill-app-store-soon/ | Mon, 02 Sep 2024

While OpenAI’s Plugins Store was billed as an ‘iOS App Store moment’, it failed to meet expectations and ended up being a hot mess.

Anthropic recently made Claude Artifacts available to all users on iOS and Android, allowing anyone to easily create apps without writing a single line of code. AIM tried its hand at it and successfully created a Cricket Quiz game, Temple Run, and Flappy Bird, each from a single-line prompt in English.

Debarghya (Deedy) Das, principal at Menlo Ventures, used Artifacts to build a Splitwise-like app. “With Claude launching on iOS today, I can now generate the Splitwise app instead of paying for Pro,” he said.

“Claude Artifacts allows you to go from English to an entire app and share it!” he added, saying that his friend, a product manager who couldn’t code, now creates apps in minutes. “The cost of a lot of software is nearing ~$0.”

This brings us to the question of whether this could be the end of app stores. Groq’s Sunny Madra thinks this is the beginning of “The Build Your Own (BYO) era”. Since Artifacts are shareable, anyone can use the apps you build, and they can be shared on any social media platform as a link.

Several users experimented with Claude Artifacts by building different apps. 

“Claude 3.5’s artifacts, now shareable, can help teach. In class, startup financing can be hard to explain. Now I just asked, ‘Create an interactive simulation that visually explains payoff differences for a startup and VC with liquidation preference…’” Ethan Mollick, associate professor at the Wharton School of the University of Pennsylvania, wrote on X.

Similarly, Allie K Miller, AI advisor and angel investor, used it to build a calendar and an AI quiz, which took less than two minutes! 

The best part about Artifacts is that it is mobile-friendly and responsive. “Using Claude 3.5 Sonnet, you can generate artifacts (e.g., code snippets, text documents, or website designs) and iterate on them right within the same window,” exclaimed Elvis Saravia, the co-founder of DAIR.AI. 

On-Demand Software

When using mobile phones, we often search for apps that can solve our specific needs. For example, if you’re into fitness, you might download an app that offers various workouts. However, the app may not provide the customisation you seek. Now, instead of relying on downloads, you can create your own personalised apps that cater specifically to your needs. 

“On-demand software is here,” said Joshua Kelly, chief technology officer of Flexpa, a healthcare tools company. Using Artifacts, he built a simple stretching-timer app for his runs in just 60 seconds.

Besides giving prompts, users can also share images of existing websites or apps, and Claude can generate a replica.

“You can now take a photo of something you want to replicate, give it to AI, and it outputs the code with a preview right on your iPhone,” posted Linas Beliūnas, director of revenue at Zero Hash, on LinkedIn.

On the internet, one can find several apps built using Claude Artifacts, such as the Rubik’s Cube Simulator, Self-Playing Snake Game, Reddit Thread Analyzer, Drum Pad, and Daily Calorie Expenditure.

Apart from building apps, Artifacts has the potential to greatly impact education. “Any piece of content— whether it’s a screenshot, PDF, presentation, or something else—can now be turned into an interactive learning game,” said AI influencer Rowan Cheung.

The End of No-Code Platforms?

Claude Artifacts is going to be a big threat to no-code and low-code app-builder platforms such as AppMySite and Builder.ai, and could even eat into work done with frameworks like Flutter and React Native.

“Claude Artifacts are insane — I cannot believe how good the product is. You can ask it to build most little internal tools in minutes (at least, the UI) and customize further via code. Feels like a superpower for semi-technical people,” posted a user on X. 

Moreover, Claude, when put together with Cursor AI, has simplified the process of making apps. “So I’m building this box office app in React Native and I thought I’d try Cursor with Claude 3.5 and see how far I’d get. The backend is django/psql that’s already in place,” said another user on X. “Starting from scratch, I have authenticated with my server to log in users, issue tickets, register tickets, scan ticket QR codes, and send email/sms confirmations,” he added. 

Claude is set to rapidly democratise app development, potentially eliminating the need for an App Store. It will enable anyone to build apps based on their specific needs, complete with personalised UI and UX.

Moreover, building an app for the iOS App Store is challenging. Apple charges a standard 30% commission on app sales and in-app purchases, including both paid app downloads and digital goods sold within the apps. 

The company enforces rigorous guidelines that apps must adhere to, covering aspects such as user interface design, functionality, and privacy. Many apps are rejected for minor violations, and these guidelines are frequently updated, requiring developers to stay informed and adapt quickly.

However, for now, Claude allows anyone to build apps at no extra charge and lets users experiment to see whether something works. And if someone wants to publish an app built using Claude on the iOS App Store, that remains an option.

Interestingly, Apple recently announced that, for the first time, it will allow third-party app stores on iOS devices in the EU. This change enables users to download apps from sources other than Apple’s official App Store, providing more options for app distribution and potentially reducing costs for developers.

Better than ChatGPT 

OpenAI previously introduced ChatGPT plugins and later custom GPTs, which let users tailor ChatGPT to specific tasks. However, they do not compare to Artifacts, which lets users visualise and interact with their creations.

While the Plugins Store was billed as an ‘iOS App Store moment’, it failed to meet the expectations and ended up being a hot mess. 

Moreover, during DevDay 2023, OpenAI chief Sam Altman launched a revenue-sharing programme which was introduced to compensate the creators of custom GPTs based on user engagement with their models. 

However, many details about the revenue-sharing mechanism remain unclear, including the specific criteria for payments and how the engagement would be measured.

“It was supposed to be announced sometime in Q1 2024, but now it’s the end of March, and there are still few details about it,” posted a user on the OpenAI Developer Forum in March. There have been no updates on the matter from OpenAI since then.

NVIDIA, Apple Blow OpenAI’s Bubble
https://analyticsindiamag.com/ai-origins-evolution/nvidia-apple-blow-openais-bubble/ | Sat, 31 Aug 2024

Following the investment, the money will ultimately flow back to NVIDIA as OpenAI purchases more compute resources to train its next frontier model. 

NVIDIA is reportedly in discussions to join a funding round for OpenAI that could value the AI startup at more than $100 billion, according to the Wall Street Journal. In May 2024, OpenAI was valued at approximately $86 billion.

This news comes on the heels of NVIDIA’s impressive July quarter results, where revenues surpassed $30 billion—a 122% increase from the previous year. 

Besides NVIDIA, Apple and Microsoft are also considering participating in the financing. Thrive Capital is reportedly leading the round with a $1 billion investment, while NVIDIA is evaluating a potential contribution of around $100 million, added the report. Notably, Microsoft has invested $13 billion in OpenAI overall.

While OpenAI depends on NVIDIA GPUs to train its upcoming frontier model, Apple recently partnered with the company, integrating ChatGPT into Siri.

On Thursday, it was reported that ChatGPT has surpassed 200 million weekly active users, doubling its count from the previous year. 

Surprisingly, this year OpenAI has released only GPT-4o and GPT-4o mini. However, the company has announced several other products, including Sora, SearchGPT, Voice Engine, GPT-4o voice, and most recently, Strawberry and Orion. The announcements were likely intended to generate hype and raise funds.

NVIDIA is Investing in Itself, Not OpenAI 

Following the investment, the money will ultimately flow back to NVIDIA as OpenAI purchases more compute resources to train its next frontier model. 

NVIDIA is keen to secure its ecosystem for the year ahead and is now concentrating on its Blackwell GPUs. This lineup includes models B100 and B200, built for data centres and AI applications.

NVIDIA chief Jensen Huang said that Blackwell is expected to come out by the fourth quarter this year. “We’re sampling functional samples of Blackwell, Grace Blackwell, and a variety of system configurations as we speak. There are something like 100 different types of Blackwell-based systems that were shown at Computex, and we’re enabling our ecosystem to start sampling those,” said Huang.

However, previous reports indicated that Blackwell could be delayed by three months or more due to design flaws, a setback that could affect customers such as Meta, Google, and Microsoft, which have collectively ordered tens of billions of dollars’ worth of these chips.

Huang believes this is just the beginning and that there’s much more to come in generative AI. “Chatbots, coding AIs, and image generators are growing rapidly, but they’re just the tip of the iceberg. Internet services are deploying generative AI for large-scale recommenders, ad targeting, and search systems,” he said.

According to NVIDIA CFO Colette Kress, next-generation models will require 10 to 20 times more compute to train, with significantly more data.

Earlier this year, Huang personally hand-delivered the first NVIDIA DGX H200 to OpenAI. 

OpenAI’s GPT-4o voice features, demonstrated during the Spring Update event, were made possible with the help of the NVIDIA H200. “I just want to thank the incredible OpenAI team, and a special thanks to Jensen and the NVIDIA team for bringing us the advanced GPU that made this demo possible today,” said OpenAI CTO Mira Murati during the event.

Apple Wants a Slice of OpenAI 

Apple is catching up in the AI race. The company recently released iOS 18.1 beta 3, introducing the AI-powered Clean Up tool under Apple Intelligence, which removes unwanted objects from photos to enhance image quality. 

This feature of Apple Intelligence is based on a 3-billion-parameter model that Apple developed recently.

While Apple Intelligence is well suited to day-to-day tasks, it does not focus on the stronger reasoning capabilities that will be required in the near future. This is where OpenAI comes into the picture.

“This is a sign that Apple is not seeing a path where it makes sense to build a competitive, full-feature LLM,” said Gene Munster, managing partner at Deepwater Asset Management, on Apple’s investment in OpenAI.

He added that this means Apple will be reliant on OpenAI, Google, and possibly even Meta to deliver about a third of their AI features in the long term.

OpenAI chief Altman is a huge fan of Apple, and his startup eventually partnered with the company. He recently lauded the Cupertino-based tech giant for its technological prowess, saying the “iPhone is the greatest piece of technology humanity has ever made” and that it’s tough to get beyond it as “the bar is quite high.”

As part of the Apple-OpenAI partnership, iOS, iPadOS, and macOS users will get access to ChatGPT powered by GPT-4o later this year. Users can access it for free without creating an account, while ChatGPT subscribers can connect their accounts and access paid features directly from these experiences.

Interestingly, when OpenAI announced the ChatGPT desktop app, it was first released for Mac rather than Windows.

Moreover, Apple reportedly isn’t paying OpenAI anything, on the view that it is doing the startup a favour by making ChatGPT available to billions of customers.

However, investing in OpenAI today would be a smart move for Apple, as it would provide access to the latest OpenAI models, similar to how Microsoft’s AI services primarily rely on OpenAI.

Meanwhile, OpenAI definitely has a soft corner for Apple. This affinity was clearly displayed at the OpenAI Spring Update, where MacBooks and iPhones were prominently used, while Microsoft Windows products were notably absent. 

The Birth of Solo Micro Entrepreneurs
https://analyticsindiamag.com/ai-origins-evolution/the-birth-of-solo-micro-entrepreneurs/ | Thu, 29 Aug 2024

And the end of App Stores? 

The rise of generative AI signals a shift towards solo-founded AI startups as the new standard. “I think Cloud+AI is increasingly making the Pieter Levels style model of a scrappy solo serial micro-entrepreneur viable, allowing one person to spin up and run multiple companies that generate income, possibly reaching billion-dollar valuations,” said former OpenAI and Tesla computer scientist Andrej Karpathy, referring to Lex Fridman’s latest podcast with Levels.

Levels is a self-taught developer and entrepreneur who has designed, programmed, shipped, and run over 40 startups, many of which are hugely successful. 

Recently, Karpathy founded Eureka Labs, an AI company he is currently running on his own. He is actively experimenting with various generative AI tools to develop educational content through visual storytelling. Eureka Labs aims to transform education by blending generative AI with traditional teaching methods. Karpathy, who has previously held pivotal roles at OpenAI and Tesla, describes Eureka Labs as “a new kind of school that is AI native.”

“We’re entering a world where it’s getting easier and more lucrative to start a company than it is to try and get hired by one,” quipped Dennis Kardonsky, founder of Soverin.ai. 

This idea is similar to what Sam Altman said in a recent interview. “We’re going to see 10-person companies with billion-dollar valuations pretty soon…in my little group chat with my tech CEO friends, there’s this betting pool for the first year there is a one-person billion-dollar company, which would’ve been unimaginable without AI. And now [it] will happen.”

In a recent Lightcone podcast, YC partner Harj Taggar discussed Sam Altman’s idea, saying, “Founders who’ve been doing this (running startups) for a while are obsessed with the idea of having fewer employees, as few as possible, because once you manage a large company with lots of employees, you realise how much it sucks.”

Similarly, Anton Osika, founder of Lovable, a Stockholm-based AI platform that claims to enable anyone to build software applications “with just a conversation in plain English,” believes that programs like his will be capable of creating “80% of all SaaS” software by the end of 2025. He adds that soon, “you will see software unicorns with virtually no human involvement—it’s quite likely it will be just one person.”

“I do believe that anyone can build most things—not big VC-funded companies, but most things to create a great, multi-million-a-year lifestyle business. And I think, even with AI, we’re seeing a lot of that, where individuals and small teams are building really, really valuable companies that aren’t venture-backed, nor should they be. But it’s still a great business,” said Ben Tossell, founder of the AI-driven newsletter Ben’s Bites.

Ben uses a suite of AI tools like ChatGPT to strategise, create content, and analyse business data, allowing him to run his business efficiently without a large team. 

He said that there is a really simple equation that goes into what it takes to build something that provides value to customers, where you can get thousands or tens of thousands of customers signing up and paying. “I love that,” he added.

LinkedIn founder Reid Hoffman recently predicted that 9-to-5 jobs will be extinct by 2034. “You may not only work at different companies, you might work in different industries,” said Hoffman, adding that people may stop working like employees and begin working in a gig economy.

The ‘Gig Economy Revolution,’ Hoffman believes, will be more significant than anticipated. According to his prediction, within the next decade, 50% of the population will become freelancers and earn more while working for “3 or 4 gigs,” than those working in traditional employment. Compared to traditional positions, this approach may offer less job security, although it does provide greater flexibility and more opportunities.

Solo Entrepreneurs All the Way

Arjun Reddy of India is a notable example of a successful solo entrepreneur, having founded several startups and currently focusing on two AI ventures, Nidum.ai and HaiVE.

Nidum.ai uses blockchain to create a decentralised AI economy where users can contribute computing power through AI mining software and access a vast network of AI resources on demand, paying only for what they use.

On the other hand, HaiVE provides on-premise and custom cloud AI solutions to enterprises that want to turbocharge their operations with AI but are concerned about streaming their data to third parties and creating potential competitors.

Another key figure is Ramsri Goutham Golla. He is the founder and solo developer of Questgen.ai, a platform that offers various AI-powered tools for creating quizzes and educational content, such as higher-order question generators and multiple-choice question generators.

In a recent video tutorial, Golla explained that big publishers used to outsource quiz creation to item-bank companies or tutoring chains, which would hire staff to write quizzes. Now, with AI tools, one can generate 150 to 200 quizzes with a single click from 300 to 400 pages of content. This is possible because modern LLMs, like Gemini or even OpenAI models, can handle long contexts—up to 100,000 tokens or more. Moreover, one can use frameworks like LangChain to split the text into smaller chunks.
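The chunking step Golla mentions can be sketched without the LangChain dependency. The chunk size and overlap below are arbitrary illustrative values; LangChain's text splitters do the same job while also respecting paragraph and sentence boundaries.

```python
# Hedged sketch: split a long document into overlapping chunks so
# each piece fits comfortably in a model's context window. The
# overlap keeps sentences that straddle a boundary visible in
# both neighbouring chunks.
def split_text(text: str, chunk_size: int = 1000, overlap: int = 100) -> list[str]:
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap  # step forward, keeping some overlap
    return chunks

pages = "x" * 5000  # stand-in for hundreds of pages of textbook content
chunks = split_text(pages, chunk_size=1000, overlap=100)
print(len(chunks))  # each chunk is small enough to feed the LLM separately
```

Each chunk can then be sent to the LLM on its own, with the generated questions merged afterwards.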

Another venture by Golla, Supermeme.ai, focuses on using AI to generate memes and other creative content. This platform leverages AI to create engaging and humorous content for social media and marketing purposes.

Golla said that both these platforms are currently at $100K ARR (Annual Recurring Revenue), which is approximately INR 83 lakhs in annual revenue.

Dhravya Shah, a 19-year-old from India, is busy creating several innovative apps, including OpenSearchGPT as an alternative to SearchGPT, Radish as an alternative to Redis, and SuperMemory for daily reminders. He has also recently applied to Y Combinator.

RIP App Store?

Anthropic recently made Artifacts available to all Claude users, who can now create and view Artifacts on both the Claude iOS and Android apps. Deedy Das, principal at Menlo Ventures, used Artifacts to build a Splitwise-like app. “With Claude launching on iOS today, I can now generate the Splitwise app instead of paying for Pro,” he said.

This brings us to the question of whether this is the end of app stores. Groq’s Sunny Madra thinks this is the beginning of “The Build Your Own (BYO) era”.

Ola Electric to Launch Products on ONDC Platform Next Week
https://analyticsindiamag.com/ai-news-updates/olaelectric-to-launch-products-on-ondc-platform-next-week/ | Thu, 29 Aug 2024

This comes after he declared ONDC “the UPI moment for e-commerce” at the recent Ola Sankalp 2024 event.

The post Ola Electric to Launch Products on ONDC Platform Next Week appeared first on AIM.

]]>

Ola Electric will make its entire range of products available on the Open Network for Digital Commerce (ONDC) platform starting next week. This move aligns with ONDC’s vision to revolutionise the e-commerce landscape by promoting open and inclusive digital marketplaces.

“All Ola Electric products will be available on ONDC from next week onwards. ONDC is the future of commerce,” posted Ola chief Bhavish Aggarwal on X. 

https://twitter.com/bhash/status/1829061332961796539

The ONDC initiative, backed by government support, aims to create a unified network for various commerce platforms, enabling greater transparency and reducing the dominance of major e-commerce players. By joining ONDC, Ola Electric seeks to leverage this emerging infrastructure to expand its market reach and facilitate easier access to its products for consumers across India.

Aggarwal has been bullish on ONDC and recently said, “At Ola, we’re building the future of commerce with ONDC, which will enable kiranas and small merchants to reach consumers through digital networks. ONDC is the future.”

This comes after he declared ONDC “the UPI moment for e-commerce” at the recent Ola Sankalp 2024 event.

At the same event, he also announced that the company would change its name from Ola Cabs to Ola Consumer, as it expands its services beyond cabs. “We are going to offer a much broader suite of consumer services. Many of you have already used some of these. As I said, our ambition is to truly make commerce accessible, affordable, and efficient,” said Aggarwal.

For the same, Ola Consumer is integrating with ONDC to reduce the cost of commerce and expand service categories. Currently, Ola users can order food and beverages via ONDC through the Ola app, with plans to expand to groceries and fashion items. 

The company has also integrated a food delivery plugin within its app, enabling users to access a variety of restaurants and food brands listed on ONDC. This feature is in the pilot phase, available to Ola employees and a select group of consumers.

Aggarwal also plans to leverage the power of ONDC to take on the likes of Zepto and Blinkit. The company plans to implement fully automated dark stores and fulfilment centres to revolutionise warehousing and improve the commerce supply chain.

In addition, Ola is focusing on sustainable logistics by electrifying its delivery operations, which is expected to reduce logistics costs by about 50% and create additional jobs.

The post Ola Electric to Launch Products on ONDC Platform Next Week appeared first on AIM.

]]>
California’s SB 1047 Bill Sparks AI Civil War https://analyticsindiamag.com/ai-news-updates/californias-sb-1047-bill-sparks-ai-civil-war/ Thu, 29 Aug 2024 05:12:32 +0000 https://analyticsindiamag.com/?p=10133979

California’s Senate Bill 1047, a proposed AI regulation, has sparked intense debate in Silicon Valley, drawing both praise and criticism from tech leaders, lawmakers, and AI experts.

The post California’s SB 1047 Bill Sparks AI Civil War appeared first on AIM.

]]>

On Wednesday, California lawmakers approved a contentious AI safety bill, which now requires a final procedural vote. Afterward, the decision will rest with Governor Gavin Newsom, who has until September 30 to either sign the bill into law or veto it.

California’s Senate Bill 1047, a proposed AI regulation, has sparked intense debate in Silicon Valley, drawing both praise and criticism from tech leaders, lawmakers, and AI experts. 

xAI chief Elon Musk has also voiced support, emphasising the need for regulation to prevent AI misuse. “This is a tough call and will make some people upset, but, all things considered, I think California should probably pass the SB 1047 AI safety bill,” he posted on X. 

Musk said that for more than 20 years, he has supported AI regulation, drawing parallels to how society regulates any technology or product that could potentially pose a risk to the public.

The bill, introduced by State Senator Scott Wiener, seeks to implement strict safety measures for large-scale AI models, aiming to prevent potential catastrophes, but critics argue it could hinder innovation.

The legislation requires developers of significant AI models—those costing over $100 million to train—to perform comprehensive safety testing before releasing them to the public. 

It also mandates an “emergency stop” feature to shut down AI systems in critical situations and obligates developers to report any safety incidents to California’s Attorney General within 72 hours. A new state agency, the Frontier Model Division, would oversee compliance, with penalties of up to $30 million for repeated violations.

Proponents, including AI luminaries Geoffrey Hinton and Yoshua Bengio, believe the bill is crucial to addressing AI risks comparable to pandemics or nuclear threats, potentially setting a national standard for AI safety. 

Conversely, the bill has faced strong opposition from major tech companies like Google, OpenAI, and Meta, who argue it could stifle innovation and drive talent out of California. Critics, such as U.S. Representative Nancy Pelosi and AI expert Fei-Fei Li, caution that the bill’s requirements could disproportionately burden smaller companies and slow technological advancement. They advocate for federal regulation to avoid inconsistencies across states.

OpenAI has publicly opposed SB 1047, stating that it poses a threat to AI’s growth and could push entrepreneurs and engineers to relocate.

The bill has passed the California Appropriations Committee with amendments and is awaiting a final vote in the state assembly. If signed into law by Governor Gavin Newsom, it would be the first AI regulation of its kind in the U.S., potentially shaping AI governance nationwide.

As California, a key hub for AI innovation, seeks to balance technological advancement with safety, the fate of SB 1047 could have significant implications for the global tech industry and the future of AI regulation.

The post California’s SB 1047 Bill Sparks AI Civil War appeared first on AIM.

]]>
OpenAI’s ‘Orion’ and the Battle for Superiority https://analyticsindiamag.com/ai-origins-evolution/openais-orion-and-the-battle-for-superiority/ Wed, 28 Aug 2024 12:57:34 +0000 https://analyticsindiamag.com/?p=10133956

Gemini shines brighter… 

The post OpenAI’s ‘Orion’ and the Battle for Superiority appeared first on AIM.

]]>

According to a recent report, OpenAI is looking to secure more funding as its researchers develop ‘Orion’, a new model anticipated to solve complex problems more effectively than current AI technologies.

This confirms AI insider Jimmy Apples’ cryptic post from last year, which featured an image of the ‘Orion’ constellation with the caption, “Let’s conquer the cosmos”.

https://twitter.com/apples_jimmy/status/1728239862346903924

OpenAI chief Sam Altman, too, had hinted a few days earlier in a cryptic post that the company was working on a project known internally as Project Strawberry, also referred to as Q*. 

“I love summer in the garden,” wrote Altman on X, posting the image of a terracotta pot containing a strawberry plant with lush green leaves and small, ripening strawberries.

One key use of Strawberry is to produce high-quality training data for Orion, the next major LLM currently being developed by OpenAI.

However, the catch is that the new model takes extra time to generate responses. This confirms Apples’ revelation that Q* hasn’t been released yet because OpenAI is not satisfied with the latency and other ‘minor details’ they want to further optimise.

Despite this, when given extra time to ‘think’, the Strawberry model excels in addressing customer inquiries on more subjective topics, such as product marketing strategies. 

To highlight Strawberry’s strengths in language-related tasks, OpenAI employees have demonstrated to their colleagues how the model can solve complex word puzzles like The New York Times Connections, according to The Information report.

“The thing is proper *reasoning* SHOULD be time consuming. What we have now with LLMs is just a step (if even) above bare retrieval. I would be happy to pay $$$ for a good reasoning system, latency and all. But I am probably in the minority. Hopefully they can have a more rugged “enterprise” version available when they release what they have been cooking,” Bojan Tunguz, former software engineer at NVIDIA, posted on X.

Worth the Wait? 

According to the report, OpenAI has demonstrated the Orion model to American national security officials. 

Taking a jibe at OpenAI, Stability AI founder Emad Mostaque posted on X, “OpenAI showing Strawberry/Q* to national security officials first highlight that AGI labs are likely to shift from consumer focus to military and state backers.”

He added that bills like SB 1047 will further accelerate this shift, leading to AI that is aligned with state actors rather than consumers. However, OpenAI has opposed the California AI Bill.

“We join other AI labs, developers, experts, and members of California’s Congressional delegation in respectfully opposing SB 1047 and welcome the opportunity to outline some of our key concerns,” the company said in a statement.

Meanwhile, OpenAI has been working closely with the US government. OpenAI’s chief technology officer, Mira Murati, said in a recent interview that the company gives the government early access to new AI models and has been in favour of more regulation.

“We’ve been advocating for more regulation on the frontier, which will have these amazing capabilities but also have a downside because of misuse. We’ve been very open with policymakers and working with regulators on that,” she said.

Notably, OpenAI has postponed the release of its video generation model Sora, along with the Voice Engine and voice-mode features of GPT-4o. It is anticipated that GPT-5 may also be released after the elections. 

Recently, OpenAI introduced SearchGPT, though there is no set timeline for its availability.

Earlier this year, Murati confirmed that the elections were a major factor in the release of GPT-5. “We will not be releasing anything that we don’t feel confident about when it comes to how it might affect the global elections or other issues,” she said.

Meanwhile, OpenAI recently appointed retired US Army general Paul Nakasone to its board of directors. As a priority, Nakasone joined the board’s safety and security committee, responsible for making recommendations on critical safety decisions for all OpenAI projects and operations.

OpenAI has also been working closely with the US Department of Defence on open-source cybersecurity software, collaborating with the Defense Advanced Research Projects Agency (DARPA) for its AI Cyber Challenge announced last year. 

As OpenAI befriends the US government, consumers are left waiting for the next big release.

Google Steals the Limelight 

It appears that Google has followed OpenAI’s lead this time. 

Interestingly, just after the release of The Information report, Google introduced three new experimental Gemini models to improve speed, accuracy, and handling of complex prompts. The new models are Gemini 1.5 Flash-8B, Gemini 1.5 Pro, and Gemini 1.5 Flash. 

The new Gemini-1.5-Pro (0827) shows strong gains in coding and maths over previous versions and sits in second position on the LMSYS Chatbot Arena. 

A few days ago, Google AI Studio lead Logan Kilpatrick took a jab at critics who claim Google lacks innovation, highlighting that the company was the first to ship a 1 million and 2 million context window, a state-of-the-art multi-modal LLM, context caching, and a high-quality small model for developers called Flash. 

“So yeah, definitely no innovation happening here…..,” he quipped.

Moreover, Google recently appointed Noam Shazeer, the former head of Character.AI and a veteran Google researcher, to co-lead its key AI project, Gemini. 

Shazeer will join Jeff Dean and Oriol Vinyals as a technical lead for Gemini, which is being developed by Google’s AI division, DeepMind.

Anthropic, too, wasn’t going to be left behind. The OpenAI rival has made Artifacts available to all Claude users. Users can now create and view Artifacts on both the Claude iOS and Android apps. 

The company stated that since launching its preview in June, tens of millions of Artifacts have been created.

Deedy Das, Principal at Menlo Ventures, used Artifacts to build a Splitwise-like app. “With Claude launching on iOS today, I can now generate the Splitwise app instead of paying for Pro,” he said.

The post OpenAI’s ‘Orion’ and the Battle for Superiority appeared first on AIM.

]]>
AI4Bharat Invites Contributors to Chitralekha Open-Source Project https://analyticsindiamag.com/ai-news-updates/ai4bharat-invites-contributors-to-chitralekha-open-source-project/ Wed, 28 Aug 2024 12:14:45 +0000 https://analyticsindiamag.com/?p=10133946

The initiative offers contributors the chance to gain experience in a live project environment, build their portfolios, and contribute to the Indian Language AI landscape.

The post AI4Bharat Invites Contributors to Chitralekha Open-Source Project appeared first on AIM.

]]>

AI4Bharat is seeking contributors for Chitralekha, an open-source Video Transcreation platform supported by the EkStep Foundation. The platform, initially developed for video annotation, allows users to auto-generate and edit audio transcripts in Indic languages.

“We’re thrilled to invite you to contribute to Chitralekha, an innovative open-source video transcreation platform built by AI4Bhārat and funded by the EkStep Foundation,” said Ishvinder Sethi, Project Officer at AI4Bhārat.

Chitralekha’s features include subtitle generation and download, audio/video dubbing, and video translation across various Indic languages. The project, built on AI models developed in-house by AI4Bhārat, is open to enhancements and new feature integrations.

The initiative offers contributors the chance to gain experience in a live project environment, build their portfolios, and contribute to the Indian Language AI landscape. AI4Bhārat encourages participation in community discussions on the platform’s GitHub page, focusing on potential use cases and desired features.

This is an unpaid opportunity for open-source enthusiasts to help expand Chitralekha’s capabilities.

Chitralekha is an open-source platform designed for video transcreation across various Indic languages, leveraging ASR (Automatic Speech Recognition) models for transcription, NMT (Neural Machine Translation) for translation, and TTS (Text-to-Speech) for voice-over.

The platform supports multiple input sources, such as YouTube and local files, and offers various options for transcription generation, including models, source captions, custom subtitle files, and manual creation.

For translation and voice-over generation, users can utilise models or manually created content. Currently, Chitralekha supports voice-over for single-speaker videos, with multi-speaker support under development. For more details or to contribute, visit the Chitralekha GitHub page or try out the platform.

The post AI4Bharat Invites Contributors to Chitralekha Open-Source Project appeared first on AIM.

]]>
Cerebras Launches Fastest AI Inference Solution, Claims 20x Speed Advantage Over NVIDIA https://analyticsindiamag.com/ai-news-updates/cerebras-launches-fastest-ai-inference-solution-claims-20x-speed-advantage-over-nvidia/ Wed, 28 Aug 2024 05:08:28 +0000 https://analyticsindiamag.com/?p=10133892

The solution delivers 1,800 tokens per second for the Llama 3.1 8B model and 450 tokens per second for the Llama 3.1 70B model.

The post Cerebras Launches Fastest AI Inference Solution, Claims 20x Speed Advantage Over NVIDIA appeared first on AIM.

]]>

Cerebras Systems today announced its new AI inference solution, Cerebras Inference, which it claims is the fastest in the world. The solution delivers 1,800 tokens per second for the Llama 3.1 8B model and 450 tokens per second for the Llama 3.1 70B model, making it 20 times faster than NVIDIA GPU-based hyperscale clouds. 

Cerebras Inference is priced at 10 cents per million tokens for Llama 3.1 8B and 60 cents per million tokens for Llama 3.1 70B, and is available to developers through API access.

The solution is powered by the third-generation Wafer Scale Engine (WSE-3), which enables it to run Llama 3.1 models 20 times faster than GPU solutions at one-fifth the cost. The WSE-3 integrates 44GB of SRAM on a single chip, eliminating the need for external memory and providing 21 petabytes per second of aggregate memory bandwidth, which is 7,000 times greater than that of an NVIDIA H100 GPU.

Cerebras addresses the inherent memory bandwidth limitations of GPUs, which require models to be moved from memory to compute cores for every output token. This process results in slow inference speeds, particularly for large language models like Llama 3.1-70B, which has 70 billion parameters and requires 140GB of memory.
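That bandwidth bottleneck can be sanity-checked with back-of-envelope arithmetic: if every output token requires streaming all 140GB of weights from memory, single-stream decode speed is capped at bandwidth divided by model size. The sketch below assumes a round ~3 TB/s for H100 HBM bandwidth (the article states only the 7,000x ratio) and ignores compute, batching, and interconnect, so it bounds the memory-traffic term alone:

```python
# Back-of-envelope bound: if every output token streams all model weights from
# memory, throughput <= memory_bandwidth / weights_size.

def max_tokens_per_sec(weights_gb: float, bandwidth_gb_per_sec: float) -> float:
    """Upper bound on single-stream decode speed set by weight traffic alone."""
    return bandwidth_gb_per_sec / weights_gb

weights_gb = 140          # Llama 3.1 70B at 16-bit: 70e9 params * 2 bytes
h100_bw = 3_000           # ~3 TB/s HBM bandwidth (assumed round number)
cerebras_bw = 21_000_000  # 21 PB/s aggregate SRAM bandwidth (from the article)

print(round(max_tokens_per_sec(weights_gb, h100_bw)))      # ~21 tokens/s ceiling
print(round(max_tokens_per_sec(weights_gb, cerebras_bw)))  # ~150,000 tokens/s ceiling
print(cerebras_bw // h100_bw)                              # 7,000x bandwidth ratio
```

These figures are ceilings, not predictions: the 450 tokens per second Cerebras reports for Llama 3.1 70B sits far below the memory-traffic bound, showing that compute and other factors dominate long before bandwidth does on its hardware, whereas a single GPU runs much closer to its bound.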

Cerebras Inference supports models from billions to trillions of parameters. For models exceeding the memory capacity of a single wafer, Cerebras splits them at layer boundaries and maps them to multiple CS-3 systems. Larger models, such as Llama3-405B and Mistral Large, are expected to be supported in the coming weeks.

Cerebras Inference operates using 16-bit model weights, preserving the accuracy of the original Llama 3.1 models released by Meta. The solution offers developers chat and API access, with an initial offering of 1 million free tokens daily.

The company emphasises that the speed of inference is not just a matter of performance metrics but also enables more complex AI workflows and real-time large language model intelligence. Cerebras notes that techniques like scaffolding, which require significantly more tokens at runtime, are only feasible with its hardware.

Cerebras Inference aims to set a new standard for large language model development and deployment, offering both high-speed training and inference capabilities. The company anticipates that the speed and cost advantages will open up new possibilities for AI applications.

The post Cerebras Launches Fastest AI Inference Solution, Claims 20x Speed Advantage Over NVIDIA appeared first on AIM.

]]>
SambaNova’s Llama 3.1 405B Model Hits 114 Tokens Per Second, Setting Speed Record https://analyticsindiamag.com/ai-news-updates/sambanovas-llama-3-1-405b-model-hits-114-tokens-per-second-setting-speed-record/ Wed, 28 Aug 2024 04:58:26 +0000 https://analyticsindiamag.com/?p=10133887

The company's technology is built around the SN40L chip, which features a reconfigurable dataflow architecture.

The post SambaNova’s Llama 3.1 405B Model Hits 114 Tokens Per Second, Setting Speed Record appeared first on AIM.

]]>

SambaNova Systems has achieved a new performance milestone, setting a world speed record with Meta’s Llama 3.1 405B model, processing 114 tokens per second. The performance, verified by Artificial Analysis, outpaces other providers by over four times, positioning SambaNova as a leader in AI speed and efficiency.

“I’ve been playing with SambaNova Systems‘s API serving fast Llama 3.1 405B tokens. Really cool to see the leading model running at speed. Congrats to Samba Nova for hitting a 114 tokens/sec speed record,” said DeepLearning.ai founder Andrew Ng.  

The benchmark was set using a single 16-socket node, operating with full 16-bit precision on SambaNova’s custom RDU chips. This advancement addresses the challenge of balancing quality and speed in large models like Llama 3.1 405B, enabling the deployment of the model in more speed-sensitive applications, such as customer support and AI agents.

George Cameron, Co-Founder of Artificial Analysis, confirmed the record, saying that SambaNova’s platform reduces the trade-off between model size and operational speed, making it viable for real-time applications.

SambaNova’s fourth-generation RDU chip, the SN40L, plays a critical role in this achievement, facilitating real-time processing that opens up new enterprise use cases. These include intelligent document processing, real-time AI copilots, and explainable AI, all of which benefit from the platform’s speed.

The company is offering a demo of the Llama 3.1 405B model on its website and is inviting developers to access its APIs for building enterprise-level AI applications.

SambaNova Systems is a technology company specialising in artificial intelligence (AI) hardware and software solutions. Founded in 2017 in Palo Alto, California, by Kunle Olukotun, Rodrigo Liang, and Christopher Ré, the company provides purpose-built solutions for deep learning and AI applications.

The company’s technology is built around the SN40L chip, which features a reconfigurable dataflow architecture. This design optimises data movement and reduces latency, making it highly efficient for AI tasks compared to traditional GPU-based systems.

The post SambaNova’s Llama 3.1 405B Model Hits 114 Tokens Per Second, Setting Speed Record appeared first on AIM.

]]>
The Secret to Creating the Next Billion-Dollar AI Startup https://analyticsindiamag.com/ai-origins-evolution/the-secret-to-creating-the-next-billion-dollar-ai-startup/ Tue, 27 Aug 2024 12:37:58 +0000 https://analyticsindiamag.com/?p=10133877

AI’s usefulness in a wide variety of applications creates a plethora of opportunities for entrepreneurship.

The post The Secret to Creating the Next Billion-Dollar AI Startup appeared first on AIM.

]]>

It’s now widely recognised that selling AI models is a zero-margin game. The next wave of AI startups must capitalise on LLMs in the application layer to tackle real-world challenges.

“The next billion dollar startups in AI will play on the application layer and not the infrastructure layer,” said AIM Media House chief Bhasker Gupta in a LinkedIn post. 

Gupta added that there is a plethora of problems to be solved using AI, and these startups will localise their solutions while maintaining a broad-based approach.

Echoing a similar market sentiment was Nayan Goswami, the founder and CEO of Chop. “The next major wave of AI innovation will focus on the application layer, where startups will build specialised vertical AI software-as-a-service (SaaS) companies for global markets,” he said.

Goswami further elaborated that with robust foundational models like Anthropic, Cohere, and OpenAI, along with infrastructure companies like LangChain and Hugging Face advancing rapidly, we’re poised to witness a surge in application-layer startups targeting specific verticals. 

“Think of foundational models as the roadways, and application layers as the vehicles driving on them,” he explained. 

Finding the Right Application to Build is Key 

Andrew Ng, the founder of DeepLearning.AI, believes AI’s usefulness in a wide variety of applications creates many opportunities for entrepreneurship. However, he advised budding entrepreneurs to be extremely specific about their ideas for integrating AI. 

For instance, he explained that building AI for livestock is vague, but if you propose using facial recognition to identify individual cows and monitor their movement on a farm, it’s specific enough.  

A skilled engineer can then quickly decide on the right tools, such as which algorithm to use first or what camera resolution to pick.

In a recent interview, Ng explained that the cost of developing a foundation model could be $100 million or more. However, the applications layer, which receives less media coverage, is likely to be even more valuable in terms of revenue generation than the foundation layer.

He also said that unlike foundation models, the ROI on the application layer is higher. “For the application layer, it’s very clear. I think it’s totally worth it, partly because it’s so capital efficient—it doesn’t cost much to build valuable applications. And I’m seeing revenues pick up. So at the application layer, I’m not worried,” he said.

Perplexity AI serves as a strong example by integrating search with LLMs. Rather than building its own foundational models, the startup leverages state-of-the-art models from across the industry, focusing on delivering optimal performance. The company is planning to run ads as well from the next quarter onwards. 

However, not everyone is going to make the cut; some startups are going to fail. Statistically speaking, around 90% of startups don’t survive long enough to see the light at the end of the tunnel.

Ashish Kacholia, the founder and managing director of Lucky Investment Managers, said, “AI is the future but key is how the applications shape up to capitalise on the technology.”

India is the Use Case Capital of AI 

“India is going to be a use case capital of AI. We’ll be very big users of AI, and we believe that AI can significantly help in the expansion of the ONDC Network,” said Manoj Gupta, the founder of Plotch.ai, in an exclusive interview with AIM. 

Similar thoughts were shared by Nandan Nilekani when he said that India is not in the arms race to build LLMs, and should instead focus on building use cases of AI to reach every citizen. He added that “Adbhut” India will be the AI use case capital of the world. 

“The Indian path in AI is different. We are not in the arms race to build the next LLM, let people with capital, let people who want to pedal chips do all that stuff… We are here to make a difference and our aim is to put this technology in the hands of people,” said Nilekani.

Krutrim founder Bhavish Aggarwal believes that India can build its own AI applications. Agreeing with him, former Tech Mahindra chief CP Gurnani said, “It’s time to stop ‘adopting’ and ‘adapting’ to AI applications created for the Western world.”

Gurnani said that the time is ripe for us to build AI models and apps based on Indian data, for Indian use cases, and store them on India-built hardware, software and cloud systems. “That will make us true leaders in the business of tech,” he added. 

Notably, Gurnani recently launched his own AI startup AIonOS. 

Startups Offering More Than LLMs

Lately, several AI startups in India have been building services using generative AI. For example, Unscript, a Bengaluru-based AI startup, is helping enterprises create videos with generative AI. Another video generation startup, InVideo, is estimated to generate $30 million in revenue in 2024. 

Recently, Sarvam AI launched Sarvam Agents. While the startup, backed by Lightspeed, Peak XV, and Khosla Ventures, is not the only company building AI agents, it stands out for its pricing. The cost of these agents starts at just one rupee per minute. 

According to co-founder Vivek Raghavan, enterprises can integrate these agents into their workflow without much hassle.

These agents can be integrated into contact centres and various applications across multiple industries, including insurance, food and grocery delivery, e-commerce, ride-hailing services, and even banking and payment apps.

Similarly, Krutrim AI is building an AI shopping co-pilot for ONDC, while Khosla Ventures-backed upliance.ai is building kitchen appliances that integrate generative AI. 

Meanwhile, Ema, an enterprise AI company founded by Meesho board member Surojit Chatterjee, recently raised an additional $36 million in Series A funding. 

The company is building a universal AI agent adaptable to a wide array of industries, including healthcare, retail, travel, hospitality, finance, manufacturing, e-commerce, and technology. 

Enterprises use Ema for customer support, legal, sales, compliance, HR, and IT functions. 

Lately, we have observed that Y Combinator is bullish on Indian AI startups, many of which are focused on building AI applications.

For example, Mufeed VH, the creator of AI software engineer Devika and founder of Stition.AI, is now part of the YC S24 batch. His startup, which works on AI cybersecurity for fixing security vulnerabilities in codebases, has been renamed Asterisk.

On the agentic front, Indian co-founders Sudipta Biswas and Sarthak Shrivastava are building AI employees through their startup FloWorks.

Examples are aplenty, with India poised to boast 100 AI unicorns in the next decade. 

In a conversation with AIM, Prayank Swaroop, partner at Accel India, said that the 27 AI startups his firm has invested in over the past few years are expected to be worth at least ‘five to ten billion dollars’ in the future, including those focused on wrapper-based technologies.

There are a host of categories, such as education, healthcare, manufacturing, entertainment, and finance, to explore with generative AI, and this is just the beginning.

The post The Secret to Creating the Next Billion-Dollar AI Startup appeared first on AIM.

]]>
Amazon Claims to have Saved 4,500 Developer-Year of Work with Q https://analyticsindiamag.com/ai-news-updates/amazon-claims-to-have-saved-4500-developer-year-of-work-with-q/ Tue, 27 Aug 2024 08:55:11 +0000 https://analyticsindiamag.com/?p=10133835

In less than six months, Amazon has upgraded over 50% of its production Java systems to modern versions, achieving these results at a fraction of the usual time and effort.

The post Amazon Claims to have Saved 4,500 Developer-Year of Work with Q appeared first on AIM.

]]>

Amazon CEO Andy Jassy recently revealed that by leveraging Amazon Q, the company was able to save 4,500 developer-years of work. “Yes, the number is crazy, but real,” he posted on X.

With Amazon Q, the company has significantly cut down the time needed to update Java applications. “The average time to upgrade an application to Java 17 plummeted from what’s typically 50 developer days to just a few hours,” he said. 

He added that in under six months, the company has been able to upgrade more than 50% of its production Java systems to modernised Java versions at a fraction of the usual time and effort. “Our developers shipped 79% of the auto-generated code reviews without any additional changes.”

“The benefits go beyond how much effort we’ve saved developers. The upgrades have enhanced security and reduced infrastructure costs, providing an estimated $260 million in annualised efficiency gains,” he claimed.

Amazon plans to expand the transformation capabilities of Amazon Q, with further developments aimed at additional developer tools. This advancement marks a notable shift in how enterprises handle software updates and maintenance.

Amazon Web Services announced the general availability of Amazon Q earlier this year. The chatbot is available in three forms: Amazon Q for developers, Amazon Q for businesses, and Amazon Q apps. 

Amazon Q not only generates highly accurate code but also tests and debugs it, with multi-step planning and reasoning capabilities that can transform existing code (e.g., perform Java version upgrades) and implement new code generated from developer requests. 

The chatbot also makes it easier for employees to get answers to questions about business data, such as company policies, product information, business results, code bases, and employees, by connecting to enterprise data repositories to logically summarise the data, analyse trends, and engage in dialogue about it.

The post Amazon Claims to have Saved 4,500 Developer-Year of Work with Q appeared first on AIM.

]]>
How Cursor AI, GitHub Copilot, Devin, and Amazon Q Help Reduce Technical Debt https://analyticsindiamag.com/ai-origins-evolution/how-cursor-ai-github-copilot-devin-and-amazon-q-help-reduce-technical-debt/ Tue, 27 Aug 2024 08:33:58 +0000 https://analyticsindiamag.com/?p=10133829

With Amazon Q, the company has significantly cut down the time needed to update Java applications. 

The post How Cursor AI, GitHub Copilot, Devin, and Amazon Q Help Reduce Technical Debt appeared first on AIM.

]]>

Recently, Amazon CEO Andy Jassy revealed that by leveraging Amazon Q, the company was able to save 4,500 developer-years of work. “Yes, the number is crazy, but real,” he posted on X.

With Amazon Q, the company has significantly cut down the time needed to update Java applications. “The average time to upgrade an application to Java 17 plummeted from what’s typically 50 developer days to just a few hours,” he said. 

He added that in under six months, the company has been able to upgrade more than 50% of its production Java systems to modernised Java versions at a fraction of the usual time and effort. “Our developers shipped 79% of the auto-generated code reviews without any additional changes.”

“The benefits go beyond how much effort we’ve saved developers. The upgrades have enhanced security and reduced infrastructure costs, providing an estimated $260 million in annualised efficiency gains,” he claimed.

Indeed, generative AI has made coding a breeze. Tools such as GitHub Copilot, Devin, and Amazon Q simplify the development process, making application creation easier and helping developers and enterprises reduce technical debt.

Technical debt arises when an enterprise rushes a product to meet deadline requirements without properly checking the code quality and debugging. Incomplete documentation and insufficient testing can lead to errors and inefficiencies that add to the debt.

Converting Legacy Code Base

Amazon is not the only company reducing technical debt with AI. San Francisco-based Databricks uses generative AI to quickly analyse and understand its legacy code base—something its CIO says has eased the burden on engineers.

Seventy-five-year-old payroll-processing company ADP is also looking at generative AI to convert COBOL to Java. “A big problem that we, and other legacy companies face is that we have COBOL running in our systems,” said Amin Venjara, the chief data officer of ADP. He added that today, very few programmers are familiar with COBOL.

The Roseland, NJ-based company is exploring the use of generative AI to convert its mainframe code from COBOL—a language developed in the 1950s and still widely used in banks and financial services—into Java, a programming language that has been around since 1995.

Wayfair, the online furniture retailer, is using generative-AI-based coding tools to update old code. Though Wayfair, established two decades ago, does not use COBOL, it does have “legacy code” in languages such as PHP and outdated database code in SQL. 

Additionally, there is code written by developers who are no longer with the company.

GenAI to Assist in Tedious Tasks

Generative AI acts as an intelligent assistant that automates tedious tasks, suggests improvements, and enhances code quality. Armand Ruiz, the VP of product at IBM, says that his favourite use case for generative AI is in software development.

According to Ruiz, GenAI has several use cases in software development. It can convert plain English instructions into code in the preferred programming language. 

Code translation tools convert languages like COBOL to Java, and facilitate code modernisation and migration. Bug-fixing tools identify and suggest fixes for code errors, thereby enhancing code reliability.
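
As a toy illustration of how such an LLM-driven translation workflow can be set up, the sketch below builds the kind of prompt a COBOL-to-Java translator might send to a model. The function name and prompt wording are hypothetical, not any specific product’s API:

```python
def build_translation_prompt(source_code, source_lang="COBOL", target_lang="Java"):
    """Assemble a code-translation prompt for an LLM-based modernisation tool."""
    return (
        f"You are a code modernisation assistant.\n"
        f"Translate the following {source_lang} program into idiomatic {target_lang}.\n"
        f"Preserve the program's behaviour exactly, and add a TODO comment wherever "
        f"a construct has no direct {target_lang} equivalent.\n"
        f"----- {source_lang} source -----\n"
        f"{source_code}\n"
        f"----- end of source -----"
    )

# A minimal COBOL fragment standing in for a real legacy module.
legacy = (
    "IDENTIFICATION DIVISION.\n"
    "PROGRAM-ID. PAYROLL.\n"
    "PROCEDURE DIVISION.\n"
    "    DISPLAY 'HELLO'."
)

prompt = build_translation_prompt(legacy)
print(prompt)
```

In practice, tools layer compilation checks and test runs on top of the raw model output before accepting a translation.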

Notably, IBM recently announced the IBM watsonx Code Assistant for Enterprise Java Applications, expected to be generally available later this year.

Generative AI also plays a significant role in streamlining code maintenance by automating refactoring: it suggests or implements code transformations, identifies common anti-patterns, and proposes more efficient alternatives, ensuring that the refactored code adheres to coding standards and best practices.
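
As a simplified illustration of the anti-pattern detection such tools perform (illustrative only, not any vendor’s implementation), the sketch below uses Python’s `ast` module to flag `for i in range(len(seq))` loops, which refactoring assistants typically rewrite with `enumerate`:

```python
import ast

class RangeLenFinder(ast.NodeVisitor):
    """Flag `for i in range(len(seq))` loops, a common anti-pattern."""

    def __init__(self):
        self.findings = []  # line numbers of flagged loops

    def visit_For(self, node):
        it = node.iter
        # Match the exact shape: range(len(<name>)) with a single argument.
        if (isinstance(it, ast.Call)
                and isinstance(it.func, ast.Name) and it.func.id == "range"
                and len(it.args) == 1
                and isinstance(it.args[0], ast.Call)
                and isinstance(it.args[0].func, ast.Name)
                and it.args[0].func.id == "len"):
            self.findings.append(node.lineno)
        self.generic_visit(node)

source = """
for i in range(len(items)):
    print(i, items[i])
"""
finder = RangeLenFinder()
finder.visit(ast.parse(source))
print(finder.findings)  # line numbers where the anti-pattern occurs
```

A generative model goes one step further than this kind of static check: instead of merely flagging line numbers, it proposes the rewritten loop.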

Generative AI Tools are Shaping Software Engineering

Recently, AI coding tool Cursor AI has been generating buzz on social media.

OpenAI co-founder Andrej Karpathy praised the AI-integrated code editor, saying, “Programming is changing so fast… I’m trying VS Code Cursor + Sonnet 3.5 instead of GitHub Copilot again, and I think it’s now a net win.” 

“Just empirically, over the last few days, most of my ‘programming’ is now writing English (prompting and then reviewing and editing the generated diffs) and doing a bit of ‘half-coding’, where you write the first chunk of the code you’d like, maybe comment on it a bit so the LLM knows what the plan is, and then tab tab tab through completions.”

Software engineers once needed substantial time to deliver a product; that time has now drastically decreased. GitHub recently launched GitHub Models, a playground that offers developers access to leading LLMs, including Llama 3.1, GPT-4o, GPT-4o Mini, Phi 3, and Mistral Large 2. 

“With GitHub models, developers can now explore these models on GitHub, integrate them into their dev environment in Codespaces and VS Code, and leverage them during CI/CD in Actions – all simply with their GitHub account and free entitlements,” explained GitHub CEO Thomas Dohmke.

Then, there are AI software engineers like Genie and Devin. Genie is designed to emulate the cognitive processes of human engineers, enabling it to solve complex problems with remarkable accuracy and efficiency. “We believe that if you want a model to behave like a software engineer, it has to be shown how a human software engineer works,” said Alistair Pullen, the co-founder of Cosine.

Learn to use AI Tools if you are a Software Engineer

“Software development companies will start hiring coders who can demonstrate in interviews that they master AI-assisted coding tools like Copilot or Cursor,” said Andriy Burkov, machine learning lead at TalentNeuron. 

He added that claims that LLMs can’t write reliable code are immature. “Most junior and mid-level coders can’t write reliable code either. This is why software engineering has, over decades, equipped itself with automated and semi-automated tools to ensure that code is production-ready.”

He further explained that LLMs are already being fine-tuned specifically for coding. Companies are investing hundreds of millions of dollars to enhance these models’ coding capabilities because, if executed correctly, coding is the only use case where they can charge corporate clients $10 or even $100 per million tokens.
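
To put those per-token prices in perspective, a back-of-the-envelope calculation helps; the usage figures below are purely illustrative:

```python
def monthly_cost(tokens_per_day, price_per_million, days=30):
    """Rough monthly cost of LLM usage at a per-million-token price."""
    return tokens_per_day / 1_000_000 * price_per_million * days

# A hypothetical team pushing 5M tokens/day through a premium
# coding model priced at $100 per million tokens.
print(monthly_cost(5_000_000, 100.0))  # → 15000.0
```

Even modest daily token volumes translate into meaningful recurring revenue at the top end of that price range, which is why coding is such an attractive use case for model vendors.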

One concern developers have is that AI-generated code may contain bugs. Many AI startups are emerging to address this. One such startup is YC-backed CodeAnt AI. 

CodeAnt’s AI-driven code review system can significantly reduce vulnerabilities by automating the review process and identifying potential issues before they reach the customer.

With advancements in AI-driven coding tools and platforms such as IBM watsonx and GitHub Models, there is no doubt that developers are now better equipped to handle legacy code, streamline code maintenance, and enhance productivity.

The post How Cursor AI, GitHub Copilot, Devin, and Amazon Q Help Reduce Technical Debt appeared first on AIM.

]]>
German AI Startup Aleph Alpha Launches Pharia-1-LLM Model Family https://analyticsindiamag.com/ai-news-updates/german-ai-startup-aleph-alpha-launches-pharia-1-llm-model-family/ Mon, 26 Aug 2024 12:44:56 +0000 https://analyticsindiamag.com/?p=10133777

German AI Startup Aleph Alpha has announced the release of its latest foundation model family, Pharia-1-LLM, featuring Pharia-1-LLM-7B-control and Pharia-1-LLM-7B-control-aligned. These models are now publicly available under the Open Aleph License, which permits non-commercial research and educational use. Pharia-1-LLM-7B-control is designed to produce concise, length-controlled responses and is optimized for German, French, and Spanish languages. […]

The post German AI Startup Aleph Alpha Launches Pharia-1-LLM Model Family appeared first on AIM.

]]>

German AI Startup Aleph Alpha has announced the release of its latest foundation model family, Pharia-1-LLM, featuring Pharia-1-LLM-7B-control and Pharia-1-LLM-7B-control-aligned. These models are now publicly available under the Open Aleph License, which permits non-commercial research and educational use.

Pharia-1-LLM-7B-control is designed to produce concise, length-controlled responses and is optimized for German, French, and Spanish languages. The model has been trained on a multilingual base corpus and adheres to EU and national regulations, including copyright and data privacy laws. It is specifically engineered for domain-specific applications in industries such as automotive and engineering.

The Pharia-1-LLM-7B-control-aligned variant includes additional safety features through alignment methods. This model is tailored for use in conversational settings like chatbots or virtual assistants, where safety and clarity are prioritized.

The training of Pharia-1-LLM-7B involved two phases. Initially, the model was pre-trained on a 4.7 trillion token dataset with a sequence length of 8,192 tokens, using 256 A100 GPUs. In the second phase, the model was trained on an additional 3 trillion tokens with a new data mix, utilizing 256 H100 GPUs. The training was performed using mixed-precision strategies and various optimisation techniques to enhance throughput and performance.

In terms of performance, Pharia-1-LLM-7B-control and Pharia-1-LLM-7B-control-aligned were evaluated against similarly sized weight-available multilingual models, including Mistral’s Mistral-7B-Instruct-v0.3 and Meta’s Llama-3.1-8B-Instruct. 

The comparison results, detailed in the model card, provide insights into the models’ effectiveness across multiple languages, including German, French, and Spanish. The evaluation highlighted areas where Pharia-1-LLM-7B outperforms or matches its peers in specific benchmarks and use cases.

Aleph Alpha detailed the model architecture, hyperparameters, and training processes in a comprehensive blog post.

The post German AI Startup Aleph Alpha Launches Pharia-1-LLM Model Family appeared first on AIM.

]]>
This Bengaluru Startup is Competing with OpenAI Sora Heads-on https://analyticsindiamag.com/ai-origins-evolution/this-bengaluru-startup-is-competing-with-openai-sora-heads-on/ Mon, 26 Aug 2024 08:30:00 +0000 https://analyticsindiamag.com/?p=10133733

Unscript provides its customers with unlimited video generation and charges only for the final videos.

The post This Bengaluru Startup is Competing with OpenAI Sora Heads-on appeared first on AIM.

]]>

Since traditional video shoots are expensive and time-consuming, many enterprises have been turning to generative AI to craft their promotional videos. However, even state-of-the-art models like Sora and Kling frequently produce inaccurate results, and concerns regarding copyright infringement and overall quality still persist.


Today, a handful of startups provide tools and solutions that enterprises can use to create AI videos. Bengaluru-based AI startup Unscript is one among them.

The startup recently transformed a single photo into a full-fledged video, generating head and eye movements, facial expressions, voice modulations, and body language, achieving studio-quality results in under 2 minutes. This significantly reduces manual shooting efforts. 

Interestingly, the startup claims that its model has surpassed OpenAI’s Sora, Google’s VLOGGER, Microsoft’s VASA-1, and Alibaba’s EMO, making it an ideal choice for brands, marketing agencies, and virtual influencers. 

Over 50 leading companies, including Ceat and Mahindra, are already leveraging its cost-effective and scalable video-production capabilities.

“We have built a ‘Canva for videos’, but in a version where you can get an end-to-end solution. Starting from shooting to the final video that you want to deliver to social media, everything can be automated here,” said co-founder Ritwika Chowdhury in an exclusive interview with AIM.

Unscript provides advanced video-generation solutions, including text-to-video, image-to-video, and the creation of virtual influencers as brand ambassadors for enterprises. 

The company is experiencing strong demand from sectors such as BFSI, pharma, and media & entertainment. 

Is Unscript Better Than Sora? 

The startup has built its own diffusion model for converting text to video. According to Chowdhury, their model follows diffusion principles but is distinct in its architecture. 

“It’s an encoder-decoder plus diffusion-based model that we have developed. A key aspect is that we have collected substantial data—about 1,000 hours—to train it,” said Chowdhury.

OpenAI’s Sora tends to generate highly creative videos that are not commonly seen in the real world, such as dolphins cycling in the ocean. Unscript’s model, however, is trained specifically to generate content based on humans.

“Sora’s videos tend to be very abstract. You might generate something like a dog playing with a ball, but we specialise in creating content featuring human beings,” she said.

“If you look at Sora, you’ll see that physical interactions with the world are not mapped properly. For example, a person walking might appear to be floating or not interacting realistically, as it is not specifically trained for human-like interactions.”

Talking about Unscript, she said, “Our video tool is perfect at not only generating lip sync, but generating it based on the individual. This is important because we are working with a lot of enterprise customers, like Ceat, Mahindra, Bajaj, Maxlife, Flowworks, and Healthifyme.”

For script generation, Unscript uses third-party LLM vendors like OpenAI. “For the LLM component, especially in documents and videos where end-to-end script generation is needed, we’ve trained and fine-tuned with 1 million ad copies,” said Chowdhury.

She also mentioned that they train their models using proprietary data that they have collected, as open-source datasets often do not cover all types of ethnicities. “We literally hired 30 people last year, and for six months, we focused solely on collecting data,” she said.

Meanwhile, Unscript also supports content generation in over 40 languages. The team has published more than 30 research papers and has researchers from Samsung Research, Microsoft, Intel, and various IITs. Moreover, the company is advised by a former employee of OpenAI.

Targeting Enterprises

Chowdhury also cautioned about certain issues with Luma and Sora. “You cannot maintain the brand image consistently in all the clips. When creating enterprise content, you need to have logos, colours, and other elements presented in a specific way. Luma and Sora are not built for enterprise videos,” she said.

Unscript provides its customers with unlimited video generation. “We only charge for the final videos that you use, not for the R&D you might need to do,” Chowdhury said.

Right: Ritwika Chowdhury with Sania Mirza.

Businesses use the platform’s services to create diverse content, from YouTube videos to marketing assets and customer communications. BFSI companies use Unscript to produce short explainer videos as a more engaging alternative to traditional policy documents.

Future Roadmap 

Chowdhury revealed that her journey with generative AI started in 2014 while she was at IIT Kharagpur. Founded in 2021 by Chowdhury and Apurv Jain, Unscript has since raised over $1.25 million. 

The company does not plan to raise funds in the near future. Chowdhury noted that they are continually experimenting with new products and are preparing to release a new feature, which will be announced soon.

While Unscript is operating in an interesting space, other companies are excelling there as well. 

The startup’s competitors include major names like OpenAI’s Sora, Kling, Runway ML, and Luma AI’s Dream Machine. Locally, InVideo, Phenomenal AI and Personate AI are also notable rivals, with Personate AI having developed AI anchors Krish and Bhoomi for Doordarshan.

The post This Bengaluru Startup is Competing with OpenAI Sora Heads-on appeared first on AIM.

]]>
Carrier’s Bold Moves in AI and Digital Transformation https://analyticsindiamag.com/ai-highlights/carriers-bold-moves-in-ai-and-digital-transformation/ Fri, 23 Aug 2024 11:08:32 +0000 https://analyticsindiamag.com/?p=10133630

Carrier is set to establish a 100+ employee AI Center of Excellence in India, aiming to propel the company to the forefront of AI innovation and advancement.

The post Carrier’s Bold Moves in AI and Digital Transformation appeared first on AIM.

]]>

In 2019, Carrier established its first Digital Hub in Hyderabad, India, coinciding with its transition to become an independent, publicly traded company in 2020. Led by Senior Vice President and Chief Digital Officer Bobby George, Carrier embarked on a digital transformation journey to modernise and strengthen the digital infrastructure of a 100-year-old organisation. 

Since then, Carrier’s success with Digital Hub India has led to the development of the “Connected Hubs” model, integrating hubs in Mexico and China with operations in Bengaluru and Hyderabad.

“Following our spin-off, where we prioritised goals, such as becoming more customer-centric, transforming our customer interactions, fostering innovation, and developing sustainable products, we recognized an internal talent deficit. This led us to make the strategic decision to establish a centre in Hyderabad, India,” shared Bobby George, SVP and CDO of Carrier, in a recent interview with AIM. 

“This move was pivotal in streamlining and accelerating digital deliveries worldwide, inspiring us to replicate the model across three countries, each with tailored focus areas.”

More Than a Delivery Center: A Strategic Partner

Today, Carrier’s India Hub is the company’s largest digital transformation centre, accounting for nearly 40% of Carrier’s digital talent across its two locations in Hyderabad and Bengaluru. Over the years, the hub has evolved into a strategic partner, collaborating with business units globally to develop innovative solutions and create significant impact.

“There’s no secret that India boasts the largest concentration of digital and engineering talent. Currently, Carrier is investing in artificial intelligence and data sciences. These skills will shape our future, and our hubs in Hyderabad and Bengaluru will play a pivotal role,” Bobby elaborated.

India: A Key Player in Carrier’s Global Strategy

Carrier’s strategy of interconnected hubs consolidates operations across Mexico, China, and India under unified leadership, fostering specialised talent pools. The Mexico hub pioneers automation and digital solutions for future factories, while the China hub enhances digital capabilities for the APAC region. 

Meanwhile, the India hub functions as the digital twin to the global headquarters, operating across multiple domains. India’s strategic location and versatile capabilities are crucial for bridging time zones and enhancing global coverage.

“Carrier Digital Hub India plays a pivotal role in our global strategy, accelerating our digital transformation journey. By leveraging India’s robust talent pool and strategic capabilities in cybersecurity, cloud operations, customer experience, IoT, and data science, we enhance operational agility and drive innovation across our enterprise,” emphasised Bobby.

This approach enables Carrier to swiftly implement innovative solutions, ensuring efficient product deliveries and streamlined modernization projects. The India hub significantly contributes to Carrier’s flagship products: Abound, a cloud-native platform for Healthy & Efficient buildings, and Lynx, a cold chain platform leveraging advanced data analytics, IoT, and machine learning.

Carrier’s Big Bet On AI 

Looking ahead, Carrier is set to establish a 100+ employee AI Center of Excellence in India, aiming to propel the company to the forefront of AI innovation and advancement. This strategic move underscores Carrier’s commitment to harnessing artificial intelligence to drive transformative change across its operations and offerings.

Focus On Talent Innovation & Inclusion at Carrier’s Digital Hubs

“At Carrier, we often say we are a 100-year-old company with the spirit of a startup. It’s essential that our employees have every opportunity to capture ideas, big or small, and continually innovate,” remarked Bobby George.

Carrier’s global digital teams have implemented an idea crowdsourcing program called iD8. This platform allows employees to submit their ideas during planned idea collation sprints and internal hackathons. Subject matter experts review these ideas, and selected concepts advance to the proof-of-concept stage.

Carrier’s culture of collaboration and respect has fueled its rapid growth in India, a trend poised to continue. The company is committed to hiring highly skilled deep tech talent in India. 

“Our strategic focus on acquiring top-tier talent is critical to advancing our digital transformation and innovation initiatives,” said Bobby George. “India’s extensive pool of tech professionals is pivotal to our global strategy. Furthermore, we are committed to recruiting fresh talent from colleges to infuse our teams with new perspectives and cutting-edge ideas, ensuring sustained innovation and growth.”

Empowering Women in Tech at Carrier

Carrier’s commitment to collaboration and equity extends to fostering an environment where women in tech can thrive. The AccelHerate program mentors women leaders, while the WomenUp program provides female employees with opportunities to interact with industry experts and gain valuable leadership insights.

“We are dedicated to the growth of women in tech. Our programs for skill development and recruitment of women candidates are a testament to this commitment. In Digital Hub India, our diversity ratio has increased from 2% to 24% in recent years, highlighting the progress of our initiatives. We remain committed to further enhancing diversity and inclusion,” Bobby emphasised.

All of this clearly shows that Carrier’s strategic focus on AI and digital transformation, anchored by its robust India Hub, positions the company at the forefront of innovation. With a strong emphasis on talent acquisition, collaboration, and diversity, Carrier is set to drive sustainable growth, empowering its global operations to meet future challenges effectively.

The post Carrier’s Bold Moves in AI and Digital Transformation appeared first on AIM.

]]>
Redis 8 Launches with AI Capabilities, Expands Developer Access https://analyticsindiamag.com/ai-news-updates/redis-8-launches-with-ai-capabilities-expands-developer-access/ Fri, 23 Aug 2024 10:02:34 +0000 https://analyticsindiamag.com/?p=10133609

Alongside Redis 8, the company unveiled Redis for AI, a new package designed to support the modern AI stack, and several other products aimed at enhancing developer workflows.

The post Redis 8 Launches with AI Capabilities, Expands Developer Access appeared first on AIM.

]]>

Redis has introduced Redis 8, a major update to its data platform, offering advanced features to its community of developers. Alongside Redis 8, the company unveiled Redis for AI, a new package designed to support the modern AI stack, and several other products aimed at enhancing developer workflows.

Redis 8 brings advanced features, previously available only in Redis Stack, to the Redis Community Edition. This update includes support for JSON, search, and vector databases, making it easier for developers to scale mobile, web, and AI applications. Redis 8 Community Edition is now available for general use.

Redis for AI is a comprehensive package designed to accelerate the deployment of GenAI applications. The package includes new products like RedisVL 0.3.0 and partner integrations such as langchain-redis and llama-index-vector-stores-redis. It aims to address challenges related to data speed, security, and integrity in AI development.

The company also introduced Redis Flex, an update to Redis on Flash, which reduces the cost of Redis deployments by up to 80% while maintaining performance. Redis Flex will be available for public preview soon.

Additionally, Redis Copilot, a virtual assistant integrated into Redis Insight, is now available, offering developers faster access to documentation and automated code generation.

Finally, Redis Data Integration (RDI) was launched to help customers synchronise data between existing databases and Redis with minimal setup, enabling a federated data stack. RDI is currently available on Redis Software, with plans to expand to Redis Cloud.

These announcements were made during Redis Released: Bengaluru, with additional events scheduled in Singapore, London, and New York City later this year. India boasts a staggering Redis user base of approximately 12 million downloads per day, making it the third-largest adopter globally, behind only the United States and China.

The post Redis 8 Launches with AI Capabilities, Expands Developer Access appeared first on AIM.

]]>
Salesforce Launches Two New AI Sales Agents: Einstein SDR and Einstein Sales Coach https://analyticsindiamag.com/ai-news-updates/salesforce-launches-two-new-ai-sales-agents-einstein-sdr-and-einstein-sales-coach/ Fri, 23 Aug 2024 07:26:35 +0000 https://analyticsindiamag.com/?p=10133595

Both tools will be generally available in October, aiming to enhance sales team productivity and efficiency through Salesforce’s Einstein 1 Agentforce Platform.

The post Salesforce Launches Two New AI Sales Agents: Einstein SDR and Einstein Sales Coach appeared first on AIM.

]]>

Salesforce has announced the launch of two new autonomous AI sales agents: Einstein SDR Agent and Einstein Sales Coach Agent. Both tools will be generally available in October, aiming to enhance sales team productivity and efficiency through Salesforce’s Einstein 1 Agentforce Platform.

https://twitter.com/salesforce/status/1826755785017217104

The Einstein SDR Agent autonomously engages with inbound leads, answering questions, handling objections, and scheduling meetings around the clock. It uses Salesforce CRM data and external information to provide accurate and personalised responses, allowing sales teams to focus on higher-value tasks. 

“We are excited about the potential of humans with AI to scale and close deals faster with this groundbreaking AI innovation to close deals with greater velocity and value. By integrating AI agents that can generate high fidelity pipeline and provide personalised coaching, our reps will be able to focus on higher value deals and better prepare for them,” said Sam Allen, EVP and chief pipeline officer at Salesforce.

The Einstein Sales Coach Agent offers role-playing scenarios and feedback to sales representatives. It simulates buyer interactions, helping sellers practice pitches and negotiations using generative AI. This agent provides objective feedback and tracks performance metrics. 

Accenture plans to leverage these agents to boost deal team effectiveness and manage more complex deals. “We are teaming with Salesforce to develop and optimize Einstein Sales Agents to improve deal team effectiveness, scale to support more deals, and allow our people to focus their time and effort on our most complex deals,” said Sara Porter, global sales excellence lead at Accenture.

The new AI agents can be set up using no-code actions and workflows, and Salesforce Data Cloud allows for further customisation by uploading relevant external information. Both agents also use the Einstein Trust Layer to ensure secure and trusted responses.

The post Salesforce Launches Two New AI Sales Agents: Einstein SDR and Einstein Sales Coach appeared first on AIM.

]]>
Bill Gates Turns to Computer Vision to Eradicate Malaria https://analyticsindiamag.com/ai-news-updates/bill-gates-turns-to-computer-vision-to-eradicate-malaria/ Fri, 23 Aug 2024 05:15:57 +0000 https://analyticsindiamag.com/?p=10133563

Introduces VectorCam, an app that identifies disease-carrying mosquitoes in a matter of seconds!

The post Bill Gates Turns to Computer Vision to Eradicate Malaria appeared first on AIM.

]]>

Microsoft co-founder Bill Gates has highlighted computer vision technology as a crucial development in the battle against malaria, a disease that kills over 600,000 people annually. 

In a blog post, Gates introduced VectorCam, an app developed by Dr. Soumya Acharya and his team at Johns Hopkins University, with support from the Gates Foundation and Uganda’s malaria control program. The app allows for the rapid identification of mosquito species, a critical task in controlling the spread of malaria.

VectorCam utilises a smartphone and an inexpensive lens to identify mosquitoes in seconds, distinguishing between species, determining their sex, and even assessing if a female mosquito has recently fed on blood or developed eggs. The technology is currently being tested in Uganda, where it is already proving useful in adjusting insecticide strategies and improving the speed and accuracy of mosquito surveillance.

The innovation addresses the challenges faced by vector control officers in Uganda, who are responsible for collecting, identifying, and reporting mosquito data, often from remote locations. VectorCam streamlines this process by allowing local health workers to perform identifications, freeing up vector control officers to focus on broader strategic efforts.

In addition to VectorCam, Gates mentioned another emerging tool, HumBug, which identifies mosquito species based on the sound of their wing beats. While still in early development, HumBug could further enhance automated and continuous mosquito monitoring.

Gates said that while identifying mosquito species is vital, new and better tools are also needed to eradicate malaria. He expressed optimism that these innovations could bring the world closer to achieving this goal.

The post Bill Gates Turns to Computer Vision to Eradicate Malaria appeared first on AIM.

]]>
Netflix Partners with Snowflake to Enhance Advertising Capabilities https://analyticsindiamag.com/ai-news-updates/netflix-partners-with-snowflake-to-enhance-advertising-capabilities/ Fri, 23 Aug 2024 05:00:18 +0000 https://analyticsindiamag.com/?p=10133561

Last week, Netflix partnered with LVMH, COTY, Gucci, Kaiku Caffee Latte, Aeromexico, Google, and Rakuten for the highly anticipated return of Emily in Paris.

The post Netflix Partners with Snowflake to Enhance Advertising Capabilities appeared first on AIM.

]]>

Snowflake has announced a partnership with Netflix to enhance the streaming giant’s advertising capabilities through the use of Snowflake’s Data Clean Rooms. This collaboration is set to play a pivotal role in creating a secure and privacy-safe environment for Netflix’s advertisers, enabling them to gain deeper insights and improve campaign performance.

Data Clean Rooms are becoming increasingly important in the digital advertising landscape as they allow multiple parties to collaborate securely with sensitive or regulated data while maintaining data privacy.

Snowflake’s Data Clean Rooms provide a controlled environment where Netflix’s advertisers can analyse data without compromising user privacy. This technology allows advertisers to determine audience overlap, post-campaign reach, frequency, and last-touch attribution in a secure manner.
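
Conceptually, this kind of privacy-safe overlap analysis can be computed on hashed identifiers rather than raw user data. The following is a minimal sketch of the idea only; real clean rooms add agreed query templates, aggregation thresholds, and never expose row-level matches:

```python
import hashlib

def hashed(ids, salt="shared-salt"):
    """Hash identifiers with an agreed salt so raw emails/IDs
    never leave each party's own environment."""
    return {hashlib.sha256((salt + i).encode()).hexdigest() for i in ids}

# Hypothetical identifier sets held separately by each party.
streamer_viewers = {"a@x.com", "b@x.com", "c@x.com"}
advertiser_crm = {"b@x.com", "c@x.com", "d@x.com"}

# Only the hashed sets are compared; the result is an aggregate count.
overlap = hashed(streamer_viewers) & hashed(advertiser_crm)
print(len(overlap))  # → 2
```

The key design point is that each side only ever shares salted hashes and receives aggregate statistics, never the other party’s underlying records.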

Netflix, which has been expanding its advertising business since introducing an ad-supported subscription tier in 2022, stands to benefit significantly from this partnership. The use of Snowflake’s Data Clean Rooms will enable Netflix to offer its advertisers more precise targeting and comprehensive analytics. This is crucial as Netflix continues to build its ad tech capabilities and aims to provide advertisers with effective ways to reach its highly engaged audience.

For Season 3 of Bridgerton—Netflix’s sixth most popular English-language TV series of all time—the company has secured multiple international on-screen title sponsors, including Pure Leaf, Amazon Audible, Puig, Booking.com, Stella Artois, and Hilton.

Last week, Netflix partnered with LVMH, COTY, Gucci, Kaiku Caffee Latte, Aeromexico, Google, and Rakuten for the highly anticipated return of Emily in Paris.

By leveraging Snowflake’s advanced data solutions, Netflix will offer a more robust advertising platform that aligns with modern privacy standards and meets the demands of advertisers looking for secure data collaboration tools.

Overall, the partnership between Snowflake and Netflix is a strategic step towards revolutionising how advertisers engage with audiences on streaming platforms, ensuring both privacy and effectiveness in digital advertising campaigns.

The post Netflix Partners with Snowflake to Enhance Advertising Capabilities appeared first on AIM.

]]>
ONDC’s ‘UPI Moment’ for E-Commerce Has Arrived  https://analyticsindiamag.com/ai-origins-evolution/ondcs-upi-moment-for-e-commerce-has-arrived/ Thu, 22 Aug 2024 11:01:13 +0000 https://analyticsindiamag.com/?p=10133505

To make it easier for customers to buy products and improve discoverability for sellers, Ola has also introduced an AI shopping co-pilot on ONDC.

The post ONDC’s ‘UPI Moment’ for E-Commerce Has Arrived  appeared first on AIM.

]]>

Indian commerce minister Piyush Goyal recently criticised e-commerce platforms like Amazon for using predatory pricing strategies, warning that such practices could harm local businesses and the Indian economy. The minister’s statement plays in favour of Bhavish Aggarwal, who recently announced his grand plan to disrupt the Indian e-commerce ecosystem.

Aggarwal supported Goyal, saying, “At Ola, we’re building the future of commerce with ONDC, which will enable kiranas and small merchants to reach consumers through digital networks. ONDC is the future.”

This comes after he declared ONDC “the UPI moment for e-commerce” at the recent Ola Sankalp 2024 event.

At the same event, he also announced that the company would change its name from Ola Cabs to Ola Consumer, as it expands its services beyond cabs. “We are going to offer a much broader suite of consumer services. Many of you have already used some of these. As I said, our ambition is to truly make commerce accessible, affordable, and efficient,” said Aggarwal.

For the same, Ola Consumer is integrating with ONDC to reduce the cost of commerce and expand service categories. Currently, Ola users can order food and beverages via ONDC through the Ola app, with plans to expand to groceries and fashion items. 

The company has also integrated a food delivery plugin within its app, enabling users to access a variety of restaurants and food brands listed on ONDC. This feature is in the pilot phase, available to Ola employees and a select group of consumers.

Aggarwal also plans to leverage the power of ONDC to take on the likes of Zepto and Blinkit. The company plans to implement fully automated dark stores and fulfilment centres to revolutionise warehousing and improve the commerce supply chain.

In addition, Ola is focusing on sustainable logistics by electrifying its delivery operations, which is expected to reduce logistics costs by about 50% and create additional jobs.

Can AI Help ONDC Achieve Its UPI Moment?

To make it easier for customers to buy products and improve discoverability for sellers, Ola has also introduced an AI shopping co-pilot on ONDC. “Your shopping experience on digital platforms is not linear and static. Imagine if there were an AI co-pilot guiding you along the way, personalising things, talking to you, and understanding your needs in real-time,” said Aggarwal.

Several major companies have joined ONDC to expand their market reach. These include Hindustan Unilever, ITC, Nestlé, PepsiCo, Dabur India, Godrej Consumer Products, Marico, and Tata Chemicals. 

Interestingly, Ola is not the only company banking on AI to turn ONDC into India’s e-commerce giant; other contributors are also pushing the boundaries of AI on ONDC. Plotch.ai, a Google-backed startup, is currently working with ONDC to build AI infrastructure and simplify e-commerce for consumers.

The company has developed an AI-powered conversational commerce app featuring multilingual, voice-enabled semantic search and robust image search capabilities.

“Multilingual voice-based conversational commerce is one piece of the AI that we’re building,” said Manoj Gupta, the founder of Plotch.ai, in an exclusive interview with AIM. “For instance, if you’re looking to buy a saree, jewellery, or a T-shirt and want to find the most affordable options, you can simply ask the AI, and it will sort them for you,” he explained.

Another Indian startup, Sarvam AI, recently introduced voice-based AI agents.

The cost of these agents starts at just one rupee per minute. According to co-founder Vivek Raghavan, enterprises can integrate these agents into their workflow without much hassle.

“These are going to be voice-based, multilingual agents designed to solve specific business problems. They will be available in three channels – telephony, WhatsApp, or inside an app,” Raghavan told AIM in an interaction prior to the event.

These agents could be integrated into contact centres and used for various applications across multiple industries, including insurance, food and grocery delivery, e-commerce, ride-hailing services, and even banking and payment apps. Raghavan further told AIM that Sarvam AI is working closely with Beckn Protocol, the underlying layer behind ONDC.

ONDC Is a Baby, Let it Grow

“ONDC is a very young, small baby, so we should let it grow. And I’m pretty sure that by 2030, we will see 100 million to 200 million transactions happening a month,” said Pramod Varma, former chief architect of Aadhaar, in an exclusive interview with AIM, when asked why ONDC is not seeing UPI-like success.

“ONDC is much broader. On ONDC, you will see taxi bookings happening through Yatri, metro ticketing being integrated, and physical goods like grocery commerce being added. Food delivery is also starting to kick in. So, it’s what’s called multi-sectoral commerce. It does take a little more complexity to unravel,” explained Varma.

ONDC logged a 21% month-on-month growth in transactions to 12 million in July 2024, compared to 10 million a month ago. Nationwide, ONDC handles 60,000 food orders daily, capturing 3% of the total order volumes managed by Swiggy and Zomato across India.

ONDC is also set to integrate nearly all metro services into its network by next year, according to MD and CEO Thampy Koshy. Currently, India has an operational metro network spanning 905 kilometres across over 20 cities, with the Kochi and Chennai metros already partnering with ONDC to offer ticketing services through platforms like Namma Yatri, Rapido, and redBus.

Koshy said, “We are talking to every metro to become ONDC participants. Some of the talks are in an advanced stage. By the end of the coming year, all metros are likely to be part of ONDC.”

The post ONDC’s ‘UPI Moment’ for E-Commerce Has Arrived  appeared first on AIM.

]]>
Selling AI Models Is Turning into a Zero-Margin Business https://analyticsindiamag.com/ai-origins-evolution/selling-ai-models-is-turning-into-a-zero-margin-business/ Thu, 22 Aug 2024 05:10:13 +0000 https://analyticsindiamag.com/?p=10133458

…with more risk than reward, as tech giants are offering their AI models for dirt cheap, and in some cases for free, in an attempt to get into the application layer.

The post Selling AI Models Is Turning into a Zero-Margin Business appeared first on AIM.

]]>

The AI market is currently crowded with various models, as major players such as OpenAI, Meta, and Google continuously refine their offerings. However, the question remains: which models will developers choose, and is it feasible for AI startups and big tech companies to offer these models for free?

“If you’re only selling models, for the next little while, it’s gonna be a really tricky game,” said Cohere founder Aidan Gomez in a recent interview. By selling models, he meant selling API access to those AI models. OpenAI, Anthropic, Google, and Cohere offer this service to developers, and they all face a similar problem.

“It’s gonna be like a zero margin business because there’s so much price dumping. People are giving away the model for free. It’ll still be a big business, it’ll still be a pretty high number because people need this tech — it’s growing very quickly — but the margins, at least now, are gonna be very tight,” he explained. 

Interestingly, OpenAI made just $510 million from API services, while it made $1.9 billion from ChatGPT subscriptions.

Gomez hinted that Cohere might offer more than just LLMs in the future. “I think the discourse in the market is probably right to point out that value is occurring both beneath, at the chip layer—because everyone is spending insane amounts of money on chips to build these models in the first place—and above, at the application layer.”

Recently, Adept was acqui-hired by Amazon, Inflection by Microsoft, and Character.ai by Google. “There will be a culling of the space, and it’s already happening. It’s dangerous to make yourself a subsidiary of your cloud provider. It’s not good business,” said Gomez.

Sully Omar, Co-founder and CEO of Cognosys, echoed similar sentiments and said, “It won’t be long until we see options like ‘login’ with OpenAI/Anthropic/Gemini. In the next 6-8 months, we’re likely to see products that use AI at a scale 100 times greater than today.” 

He added that from a business standpoint, it doesn’t make sense to upsell customers on AI fees. “I’d rather charge based on the value provided,” he said.

Omar noted that the current system, which relies on API keys, is cumbersome for most users. “90% of users don’t understand how they work. It’s much easier for users to sign into ChatGPT, pay for compute to OpenAI/Gemini, and then use my app or service at a lower price,” he explained. 

He also criticised the credits-based pricing model, suggesting that it is ineffective as it requires constantly managing margins on top of LLM fees.

The rise of LLMs has ignited another debate: will generative AI lead to more APIs or the end of APIs?

“The AI model market is mirroring the early days of cloud computing, where infrastructure (IaaS) was a low-margin game. As cloud providers realised, value creation shifted towards higher-margin services like SaaS and PaaS, layering specialised applications on top of core infrastructure,” said Pradeep Sanyal, AI and ML Leader at Capgemini. 

“AI startups must move beyond selling raw models to offering differentiated, application-focused solutions,” he explained. 

Google and OpenAI Compete for Developer Attention

OpenAI recently announced the launch of fine-tuning for GPT-4o, addressing a highly requested feature from developers. As part of the rollout, the company is offering 1 million training tokens per day for free to all organisations through September 23.

The cost for fine-tuning GPT-4o is set at $25 per million tokens. For inference, the charges are $3.75 per million input tokens and $15 per million output tokens. Additionally, GPT-4o mini fine-tuning is available to developers across all paid usage tiers. 

This development comes after Google recently reduced the Gemini 1.5 Flash input price by 78% to $0.075 per million tokens and the output price by 71% to $0.30 per million tokens for prompts under 128K tokens, with the reductions cascading to the >128K-token tier as well as to context caching.

Moreover, Google is giving developers 1.5 billion free tokens every day through the Gemini API. The Gemini 1.5 Flash free tier includes 15 requests per minute (RPM), 1 million tokens per minute (TPM), and 1,500 requests per day (RPD). Users also benefit from free context caching, allowing up to 1 million tokens of storage per hour, as well as complimentary fine-tuning services.

https://twitter.com/OfficialLoganK/status/1825656369627935069

Logan Kilpatrick, Lead at Google AI Studio, said that they are likely to offer free tokens for the next several months.

Meanwhile, OpenAI recently launched GPT-4o mini, priced at $0.15 per million input tokens and $0.60 per million output tokens. This model is significantly more affordable than previous frontier models and over 60% cheaper than GPT-3.5 Turbo. The GPT-4o mini retains many of GPT-4o’s capabilities, including vision support, making it suitable for a broad range of applications.

Additionally, OpenAI has reduced the price of GPT-4o. With the new GPT-4o-2024-08-06, developers can save 50% on input tokens ($2.50 per million) and 33% on output tokens ($10.00 per million) compared to the GPT-4o-2024-05-13 model.
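At these per-million-token rates, the cost of a request is a simple product of token counts and prices. A minimal sketch, with prices hardcoded from the figures quoted above (actual billing may differ and prices change frequently):

```python
# Per-million-token prices in USD, taken from the figures above.
PRICES = {
    "gpt-4o-2024-08-06": {"input": 2.50, "output": 10.00},
    "gpt-4o-2024-05-13": {"input": 5.00, "output": 15.00},
    "gpt-4o-mini": {"input": 0.15, "output": 0.60},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one request at the listed per-million rates."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a 2,000-token prompt with a 500-token completion on the new GPT-4o
cost = request_cost("gpt-4o-2024-08-06", 2000, 500)  # $0.01
```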

https://x.com/ofermend/status/1822783034296512597

Meta’s Llama 3.1 is a Game Changer

According to Harneet Singh, founder of Rabbitt AI, Meta’s latest model, Llama 3.1 70B, is the most cost-effective option, priced at $0.89 per million tokens while offering capabilities similar to OpenAI’s GPT-4o. “This cost-benefit ratio makes it an attractive choice for budget-conscious enterprises,” he said.

The company used Groq’s hosted APIs for Llama 3.1.

The Llama 3.1 70B model has an input token price of $0.59 per million tokens and an output token price of $0.79 per million tokens, with a context window of 8,000 tokens on Groq.

In contrast, the 8B model features a more affordable pricing structure, with input tokens costing $0.05 per million and output tokens priced at $0.08 per million, also with a context window of 8,000 tokens.

In comparison, inference for Llama 3.1 405B costs $3 per million input tokens and $5 per million output tokens on Fireworks.

“I don’t think it’s possible to make it cheaper without some loss in quality. GPT-4o mini offering a comparable quality costs $0.15 per 1M input tokens and $0.6 per 1M output tokens (and half this price when called in batches),” said Andriy Burkov, machine learning lead at TalentNeuron.

“The math here is broken. Either OpenAI managed to distill a ~500B parameter model into a ~15B parameter model without a quality loss, or they use crazy dumping. Any ideas?” he pondered.

Conclusion

While open-source models like Llama 3.1 70B offer remarkable cost efficiency, proprietary models such as GPT-4o deliver unparalleled quality and speed, albeit at a higher price point. 

GPT-4o provides the most comprehensive multimodal capabilities, supporting text, image, audio, and video inputs. It is suitable for applications requiring diverse input types and real-time processing.

Gemini 1.5 Flash integration with Google’s ecosystem can be a significant advantage for businesses already using Google’s services, offering seamless integration and additional functionalities.

The choice of model thus largely depends on the specific needs and budget constraints of the enterprise.

The post Selling AI Models Is Turning into a Zero-Margin Business appeared first on AIM.

]]>
Microsoft Launches New Phi-3.5 Models, Outperforms Google Gemini 1.5 Flash, Meta’s Llama 3.1, and OpenAI’s GPT-4o https://analyticsindiamag.com/ai-news-updates/microsoft-launches-new-phi-3-5-models-outperforms-google-gemini-1-5-flash-metas-llama-3-1-and-openais-gpt-4o/ Wed, 21 Aug 2024 05:51:47 +0000 https://analyticsindiamag.com/?p=10133341

The Phi-3.5 models are now available on the AI platform Hugging Face under an MIT license, making them accessible for a wide range of applications.

The post Microsoft Launches New Phi-3.5 Models, Outperforms Google Gemini 1.5 Flash, Meta’s Llama 3.1, and OpenAI’s GPT-4o appeared first on AIM.

]]>

Microsoft has released the new Phi-3.5 models: Phi-3.5-MoE-instruct, Phi-3.5-mini-instruct, and Phi-3.5-vision-instruct. The Phi-3.5-mini-instruct, with 3.82 billion parameters, is built for basic and quick reasoning tasks. 

The Phi-3.5-MoE-instruct, with 41.9 billion parameters, handles more advanced reasoning. The Phi-3.5-vision-instruct, with 4.15 billion parameters, is designed for vision tasks like image and video analysis.

Phi-3.5-MoE-instruct

Phi-3.5-MoE-instruct is a 42-billion-parameter open-source model that demonstrates significant improvements in reasoning capabilities, outperforming models such as Llama 3.1 8B and Gemma 2 9B across various benchmarks.

Despite its competitive performance, Phi-3.5-MoE falls slightly behind GPT-4o-mini but surpasses Gemini 1.5 Flash in benchmarks. The model supports multilingual applications, although the specific languages covered remain unclear.

https://twitter.com/rohanpaul_ai/status/1825984804451463210

Phi-3.5-MoE features 16 experts, two of which are activated during generation, engaging 6.6 billion parameters in each inference. The model extends its context length to 128,000 tokens.
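
The routing arrangement described here, 16 experts with only the top two activated per token, is the standard mixture-of-experts pattern: a learned gate scores every expert, and only the highest-scoring ones run, which is why the active parameter count is far below the total. Below is a toy NumPy illustration of top-k gating with made-up sizes, not Microsoft's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS, TOP_K, D = 16, 2, 8  # 16 experts, top-2 routing; D is a toy hidden size

# Toy experts: each expert is a single linear map (real experts are MLPs).
experts = rng.standard_normal((NUM_EXPERTS, D, D))
gate_w = rng.standard_normal((D, NUM_EXPERTS))

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route a single token vector x through the top-k experts."""
    logits = x @ gate_w                # (NUM_EXPERTS,) routing scores
    top = np.argsort(logits)[-TOP_K:]  # indices of the two highest-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()           # softmax over the selected experts only
    # Weighted sum of the chosen experts' outputs; the other 14 experts are
    # skipped entirely, so only a fraction of parameters is active per token.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

y = moe_layer(rng.standard_normal(D))
```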

The model was trained over 23 days using 512 H100-80G GPUs, with a total training dataset of 4.9 trillion tokens.

The model’s development included supervised fine-tuning, proximal policy optimisation, and direct preference optimisation to ensure precise instruction adherence and robust safety measures. The model is intended for use in memory and compute-constrained environments and latency-sensitive scenarios.

Key use cases for Phi-3.5-MoE include general-purpose AI systems, applications requiring strong reasoning in code, mathematics, and logic, and as a foundational component for generative AI-powered features. 

The model’s tokenizer supports a vocabulary size of up to 32,064 tokens, with placeholders for downstream fine-tuning. Microsoft provided a sample code snippet for local inference, demonstrating its application in generating responses to user prompts.

Phi-3.5-mini-instruct

With 3.8 billion parameters, this model is lightweight yet powerful, outperforming larger models such as Llama 3.1 8B and Mistral 7B. It supports a 128K token context length, significantly more than its main competitors, which typically support only up to 8K.

Microsoft’s Phi-3.5-mini is positioned as a competitive option in long-context tasks such as document summarisation and information retrieval, outperforming several larger models like Llama-3.1-8B-instruct and Mistral-Nemo-12B-instruct-2407 on various benchmarks. 

The model is intended for commercial and research use, particularly in memory and compute-constrained environments, latency-bound scenarios, and applications requiring strong reasoning in code, math, and logic. 

The Phi-3.5-mini model was trained over 10 days using 512 H100-80G GPUs. The training process involved processing 3.4 trillion tokens, leveraging a combination of synthetic data and filtered publicly available websites to enhance the model’s reasoning capabilities and overall performance.

Phi-3.5-vision-instruct 

Phi-3.5-vision-instruct is a 4.2-billion-parameter model that excels in multi-frame image understanding and reasoning. It has shown improved performance on benchmarks like MMMU, MMBench, and TextVQA, demonstrating its capability in visual tasks. It even outperforms OpenAI’s GPT-4o on several benchmarks.

The model integrates an image encoder, connector, projector, and the Phi-3 Mini language model. It supports both text and image inputs and is optimised for prompts using a chat format, with a context length of 128K tokens. The model was trained over 6 days using 256 A100-80G GPUs, processing 500 billion tokens that include both vision and text data.

The Phi-3.5 models are now available on the AI platform Hugging Face under an MIT license, making them accessible for a wide range of applications. This release aligns with Microsoft’s commitment to providing open-source AI tools that are both efficient and versatile.

The post Microsoft Launches New Phi-3.5 Models, Outperforms Google Gemini 1.5 Flash, Meta’s Llama 3.1, and OpenAI’s GPT-4o appeared first on AIM.

]]>
Meta Launches Metamate, an AI-Powered Assistant for Internal Teams https://analyticsindiamag.com/ai-news-updates/meta-launches-metamate-an-ai-powered-assistant-for-internal-teams/ Tue, 20 Aug 2024 15:55:12 +0000 https://analyticsindiamag.com/?p=10133325

The product offers a wide range of functions, including document summarisation, work recaps, information retrieval across wikis, data visualisation, and more.

The post Meta Launches Metamate, an AI-Powered Assistant for Internal Teams appeared first on AIM.

]]>

Soumith Chintala, an AI lead at Meta, has introduced Metamate, a generative AI product developed to improve internal productivity at the company. 

Developed in collaboration with Aparna Ramani, Zach Rait, and others, Metamate was created to address the unique needs of a large organisation like Meta, which has numerous internal-specific workflows.

Metamate’s capabilities extend beyond its base experience, which is similar to Perplexity AI, by allowing users to build custom agents directly in the browser using a Python-like scripting language. These agents can be tailored to specific teams or tasks, such as on-call bots or project-specific tools.

The product offers a wide range of functions, including document summarisation, work recaps, information retrieval across wikis, data visualisation, and more. Some specific use cases include managing performance feedback, analysing software changes, and providing project status updates. The tool is also capable of handling tasks such as performing mathematical calculations and generating complex queries.

“You’ve never used Metamate if you don’t work at Meta. It’s an AI for employees that’s trained on an enormous corpus of internal company docs. I use it all the time for efficiency gains,” said Esther Crawford, director of product at Meta.

“Any sizable company operating without an internal AI tool is already behind the curve,” she added. 

Chintala’s announcement highlights the integration of machine learning in Metamate, developed with Shahin Sefati, which further enhances its functionality across various internal systems at Meta.

The post Meta Launches Metamate, an AI-Powered Assistant for Internal Teams appeared first on AIM.

]]>
Samantha Ruth Prabhu Supports SAWiT.AI, a Generative AI Learning Challenge for 500,000 Indian Women https://analyticsindiamag.com/ai-news-updates/samantha-ruth-prabhu-supports-sawit-ai-a-generative-ai-learning-challenge-for-500000-indian-women/ Tue, 20 Aug 2024 12:43:35 +0000 https://analyticsindiamag.com/?p=10133301

Actress Samantha Ruth Prabhu has announced her support for SAWiT.AI, the world’s largest women-only Generative AI Learning Challenge. Scheduled for September 21, the event aims to equip 500,000 Indian women with foundational skills in generative AI through hands-on training with leading AI tools. “On 21st September, 500,000 Indian women will gain foundational Gen AI skills […]

The post Samantha Ruth Prabhu Supports SAWiT.AI, a Generative AI Learning Challenge for 500,000 Indian Women appeared first on AIM.

]]>

Actress Samantha Ruth Prabhu has announced her support for SAWiT.AI, the world’s largest women-only Generative AI Learning Challenge. Scheduled for September 21, the event aims to equip 500,000 Indian women with foundational skills in generative AI through hands-on training with leading AI tools.

“On 21st September, 500,000 Indian women will gain foundational Gen AI skills through hands-on experience with leading AI tools. If you’re as fascinated by tech as I am, then SAWiT.AI is the right place for you to build your skill and feed your curiosity,” Prabhu said in an Instagram post. 

Interestingly, her promo video is also made using generative AI.

SAWiT.AI, organised by the SAWiT network, is focused on empowering women participants with practical skills in generative AI. The initiative seeks to create a global narrative positioning Indian women as leaders in the development of AI technology for positive impact.

It is a collaborative effort between SAWiT (South Asian Women in Tech) and GUVI, an ed-tech startup incubated by IIT Madras. 

In her statement, Prabhu encouraged women interested in technology to participate, emphasising the broader goal of breaking barriers and shaping the future of AI. She also challenged others to celebrate women who inspire by overcoming challenges in their daily lives.

Prabhu highlighted that SAWiT.AI is not just about learning tech skills but about changing the future. The initiative plans to hand 500,000 Indian women the tools to lead in the ongoing generative AI revolution. 

Prabhu urged others to join the movement, reinforcing the idea of building a future where Indian women are at the forefront of the technology revolution.

The initiative will provide hands-on experience with leading AI tools, supported by a distinguished advisory council that includes Roshni Nadar Malhotra, Chairperson of HCL Tech and women’s empowerment advocate, and Farzana Haque, a Senior Leader at Tata Consultancy Services.

The SAWiT.AI initiative comprises three main events:

  • SAWiT.AI Learnathon (September 21, 2024): A hands-on learning experience in Generative AI.
  • SAWiT.AI Hackathon (October 2024): The world’s largest women-led Generative AI challenge, where teams develop advanced AI applications.
  • SAWiT.AI Festival (November 2024): Celebrating Generative AI innovation, awarding challenge winners, and recognising pioneering institutions, partners, and sponsors.

Women from both tech and non-tech backgrounds are encouraged to apply. Interested candidates can register at the SAWiT.AI registration page for a nominal fee of INR 499, with the deadline for registration set for September 18, 2024.

The post Samantha Ruth Prabhu Supports SAWiT.AI, a Generative AI Learning Challenge for 500,000 Indian Women appeared first on AIM.

]]>
Former Meta COO Sheryl Sandberg to Invest in Bengaluru AI Startup Simplismart https://analyticsindiamag.com/ai-news-updates/former-meta-coo-sheryl-sandberg-to-invest-in-bengaluru-ai-startup-simplismart/ Tue, 20 Aug 2024 11:07:58 +0000 https://analyticsindiamag.com/?p=10133256

Sandberg, known for her influential role in the tech industry, is participating in a $7 million funding round for the startup, which is being led by Accel.

The post Former Meta COO Sheryl Sandberg to Invest in Bengaluru AI Startup Simplismart appeared first on AIM.

]]>

Former Meta COO Sheryl Sandberg is set to make a significant investment in the Bengaluru-based generative AI startup Simplismart, according to a report by Entrackr.

Sandberg, known for her influential role in the tech industry, is participating in a $7 million funding round for the startup, which is being led by Accel.

Simplismart, founded by BITS Pilani alumni Amritanshu Jain and Devansh Ghatak, offers a high-speed inference engine designed to help customers deploy generative AI models efficiently across various cloud providers. 

The company has previously received backing from Titan Capital, Shastra VC, and First Cheque in earlier funding rounds. Some of its prominent customers include Vodex, Dubverse, and Mobavenue.

Before founding Simplismart, Jain held machine learning engineering positions at Oracle and Capillary Technologies. Ghatak, meanwhile, gained experience in engineering roles at Avaamo and Google. Before this investment, Sandberg’s only known involvement with the Indian startup ecosystem was as a client of Iconiq Capital, a U.S.-based investment management firm that has backed companies like Flipkart.

Despite reports suggesting Sandberg’s involvement, Simplismart’s co-founder Jain stated in another report that she is not participating in the current funding round.

Sandberg’s investment history in India includes another generative AI startup, Ema, which focuses on enhancing productivity by automating complex workflows.

The increasing interest in Indian AI startups from Silicon Valley investors is notable, with Simplismart being among several companies attracting significant attention. 

Bengaluru-based startup Sarvam AI recently introduced a suite of business-to-business products powered by its generative AI models. The company has secured investments from major firms, including Lightspeed Venture Partners, Peak XV Partners, and Khosla Ventures—one of the early backers of OpenAI. Last December, Sarvam AI completed a $41 million funding round.

This development follows a recent $52 million funding round by Fireworks AI, another generative AI startup in Sandberg’s portfolio, which was led by Sequoia Capital with participation from NVIDIA, AMD, and MongoDB Ventures.

Investors have observed that generative AI startups, particularly in seed and series-A rounds, are raising funds at high revenue multiples, making them some of the most expensive deals in the venture capital landscape.

The post Former Meta COO Sheryl Sandberg to Invest in Bengaluru AI Startup Simplismart appeared first on AIM.

]]>
You’re Not a Real Artist if You Aren’t Using Generative AI  https://analyticsindiamag.com/ai-origins-evolution/youre-not-a-real-artist-if-you-arent-using-generative-ai/ Tue, 20 Aug 2024 09:52:04 +0000 https://analyticsindiamag.com/?p=10133234

Is this the beginning of the end for Procreate?

The post You’re Not a Real Artist if You Aren’t Using Generative AI  appeared first on AIM.

]]>

Australian company Savage Interactive, which developed Procreate, a digital painting and illustration app, recently announced that it will not be incorporating generative AI into its offerings.

“I prefer that our products speak for themselves. I really f***ing hate generative AI. I don’t like what’s happening in the industry, and I don’t like what it’s doing to artists. We’re not going to be introducing any generative AI into our products,” said CEO James Cuda.

Ironically, CUDA is also the name of one of NVIDIA’s most popular tools for parallel computing and GPU connectivity, and plays a crucial role in enabling generative AI today.

“Our products are always designed and developed with the idea that a human will be creating something. You know, we don’t exactly know where this story is going to go or how it ends, but we believe that we’re on the right path supporting human creativity,” he added. 

This statement is bold given that the industry is increasingly adopting generative AI. Adobe, Procreate’s main competitor, is actively integrating generative AI into its creative tools to boost creativity and productivity.

“Is this the beginning of the end for Procreate? Hating on generative AI and clearly admitting that they’re never going to integrate AI features oh boy. As he said, he doesn’t know how it will end, but one thing is sure: it won’t end well,” said AI expert Ashutosh Shrivastava on X. 

Many in the artist community have wholeheartedly supported Procreate’s stance on generative AI. However, not everyone is aligned. “People aren’t looking at this from a business perspective. Right now a decent subset of artists hate AI, so it makes sense to try and target that market if it’s large enough,” posted a user on Hacker News. 

“If artists suddenly started loving AI tomorrow, this pledge would be out the window. It’s just business and marketing – nothing more, nothing less,” he explained. 

This development has certainly pleased artists, but for how long? Generative AI is set to become a crucial tool for creating new art. For instance, today, a majority of social media is filled with AI-generated images, and recently, we’ve seen the impressive capabilities of Flux integrated into Grok 2, which can create remarkably realistic images.

Artists Should Embrace Generative AI

Artists should not feel disheartened about using generative AI. Art has always evolved with the emergence of new technologies. Just as digital art emerged in the early 2000s and made the lives of graphic designers easier, generative AI is set to do the same. 

It also allows non-artistic individuals to experiment with art and create something new. For instance, now even an amateur can create AI-generated videos without any prior knowledge of filmmaking.

“I love artists. I have friends and family making a living in the arts. I went to art school. The artists who aren’t using generative AI to accelerate their process are, IMO, going to go extinct. Especially collaborative art,” posted a user on X. 

“AI art is real art, and there’s no shying away from this statement,” declared a 19-year-old artist who faced criticism for selling AI-generated artwork on Church Street in Bengaluru. 

Speaking to AIM, Ashok Reddy, a graphic designer at GrowthSchool, said, “It wasn’t a task I completed in one day; it was a collection of efforts over many months.” He emphasised that his images were original, generated from scratch, and not copied from any other creator or existing works. 

In a different approach to the AI art scene, David Sandonato, an Italian digital artist, began selling Midjourney prompt catalogues on PromptBase, a marketplace for AI art prompts. Today, he is the top-ranked artist on the platform, offering a library of 4,000 to 5,000 prompts, with new uploads daily.

In a recent interview, Sandonato said, “It began as a side hustle, but I’m convinced that this business has big space to grow when people will realise that today 50% of the images available in the top microstock agencies can be generated in full quality with a good prompt.” 

Recently, self-proclaimed career guru Priyank Ahuja shared an intriguing post on X that read, “ChatGPT and Canva will help you earn an extra $15,000/month.” He followed it up with a series of video demonstrations on how to use the tools for simple tasks like designing T-shirts, creating creative Instagram ads, and making YouTube Shorts.

Another user on Reddit said that he had seen AI artists make $1,000 a month selling adult-themed content. “AI art is a different animal, and making money with it is going to look different than the traditional art community,” he said. 

AI artists around the world are gaining prominence. Refik Anadol, a Turkish-American new media artist, captivated audiences with his work at the intersection of art and artificial intelligence at NVIDIA GTC earlier this year. 

Yet, when an artist posts something generated using AI, many often dismiss it as not being real art and pile on unwarranted criticism.

“As for the pro-AI community, we don’t have to tolerate aggressive behaviours and continual hyper-protective mentalities; you do have the right to show your work freely and without hate. Yes, you should develop your visual style in your work, but you should also be free to express love and passion for people with whatever tools you want. That is true inclusivity for everyone to learn to do,” posted one pro-AI artist on Reddit.

He argued that individuals often punish themselves unnecessarily instead of seeking out tools that address their weaknesses, develop their foundational skills, and enhance their artistic abilities.

In India, designers often pursue a BDes or BFA degree, which offers thorough training in Adobe Creative Suite. With the growing prominence of generative AI, it’s crucial for designers to also learn these new skills, as they can greatly boost productivity.

In-house graphic designers at AIM also feel that with generative AI features, it has become increasingly easy to autofill and regenerate images, tasks that previously took a considerable amount of time.

A major concern artists have with generative AI is that it relies on data scraped from the internet without adequately crediting them. While this is a legitimate issue, simply avoiding generative AI is not the answer. Beatoven.ai, an Indian generative AI music company, is leading the way by paying royalties to the artists whose music is used to train its models.

Similarly, Adobe is reportedly compensating artists and photographers for providing images and videos to train its artificial intelligence models. According to the report, Adobe pays between 6 and 16 cents per photo and an average of $2.62 per minute of video.

Competitors Are Betting Big on Generative AI

Procreate competitors are actively integrating generative AI into their products and services. 

Adobe’s generative AI platform, Firefly, offers capabilities such as text-to-image, which allows users to generate images from text prompts, expanding creative possibilities in applications like Photoshop. It also includes generative fill, enabling users to seamlessly add or remove elements from images, and generative shape fill and remove, which provide options to fill vector outlines and eliminate unwanted elements from images.

Similarly, Canva is bullish on generative AI. The Australian design company recently acquired Leonardo.Ai, a startup renowned for its generative AI content and research. Leonardo.Ai develops AI models for image creation and caters to diverse industries such as fashion, advertising, and architecture; some call it Midjourney’s biggest competitor.

Canva has also introduced Canva Magic Media, which allows users to create images and videos from text prompts. Currently, around 180 million users worldwide use Canva. On the other hand, Procreate has over 30 million users.

P.S. The banner for this article was not created using generative AI.

The post You’re Not a Real Artist if You Aren’t Using Generative AI  appeared first on AIM.

]]>
Infosys to Earn $100M+ in Coca-Cola’s Major Cloud Deal with Microsoft https://analyticsindiamag.com/ai-news-updates/infosys-to-earn-100m-in-coca-colas-major-cloud-deal-with-microsoft/ Tue, 20 Aug 2024 05:09:54 +0000 https://analyticsindiamag.com/?p=10133165

Microsoft, the lead player in the deal, has enlisted Infosys to provide critical support services, capitalising on the Indian IT giant's expertise in cloud technologies.

The post Infosys to Earn $100M+ in Coca-Cola’s Major Cloud Deal with Microsoft appeared first on AIM.

]]>

Infosys is set to earn over $100 million as a key supporting partner in Coca-Cola’s $1.1-billion cloud migration deal with Microsoft, according to a recent report by the Economic Times. 

The report added that the agreement, signed in April, marks a significant move by Coca-Cola to transition its operations to the cloud, enhancing its digital infrastructure and capabilities. Microsoft, the lead player in the deal, has enlisted Infosys to provide critical support services, capitalising on the Indian IT giant’s expertise in cloud technologies.

This partnership highlights Infosys’ expanding role in large-scale global digital transformation initiatives, with a strong focus on cloud migration services. The company’s involvement in such a high-profile partnership is expected to bolster its revenue stream significantly, further solidifying its position in the competitive IT services market.

The Indian IT giant’s involvement in the project is expected to bring in significant revenue, with over $27 million already secured from the Europacific geography alone, according to regulatory filings with the US Securities and Exchange Commission (SEC).

The filings reveal that Coca-Cola Europacific Partners PLC, a subsidiary of The Coca-Cola Company, committed €167 million to Microsoft for Azure cloud migration services over a six-year period. Additionally, €25 million has been earmarked for Infosys as a supporting partner in this initiative.

In April 2024, Microsoft and Coca-Cola announced a five-year strategic partnership aimed at aligning Coca-Cola’s core technology strategy and fostering innovation and productivity worldwide. As part of the agreement, Coca-Cola committed $1.1 billion to Microsoft Cloud and its generative AI capabilities. The companies plan to explore new technologies, including Azure OpenAI Service, to develop innovative AI use cases across various business functions.

Infosys has been a key partner with Microsoft in AI-driven initiatives. Last September, the two companies announced a collaboration to help enterprises adopt an AI-first approach to scale next-generation AI solutions, improve operational efficiencies, drive revenue growth, and enable business transformation.

Neither Infosys nor Coca-Cola has officially commented on the specifics of the financial arrangement, but industry experts suggest that Infosys’ role could be crucial in the successful execution of the cloud migration process. The deal is also expected to strengthen Infosys’ relationship with Microsoft, potentially leading to future collaborations in similar large-scale projects.

The post Infosys to Earn $100M+ in Coca-Cola’s Major Cloud Deal with Microsoft appeared first on AIM.

]]>