Pritam Bordoloi, Author at AIM
https://analyticsindiamag.com/author/pritam-bordoloianalyticsindiamag-com/

Revrag Unveils Its First AI Agent Emma
https://analyticsindiamag.com/ai-news-updates/revrag-unveils-its-first-ai-agent-emma/ | Mon, 02 Sep 2024

Emma integrates with popular tools like Slack and HubSpot

Revrag, a startup developing AI agents, has announced the launch of its first AI-powered solution, Emma.

Emma is designed to revolutionise the initial stages of customer outreach and lead generation. Seamlessly integrating with popular tools like Slack and HubSpot, Emma streamlines the sales process, making it faster and more efficient for teams. With Emma, sales teams can easily:

  • Set up campaigns to target potential customers.
  • Customise outreach efforts based on specific business needs.
  • Save valuable time by automating the initial contact stages.
  • Potentially reach a larger pool of leads in less time.

“The SaaS world is evolving rapidly, and we see enormous potential for Generative AI in the sales industry, a sentiment echoed by recent market trends. AI agents like Emma represent a transformative technology that’s reshaping business operations globally. We aim to fill existing gaps and lead the market towards greater innovation. For small businesses and startups, AI agents like Emma offer a unique opportunity to level the playing field,” Ashutosh Singh, CEO of Revrag, said.

Revrag’s AI agents are sophisticated systems designed to function autonomously, learn from their environment, and achieve specific goals with minimal human intervention.

Revrag’s journey has been bolstered by an impressive $600k in pre-seed funding, reflecting investor confidence in the company’s vision. As the industry eagerly anticipates the launch of Emma, Revrag is set to challenge traditional sales processes and potentially usher in a new era in enterprise sales.

Backed by prominent angel investors and a powerhouse venture, Revrag is poised to make a significant impact in the competitive SaaS landscape.

The Bengaluru-based startup has attracted the attention of industry heavyweights, including Viral Bajaria, Co-founder of 6sense, Kunal Shah, Founder of Cred, Deepal Anchala, Founder of Slintel, Vetri Vellore, Founder of Rhythms, and 20 other renowned investors.

How Joule is Helping SAP Find Moat in Spend Management
https://analyticsindiamag.com/intellectual-ai-discussions/how-joule-is-helping-sap-find-moat-in-spend-management/ | Mon, 02 Sep 2024

SAP customers are now reaping the benefits of Ariba, Concur and Fieldglass – companies which SAP has acquired over the years – under one platform.

In today’s fast-paced world, it is vital for organisations to optimise procurement, manage supply chains and control costs effectively. For instance, cocoa prices have surged over 400% this year, posing a significant challenge for chocolate manufacturers.

Spend management tools provide comprehensive visibility into a company’s spending patterns, streamlining procurement processes. Generative AI could further optimise spend management by providing predictive analytics, supplier recommendations and enhanced negotiation strategies.

This is exactly what SAP has done. The company has consolidated all aspects of spend management under one umbrella, calling it Intelligent Spend and Business Network (ISBN). And now, it has thrown generative AI into the mix.

SAP customers are now reaping the benefits of Ariba, Concur and Fieldglass – companies which SAP has acquired over the years – under one platform coupled with SAP Business Network.

While spend management is not a new concept, SAP believes that combining these solutions with generative AI and Joule, a proprietary AI assistant developed internally by SAP, gives the offering real potential.

How GenAI is Changing Spend Management

“At SAP, our AI-first product strategy involves deeply embedding AI into core business processes. Rather than overlaying AI onto existing workflows, we are fundamentally reimagining processes, positioning AI as a constant, collaborative partner for every user,” Jeff Collier, chief revenue officer, SAP Intelligent Spend & Business Network, told AIM.

By using Ariba, a procurement tool, customers can now leverage the power of generative AI models to accelerate their planning processes across multiple categories (including third-party market data from Beroe) and reduce supplier onboarding time, Collier further revealed.

(Jeff Collier, chief revenue officer, SAP Intelligent Spend & Business Network)

Generative AI models are also helping Ariba customers make sense of historical data, derive insights from it, and receive recommendations in real time, which in turn helps them in their procurement process.
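As a hedged illustration of the kind of analysis involved – not SAP’s actual implementation – the sketch below aggregates historical purchase orders per supplier and builds the sort of grounded prompt a generative model could then act on (the supplier names and the `spend_summary` helper are invented):

```python
from collections import defaultdict

def spend_summary(purchase_orders):
    """Aggregate historical spend per supplier and flag the largest one."""
    totals = defaultdict(float)
    for supplier, amount in purchase_orders:
        totals[supplier] += amount
    top = max(totals, key=totals.get)
    return {"totals": dict(totals), "top_supplier": top}

orders = [("Acme Cotton", 120000.0), ("Delta Mills", 45000.0),
          ("Acme Cotton", 80000.0)]
summary = spend_summary(orders)

# The structured summary can then ground an LLM prompt:
prompt = (f"Given supplier spend {summary['totals']}, suggest "
          f"negotiation priorities for {summary['top_supplier']}.")
```

The point of the structured step is that the model’s recommendation is anchored in verifiable numbers rather than free-form text.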

“To optimise external workforce and services spend for greater insight, control, and savings, SAP Fieldglass customers are now leveraging AI-enhanced SOW description generation, AI-enhanced job descriptions, and AI-enhanced translation of job descriptions,” Collier added.

AI Agents are Coming

If not LLMs, AI agents are expected to truly scale AI. When asked if these could be part of SAP’s ISBN solution, Collier said that SAP is focused on embedding digital assistants directly into its products through Joule, SAP’s generative AI copilot.

Joule provides AI-assisted insights and will be integrated as a standard feature across the SAP portfolio, including ISBN.

“In my conversations with chief executives, business leaders, and decision-makers around the world, a common theme has emerged – the urgent need to do more with less,” he said.

He added that the COVID-19 pandemic has significantly expanded responsibilities, yet headcounts have remained flat or declined, making productivity a critical driver for AI interest among business leaders.

“The real question now is how swiftly these leaders can embed AI into their operations to start reaping its benefits. With the proliferation of tools, complex policies, and the need to maximise return and value, organisations are seeking chat-based interfaces or agents to guide them through their tasks and answer queries efficiently,” he added.

Hence, it makes sense for SAP to integrate AI agents into its entire portfolio. With Joule, SAP users can save time and increase productivity by describing their ideas, asking analytical questions, or instructing the system, rather than navigating through traditional clicks or coding. 

ISBN, an Interesting Prospect for India 

Explaining further what ISBN solutions from SAP truly mean, Ashwani Narang, Vice President, Intelligent Spend and Business Network, SAP Indian Subcontinent, told AIM that spend management extends beyond traditional procurement to include the entire supply chain, contingent labour, and employee expenses. 

“Various departments—marketing, finance, and others—constantly request funds, highlighting the need for comprehensive oversight. Procurement used to be the sole area focused on savings, but now spend management encompasses all financial outflows, including those outside the organisation like partners, logistics providers, and consultants,” Narang said.

(Ashwani Narang, Vice President, Intelligent Spend and Business Network, SAP Indian Subcontinent)

Narang believes that as the country transitions towards a manufacturing economy, with a growing consensus on producing locally thanks to initiatives like ‘Make in India’, ISBN could be a great tool for Indian companies.

“The more you become a manufacturing economy, the more working capital becomes important and I believe that’s where the onus is going to be,” he said.

SAP has witnessed triple-digit growth with ISBN, and according to Narang, thousands of customers in India are already leveraging the solutions.

The solutions can also help companies with category management. “For instance, if a mattress manufacturer needs cotton, it’s crucial to assess whether it’s a supplier-power or buyer-power category. Knowing if enough suppliers offer the specific grade of cotton helps in negotiating prices,” he added.  

Narang added that Joule can provide insights into market conditions and supplier dynamics, guiding strategic sourcing and ensuring better negotiations.
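Narang’s cotton example reduces to a simple heuristic: the fewer suppliers able to deliver the required grade, the more pricing power sits with them. A toy sketch (the threshold is an invented illustration, not SAP’s logic):

```python
def category_power(num_qualified_suppliers, threshold=3):
    """Classify a purchasing category: few qualified suppliers means
    the supplier holds pricing power; many means the buyer does."""
    return "supplier-power" if num_qualified_suppliers < threshold else "buyer-power"

# A scarce grade of cotton offered by only two mills:
assert category_power(2) == "supplier-power"
# A commodity grade with many competing suppliers:
assert category_power(8) == "buyer-power"
```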

Moreover, ISBN is also not limited to large enterprises with a global footprint. The portfolio of SAP customers for ISBN includes mid-size and small enterprises as well.

“Any company with significant purchasing needs—whether it’s a relatively small firm with a turnover of INR 1,000 crore or a large corporation like Microsoft with a $50 billion valuation—can benefit from SAP’s intelligence solutions.”

India’s AI Startup Boom: Govt Eyes Equity Stakes and GPU Support
https://analyticsindiamag.com/ai-origins-evolution/indias-ai-startup-boom-govt-eyes-equity-stakes-and-gpu-support/ | Fri, 30 Aug 2024

Indian startups need support in terms of computing resources more than in financing.

What does it take to qualify as an AI startup? At what stage do they need financial support? And lastly, what should the ideal mode of financing be? These critical topics came up for discussion when government officials recently met with key industry figures.

The focus of the meeting was AI startups in the context of the IndiaAI Mission. 

Notable attendees included Google, NVIDIA, and Microsoft, along with representatives from AI startups such as Jivi and DronaMaps, the Hindustan Times reported.

It’s encouraging to see the government recognise the rapid growth of AI startups across India and acknowledge the significant role they could play in driving the country’s economy in the coming years.

On a recent podcast, Rahul Agarwalla, managing partner at SenseAI Ventures, said he witnessed about 500 new AI startups emerge in the past six months, which is a massive number.

Based on the rate at which AI startups are popping up in the country, Agarwalla believes India could soon have 100 AI unicorns. While it remains to be seen when that happens, the budding AI ecosystem in India will indeed need support from the government beyond regulatory favours.

What Qualifies as an AI Startup?

A key topic discussed at the meeting was the criteria to define an AI startup. 

Attendees highlighted to the government that simply having ‘AI’ in their name does not automatically make a startup an AI-focused company.

In response, stakeholders proposed a rating system that would build credibility among startups and, in turn, determine their eligibility for government funding. Not everyone will make the cut, though.

Unfortunately, a majority of startups do not survive long enough to reach that stage.

Stakeholders recommended that rather than spreading a small amount of funding thin across many startups, the government should focus on identifying those with significant potential and provide them with targeted financial support.

Earlier, the government had allocated INR 10,372 crore to the IndiaAI Mission – a part of which will be used to fund startups.

Should the Government Play the VC?

According to a Tracxn report, Indian AI startups raised $8.2 million in the April-June quarter, while their US counterparts raised $27 billion during the same period.

While not many Indian startups are building LLMs, which cost billions of dollars, the funding for AI startups in India still remains relatively low.

The government, under the IndiaAI Mission, is weighing options to fund AI startups and deciding how best to do so. One bold proposal on the table was taking equity stakes in these emerging companies.

The government had previously suggested taking the equity route as part of the second phase of the design-linked incentive (DLI) scheme for semiconductor companies. However, the idea was not well received by many in the industry.

“[I] don’t understand the logic of the government trying to become a venture capital firm for chip design companies. This move is likely to be ineffective and inefficient,” Pranay Kotasthane, a public policy researcher, said back then.

Critics fear that government equity stakes could lead to state influence over company operations – and historically, public sector companies in India have often underperformed. Moreover, it could push other venture capitalists away.

Access to Datasets and Compute 

Stakeholders were also quick to point out that more than financing, what the startups need is help in terms of compute. 

According to Abhishek Singh, additional secretary, ministry of electronics and information technology (MeitY), the government plans to disburse INR 5,000 crore of the allocated INR 10,372 crore to procure GPUs.

The government was quick to identify the need for compute, especially for Indian startups, researchers, and other institutions. In fact, last year, the government revealed its intention to build a 25,000 GPU cluster for Indian startups. 

Interestingly, PM Narendra Modi also met Jensen Huang, CEO of NVIDIA – the company producing the most sought-after GPUs in the market – during the latter’s visit to India in 2023.

(Source: NVIDIA)

The Indian Express reported earlier this month that the government had finalised a tender to acquire 1,000 GPUs as part of the IndiaAI Mission. These GPUs will provide computing capacity to Indian startups, researchers, public sector agencies, and other government-approved entities.

Besides access to compute, the stakeholders also urged the government to make the datasets under the IndiaAI Mission available as soon as possible. The datasets will grant startups access to non-personal domain-specific data from government ministries to train models.

Notably, the Bhashini initiative is playing a crucial role in democratising access to Indic language datasets and tools for the Indian ecosystem.

India Startup 2.0 

While the government’s recognition of the funding gap in AI startups and its willingness to provide financial support is encouraging, it is equally important that the government creates a favourable environment for these businesses to thrive.

In line with this, the government launched the Startup India programme in 2016 to foster a robust ecosystem for innovation and entrepreneurship in the country. 

This initiative was designed to drive economic growth and create large-scale employment opportunities by supporting startups through various means. Perhaps, the need of the hour is a similar programme designed specifically for AI startups.

As part of the startup programme, the government identified 92,000 startups and, in addition to funding, provided support such as income tax exemption for three years, credit guarantee schemes, ease of procurement, support for intellectual property protection, and international market access.

Moreover, over 50 regulatory reforms have been undertaken by the government since 2016 to enhance the ease of doing business, ease the raising of capital, and reduce the compliance burden for the startup ecosystem.

Now, a similar ecosystem needs to emerge for AI startups as well, which fosters innovation, provides essential resources, and facilitates collaboration among researchers, developers, and investors to drive growth and success in the field.

The Rise of Non-NVIDIA GPUs
https://analyticsindiamag.com/ai-origins-evolution/the-rise-of-non-nvidia-gpus/ | Wed, 28 Aug 2024

AI chip startups are focused on delivering top-tier products and are unafraid to compete directly with NVIDIA.

NVIDIA may reign as the king of GPUs, but competition is heating up. In recent years, a wave of startups has emerged, taking on the Jensen Huang-led giant at its own game.

Tenstorrent, a startup led by Jim Keller, the lead architect of the AMD K8 microarchitecture, is developing AI chips that the company claims perform better than NVIDIA’s GPUs.

“We have a very power-efficient compute, where we can put 32 engines in a box, the same size as NVIDIA puts eight. With our higher compute density and similar power envelope, we outperform NVIDIA by multiples in terms of performance, output per watt, and output per dollar,” Keith Witek, chief operating officer at Tenstorrent, told AIM.

(Wormhole by Tenstorrent)

NVIDIA’s data centre chips rely on silicon interposers and HBM memory chips, and companies like Samsung and SK Hynix, along with NVIDIA, have made millions from these components. Tenstorrent’s chips, however, eliminate the need for them.

Similarly, Cerebras Systems, founded by Andrew Feldman in 2015, has developed chips to run generative AI workloads such as model training and inference. Its chip, WSE-3, is the world’s largest AI chip, with over 4 trillion transistors and 46,225 mm² of silicon.

The startup claims its chips are 8x faster than NVIDIA DGX H100 and are designed specifically to train large models.

(World’s largest AI chip- WSE-3)

Startups Building for the Inference Market

There are startups developing chips designed specifically for inferencing. While NVIDIA’s GPUs are in great demand because they are instrumental in training AI models, for inference, they might not be the best tool available. 

D-Matrix, a startup founded by Sid Sheth, is developing silicon that works best at inference tasks. Its flagship product, Corsair, is specifically designed for inferencing generative AI models (100 billion parameters or fewer) and is much more cost-effective compared to GPUs.

“We believe that a majority of enterprises and individuals interested in inference will prefer to work with models up to 100 billion parameters. Deploying larger models becomes prohibitively expensive, making it less practical for most applications,” he told AIM.

Another startup that is locking horns with NVIDIA in this space is Groq, founded by Jonathan Ross in 2016. According to Ross, his product is 10 times faster, 10 times cheaper, and consumes 10 times less power.

Groq’s chips are designed to provide high performance for inference tasks, which are critical for deploying AI models in production environments.

Cerebras, too, recently announced Cerebras Inference, which it claims is the fastest AI inference solution in the world. It delivers 1,800 tokens/sec for Llama 3.1 8B and 450 tokens/sec for Llama 3.1 70B – 20x faster than NVIDIA GPU-based hyperscale clouds.

Challengers in the Edge AI Market

While NVIDIA may have made its name and money by selling GPUs, over the years, it has also expanded in other segments, such as developing chips for humanoids, drones, and IoT devices.

SiMa.ai, a US-based startup with strong roots in India, is building chips that can run generative AI models on the embedded edge. Founded by Krishna Rangasayee in 2018, the startup regards NVIDIA as its biggest competitor.

Rangasayee believes multimodal AI is the future, and the startup’s second-gen chip is designed to run generative AI models on the edge – in cars, robotic arms, humanoids, as well as drones.

“Multimodal is going to be everywhere, from every device to appliances, be it a robot or an AI PC. You will be able to converse, watch videos, parse inputs, just like you talk to a human being,” he told AIM.

Notably, SiMa.ai’s first chip, designed to run computer vision models on edge, beat NVIDIA on the ML Perf benchmarks. Another competitor of NVIDIA in this space is Hailo AI. It is building chips that run generative AI models on the edge.

Everyone Wants a Piece of the Pie 

Notably, these startups are not seeking a niche within the semiconductor ecosystem. Instead, they are focused on delivering top-tier products and are already locking horns with NVIDIA directly.

D-Matrix, for instance, counts Microsoft, one of the big AI model builders, as a customer. Sheth revealed that the company has customers in North America, Asia, and the Middle East and has signed a multi-million dollar contract with one of them. The point here is that Microsoft is also one of NVIDIA’s biggest enterprise customers.

Cerebras also counts some of the top research and supercomputing labs as its customers. Riding on the success, the startup plans to go public this year.

Rangasayee previously told AIM that his startup is in talks with many robotics companies, startups building humanoids, public sector companies as well as some of the top automobile companies in the world.

They All Might Lose to CUDA

All these startups have made substantial progress and some are preparing to launch their products in the near future. While having advanced hardware is crucial, the real challenge for these companies will be competing against a monster – CUDA.

These startups, which position themselves as software companies that build their own hardware, have developed their own software stacks to make their hardware compatible with their customers’ applications.

For example, Tenstorrent’s open-source software stack Metalium is similar to CUDA but less cumbersome and more user-friendly. On Metalium, users can write algorithms and programme models directly to the hardware, bypassing layers of abstraction.

Interestingly, Tenstorrent has another stack called BUDA, which, according to Witek, represents the envisioned future utopia.

“Eventually, as compilers become more sophisticated and AI hardware stabilises, reaching a point where they can compile code with 90% efficiency, the need for hand-packing code in the AI domain diminishes.”

Nonetheless, it remains to be seen how these startups compete with CUDA. Intel and AMD have been trying for years, yet CUDA remains NVIDIA’s moat. 

“All the maths libraries… and everything is encrypted. In fact, NVIDIA is moving its platform more and more proprietary every quarter. It’s not letting AMD and Intel look at that platform and copy it,” Witek said.

Mid-Size IT Firm Mindsprint Launches Specialised Generative AI Offering
https://analyticsindiamag.com/ai-trends-future/mid-size-it-firm-mindsprint-launches-specialised-generative-ai-offering/ | Mon, 26 Aug 2024

The platform leverages multiple LLMs, allowing customers to cluster processes, vectorise documents, and extract analytics or insights from them.

Mindsprint, a provider of purpose-built industry-first digital solutions and services headquartered in Singapore, is set to launch its new generative AI platform called MindVerse.

It is a purpose-built platform that includes over 10 generative AI solutions such as data intelligence, document interactor, intelligent chatbots, customer feedback analytics, recommendation engines, language translation, and content generation. 

According to Sagar Porayil Vadakkinakathu, chief technology officer at Mindsprint, the idea of MindVerse was conceived about 18 months ago, right after ChatGPT took the whole world by storm.

“As we experimented and built various proofs of concept, we discovered that many elements were reusable across different solutions. This realisation led us to the idea of creating a platform that leverages these reusable components to accelerate the delivery process and streamline development,” Vadakkinakathu told AIM.

The company, which spun out of Olam Group, a $36 billion food and agri giant, developed MindVerse as an external-facing platform to allow customers to explore demos and try out different generative AI solutions independently. 

Solving Business Problems with MindVerse 

Under the MindVerse platform sits yet another platform called Mercury, where, according to Vadakkinakathu, all the magic happens. It leverages different large language models (LLMs), allowing customers to cluster processes, vectorise documents, and extract analytics or insights from them.

“Because of our parent company, we have a significant presence in the agri sector. MindVerse is leveraged by F&B companies to derive procurement strategies.

“If you’re part of the procurement team and need to create a purchase order with multiple suppliers, you often have to negotiate without full visibility, relying on trial and error. Our solution, however, provides valuable insights based on historical trend analysis and market studies, enabling procurement professionals to make well-informed, strategic decisions,” Vadakkinakathu said.

While the platform uses machine learning models to make sense of historical data, the LLMs help the procurement officer with the negotiation pitch.

“The negotiation pitch should be backed by quantifiable data. If I have access to this data in a conversational format, I can effectively use it as a powerful tool during negotiations,” he pointed out.

The platform also chooses among multiple LLMs depending on the use case. For instance, “For some use case if Google T5 works best, we leverage that and for some other use case we might be leveraging the LLama Models.”
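The use-case-dependent model selection described above can be sketched as a simple rule-based router; the mapping below is purely illustrative and not Mindsprint’s actual routing logic:

```python
def route_model(use_case: str) -> str:
    """Pick a model family for a given use case, with a default fallback."""
    routes = {
        "translation": "google-t5",   # e.g. where T5 works best
        "summarisation": "llama",
        "chat": "llama",
    }
    return routes.get(use_case, "llama")

assert route_model("translation") == "google-t5"
assert route_model("spend-analytics") == "llama"  # unknown use case falls back
```

Production routers often add cost and latency constraints on top of such rules, but the basic dispatch shape stays the same.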

Besides agriculture, Mindsprint also operates in the life sciences, manufacturing and retail industries. MindVerse has solutions for sales, marketing and HR functions for large enterprises, and provides inventory forecasting solutions for customers in the manufacturing space.

Building Chatbots

As part of the MindVerse platform, customers also have the ability to build highly intelligent chatbots. These chatbots support structured data querying, where business users can get results from data lakes or data marts without needing to write SQL or build reports to visualise the data.

“We developed a chatbot for spend analytics that streamlines data access from data lakes like Snowflake. Traditionally, obtaining data from Snowflake could take weeks due to the request and BI dashboard creation process. Our chatbot enables users to directly query and extract data from data lakes/marts instantly,” Vadakkinakathu said.

With MindVerse, customers will be able to develop similar chatbots which provide instant, user-friendly access to data and streamline querying processes, thereby enhancing efficiency and decision-making.
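A minimal sketch of such a chatbot, with SQLite standing in for Snowflake and a hard-coded stub standing in for the LLM’s natural-language-to-SQL step (the real system’s prompts and schema are not public):

```python
import sqlite3

def stub_llm_to_sql(question: str) -> str:
    """Stand-in for the LLM: translate a business question into SQL."""
    if "total spend" in question.lower():
        return "SELECT SUM(amount) FROM spend"
    raise ValueError("question not understood")

def answer(question: str, conn: sqlite3.Connection):
    return conn.execute(stub_llm_to_sql(question)).fetchone()[0]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE spend (supplier TEXT, amount REAL)")
conn.executemany("INSERT INTO spend VALUES (?, ?)",
                 [("A", 100.0), ("B", 250.0)])
result = answer("What is the total spend?", conn)  # 350.0
```

The gain the article describes comes from replacing the stub with a model that knows the warehouse schema, so ad-hoc questions never wait on a BI dashboard request.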

Building Local LLMs

LLMs like OpenAI’s GPT-4 or Llama 3.1 405B are significantly large models trained on big datasets. However, enterprises often find smaller language models, which can be trained on their enterprise data, more useful.

Given that this is the direction the industry is heading in, Vadakkinakathu also revealed that the company is in the process of training a small model on one of its customers’ enterprise data.

“We’re in the process of training a local LLM using one of our customer’s datasets. This approach promises to be a game changer, surpassing the limitations of Retrieval-Augmented Generation (RAG) and other methods,” he continued, “We have the necessary infrastructure, including GPUs, and customer agreements on specific datasets. Currently, we’re focused on building the data pipeline, a challenging process that involves running multiple models to neutralise and balance the datasets.”
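As one hedged illustration of what ‘balancing’ a training dataset can mean (Mindsprint’s actual pipeline is not public), the sketch below downsamples every label to the size of the rarest one:

```python
from collections import Counter, defaultdict

def balance(samples):
    """Downsample each label to the count of the rarest label."""
    by_label = defaultdict(list)
    for text, label in samples:
        by_label[label].append((text, label))
    n = min(len(group) for group in by_label.values())
    return [item for group in by_label.values() for item in group[:n]]

data = [("doc1", "invoice"), ("doc2", "invoice"),
        ("doc3", "invoice"), ("doc4", "contract")]
balanced = balance(data)
counts = Counter(label for _, label in balanced)  # one sample per label here
```

Real pipelines tend to combine such resampling with deduplication and quality filtering, which is why the data-pipeline stage is described as the hard part.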

About Mindsprint

Mindsprint, previously known as Olam Technology and Business Services (OTBS), was Olam’s IT division. It now operates as an independent company and caters to many large enterprises across the sectors it operates in. It has a presence in Singapore, the US, India and the UK.

It counts Olam – which operates in 60 countries and supplies food and industrial raw materials to over 20,900 customers worldwide, placing it among the world’s largest suppliers of cocoa beans, coffee, cotton and rice – as one of its customers.

Its customer list includes other heavyweights such as Nestlé and Mondelez International. In India, Mindsprint has offices in Bengaluru and Chennai. According to Vadakkinakathu, much of the company’s research and development (R&D) work also happens in India.

How Generative AI is Fueling Demand for Kubernetes
https://analyticsindiamag.com/ai-origins-evolution/how-generative-ai-is-fueling-demand-for-kubernetes/ | Mon, 26 Aug 2024

Kubernetes marked its 10th anniversary in June this year.

Historically, running AI/ML workloads on Kubernetes has been challenging due to the substantial CPU/GPU resources these workloads typically demand. 

However, things are now changing. The Cloud Native Computing Foundation (CNCF), a nonprofit organisation that promotes the development and adoption of Kubernetes, recently released a new update, Kubernetes 1.31 (Elli). 

Elli introduces enhancements designed to improve resource management and efficiency, making it easier to handle the intensive requirements of AI and ML applications on Kubernetes.

Enterprises are increasingly turning to cloud-native applications, especially Kubernetes, to manage their AI workloads. According to a recent Pure Storage survey of companies with 500 or more employees, 54% said they were already running AI/ML workloads on Kubernetes.

Around 72% said they run databases on Kubernetes, and 67% run analytics. These numbers are expected to rise as more and more enterprises turn to Kubernetes, because the development of AI and ML models is inherently iterative and experimental.

“Data scientists continually tweak and refine models based on the evolving training data and changing parameters. This frequent modification makes container environments particularly well-suited for handling the dynamic nature of these models,” Murli Thirumale, GM (cloud-native business unit), Portworx at Pure Storage, told AIM.

Kubernetes in the Generative AI Era

Kubernetes marked its 10th anniversary in June this year. What started as Borg, Google’s internal container management system, has now become the industry standard for container orchestration, adopted by enterprises of all sizes.

The containerised approach provides the flexibility and scalability needed to manage AI workloads.

“The concept behind a container is to encapsulate an application in its own isolated environment, allowing for rapid changes and ensuring consistent execution. As long as it operates within a Linux environment, the container guarantees that the application will run reliably,” Thirumale said.

(Source: The Voice of Kubernetes Expert Report 2024)

Another reason AI/ML models rely on containers and Kubernetes is the variability in data volume and user load. During training, there is often a large amount of data, while during inferencing, the data volume can be much smaller. 

“Kubernetes addresses these issues by offering elasticity, allowing it to dynamically adjust resources based on demand. This flexibility is inherent to Kubernetes, which manages a scalable and self-service infrastructure, making it well-suited for the fluctuating needs of AI and ML applications,” Thirumale said.
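The elasticity Thirumale describes is what Kubernetes’ Horizontal Pod Autoscaler provides; its documented scaling rule is desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric), which a few lines of Python can illustrate:

```python
import math

def desired_replicas(current_replicas: int, current_metric: float,
                     target_metric: float) -> int:
    """Kubernetes HPA scaling rule: scale replicas in proportion to
    how far the observed metric sits from its target."""
    return math.ceil(current_replicas * (current_metric / target_metric))

# Training burst: observed utilisation is double the target, replicas double.
assert desired_replicas(4, 200, 100) == 8
# Quiet inference traffic: utilisation at a quarter of the target, scale down.
assert desired_replicas(8, 25, 100) == 2
```

This is why the same cluster can absorb a data-heavy training run and then shrink for light inference traffic without manual intervention.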

NVIDIA, which became the world’s most valuable company for a brief period, recently acquired Run.ai, a Kubernetes-based workload management and orchestration software provider. 

As NVIDIA's AI deployments become more complex, with workloads distributed across cloud, edge, and on-premises data centres, effective management and orchestration become increasingly crucial.

NVIDIA’s acquisition also signifies the growing use of Kubernetes, highlighting the need for robust orchestration tools to handle the complexities of distributed AI environments across various infrastructure setups.

Databases Can Run on Kubernetes

Meanwhile, databases are also poised to play an important role as enterprises look to scale AI. Industry experts AIM has spoken to have highlighted that databases will be central in building generative AI agents or other generative AI use cases.

As of now, only a handful of companies are training AI models. Most other enterprises will be fine-tuning their own models and will soon look to scale their AI solutions. Hence, databases that can scale and provide real-time performance will play a crucial role.

“AI/ML heavily rely on databases, and currently, 54% of these systems are run on Kubernetes—a figure expected to grow. Most mission-critical applications involve data, such as CRM systems where data is read but not frequently changed, versus dynamic applications, like ATMs that require real-time data updates. 

“Since AI, ML, and analytics are data-intensive, Kubernetes is becoming increasingly integral in managing these applications effectively,” Thirumale said.

Replacement for VMware

Broadcom’s acquisition of VMware last year also impacted the growing usage of Kubernetes. The acquisition has left customers worried about the pricing and integration with Broadcom.

“It’s a bundle, so you’re forced to buy stuff you may not intend to,” Thirumale said. 

Referring to the survey again, he pointed out that, as a result, around 58% of surveyed organisations plan to migrate some of their VM workloads to Kubernetes, and around 65% of those plan to migrate within the next two years.

Kubernetes Talent 

As enterprises adopt Kubernetes, the demand for engineers who excel in the technology is also going to increase, and this will be a big challenge for enterprises, according to Thirumale.

“Kubernetes is not something you are taught in your college. All the learning happens on the job,” he said. “The good news is senior IT managers view Kubernetes and platform engineering as a promotion. So let’s say you’re a VMware admin, storage admin, if you learn Kubernetes and containers, they view you as being a higher-grade person,” he said.

When asked if educational institutions in India should start teaching students Kubernetes, he was not completely on board. He believes some basics can be taught as part of the curriculum, but notes that there are too many technologies in the world for colleges to cover them all.

“Specialisation happens in the industry; basic grounding happens in the institutions. There are also specialised courses and certification programmes that one can learn beyond one’s college curriculum,” he concluded.

The post How Generative AI is Fueling Demand for Kubernetes appeared first on AIM.

Revrag AI Raises $600K to Transform B2B Sales with AI Agents https://analyticsindiamag.com/ai-news-updates/revrag-ai-raises-us-600k-to-transform-b2b-sales-with-ai-agents/ Fri, 23 Aug 2024 06:29:31 +0000 https://analyticsindiamag.com/?p=10133579

Its first product, an AI-BDR (Business Development Representative), is set to launch soon.

The post Revrag AI Raises $600K to Transform B2B Sales with AI Agents appeared first on AIM.


India-based startup Revrag has announced that it has secured $600K in its pre-seed funding round. The startup is building AI agents designed for revenue teams, and its first product, an AI-BDR (Business Development Representative), is set to launch soon.

The AI agent will automate prospecting and outreach at scale, intelligently qualify leads, schedule meetings, and provide data-driven insights for more informed decision-making.

Their first AI agent, Emma, represents a fundamental shift in how the sales industry will evolve with modern technology, according to the startup.

While Revrag plans to explore many other territories, it is currently focusing on critical business operations like sales and marketing.

The pre-seed round of funding was led by Powerhouse Ventures, with participation from notable investors including Viral Bajaria (co-founder of 6Sense), Kunal Shah (founder of Cred), Deepak Anchala (founder of Slintel), Vetri Vellore (founder of Rhythms), and 20 other marquee angel investors.

Revrag is also backed by industry powerhouses, including founders and senior executives of 6Sense and Slintel.

“At Revrag, we’re building intelligent AI agents to help revenue teams automate their redundant work so they can focus on what’s important. For example, our AI-BDR can smoothly take care of the initial prospect outreach and schedule meetings, allowing human sellers to focus more on client interactions and other complexities of the sales process,” Ashutosh Singh, CEO and co-founder at Revrag, said.

Wealthtech Startup InvestorAi raises INR 80 crore for Scaling its Business https://analyticsindiamag.com/ai-news-updates/wealthtech-startup-investorai-raises-rs-80-crore-in-series-a-for-scaling/ Thu, 22 Aug 2024 12:57:10 +0000 https://analyticsindiamag.com/?p=10133545

The funds will also be used to add new products.

The post Wealthtech Startup InvestorAi raises INR 80 crore for Scaling its Business appeared first on AIM.


AI-powered equity investment platform InvestorAi has announced that it has raised INR 80 crore in a Series A round to scale the business and add new products.

Founded in 2018, InvestorAi has a singular, outcome-focused mission: to use AI technology to produce winning investment outcomes for Gen Z, millennials, and other digital investors.

Its products use AI to create stock recommendations for the Indian market. These recommendations are then packaged into easy-to-use investment solutions that combine recommendation and delivery into a one-click user experience.

The company's proprietary AI techniques and models use sophisticated methods such as computer vision to convert complex financial data into images (which it says exponentially increases prediction accuracy), along with genetic algorithms and other techniques, to create investment-ready equity baskets (model portfolios).
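The exact encoding is proprietary, but the general idea of handing a vision model a 1-D price series as a 2-D input can be sketched with a toy binary "image", where each column marks the normalized level of one price point (purely illustrative; this is not InvestorAi's method):

```python
def series_to_image(prices, height=8):
    """Toy encoding of a price series as a 2-D grid: one column per
    observation, with a single pixel set at that price's normalized level.
    Illustrative only; not InvestorAi's proprietary encoding."""
    lo, hi = min(prices), max(prices)
    if hi == lo:                       # flat series: avoid division by zero
        hi = lo + 1
    levels = [round((p - lo) / (hi - lo) * (height - 1)) for p in prices]
    return [[1 if levels[col] == row else 0 for col in range(len(prices))]
            for row in range(height)]

img = series_to_image([100, 104, 102, 110], height=4)
# Exactly one pixel is set per column, tracing the shape of the series.
```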

InvestorAi has over 15 equity baskets with different strategies, some of them now in their third year with compelling returns. All baskets have significantly beaten the index over the last 12 months as well as since inception.

The AI models that power the platform have been trained using 14 years of stock market data and have delivered market beating returns consistently since 2022.

Moreover, the AI models are integrated with a next-generation delivery engine called InvestorAi YouTrade, which provides a one-click experience integrated within the broker's own mobile or online platform.

 “India’s retail investor base is currently around 150 million and growing at 3 million a month. The Indian stock market is expected to reach $10 trillion by 2030 from the current $4.8 trillion market cap. However, only 7% of households’ income is invested in direct equities. This presents a large untapped market opportunity which can be leveraged by use of AI-led equity financing guidance,” Bruce Keith, Co-founder and CEO, InvestorAi, said.

InvestorAi products are currently available through some of the most reputed names in the broking industry that include HDFC Securities, Geojit, PL, JM Financial Services, Yes Securities, IIFL Securities, 5Paisa and Axis Securities.

AI Coding Tools Deliver Multiple Benefits to Developers, but Company Adoption Lags in India: GitHub Report https://analyticsindiamag.com/ai-news-updates/ai-coding-tools-deliver-multiple-benefits-to-developers-but-company-adoption-lags-github-report/ Thu, 22 Aug 2024 12:25:53 +0000 https://analyticsindiamag.com/?p=10133539

Around 66% of respondents in the survey believe these tools will enhance their ability to meet customer requirements.

The post AI Coding Tools Deliver Multiple Benefits to Developers, but Company Adoption Lags in India: GitHub Report appeared first on AIM.


GitHub recently released new research highlighting how software professionals are putting AI coding tools to use, and the impact they are having on teams, code, and organisations. The survey reveals that 81% of developers in India have experienced a perceived increase in code quality due to AI coding tools; however, only 40% say their companies are actively encouraging the use of such tools.

Around 66% of respondents in the survey believe these tools will enhance their ability to meet customer requirements, among other benefits.

“The potential of AI-driven software development is undeniable; however, individual AI usage isn’t enough. Organisations need to operationalise AI throughout the software development lifecycle to boost collaboration, creativity, and modernisation,” Kyle Daigle, chief operating officer at GitHub, said.

The survey of 2,000 software professionals, comprising 500 respondents each across India, the U.S., Brazil, and Germany at organisations with 1,000 or more employees, identified several key benefits that respondents associate with using AI coding tools in software development.

Key insights from respondents in India include:

  • Respondents anticipate AI will enhance code security and development efficiency. There is universal anticipation (100%) among survey respondents in India that AI coding tools will improve code security. Notably, the highest expectation of a “significant improvement” across all respondents globally was in India, with 41% expressing this view. 
  • Improve ability to meet customer requirements. The majority of respondents in India (66%) expressed optimism about the potential of AI coding tools to moderately improve or significantly enhance their ability to meet customer requirements. This trend was consistent across various industries, suggesting a widespread expectation of benefits from generative AI.
  • Easier to work with new programming languages and understand existing codebases. A large portion (69%) of respondents in India reported that these tools make it “easy” to adopt a new programming language or understand an existing codebase. Notably, 28% of those in India said AI coding tools made this “very easy”.
  • Test case generation. 99% of respondents in India stated their organisations have experimented with using AI coding tools to generate test cases. The majority (75%) of respondents reported their organisations use AI tools for test generation at least “sometimes.” 
  • Proficiency in AI coding tools is seen as a major asset by job seekers. Nearly all respondents in India (99%) believe this skill makes them more attractive candidates, underlining the growing importance of AI across various fields. Notably, 56% in India believe this expertise significantly boosts their employability. 

Despite the reported benefits of AI coding tools by a high portion of respondents in India, a smaller percentage said their companies are actively encouraging adoption or allowing the use of AI tools, highlighting room for progress. In fact:

  • 99% of respondents surveyed in India reported using AI coding tools at work at some point.
  • In contrast, a much smaller percentage (40%) of respondents in India said their companies actively encourage and promote AI tool adoption.
  • An additional 39% of respondents in India report that their organisations allow the use of these tools but offer limited encouragement. To maximise the benefits of these tools, organisations should have a roadmap, a clear strategy, and policies in place to ensure wider adoption happens through building trust and driving measurable performance metrics.

CitiusTech Announces Industry’s First GenAI-Powered HEDIS Solution https://analyticsindiamag.com/ai-news-updates/citiustech-announces-industrys-first-genai-powered-hedis-solution/ Thu, 22 Aug 2024 11:46:33 +0000 https://analyticsindiamag.com/?p=10133509


The post CitiusTech Announces Industry’s First GenAI-Powered HEDIS Solution appeared first on AIM.


CitiusTech, a global IT services and consulting firm, recently announced the launch of a GenAI version of its industry-leading HEDIS (Healthcare Effectiveness Data and Information Set) solution, effectively making it the industry's first Generative HEDIS solution, seamlessly integrated into CitiusTech's PERFORM+ Clinical Convergence Platform.

CitiusTech’s PERFORM+ platform, an NCQA-certified solution for 10+ years, embodies a deliberate and thoughtful integration of GenAI capabilities. Key features of CitiusTech’s Generative HEDIS solution include:

  • Conversational Rules: e.g., text/voice to rules for cohort building, measure building, and attribution.
  • GenAI Foundational Library: a collaborative workbench for GenAI solutions.
  • AI-Augmented Engineering: faster time to market for feature building.
  • Assisted User Experience: e.g., chart abstraction, annotations, and others.

“This represents a once-in-a-lifetime opportunity to revolutionize a traditionally manual-intensive business process through the implementation of a forward-looking engineering architecture. We are at the pivotal intersection of healthcare-grade Generative AI adoption for conversational experiences and the most advanced technologies for HEDIS and quality management,” Madhu Madhanan, Vice President & Engineering Head of PERFORM+, CitiusTech, said.

CitiusTech’s Generative HEDIS solution is now available to healthcare payers looking to optimise their HEDIS processes and improve overall quality measures.

CitiusTech is a global IT services, consulting, and business solutions enterprise 100% focused on the healthcare and life sciences industry. It enables 140+ enterprises to build an efficient, effective, and equitable human-first ecosystem with deep domain expertise and next-gen technology. 

Why are Content Creators Falling in Love with This AI Startup? https://analyticsindiamag.com/ai-origins-evolution/why-are-content-creators-falling-in-love-with-this-ai-startup/ Wed, 21 Aug 2024 11:07:54 +0000 https://analyticsindiamag.com/?p=10133375

InVideo sees over 3 million new users visit its platform every month.

The post Why are Content Creators Falling in Love with This AI Startup? appeared first on AIM.


A lot goes into making the videos we consume online—brainstorming ideas, writing scripts, editing, and recording voice-overs—and all of this consumes a substantial amount of the content creators’ time. 

This is where generative AI can step in to ease the burden. It can be a great tool for the creators to streamline these processes and reduce the time spent on routine tasks.

Imagine an AI tool that lets you accomplish everything in one place with just a few prompts. InVideo AI promises exactly that, which is why content creators around the globe are falling in love with the platform.

According to Sanket Shah, co-founder and CEO of InVideo, the platform gets nearly 3 million new users every month. 

“We don’t track the total number of people using our free product, but if we take 3 million on average, we could have around 35 million users on our platform in just one year. In terms of paid users, we have about 150,000 of them,” he said.

AIM met Shah in early August at the Bengaluru edition of AWS GenAI Loft, a collaborative gathering of developers, startups, and AI enthusiasts to learn, build, and connect.

Shah revealed that the startup has already secured over 50% of its $30 million revenue target for the year.

Avoiding the Uncanny Valley

Interestingly, InVideo has not developed a text-to-video model like OpenAI's Sora or Kuaishou's Kling. Instead, the startup has partnered with several media providers like Getty Images and Shutterstock and pays them a licence fee.

One primary reason for this approach, according to Shah, is that InVideo wants to provide users with publishable videos. 

Currently, even though models like Sora produce high-quality videos, there is limited clarity about the datasets they are trained on, which complicates the publishability of these videos.

“Moreover, at InVideo, one of our core principles is to avoid anything that falls into the uncanny valley. As of last week, we felt that generative image and video technology still often produced results with issues like extra fingers or multiple eyes—things that are not acceptable for our purposes. 

“We focus on delivering high-quality, publishable videos, so we prioritise ensuring that our users receive content that meets professional standards,” Shah said. 

The platform leverages AI to understand the user's intent, write the script, and handle voice-overs, practically cloning the user's voice.

It selects and integrates media, performs editing tasks, adds music, and ensures that all elements like transitions and zooms are correctly applied, effectively automating much of the post-production process.

However, the startup does not shy away from leveraging models like Sora. Once available, they could contemplate integrating Sora and Kling into their platform. However, they are not in the business of building models.

“We don’t want to enter into a race with the hyperscalers (model builders) to build the next-big model. Models are also perishable and we have seen that already,” he said.

The AI in InVideo 

While the startup is refraining from entering the territory of model builders like OpenAI, Anthropic, Microsoft, and Google, it is building models that suit its business model.

“We prefer to focus on developing models that are niche and highly specific to our needs. For instance, we are working on a lip-sync model tailored to our requirements,” Shah revealed.

The startup also plans a new AI-powered feature next month, which allows users to create an avatar or a digital clone of themselves. 

“Here’s how it works: you record a short video, speaking for 30 seconds to a minute, and the AI generates an avatar. Once you have your avatar, you can input a prompt or specify what you want it to say,” Shah revealed.

The platform does leverage LLMs from Anthropic, OpenAI and Google, but Shah refrained from revealing much about their use cases. “This is very proprietary and that is where most of the magic happens.”

InVideo also leverages Amazon Bedrock, which gives them access to some of the top LLMs through a single API. 

Moreover, the startup also leverages AWS’ multi-region fleet of Spot GPUs for video rendering and editing on open-source solutions, allowing them to run 90% of their workload on Spot instances, which enables close to 40% cost reduction.
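The arithmetic behind that figure: with 90% of the workload on Spot, a Spot discount of roughly 44% versus on-demand pricing yields the quoted ~40% blended saving. The discount is inferred from the two numbers above, not stated by InVideo:

```python
def blended_savings(spot_fraction: float, spot_discount: float) -> float:
    """Overall cost reduction when `spot_fraction` of a fleet runs on Spot
    instances priced at (1 - spot_discount) of the on-demand rate."""
    blended_cost = (1 - spot_fraction) + spot_fraction * (1 - spot_discount)
    return 1 - blended_cost

# 90% on Spot at a ~44% discount reproduces the ~40% reduction cited above.
savings = blended_savings(0.9, 0.444)
```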

Enabling Content Creators with AI

The startup launched a pre-AI product in 2017 and was initially focused on enterprises. However, pivoting to AI and from a B2B to a more B2C business model proved to be a game changer.

Today, it caters to YouTubers as well as new and established content creators making content for Facebook, Instagram, and TikTok.

“The platform is also leveraged by small businesses, for example, a lady selling horses in Texas, a partially deaf teacher in Palo Alto who is teaching a community college, a bunch of students and teachers, someone selling water bottles, and some restaurants,” Shah revealed. 

“About 5% of our customers are also filmmakers.”

When AIM asked Shah about some of the most fun and interesting videos he has seen users generate using the platform, he revealed that the brand manager of the legendary rock band Aerosmith used InVideo to generate content on how to deal with depression.

“One day, the brand manager received an email from a viewer who was contemplating suicide. After watching the video and following some advice, they decided against it. Stories like these are incredibly powerful,” Shah revealed.

LLUMO AI Lands $1M Seed Funding to Build Seamless Enterprise-Grade AI Integration Platform https://analyticsindiamag.com/ai-news-updates/llumo-ai-lands-1m-seed-funding-to-build-seamless-enterprise-grade-ai-integration-platform/ Wed, 21 Aug 2024 06:58:40 +0000 https://analyticsindiamag.com/?p=10133345

The funding will support LLUMO AI's expansion into the US market

The post LLUMO AI Lands $1M Seed Funding to Build Seamless Enterprise-Grade AI Integration Platform appeared first on AIM.


LLUMO AI, an AI optimisation company tackling the high costs and performance challenges of Generative AI, today announced that it has raised $1 million (₹8 crore) in seed funding. LLUMO AI empowers AI businesses to reduce Generative AI costs by 80% and comprehensively evaluate LLMs in real-time without requiring ground-truth data.

The seed funding will support LLUMO AI’s expansion into the US market as well as the development of an enterprise-grade platform that seamlessly integrates with existing AI workflows.

The funds will enable LLUMO AI to address the pressing challenges businesses face when integrating Generative AI and Large Language Models (LLMs) into their products and services.

As more companies seek to leverage the potential of Generative AI, they often encounter two significant hurdles: the inability to assess LLM performance in real-world scenarios and the soaring costs associated with LLM usage.

These challenges are particularly pronounced when implementing Retrieval-Augmented Generation (RAG) pipelines, which can increase the prompt token size by 5 to 10 times. After encountering these issues with its previous product, Instaminutes, the LLUMO team spoke to over 100 AI companies and found that these problems were widespread.
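To see why a 5-10x prompt blow-up matters, it helps to look at the input-token bill alone. The request volume, per-token price, and 8x multiplier below are illustrative assumptions, not LLUMO's figures; the 80% reduction is the one cited above:

```python
def prompt_cost(tokens_per_request: int, requests: int, price_per_1k_tokens: float) -> float:
    """Spend on input tokens alone."""
    return tokens_per_request * requests / 1000 * price_per_1k_tokens

base = prompt_cost(500, 1_000_000, 0.01)          # $5,000 before RAG
with_rag = prompt_cost(500 * 8, 1_000_000, 0.01)  # $40,000 after an 8x RAG blow-up
compressed = with_rag * (1 - 0.80)                # $8,000 with 80% prompt compression
```

At this scale, compressing prompts recovers most of what RAG added to the bill.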

Determined to solve these problems, the LLUMO AI team developed a platform that empowers businesses to reduce Generative AI costs by up to 80% and gain unparalleled visibility into LLM performance.

At the core of LLUMO AI’s solution are two proprietary tiny LLMs trained on millions of data points. The first model efficiently compresses prompts, significantly reducing costs while maintaining output quality.

The second, Eval-LM (Evaluation Language Model), assesses LLM-generated output without requiring ground truth data. By leveraging these innovative models, businesses can optimise their LLM implementations, achieve faster iterations, and drive growth without straining their budgets.

The investment round was led by SenseAI Ventures, an AI-focused venture capital fund, with participation from India Quotient, AumVC, Venture Catalyst, IIM Indore Alumni Angel fund, and US-based angel investors.

“Our platform not only addresses the critical challenges of cost and performance but also empowers our customers to make data-driven decisions that accelerate growth and transform customer experiences. With this funding, we are one step closer to realising our vision of making Generative AI accessible, affordable, and impactful for businesses worldwide,” Shivam Gupta, founder of LLUMO AI, said.

AI Hyperscaler E2E Networks Secures INR 420.51 Crores to Expand Cloud Infrastructure https://analyticsindiamag.com/ai-news-updates/ai-hyperscaler-e2e-networks-secures-inr-420-51-crores-to-expand-cloud-infrastructure/ Tue, 20 Aug 2024 10:14:27 +0000 https://analyticsindiamag.com/?p=10133242


The post AI Hyperscaler E2E Networks Secures INR 420.51 Crores to Expand Cloud Infrastructure appeared first on AIM.


E2E Networks, a leading AI-First and MeitY-empanelled Cloud Platform in India, today announced the successful closure of a strategic investment round, securing INR 420.51 crores through a preferential issue of equity shares.

The hyperscaler plans to invest in expanding its accelerated cloud infrastructure, focusing on next-generation cloud GPUs and GPU clusters, which are critical for AI and machine learning workloads.

This will enable E2E Networks to support the scaling needs of startups, enterprises, the public sector and research institutions as they navigate the era of generative AI and machine learning.

This investment will also further strengthen E2E Networks’ cloud platform, particularly through the enhancement of TIR, a cutting-edge low-code AI development platform.

The Board of Directors approved the preferential issue of up to 24,81,592 equity shares at an issue price of INR 1694.50 per share, raising an aggregate amount of INR 420.51 crores.
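The issue arithmetic is consistent: 24,81,592 shares at INR 1,694.50 apiece come to INR 420.51 crore (1 crore = 10,000,000 rupees):

```python
shares = 2_481_592          # 24,81,592 in Indian digit grouping
price_inr = 1694.50         # issue price per share
raise_inr = shares * price_inr
raise_crore = raise_inr / 10_000_000   # 1 crore = 1e7 rupees
# round(raise_crore, 2) gives 420.51, matching the announced amount
```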

This investment comes from a diverse group of investors, including key members of the promoter group and a wide array of public investors. The funds will be deployed to accelerate the growth of E2E Networks’ AI-First Cloud Platform, enhancing its capability to support advanced machine learning and AI-driven applications.

Commenting on the successful investment round, Tarun Dua, co-founder and managing director of E2E Networks, stated, “The capital raised will enable us to enhance our cloud infrastructure, ensuring that we continue to deliver unparalleled value to our customers and support the next wave of AI-driven innovation in India.”

HP Enterprise Announces Plan to Acquire Morpheus Data https://analyticsindiamag.com/ai-news-updates/hp-enterprise-announces-plans-to-acquire-morpheus-data/ Tue, 20 Aug 2024 09:39:09 +0000 https://analyticsindiamag.com/?p=10133226


The post HP Enterprise Announces Plan to Acquire Morpheus Data appeared first on AIM.


Hewlett Packard Enterprise has announced that it has entered into a definitive agreement to acquire Morpheus Data, a pioneer in software for hybrid cloud management and platform operations.

Morpheus will enhance HPE GreenLake by providing multi-vendor, multicloud application provisioning, orchestration and automation, as well as FinOps capabilities for cloud cost optimization.

Morpheus complements HPE's successful acquisition of IT operations management leader OpsRamp in 2023. These capabilities will solidify HPE as the first vendor to provide a full suite of enterprise-grade capabilities and services across the hybrid cloud stack and will make HPE GreenLake the future-proof hybrid cloud destination for enterprises.

“With the acquisition of Morpheus Data, we will take the next major leap to make HPE GreenLake cloud the de facto platform for innovating across hybrid IT,” Fidelma Russo, executive vice president and general manager, hybrid cloud, and CTO at HPE, said.

Earlier this year, HPE and NVIDIA announced NVIDIA AI Computing by HPE, a portfolio of co-developed AI solutions and joint go-to-market integrations that enable enterprises to accelerate adoption of generative AI.

Among the portfolio’s key offerings is HPE Private Cloud AI, a first-of-its-kind solution that provides the deepest integration to date of NVIDIA AI computing, networking and software with HPE’s AI storage, compute and the HPE GreenLake cloud.

The offering enables enterprises of every size to gain an energy-efficient, fast, and flexible path for sustainably developing and deploying generative AI applications.

This Microsoft-Backed Startup Comes up With Alternative to GPUs for Inference https://analyticsindiamag.com/ai-breakthroughs/this-microsoft-backed-startup-comes-up-with-alternative-to-gpus-for-inference/ Mon, 19 Aug 2024 09:58:15 +0000 https://analyticsindiamag.com/?p=10132911

The startup will launch its flagship product Corsair in November this year

The post This Microsoft-Backed Startup Comes up With Alternative to GPUs for Inference appeared first on AIM.


Graphics processing units (GPUs) have become highly sought-after in the AI field, especially for training models. However, when it comes to inference, they might not always be the most efficient or cost-effective choice.

D-Matrix, a startup incorporated in 2019 and headquartered in Santa Clara, California, is developing silicon better suited for generative AI inference.

Currently, only a handful of companies in the world are training AI models. But when it comes to deploying these models, the numbers could be in the millions.

In an interaction with AIM, Sid Sheth, the founder & CEO of d-Matrix, said that 90% of AI workloads today involve training a model, whereas around 10% involve inferencing. 

“But it is rapidly changing to a world, say, five to 10 years from now, when it will be 90% inference, 10% training. The transition is already underway. We are building the world’s most efficient inference computing platform for generative AI because our platform was built for transformer acceleration.

“Moreover, we are seeing a dramatic increase in GPU costs and power consumption. For example, an NVIDIA GPU that consumed 300 watts three years ago now consumes 1.2 kilowatts—a 4x increase. This trend is clearly unsustainable,” Sheth said.

Enterprises Like Small Language Models

The startup is also aligning its business model with the fact that enterprises see smaller language models as truly beneficial, which they can fine-tune and train on their enterprise data. 

Some of the best LLMs, like GPT-4 or Llama 3.1, are significantly large, with billions or potentially trillions of parameters. These models are trained on all the knowledge of the world.

However, enterprises need models that are specialised, efficient, and cost-effective to meet their specific requirements and constraints.

Over time, we have seen Meta, Microsoft, and OpenAI launch small language models (SLMs) like Llama 3 8B, Phi-3, and GPT-4o mini.

“Smaller models are now emerging, ranging from 2 billion to 100 billion parameters, and they prove to be highly capable—comparable to some of the leading frontier models. This is promising news for the inference market, as these smaller models require less computational power and are therefore more cost-effective,” Sheth pointed out.

The startup believes enterprises don't need to rely on expensive NVIDIA GPUs for inferencing. Its flagship product Corsair is specifically designed for inferencing generative AI models of 100 billion parameters or fewer, and is much more cost-effective than GPUs.

“We believe that the majority of enterprises and individuals interested in inference will prefer to work with models of up to 100 billion parameters. Deploying models larger than this becomes prohibitively expensive, making it less practical for most applications,” he said.

(Jayhawk by d-Matrix)

Pioneering Digital In-Memory Compute

The startup was one of the pioneers in developing a digital in-memory compute (DIMC) engine, which they assert effectively addresses traditional AI limitations and is rapidly gaining popularity for inference applications.

“In older architectures, inference involves separate memory and computation units. Our approach integrates memory and computation into a single array, where the model is stored in memory and computations occur directly within it,” Sheth revealed.

Based on this approach, the startup has developed its first chip called Jayhawk II, which powers its flagship product Corsair. 

The startup claims that the Jayhawk II platform delivers up to 150 trillion operations per second (TOPS), a 10-fold reduction in total cost of ownership (TCO), and up to 20 times more inferences per second than high-end GPUs.

Rather than focusing on creating very large chips, the startup’s philosophy is to design smaller chiplets and connect them into a flexible fabric. 

“One chiplet of ours is approximately 400 square millimetres in size. We connect eight of these chiplets on a single card, resulting in a total of 3,200 square millimetres of silicon. In comparison, NVIDIA’s current maximum is 800 square millimetres. 

“This advantage of using chiplets allows us to integrate a higher density of smaller chips on a single card, thereby increasing overall functionality,” Sheth revealed.

This approach allows the startup to scale computational power up or down based on the size of the model. If the model is large, they can increase the computational resources, and vice versa. According to Sheth, this unique method is a key innovation from d-Matrix.
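As a rough illustration of that scaling logic: if each chiplet contributes a fixed slice of memory, the number of chiplets (and cards) grows with model size. Only the 400 mm² chiplet area and the eight-chiplet card come from the article; the per-chiplet memory figure below is a made-up placeholder, not a d-Matrix specification.

```python
# Back-of-the-envelope sketch of scaling chiplet count with model size.
# GB_PER_CHIPLET is a hypothetical placeholder, NOT a published d-Matrix spec.

CHIPLET_AREA_MM2 = 400   # per the article
CHIPLETS_PER_CARD = 8    # per the article
GB_PER_CHIPLET = 16      # assumed capacity, for illustration only

def chiplets_needed(params_billions, bytes_per_param=1):
    """Chiplets required to hold a model's weights (e.g. 8-bit quantised)."""
    model_gb = params_billions * bytes_per_param
    return -(-model_gb // GB_PER_CHIPLET)  # ceiling division

for size in (8, 40, 100):
    n = chiplets_needed(size)
    cards = -(-n // CHIPLETS_PER_CARD)
    print(f"{size}B params -> {n} chiplet(s) (~{n * CHIPLET_AREA_MM2} mm^2), {cards} card(s)")
```

Under these assumed numbers, a larger model simply maps onto more chiplets rather than requiring a bigger die, which is the flexibility Sheth describes.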

Corsair is Coming in the Second Half of 2024

The startup plans to launch Corsair in November and enter production in 2025. It is already in talks with hyperscalers, AI cloud service providers, sovereign AI cloud providers, and enterprises looking for on-prem solutions.

Sheth revealed that the company has customers in North America, Asia, and the Middle East and has signed a multi-million dollar contract with one of these customers. 

While Sheth declined to name this customer due to non-disclosure agreements, Microsoft, a model maker and an investor in d-Matrix, is one company that could significantly benefit from the startup's silicon.

Expanding in India 

In 2022, the startup established an R&D centre in Bengaluru. Its team there currently numbers around 20-30 people, and the company plans to double its engineering team in the coming months. 

Previously, Sheth had highlighted his intention to increase the Indian workforce to 25-30% of the company’s global personnel.

(d-Matrix office in Bengaluru, India)

In India, the startup is actively leading a skilling initiative, engaging with universities, reaching out to ecosystem peers, and collaborating with entrepreneurs.

Through the initiative, the startup wants not only to create awareness about AI but also to boost India's skilling efforts. It collaborates with universities, providing curriculum, updates, guidelines, and more.

The post This Microsoft-Backed Startup Comes up With Alternative to GPUs for Inference appeared first on AIM.

AI Agents at INR 1 Per Min Could Really Help Scale AI Adoption in India https://analyticsindiamag.com/ai-breakthroughs/ai-agents-at-inr-1-per-min-could-really-help-scale-ai-adoption-in-india/ Wed, 14 Aug 2024 12:33:02 +0000 https://analyticsindiamag.com/?p=10132677

These agents could be integrated into contact centres and various applications across multiple industries, including insurance, food and grocery delivery, e-commerce, ride-hailing services, and even banking and payment apps.

The post AI Agents at INR 1 Per Min Could Really Help Scale AI Adoption in India appeared first on AIM.


Are AI agents the next big thing? The co-founders of Sarvam AI definitely think so. One of the startup's theses is that consumers will use generative AI models not just as chatbots but to perform tasks and achieve goals, and through a voice interface rather than text.

At an event held in Bengaluru on August 13th, Sarvam AI announced Sarvam Agents. While the startup, which is backed by Lightspeed, Peak XV, and Khosla Ventures, is not the only company building AI agents, what stood out was the pricing.

The cost of these agents starts at just one rupee per minute. According to co-founder Vivek Raghavan, enterprises can integrate these agents into their workflow without much hassle.

“These are going to be voice-based, multilingual agents designed to solve specific business problems. They will be available in three channels – telephony, WhatsApp, or inside an app,” Raghavan told AIM in an interaction prior to the event.

These agents could be integrated into contact centres and various applications across multiple industries, including insurance, food and grocery delivery, e-commerce, ride-hailing services, and even banking and payment apps.

For example, they could streamline customer service operations in insurance by handling policy inquiries, assist with reservations and financial transactions, facilitate order tracking and customer support in food delivery, and manage ride requests and driver communications in ride-hailing apps.

Enabling AI Agents 

A technology that offers this capability at just a rupee per minute could be transformative. AI adoption could see substantial growth with AI agents, and Sarvam AI’s mission is to make this a reality.

Meta, which owns WhatsApp and other major social media platforms like Facebook and Instagram, introduced Meta AI to all these platforms. 

Meta AI can be summoned in group chats for planning and suggestions, offer restaurant recommendations, assist with trip planning, and provide general information.

However, Sarvam AI claims its generative AI stack is better placed than others to help AI scale in India. Its models perform better in Indic languages than the Llama models, which power Meta AI. During the event, the startup demoed its models, which outperformed several other models on Indic language tasks.

The startup is currently making its agents available in Hindi, Tamil, Telugu, Malayalam, Punjabi, Odia, Gujarati, Marathi, Kannada, and Bengali, and plans to add more languages soon.

Interestingly, given the backgrounds of the co-founders, especially Raghavan, who has helped Aadhaar scale significantly in India, the startup is well-positioned to drive widespread AI adoption and impact.

Raghavan served as the chief product officer at the Unique Identification Authority of India (UIDAI) for over nine years. As of September 29, 2023, over 138.08 crore Aadhaar numbers were issued to the residents of India.

As part of the interaction, Raghavan highlighted his experience in scaling technology to benefit humanity. He also mentioned that the startup is already in talks with several companies interested in utilising Sarvam agents. At the event, the startup revealed that their agent is already being integrated into the Sri Mandir app. 

(Vivek Raghavan & Pratyush Kumar, co-founders at Sarvam AI)

Models Powering Sarvam Agents

Raghavan said there are multiple models that form the backbone of these AI agents. The first is a speech-to-text model called Saaras which translates spoken Indian languages into English with high accuracy, surpassing traditional ASR systems. 

The second model, called Bulbul, is text-to-speech, offering diverse voices in multiple languages with consistent or varied options depending on preference.

The third is a parsing model designed for high-quality document extraction. This model addresses common issues with complex data, aiming to improve accuracy in parsing financial statements and other intricate documents.
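Chained together, these models form a voice-in, voice-out loop: speech-to-text, reasoning, then text-to-speech. The sketch below uses stand-in stub functions, since Sarvam's actual APIs are closed-source and their signatures are not public; all names and behaviours here are illustrative only.

```python
# Minimal sketch of the three-stage voice-agent pipeline described above.
# Every function is a simulated stand-in, NOT Sarvam AI's actual API.

def speech_to_text(audio):
    """Stand-in for a Saaras-style model: spoken Indic audio -> English text."""
    return "I want to book a doctor's appointment"  # simulated transcription

def run_llm(transcript):
    """Stand-in for the reasoning LLM that decides what the agent should do."""
    return f"ACTION: book_appointment | USER_SAID: {transcript}"

def text_to_speech(text, language="hi"):
    """Stand-in for a Bulbul-style model: response text -> Indic-language audio."""
    return text.encode("utf-8")  # simulated audio payload

def handle_turn(audio):
    """One conversational turn: audio in, audio out."""
    transcript = speech_to_text(audio)
    reply = run_llm(transcript)
    return text_to_speech(reply)

print(handle_turn(b"<caller audio>").decode("utf-8"))
```

The point of the sketch is the composition: each stage can be swapped or upgraded independently, which is consistent with the startup's description of multiple in-house models backing one agent.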

Notably, these models are closed-source and available to customers as APIs. However, the startup also launched an open-source, two-billion-parameter foundational model trained from scratch on four trillion tokens.

Less Dramatic but Good Demo

At the event, the startup also demoed what their agents could do. The demo, which was pre-recorded, showcased how a Sarvam agent could comprehend a person’s health condition, assist in finding the right doctor, and even book an appointment.

A pre-recorded demo may not appeal to everyone, but from the startup’s perspective, it’s a safe bet and completely understandable. Live demos carry inherent risks; for instance, at the Made by Google event, one Googler’s attempt to showcase Google Gemini’s capabilities live saw them fail twice before succeeding.

Sarvam AI’s demo was also reminiscent of OpenAI’s showcase of their latest model, GPT-4o, earlier this year. While Sarvam AI’s demo was less dramatic and also not at all controversial, it effectively demonstrated that their agents could understand the context as well as various Indian languages and dialects.

“These agents can also be very contextual. For example, when you’re on a particular page, you press a button seeking more information about a particular item. The agent will be context-aware, so it knows where you’re asking from. In contrast, when you call a number, it starts from scratch without that context,” Raghavan said.
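The contrast Raghavan describes can be pictured as what metadata travels with the request: an in-app call carries page context, while a telephony call starts cold. The field names below are purely hypothetical, not Sarvam AI's actual schema.

```python
# Hypothetical sketch: an in-app agent request carries page/session context,
# while a telephony request starts from scratch. Field names are illustrative.

def build_agent_request(user_query, channel, page_context=None):
    request = {"channel": channel, "query": user_query, "context": {}}
    if channel == "in_app" and page_context:
        # The agent already knows where the user is asking from.
        request["context"] = page_context
    # Telephony sessions begin without any app context.
    return request

in_app = build_agent_request(
    "Tell me more about this item",
    channel="in_app",
    page_context={"screen": "product_detail", "item_id": "SKU-1042", "language": "ta"},
)
phone = build_agent_request("Tell me more about this item", channel="telephony")

print(in_app["context"]["item_id"])  # the agent can resolve "this item"
print(phone["context"])              # empty -> agent must ask clarifying questions
```

With context attached, the agent can resolve a deictic query like "this item" immediately; without it, the same query forces a round of clarification.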

The startup revealed it trained its models using NVIDIA DGX, leveraging Yotta’s infrastructure. Other notable collaborators include Exotel, Bhashini, AI4Bharat, EkStep Foundation and People+ai.

Acer Launches New AI-Powered Chromebook Laptops in India https://analyticsindiamag.com/ai-news-updates/acer-launches-new-ai-powered-chromebook-laptops-in-india/ Wed, 14 Aug 2024 06:07:16 +0000 https://analyticsindiamag.com/?p=10132642

These advanced laptops are specifically designed to cater to the demands of the enterprise and education sector.

The post Acer Launches New AI-Powered Chromebook Laptops in India appeared first on AIM.


Acer recently launched its latest Chromebook Plus models, the Acer Chromebook Plus 14 and 15 laptops, in India. The all-new Chromebook Plus comes with the latest built-in Google Gemini AI features, offering robust performance, enhanced productivity, and a sleek, professional design.

These advanced laptops are specifically designed to cater to the demands of the enterprise and education sectors.

Acer Chromebook Plus offers built-in Google apps and powerful AI capabilities. It also offers Google Photos Magic Eraser, File Sync, Wallpaper generation, AI-created video backgrounds, and Adobe Photoshop on the web to help consumers boost their productivity.

Powered by a range of Intel® and AMD® processor variants, these Chromebooks ensure robust performance for multitasking and running demanding applications. The Chromebook Plus 14 comes in two variants, one with an Intel® Core™ i3-N305 processor and another with an AMD® Ryzen® 7000 Series processor, while the Chromebook Plus 15 offers up to an Intel® 13th Gen Core™ i7-1355U processor.

All the models support up to 16GB of LPDDR5X SDRAM and storage of up to 512GB on a PCIe NVMe SSD, ensuring fast data access and ample space for important files and applications.

“With powerful Intel® & AMD® processors, vibrant displays, Powerful AI capabilities, and robust security features, we believe these Chromebooks will significantly enhance productivity and learning experiences. Our goal is to offer solutions that empower professionals and students to achieve more, and the Chromebook Plus 14 and 15 embody this vision perfectly,” Sudhir Goel, Chief Business Officer at Acer India, said.

Designed for durability and reliability, these Chromebooks have undergone rigorous military-grade reliability tests, including mechanical shock, transit drop, vibration, and resistance to sand, dust, humidity, and extreme temperatures.

In a previous interaction with AIM, Goel said, “At Computex 2024, we have showcased a lot of new products, which we are thrilled to introduce to the Indian market in the coming year.”

PC makers are hoping AI could help pull the market from the stalemate that it was last year. Research firm Canalys predicts that the PC market will see an 8% annual growth in 2024 as more AI PCs hit the market. Canalys also predicts AI PCs will capture 60% of the market by 2027.

Sarvam AI Launches India’s First Open Source Foundational Model in 10 Indic Languages https://analyticsindiamag.com/ai-breakthroughs/sarvam-ai-launches-indias-first-open-source-foundational-model-in-10-indic-languages/ https://analyticsindiamag.com/ai-breakthroughs/sarvam-ai-launches-indias-first-open-source-foundational-model-in-10-indic-languages/#respond Tue, 13 Aug 2024 10:28:00 +0000 https://analyticsindiamag.com/?p=10132446

Called Sarvam 2B, the model is trained on 4 trillion tokens of an internal dataset.

The post Sarvam AI Launches India’s First Open Source Foundational Model in 10 Indic Languages appeared first on AIM.


Bengaluru-based AI startup Sarvam AI recently announced the launch of India’s first open-source foundational model, built completely from scratch.

The startup, which raised $41 million last year from the likes of Lightspeed, Peak XV Partners and Khosla Ventures, believes in the concept of sovereign AI: creating AI models tailored to the specific needs and unique use cases of their country.

The model, called Sarvam 2B, is trained on 4 trillion tokens of data. It can take instructions in 10 Indic languages, including Hindi, Tamil, Telugu, Malayalam, Punjabi, Odia, Gujarati, Marathi, Kannada, and Bengali.

According to Vivek Raghavan, Sarvam 2B belongs to a class of small language models (SLMs) that includes Microsoft's Phi series, Meta's Llama 3 8B, and Google's Gemma models.

“This is the first open-source foundational model trained on an internal dataset of 4 trillion tokens by an Indian company, with compute in India, with efficient representation for 10 Indian languages,” Raghavan told AIM in an interaction prior to the announcement.

The model, which will be available on Hugging Face, is well suited for Indic language tasks such as translation, summarisation and understanding colloquial statements. The startup is open-sourcing the model to facilitate further research and development and to support the creation of applications built on it.

Previously, Tech Mahindra introduced its Project Indus foundational model, while Krutrim also developed its own foundational model from scratch. However, neither of these models is open-source.

India’s First Open-Source AudioLM

The startup, which Raghavan co-founded with Pratyush Kumar, also believes that in India, consumers will use generative AI through voice rather than text. At an event held at ITC Gardenia, Bengaluru, on August 13, the startup announced Shuka 1.0, India's first open-source audio language model.

The model is an audio extension of the Llama 3 8B model, supporting Indian-language voice input with text output, which the startup claims is more accurate than frontier models. 

“The audio serves as the input to the LLM, with audio tokens being the key component here. This approach is notably unique. It’s somewhat similar to what GPT-4o introduced by OpenAI a couple of months ago,” Raghavan said.

According to the startup, the model is six times faster than Whisper + Llama 3, while its accuracy across the 10 languages is also higher.

Previously, the startup had hinted extensively at developing a voice-enabled generative AI model. Startups and businesses aiming to incorporate voice experiences into their services can leverage this tool, particularly for Indian languages.

Raghavan also said that the startup aims to make the model sound more human-like in the coming months. 

Sarvam Agents are Here

Another interesting development announced by the startup is Sarvam Agents. Raghavan believes that AI’s real use case is not in the form of chatbots but in AI doing things on one’s behalf. 

“Sarvam Agents are going to be voice-based, multilingual agents designed to solve specific business problems. They will be available in three channels – they can be available via telephony, it can be available via WhatsApp, and it can be available inside an app,” Raghavan said.

These agents are also available in 10 Indian languages, and their cost starts at just INR 1 per minute. They can be deployed by contact centres, enterprise sales teams, and more.

While these agents may sound like existing conversational AI products available in the market, Raghavan said their architecture, which uses multiple in-house developed LLMs, makes them fundamentally different.

“These agents can also be very contextual. For example, when you’re on a particular page, you press a button seeking more information about a particular item. The agent will be context-aware, so it knows where you’re asking from. In contrast, when you call a number, it starts from scratch without that context,” he said.

Sarvam Models APIs

While both Sarvam 2B and Shuka 1.0 are open-source models, Sarvam AI is also making available a set of closed-source Indic models, used in the creation of Sarvam Agents, ready to be consumed as APIs.

“These include five sets of models. I will tell you about the three important ones. Our first model, a speech-to-text model, translates spoken Indian languages into English with high accuracy, surpassing traditional ASR systems. The second model is a text-to-speech model which converts text into speech, offering diverse voices in multiple languages, with consistent or varied options depending on preference,” Raghavan said. 

The third model is a parsing model designed for high-quality document extraction. This model addresses common issues with complex data, aiming to improve accuracy in parsing financial statements and other intricate documents. 

Other announcements made by the startup include a generative AI workbench designed for law practitioners to enhance their capabilities with features such as regulatory chat, document drafting, redaction and data extraction.

Convin Launches a 7 Bn Parameter LLM Tailored for Indian Contact Centres https://analyticsindiamag.com/ai-news-updates/convin-launches-a-7-bn-parameter-llm-tailored-for-indian-contact-centres/ https://analyticsindiamag.com/ai-news-updates/convin-launches-a-7-bn-parameter-llm-tailored-for-indian-contact-centres/#respond Mon, 12 Aug 2024 12:16:00 +0000 https://analyticsindiamag.com/?p=10132278

With the new model, Convin anticipates a 200% increase in customer acquisition and a 3X boost in overall revenue for 2024-25.

The post Convin Launches a 7 Bn Parameter LLM Tailored for Indian Contact Centres appeared first on AIM.


Convin, an AI-powered conversation intelligence platform for call centre setups, recently launched its advanced large language model (LLM) with 7 billion parameters. The model is specifically designed to improve business output and resolve the unique challenges of customer-facing teams such as sales, support, and collections. 

Convin's new LLM addresses the gaps left by general-purpose models and significantly outperforms leading models like GPT-3.5 by 40% and GPT-4 Turbo by 20% in accuracy, according to the company.

Trained on over 200 billion tokens and supporting 35+ Indian and South Asian languages, including code-mixed variations, the model ensures precise transcriptions, zero to low hallucinations, and contextual understanding. This gives businesses a critical advantage in delivering high-quality, culturally sensitive customer interactions.

“Traditional language models often fail to deliver accurate results, but purpose-built models such as Convin LLM produce better results and are more accurate. By addressing major challenges such as agent inefficiencies, call centres can improve handling time, response time, and inconsistent customer experience. This streamlines processes and enhances customer satisfaction by providing precise, data-driven insights and predictive analytics. As a result, call centres realize a substantial cost reduction and new revenue generation,” Atul Shree, CTO of Convin, said.


The model-building process begins by identifying specific objectives related to inefficiencies in the contact centre setup and selecting relevant data sources. Data is then collected and preprocessed to ensure high quality, including filtering, deduplication, and tokenization.

Pre-training on this cleaned dataset helps the model understand linguistic patterns and adapt to different languages. Finally, the model undergoes fine-tuning with task-specific labelled data, refining its parameters to predict labels accurately and deliver optimal performance.
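The filtering, deduplication and tokenization steps named above can be sketched generically. This is an illustration of the standard technique, not Convin's actual (unpublished) pipeline, and the whitespace tokenizer stands in for a real subword tokenizer such as BPE.

```python
# Generic sketch of corpus preprocessing: filter -> deduplicate -> tokenize.
# Illustrative only; Convin's real filters and tokenizer are not public.

def filter_texts(texts):
    """Drop records too short or empty to be useful for training."""
    return [t.strip() for t in texts if len(t.strip()) >= 10]

def deduplicate(texts):
    """Exact-match dedup; production pipelines often add fuzzy/MinHash dedup."""
    seen, out = set(), []
    for t in texts:
        if t not in seen:
            seen.add(t)
            out.append(t)
    return out

def tokenize(text):
    """Whitespace tokenizer as a stand-in for a subword tokenizer (e.g. BPE)."""
    return text.lower().split()

raw = [
    "Please share my order status",
    "Please share my order status",   # duplicate call transcript
    "ok",                             # too short, filtered out
    "Mera refund kab aayega?",        # code-mixed Hindi-English query
]
clean = deduplicate(filter_texts(raw))
corpus = [tokenize(t) for t in clean]
print(len(clean))   # 2 records survive filtering + dedup
print(corpus[1])
```

The cleaned, tokenized corpus is what the pre-training stage described above would consume before task-specific fine-tuning.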

With the enhanced capabilities and efficiencies introduced by this model, Convin anticipates a 200% increase in customer acquisition and a 3X boost in overall revenue for 2024-25.

CloudKeeper Acquires AI-Automation-Led Cloud Optimisation Startup WiseOps https://analyticsindiamag.com/ai-news-updates/cloudkeeper-acquires-ai-automation-led-cloud-optimisation-startup-wiseops/ https://analyticsindiamag.com/ai-news-updates/cloudkeeper-acquires-ai-automation-led-cloud-optimisation-startup-wiseops/#respond Mon, 12 Aug 2024 08:30:00 +0000 https://analyticsindiamag.com/?p=10132182

With a growing customer base of 50 clients, the startup has achieved over $100,000 in revenue to date.

The post CloudKeeper Acquires AI-Automation-Led Cloud Optimisation Startup WiseOps appeared first on AIM.


CloudKeeper, a leading provider of comprehensive cloud cost optimisation services, has acquired WiseOps, an AI automation platform specialising in AWS cost and usage optimisation.

While the financial details of the deal were not disclosed, CloudKeeper made payments in equity and cash.

WiseOps previously secured an undisclosed pre-seed investment from CORE91.VC in December 2023. With a growing customer base of 50 clients, the startup has achieved over $100,000 in revenue to date.


Founded in 2023, the startup is known for AI-driven recommendations and automated optimisations, empowering teams to significantly reduce cloud spend without compromising on performance or workflow efficiency.

By integrating WiseOps’ intelligent tools into CloudKeeper’s robust ecosystem, clients can now access a truly end-to-end cloud optimisation solution that promises enhanced savings and workflow efficiency.

Sharing the company's journey, Praneet Chandra, co-founder of WiseOps, stated, “Fifteen months ago, Ronak and I founded WiseOps in response to companies struggling with rising costs and cloud infrastructure challenges, leading to layoffs. Our journey began with our first customer, where we reduced their cloud bill by 50%.”

“WiseOps was the missing piece of the puzzle,” said Deepak Mittal, co-founder and CEO of CloudKeeper. “By joining forces with them, CloudKeeper has become a truly comprehensive cloud cost optimization solution. It will enable us to cater to a broader range of clients, address more complex use cases, and help businesses optimize and engineer their cloud environments more effectively.”

Gupshup Boosts Workforce by 20% Amid Rising Demand for Conversational AI Solutions https://analyticsindiamag.com/ai-news-updates/gupshup-boosts-workforce-by-20-amid-rising-demand-for-conversational-ai-solutions/ https://analyticsindiamag.com/ai-news-updates/gupshup-boosts-workforce-by-20-amid-rising-demand-for-conversational-ai-solutions/#respond Mon, 12 Aug 2024 07:32:50 +0000 https://analyticsindiamag.com/?p=10132196

The demand for Gen AI-powered conversational experiences globally has seen the company double its team size in Brazil and ramp up hiring efforts in China, the GCC region, Indonesia, Malaysia and Turkey.

The post Gupshup Boosts Workforce by 20% Amid Rising Demand for Conversational AI Solutions appeared first on AIM.


Gupshup is witnessing strong demand for its conversational AI solutions in India and international markets. To fuel this growth, the company undertook accelerated hiring in FY24, growing its workforce by 20% to 1,400 people.

The hires will support Gupshup’s growth and expansion across India, Latin America, Middle East, SEA, Africa and Europe.

Additionally, the company made several senior-level hires across Marketing, GTM (go-to-market), Engineering, and Solutions. Already a profitable unicorn, Gupshup saw 40% YoY growth last year, driven by brands’ escalating demand to engage customers through conversational advertising, marketing, and support on messaging channels.

“As we continue to expand our footprint globally, we are actively seeking top talent across engineering, product development, marketing, and customer support roles to drive this conversational revolution. Our people are our greatest asset, and we are committed to building a diverse and inclusive workforce that can unlock massive value for our customers through innovative conversational experiences,” Madhuri Nandgaonkar, VP – HR, Gupshup said.

While India has been a primary market for Gupshup, geographies like Latin America, Middle East, APAC, Africa and Europe have emerged as key growth drivers for the company over the last 3 years.

The demand for Gen AI-powered conversational experiences globally has seen the company double its team size in Brazil and ramp up hiring efforts in China, the GCC region, Indonesia, Malaysia and Turkey.

In FY25, the company aims to boost its engineering and Go-To-Market (GTM) teams, followed by product development and customer support, across India and other geographies. 

According to the company, it also saw a 15% increase in women in senior leadership positions this year. It hired 22% women employees and 33% women interns as part of its internship program.

Gupshup has doubled its customer base in several international markets and works with leading global brands including L'Oréal, P&G, Grupo Carso, GoJek, Nestle, Petromin, and Netflix, among others.

Earlier this year, Gupshup further expanded its product suite with the launch of Conversation Cloud – a comprehensive suite of SaaS tools aimed at revolutionizing business.

Kyndryl Launches Global Security Operations Centre in Bengaluru https://analyticsindiamag.com/ai-news-updates/kyndryl-launches-global-security-operations-centre-in-bengaluru/ https://analyticsindiamag.com/ai-news-updates/kyndryl-launches-global-security-operations-centre-in-bengaluru/#respond Mon, 12 Aug 2024 06:52:45 +0000 https://analyticsindiamag.com/?p=10132147

The centre features high-level cyber engineering that analyses evolving compromise indicators and incident impacts to provide customers with decisive insights.

The post Kyndryl Launches Global Security Operations Centre in Bengaluru appeared first on AIM.


Kyndryl announced it has launched a Security Operations Centre (SOC) in Bengaluru, India, that offers comprehensive support and advanced protection capabilities across the entire cyber threat lifecycle, using AI, specifically machine learning, and integrated automation systems.  

The SOC in Bengaluru is designed to be a cyber defence hub that operates around the clock to offer cyber threat intelligence and incident response, collaborating with Kyndryl’s global network of cybersecurity experts.

Kyndryl provides a hybrid model that allows organisations to selectively outsource certain cybersecurity functions or fully outsource the end-to-end management of their cybersecurity operations to Kyndryl.

It will also be a centre of excellence for cybersecurity management with specialised skills, certifications and experience in cybersecurity platform management, and technologies to support security events, operational management and monitoring. 

Kyndryl's SOC capabilities include multi-level incident monitoring, malware labs, threat hunting, and security information and event management (SIEM) that monitors and correlates security events.

The SOC features high-level cyber engineering that analyses evolving compromise indicators and incident impacts to provide customers with decisive insights. The SOC also helps ensure compliance with government data protection regulations and adapts to evolving cyber threats and regulatory requirements.

The SOC is underpinned by Kyndryl’s Security Operations as a platform (SOaap) capability. The SOaap is a single, unified digital platform that provides a centralized view to help monitor, detect, prevent and respond to the latest cyber threats in real-time, in a flexible delivery and collaborative approach with Kyndryl’s global partnership ecosystem.

Integrated on Kyndryl Bridge, the SOaap enables Kyndryl to provide enhanced visibility, risk and threat management to a customer’s entire IT estate to determine the impact of any threats more quickly while also streamlining the orchestration required between IT Operations and Cybersecurity Operations. 

“We are addressing the critical security challenges faced by C-Suite leaders, the need for enhanced operational efficiency, compliance with evolving security regulations, and integration with new technologies. Our focus is on managing increased workloads, responding to dynamic business needs, and defending against an expanded attack surface. With the Indian government’s strong focus on data security policies, we are committed to leading the way with innovative and responsible enterprise resilience services to make India Cyber Surakshit,” Lingraju Sawkar, President, Kyndryl India, said.

The Dotcom Bubble Continues to Haunt Wall Street https://analyticsindiamag.com/ai-trends-future/the-dotcom-bubble-continues-haunt-wall-street-investors/ https://analyticsindiamag.com/ai-trends-future/the-dotcom-bubble-continues-haunt-wall-street-investors/#respond Thu, 08 Aug 2024 07:00:00 +0000 https://analyticsindiamag.com/?p=10131843

Around $5 trillion in market value was lost between 2000-02.

The post The Dotcom Bubble Continues to Haunt Wall Street appeared first on AIM.


When the US stock markets opened for trading on Monday morning, major tech companies saw around $1 trillion wiped out in market cap. NVIDIA, whose stock soared so much this year that it briefly became the most valuable company in the world, lost around $300 billion in market cap.

The likes of Amazon, Meta, Google, Apple, Tesla, and Microsoft have similarly lost significant value in the last few days. There is a growing concern among investors that tech market caps surged too quickly on the back of the generative AI boom, making these stocks expensive, and that the massive investments in AI might not yield substantial profit for quite some time.

In some sense, this is true. Meta, which recently released Llama 3.1, the most advanced open-source large language model (LLM) to date, has invested nearly $40 billion in AI. Even though Mark Zuckerberg has a plan to earn it all back, it may take many years. Investors might not be willing to make such a long bet.

The same goes for other companies investing heavily in AI like Google and Microsoft. From their perspective, investing in AI makes sense too. They don’t want to miss out on a technology that many believe could redefine the global economy and order of things. 

Sundar Pichai, CEO of Alphabet, has stated that the company prefers to over-invest in AI and not achieve immediate results rather than under-invest and risk missing out on potential opportunities. But again, investors might not agree.

It is important to note, however, that shifting investor sentiment is not the sole reason for the plunging stock market. Sentiment was positive earlier this year on expectations that the Fed would lower interest rates, but it shifted when data showed a decline in manufacturing and construction and a weaker-than-expected job market.

Not a Dotcom Bubble  

Many experts AIM spoke to have voiced a common opinion that there is indeed an AI hype, and some of them even hold the media accountable for inflating the bubble. 

“Journalists not only have the ability, but also the responsibility, to educate the general public on AI. Currently, with all the scaremongering and sensationalising, they’re not doing that,” Mikael Kopteff, Reaktor’s chief technology officer, told the BBC.

Inflated stocks are also reminiscent of the dotcom bubble. The horrors of 2000 hang over Wall Street like a dark cloud that it still struggles to shake off.

However, many feel the comparison to the dotcom bubble in 2000 might not be entirely apt. The stocks that peaked in the late 90s during the dotcom boom were mostly startups and had different business models compared to current companies like Google, Microsoft and Meta.

These companies have invested billions in AI, and even if these investments do not yield immediate results, they are unlikely to go bankrupt. They possess substantial reserves and have diversified business interests that provide financial stability.

“They can lose billions of dollars and not go broke,” Erik Gordon, a professor at the University of Michigan’s Ross School, told Business Insider. 

Google and Meta will continue to earn billions from ads, Apple will continue to sell new iPhones, and Microsoft will continue to sell enterprise software.

Since 2000, the market and its investors have also evolved. Moreover, experts believe that the true value of AI will become clear eventually. 

Tanvir Khan, executive vice president of cloud, infrastructure, digital workplace services, and platforms at NTT DATA, previously told AIM that generative AI is relatively new. It took many years for internet products to evolve.

“You can actually draw parallels to the internet. Things like online banking and online brokerages took longer to emerge, even though the internet was there for a while. Hence, these types of use cases at scale might be a couple of years away,” he said.

While AI has the potential to be transformational, this will take time. The current hype is driven by promises of what AI might achieve in the next five to ten years, creating unrealistic expectations.

It’s a Market Correction

Over $1 trillion wiped out in market cap could be the aftermath of the AI hype. To sustain investor confidence, these tech behemoths needed to announce strong earnings from AI, which did not happen.

For instance, Microsoft, in its recent earnings call, reported revenue growth of 29 per cent in the fiscal fourth quarter, compared to a 31 per cent rise in the previous period. Of the recent quarter's growth, approximately eight percentage points were due to AI, up just one percentage point from seven in the previous quarter.

This gap between expected and actual performance has led to a wave of sell-offs, resulting in a market correction. Moreover, overvalued tech stocks aren't new. A similar sell-off was seen in late 2022. During COVID, tech stocks surged as remote work placed a greater emphasis on technology.

Investor sentiment was high and the stocks of many tech companies soared amidst the global pandemic. Once lockdowns were lifted across the globe, the market corrected itself: Apple lost around $220 billion in valuation and Microsoft around $189 billion, while Alphabet's valuation fell by $123 billion.

The Bubble Might Still Burst 

It’s hard to predict investor sentiment and what’s coming next for these tech companies. However, many experts pointed out the difference between software and hardware companies. 

While Microsoft might continue to sell enterprise software, NVIDIA, which sells graphics processing units (GPUs), might have a hard time if its customers stop buying these coveted pieces of hardware. Moreover, its biggest customers, the hyperscalers, have started making their own AI chips.

“The bigger disappointment could be in the hardware stocks. If investors are counting on the current growth rates for equipment hardware that supports the growth of AI to continue, they may be disappointed,” Gil Luria, an analyst with D.A. Davidson, told Quartz.

Luria even draws parallels with Cisco Systems, a company whose products were instrumental in creating early internet infrastructure and whose success came to symbolise the dotcom era.

Interestingly, many predicted that Cisco would be the world’s most valuable company back then. Nonetheless, it’s hard to predict where the market will head. But one thing is certain: if these companies fail to take investors into confidence and show what value lies in AI, things could go south.

The post The Dotcom Bubble Continues to Haunt Wall Street appeared first on AIM.

]]>
https://analyticsindiamag.com/ai-trends-future/the-dotcom-bubble-continues-haunt-wall-street-investors/feed/ 0
Nagarro Targets Net Zero Emissions by 2050 https://analyticsindiamag.com/ai-origins-evolution/nagarro-targets-net-zero-emissions-by-2050/ https://analyticsindiamag.com/ai-origins-evolution/nagarro-targets-net-zero-emissions-by-2050/#respond Wed, 07 Aug 2024 07:03:51 +0000 https://analyticsindiamag.com/?p=10131715

To achieve this, the company has come up with an ‘eco-digital’ strategy.

The post Nagarro Targets Net Zero Emissions by 2050 appeared first on AIM.

]]>

The carbon footprint of the global information technology & computing industry almost equals or, at times, exceeds that of the aviation industry. The bad news is that it is going to substantially increase as generative AI gets embedded into the global economy.

Therefore, there is a greater need for IT companies to adopt sustainable practices and innovative solutions to reduce their environmental impact. 

Nagarro, a global player in digital engineering with a strong presence in India, has pledged net-zero carbon emissions by 2050. To achieve this, the company has come up with an ‘eco-digital’ strategy guided by three fundamental principles – green, inclusive, and ethical. 

“I see it playing a crucial role because the eco-digital strategy is designed not just to change technology but to shift human behaviour. With a workforce of 18,000 employees, the objective is to ensure that everyone understands and embraces this change towards sustainability,” Ashish Agarwal, the global BU head and custodian of sustainability at Nagarro, told AIM.

This becomes highly critical because the IT industry was quick to integrate generative AI into their existing solutions and has reported multiple proofs of concept (PoC) and revenue directly from generative AI.

Nagarro, which reported over $1 billion in revenue in FY 2023 for the first time, has also launched various LLM-powered solutions for its clients.

Generative AI Carbon Footprint

However, these models, which primarily run on the cloud, require significant energy. Google, which has invested heavily in generative AI and is the third-biggest cloud service provider in the world, saw its carbon emissions soar by 50% in the generative AI era.

According to reports, it costs OpenAI millions of dollars every day just to keep ChatGPT running. 

“When generating not just text, but especially new videos or images, the energy consumption can be extremely high. Therefore, it’s crucial to use such technologies judiciously and evaluate whether they are truly necessary.

“Cloud computing plays a major role in this and is often used continuously throughout the day. However, cloud resources do not always automatically shut down or scale back, leading to the use of capacity that may remain idle. This can result in a substantial increase in energy consumption due to the scale of cloud operations,” Agarwal said.

As part of its eco-digital strategy, Nagarro plans to implement sustainable practices to ensure the responsible use of AI. Some of the best practices include evaluating whether every piece of software or API call is truly necessary when making decisions. 

“Consider if some functions can be managed locally rather than through multiple API calls. Additionally, assess whether the image or video quality is required for the end user’s needs and the context in which it will be used. Making these considerations can lead to more efficient and responsible use of resources,” Agarwal added.

Nagarro has also developed a dashboard that enables both customers and the company to monitor their cloud and energy consumption. It provides insights into policies that the customer can implement to reduce their carbon footprint. 

The dashboard visually displays current carbon consumption and projects potential reductions based on specific policies, which are automatically suggested based on their cloud usage patterns.

Green Software

As part of its eco-digital strategy, Nagarro is also exploring ways to build software that consumes less energy. Besides designing energy-efficient software, the IT company is also ensuring that its software is backward compatible.

The goal, according to Agarwal, is to extend the lifespan of devices such as laptops and mobile phones. This is particularly relevant because, in the past, enterprises often discarded old devices every few years.

“To address this, it is important to design software that is both backward and forward compatible, allowing devices to be used effectively in the long term while accommodating new technologies as they emerge,” he said.

The best practices also include choosing the right programming language. For instance, C is a much more energy-efficient language than Python because it compiles to machine code and executes very quickly. There are also tools available that can translate Python code into a lower-level form, closer to machine code.

Moreover, Nagarro is training its employees in best practices by partnering with Terra.do, an edtech startup and climate careers platform, to develop a specialised curriculum. Approximately 2,000 Nagarro employees are currently receiving training through this programme.

“In terms of accessibility, global standards are well-established and effectively addressed. However, on the software engineering side, standards are not as developed. This is an area that will need to evolve and improve over time,” Agarwal said.

Ensuring Eco Digital Success 

Besides training its employees, Nagarro’s approach includes labelling a sustainability number to every project. Currently, the company assigns every engineering project with a carbon number, reflecting factors such as Microsoft Office 365 usage, hardware, travel, and hotel stays. 

“While we aim to advance toward a sustainability rating in the future, our current approach involves tracking the carbon footprint of all our engineering projects. Although this is still an early stage in our journey, we ensure that every project has a carbon number associated with it,” Agarwal said.

Though these are ambitious steps, the carbon footprint and energy consumption associated with training and inferencing LLMs remain substantial. While Nagarro’s efforts may appear modest, similar adoption across the industry could make a meaningful impact.

The post Nagarro Targets Net Zero Emissions by 2050 appeared first on AIM.

]]>
https://analyticsindiamag.com/ai-origins-evolution/nagarro-targets-net-zero-emissions-by-2050/feed/ 0
NatWest Group Opens New Office in Bengaluru, Will Hire 3000 Software Engineers by 2026 https://analyticsindiamag.com/ai-news-updates/natwest-group-opens-new-office-in-bengaluru-will-hire-3000-software-engineers-by-2026/ https://analyticsindiamag.com/ai-news-updates/natwest-group-opens-new-office-in-bengaluru-will-hire-3000-software-engineers-by-2026/#respond Wed, 07 Aug 2024 06:07:55 +0000 https://analyticsindiamag.com/?p=10131709

India is NatWest's second-largest employee base outside the UK.

The post NatWest Group Opens New Office in Bengaluru, Will Hire 3000 Software Engineers by 2026 appeared first on AIM.

]]>

NatWest Group has announced the lease of a new office in Bengaluru, located at Bagmane Constellation Business Park. The new site follows the company's announcement last year that it was looking to recruit 3,000 new software engineers in India by 2026.

The seating capacity of the new office will be three times that of the current one, with the opportunity to increase this further. The state-of-the-art facility spans over 370,000 square feet across 11 floors and is a LEED-certified (Leadership in Energy and Environmental Design) green building.

The location will serve as a hub for pioneering technology solutions and cutting-edge developments, supporting the bank as it works in simpler and smarter ways to better serve its customers.

Bengaluru is a key strategic location for NatWest Group in India, alongside Gurugram and Chennai. The expansion not only strengthens the group's presence in India, home to its second-largest employee base outside the UK, but also enhances its colleague value proposition.

“Bengaluru is known for its vibrant technology sector and skilled talent pool, so this new office marks a significant chapter in our growth journey across India. Strengthening our global operations further positions us at the forefront of innovation as we continue to prioritise improving the customer and colleague experience,” said Scott Marcar, Group Chief Information Officer, NatWest Group.

Punit Sood, Head of India, NatWest Group, added, “Our new Bengaluru office is not just an expansion of physical space but a strategic investment in our future. With a modern office design to enhance productivity and create an inspiring environment for our employees, it reflects our commitment to the vast potential of India’s talent and technology ecosystem.”

The post NatWest Group Opens New Office in Bengaluru, Will Hire 3000 Software Engineers by 2026 appeared first on AIM.

]]>
https://analyticsindiamag.com/ai-news-updates/natwest-group-opens-new-office-in-bengaluru-will-hire-3000-software-engineers-by-2026/feed/ 0
Can Observability Help Prevent Another CrowdStrike Outage? https://analyticsindiamag.com/ai-origins-evolution/could-observability-help-prevent-another-crowdstrike-outage/ https://analyticsindiamag.com/ai-origins-evolution/could-observability-help-prevent-another-crowdstrike-outage/#respond Tue, 06 Aug 2024 06:01:56 +0000 https://analyticsindiamag.com/?p=10131472

Real-time feedback mechanisms in observability solutions notify teams immediately, reducing the mean time to detect and respond.

The post Can Observability Help Prevent Another CrowdStrike Outage? appeared first on AIM.

]]>

The Microsoft-CrowdStrike outage on July 19 was possibly the biggest in IT history, costing Fortune 500 companies alone over $5 billion in losses. The outage, caused by a faulty update, brought call centres, hospitals, banks, and airports across the globe to a complete halt for a few hours.

The outage underscores the fragility of modern technology, revealing how critical systems can be disrupted by vulnerabilities and signalling the need for robust safeguards and resilience in digital infrastructure.

Observability players were quick to point out the importance of comprehensive monitoring and real-time analytics in detecting and mitigating such vulnerabilities, emphasising that enhanced visibility can prevent or minimise the impact of future disruptions.

AIM asked both established and new observability providers whether their solutions can help prevent similar outages in the future. 

Rohit Ramanand, GVP of engineering, India, at New Relic, said that full-stack observability platforms provide real-time insights into system performance and health, making them an invaluable tool to help prevent or mitigate outages when they occur.

New Relic is one of the leading and most dominant players in the space, with over 80,000 customers worldwide and over 12,000 customers in India alone.

Observability Can Avert System Downtime

“Observability enhances operational efficiency through three key mechanisms. First, it enables early issue detection with real-time insights, allowing engineering teams to resolve problems before they impact customers,” Ramanand told AIM.

Second, it offers a unified source of truth, streamlining the process of identifying root causes during outages by consolidating data from various sources.

Lastly, AI-driven observability platforms leverage historical data to build predictive models, helping to foresee and mitigate similar issues in the future. This integrated approach ensures a more proactive and efficient management of potential disruptions.

AIM also posed the same question to Middleware, a new-age startup based in San Francisco with roots in Ahmedabad. Sawaram Suthar, the founding director, echoed a similar sentiment. 

Suthar believes observability solutions can significantly help prevent a situation like the CrowdStrike outage. 

“Development and operations teams can collect metrics on performance, latency, and error rates, enabling proactive responses to anomalies. Furthermore, they can centralise logs to gain a unified view of system activity and streamline root cause analysis,” he said.

Suthar also adds that the real-time feedback mechanisms in observability solutions notify teams immediately, reducing the mean time to detect and respond. “We ourselves have helped companies achieve over 20% reduction in time to resolution,” he said.

“We’ve noticed that debugging often ends up accounting for 50% of a developer’s effort. With observability tools, they can focus on building applications, dedicating only about 10% of their time to debugging and problem resolution,” he added.

Can AI Help Enterprises Prepare Better?

Even if enterprises believe they are monitoring all aspects, they can still encounter blind spots without the right tools, highlighting the importance of full-stack observability. 

AI’s ability to examine historical data and be more predictive could help organisations take appropriate action and a preventive approach.

New Relic’s AI capabilities help enterprises monitor AI-specific metrics like token usage, costs, and response quality and integrate with traditional application performance monitoring.

Having an integrated view of metrics, events, traces, and logs simplifies and accelerates root cause identification. “Comprehensive application performance monitoring (APM) capabilities enhance anomaly detection, leading to quicker remediation,” Ramanand said.

Besides implementing robust monitoring and logging, enterprises should develop automated alerting and notification systems, regularly conduct system audits and develop disaster recovery and business continuity plans, according to Suthar.

However, sometimes an outage might be inevitable. Enterprises should be well-equipped to mitigate risk and minimise the impact through effective response strategies and robust contingency plans.

“With observability, organisations gain a deeper understanding of systems to identify how to mitigate incidents when they do occur and ultimately prevent such events from reoccurring,” Ramanand added.

Observability in the Generative AI Era

Overall, the observability market is projected to grow from $2.4 billion in 2023 to $4.1 billion by 2028, reflecting a compound annual growth rate (CAGR) of 11.7% over the forecast period, according to a MarketsandMarkets research report.

Moreover, an increasing number of observability providers have begun incorporating generative AI into their products and services. Additionally, companies are developing solutions to monitor LLMs as enterprises integrate these models into their business operations.

An AIM Research report revealed that several leading players in the AI observability market, including Dynatrace, Datadog, and New Relic, have expanded their offerings to include observability capabilities tailored for GenAI-infused applications, addressing the specific needs of this emerging field.

Another interesting observation from the report is that around 80% of the companies offering tools for generative AI observability are startups, most of them established in the last three years. This signifies the growing prominence of observability, especially in the era of generative AI supremacy.

The post Can Observability Help Prevent Another CrowdStrike Outage? appeared first on AIM.

]]>
https://analyticsindiamag.com/ai-origins-evolution/could-observability-help-prevent-another-crowdstrike-outage/feed/ 0
AWS Data Centres Cut Carbon Emissions by 98% for AI Workloads Compared to On-Premises Solutions: Study https://analyticsindiamag.com/ai-news-updates/aws-data-centres-cut-carbon-emissions-by-98-for-ai-workloads-compared-to-on-premises-solutions-study/ https://analyticsindiamag.com/ai-news-updates/aws-data-centres-cut-carbon-emissions-by-98-for-ai-workloads-compared-to-on-premises-solutions-study/#respond Mon, 05 Aug 2024 06:30:02 +0000 https://analyticsindiamag.com/?p=10131308

Accenture estimates that AWS’s global infrastructure is up to 4.1 times more efficient than on-premises.

The post AWS Data Centres Cut Carbon Emissions by 98% for AI Workloads Compared to On-Premises Solutions: Study appeared first on AIM.

]]>

A new study commissioned by Amazon Web Services (AWS) and completed by Accenture found that utilising AWS data centres for compute-heavy workloads such as AI yields a 98% reduction in carbon emissions compared to on-premises data centres.

The study found that an effective way to minimise the environmental footprint of leveraging artificial intelligence is by moving IT workloads from on-premises infrastructure to cloud data centres in India and around the globe. 

Accenture estimates that AWS’s global infrastructure is up to 4.1 times more efficient than on-premises. For Indian organisations, the total potential carbon reduction opportunity for AI workloads optimised on AWS is up to 99% compared to on-premises data centres.

This is credited to AWS’s utilisation of more efficient hardware (32%), improvements in power and cooling efficiency (35%), and additional carbon-free energy procurement (31%).

Further optimising on AWS by leveraging purpose-built silicon can increase the total carbon reduction potential of AI workloads to up to 99% for Indian organisations that migrate to and optimise on AWS.

“Considering 85% of global IT spend by organisations remains on-premises, a carbon reduction of up to 99% for AI workloads optimised on AWS in India is a meaningful sustainability opportunity for Indian organisations,” said Jenna Leiner, Head of Environment Social Governance (ESG) and External Engagement, AWS Global.

“As India accelerates towards its US$1 trillion digital opportunity and encourages investments into digital infrastructure, sustainability innovations and minimising IT-related carbon emissions will also be critical in helping India meet its goal of net-zero emissions by 2070.

“This is particularly important given the rising adoption of AI. AWS is constantly innovating for sustainability across our data centres, optimising our data centre design, investing in purpose-built chips, and innovating with new cooling technologies, so that we continuously increase energy efficiency to serve customer compute demands,” Leiner said.

The post AWS Data Centres Cut Carbon Emissions by 98% for AI Workloads Compared to On-Premises Solutions: Study appeared first on AIM.

]]>
https://analyticsindiamag.com/ai-news-updates/aws-data-centres-cut-carbon-emissions-by-98-for-ai-workloads-compared-to-on-premises-solutions-study/feed/ 0
AIM Exclusive: This MeitY-Backed Startup is Set to Tapeout a 12 nm AI Chip This Year https://analyticsindiamag.com/tech-ai-blend/aim-exclusive-this-meity-backed-startup-set-tape-out-12-nm-ai-chip/ https://analyticsindiamag.com/tech-ai-blend/aim-exclusive-this-meity-backed-startup-set-tape-out-12-nm-ai-chip/#respond Sat, 03 Aug 2024 02:30:00 +0000 https://analyticsindiamag.com/?p=10131256

The startup has developed three chips so far to run AI models on the embedded edge

The post AIM Exclusive: This MeitY-Backed Startup is Set to Tapeout a 12 nm AI Chip This Year appeared first on AIM.

]]>

Specialised chips are essential to deploy AI models on the edge. Yet, most of the chips currently available in the market fall short of these requirements. A few companies across the globe, like Sima.ai and Hailo, are trying to fill that gap. 

However, Netrasemi, a Thiruvananthapuram-based Indian startup, is now entering the space with its cost-effective and power-efficient advanced AI chipsets for edge AI use cases. 

Backed by the Ministry of Electronics and Information Technology (MeitY), Netrasemi has developed three chips so far to run AI models on the embedded edge. Soon, the startup will tape out two chips based on 12 nm nodes.

The first chip, the Netra A4000, represents a line of high-performance chips designed with advanced chiplet (die-to-die integration) technology. These chips, offering between 32 and 100 TOPS (trillion operations per second), are targeted at the edge server market, smart NVR systems and the robotics industry.

The second and third chips, the Netra A2000 and Netra R1000, are designed for surveillance, video analytics, vision processing, smart cameras, and smart home use cases. 

The Netra A2000 is based on an Arm architecture and delivers 4 TOPS, whereas the Netra R1000 is based on a RISC-V architecture and offers 2 TOPS.

According to Jyothis Indirabhai, CEO and co-founder, Netrasemi is the first startup in India to develop AI chips. Founded in 2020 by Indirabhai along with Sreejith Varma and Deepa Geetha, the startup is a beneficiary of MeitY’s Design Linked Incentive (DLI) scheme. 

It was also one of the first startups to be approved for the government’s Chips to Startups (C2S) programme.

(Jyothis Indirabhai receiving an award from Rajeev Chandrasekhar)

Tapeout Soon!

The startup plans to take the Netra A2000 and Netra R1000 chips, which will tape out soon, to customers this year. 

“We plan to showcase these products to our customers this year. Our goal is to introduce them to our clients first, allowing them the necessary time to validate the products and ensure they meet production standards,” Indirabhai told AIM.

Over the past several months, the startup has engaged with key OEMs in sectors such as telecom, industrial products, and surveillance.

“These are one or two large global players in their sectors. Our conversations with them are deep, and they have started sharing specifications as per their requirements,” Hariprasad C, the chief strategy officer of Netrasemi, told AIM.

The startup has also formed a partnership with a US-based chip company (name not disclosed) to jointly develop and introduce another version of AI chip to the market.


According to Indirabhai, the startup’s chips are fabricated by Taiwan Semiconductor Manufacturing Company (TSMC), and will hit production by 2025. “The customers will be able to get the product ideally somewhere by mid-2026.”

It’s All About Software 

Many chip companies today project themselves as software companies that build their own silicon. NVIDIA might be the king of GPUs, but its moat is CUDA, its general-purpose parallel computing platform and application programming interface.

Netrasemi also offers a versatile SDK called NetraSDK that supports TensorFlow, PyTorch, and ONNX frameworks. 

The SDK enables developers with minimal coding experience to integrate their AI models and applications onto the Netrasemi platforms using an intuitive graph builder tool.

“Rather than simply providing an SDK and expecting users to develop based on that, our framework is far more comprehensive, featuring a wide range of sample applications. 

“In many instances, we deliver near-complete products to our customers, allowing them to refine and finalise the solution. Many of our designs are nearly ready for production,” Indirabhai said.

Competition with NVIDIA?

With the Netra A4000, the startup will enter a space where NVIDIA also operates in. NVIDIA’s Jetson Orin series is targeted at the high-end edge AI device market. 

However, Indirabhai points out that the A4000 chips do not cover the whole spectrum of applications that Jetson Orin does. But for the applications the A4000 chips do target, Netrasemi offers higher performance.

“For example, while Jetson Orin might advertise 32 TOPS, this figure doesn’t necessarily reflect video analytics performance. Effective video analytics demands specialised IPs tailored for the pipeline and Jetson Orin’s general-purpose architecture isn’t optimised for this. 

“Although Jetson might offer 32 TOPS for AI, the actual application performance for video analytics may fall short. Our architecture is superior and offers 10x more power efficiency,” he pointed out.

The higher-end versions of the Jetson Orin series do offer over 200 TOPS, meant for high-end compute and edge AI use cases. Here, it is important to note that Netrasemi’s roadmap sees it entering the AI server market sometime in 2027-28.

(Netrasemi lab)

Support from MeitY

As part of the government’s DLI scheme to support semiconductor companies, Netrasemi is set to receive financial aid of around INR 15 crore. So far, the startup has received around INR 2.5 crore in reimbursements. 

Moreover, as part of the C2S programme, the startup has received additional support of INR 5 crore.

“One is the DLI, a reimbursement programme spanning three years, and the other is the C2S which is a grant received for the R1000 chip. This grant is part of a joint project submitted in collaboration with the College of Engineering Thiruvananthapuram (CET),” Hariprasad revealed.

Additionally, the government is also supporting the startup with access to electronic design automation (EDA) tools for three years. The startup has also raised $1 million in equity funding from venture capitalists. Indirabhai said they plan to raise funds again in the coming months. 

Made-in-India Chips 

The emergence of Netrasemi comes at a time when the government of India is pushing significantly for ‘Made-in-India’ chips. Other companies, such as InCore Semiconductor and Mindgrove Technologies, have launched, or are planning to launch, their own chipsets in the market.

For instance, Mindgrove has developed its own indigenous microprocessor chip named MG Secure IoT. However, taking these chips to market remains the biggest challenge.

Undoubtedly, these companies, including Netrasemi, will face intense competition from mature players in the market, as well as from Chinese companies providing cheaper alternatives.

The post AIM Exclusive: This MeitY-Backed Startup is Set to Tapeout a 12 nm AI Chip This Year appeared first on AIM.

]]>
https://analyticsindiamag.com/tech-ai-blend/aim-exclusive-this-meity-backed-startup-set-tape-out-12-nm-ai-chip/feed/ 0
Telangana Becomes India’s First State to Develop its Own AI Model https://analyticsindiamag.com/ai-breakthroughs/telangana-becomes-the-first-state-in-india-to-build-its-own-ai-model/ https://analyticsindiamag.com/ai-breakthroughs/telangana-becomes-the-first-state-in-india-to-build-its-own-ai-model/#respond Fri, 02 Aug 2024 09:30:00 +0000 https://analyticsindiamag.com/?p=10131223

In July, the Information Technology, Electronics & Communications (ITE&C) department in Telangana hosted a datathon aimed at creating Telugu LLM datasets.

The post Telangana Becomes India’s First State to Develop its Own AI Model appeared first on AIM.

]]>

Lately, there has been a debate in India’s AI ecosystem about whether the country should build its own foundational models. Some argue that India can address real-world problems by leveraging existing state-of-the-art models without spending millions on new ones. 

Others, however, believe that it’s essential to develop models that profoundly understand the nuances, complexities, and rich diversity inherent in India’s myriad cultures and languages.

Amidst all this, an Indian state government has undertaken the task of developing an LLM that operates in the state’s official language.

In July, the Information Technology, Electronics & Communications (ITE&C) department in Telangana hosted a datathon aimed at creating a Telugu LLM.

Carried out in partnership with Swecha, a non-profit free and open-source software movement in Telangana, the datathon was organised to help build datasets that will, in turn, help train the Telugu LLM.

Building Telugu Datasets

Building effective LLMs for Indian languages remains a challenging task due to the scarcity of high-quality data. While ChatGPT is impressive because it is trained on multiple terabytes of data, such extensive datasets are not available for Indian languages.

To develop datasets in Telugu, the Telangana government is tapping into its rich education system. Around 1 lakh undergraduate students across all engineering colleges in Telangana took part in the datathon and collected data from ordinary citizens who use Telugu as their mother tongue. 

The team collected data from oral sources such as folk tales, songs, local histories, and information about food and cuisine. Additionally, they plan to dispatch volunteers to approximately 7,000 villages across the state to gather audio and video samples of people discussing various topics, which will then be converted into text content.

(Source: @nutanc)

Interestingly, this is not the first instance when such an exercise was undertaken. Last year, the same Swecha team developed a Telugu SLM, named ‘AI Chandamama Kathalu’, from scratch. 

To collect data for the model, a similar datathon was organised with volunteers from Swecha, in collaboration with nearly 25-30 colleges. Over 10,000 students participated in translating, correcting, and digitising 40,000-45,000 pages of Telugu folk tales.

Building LLMs

Ozonetel, which is an industry partner for the project along with DigiQuanta and TechVedika, supported it by training the model and providing the necessary compute. 

The team tried fine-tuning Google’s open-source mT5 model, Meta’s Llama, and Mistral. However, they finally settled on building a model similar to GPT-2 from scratch. Training the model on a cluster of NVIDIA A100 GPUs took nearly a week.
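For a sense of scale, such a training run can be sanity-checked with the widely used 6·N·D approximation (total training FLOPs ≈ 6 × parameters × tokens). The sketch below is a rough estimate under stated assumptions; the parameter count, token count, GPU count and utilisation are illustrative, not figures from the project, and real wall-clock time also absorbs data pipeline work and repeated experiments:

```python
# Back-of-envelope training-time estimate via the 6*N*D FLOPs rule.
# All concrete numbers below are assumptions for illustration only.

def training_days(n_params: float, n_tokens: float, n_gpus: int,
                  peak_flops: float = 312e12, mfu: float = 0.35) -> float:
    """Estimated wall-clock days to train a dense transformer.

    n_params   -- model parameter count
    n_tokens   -- tokens processed during training
    peak_flops -- peak per-GPU throughput (A100 BF16 peak is ~312 TFLOPS)
    mfu        -- assumed model-FLOPs utilisation (30-40% is typical)
    """
    total_flops = 6 * n_params * n_tokens       # forward + backward passes
    cluster_rate = n_gpus * peak_flops * mfu    # sustained FLOPs/second
    return total_flops / cluster_rate / 86_400  # seconds -> days

# A GPT-2-scale model (~124M parameters) over ~5B tokens on 8 A100s:
days = training_days(124e6, 5e9, 8)
print(f"~{days:.2f} days of pure compute")
```

The raw compute for a GPT-2-scale model is tiny by LLM standards, which is why such a project is feasible on a lakh-scale budget; week-long runs in practice reflect multiple experiments, lower utilisation and data handling rather than a single pass at peak throughput.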

Now, the aim is to develop a larger model and have it ready to be showcased at the Telangana government’s Global AI Summit, scheduled for September this year.

Developing a large model from scratch can cost millions of dollars; building something like ChatGPT could run into the billions. The team, however, aims to develop the Telugu LLM at a cost of around INR 5-10 lakh.

India’s Efforts to Build AI Models in Regional Languages 

Over time, we have seen efforts to build AI models in regional languages. For instance, Abhinand Balachandran, assistant manager at EXL, released a Telugu version of Meta’s open-source Llama 2 model.

Similarly, in April this year, a freelance data scientist released Nandi, built on top of Zephyr-7B-Gemma. The model has 7 billion parameters and is trained on Telugu Q&A datasets curated by Telugu LLM Labs.

Interestingly, such models have not been limited to Telugu. We have also seen AI models built on top of open-source models, such as Tamil Llama and Marathi Llama, among others.

These models, however, could be seen as mere experiments. The Telangana government’s effort to develop an AI model in Telugu, by contrast, has the potential to make significant strides in advancing regional language technology and preserving cultural heritage.

Officials involved in the project have told the media that voice commands on devices such as Alexa are not available in Telugu, and this platform will pave the way for such innovations.

The post Telangana Becomes India’s First State to Develop its Own AI Model appeared first on AIM.

]]>
https://analyticsindiamag.com/ai-breakthroughs/telangana-becomes-the-first-state-in-india-to-build-its-own-ai-model/feed/ 0
Bloq Quantum Raises INR 1.3 Crore To Accelerate Enterprise Adoption of Quantum Computing https://analyticsindiamag.com/ai-news-updates/bloq-quantum-raises-inr-1-3-crore-to-accelerate-enterprise-adoption-of-quantum-computing/ https://analyticsindiamag.com/ai-news-updates/bloq-quantum-raises-inr-1-3-crore-to-accelerate-enterprise-adoption-of-quantum-computing/#respond Thu, 01 Aug 2024 07:16:54 +0000 https://analyticsindiamag.com/?p=10131111

This capital infusion will stimulate innovation in quantum algorithms, improve platform functionalities, and expedite growth.

The post Bloq Quantum Raises INR 1.3 Crore To Accelerate Enterprise Adoption of Quantum Computing appeared first on AIM.

]]>

Bloq Quantum, an AI quantum software startup, has raised INR 1.3 Crore in a pre-seed round led by Inflection Point Ventures.

The funds will be used for product development and team expansion. This capital infusion will stimulate innovation in quantum algorithms, improve platform functionalities, and expedite growth.

Bloq Quantum simplifies enterprise adoption of quantum computing with a user-friendly, low-code interface, which the company says accelerates quantum algorithm development tenfold while providing valuable business insights. The platform is designed for users at all skill levels.

The company positions itself as a leading platform for developing quantum algorithms through a user-friendly interface. It credits its team’s collective expertise and commitment for driving continuous innovation and delivering state-of-the-art solutions to its users and partners.

As of June 2024, Bloq Quantum operates globally, providing quantum computing solutions to clients across diverse industries. The startup says this global presence underscores its commitment to innovation and its capability to address complex challenges.

Sreekuttan L S, co-founder and CEO, brings a strong physics background from IISER Pune and experience as a product manager at The Quantum Insider. Jay Patel, co-founder and CTO, combines expertise in computer engineering and quantum machine learning, honed at CERN, with three global Quantum Challenge awards and multiple quantum software prototypes. Together, they drive innovation in quantum computing with their complementary skills and deep industry knowledge.

“Enterprises face challenges in adopting quantum computing due to fragmented and complex algorithm development processes. Bloq Quantum aims to streamline this by offering a comprehensive solution that simplifies the creation of quantum solutions, providing a seamless experience for businesses,” Sreekuttan L S, said.

The post Bloq Quantum Raises INR 1.3 Crore To Accelerate Enterprise Adoption of Quantum Computing appeared first on AIM.

]]>
https://analyticsindiamag.com/ai-news-updates/bloq-quantum-raises-inr-1-3-crore-to-accelerate-enterprise-adoption-of-quantum-computing/feed/ 0