
Microsoft Azure Crushes Cloud Competition, Leaves AWS and Google Cloud Behind

Microsoft announced plans to spend more money this fiscal year to enhance its AI infrastructure, even as growth in its cloud business has slowed, suggesting that the AI payoff will take longer than expected.


Illustration by Nikhil Kumar

Microsoft has once again claimed the cloud crown, despite mixed market reactions to its latest results.

Last quarter, Microsoft Azure notably encroached on AWS’s market share, and it continues to ride that wave. In the latest quarter, Microsoft’s Intelligent Cloud segment, which includes the company’s server products and cloud services such as Azure, posted revenue of $28.5 billion, up 19 per cent year over year. The segment now accounts for nearly 45 per cent of Microsoft’s total revenue.

Meanwhile, AWS reported revenue of $26.28 billion, also a 19 per cent increase, surpassing analysts’ expectations of $26.02 billion, according to StreetAccount. Google Cloud’s revenue rose 29 per cent to $10.3 billion, slightly above the projected $10.2 billion.

Combined, AWS, Google Cloud and Microsoft Azure accounted for a whopping 67 per cent of the $76 billion global cloud services market in Q1 2024, according to new data from IT market research firm Synergy Research Group. It remains to be seen, however, whether Microsoft Azure has grown its market share.

Source: Statista

In Generative AI We Trust 

One thing the three hyperscalers have in common is their continued faith in generative AI, even though it has yet to really pay off.

Microsoft said it will spend even more this fiscal year to expand its AI infrastructure, even as growth in its cloud business has slowed, a signal that the AI payoff will take longer than expected.

Microsoft CFO Amy Hood explained that this spending is essential to meet the demand for AI services, adding that the company is investing in assets that “will be monetised over 15 years and beyond.” Meanwhile, CEO Satya Nadella said that Azure AI now boasts over 60,000 customers, marking a nearly 60 per cent increase year on year, with average spending per customer also on the rise.

“For the next quarter, we expect Azure’s Q1 revenue growth to be 28% to 29% in constant currency,” said Hood. “Growth will continue to be driven by our consumption business, including AI, which is growing faster than total Azure.”

Along similar lines, Google is facing rising AI infrastructure costs. “The risk of under-investing far outweighs the risk of over-investing for us. Not investing to stay ahead in AI carries much more significant risks,” warned Google CEO Sundar Pichai.

In other news, Meta chief Mark Zuckerberg said that training Llama 4 will require ten times more compute than was needed to train Llama 3.

The Azure OpenAI Service provides access to best-in-class frontier models, including GPT-4o and GPT-4o mini. Beyond that, Azure also offers in-house AI models like Phi-3, a family of powerful small language models, which are used by companies such as BlackRock, Emirates, Epic, ITC, and Navy Federal Credit Union.
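As a rough sketch of how developers reach these models, a chat-completions call to the Azure OpenAI Service targets a deployment-specific REST endpoint. The resource name, deployment name, and API version below are placeholders, not real identifiers; the snippet assembles the request but does not send it.

```python
import json
from urllib import request

# Placeholder values: the resource endpoint and deployment name come
# from the Azure portal; none of these identifiers are real.
ENDPOINT = "https://my-resource.openai.azure.com"
DEPLOYMENT = "gpt-4o"        # the name given to the model deployment
API_VERSION = "2024-06-01"

def build_chat_request(prompt: str, api_key: str) -> request.Request:
    """Assemble (but do not send) an Azure OpenAI chat-completions request."""
    url = (f"{ENDPOINT}/openai/deployments/{DEPLOYMENT}"
           f"/chat/completions?api-version={API_VERSION}")
    body = json.dumps({
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }).encode()
    return request.Request(
        url, data=body, method="POST",
        headers={"Content-Type": "application/json", "api-key": api_key},
    )

req = build_chat_request("Summarise Q4 cloud earnings.", api_key="...")
# urllib.request.urlopen(req) would execute the call against a live resource.
```

This is broadly the same wire format the official `openai` Python SDK produces when pointed at an Azure endpoint; only the endpoint, key, and deployment differ from a direct OpenAI call.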

“With Models as a Service, we provide API access to third-party models, including as of last week, the latest from Cohere, Meta, and Mistral. The number of paid Models as a Service customers more than doubled quarter over quarter, and we are seeing increased usage by leaders in every industry from Adobe and Bridgestone to Novo Nordisk and Palantir,” said Nadella.

Microsoft is trying hard not to be dependent on OpenAI and has listed the startup as a competitor in generative AI and search. The move may have come after OpenAI cozied up to Apple by integrating ChatGPT into Siri.

Similarly, AWS Bedrock has been steadily adding new models to its lineup. “Bedrock has recently added Anthropic’s Claude 3.5 models, which are the best-performing models on the planet, Meta’s new Llama 3.1 models, and Mistral’s new Large 2 models,” said Amazon chief Andy Jassy.
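Bedrock puts all of these providers behind a single `InvokeModel` API, but each provider keeps its own request schema. A minimal sketch of the Anthropic-style body follows; the model ID and field names track Bedrock’s published schema but should be treated as illustrative, and the actual invocation (which needs AWS credentials) is shown only in a comment.

```python
import json

# Illustrative model ID; check the Bedrock console for the exact
# IDs available in your account and region.
MODEL_ID = "anthropic.claude-3-5-sonnet-20240620-v1:0"

def build_bedrock_body(prompt: str, max_tokens: int = 512) -> str:
    """Build the JSON body Bedrock expects for Anthropic Claude models."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })

body = build_bedrock_body("Compare Llama 3.1 and Mistral Large 2.")
# With boto3 and valid credentials, the call itself would look like:
#   client = boto3.client("bedrock-runtime")
#   client.invoke_model(modelId=MODEL_ID, body=body)
```

Swapping in a Llama or Mistral model means changing both the model ID and the body schema, which is the main friction `InvokeModel` leaves to the caller.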

Last year, Amazon also announced its generative AI assistant, Q. “With Q’s code transformation capabilities, Amazon has migrated over 30,000 Java JDK applications in a few months, saving the company $260 million and 4,500 developer years compared to what it would have otherwise cost,” said Jassy.

Google is also quite bullish on Gemini. Most recently, an experimental version of Google DeepMind’s Gemini 1.5 Pro, 0801, was tested in the Chatbot Arena over the past week, gathering over 12,000 community votes.

For the first time, Google Gemini claimed the top spot, surpassing GPT-4 and Claude 3.5 with an impressive score of 1300, and also took first place on the Vision Leaderboard.

Google Vertex AI includes all models from the Gemini and Gemma families, such as Gemini 1.5 Pro, Gemini 1.5 Flash, and Gemma 2. It also offers third-party models from Anthropic, Mistral and Meta.

“Uber and WPP are using Gemini Pro 1.5 and Gemini Flash 1.5 in areas like customer experience and marketing. We broadened support for third-party models including Anthropic’s Claude 3.5 Sonnet and open-source models like Gemma 2, Llama, and Mistral,” Pichai said.
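On Vertex AI, the Gemini models sit behind a `generateContent` endpoint, with user text wrapped in a `contents` list. A sketch of the request shape follows; the project ID and region are placeholders, and the snippet only constructs the payload rather than making an authenticated call.

```python
# Placeholders: substitute your own GCP project and region.
PROJECT, REGION = "my-project", "us-central1"
MODEL = "gemini-1.5-pro"

ENDPOINT = (f"https://{REGION}-aiplatform.googleapis.com/v1/projects/"
            f"{PROJECT}/locations/{REGION}/publishers/google/models/"
            f"{MODEL}:generateContent")

def build_gemini_payload(prompt: str) -> dict:
    """Wrap a user prompt in the generateContent request shape."""
    return {
        "contents": [{"role": "user", "parts": [{"text": prompt}]}],
        "generationConfig": {"maxOutputTokens": 256},
    }

payload = build_gemini_payload("Draft a customer-experience survey.")
# An authenticated POST of this payload (as JSON) to ENDPOINT runs the call;
# Google's vertexai SDK wraps the same request behind GenerativeModel.
```

Third-party models such as Claude 3.5 Sonnet are served through Vertex AI’s Model Garden rather than this Google-publisher path, so their endpoints differ.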

Some of the notable customers of Google Cloud are Hitachi, Motorola Mobility, KPMG, Deutsche Bank, and Kingfisher, as well as the US government.

Building In-House AI Chips 

NVIDIA’s upcoming Blackwell chip has been delayed by three months or more due to design flaws, a setback that could impact customers such as Meta, Google, and Microsoft, who have collectively ordered tens of billions of dollars worth of the chips.

Even before this, all the hyperscalers, apart from utilising NVIDIA GPUs, have been developing their own AI chips. “We added new AI accelerators from AMD and NVIDIA, as well as our own first-party silicon chips, Azure Maia, and we introduced the new Cobalt 100, which provides best-in-class performance for customers like Elastic, MongoDB, Siemens, Snowflake, and Teradata,” said Nadella.

Google also recently launched Trillium, which Apple used to train its foundation models. “Trillium is the sixth generation of our custom AI accelerator, and it’s our best-performing and most energy-efficient TPU to date. It achieves a near five-fold increase in peak compute performance per chip and is 67 per cent more energy efficient compared to TPU v5e,” said Pichai.
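Taken together, those two quoted figures imply something about per-chip power draw: if peak compute rises roughly 4.7x while performance per watt improves 67 per cent, power per chip must grow by about 4.7 / 1.67 ≈ 2.8x. A quick back-of-the-envelope check, using only the quoted numbers:

```python
# Back-of-the-envelope check on the quoted Trillium figures.
compute_gain = 4.7       # "near five-fold increase" in peak compute per chip
efficiency_gain = 1.67   # 67 per cent better performance per watt

# perf/watt = perf / power, so the power ratio is the perf ratio
# divided by the perf-per-watt ratio.
power_ratio = compute_gain / efficiency_gain
print(f"Per-chip power grows roughly {power_ratio:.1f}x over TPU v5e")
```

That is consistent with the broader industry pattern of each accelerator generation trading higher per-chip power for much higher throughput.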

“Year-to-date, our AI infrastructure and generative AI solutions for cloud customers have already generated billions in revenues and are being used by more than two million developers,” he added.

AWS has also developed its custom silicon chips. “We’ve invested in our own custom silicon with Trainium for training and Inferentia for inference. The second versions of those chips, with Trainium coming later this year, are very compelling on price performance. We are seeing significant demand for these chips,” Jassy said.



Siddharth Jindal

Siddharth is a media graduate who loves to explore tech through journalism and putting forward ideas worth pondering about in the era of artificial intelligence.