
Modular’s Attempt to Steal NVIDIA’s Mojo

Modular's software enables developers to run models on non-NVIDIA hardware from AMD, Intel, and Google, potentially addressing the GPU crunch


AI startup Modular recently raised $100 million in a funding round led by General Catalyst, Google Ventures, SV Angel, Greylock, and Factory.

The 20-month-old startup, founded by Google and Apple alumni Tim Davis and Chris Lattner, is making waves in the AI industry with software that aims to challenge the dominance of NVIDIA's CUDA and similar alternatives, giving developers a more accessible and efficient way to build and combine AI applications.

“Modular’s technology scales across a wide range of hardware and devices, which gives users more flexibility as their needs evolve,” said Modular CEO and co-founder, Chris Lattner, in an exclusive interaction with AIM. 

NVIDIA's CUDA software, used for machine learning, only supports the company's own GPUs, whereas Modular's software aims to let developers run models on non-NVIDIA servers like those from AMD, Intel, and Google, potentially offering a way out of the GPU crunch as demand from companies like Microsoft, Google, OpenAI, xAI, and Meta strains supply.

Lattner explained that Modular's AI engine and Mojo have been designed with ease of use in mind, catering to machine learning engineers by employing a Python-based approach rather than the more intricate C++ foundation seen in CUDA. "When compared to CUDA for GPU programming, Modular's engine and Mojo are easier to use and more familiar to ML engineers, notably being Python-based instead of C++ based," he added.
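
To illustrate the contrast in programming styles that Lattner describes, here is a minimal, purely illustrative sketch in plain Python (not Modular's actual API): the kind of elementwise operation that CUDA expresses as a C++ kernel with explicit device-memory management becomes a single line of Python-style array code, leaving the compiler or runtime to decide how to map it onto the hardware.

```python
# Illustrative only, not Modular's API: Python-style array code in place of
# the explicit C++ kernels and device-memory management that CUDA requires.
import numpy as np

def vector_add(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    # One line of array arithmetic; the underlying compiler/runtime decides
    # how to parallelise it across the available hardware.
    return a + b

if __name__ == "__main__":
    x = np.arange(1_000_000, dtype=np.float32)
    y = np.ones_like(x)
    print(vector_add(x, y)[:5])  # -> [1. 2. 3. 4. 5.]
```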

He said that Modular scales across a wide spectrum of hardware and devices, giving users a higher degree of flexibility and ensuring that their AI solutions can evolve with their requirements. "The AI engine builds on these strengths to provide higher performance and productivity by combining individual operations into an efficient, optimised execution environment," said Lattner.

Modular's tools integrate into existing workflows, removing the need to rearchitect systems or rewrite code in C++ or CUDA. This gives developers a frictionless transition and lets them unlock higher productivity and performance without incurring exorbitant costs.

A cornerstone of Modular's arsenal is the Mojo toolkit, a concerted effort to simplify AI development across diverse hardware platforms. The Mojo programming language blends the ease of use associated with Python with features like caching and adaptive compilation, targeting improved performance and scalability in AI development.

Towards an AI Future 

In an era where tech alumni-founded startups command high valuations, Modular’s approach to validating its commercial momentum and proving its value proposition to investors remains crucial. 

Modular’s journey is not without challenges. The adoption of a new programming language like Mojo can be hindered by Python’s established status in the machine-learning landscape. However, Lattner’s conviction in Mojo’s distinct advantages and its potential to revolutionize AI development processes remains unshaken.

Given the duo's experience, the venture has the potential to make it big. Lattner, for instance, led the creation of Swift, Apple's programming language, while Davis led Google's machine-learning product efforts, focusing on running models directly on devices.

With a rapidly growing community of over 120,000 developers, Modular claims that it has gauged demand from thousands of prominent enterprises that are excited to deploy its infrastructure.

"We have been able to achieve tremendous momentum in only 20 months," said Davis, Modular co-founder and President. "The financing will allow us to accelerate our momentum even more, scaling to meet the incredible demand we have seen since our launch," he added, referring to the recent funding round.

Competition Galore

Lattner acknowledged that while Modular faces competition, many companies offer point solutions that fail to resolve the challenges across the AI infrastructure stack for developers and enterprises. Besides NVIDIA, its competitors include Cerebras and Rain, among others.

"There is no solution in the market that unifies the frameworks and compute architectures, and supports all their weird and wonderful features with really minimal migration pain," said Lattner, describing the company's USP.

Further, he said that while other solutions claim to be fast, a deeper dive shows they force users to change application- or model-specific code, and they don't scale across different hardware types either.

Lattner also said that Modular's technologies are designed to complement NVIDIA's existing AI infrastructure and that the chip giant is an important partner in this endeavour. The overarching mission is to facilitate broader adoption of hardware among AI developers by unifying technology stacks, simplifying complexities, and making the process more accessible, he said.

Simply put, Modular’s strategy hinges on its holistic approach, which seeks to unify frameworks and compute architectures with minimal migration challenges. Unlike some competitors, Modular’s solutions aim to address end-to-end challenges, fostering accessibility, innovation, and ethical considerations in AI technology.

NVIDIA vs the World 

Modular is not alone; several other startups are challenging NVIDIA's dominance in GPU manufacturing and the associated software that binds users to its chips. Notable companies in this competition include d-Matrix, Rain Neuromorphics, and Tiny Corp. The collective aim is to transform the AI chip landscape by providing alternatives to NVIDIA's products, which can be expensive for training and running machine-learning models. These startups are designing chips and software that they claim offer improved efficiency compared to NVIDIA's GPUs.

Rain Neuromorphics, now known as Rain AI, is addressing the high costs of training and running machine-learning models on conventional GPUs. Its approach combines memory and processing, similar to human synapses, resulting in cooler, more energy-efficient operation compared to NVIDIA's GPUs, which require continuous cooling and drive up electricity costs.

Tiny Corp, founded by George Hotz, the former CEO of Comma AI, focuses on tinygrad, an open-source deep-learning framework that aims to accelerate the training and running of machine-learning models.
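
For readers unfamiliar with tinygrad, the sketch below shows the kind of PyTorch-style autograd usage its documentation describes; exact imports and method names can vary between versions, so treat it as an approximation rather than a definitive reference.

```python
# A minimal tinygrad-style sketch; the API may differ slightly across versions.
from tinygrad import Tensor

# Build a small computation graph and differentiate through it.
x = Tensor.eye(3, requires_grad=True)
y = Tensor([[2.0, 0.0, -2.0]], requires_grad=True)
z = y.matmul(x).sum()
z.backward()

print(x.grad.numpy())  # dz/dx
print(y.grad.numpy())  # dz/dy
```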

However, NVIDIA stands apart and, according to Databricks' Naveen Rao, has separated itself from competitors. Despite the challenges and past bankruptcies of startups attempting to compete with NVIDIA, these companies are betting on the transformative potential of AI to gain traction in the competitive AI chip sector.


Shyam Nandan Upadhyay

Shyam is a tech journalist with expertise in policy and politics, and exhibits a fervent interest in scrutinising the convergence of AI and analytics in society. In his leisure time, he indulges in anime binges and mountain hikes.