OpenAI is treading on thin ice. According to a recent report, the ChatGPT maker could lose as much as $5 billion this year. On the cost side, OpenAI is projected to spend nearly $4 billion this year renting Microsoft’s servers to power ChatGPT and its underlying LLMs.
OpenAI’s expenses for training, including data acquisition, could soar to nearly $3 billion. Additionally, with a rapidly growing workforce of around 1,500 employees, its costs could reach approximately $1.5 billion, driven partly by fierce competition for technical talent with companies like Google.
In total, OpenAI’s operational expenses this year could reach up to $8.5 billion.
However, there is still hope. OpenAI’s annualized revenue recently reached $3.4 billion.
ChatGPT Plus, the premium version of OpenAI’s popular chatbot, emerged as the primary revenue driver, contributing $1.9 billion. The service boasts 7.7 million subscribers paying $20 per month. Following closely is ChatGPT Enterprise, which brought in $714 million from 1.2 million subscribers at $50 per month. The API generated $510 million, while ChatGPT Team added $290 million from 80,000 subscribers paying $25 monthly.
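A quick back-of-envelope check on these figures: multiplying reported subscribers by monthly price and annualizing roughly reproduces the Plus and Enterprise numbers, though the Team figure implies assumptions the article does not state (all inputs below are the article’s reported numbers).

```python
# Annualize subscribers x monthly price and compare with reported revenue.
# All figures are taken from the article.
tiers = {
    # name: (subscribers, monthly price USD, reported annual revenue USD)
    "ChatGPT Plus":       (7_700_000, 20, 1_900_000_000),
    "ChatGPT Enterprise": (1_200_000, 50,   714_000_000),
    "ChatGPT Team":       (   80_000, 25,   290_000_000),
}

for name, (subs, price, reported) in tiers.items():
    annualized = subs * price * 12
    print(f"{name:20s} annualized ${annualized / 1e9:.2f}B "
          f"vs reported ${reported / 1e9:.2f}B")
```

Simple annualization gives about $1.85B for Plus and $0.72B for Enterprise, close to the reported $1.9B and $714M; the Team figure does not annualize as cleanly, likely reflecting pricing or growth details not given in the article.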
The question arises: “How long can OpenAI continue operating this way?” Without raising funds soon, the company may run out of cash to sustain its operations.
“OpenAI may end up losing $5 billion this year and run out of cash in 12 months unless it raises more money. Investors should ask: What is their moat? Unique tech? What is their route to profitability when Meta is giving away similar tech for free? Do they have a killer app? Will the tech ever be reliable? What is real and what is just a demo?” posted Gary Marcus on X. He may not be wrong.
OpenAI has raised approximately $13.3 billion in total funding to date, most of it from Microsoft.
OpenAI’s Jugaad
OpenAI is cutting costs with GPT-4o Mini. At 15 cents per million input tokens and 60 cents per million output tokens, GPT-4o Mini is over 60% cheaper than GPT-3.5 Turbo. Recently, CEO Sam Altman posted that GPT-4o mini is “already processing more than 200B tokens per day!”
Crunching the numbers: at 200 billion tokens a day, GPT-4o Mini costs OpenAI roughly $90,000 per day, compared to about $225,000 with GPT-3.5 Turbo. The switch saves OpenAI around $135,000 daily.
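These per-day figures can be sketched from the published per-million-token API rates. The article does not state an input/output token split, so the one-third-input blend below is an assumption chosen to match its numbers.

```python
# Rough sketch of the daily-cost comparison. Per-million-token prices are
# OpenAI's published API rates; the input/output split is an assumption.
DAILY_TOKENS = 200e9  # Altman's "more than 200B tokens per day" figure

def daily_cost(input_price, output_price, input_share=1/3):
    """Blend input/output per-1M-token prices over the daily token volume."""
    blended = input_share * input_price + (1 - input_share) * output_price
    return DAILY_TOKENS / 1e6 * blended

mini = daily_cost(0.15, 0.60)   # GPT-4o Mini: $0.15 in / $0.60 out per 1M
turbo = daily_cost(0.50, 1.50)  # GPT-3.5 Turbo: $0.50 in / $1.50 out per 1M
print(f"GPT-4o Mini:   ${mini:,.0f}/day")
print(f"GPT-3.5 Turbo: ${turbo:,.0f}/day")
```

Under this split, GPT-4o Mini comes to the article’s $90,000 per day, and GPT-3.5 Turbo to roughly $233,000, in the same ballpark as the quoted $225,000. Note these are API list prices, used here as a proxy for serving cost per the article’s framing.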
OpenAI Has No Money, No Moat
Despite these reports, OpenAI has now announced SearchGPT, a prototype with AI search features that delivers quick, timely answers with clear and relevant sources. However, Perplexity AI, the much-touted Google Search alternative, already offers similar capabilities.
Altman has also revealed that the voice feature on GPT-4o will be rolled out to alpha users next week.
However, Hume AI has already been leading the way with its Empathetic Voice Interface (EVI). Companies such as SenseTime and Kyutai are also developing voice-based products like SenseNova 5.0 and Moshi, respectively.
At Google I/O 2024, Google introduced its AI agent Project Astra, which processes and integrates multiple modalities of data, including text, speech, images, and video.
Apple has also upgraded Siri with ‘Apple Intelligence,’ enabling it to perform more actions for users.
From the perspective of LLMs, Meta released Llama 3.1, which performs even better than OpenAI’s GPT-4o and GPT-4o Mini in categories such as general knowledge, reasoning, reading comprehension, code generation, and multilingual capabilities.
“Guys, fine-tuned Llama 3.1 8B is completely cracked. Just ran it through our fine-tuning test suite and it blows GPT-4o mini out of the water on every task,” posted a user on X. “There has never been an open model this small, this good.”
Within a day, Paris-based Mistral AI released Mistral Large 2, which offers substantial improvements in code generation, mathematics, and multilingual support, even outperforming Llama 3.1 in code generation and math.
With a 128k context window and support for dozens of languages, including French, German, Spanish, and Chinese, Mistral Large 2 aims to cater to diverse linguistic needs. It also supports 80+ coding languages, such as Python, Java, and C++.
This puts the race to build LLMs and SLMs back into perspective. It now seems that OpenAI, the pioneer of generative AI, is lagging behind when it comes to releases.
Not to forget, OpenAI has been postponing the launch of Sora, now expected later this year. Meanwhile, the video generation space has seen a surge of startups like Kling, RunwayML, and Luma AI. Recently, Chinese company Kling AI announced the global launch of Kling AI International Version 1.0.
Simply put, OpenAI is certainly a jack of all trades, master of none.