Creating software and applications has never been easier. With generative AI tools, anyone can now build simple apps and get help with larger software. This has prompted a wave of claims that AI is better at coding than humans, leaving developers worried about losing their jobs.
But the reality is quite the opposite. Greg Diamos, the co-founder of Lamini, replied to a post by Francois Chollet, who reiterated that there would be more software engineers in five years than there are today. Diamos asked, “Do you think that AI today could write Windows?”
Chollet said that software engineers would be using AI, not being replaced by it. “At least not within that timeline and definitely not via current technology.” Diamos agreed with this as well.
“I remember a researcher in 2017 showed me an LLM that could write hello.c and fibonacci.c. Today Llama 3.1 can write a calculator app, but Windows is probably out of reach,” wrote Diamos.
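For a sense of scale, a ‘calculator app’ at this level is only a few dozen lines of C. The sketch below is purely illustrative, not Llama 3.1’s actual output:

```c
/* calc.c -- a minimal command-line calculator, roughly the level of
   complexity Diamos describes for today's models. Illustrative sketch only. */
#include <stdio.h>

int main(void) {
    double a, b;
    char op;

    /* read expressions like "3 + 4" until input ends */
    while (scanf("%lf %c %lf", &a, &op, &b) == 3) {
        switch (op) {
            case '+': printf("%g\n", a + b); break;
            case '-': printf("%g\n", a - b); break;
            case '*': printf("%g\n", a * b); break;
            case '/':
                if (b == 0) puts("error: division by zero");
                else printf("%g\n", a / b);
                break;
            default:  puts("error: unknown operator");
        }
    }
    return 0;
}
```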
‘The Truth is Somewhere in the Middle’
The problem with current AI systems, especially LLMs, is that they lack reasoning ability, and mathematics remains largely beyond them. Though OpenAI, Google, and Meta have all released models with improved maths capabilities in just the last few days, creating something like Windows, or any operating system, entirely on their own remains far-fetched.
A few weeks back, when Meta launched Llama 3.1, it was touted as the Linux moment of AI that would have a similar impact on the AI ecosystem as Linux had on the operating system world. But the effects are yet to be seen.
LLMs, in their current form, are far from building something akin to an operating system. Experts such as Ilya Sutskever and Andrej Karpathy say that LLMs are the OS of the future, but no one has explained how LLMs could actually build an OS. Maybe the future of all software is building extensions on top of LLMs, but that, too, seems unlikely.
The truth is that AI tools need software engineers, and not the other way around. And it is not just about “boiler-plate” code: building something from scratch is still out of reach for an LLM if it has not seen the code for something similar before.
Can LLMs build LLMs all by themselves? Probably not right now, though agentic AI may change that in the future.
But as Diamos suggested, an LLM that could write hello.c back in 2017 might, with current capabilities and the right direction of research, create something beyond a calculator. For now, though, a calculator cannot create a calculator, or do anything other than arithmetic.
No Idea of the Reality for Now
This is not a new discussion, though. Meta AI chief Yann LeCun disagreed with Sutskever’s assessment, and possibly Karpathy’s as well. LeCun believes, “Large language models have no idea of the underlying reality that language describes.”
Chollet had cleared the air during a previous discussion as well. “Language can be thought of as the *operating system* of the mind. It is not the *substrate* of the mind – you can think perfectly well without language, much like you can still run programs on a computer without an OS (albeit with much more difficulty).”
The LLM OS, on the other hand, is an interesting idea. Maybe LLMs do not need to build a ‘Windows’ at all. Over time, much as Windows or macOS invoke various applications to accomplish a task, an LLM OS could access various tools, ideally other LLMs, for problem-solving.
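One way to picture that dispatch pattern is a routing table that hands each task to a specialised tool, much as an OS launches an application. The sketch below is a toy illustration with entirely hypothetical tool names, not a description of any real system:

```c
/* llm_os_sketch.c -- a toy picture of the "LLM OS" idea: a router that
   delegates each task to a specialised tool. All names are hypothetical. */
#include <stdio.h>
#include <string.h>

typedef const char *(*tool_fn)(const char *input);

/* stand-ins for tools an LLM OS might call (other LLMs, a solver, a search engine) */
static const char *math_tool(const char *input)   { (void)input; return "[math tool result]"; }
static const char *search_tool(const char *input) { (void)input; return "[search tool result]"; }

struct tool { const char *name; tool_fn run; };

static const struct tool tools[] = {
    { "math",   math_tool },
    { "search", search_tool },
};

/* the "kernel": pick a tool by name and delegate, as an OS would launch an app */
static const char *dispatch(const char *task, const char *input) {
    for (size_t i = 0; i < sizeof tools / sizeof tools[0]; i++)
        if (strcmp(tools[i].name, task) == 0)
            return tools[i].run(input);
    return "[no tool found]";
}

int main(void) {
    printf("%s\n", dispatch("math", "2 + 2"));
    printf("%s\n", dispatch("search", "Llama 3.1 release date"));
    return 0;
}
```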
But as of now, creating an OS involves not only writing code in various languages (like C, C++, and assembly) but also designing and managing complex systems such as memory management, file systems, device drivers, user interfaces, security protocols, and networking.
This process requires specialised knowledge in computer science and software engineering, along with a deep understanding of hardware and system architecture.
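Even the very first step of that process, printing text with no OS services underneath, looks nothing like ordinary application code. Below is a minimal bare-metal sketch for x86 VGA text mode, illustrative only; an actual kernel would also need a bootloader, a linker script, and a freestanding build:

```c
/* kernel.c -- a freestanding x86 snippet that writes directly to the
   VGA text buffer at 0xB8000, since there is no printf without an OS.
   Illustrative only; not bootable on its own. */
#include <stdint.h>

void kernel_main(void) {
    volatile uint16_t *vga = (uint16_t *)0xB8000;
    const char *msg = "Hello from ring 0";

    /* each cell is one byte of character plus one byte of colour attribute */
    for (int i = 0; msg[i] != '\0'; i++)
        vga[i] = (uint16_t)(msg[i] | (0x0F << 8)); /* white on black */

    for (;;) { /* halt: there is no OS to return to */ }
}
```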
While LLMs can assist in generating code snippets, explaining concepts, and providing guidance, the creation of a full-fledged ‘Windows’ would require significant human expertise, collaboration, and effort beyond the capabilities of current LLMs.
But OpenAI is already teasing us with the release of Project Strawberry, which Sam Altman claims has achieved Level-2 AI, meaning human-level reasoning capabilities. Possibly, we are almost there.