
14 Open Source LLMs You Need to Know

Every model open sourced since the genesis of GPT-2.


It seems like everyone is obsessed with the latest craze: large language models (LLMs). The appetite for these data-devouring behemoths just keeps growing. From GPT-3 to Megatron, the quest for bigger and better resources is far from over. So whether you’re a language processing newbie or a seasoned pro, here’s a rundown of all the open source LLMs that have hit the scene so far. Get ready to geek out!

Dolly & Dolly 2.0

Within weeks of releasing Dolly, Databricks unveiled Dolly 2.0, an instruction-following model licensed for commercial use with no payment for API access and no data sharing with third parties. Because it was fine-tuned on an instruction dataset written by Databricks employees rather than on ChatGPT output, Dolly 2.0 sidesteps the legal ambiguity surrounding models trained on OpenAI-generated data.
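Since the Dolly 2.0 weights live on the Hugging Face Hub, you can try the model locally with the transformers library. Here’s a minimal sketch, assuming the published databricks/dolly-v2-3b checkpoint (the smallest variant; the 7B and 12B versions follow the same pattern):

import torch
from transformers import pipeline

# Dolly 2.0 registers a custom instruction-following pipeline on the Hub,
# hence trust_remote_code; device_map="auto" needs the accelerate package.
generate_text = pipeline(
    model="databricks/dolly-v2-3b",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)

print(generate_text("Explain why a commercial-use license matters for an LLM.")[0]["generated_text"])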

BLOOM

BLOOM is the world’s largest open-source large language model, presented by the Hugging Face-led BigScience project. With the collaborative efforts of over a thousand researchers from across the globe, BigScience birthed the 176-billion-parameter multilingual BLOOM.
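The BLOOM family is hosted on the Hugging Face Hub. A minimal sketch of loading it with transformers, using the 560M-parameter sibling since the full 176B checkpoint needs a multi-GPU server:

from transformers import AutoModelForCausalLM, AutoTokenizer

# bigscience/bloom-560m is the small sibling of the full bigscience/bloom.
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

inputs = tokenizer("BLOOM is a multilingual model that", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(output[0], skip_special_tokens=True))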

GLM-130B

The bilingual model from Tsinghua University impressively surpasses GPT-3 and the largest Chinese language model on various benchmarks, making it a true game-changer. But that’s not all: it also boasts a unique scaling property that lets its weights be quantized to INT4 for efficient inference on affordable GPUs. The best part? The model weights, code, and training logs are all available to the public. Say goodbye to language processing limitations and hello to GLM-130B!

GPT-Neo, GPT-NeoX & GPT-J

In the NLP realm, the GPT-Neo, GPT-J, and GPT-NeoX models shine, providing powerful tools for few-shot learning.

Thanks to the minds at EleutherAI, these models have been crafted and released to the public as open-source alternatives to GPT-3, which has been kept under lock and key by OpenAI. GPT-Neo and GPT-J were trained on the mighty Pile dataset, a collection of linguistic data sources spanning many domains, making them versatile and adaptable to various natural language processing tasks.

But the crown jewel of this trio is GPT-NeoX, a model built on the foundations of Megatron-LM and Microsoft’s DeepSpeed, and designed to shine on GPUs. Its massive 20 billion parameters made it the largest publicly available dense language model at the time of its release. GPT-NeoX is the proof-of-concept that pushes the boundaries of few-shot learning even further.
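All three EleutherAI families load through the standard transformers causal-LM classes. A minimal sketch with the 1.3B GPT-Neo checkpoint, assuming a stock transformers install:

from transformers import AutoModelForCausalLM, AutoTokenizer

# Swap in "EleutherAI/gpt-j-6B" or "EleutherAI/gpt-neox-20b" if your
# hardware allows; the calling code stays the same.
model_id = "EleutherAI/gpt-neo-1.3B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Few-shot learning means that", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(output[0], skip_special_tokens=True))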

GPT-2

After initially withholding GPT-2 for nine months over concerns about its potential for spreading disinformation, spam, and fake news, OpenAI released smaller, less capable versions for testing purposes. In a November 2019 blog post, OpenAI reported that it had witnessed “no strong evidence of misuse” and, as a result, made the full GPT-2 model available for use.
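Trying the fully released GPT-2 is about as easy as it gets. A minimal sketch with the transformers text-generation pipeline:

from transformers import pipeline

# "gpt2" is the 124M-parameter base checkpoint of the fully released model.
generator = pipeline("text-generation", model="gpt2")
print(generator("Large language models are", max_new_tokens=30)[0]["generated_text"])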

PaLM

Google AI leaned into the ‘bigger the better’ assumption that has dominated the LLM race, where model size has been the attention-grabbing factor. Its research found that larger language models pick up new tasks from just a few examples far more effectively. On that basis, Google built PaLM, or Pathways Language Model, a 540-billion-parameter, decoder-only Transformer.

OPT

Meta made a big splash in May 2022 with the release of its OPT (Open Pre-trained Transformer) models. Ranging from 125 million to a whopping 175 billion parameters, these transformers can handle language tasks on an unprecedented scale.

You can download the smaller variants from GitHub, but the biggest one is only accessible upon request.
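The freely downloadable OPT checkpoints are also mirrored on the Hugging Face Hub. A minimal sketch with the 125M variant; the same code scales to the larger public weights:

from transformers import AutoModelForCausalLM, AutoTokenizer

# Public checkpoints run from facebook/opt-125m up to facebook/opt-66b;
# only the 175B weights sit behind Meta's access request.
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

inputs = tokenizer("Open pre-trained transformers are", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(output[0], skip_special_tokens=True))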

Cerebras-GPT

Cerebras, a Sunnyvale, California-based AI infrastructure firm, made a bold move with the release of seven open-source GPT models, ranging from 111 million to 13 billion parameters. The models, including weights and training recipes, are available to the public free of charge under the Apache 2.0 license, challenging the proprietary systems of the current closed-door industry.
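Because the Cerebras-GPT checkpoints sit on the Hugging Face Hub under Apache 2.0, loading one takes a few lines. A minimal sketch with the smallest, 111M-parameter model:

from transformers import AutoModelForCausalLM, AutoTokenizer

# Seven checkpoints exist, from Cerebras-GPT-111M up to Cerebras-GPT-13B;
# the smallest is used here so the sketch runs on a laptop.
tokenizer = AutoTokenizer.from_pretrained("cerebras/Cerebras-GPT-111M")
model = AutoModelForCausalLM.from_pretrained("cerebras/Cerebras-GPT-111M")

inputs = tokenizer("Open weights and training recipes let researchers", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(output[0], skip_special_tokens=True))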

Flan-T5

Google AI launched Flan-T5, an open-source language model instruction-tuned on more than 1,800 diverse tasks. Researchers claimed that Flan-T5’s improved prompting and multi-step reasoning capabilities could lead to significant gains over the base T5 models.
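Flan-T5 is an encoder-decoder model, so it uses the Seq2Seq classes rather than the causal-LM ones. A minimal sketch with the base checkpoint, passing the instruction directly in the prompt:

from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# google/flan-t5-base is the mid-sized public checkpoint; the instruction
# goes straight into the input text.
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

inputs = tokenizer("Translate to German: The weather is nice today.", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(output[0], skip_special_tokens=True))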

LLaMA

Meta announced LLaMA at the end of February 2023. Unlike OpenAI’s ChatGPT and Microsoft’s Bing, LLaMA was not opened to the general public; instead, Meta released the weights under a noncommercial license that members of the AI research community could request access to.

But just one week after Meta began accepting requests to access LLaMA, the model’s weights were leaked online, sending shockwaves through the tech community.
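For researchers who did obtain the weights, transformers later added dedicated LLaMA classes. A minimal sketch, assuming the gated weights have already been converted to the Hugging Face format at a hypothetical local path:

from transformers import LlamaForCausalLM, LlamaTokenizer

# /path/to/llama-7b-hf is a hypothetical local directory holding weights
# obtained from Meta and converted to the Hugging Face format.
tokenizer = LlamaTokenizer.from_pretrained("/path/to/llama-7b-hf")
model = LlamaForCausalLM.from_pretrained("/path/to/llama-7b-hf")

inputs = tokenizer("The key idea behind instruction tuning is", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(output[0], skip_special_tokens=True))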

Read here: 7 Ways Developers are Harnessing Meta’s LLaMA

Alpaca 

From the halls of Stanford University emerged Alpaca. The model was created by fine-tuning LLaMA 7B on 52,000 instruction-following demonstrations generated with OpenAI’s text-davinci-003 (a GPT-3.5 model). The whole exercise cost a mere $600, instead of the millions that training from scratch demands.

Since its release, Alpaca has been hailed as a breakthrough. Though it started small, with a Homer Simpson bot, the model quickly proved its versatility.
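Much of Alpaca’s behavior hinges on the instruction template used during fine-tuning. A minimal sketch of the no-input prompt format from Stanford’s Alpaca repository:

# The no-input variant of the Alpaca prompt template; the fine-tuned
# model expects its inputs wrapped this way.
ALPACA_PROMPT = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

print(ALPACA_PROMPT.format(instruction="Name three open-source LLMs."))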



Tasmia Ansari

Tasmia is a tech journalist at AIM, looking to bring a fresh perspective to emerging technologies and trends in data science, analytics, and artificial intelligence.