

Exploring the World of Generative AI, Foundation Models, and Large Language Models: Concepts, Tools, and Trends


Last Updated on July 25, 2023 by Editorial Team

Author(s): Claudio Giorgio Giancaterino

Originally published on Towards AI.


Artificial Intelligence (AI) has made tremendous strides in recent years, largely driven by advances in Deep Learning. With the advent of ChatGPT last year, the popularity of Generative AI surged, along with a flurry of often-confusing terms: Foundation Models, Large Language Models, GPT-3 and GPT-4, PaLM and PaLM 2, LLaMA and LLaMA 2, Falcon, ChatGPT, Bard, Claude 2, and so on. The aim of this article is to clarify the concepts around Generative AI and explore current trends and tools, with the caveat that it is not exhaustive on the subject and focuses on text content.

Generative AI is a subfield of the Artificial Intelligence universe that has grown steadily, and especially rapidly since the advent of ChatGPT last year. The term Generative AI refers to Deep Learning models capable of generating new content such as text, images, video, audio, structures, and so on.

Foundation Models are a type of Generative AI that are trained on large amounts of unstructured data in an unsupervised manner in order to learn general representations that can be adapted to perform multiple tasks across different domains. They aim to provide a foundation for building many different AI applications. Foundation models have advantages in performance, efficiency and scalability over conventional AI models that are trained on task-specific data.

Foundation models, for instance, can be built for climate change purposes, using geospatial data to improve climate research. Another example can be the development of Foundation models for coding, helping to complete code as it’s being authored.

Large Language Models (LLMs) are a subset of Foundation Models focused on text generation and understanding. They are trained on vast amounts of textual data.

OK, let's look at some examples.

GPT-3 was released by OpenAI in 2020. It has 175 billion parameters and was trained on 300 billion tokens, with a context window of 2,048 tokens. GPT-4, released in March 2023, improves on GPT-3 with a context window of up to 32,768 tokens. It is a multimodal model that can accept image and text inputs and produce text outputs.
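A model's context window caps how many tokens it can attend to at once, so longer documents must be split into chunks that fit. A minimal sketch of that idea, assuming a naive whitespace "tokenizer" (real models use subword tokenizers such as BPE, so actual counts will differ):

```python
def chunk_tokens(text, window=2048):
    # Naive whitespace "tokenizer" -- real LLMs use subword
    # tokenizers (BPE/SentencePiece), so token counts differ.
    tokens = text.split()
    # Greedily pack tokens into chunks no longer than the window.
    return [tokens[i:i + window] for i in range(0, len(tokens), window)]

doc = ("token " * 5000).strip()   # a 5,000-token toy document
chunks = chunk_tokens(doc, window=2048)
print([len(c) for c in chunks])   # -> [2048, 2048, 904]
```

With a 2,048-token window (as in GPT-3), this toy document needs three calls to the model; a 32,768-token window (GPT-4) would fit it in one.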

PaLM was released by Google in 2022 with 540 billion densely activated parameters, trained on 780 billion tokens with a context window of 2,048 tokens. Google launched PaLM 2 in May 2023; it is faster, relatively smaller, and more cost-efficient because it serves fewer parameters, supports more than 100 languages, and reaches a context window of 8,000 tokens. It is not multimodal like GPT-4, but multimodal capability has been added with Med-PaLM 2, limited to the medical domain.

LLaMA was developed by Meta and released in February 2023, with models ranging from 7 billion to 65 billion parameters trained on over a trillion tokens. Since July 2023, LLaMA 2 has been available; it improves on the first version, reaching 70 billion parameters and doubling the context window from 2,048 to 4,096 tokens.

Falcon was developed by the Technology Innovation Institute (TII) and released in May 2023, with models ranging from 7 billion to 40 billion parameters trained on one trillion tokens of high-quality web data. It can be downloaded from Hugging Face.

Dolly was developed by Databricks and released in March 2023. It has 12 billion parameters, is based on EleutherAI's Pythia model, and was fine-tuned on a 15,000-record instruction corpus generated by Databricks employees.

All these LLMs use a Transformer-based model to predict the next token in a document, with some differences in their architectures.
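That shared objective is simple to state: given the tokens so far, the model outputs a score (logit) for every token in its vocabulary, converts the scores to probabilities, and picks or samples the next token. A minimal sketch of one decoding step, with a made-up four-word vocabulary and made-up logits standing in for a real Transformer's output:

```python
import math

def softmax(logits):
    # Convert raw scores to probabilities (numerically stable).
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Toy vocabulary and hypothetical logits a model might emit
# after the prompt "The cat sat on the" -- purely illustrative.
vocab = ["mat", "dog", "moon", "roof"]
logits = [4.0, 1.5, 0.5, 2.0]

probs = softmax(logits)
next_token = vocab[probs.index(max(probs))]  # greedy decoding
print(next_token)  # -> mat
```

Real models repeat this step autoregressively, appending each chosen token to the input, and usually sample from the distribution (with a temperature) rather than always taking the argmax.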

You can explore the Hugging Face Open LLM Leaderboard, which tracks, ranks, and evaluates open LLMs and chatbots as they are released.

These models are the engine on which Generative AI tools like ChatGPT are built. To use a car analogy, LLMs are the engine, while chatbots are the bodywork.

ChatGPT is the first Generative AI chatbot OpenAI brought to market, in November 2022. It is fine-tuned from the GPT-3.5 or GPT-4 Large Language Models using Reinforcement Learning from Human Feedback (RLHF). It lets us chat in a conversational way and supports many tasks, such as answering questions, writing summaries, debugging code, generating text, and more.

Bard is the leading competitor to OpenAI's ChatGPT. Developed by Google and released in February 2023, it was initially based on the LaMDA Large Language Model and is now powered by PaLM 2. It works like ChatGPT and can understand and generate text in many languages, with the difference that it is updated in real time: it can access information from the web to provide more accurate and higher-quality answers.

h2oGPT belongs to the new generation of chatbots and was developed by H2O.ai. It supports a variety of models, including GPT-3.5 Turbo, LLaMA 2, and Falcon. You can use it either online or locally, and its bake-off UI mode lets you compare the outputs of different models side by side.

Claude 2 was developed by Anthropic and works like ChatGPT, understanding and generating text while aiming to give harmless, up-to-date responses. It is a promising rival to ChatGPT and Bard.

Given that ChatGPT works and is genuinely useful for raising productivity, it is not just a passing fad but a business with the opportunity to grow in the coming years. For this reason, Google has invested in Bard, and other companies have decided to enter the market with the chatbots shown above and other Generative AI tools.

The next picture shows the trending interest in ChatGPT versus Bard and the other chatbots mentioned above, worldwide over the last 30 days, according to Google Trends.

Claude 2 and h2oGPT overlap at a low level of interest, so the real competition is between ChatGPT and Google Bard, with the former holding a clear advantage by a wide margin so far.

Looking at the comparison over a one-year window, ChatGPT reached its maximum level of interest during the spring of 2023 and has since dipped slightly.

From these two pictures and the next one, the question on my mind is: in the coming months or years, will ChatGPT's competitive advantage be eroded, or, having entered the market first, has it consolidated itself as a permanent tool for users, much as Google Search did compared to other search engines?

Enjoy your favorite Generative AI Chatbot.


Published via Towards AI
