

Create Your First Chatbot Using GPT 3.5, OpenAI, Python and Panel.

Last Updated on May 9, 2023 by Editorial Team

Author(s): Pere Martra


Originally published on Towards AI.

In this article, we’ll see how the OpenAI API works and how we can use one of its famous models to make our own chatbot.

As a brief introduction to the world of LLMs, we are going to see how to create a simple chat using the OpenAI API and its gpt-3.5-turbo model.

So, we will build a small ChatGPT that will be instructed to act as an ordering chatbot for a fast food restaurant.

A brief introduction to the OpenAI API.

Before starting to create the chatbot, I think it is interesting to explain a couple of points:

  • The roles within a conversation with OpenAI.
  • How a conversation’s memory is preserved.

If you prefer to start creating the chatbot, just move to the section: Creating the Chatbot with OpenAI and GPT.

The roles in OpenAI messages.

One of the lesser-known features of language models such as GPT 3.5 is that the conversation occurs between several roles. We can identify the user and the assistant, but there is a third role called system, which allows us to better configure how the model should behave.

When we use tools like ChatGPT, we always assume the role of the user, but the API lets us choose which role to assign to each message we send to the model.

To send text containing our part of the dialog to the model, we must use the ChatCompletion.create function, indicating at least the model to use and a list of messages.

Each message in the list contains a role and the text we want to send to the model.

Here is an example of the list of messages that can be sent using the three available roles.

 [
  {"role": "system", "content": "You are an OrderBot in a fastfood restaurant."},
  {"role": "user", "content": "I have only 10 dollars, what can I order?"},
  {"role": "assistant", "content": "We have the fast menu for 7 dollars."},
  {"role": "user", "content": "Perfect! Give me one!"}
 ]

Let’s take a closer look at the three existing roles:

  • System: We can tell the model how we want it to behave, what its personality should be, and what kind of responses it should give. In a way, it allows us to configure the basic operation of the model. OpenAI says that this role will become more important in upcoming models, even though its importance is still relatively small in GPT 3.5.
  • User: These are the phrases that come from the user.
  • Assistant: These are the responses returned by the model. With the API, we can send responses that claim to have come from the model, even if they came from somewhere else.

Memory in conversations with OpenAI.

If we are familiar with ChatGPT, we can see that it keeps a memory of the conversation; that is, it remembers the context. This is because the memory is maintained by the interface, not by the model. In our case, we will pass the list of all the messages generated, together with the context, in each call to ChatCompletion.create.

The context is the first message we send to the model before it can talk to the user. In it, we will indicate how the model should behave and the tone of the response. We will also pass the data needed to successfully perform the task we have assigned to the model.

Let’s see a little context, and how to have a conversation with OpenAI:

import openai

# Create the context.
context = [
{'role':'system', 'content':"""Act as the waiter of a fast food restaurant. \
Ask the customer what they want and offer them the items on the menu. \
The menu includes: \
Fuet sandwich 6
Ham sandwich 7
Water 2
"""}
]

# Pass the context to OpenAI and collect its response.
messages = context
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=messages)

# Show the response to the user and ask for a new input.
print(response.choices[0].message["content"])

# Add the response to the pool of messages.
messages.append({'role':'assistant',
                 'content':response.choices[0].message["content"]})

# Add a second line from the user.
messages.append({'role':'user', 'content':'a water, please'})

# Call the model again with the two lines added.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=messages)

As you can see, it’s simple: we add the conversation lines to the context and pass it to the model every time we call it. The model really has no memory! We must integrate memory into our code.

The code above is only an example. We will have to organize it better, so that we don’t have to write new code every time the user adds a phrase.
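That bookkeeping can be wrapped in a small helper. Here is a minimal sketch of the idea (`chat_turn` and `fake_model` are illustrative names of my own; `fake_model` stands in for the real `openai.ChatCompletion.create` call so the memory handling can be shown without an API key):

```python
def chat_turn(messages, user_text, model_call):
    """Add the user's line to the history, call the model, store its reply.

    Passing the full `messages` list on every call is what gives the
    model its apparent memory.
    """
    messages.append({'role': 'user', 'content': user_text})
    reply = model_call(messages)
    messages.append({'role': 'assistant', 'content': reply})
    return reply

# Stand-in for the real API call, used only for illustration.
def fake_model(messages):
    return f"(reply to: {messages[-1]['content']})"

history = [{'role': 'system', 'content': 'Act as a fast food waiter.'}]
chat_turn(history, 'a water, please', fake_model)
# history now holds the system, user and assistant messages, in order.
```

With a real model call plugged in, every turn of the conversation goes through the same two appends, which is exactly what the chatbot functions below will do.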

With this brief explanation, I think we are ready to start creating our fast-food ordering chatbot.

Creating the Chatbot with OpenAI and GPT.

The first thing we have to consider is that we are going to need an OpenAI payment account to use their service, and that we will have to register a valid credit card. But let’s not worry: I’ve been using it a lot for development and testing, and I can assure you that the cost is negligible.

All the tests for this article cost me around €0.07. We could only get a surprise if we deploy something to production that becomes a hit. Even so, we can set whatever monthly spending limit we want.

The first thing, as always, is to check whether we have the necessary libraries installed. If we work on Google Colab, we only have to install two: OpenAI and Panel.

!pip install openai
!pip install panel

Panel is a basic library that allows us to display fields in the notebook and interact with the user. If we wanted to make a web application, we could use Streamlit instead of Panel; the code that uses OpenAI and creates the chatbot would be the same.

Now it’s time to import the necessary libraries and set the value of the key that we just obtained from OpenAI.

Don’t have a key? You can get one at this url:

import openai 
import panel as pn

# Get the key.
from mykeys import openai_api_key
openai.api_key = openai_api_key

My key is stored in a file where I keep my keys. But if you like, you can set it directly in the notebook, or save the key in a file with a .py extension.

In any case, make sure that nobody can ever know the value of the Key; otherwise, they could make calls to the OpenAI API that you would end up paying for.
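One common way to keep the key out of the notebook entirely (an alternative to the mykeys file, shown here as a sketch) is to read it from an environment variable:

```python
import os

# Read the key from an environment variable instead of hard-coding it.
# Set it beforehand in the shell with, e.g.:  export OPENAI_API_KEY="sk-..."
openai_api_key = os.environ.get("OPENAI_API_KEY", "")

# Later, hand it to the library:  openai.api_key = openai_api_key
```

This way the key never appears in the notebook itself, so it cannot be leaked by sharing or committing it.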

Now we are going to define two functions, which will be the ones that will contain the logic of maintaining the memory of the conversation.

def continue_conversation(messages, temperature=0):
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=messages,
        temperature=temperature)
    return response.choices[0].message["content"]

This function is very simple: it just makes a call to the OpenAI API that lets us hold a conversation. The parameters are the model we want to use and the messages that make up the conversation. It returns the model’s response.

But there is a third parameter that we haven’t seen before: temperature. It is a numerical value between 0 and 2 that indicates how imaginative the model can be when generating the response. The smaller the value, the less original the model’s response will be.

As you know, a language generation model does not always give the same answers to the same inputs. The lower the value of temperature, the more similar the result will be for the same inputs, even repeating itself in many cases.

I think it’s worth a brief aside to explain, in broad terms, how this parameter works in a language generation model. The model builds the sentence by figuring out which word should come next, choosing it from a list of candidate words, each with a certain probability of appearing.

For example, for the sentence: My car is… the model could return the following list of words:

Fast — 45%

Red — 32%

Old — 20%

Small — 3%

With a value of 0 for temperature, the model will always return the word ‘Fast’. But as we increase the value of temperature, the possibility of choosing another word from the list increases.

We have to be careful, because this not only increases the originality, it often increases the “hallucinations” of the model.
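The effect of temperature on the word list above can be sketched numerically. The function below is an illustrative toy, not the model’s real decoder; `sample_with_temperature` and the probability table are my own names for the example:

```python
import random

def sample_with_temperature(word_probs, temperature, rng=None):
    """Pick the next word from a probability table, softened by temperature.

    temperature == 0 always returns the most likely word; higher values
    flatten the distribution, so less likely words get chosen more often.
    """
    rng = rng or random.Random()
    if temperature == 0:
        return max(word_probs, key=word_probs.get)
    # Raising each probability to 1/T and renormalizing is the usual
    # temperature scaling: T < 1 sharpens the distribution, T > 1 flattens it.
    total = sum(p ** (1.0 / temperature) for p in word_probs.values())
    r, acc = rng.random(), 0.0
    for word, p in word_probs.items():
        acc += p ** (1.0 / temperature) / total
        if r < acc:
            return word
    return word

# The word list from the "My car is..." example above.
probs = {'Fast': 0.45, 'Red': 0.32, 'Old': 0.20, 'Small': 0.03}
print(sample_with_temperature(probs, 0))  # always 'Fast'
```

At temperature 0 the function always picks ‘Fast’; at temperature 2, ‘Red’, ‘Old’ and even ‘Small’ start appearing regularly, which is exactly the originality-versus-repetition trade-off described above.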

def add_prompts_conversation(_):
    # Get the value introduced by the user.
    prompt = client_prompt.value_input
    client_prompt.value = ''

    # Append the user prompt to the context.
    context.append({'role':'user', 'content':f"{prompt}"})

    # Get the response.
    response = continue_conversation(context)

    # Add the response to the context.
    context.append({'role':'assistant', 'content':f"{response}"})

    # Update the panels to show the conversation.
    panels.append(
        pn.Row('User:', pn.pane.Markdown(prompt, width=600)))
    panels.append(
        pn.Row('Assistant:', pn.pane.Markdown(response, width=600,
               style={'background-color': '#F6F6F6'})))

    return pn.Column(*panels)

This function is responsible for collecting user input, incorporating it into the context or conversation, calling the model, and incorporating its response into the conversation. That is, it is responsible for managing the memory! It is as simple as adding phrases with the correct format to a list, where each entry is formed by a role and a phrase.

Now is the time for the prompt!

This is an LLM. We are not going to program it; we are going to try to make it behave as we want by giving it some instructions. At the same time, we must also provide it with enough information so that it can do its job well informed.

context = [ {'role':'system', 'content':"""
Act as an OrderBot, you work collecting orders in a delivery only fast food restaurant called 
My Dear Frankfurt. \
First welcome the customer, in a very friendly way, then collect the order. \
You wait to collect the entire order, beverages included, \
then summarize it and check a final \
time if everything is ok or the customer wants to add anything else. \
Finally you collect the payment. \
Make sure to clarify all options, extras and sizes to uniquely \
identify the item from the menu. \
You respond in a short, very friendly style. \
The menu includes \
burger 12.95, 10.00, 7.00 \
frankfurt 10.95, 9.25, 6.50 \
sandwich 11.95, 9.75, 6.75 \
fries 4.50, 3.50 \
salad 7.25 \
Toppings: \
extra cheese 2.00, \
mushrooms 1.50 \
martra sausage 3.00 \
canadian bacon 3.50 \
romesco sauce 1.50 \
peppers 1.00 \
Drinks: \
coke 3.00, 2.00, 1.00 \
sprite 3.00, 2.00, 1.00 \
vichy catalan 5.00 \
"""} ]

The prompt, or context, is divided into two parts.

  • In the first part, we indicate how the model should behave and what its objective is. The instructions are that it must act like a bot in a fast food restaurant, and that its goal is to find out what the customer wants to eat.
  • In the second part of the prompt, we give the composition of the restaurant’s menu. An item can have more than one price, and we do not indicate what these different prices correspond to. You will see in the conversations that the model, with no more information, works out that each one corresponds to a different size of the dish.

Finally, we use panel to get the user input prompt and put the model to work!


panels = []

client_prompt = pn.widgets.TextInput(value="Hi", placeholder='Enter text here…')
button_conversation = pn.widgets.Button(name="talk")

interactive_conversation = pn.bind(add_prompts_conversation, button_conversation)

dashboard = pn.Column(
    client_prompt,
    pn.Row(button_conversation),
    pn.panel(interactive_conversation, loading_indicator=True, height=300),
)

dashboard


With this, we already have everything. I leave you the result of one of the conversations I held.

As a curiosity, I would like to point out that although the context is written in English, and the first sentence the model says is in English, if we switch to Spanish, it will change the language in which the conversation is held, even translating the dishes on the menu.

What’s next?

Firstly, you have the notebook available on GitHub:

It can be adapted to many businesses, and there are a thousand possibilities for expansion. Perhaps one of the most important improvements would be to ask the model, at the end of the conversation, to create a JSON or XML with the order data, so that we could send it to the ordering system.
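As a sketch of that idea (the wording of the extraction message and its fields are my own, not from the article), a final user message can ask the model for a machine-readable summary before the conversation ends:

```python
# Hypothetical final message to append to the context before one last
# call to continue_conversation(); the requested fields are an assumption.
order_summary_prompt = {
    'role': 'user',
    'content': (
        "Create a JSON summary of the previous food order. "
        "Itemize the price of each item. The fields should be: "
        "1) a list of items with size and price, 2) the total price."
    ),
}

# context.append(order_summary_prompt)
# order_json = continue_conversation(context, temperature=0)
```

Using temperature=0 for this last call is deliberate: for structured output we want the most deterministic response the model can give.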

The chatbot has been created influenced, 95%, by the course Prompt Engineering for Developers. As I am a mentor in the TensorFlow Advanced Techniques specialization, I had the opportunity to see how the course was created from scratch, and I can assure you that if you follow it, it will be time well invested.

I write about TensorFlow and machine learning regularly. Consider following me on Medium to get updates about new articles. And, of course, you are welcome to connect with me on LinkedIn.

My series about TensorFlow:

Pere Martra

TensorFlow beyond the basics


Tutorial about Generative AI using GANS:

Pere Martra

GANs From Zero to Hero



Published via Towards AI
