AI for Engineers

September 6, 2023

[AI For Engineers Beta] Day 1: How to Make your own ChatGPT bot in 1 Hour with OpenAI’s API

Hello Beta testers! Welcome to the first of 7 planned lessons in the AI for Engineers email course! We are sending this out for the first time for your feedback - please look out for "BETA TESTERS" callouts!

For feedback, you can use the ABCD framework:

  • What’s Amazing?
  • What’s Boring?
  • What’s Confusing?
  • What Didn’t you believe?

And more generally, let us know where you got stuck, where we lost you, and where you want more detail! You can reply to Noah at nheindev@gmail.com and swyx at swyx@smol.ai - we'd love to improve this. Thanks for testing it out, and we greatly look forward to your feedback!


In this short project, we will take you from 0 to having a personal Telegram bot that you can ask questions to using a chat command, and get GPT-powered responses back… in 1 hour and 47 lines of code!

Follow these steps to get started:

🎥 If you would prefer to watch a video, you can follow along here

Spin Up a Python Repl

You’ll need to register an account on Replit. From there, click on “Create Repl”, choose the Python template, and give it a name.

Get A Bot Token From The BotFather

The BotFather is Telegram's API-key dispenser and bot manager.

Since we’re building our little chat through Telegram, you will need to get a token from him. Open your Telegram app on desktop, go to https://t.me/BotFather, and follow his commands to create your own bot.

If you haven’t interacted with bots on platforms like Telegram, Discord, or Slack, they usually use “slash commands” to understand what you want from them. When you first start your conversation with the BotFather he will show you all of the available commands you can use.

Since we want to create a new bot, we can use the /newbot command, and he will then ask you to give it a name. Once you’ve successfully named your bot, he will give you an API key as well as a link to your new bot. Copy your API key from the Telegram chat and head over to Replit, where we can save it for use in our program later.

Open Replit and you should see a Tools section in the bottom left. Click on the “Secrets” icon and a new tab will open up that looks something like this:

[Screenshot: the Replit Secrets tab]

Yours will not have anything populated yet, unlike mine in the photo. Enter TG_BOT_TOKEN in the Key field and paste your API key in the Value field. This is called an environment variable.

Moving forward, we can now refer to the value of your API key ( that looks something like 58768348752:AAFVDXd88qsAuhG5XkDYzy9FcUwNutjcsjDs ) as TG_BOT_TOKEN.

The naming convention for environment variables is to be in all caps, and to have words separated by underscores (fun fact, this is called SCREAMING_SNAKE_CASE). Environment variables are typically pieces of sensitive data that you wouldn’t want to publish to the internet or want others to know the value of.

Things that fall in this category are things such as API keys, which are unique to each individual user, and usually have the ability to rack up dollar amounts to the account from which the API key was generated. In this case, anyone who has the API key for the telegram bot has full unfettered access to the bot you just created. Be sure to keep these safe! That’s what the secrets tab in Replit is for.
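Here’s a quick sketch of how environment variables are read in Python. The `setdefault` line stands in for what the Replit Secrets tab does for you, and the token value is just the made-up example from above:

```python
import os

# For illustration only - in Replit, the Secrets tab sets this for you.
os.environ.setdefault("TG_BOT_TOKEN", "58768348752:AAFVDXd88qsAuhG5XkDYzy9FcUwNutjcsjDs")

# os.environ[...] raises a KeyError if the secret is missing,
# while os.environ.get(...) returns a default instead.
token = os.environ["TG_BOT_TOKEN"]
missing = os.environ.get("SOME_OTHER_KEY", "not set")

print(token)    # the secret value, never hard-coded in your source
print(missing)  # "not set"
```

Because the value lives outside your source code, you can share or publish the code without ever exposing the secret itself.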

One last thing before moving on: we're going to tweak the Replit config a little bit. Click on the 3 dots in the top left corner next to "Files", and click on "Show hidden files".

From there, you should see a new addition to your file explorer — .replit.

Copy this config into that file, as some of the packages we are working with will break without this config update.

entrypoint = "main.py"
modules = ["python-3.10:v18-20230807-322e88b"]
disableInstallBeforeRun = true
disableGuessImports = true

[nix]
channel = "stable-23_05"

[unitTest]
language = "python3"

[gitHubImport]
requiredFiles = [".replit", "replit.nix"]

[deployment]
run = ["python3", "main.py"]
deploymentTarget = "cloudrun"

Now, to get you into the action as quickly as possible, let's build your first slash command for your bot.

Step 1: Basic Bot Setup

Here’s a block of code; paste it into the main.py file.

import os
import logging
from telegram import Update
from telegram.ext import ApplicationBuilder, ContextTypes, CommandHandler

tg_bot_token = os.environ['TG_BOT_TOKEN']

messages = [{
  "role": "system",
  "content": "You are a helpful assistant that answers questions."
}]

logging.basicConfig(
  format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
  level=logging.INFO)

async def start(update: Update, context: ContextTypes.DEFAULT_TYPE):
  await context.bot.send_message(chat_id=update.effective_chat.id,
                                 text="I'm a bot, please talk to me!")

if __name__ == '__main__':
  application = ApplicationBuilder().token(tg_bot_token).build()

  start_handler = CommandHandler('start', start)
  application.add_handler(start_handler)

  application.run_polling()

This code imports some libraries that let you interface with the Telegram API in a very streamlined fashion. Before it can run, install the library by running the following command in the Replit shell, and then click the 'Run' button up top:

poetry add python-telegram-bot

Explaining the code:

We will pull out the API key for your bot using the line

tg_bot_token = os.environ['TG_BOT_TOKEN']

We can now reference the value of your API key without ever explicitly revealing what the value is to anyone looking at this code.

Then we set up a bit of logging.

Optional: Logging

Logging is like keeping a detailed diary for your program, where it records events and data about what it does and when. This "diary" or log file is extremely useful for understanding what happened in the past, helping to find and fix problems, or see how the program is being used.

logging.basicConfig(
  format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
  level=logging.INFO)

This bit isn’t strictly necessary, but is a good practice to get into, as it will provide you with better insights when something goes wrong and you start getting errors when trying to go off the beaten path.

Explaining the code

Next, we will walk through our first piece of bot code.

async def start(update: Update, context: ContextTypes.DEFAULT_TYPE):
  await context.bot.send_message(chat_id=update.effective_chat.id,
                                 text="I'm a bot, please talk to me!")

This is what every function to make our bot perform an action will look like. It’s an asynchronous function that has two parameters, an update, and a context.

update is an object that contains all of the information and data that is coming from Telegram itself. Things like the text of a message, the id of the user who sent it, etc.

context is an object that contains information and data about the status of the bot. This gets into more complex things like applications and job_queues.

For now, you can think of update as the incoming Telegram information, and context as where you tell the bot what to do.

When we call this function, we are going to use the context to tell the bot to send a message, with the context.bot.send_message() function. You can see that we pass two arguments to this function. The first one is chat_id, the value of which is update.effective_chat.id, which lets the bot know which user is sending the command, so it knows who to respond to. The second argument is text, to which we pass the generic statement: “I’m a bot, please talk to me!”.

So now, whenever we invoke this command, the bot will send a message to the user saying “I’m a bot, please talk to me!”.

With me so far? If this has been confusing at all please let us know. We’re trying to take this step-by-step.


**BETA TESTERS - we are very keen on your feedback - did we lose you at any point? Was anything too excessive? Please let us know**


Now that we have the function ready to go… how will the bot know when a command is invoked? How will it know what command it should respond to?

Here we can dig into the rest of the code to find the answers.

Explaining the Handler(s)

if __name__ == '__main__':
  application = ApplicationBuilder().token(tg_bot_token).build()

  start_handler = CommandHandler('start', start)

  application.add_handler(start_handler)

  application.run_polling()

The first line puts a condition on all of the subsequent code, ensuring that it only executes when main.py is run directly, not when it is imported as a module.

This is more of a best practice for extending and maintaining larger projects than something strictly necessary for you to implement right now. But it’s best to be exposed to these practices so you understand them when you see them in the wild.

application = ApplicationBuilder().token(tg_bot_token).build()

Here you can see us referencing the API key that we made earlier. This line of code is essentially booting up our bot using the API key we got from the BotFather earlier.

start_handler = CommandHandler('start', start)

This line is creating what is called a “Handler”. Handlers are responsible for handling different events that can trigger different actions in your bot. In this case, we create a CommandHandler which states that when it receives the command “start”, it should then run the start function we made earlier.

application.add_handler(start_handler)

This line tells our bot to pick up the handler we just made. So we’ve now successfully paired our specific bot, to this start_handler, which then calls the start function.

application.run_polling()

This is the very last thing we do. This tells the bot to begin polling.

Polling in programming is a method where your program repeatedly checks for a certain condition or change in state. It's like continuously asking "Are we there yet?", and scribbling in your notebook each time you get an answer.
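The "Are we there yet?" idea can be sketched as a toy loop. This is only an illustration of the concept, not how the library actually implements run_polling; the fake update queue is made up:

```python
import time

# Pretend Telegram has nothing, nothing, then a command waiting for us.
fake_updates = [None, None, "/start"]

def get_next_update():
    # Stand-in for asking Telegram's servers "anything new?"
    return fake_updates.pop(0) if fake_updates else None

received = []
for _ in range(5):          # the real loop runs until you stop the bot
    update = get_next_update()
    if update is not None:
        received.append(update)
    time.sleep(0.01)        # run_polling picks a sensible interval for you

print(received)  # ["/start"]
```

run_polling handles all of this for you: it keeps asking, collects whatever updates arrived, and routes each one to the matching handler.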

Step 2. Running The Bot

Now that you know what your code is doing, click on “Run” in Replit at the top. In the console, you should see it log something like:

2023-05-19 03:27:34,668 - telegram.ext.Application - INFO - Application started

Now you can go back over to telegram and open up a chat with your bot. You should now be able to invoke the /start command, and have it respond back to you.

[Screenshot: the bot replying to the /start command in Telegram]

You now have your very own Telegram bot! Now let’s supercharge it with some GPT powers.

Step 3: Adding OpenAI

You will need to register for an OpenAI account, set up a paid account, and generate an API key.

  • Copy your API key from OpenAI

    [Screenshot: the OpenAI API keys page]

  • Paste it into the Secrets tab the same way you did for the Telegram bot key. Name this one OPENAI_API_KEY.

    [Screenshot: the Replit Secrets tab with OPENAI_API_KEY added]

Now you can use your API key to access GPT programmatically. The power of AI is at your fingertips! We’re going to create another slash command that your bot can recognize. This new command will take in a question from the user, pass it to GPT, and then respond with the answer that GPT came up with.

Before adding more code, install the openai package with this command:

poetry add openai

(this guide is using version 1.13.3 at the current time, let us know if something isn't working for you!)

For the AI, you will need to import the openai python library and assign your API key to it.

At the top of your main.py file add the following

from openai import OpenAI
openai = OpenAI(api_key=os.environ['OPENAI_API_KEY'])

So now the top of your file should look something like this:

import os
import logging
from telegram import Update
from telegram.ext import ApplicationBuilder, ContextTypes, CommandHandler
from openai import OpenAI # This is new!

openai = OpenAI(api_key=os.environ['OPENAI_API_KEY']) # This is new!
tg_bot_token = os.environ['TG_BOT_TOKEN']

Now, we can go through the same flow we did for the start command.

Step 4: Create Chat Function

async def chat(update: Update, context: ContextTypes.DEFAULT_TYPE):
  messages.append({"role": "user", "content": update.message.text})
  completion = openai.chat.completions.create(model="gpt-3.5-turbo",
                                              messages=messages)
  completion_answer = completion.choices[0].message
  messages.append(completion_answer)

  await context.bot.send_message(chat_id=update.effective_chat.id,
                                 text=completion_answer.content)

This follows the same pattern as before: we have update and context available to us in the function. But this time, instead of blindly yelling back at the user, we pass the message the user sends to OpenAI and respond dynamically.

The first thing we do is append the user’s question to our global messages variable, in the format the GPT model expects.

Then we call the openai.chat.completions.create function.

This takes in model and messages as arguments. There are other optional arguments you can pass in, such as temperature and top_p. Cohere has an in-depth post about these settings and how they can affect your outputs.

  • model - The large language model you would like to use to complete the request. OpenAI has a whole list of them.
  • messages - This is an array of objects. Each object has a role and a content key.

message.content is the actual text, provided either by the LLM or by the user.

message.role has 3 different options: it can be system, user, or assistant:

  • system sets the bot’s persona and ground rules - you can think of it as the ‘base prompt’, almost.
  • user is the entity asking the questions.
  • assistant is the LLM giving the response.
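To make the three roles concrete, here is a hypothetical messages array after one question-and-answer round trip (the question and the reply are made-up examples):

```python
# Hypothetical conversation history after one round trip:
messages = [
    {"role": "system",
     "content": "You are a helpful assistant that answers questions."},
    {"role": "user", "content": "What is the capital of France?"},
    {"role": "assistant",
     "content": "The capital of France is Paris."},  # made-up model reply
]

# The whole list is resent on every request - that is how the
# model "remembers" earlier turns of the conversation.
print(messages[-1]["content"])
```

This is also why our code appends both the user's question and the model's answer to the global messages list each time.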

So in this call, we are telling the LLM via the system role to be a helpful assistant that answers questions. Then we pass it the user’s question by plugging update.message.text into the content field.

The response we get from OpenAI is a rather large object, and we don’t want to send the user this big blob of JSON. The answer from the model is found at completion.choices[0].message.content, so that is what we will pass to the user.

We will then append that answer to our global messages array as well, so that we keep a record of both the user’s question and the bot’s response.

We will instruct the bot to reply to the user with the same send_message command that we used previously.

For all of the OpenAI instructions above, you can get a sense of the structure of everything, as well as additional configuration options, by using their playground.
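To make the optional settings mentioned above concrete, here is a sketch of how temperature and top_p could be passed to the same create call. The values are arbitrary examples, not recommendations:

```python
# Illustrative sampling settings (example values, not recommendations):
request_kwargs = {
    "model": "gpt-3.5-turbo",
    "temperature": 0.7,  # higher = more varied wording; 0 = most deterministic
    "top_p": 1.0,        # nucleus sampling; common advice is to tune
                         # temperature OR top_p, not both at once
}

# These would be spliced into the call from the chat function like so:
# completion = openai.chat.completions.create(messages=messages, **request_kwargs)
print(request_kwargs["temperature"])
```

Experimenting with temperature in the playground first is an easy way to see its effect before changing your bot.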

Step 5: Adding Another Handler

Following the same pattern you used for the /start command - create a CommandHandler, then connect the handler to the bot - you will now wire up the /chat command. Add the two new lines shown below in that same block at the very bottom.

if __name__ == '__main__':
  application = ApplicationBuilder().token(tg_bot_token).build()

  start_handler = CommandHandler('start', start)
  chat_handler = CommandHandler('chat', chat) # new chat CommandHandler created

  application.add_handler(start_handler)
  application.add_handler(chat_handler) # New chat handler connected

  application.run_polling()

All in all, your script should look like this:

⚠️ Be sure to stop the running process and restart it after modifying all of your code. If you don’t do that the bot won’t update.

Using The /chat Command

Now when you run the bot with this new code, you can go back to your bot and use the /chat command, followed by any prompt or question you would ask ChatGPT. For example:

[Screenshot: the bot answering a /chat question in Telegram]

Conclusion

If you use it for a bit you can immediately tell there are some UX pitfalls here:

  • You have to restart the application for each new question
  • It also isn’t streamed word-by-word as the response is coming back
  • There isn’t really any indication of whether your request worked by looking only at Telegram. You just kind of sit there waiting for a response.

With all of that being said, you can really say that YOU made this, and that’s super exciting. This is really just the starting point, and you could make several improvements to this project if you so wished.

Bonus

For advanced developers: we highly encourage implementations on platforms other than Replit (e.g. Vercel, Fly.io) - please send us your repos! We'll list them here with credit.

Here are some other jumping-off points where you could progress this on your own

  • Try messing with the ‘system’ role initial prompt
      • Make it talk like a pirate
      • Make it not give up a secret
  • Try gpt-3.5-turbo vs GPT-4 - what difference do you see?
  • Move to a different chat platform, for example
      • WhatsApp
      • Slack
      • Discord

As a final resource, if you are looking to up your prompt game, we’d recommend Andrew Ng’s Prompt Engineering for Developers course (1.5hrs) done in collaboration with OpenAI.
