
ChatGPT now has an official API - this means we can migrate our cocktail bot from the Davinci completion model to the ChatGPT API.

If you prefer video content, you can see the full walkthrough here (it's pretty short, as this is surprisingly easy to do) - the video also covers a few things I may have missed in this write-up.

You'll need to create an account on OpenAI and generate an API key - you'll need it later to make API requests. To do this, click on your profile picture and then on "Manage API Keys".
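Once you have a key, a common convention (and the one the code later in this post assumes) is to expose it as an environment variable rather than hard-coding it - the value below is just a placeholder:

```shell
# Replace the placeholder with your real key from "Manage API Keys"
export OPENAI_API_KEY="sk-your-key-here"
```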

First though, we need to do a bit of “prompt engineering” - yes, I’ve come to accept that this may actually be a real job at some point…

Prompt Engineering

Head over to the playground tab on OpenAI and we’ll create our prompt.

There’s now a new model available in the dropdown list - we’ll be using the “chat” model. Selecting this option brings up a new UI - the most important area of this is the “System” section. This is where we’ll be writing our prompt.

The first thing we’ll do is tell the language model what we want it to do and what it should know about. In our case we want it to be an expert in cocktails and alcoholic beverages.

You are an AI assistant that is an expert in alcoholic beverages.
You know about cocktails, wines, spirits and beers.
You can provide advice on drink menus, cocktail ingredients, how to make cocktails, and anything else related to alcoholic drinks.

The next thing we want to do is to try and keep our conversation as focused as possible. We don’t want the language model to get distracted by other topics. So we’ll tell it to give us a generic answer if we ask it about something it doesn’t know about.

If you are unable to provide an answer to a question, please respond with the phrase "I'm just a simple barman, I can't help with that."

We also want our bot to be helpful and friendly - no one wants to talk to a miserable bar person.

Please aim to be as helpful, creative, and friendly as possible in all of your responses.

I’ve also noticed in experimenting that occasionally the language model will refer to external URLs or blog posts - particularly when you ask it for details about a cocktail. So we’ll try and encourage it not to do that.

Do not use any external URLs in your answers. Do not refer to any blogs in your answers.

And finally, we want it to output lists in a nicely formatted way.

Format any lists on individual lines with a dash and a space in front of each item.

With the new chat model, this last instruction may not be needed. The chat model has been fine-tuned to give responses that people will like, so its output should already be nicely formatted.

That's our prompt - you can copy and paste the lines above into the playground.
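In code, this prompt goes in as the "system" message at the start of every request. Here's a minimal sketch using the `openai` Python library - `gpt-3.5-turbo` was the model name for the ChatGPT API at launch, the key is read from the `OPENAI_API_KEY` environment variable, and the system prompt is shortened here for brevity:

```python
import os

# The system prompt from the playground section above (abbreviated)
SYSTEM_PROMPT = (
    "You are an AI assistant that is an expert in alcoholic beverages. "
    "You know about cocktails, wines, spirits and beers. "
    "If you are unable to provide an answer to a question, please respond "
    "with the phrase \"I'm just a simple barman, I can't help with that.\""
)


def build_messages(question, history=None):
    """Assemble the request: system prompt first, then any prior exchanges, then the new question."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    messages.extend(history or [])
    messages.append({"role": "user", "content": question})
    return messages


def ask(question, history=None):
    # Imported here so build_messages can be used without the package installed
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=build_messages(question, history),
    )
    return response["choices"][0]["message"]["content"]
```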

Let's get our chatbot talking to us. Add a user question:

“What are some cocktails I can make at home?”

This should give you a nice list of cocktails.

One of the really important things for our chatbot is that we want it to use context from previous exchanges. So we can test that by adding a follow-up question - for example, we can ask what glasses we should use for the suggested cocktails.

Add another user question:

“What glasses do I need?”

You should get an answer that is relevant to the set of suggested cocktails.
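Under the hood, that follow-up works because the API is stateless: context only carries over if we resend the earlier exchange alongside the new question. The message list for the follow-up looks roughly like this (the assistant reply is a made-up example):

```python
# The model sees the whole conversation on every call - nothing is stored server-side
history = [
    {"role": "user", "content": "What are some cocktails I can make at home?"},
    {"role": "assistant", "content": "- Margarita\n- Mojito\n- Old Fashioned"},
]

# The follow-up only makes sense because the earlier messages are included
messages = history + [{"role": "user", "content": "What glasses do I need?"}]
```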

You can play around with the system prompt and see how your bot behaves. It should work pretty well, but there's always room for fine-tuning.

To make things a lot easier for you, I’ve created a very simple Python command line application that will let you test your bot easily. You just need to copy the prompt that you’ve created along with any settings into it and you’ll have a fully working chatbot.

You can find the code for this on GitHub: the code

Follow the instructions in the README to get everything set up - it’s pretty straightforward.

There are a few extra bells and whistles in the code. I’ve added moderation to the user questions - this is a really important thing for any chatbot that takes user input. You don’t want your bot to be used to spread hate speech or other offensive content.

OpenAI offers a nice API for this - which we’re simply plugging into.
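As a sketch, the check is a single call to the Moderation endpoint before the question is passed to the chat model - if any result comes back flagged, we refuse to answer. The function names and the refusal message here are my own, not from the companion repo:

```python
def is_flagged(moderation_response):
    """True if the Moderation endpoint flagged any part of the input."""
    return any(result["flagged"] for result in moderation_response["results"])


def safe_ask(question, answer_fn):
    """Run the moderation check before handing the question to answer_fn."""
    import openai  # assumes openai.api_key is already configured

    if is_flagged(openai.Moderation.create(input=question)):
        return "Sorry, I can't respond to that."
    return answer_fn(question)
```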

I know that moderation of user input can be a touchy subject - for some people it feels heavy-handed and stifles creativity, and others see any moderation as "wokeness gone mad" and an infringement of their right to free speech. I'm not going to get into that debate; suffice it to say, if you ever want to make your chatbot public, you'll be glad you added moderation.

To maintain conversation context I’m just keeping the last 10 questions and answers and sending these up to the API along with the new user question. This will work well for short conversations, but eventually the bot will start forgetting the start of the conversation.
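That sliding window is essentially a one-liner: each exchange is one user message plus one assistant message, so keeping the last 10 exchanges means keeping the last 20 messages (the function name is my own):

```python
MAX_EXCHANGES = 10  # one exchange = one user question + one assistant answer


def trim_history(history, max_exchanges=MAX_EXCHANGES):
    """Drop the oldest messages, keeping only the most recent exchanges."""
    return history[-2 * max_exchanges:]
```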

There are many more clever things you can do here - and some of that cleverness is what makes the ChatGPT implementation so impressive.

The code is amazingly simple - around 100 lines in total, and most of that is boilerplate API calls to OpenAI.

One last point - as with any of these Large Language Models, the output may look very plausible but could be completely wrong. I won’t be held responsible for any disgusting cocktails you make or hangovers you get.

Chris Greening


A collection of slightly mad projects, instructive/educational videos, and generally interesting stuff. Building projects around the Arduino and ESP32 platforms - we'll be exploring AI, Computer Vision, Audio, 3D Printing - it may get a bit eclectic...
