View All Posts
Want to keep up to date with the latest posts and videos? Subscribe to the newsletter
HELP SUPPORT MY WORK: If you're feeling flush then please stop by Patreon Or you can make a one off donation via ko-fi

We’re all (well, at least I am) getting pretty excited by what we see with ChatGPT and other large language models - it’s amazing progress in a ridiculously short space of time.

For example, I recently used it to generate code for Arduino projects - and it produced a working project - amazing!

But, a lot of people take great pleasure in pointing out that it often seems to make things up. It can have quite an interesting relationship with the “truth”.

There was a really interesting post on HackerNews recently that linked to a great paper - Playing Games with AIs: The Limits of GPT-3 and Similar Large Language Models - and reading this gave me a bit of a lightbulb moment.

Large Language Models (LLMs) are great at generating plausible text - but they’re not necessarily great at generating truthful text. They are learning the structure of language, not the facts of the world.
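You can see this structure-without-facts behaviour even in a toy model. Here's a minimal sketch - this is not how GPT works internally, it's just a word-level bigram model trained on three made-up sentences - showing how a model can produce a perfectly plausible continuation while being completely indifferent to which continuation is true:

```python
import random

# A tiny word-level bigram "language model": it learns which word tends
# to follow which, from a handful of training sentences. Like an LLM
# (at a vastly smaller scale), it captures the *shape* of the text,
# not the truth of any particular statement.
corpus = (
    "john was born in hull . "
    "mary was born in leeds . "
    "peter was born in york . "
).split()

# Count which words can follow each word
follows = {}
for a, b in zip(corpus, corpus[1:]):
    follows.setdefault(a, []).append(b)

def continue_text(start, n_words=4, seed=0):
    """Extend `start` by sampling from the bigram table."""
    rng = random.Random(seed)
    words = start.split()
    for _ in range(n_words):
        choices = follows.get(words[-1])
        if not choices:
            break
        words.append(rng.choice(choices))
    return " ".join(words)

print(continue_text("john was born in"))
```

Run it with different seeds and it will happily claim John was born in any of the three towns - the continuation is always plausible, but only sometimes true.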

There’s a great example in the paper where they ask GPT to continue the statement “John Prescott* was born in…”

When they tried this, GPT-3 generated the following completion:

in Hull on June 8th 1941.

This is a perfectly plausible answer - but it’s not true. John Prescott was born in Prestatyn on 31st May 1938.

It’s not answering the question “Where and when was John Prescott born?” - it’s just creating the most plausible continuation of the statement.

And it knows that a continuation of this kind of statement should be a relevant town and date - but it doesn’t necessarily know which town and which date it should use.

According to the paper, this is because GPT has not memorised all the facts about John Prescott - it’s compressed the semantic relationships about him.

Interestingly, when I tried the same prompt in the OpenAI Playground, the davinci model produced an accurate answer:


The curie model made up some information:


The babbage model really went to town:


And the ada model also produced a plausible but incorrect answer:


ChatGPT produced this:

John Prescott was born on May 31, 1938 in Prestatyn, Denbighshire, Wales. He is a British politician who served as the Deputy Prime Minister of the United Kingdom from 1997 to 2007, under Prime Minister Tony Blair. He was also the Member of Parliament for Hull East from 1970 to 2010, and served as the President of the Board of Trade, the Secretary of State for the Environment, and the Secretary of State for Transport.


It’s correct - and includes some nice extra details - pretty similar performance to the davinci model.
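If you want to reproduce this comparison yourself, it can be scripted rather than clicked through in the playground. A rough sketch - this assumes the openai Python package (the v0.x API) and the legacy completions endpoint as they existed at the time of writing; model names and the API surface change over time, so treat it as illustrative:

```python
# Sketch: compare how different GPT-3 base models continue the same prompt.
# Assumes the legacy `openai` package (v0.x) and an OPENAI_API_KEY
# environment variable; some of these model names have since been retired.
import os

PROMPT = "John Prescott was born in"
MODELS = ["davinci", "curie", "babbage", "ada"]

def compare_completions(prompt=PROMPT, models=MODELS):
    import openai  # pip install openai (v0.x API assumed here)
    openai.api_key = os.environ["OPENAI_API_KEY"]
    results = {}
    for model in models:
        response = openai.Completion.create(
            model=model,
            prompt=prompt,
            max_tokens=20,
            temperature=0,  # ask for the single most plausible continuation
        )
        results[model] = response.choices[0].text.strip()
    return results
```

With temperature set to 0 you get each model’s most plausible continuation - which, as we’ve seen, is not the same thing as the correct one.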

However, as I’m writing this I’ve just realised that I blindly accepted the additional information as true. I didn’t even bother to check it! (it is true - I’ve just checked on Wikipedia…).

This highlights the danger: as humans, we seem quite susceptible to believing things that seem plausible. This is probably quite an important part of just getting through the day - we don’t have time to check everything we read or hear - so we have to make some assumptions about what is true and what is not.

So, what does this mean for us? Is it safe to use things like ChatGPT?

If you’re looking for factual answers - then you need to verify what comes out of these models. And let’s be honest, you should be doing this with any source of information - we’ve come to assume that what we get from a Google search or a Wikipedia article must be true - but maybe we should be a bit more careful.

What we should avoid:

  • Getting factual answers and blindly trusting them
  • Trying to solve maths problems - for the love of everything that is holy stop trying this and then posting about how it failed
  • Anything that involves deep reasoning and deduction - it’s not a human brain

Things that I think are great for ChatGPT are:

  • Creating marketing copy - with a human reviewer in the loop
  • Generating code - but check that the APIs it suggests are real
  • Finding problems in code
  • Summarising code/text
  • Rubber ducking - talking to a computer can be a great way to work through a problem
  • Providing inspiration for ideas
  • And many many others…
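On the “check that the APIs it suggests are real” point: one cheap sanity check before running generated Python is to confirm that the modules and functions it calls actually exist. A minimal sketch (math.fast_inv_sqrt is a deliberately made-up name, standing in for a hallucinated API):

```python
import importlib

def api_exists(module_name, attr_name):
    """Return True if `module_name` can be imported and exposes `attr_name`."""
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        return False
    return hasattr(module, attr_name)

# A real API passes the check...
print(api_exists("math", "sqrt"))           # True
# ...while a hallucinated one fails it.
print(api_exists("math", "fast_inv_sqrt"))  # False
```

It won’t catch subtle mistakes (wrong arguments, deprecated behaviour), but it does catch the classic case of ChatGPT confidently inventing a function that simply doesn’t exist.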

As always - this is a constantly moving target - and I’m sure that it won’t be long before we have an intelligent library computer at our fingertips.

*John Prescott

John Prescott was quite a famous politician in the UK. He was Deputy Prime Minister from 1997 to 2007. He was also the Member of Parliament for Hull East from 1970 to 2010, and served as the President of the Board of Trade, the Secretary of State for the Environment, and the Secretary of State for Transport (this was actually filled in for me by Copilot - and yes, I have fact checked it!).


They don’t make them like that anymore…


Related Posts

Why does ChatGPT make mistakes - a layman's explanation - In this enlightening blog post, I dive into the tantalizing world of ChatGPT and Large Language Models. Clarifying its operation, I unlock this enigma by comparing its mechanisms to a simple language model. However, challenges arise due to the explosion of possible token combinations, leading to an inherent 'lossy' compression of our world's vast information. Surprisingly, even with such compression, these models can mimic human language in a compelling manner. I also investigate possible strategies to optimize this amazing technology - including zero-shot learning, one-shot learning, few-shot learning, and fine-tuning. Entering the era of prompt engineering and larger models, we're stepping into a thrilling future, so buckle up, folks!
Adding Memory To ChatGPT - Exploring the capabilities of ChatGPT, particularly GPT-4, I exposed a shortcoming regarding the model's ability to remember or store information it has 'thought' about during a dialogue sequence. Probing deeper, I developed an experimental system named ChatGPT Memory to input detailed information into the system like 'dreams', 'goals', 'inner dialogue' and more. While this method doesn't make the AI truly sentient, it definitely pushes the envelope and leads to interesting outputs. Although there are limitations, especially when handling more complex tasks, the enhancements present an exciting prospect for future iterations of the model.
Do you need a ChatGPT plugin? - We've seen two major shifts in technology trends with websites and mobile apps - now there's a third one rearing its head. OpenAI's ChatGPT with plugins is on the cards and you better be ready for it. In the midst of fumbling for answers to whether we need these plugins or not, let me reassure you that it's not too complex. Far from requiring a squad of specialist developers, all you need to know is how to make an API to create a plugin for ChatGPT. Yes, there are potential pitfalls around security and data protection, but with the right precautions, you will be fine. So, dear developer, explore, experiment and gear up for this exciting phase!
Using ChatGPT As a Co-Founder - In the quest to explore the capabilities of ChatGPT, I decided to utilize it as a startup brainstorming partner. From product description to building the product on AWS and GCP, crafting an elevator pitch, highlighting the ideal customer profile, sketching a business plan, and even generating a logo idea, ChatGPT has been surprisingly helpful and creative. We even explored potential team structures that can bring the business to life! Turns out, ChatGPT might just be the co-founder you never thought you needed.
I was wrong - we've not reached peak ChatGPT hype yet... - Strap yourselves in, folks, we're in for a wild ride! ChatGPT's new API has reignited my excitement for Large Language Models, just like the start of the dot com boom. With the pricing now 10 times cheaper, a flurry of creative and previously unthinkable use cases is within our grasp. Despite earlier doubts, I now believe we're scaling the peak of inflated expectations. Can't wait to see the innovative applications that will spring from this!

Related Videos

AI Powered Raspberry Pi Home Automation - Is this the future? - Witness the power of ChatGPT controlling home automation lights through a Raspberry Pi, making life easier with plugins. Delve into the fascinating world of large language models, redefining interactions with APIs.
Automating Blog Improvements with AI: Summaries, Tags, and Related Articles - Learn how to use ChatGPT to enhance your blog's homepage, create summaries and tags, find related articles, and generate post images with ease, leveraging AI to save valuable time and effort.
ChatGPT vs Stockfish: Can an AI Plugin Improve its Chess Game? - Watch as ChatGPT takes on Stockfish, a world-class chess engine, in a thrilling match! See how GPT utilizes a chess plugin to improve its gameplay and compete against the best.
Unlocking the Power of ChatGPT: Effortlessly Generate Arduino Code for Your Projects! - Witness ChatGPT's impressive potential for generating working Arduino code, as demonstrated in a step-by-step ESP32-based project utilizing a potentiometer and dot star LED.
Wordle Solving Robot - Learn how a Wordle-solving robot was built using a 3D printer and a Raspberry Pi, tackling challenges like locating the phone screen, mapping printer bed coordinates, and selecting the best possible guesses.

Chris Greening



A collection of slightly mad projects, instructive/educational videos, and generally interesting stuff. Building projects around the Arduino and ESP32 platforms - we'll be exploring AI, Computer Vision, Audio, 3D Printing - it may get a bit eclectic...
