
That moment when I told ChatGPT it needed a history lesson, and it agreed with me

ChatGPT is awfully good at most things, but occasionally it's just awful

Image: Sam Altman wearing ChatGPT armor (Microsoft Copilot / Microsoft)

I had an experience this week that forcefully reminded me that ChatGPT and Google's Gemini are great but not perfect. And to be clear, I have jumped into the AI pool with both feet and am enthusiastic about the long-term prospects. However, I believe we need to tap the brakes on the irrational exuberance and the belief that AI can do everything, everywhere, all at once.

The specific project that broke ChatGPT's back was obscure, but it should not have been that tough. My daughter is finishing her doctoral dissertation and was trying to generate a map comparing the borders of the Byzantine Empire in the year 379 AD versus 457 AD.


Here is the prompt that I ran through Deep Research:

Create a detailed map that overlays the borders of the Byzantine empire in 379AD at the start of the reign of Theodosius the Great versus the borders in 457AD at the end of the reign of Marcian. I need both borders shown clearly on a single map.

Use a historical map style and highlight major cities.

The Deep Research option is powerful but often time-consuming. As it runs, I enjoy watching the play-by-play in the details window. ChatGPT did an excellent job of generating a text analysis of the changing borders, major cities, and historical events.

The wheels fell off the bus when I asked ChatGPT to turn its text analysis into an easy-to-read map.

Without digging too deeply into the minutiae of the fifth-century world, the point is that it made up names, misspelled names, and placed cities at random. Notice that Rome appears twice on the Italian peninsula. What is particularly frustrating about this effort is that the names and locations were correct in the text.

I tried patiently asking for spelling corrections and proper placement of well-known cities without success. Finally, I told ChatGPT that its results were garbage and threw up my hands. To its credit, ChatGPT took the criticism in stride. It replied, "Thank you for your candor. You are right to expect better." Unfortunately, things did not get better.

After a few minutes of cursing at the platform, I decided to give Google Gemini a shot at the identical query. Shockingly, its results were even worse. If you look at the image below, you will see "Rome" in the middle of the Iberian Peninsula. Antioch appears three or four times across Europe, and many of the other names are right out of fantasy novels.

I was complaining about this mapping chaos to a friend, and he shared a similar story. He uploaded a photo from a small offsite meeting to ChatGPT and asked it to add the words "Mahalo from Hawaii 2025" above the group of colleagues. Instead of just adding the text, the engine completely changed the image: it made people skinnier, it changed men into women, and it turned an Asian colleague into a Caucasian. Another friend told me that an AI-generated biography of him mentioned his twin children, which he does not have. It even provided a link to a non-existent source. Yikes.

Ronald Reagan used to say: "Trust, but verify."

My point is not to suggest that we run away from AI and cancel all our subscriptions. Rather, it is to remind everyone (me included) that we cannot hand the keys to the AI engines and walk away. They are tools that can assist us, but in the end, we need to look at the output, see if it looks and smells right, and decide whether to accept it. It is clear that the performance of AI engines is uneven: excellent at some projects and terrible at others, such as mapping.

We will probably see the rise of the machines someday, but today is not the day.

Peter Horan
Peter has published a number of technology magazines and sites over the years. His current passion is AI.