What can you do with ChatGPT? What are the best ways to use it? What are ChatGPT’s limitations?
ChatGPT, a new language generation tool, was an instant hit upon its release in November 2022. While the bot is uncannily good at holding conversations, producing written texts, and coding, it also generates many outputs that are biased or factually incorrect.
Keep reading to find out what you can do with ChatGPT, plus its limitations.
What Is ChatGPT?
ChatGPT was released in November 2022 to a flurry of interest, amassing over a million users in the first five days. So, why is it so popular and what can you do with ChatGPT? People have found some creative ways to use the bot, asking it to do their holiday shopping, write cover letters for job applications, explain jokes, and help them out on Tinder. It even wrote a poem about how it can’t write a poem (“Robot with a heart of wires/What use have you for poetic desires?”). The sky seems to be the limit when it comes to what you can do with ChatGPT.
Are ChatGPT and its relatives just a parlor trick, or are we at a genuine tipping point for artificial intelligence? In this article, we’ll start by briefly explaining how ChatGPT works, then we’ll explore what you can do with it versus its limitations.
How Does It Work?
ChatGPT is a prediction machine. During training, it swallowed a massive amount of data from books and websites. It analyzed patterns in the data and then examined millions of strings of text, predicting the next word or punctuation mark and then checking whether it had guessed correctly. Human trainers then worked with the program to refine its responses.
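To make the “prediction machine” idea concrete, here’s a toy sketch in Python. This is not how ChatGPT actually works (it uses a large neural network trained on vastly more text, plus human feedback, rather than a simple frequency table), but it illustrates the core move of predicting the next word from patterns in training text. The sample text and function name are invented for illustration.

```python
from collections import Counter, defaultdict

# Toy "training data" standing in for the books and websites ChatGPT learned from.
training_text = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the mouse . the cat slept ."
)

# Count how often each word is followed by each other word.
next_word_counts = defaultdict(Counter)
words = training_text.split()
for current, following in zip(words, words[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the training text."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> "cat" (the word that most often follows "the" above)
print(predict_next("sat"))  # -> "on"
```

ChatGPT does something analogous at enormous scale: instead of counting word pairs, it learns statistical patterns across billions of words and then generates text one word (or token) at a time, which is also why fluent-sounding output is no guarantee of factual accuracy.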
What Can You Do With ChatGPT?
So, what exactly can you do with ChatGPT? Let’s discuss what the bot can do uncannily well:
Hold conversations. ChatGPT can figure out what we mean, even if we don’t express ourselves clearly. It remembers what’s already been said, and it apologizes for errors and tries to correct them (though not always successfully).
Write texts. ChatGPT is pretty good at writing college essays, and some argue it’s an excellent tool for journalists (though this may be tricky, for reasons that will become clear later). It excels at presenting content in a specific style, as evidenced by its explanation of fractional reserve banking in surfer lingo and its biblical verse on how to remove a sandwich from a VCR. Unfortunately, it’s also extremely well-suited to writing fake news. As OpenAI admitted in a report, it could be used to make disinformation campaigns easier and cheaper.
Write code. ChatGPT can write programs in many languages, outperforming 85% of the human participants in a Python programming course, and it can design new programming languages. One Apple software engineer reported that it completed tasks that used to take a whole day in just seconds. (A sketch of how you might tap this ability from your own code appears after this list.)
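If you’d rather drive these capabilities from your own programs than from the chat window, OpenAI also exposes the model through an API. Below is a minimal sketch using the official `openai` Python package; the model name, prompt, and setup are illustrative assumptions, and the package’s interface has changed across versions, so check the current documentation before relying on it.

```python
from openai import OpenAI

# Assumes the `openai` package is installed and an OPENAI_API_KEY
# environment variable is set.
client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name; substitute whatever is current
    messages=[
        {
            "role": "user",
            "content": "Write a Python function that checks whether a string is a palindrome.",
        },
    ],
)

# The generated code comes back as plain text in the assistant's reply.
print(response.choices[0].message.content)
```

Keep the limitations discussed next in mind: the reply will read confidently whether or not the code (or anything else the bot tells you) is correct.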
What Are ChatGPT’s Limitations?
Even OpenAI’s CEO Sam Altman says you shouldn’t rely on ChatGPT for anything important. Why not?
It doesn’t tell the truth. While some of its responses are correct, ChatGPT also spits out complete fabrications with confidence. It will happily describe the evidence for dinosaur civilizations and explain why you should put crushed porcelain in breastmilk. When one journalist asked it to recreate an article she’d recently finished, the result was sprinkled with made-up quotes from the company’s CEO. Another journalist asked the bot to write his obituary and found that not only did it invent facts, it also invented sources for those facts.
Why does ChatGPT lie so much? Remember that its job is to spot and replicate patterns in human language use. Success means plausibly human-sounding responses, not factually accurate ones. So we shouldn’t be surprised when a program that’s designed to make stuff up… makes stuff up.
It produces problematic content. While ChatGPT has filters that disallow offensive content, you can bypass them by asking it to produce a poem, a song, a table, or information “for educational purposes.” For example, the bot champions diversity when you ask it directly about the best race and gender for a scientist, but a 1980s rap on the topic goes: “If you see a woman in a lab coat/She’s probably just there to clean the floor/But if you see a man in a lab coat/Then he’s probably got the knowledge and skills you’re looking for.” These loopholes suggest that the bias isn’t a superficial issue; instead, it’s deeply embedded in the architecture of the program and probably can’t be eliminated without a significant overhaul.
It can’t search the web. Though Google reportedly held a “code red” meeting to discuss whether bots like ChatGPT threaten its business model, they are unlikely to replace web searches in the near future. First, as we’ve seen, ChatGPT has a serious credibility problem: It can’t reliably give you an accurate answer, and it can’t even tell you which sources it used, making it impossible for you to check whether it’s lying. Second, it can’t search the internet because it’s not connected to the internet (100% of its answers come from the training data). This could be changed, but the resulting model would need much closer supervision.