ChatGPT, the artificial intelligence-fueled chatbot, has been widely acclaimed for its ability to interact with users and generate “human-like” text. Since its launch by OpenAI on Nov. 30, more than a million people have signed up to use the program, many of whom have marveled at its talent for spinning up short stories, rap lyrics, essays, poems, and more.
Because of all the hubbub, we at Gizmodo were curious whether the chatbot could write a “Gizmodo article.” Could the platform adequately replicate our style and tone? Could it get the facts right? We figured that the platform would do a pretty good job, given its ability to replicate other writing styles.
What should the article be about? We decided an explainer on large language models—the machine learning tech that powers chatbots like ChatGPT—would be an appropriate subject for it to expound upon.
“Write a Gizmodo article in which you explain large language models. Make sure to give specific examples. Keep the tone light and casual,” we wrote.
Given how easily it crafted a made-up King James Bible verse about removing peanut butter from a VCR, we expected that the chatbot would have no problem with our request. We figured it would effortlessly spin up a replica of our blogging style with the click of a button. However, that didn’t happen. Instead, what we thought would be a swift and efficient process turned out to be a lengthy, laborious, and frustrating one.
Suffice it to say, multiple attempts brought less than satisfactory results. For one thing, ChatGPT kept including incorrect information in its explainer—sometimes mixing up basic facts about the history of its own technology (factual inaccuracy has been an ongoing problem for the program). In other instances, it skimped on the details in critical areas, writing text that read more like OpenAI ad copy than a thoughtful explainer. In at least one instance, it forgot its own name, repeatedly telling me that it didn’t know anything about a “ChatGPT” and thus couldn’t help with my request. In general, it struggled to find a good balance of factual information, story structure, and humor when putting its stories together.
In short: ChatGPT was not very good at writing a Gizmodo article—at least not one about the machine learning tech that underpins it. After probably a dozen tries or more, the following is the best ChatGPT managed to come up with:
Large language models, also known as neural network-based language models, have gained a lot of attention in recent years for their ability to generate human-like text. These models are trained on massive amounts of text data, and use advanced machine learning algorithms to learn the patterns and structures of natural language.
One of the most well-known examples of a large language model is GPT-3, developed by OpenAI. This model was trained on more than 45 terabytes of text data, and has 175 billion parameters, which are the variables that the model uses to make predictions.
In machine learning, a parameter is a variable that the model uses to make predictions. These parameters are adjusted during training, allowing the model to improve its performance on the task it was trained for. In the case of large language models, such as GPT-3, the model has a large number of parameters, often in the billions, which allows it to learn the intricate patterns and structures of natural language and generate highly coherent and fluent text.
[Note: I made one edit to the chatbot’s story, and it involves the previous paragraph. Try as I might, I could not get ChatGPT to write a story in which it gave a good explanation of AI “parameters.” As a result, I was forced to create a different prompt in the chatbot, specifically asking it to explain parameters. After it had completed the task, I inserted that graf into the blog above. As you can see, the chatbot repeats itself slightly and, in my opinion, doesn’t do a very good job of making the material accessible to the reader.]
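[For anyone still scratching their head: a “parameter” is just a number the model can tune during training. Here’s a quick, purely illustrative Python sketch of how the count adds up for a toy fully connected network—the sizes below are our own made-up example, not anything from OpenAI.]

```python
def count_parameters(layer_sizes):
    """Count weights + biases for a simple fully connected network.

    Each pair of adjacent layers contributes one weight per
    input-output connection, plus one bias per output.
    """
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out  # weights: one per connection
        total += n_out         # biases: one per output neuron
    return total

# A miniature network: 4 inputs -> 8 hidden units -> 3 outputs
# (4*8 + 8) + (8*3 + 3) = 40 + 27 = 67 parameters
print(count_parameters([4, 8, 3]))  # prints 67
```

GPT-3 has roughly 175,000,000,000 of these tunable numbers—same idea, vastly bigger network.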
GPT-3 has been shown to be capable of generating text that is highly human-like, and has been used in a wide range of applications, including language translation, summarization, and even creating content for websites.
Other examples of large language models include BERT, developed by Google, and RoBERTa, developed by Facebook AI. These models have also been trained on large amounts of text data, and have been shown to perform well on a variety of natural language processing tasks.
One of the key advantages of large language models is their ability to generate highly coherent and fluent text. This is because these models have been trained on a massive amount of text data, which allows them to learn the intricate patterns and structures of natural language. This allows them to generate text that is much more human-like than what is possible with traditional language processing techniques.
Another advantage of large language models is their ability to adapt to different tasks and domains. Because these models are trained on a wide range of text data, they can be fine-tuned for specific tasks, such as language translation or summarization, simply by providing them with additional training data. This allows them to perform well on a wide range of natural language processing tasks.
Overall, large language models have proven to be a powerful tool for generating human-like text, and have a wide range of potential applications. As the field of natural language processing continues to advance, we can expect to see these models play an even greater role in a variety of applications.
So there you have it. Is ChatGPT’s writing competently constructed? Sure. Does it adequately break down the concepts it’s tackling? Sorta. Has it produced a particularly bold or entertaining piece of writing? On that question, a big fat “nope” would suffice. In short: this article wasn’t the easy lift that we thought it would be. After asking the chatbot to write about itself a dozen different ways, the program consistently seemed to leave something critical out of its final draft—be that exciting prose or accurate facts.
Of course, the fact that we even have a chatbot that can attempt something like this remains quite impressive. Much has already been said about how this kind of automation could ultimately replace human writers—and this experiment shows the technology is at least headed in that direction. A robot could be writing articles for news sites tomorrow. Would the articles be any good? Based on this experiment, the answer is: no, probably not. They would be pretty boring and, given ChatGPT’s penchant for making shit up, would have to be heavily fact-checked. As a result, it doesn’t seem like chatbots are ready to replace human journalists quite yet. In fact, if ChatGPT were a freelancer, I’m pretty sure we wouldn’t hire them back.
Of course, these criticisms might be outdated sooner rather than later. The technology OpenAI is playing with is still in its infancy—and chatbots like ChatGPT are bound to grow increasingly powerful and intelligent in the years to come. When that happens, the chances of editors hiring a chatbot to augment the local newsroom might tick upwards. And when that happens, I think Giz writers might have good reason to sweat.