Tech bros are always chasing the next big thing, so much so that some developers are already suggesting that chatbots like ChatGPT are old hat. The real big AI innovation, they say, is language model-powered AI “agents” that can carry out multiple tasks in a row.
Compared to the “prompt, response” model of current chatbots, agents like Auto-GPT can potentially write whole reams of code, build websites, or, in one surprising case, call a physical pizza place and place an order.
These agents are essentially self-contained systems that use modern generative AI models to automate tasks. Most agents use OpenAI’s ChatGPT and GPT-4 as a base, but several other homespun agents also fold in generative AI image and voice models to create some surprising, if sometimes creepy, results. These systems feed the AI’s outputs back into themselves, creating a program that can run semi-autonomously toward an overarching goal.
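That feedback loop can be sketched in a few lines of Python. Everything here is illustrative: `fake_llm` is a hypothetical stand-in for a real language model call (agents like Auto-GPT talk to OpenAI’s API instead), and the task queue and memory list are simplified versions of what real agents maintain.

```python
def fake_llm(goal, task, memory):
    """Hypothetical stand-in for a real language model call.

    A real agent would send the goal, the current task, and its memory to a
    model like GPT-4 and parse the reply into a result plus follow-up tasks.
    """
    result = f"result of: {task}"
    # A real agent would let the model propose new subtasks here;
    # this sketch simply stops once the initial list is exhausted.
    return result, []

def run_agent(goal, initial_tasks, max_steps=10):
    """Run the agent loop: pop a task, call the model, feed results back in."""
    tasks = list(initial_tasks)   # the agent's to-do queue
    memory = []                   # grows with every completed step
    steps = 0
    while tasks and steps < max_steps:
        task = tasks.pop(0)
        result, new_tasks = fake_llm(goal, task, memory)
        memory.append((task, result))   # outputs are fed back as context
        tasks.extend(new_tasks)         # the agent can spawn new subtasks
        steps += 1                      # cap steps so the loop can't run away
    return memory
```

The `max_steps` cap is the kind of guardrail these agents need in practice, since a model that keeps proposing new subtasks would otherwise loop forever.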
Say I wanted the AI agent to create a plan to upgrade my PC on a limited budget. In several agent frameworks, I can set it on tasks like “find and rank current graphics cards under $500 by price,” and then do the same for a CPU, RAM, and more. Then I can add a task like “use those lists to determine the best PC one can build for under $1,000.” Depending on the model, it could give me a good idea of where to find my next upgrade. It could also lock up and tell me it doesn’t know how to complete the task.
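The last step of that plan, picking the best combination of parts under a total budget, is simple enough that you can sketch it without any AI at all. The parts and prices below are made up purely for illustration; the agent’s job would be to fill in real, ranked lists like these before the final comparison runs.

```python
from itertools import product

# Made-up component lists standing in for the ranked lists the agent
# would produce; real parts and prices will differ.
parts = {
    "gpu": [("GPU A", 450), ("GPU B", 380)],
    "cpu": [("CPU A", 320), ("CPU B", 250)],
    "ram": [("RAM A", 120), ("RAM B", 90)],
}

def best_build(parts, budget):
    """Return the priciest combination of parts that still fits the budget."""
    best, best_cost = None, -1
    for combo in product(*parts.values()):
        cost = sum(price for _, price in combo)
        if best_cost < cost <= budget:
            best, best_cost = combo, cost
    return best, best_cost
```

With the toy numbers above, `best_build(parts, 1000)` lands on GPU A, CPU A, and RAM A at $890, the most you can spend without crossing $1,000. The fragile part isn’t this arithmetic; it’s trusting the agent to have gathered accurate lists in the first place.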
Compared to your regular old AI chatbot like ChatGPT, these AI agents can connect to the internet and search for information that isn’t in their training data. The other big selling point is memory: agents retain more context than a regular ChatGPT session. The thing is, while these agents work surprisingly well on very basic, specialized tasks, you really can’t leave them alone for too long. Large language models are already prone to spitting out false information, and chaining multiple calls together dramatically increases the likelihood of failure. AI can write code, but even one mistake can make the entire thing fail. Sure, you could automate routine code checks, but what if those fail as well?
So are agents actually the evolution of AI, or just a chain of Google searches? Well, the answer lies somewhere in the middle. Despite the moniker, AI simply isn’t intelligent by any real standard. These agents need quite a lot of guidance, and along the way there’s plenty of opportunity for the system to produce wrong information, spoiling the entire process. Until they become truly autonomous, these agents are little more than clever toys.
That doesn’t mean they aren’t interesting or don’t have the capacity to radically change how we currently think about AI. We’ve gone through some of the more interesting AI agent models currently out there, plus a few of the more dramatic agents built for specific tasks, which you can check out by clicking through.
Want to know more about AI, chatbots, and the future of machine learning? Check out our full coverage of artificial intelligence, or browse our guides to The Best Free AI Art Generators, The Best ChatGPT Alternatives, and Everything We Know About OpenAI’s ChatGPT.