Current artificial-intelligence systems are typically one of two types: logic-based or probability-based. But an MIT researcher has developed a new language, Church, that combines the best aspects of each, and it's making AI smarter than ever.
The first AI researchers, back in the 1950s, thought of the human mind as a set of rules that could be programmed, and they built systems based on logical inference: "if you know that birds can fly and are told that the waxwing is a bird, you can infer that waxwings can fly."
But with rules-based AI, every exception had to be accounted for. The systems couldn't figure out that there were types of birds that couldn't fly; they had to be told so explicitly. Later AI models gave up these extensive rule sets and turned to probabilities: "a computer is fed lots of examples of something - like pictures of birds - and is left to infer, on its own, what those examples have in common."
Church, a "grand unified theory of AI" developed by MIT researcher Noah Goodman, combines both systems, creating probability-based rules that are constantly revised as the system encounters new situations:
A Church program that has never encountered a flightless bird might, initially, set the probability that any bird can fly at 99.99 percent. But as it learns more about cassowaries - and penguins, and caged and broken-winged robins - it revises its probabilities accordingly. Ultimately, the probabilities represent all the conceptual distinctions that early AI researchers would have had to code by hand. But the system learns those distinctions itself, over time - much the way humans learn new concepts and revise old ones.
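The article shows no actual Church code, but the kind of belief revision it describes can be sketched as a simple Bayesian (Beta-Bernoulli) update in plain Python. This is an illustrative stand-in, not Church syntax; the prior counts and the `beta_bernoulli_update` function are assumptions chosen to mirror the 99.99 percent figure above:

```python
from fractions import Fraction

def beta_bernoulli_update(alpha, beta, observations):
    """Update a Beta(alpha, beta) prior over 'this bird flies'
    with True/False observations; return the posterior mean."""
    for flies in observations:
        if flies:
            alpha += 1
        else:
            beta += 1
    return Fraction(alpha, alpha + beta)

# A strong prior that birds fly: 9999 "flying" pseudo-observations
# against 1 "flightless", i.e. a 99.99 percent starting estimate.
prior = beta_bernoulli_update(9999, 1, [])

# Encountering cassowaries, penguins, and a broken-winged robin
# pulls the estimate down, without anyone hand-coding an exception.
posterior = beta_bernoulli_update(9999, 1, [False, False, False])
```

Each counterexample simply shifts the counts, so the system's confidence degrades gracefully with evidence instead of breaking on the first unlisted exception, which is the contrast with the rules-based approach described earlier.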
Researchers think that Church's fluidity will help it surpass current AI models, and in a test in which the system was charged with making predictions based on a set of observations, it did a "significantly better job of modeling human thought than traditional artificial-intelligence algorithms did." Church is still rough around the edges, and while it's effective at specific operations, it's too "computationally intensive" to tackle broader brain simulation at this point. But Goodman will continue working on the new system, and in the meantime, it will only be getting smarter. [MIT via Maria Popova]