How Google Crunches All That Data

If data centers are the brains of an information company, then Google is one of the brainiest there is. Though always evolving, it is, fundamentally, in the business of knowing everything. Here are some of the ways it stays sharp.

For tackling massive amounts of data, the main weapon in Google's arsenal is MapReduce, a system developed by the company itself. Whereas other frameworks require a thoroughly tagged and rigorously organized database, MapReduce breaks the process down into simple steps, allowing it to deal with any type of data, which it distributes across a legion of machines.

Looking at MapReduce in 2008, Wired imagined the task of determining word frequency in Google Books. As its name would suggest, the MapReduce magic comes from two main steps: mapping and reducing.

The first of these, the mapping, is where MapReduce is unique. A master computer evaluates the request and then divvies it up into smaller, more manageable "sub-problems," which are assigned to other computers. These sub-problems, in turn, may be divided up even further, depending on the complexity of the data set. In our example, the entirety of Google Books would be split, say, by author (but more likely by the order in which they were scanned, or something like that) and distributed to the worker computers.
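To make the map step concrete, here is a minimal, single-machine sketch in Python of the classic MapReduce word-count example. It is not Google's actual infrastructure, which spreads this work across thousands of machines; the function name and the convention of emitting (word, 1) pairs are illustrative assumptions for this toy version.

```python
def map_words(book_id, text):
    """Map step: emit a (word, 1) pair for every word in one chunk of the corpus.

    In a real MapReduce job, many worker machines would each run this
    function over their own slice of the scanned books.
    """
    for word in text.lower().split():
        yield word, 1
```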

Then the data is saved. To maximize efficiency, it remains on the worker computers' local hard drives rather than being sent, the whole petabyte-scale mess of it, back to some central location. Then comes the second main step: reduction. Other worker machines are assigned specifically to the task of grabbing the data from the computers that crunched it and paring it down to a format suitable for solving the problem at hand. In the Google Books example, this second set of machines would reduce and compile the processed data into a list of individual words and the frequency with which each appeared across Google's digital library.
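Continuing the toy sketch, the framework first groups the intermediate pairs by key (the "shuffle"), and a reduce function then collapses each group into a single result. Again, this is a hedged, single-process illustration rather than Google's distributed implementation, and the function names are made up for the example.

```python
from collections import defaultdict

def shuffle(mapped_pairs):
    """Group the intermediate (word, 1) pairs by word, as the framework
    does before handing each group to a reducer."""
    groups = defaultdict(list)
    for word, count in mapped_pairs:
        groups[word].append(count)
    return groups

def reduce_counts(word, counts):
    """Reduce step: collapse all of one word's counts into a single total."""
    return word, sum(counts)
```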

The finished product of the MapReduce system is, as Wired says, a "data set about your data," one that has been crafted specifically to answer the initial question. In this case, the new data set would let you query any word and see how often it appeared in Google Books.
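Wiring the two sketches above together shows what that "data set about your data" looks like in miniature. The two "books" here are invented placeholders, not real Google Books text, and everything runs in one process instead of across a data center; it reuses the map_words, shuffle, and reduce_counts functions from the earlier snippets.

```python
# Two made-up "books" stand in for the scanned corpus.
books = {
    "book-1": "the cat sat on the mat",
    "book-2": "the dog chased the cat",
}

# Map: emit (word, 1) pairs from every book, then shuffle them by word.
pairs = (pair for book_id, text in books.items()
              for pair in map_words(book_id, text))

# Reduce: the result is the word-frequency data set you can query directly.
frequencies = dict(reduce_counts(word, counts)
                   for word, counts in shuffle(pairs).items())

print(frequencies["the"])  # -> 4
print(frequencies["cat"])  # -> 2
```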

MapReduce is one way in which Google manipulates its massive amounts of data, sorting and resorting it into different sets that reveal new meanings and have unique uses. But another Herculean task Google faces is dealing with data that's not already on its machines. It's one of the most daunting data sets of all: the internet.

Last month, Wired got a rare look at the "algorithm that rules the web," and the gist is that there is no single, fixed algorithm. Rather, Google rules the internet by constantly refining its search technologies: charting new territory like social media and, with personalized search, refining the ground users tread most often.

But of course it's not just about matching the terms people search for to the websites that contain them. Amit Singhal, a Google search guru, explains: "You are not matching words; you are actually trying to match meaning."

Words are a finite data set. And you don't need an entire data center to store them—a dictionary does just fine. But meaning is perhaps the most profound data set humanity has ever produced, and it's one we're charged with managing every day. Our own mental MapReduce probes for intent and scans for context, informing how we respond to the world around us.

In a sense, Google's memory may be better than any one individual's, and complex frameworks like MapReduce ensure that it will only continue to outpace us in that respect. But in terms of the capacity to process meaning, in all of its nuance, any one person could outperform all the machines in the Googleplex. For now, anyway. [Wired, Wikipedia, and Wired]

Memory [Forever] is our week-long consideration of what it really means when our memories, encoded in bits, flow in a million directions, and might truly live forever.

DISCUSSION

I remember the early-to-mid '90s, before both IE and Netscape, when the web was basically brand new and finding anything worthwhile was a hit-or-miss affair. Many sites were, like mine, no more than some version of "Hello World." If you found a site, chances are it was either down, broken in some way, or useless. Actually finding it in the first place was another story.

Sites more useful than mine and the other Hello Worlders would include the author's bookmarks, aka the early Yahoo. Providing a link to Yahoo on your own page became the thing to do, helping out your fellow traveller.

As everyone knows, though, it got huge fast. Search engines and web directories tried to wrangle it all, but for the most part they matched words. At a certain point the talk began to center on the impossibility of ever properly cataloging this information, and it did very much seem like an impossible task. Even though there were now gazillions of pages, they were still useless because you couldn't find them when you needed them.

Then came Google. Its claim to fame was its stripped-down look and its speed. In fact, along with delivering your results, it also showed you how long the search had taken, and it still does.

When Google came along I was a Lycos man, if memory serves. I remember a programmer coming into my office and urging me to try it. At first blush it seemed the same as the others, maybe not as good given its spartan appearance. But I stuck with it for a bit, and it wasn't long before it became clear that its actual claim to fame was the relevance of its results. Yes, the mighty and mysterious algorithm had been born.