There’s a good reason why you don’t know how your data gets from one place to another on the internet. The major network cables that truss the United States have never been fully visible to the public–until now.
In 2006, US Senator Ted Stevens used a string of four words to describe the internet: a series of tubes. The phrase became iconic, a running punchline about the obliviousness of politicians in the internet age. It’s also invoked by a group of scientists behind the first public map of the internet’s major routes across the US. The group, led by the University of Wisconsin’s Paul Barford, says it’s just another example of how little we really know about the most pervasive resource of the modern age. The name of their paper even nods to Stevens–InterTubes: A Study of the US Long-haul Fiber-optic Infrastructure.
“Despite some 20 years of research efforts that have focused on understanding aspects of the Internet’s infrastructure,” they write in the paper, which MIT Tech Review’s Tom Simonite pointed out this week, “very little is known about today’s physical Internet where individual components such as cell towers, routers or switches, and fiber-optic cables are concrete entities with well-defined geographic locations.”
As the debate over net neutrality grows, our ignorance of the physical infrastructure of the web is actually becoming a liability.
The project to map the internet’s byways began in 2011, according to a presentation by Barford, when his team set out to figure out where the major cables connecting the US are actually located.
These aren’t the connections that bring the internet from a data center to your house. They’re “long-haul” routes that shoot data between major cities—the fiber optic interstates of the web. Because most of these connections were built by ISPs and providers like Comcast, EarthLink, and AT&T, finding them isn’t as easy as pulling up public data on a government website.
A “reeltender” named Mo Laussie installs fiber-optic cable in Colorado in 2001. Michael Smith/Getty Images.
Instead, they had to do serious detective work to piece together the routes. While some ISPs publish maps of their long-haul cables, others had to be investigated through secondary sources like public records: government documents from the permitting process that preceded construction, environmental impact statements filed during installation, even agreements between cable owners and state governments.
For example, they only learned about a cable in Colorado thanks to an FCC study about the broadband environment that described shared cables by Comcast and other ISPs there. Other public documents showed cables at different points of installation, from permitting to construction, helping the group develop a working map of major cables.
An Insight Cable facility in Springfield, Illinois, in 2007. AP Photo/Seth Perlman.
Still, piecing together this information took a huge amount of work. They decided to create a standard for which cables qualified as long haul: a cable had to connect cities of at least 100,000 people, run longer than 30 miles, or be shared by at least two providers.
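That qualification rule can be sketched as a simple filter. This is only an illustration: the thresholds come from the article, but the function, its parameters, and the city figures are invented here.

```python
# Hypothetical sketch of the long-haul test described above.
def is_long_haul(pop_a, pop_b, miles, providers):
    """A cable qualifies if it links big cities, runs far, or is shared."""
    links_big_cities = pop_a >= 100_000 and pop_b >= 100_000
    return links_big_cities or miles > 30 or len(providers) >= 2

# A long run shared by two providers qualifies on two counts:
print(is_long_haul(716_000, 200_000, 520, {"ISP-A", "ISP-B"}))  # True
# A short single-provider spur between two small towns does not:
print(is_long_haul(12_000, 8_000, 5, {"ISP-A"}))  # False
```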
Why did they care which ISPs shared infrastructure? Because the whole point of the study was to understand the risks baked into the current internet map: if many ISPs share the same cable, that cable is a potential point of failure. In one instance, they found conduits shared by as many as 19 different ISPs, including major links like the one between Denver and Salt Lake City, or Philly and New York City.
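The sharing analysis boils down to counting tenants per conduit. Here's a toy version of that idea; the conduit names and ISP labels are all made up, not data from the paper.

```python
# Given (conduit, ISP) pairs, rank conduits by how many ISPs depend on them.
from collections import defaultdict

links = [
    ("Denver-SLC", "ISP-A"), ("Denver-SLC", "ISP-B"), ("Denver-SLC", "ISP-C"),
    ("Philly-NYC", "ISP-A"), ("Philly-NYC", "ISP-D"),
    ("Austin-Dallas", "ISP-B"),
]

tenants = defaultdict(set)
for conduit, isp in links:
    tenants[conduit].add(isp)

# A cut of a heavily shared conduit takes down every tenant's path at once.
for conduit, isps in sorted(tenants.items(), key=lambda kv: len(kv[1]), reverse=True):
    print(conduit, len(isps))
```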
The final map, published in their paper and presented at SIGCOMM this August, is a tangle of jagged lines comprising 273 cities or hubs and 2,411 links. It’s the “first of its kind,” the authors say, but it’s far from complete. They’ve created a site for the project, which is now being expanded to include undersea cables and networks from other countries.
“Our goal in developing this archive and the associated portal is to use it as the ground-truth basis to address a variety of important questions on Internet robustness, performance, manageability and security,” Barford told me over email this week.
Here’s an important question: Why does it matter where the internet goes? Isn’t it robust enough that it’ll survive any specific attack or failure?
Sure. The internet, by virtue of its spiderweb-like redundancy, is pretty damn robust. But the paper also points out how dependent we are on it functioning, not just for YouTube but for the continuity of major infrastructure systems, security, and communication. If we don’t know where the juice that connects those systems comes from, we don’t have much recourse in protecting them when they fail, or if they’re attacked.
An AT&T mobile telephone switching office in 2012. Photo by John W. Adkisson/Getty Images.
Then there are the huge political implications of the web’s physical existence. The debate over net neutrality turns on whether ISPs should be classified as public utilities–which could be regulated by the FCC–or remain private entities. If they’re reclassified, the physical stuff, the cables and trenches and connections that Barford and his collaborators have mapped, would become public infrastructure, open to third parties regardless of which ISPs paid to build it. More carriers would share the same old infrastructure.
One big consequence of Title II opening these existing long-haul cables to other carriers is that sharing doesn’t add new routes: more carriers riding the same physical paths means less redundancy, not more. It’s an “unavoidable trade-off,” the authors say, and one that should be part of the net neutrality debate.
Another interesting point made by the paper? Look at their map of the internet, and you’ll see a rough proxy for other, older systems that connect the US. It looks a lot like the interstate map, as the study points out, as well as the railway map. That’s no surprise—our cities are in the same places, and in many cases these cables have been laid down in existing trenches holding infrastructure.
But should the internet mimic the structure of systems from older centuries? Over email, Barford said the team’s current focus is on ways to reduce the risk of depending on a few heavily shared cables, using a system based on Internet eXchange Points. Known as IXPs, these would be physical hubs where ISPs could exchange traffic freely, creating greater redundancy and lower latency than the current network. Perhaps states would even collaborate to support these hubs.
The work continues on the map, which is now being expanded to a global scale. But maybe the most important takeaway from the project is this: while lawmakers debate the future of internet infrastructure in America, there hasn’t been much information available about what that infrastructure really looks like–where the risks are, who carries that risk, and what can be done to mitigate it. In the end, the authors aren’t arguing for or against a specific approach to regulating the internet–they’re providing very useful evidence about it.
Contact the author at kelsey@Gizmodo.com.