Could the U.S. economy stop growing forever?


We take economic growth for granted — it's one of the defining characteristics of our economy that it grows, year after year. Any period when the economy doesn't grow is called a recession, which we treat as a temporary economic emergency.


Top image: Kanu101/Flickr.

But economics professor Robert J. Gordon argues that we shouldn't assume economic growth will continue forever — there was virtually no economic growth before 1700, when the first Industrial Revolution started, and it's entirely possible there will be none after 2050. In a "deliberately provocative" paper, Gordon argues that there were three separate Industrial Revolutions, and only one of them spurred massive growth:

  • IR #1 (steam, railroads) from 1750 to 1830;
  • IR #2 (electricity, internal combustion engine, running water, indoor toilets, communications, entertainment, chemicals, petroleum) from 1870 to 1900; and
  • IR #3 (computers, the web, mobile phones) from 1960 to present.

... IR #2 was more important than the others and was largely responsible for 80 years of relatively rapid productivity growth between 1890 and 1972.

Once the spin-off inventions from IR #2 (airplanes, air conditioning, interstate highways) had run their course, productivity growth during 1972-96 was much slower than before. In contrast, IR #3 created only a short-lived growth revival between 1996 and 2004. Many of the original and spin-off inventions of IR #2 could happen only once – urbanisation, transportation speed, the freedom of women from the drudgery of carrying tons of water per year, and the role of central heating and air conditioning in achieving a year-round constant temperature.

But Paul Krugman responds that the growth thanks to IR #3 isn't really over, and may not have started in earnest — because we could soon be getting more massive increases in productivity due to smart machines taking over tasks previously performed by humans. Says Krugman:

Not that much progress has been made in producing machines that think the way [humans] do. But it turns out that there are other ways of producing very smart machines. In particular, Big Data - the use of huge databases of things like spoken conversations - apparently makes it possible for machines to perform tasks that even a few years ago were really only possible for people. Speech recognition is still imperfect, but vastly better than it was and improving rapidly, not because we've managed to emulate human understanding but because we've found data-intensive ways of interpreting speech in a very non-human way.

And this means that in a sense we are moving toward something like my intelligent-robots world; many, many tasks are becoming machine-friendly. This in turn means that Gordon is probably wrong about diminishing returns to technology.


Chip Overclock®

(I haven't read the Gordon or Krugman articles yet, but coincidentally, links to them showed up in my RSS reader on Slashdot just a few minutes ago and I printed both of them out.)

I've been thinking about this in another context on and off for years, and the recent article here on The Great Filter and the Fermi Paradox set me off again. I wonder if my professional interests in technological scalability and my dilettante interests in economics and the Great Silence might all be connected.

In my professional work, particularly with large distributed systems and supercomputing, I frequently see issues with scalability. Often it becomes difficult to scale up performance with problem size. Cloud providers like Google have addressed many problems that we thought were intractable in the past, as has the application of massively parallel processing to many traditional supercomputer applications. But the ugly truth is that cloud/MPP really only solves problems that are "embarrassingly parallel", that is, that naturally break up into many mostly independent parts.
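As a toy illustration (my own sketch, not any particular cloud API), an embarrassingly parallel workload is one you can hand to a process pool with no coordination at all between tasks:

```python
from concurrent.futures import ProcessPoolExecutor

def expensive(x):
    # Stand-in for an independent unit of work: no shared state,
    # no communication with any other task.
    return sum(i * i for i in range(x))

if __name__ == "__main__":
    inputs = [10_000, 20_000, 30_000, 40_000]
    # Because each task is independent, adding workers scales
    # throughput almost linearly -- the "embarrassing" part.
    with ProcessPoolExecutor() as pool:
        parallel = list(pool.map(expensive, inputs))
    assert parallel == [expensive(x) for x in inputs]
```

The moment tasks need to exchange intermediate results — as in, say, a fluid-dynamics simulation where each cell depends on its neighbors — this pattern stops applying, and communication costs start to dominate.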

(I've written at length about this problem in a previous post, which likely falls under the tl;dr category.)

Many problems will remain intractable because they are NP-hard, that is, the only algorithms known to solve them exactly scale super-polynomially (for example, exponentially) with problem size. There are lots of problems in this category. Lucky for all of us, the asymmetry works in encryption's favor: encrypting and decrypting with the key takes polynomial time, while breaking the code without the key is (so far as we know) not feasible in polynomial time. True, codes become easier to break as processing power increases, but adding a few more bits to the key increases the work necessary to crack them exponentially.
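To make the key-length point concrete, here's a back-of-the-envelope sketch (my own, not from the paper or the post): a brute-force attack must try on the order of 2^k keys, so every added bit doubles the attacker's work.

```python
def brute_force_trials(key_bits):
    # Worst-case number of keys an exhaustive search must try.
    return 2 ** key_bits

# Doubling the attacker's processing power buys back exactly one bit.
assert brute_force_trials(129) == 2 * brute_force_trials(128)

# Going from a 64-bit to a 128-bit key multiplies the work by 2**64,
# far outpacing any plausible improvement in hardware.
assert brute_force_trials(128) // brute_force_trials(64) == 2 ** 64
```

This is why the defender's cost (a slightly longer key) grows linearly while the attacker's cost grows exponentially.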

I've been thinking about which problems in economics are in fact NP-hard. For example, it could be that the strategies necessary to manage an economy more or less optimally are fundamentally NP-hard. This is one of the reasons pro-free-market people give for free markets: market forces encourage people to "do the right thing" independent of any central management. It really is a kind of crowd-sourced economic management system.
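As a hypothetical illustration (my own construction, not Gordon's or Krugman's) of why centrally computing an optimal allocation blows up: deciding which projects to fund under a fixed budget is essentially the 0/1 knapsack problem, which is NP-hard, and brute force must examine all 2^n subsets of n projects.

```python
from itertools import combinations

def best_allocation(projects, budget):
    # projects: list of (cost, value) pairs.
    # Brute force examines every subset -- 2**len(projects) candidates --
    # so adding one project to the economy doubles the planner's work.
    best_value, best_subset = 0, ()
    for r in range(len(projects) + 1):
        for subset in combinations(projects, r):
            cost = sum(c for c, _ in subset)
            value = sum(v for _, v in subset)
            if cost <= budget and value > best_value:
                best_value, best_subset = value, subset
    return best_value, best_subset

projects = [(4, 5), (3, 4), (2, 3), (5, 8)]
value, chosen = best_allocation(projects, budget=9)
```

Four projects means 16 candidate allocations; forty projects means about a trillion. A market, by contrast, never computes this optimum at all — each actor makes a local decision, which is exactly the crowd-sourcing intuition.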

But suppose that there's a limit — in both a computational and a temporal sense — to how well an economy can work as a function of the number of actors (people, companies) in the economy relative to its resources. Maybe there's some fundamental limit beyond which, if a civilization hasn't yet achieved interstellar travel, it becomes impossible for it ever to do so. This can be compared to the story of Pacific islanders who got stuck on their island when they cut down the last tree; no more big ocean-going canoes.

(Libertarian economist Tyler Cowen has argued in his book THE GREAT STAGNATION that the U.S. has historically taken advantage of "low hanging fruit", like cheap energy and education, to grow its economy, and that those days may be over.)

Henry Kissinger once famously said "Every civilization that has ever existed has ultimately collapsed." I wonder if this is the result of fundamental non-scalable economic principles, and is in part the explanation for the Fermi Paradox.