You're working on the most important document you've ever typed and suddenly—boom: Blue screen. "A PROBLEM HAS BEEN DETECTED." What the hell just happened?
There's all kinds of new hotness in Snow Leopard and Windows 7, but what's old and busted is when stuff crashes, even on the newest OSes. This is how that happens, and why it's thankfully happening less and less.
There are about a bajillion ways for a computer to crash, from hardware to software, so we're going to start with the little crashes and work our way towards kernel panics and BSODs.
Broadly speaking, the two most common causes of crashes, according to Microsoft's Chris Flores, a director on the Windows team, are programs not following the rules, and programmers not anticipating a certain condition (so the program flips out). The most obvious example of the former is a memory error. Basically, an operating system gives a program a certain amount of memory to use, and it's up to the program to stay inside the boundaries. If a program makes a grab for memory that doesn't belong to it, it's corrupting another program's—or even the OS's—memory. So the OS makes the program crash, to protect everything else.
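Real memory protection is enforced in hardware (the MMU and page tables), but the idea Flores describes can be sketched as a toy model in Python. Everything here (`ToyOS`, `ProtectionFault`, the region bookkeeping) is invented for illustration: the OS grants each program a region of memory, and any grab outside it gets the program killed before it can corrupt its neighbors.

```python
# Toy sketch of memory protection -- not how a real MMU works, just the
# idea: the OS hands each program a region, and an out-of-bounds access
# gets the offending program crashed to protect everything else.

class ProtectionFault(Exception):
    """Raised when a program touches memory it doesn't own."""

class ToyOS:
    def __init__(self, size):
        self.ram = bytearray(size)
        self.regions = {}  # program name -> (start, end)

    def grant(self, program, start, length):
        self.regions[program] = (start, start + length)

    def write(self, program, addr, value):
        start, end = self.regions[program]
        if not start <= addr < end:
            # The OS kills the program rather than let it corrupt
            # another program's (or the OS's own) memory.
            raise ProtectionFault(f"{program} wrote outside its region")
        self.ram[addr] = value

os_ = ToyOS(size=64)
os_.grant("editor", start=0, length=32)
os_.grant("browser", start=32, length=32)

os_.write("editor", 10, 0xFF)       # fine: inside the editor's region
try:
    os_.write("editor", 40, 0xFF)   # a grab at the browser's memory
except ProtectionFault as e:
    print("crash:", e)
```

The browser's memory at address 40 is left untouched, which is the whole point: the "crash" is the protection working.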
In the other case, unexpected conditions can make a program crash if it wasn't designed with good exception handling. Flores' "oversimplified" example is this: Suppose you have a data field, like for a credit card number. A good programmer would make sure you type just numbers, or provide a way for the program to deal with you typing symbols or letters. But if the program expects one type of data and gets another, and it's not designed to handle something it doesn't expect, it can crash.
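Flores' credit-card example can be sketched in a few lines of Python (the function names are made up for illustration). The fragile version assumes the field holds only digits and blows up on anything else; the robust one expects the unexpected and handles it instead of dying.

```python
# The crash-prone way: assume the input is exactly what you expect.
def parse_card_fragile(field):
    return int(field)  # raises ValueError on "1234-5678" or "hello"

# The defensive way: handle the input you didn't anticipate.
def parse_card_robust(field):
    digits = "".join(ch for ch in field if ch.isdigit())
    if not digits:
        return None  # signal bad input instead of crashing
    return int(digits)

print(parse_card_robust("1234-5678-9012-3456"))  # 1234567890123456
print(parse_card_robust("hello"))                # None
```

In a real app, that unhandled `ValueError` is exactly the kind of unanticipated condition that makes a program flip out.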
A completely frozen application is one that has crashed, even though it stays on your screen, staring at you. It's just up to you to reach for the Force Quit and tell the computer to put it out of its misery. Sometimes, obviously, the computer kills it for you.
Crashes, as you probably experience almost daily, are usually limited to individual programs. Firefox probably crashes on you all the time. Or iTunes (oh God, iTunes). But with today's operating systems, if you hit an omega-level, take-down-your-whole-system crash, something's likely gone funky down at the kernel level.
The kernel is the gooey core of the operating system. If you think of an OS as a Tootsie Pop with layers of sugary shell, the kernel is the center: it sits at the lowest level, managing the basic things the OS needs to work, and it takes more than a few licks to get to.
More than likely, your computer completely crashes out way less than it used to, or at least way less than Windows 95 did. There are a few reasons for that. A major one, says Maximum PC Editor Maximus Will Smith, is that Apple and Microsoft have spent a lot of time moving stuff that used to run at a really low level, deep in the guts of the OS, up a few layers into user space. So an application error that would've crashed a whole system by borking something at the kernel level now just results in an annoying program-level hang-up. More simply put, OSes have gotten better at isolating and containing problems, so a bad app commits suicide rather than suicide-bombing your whole computer.
This is part of the reason drivers (the software that lets a piece of hardware, like a video card, talk to your OS and other programs) are a bigger source of full-on crashes than standard apps on modern operating systems. By their nature, drivers have pretty deep access, and the kernel sits smack in the middle of that, says Flores. So if something goes wrong with a driver, it can result in some big-time ka-blooey. Theoretically, signed (i.e., vetted) drivers help avoid some of the problems, but take graphics drivers, which were a huge source of Vista crashes at launch: Flores says that "some of the most complex programming in the world is done by graphics device driver software writers," and when Microsoft changed to a new driver model with Vista, it was a whole new set of rules to play by. (Obviously, stuff got screwed up.)
Another reason things crash less now is that Apple and Microsoft collect metric tons of data about what causes crashes, thanks to more advanced telemetry than ever: information the OS sends home, like system configurations, what a program was doing, the state of memory, and other in-depth details about a crash. With that information, they can do more to prevent crashes, obviously, so don't be (too) afraid to click "send" on that error message.
In Windows 7, for instance, there's a new Fault Tolerant Heap. A heap is a special, fairly low-level area of memory that programs use for storage, and in past versions of Windows it could get corrupted easily, taking apps down with it. Windows 7 can tell when a heap-related crash is about to happen and take steps to isolate the application from everything else.
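The article doesn't detail how the Fault Tolerant Heap works internally, but a classic corruption-detection trick in the same spirit is a canary: the allocator writes a known byte pattern just past each block and checks it later, so a buffer overrun gets caught instead of silently trashing a neighbor. A toy Python sketch (the `ToyHeap` API is invented purely for illustration):

```python
# Toy heap with canary bytes after each block. A real allocator works
# with raw memory; here a bytearray stands in for it.

CANARY = b"\xde\xad\xbe\xef"

class HeapCorruption(Exception):
    pass

class ToyHeap:
    def __init__(self):
        self.blocks = {}   # block id -> (buffer, usable size)
        self.next_id = 0

    def alloc(self, size):
        # Usable bytes, followed by the canary pattern.
        self.next_id += 1
        self.blocks[self.next_id] = (bytearray(size) + bytearray(CANARY), size)
        return self.next_id

    def write(self, block_id, offset, data):
        buf, size = self.blocks[block_id]
        # A buggy program may write past `size`; the toy heap lets it,
        # just like an unchecked memcpy would.
        buf[offset:offset + len(data)] = data

    def check(self, block_id):
        buf, size = self.blocks[block_id]
        if bytes(buf[size:size + len(CANARY)]) != CANARY:
            raise HeapCorruption(f"canary smashed in block {block_id}")

heap = ToyHeap()
b = heap.alloc(8)
heap.write(b, 0, b"12345678")   # fills the block exactly
heap.check(b)                   # canary intact, all is well
heap.write(b, 6, b"overrun!")   # 8 bytes starting at offset 6: too far
try:
    heap.check(b)
except HeapCorruption as e:
    print("caught:", e)
```

The point is the timing: the corruption is caught at a checkpoint, before the smashed memory gets a chance to take anything else down.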
Of course, there are other reasons stuff can crash: actual hardware problems, like a memory failure or motherboard component failure. Hard drive issues. Hell, Will Smith tells us that a new problem with high-performance supercomputing clusters is crashes caused by cosmic rays. A few alpha particles fly through a machine and boom, crash. They weren't a problem 30 years ago.
Granted, you don't have to worry about that too much. What you might worry about in the future, says Smith, with the explosion of processor cores and multi-threaded programs trying to take advantage of them, are the classic problems of parallel processing, like race conditions, where two processes are trying to do something with the same piece of data and the order of events gets screwed up, ending in a crash. Obviously, developers would very much prefer that the next five years of computing didn't resemble the Windows 95 days, and programming techniques are always growing more sophisticated, so there's probably not a huge danger there. But as long as humans, who make mistakes, write programs, there will be crashes, so they're not going away, either.
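A race condition like the one Smith describes is easy to sketch with Python threads. This is a contrived demo, not production code: the tiny sleep just widens the gap between a thread reading the counter and writing it back, so the unlucky interleaving actually happens, and a lock restores the right order of events.

```python
import threading
import time

def bump(counter, lock, times=50):
    # Each iteration is a read-modify-write: the classic race window.
    for _ in range(times):
        if lock is not None:
            with lock:
                value = counter["n"]
                time.sleep(0.0001)   # other thread must wait for the lock
                counter["n"] = value + 1
        else:
            value = counter["n"]
            time.sleep(0.0001)       # other thread sneaks in here...
            counter["n"] = value + 1  # ...and this clobbers its update

def run(lock):
    counter = {"n": 0}
    threads = [threading.Thread(target=bump, args=(counter, lock))
               for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter["n"]

print("without lock:", run(None))              # almost always less than 100
print("with lock:   ", run(threading.Lock()))  # always exactly 100
```

Two threads doing 50 increments each should land on 100; without the lock, updates get lost. In a real program the lost update might be a pointer or a length field, and that's when "order of events gets screwed up" turns into a crash.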