A 1990s iMac Processor Powers NASA’s Perseverance Rover

A high-res image showing Perseverance seconds before reaching the Martian surface.
Image: NASA/JPL-Caltech

As we watched NASA put a rover on Mars last month, it certainly seemed like the agency had to be using some sort of high-tech processor in its machine. Surely the rover is built on something much more powerful than the components in the devices we civilians use, right? But while NASA is technically using a specialized processor to power the Perseverance rover, it’s not far removed from the world of consumer electronics—consumer electronics from about 23 years ago, that is.

New Scientist reports that the Perseverance rover is powered by a PowerPC 750 processor, the same chip used in Apple’s original 1998 iMac G3—you remember, the iconic, colorful, see-through desktop. If the PowerPC name sounds familiar, it’s probably because those are the RISC CPUs Apple used in its computers before switching to Intel. (Although now the company is back on the RISC train with its homegrown M1 processor.)

The PowerPC 750 was a single-core, 233MHz processor, and compared to the multi-core, 5.0GHz-plus frequencies modern consumer chips can achieve, 233MHz is incredibly slow. But the 750 was an early adopter of dynamic branch prediction, a technique still used in modern processors today. Basically, the CPU makes an educated guess about which way an upcoming branch in the instruction stream will go, so it can start working ahead instead of waiting to find out. The more branches it sees, the better the chip gets at predicting what it needs to do next.
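
To make that concrete, here’s a minimal sketch in Python of a two-bit saturating-counter predictor, the textbook form of dynamic branch prediction. It’s a generic illustration rather than the 750’s actual predictor logic, and every name in it is invented for the example.

class TwoBitPredictor:
    def __init__(self, table_size=1024):
        # One counter per entry: 0 = strongly not-taken, 1 = weakly
        # not-taken, 2 = weakly taken, 3 = strongly taken.
        self.counters = [1] * table_size

    def predict(self, branch_addr):
        # Predict "taken" when the counter is in a taken state (2 or 3).
        return self.counters[branch_addr % len(self.counters)] >= 2

    def update(self, branch_addr, taken):
        # Nudge the counter toward the actual outcome, saturating at 0
        # and 3, so one odd outcome can't flip a strongly held prediction.
        i = branch_addr % len(self.counters)
        if taken:
            self.counters[i] = min(3, self.counters[i] + 1)
        else:
            self.counters[i] = max(0, self.counters[i] - 1)

# A loop branch taken nine times and then not taken: the predictor
# mispredicts once at the start and once at loop exit, 8 of 10 correct.
predictor = TwoBitPredictor()
correct = 0
for taken in [True] * 9 + [False]:
    if predictor.predict(0x4000) == taken:
        correct += 1
    predictor.update(0x4000, taken)
print(f"{correct} of 10 predicted correctly")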

However, there’s a major difference between the iMac’s CPU and the one inside the Perseverance rover. BAE Systems manufactures a radiation-hardened version of the PowerPC 750, dubbed the RAD750, which can withstand 200,000 to 1,000,000 rads and temperatures between -55 and 125 degrees Celsius (-67 and 257 degrees Fahrenheit). Mars doesn’t have an atmosphere like Earth’s to shield electronics from the sun’s radiation, so without that hardening, one strong blast of solar radiation could end the rover’s adventure before it begins. Each RAD750 costs more than $200,000, a price that reflects all of that extra protection.

Motorola PowerPC 750 processor with off-die L2 cache on the CPU module from a Power Mac G3.
Photo: Henrik Wannheden

“A charged particle that’s racing through the galaxy can pass through a device and wreak havoc,” James LaRosa at BAE Systems told New Scientist. “It can literally knock electrons loose; it can cause electronic noise and signal spikes within the circuit.”

But why use a processor old enough to remember when Eve 6 released its first album? It has nothing to do with cost—these old processors are the best ones for the job because they’re proven reliable. NASA’s Orion spacecraft, for instance, uses the same RAD750 processor.

“Compared to the [Intel] Core i5 in your laptop, it’s much slower…it’s probably not any faster than your smartphone,” Matt Lemke, NASA’s deputy manager for Orion’s avionics, told The Space Review back in 2014. “But it’s not about the speed as much as the ruggedness and the reliability. I need to make sure it will always work.”

Taking that into consideration, it’s reasonable that NASA would choose older technology over the new stuff. After all, when you’re spending $2.7 billion to land a robot on Mars, it’s important that your tech is reliable enough to stand the test of time—down to the tiniest soldered circuits. The RAD750 currently powers around 100 satellites orbiting Earth, including satellites that provide GPS, imaging, and weather data, as well as various military satellites. Not one of them has failed, according to LaRosa.

Staff Reporter, Reviews at Gizmodo. Formerly PC Gamer, Maximum PC.

DISCUSSION

Joel Hruska

If the PowerPC name sounds familiar, it’s probably because those are the RISC CPUs Apple used in its computers before switching to Intel. (Although now the company is back on the RISC train with its homegrown M1 processor.)

x86 CPUs have been internally RISC-like since the P6 core that debuted in the Pentium Pro. The “RISC versus CISC” debate is pretty meaningless today. There are two big reasons why:

1) All modern x86 CPUs execute internal RISC-like instructions. This conversion is handled in the decode units (a toy sketch of the idea follows this list). The big risk Intel took with the original P6 (the Pentium Pro, launched in 1995) was betting that dedicating 10% of the die and power budget to instruction decode would still yield a significant net performance improvement. It paid off. The only chip from Intel or AMD that executed native x86 instructions in the past 20 years was the original Atom.

2) Modern RISC CPUs do not conform to the original RISC design philosophy, either. Out-of-order execution did not exist when the first RISC designs were created, and a RISC CPU cannot sustain an execution rate of 1 IPC without pipelining (see the throughput sketch below). The original RISC design philosophy eschewed complexity as much as possible in favor of cranking up the clock speed. Modern RISC CPUs devote far more transistors to out-of-order execution, deeper pipelines, and extracting instruction-level parallelism. Early RISC CPUs clocked far higher than their x86 counterparts; DEC’s Alpha was hitting 200MHz when the Pentium was at 66MHz.
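
On point 1), here is a hypothetical Python sketch of what that decode step does: cracking one complex, memory-touching x86-style instruction into simple RISC-like micro-ops. The mnemonics, operand syntax, and micro-op names are all invented for the example; real decoders consume binary encodings and emit chip-specific micro-ops.

def decode(instruction):
    # Crack one CISC-style instruction into RISC-like micro-ops.
    op, dst, src = instruction
    if op == "add" and dst.startswith("["):   # e.g. add [rbx], rax
        addr = dst.strip("[]")
        return [
            ("load",  "tmp0", addr),    # read memory into a temp register
            ("add",   "tmp0", src),     # simple register-to-register add
            ("store", addr,  "tmp0"),   # write the result back to memory
        ]
    # Simple register-to-register instructions map to one micro-op.
    return [(op, dst, src)]

# One memory-to-register add cracks into three simple micro-ops...
print(decode(("add", "[rbx]", "rax")))
# ...while a register-to-register add passes through unchanged.
print(decode(("add", "rcx", "rax")))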
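
On point 2), a bit of toy throughput arithmetic shows why pipelining is what makes 1 IPC reachable at all. The five-stage depth here is the classic textbook RISC pipeline, not any specific chip’s.

# Toy throughput math: a classic 5-stage RISC pipeline vs. no pipeline.
# The stage count is the textbook figure, not any specific chip's.
STAGES = 5

def cycles(n_instructions, pipelined):
    if pipelined:
        # Once the pipeline fills, one instruction finishes per cycle.
        return STAGES + (n_instructions - 1)
    # Unpipelined, each instruction occupies the whole datapath alone.
    return STAGES * n_instructions

n = 1000
print(n / cycles(n, pipelined=False))  # 0.2 IPC
print(n / cycles(n, pipelined=True))   # ~1.0 IPC once the pipeline is full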

In contrast, Apple’s M1 is clocked at 3.2GHz, compared to significantly higher frequencies for the x86 CPUs. The “RISC” CPU in this comparison favors high IPC and low clocks, while the “CISC” chips are clocked higher and execute fewer instructions per cycle. This is exactly the opposite of the historical “RISC versus CISC” positioning.

It is true that there are some differences between x86 and ARM CPUs that still *relate* to this CISC versus RISC question — x86 instructions have variable lengths, for example, and ARM instructions don’t — but summarizing the debate as “CISC versus RISC” obscures the fact that modern x86 chips and ARM chips actually look pretty darn similar from a block diagram perspective.
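
As a toy illustration of that variable-length point, the Python sketch below shows why fixed-width encodings are friendlier to wide, parallel decode: with 4-byte ARM-style instructions, every instruction boundary is known up front, while with variable lengths, each boundary depends on decoding the instruction before it. The byte values and the length rule are made up for the example.

def fixed_width_boundaries(code, width=4):
    # ARM-style: every instruction is `width` bytes, so all boundaries
    # are known immediately and can be decoded in parallel.
    return list(range(0, len(code), width))

def variable_length_boundaries(code, length_of):
    # x86-style: each instruction's start depends on the previous one's
    # length, so boundaries must be discovered sequentially.
    boundaries, i = [], 0
    while i < len(code):
        boundaries.append(i)
        i += length_of(code[i])
    return boundaries

code = bytes(range(16))
print(fixed_width_boundaries(code))  # [0, 4, 8, 12]
# Pretend opcode byte b encodes an instruction of (b % 3) + 1 bytes.
print(variable_length_boundaries(code, lambda b: (b % 3) + 1))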

TLDR: “RISC versus CISC” is an incorrect oversimplification of the current state of x86 versus ARM CPUs.