Silicon Marriage Made in Heaven: Graphics, Computation on One Chip

Both AMD and Intel are working on plans to integrate graphics-processing capabilities onto the same chip as the central processing unit (CPU), which could help deliver ultra-powerful chips as early as 2008.

If plans from Advanced Micro Devices and Intel bear fruit, the next big bump in computer performance may come from the descendants of today's video cards.

Many companies have begun viewing the graphics processing unit, or GPU -- the chip found on video cards -- as a significant source of computing power in its own right. For instance, GPUs are now being used to tackle computing tasks like bioinformatics, cryptography and audio-signal processing, not just to render visual images onscreen.
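For a rough sense of what that kind of general-purpose GPU work looks like in code, here is a minimal, purely illustrative sketch written against NVIDIA's CUDA toolkit (an assumption on our part, not a detail from either company's plans). Each GPU thread handles a single element of an array, the data-parallel pattern that lets graphics hardware chew through jobs like audio-signal processing:

```cuda
// Illustrative only: a minimal CUDA program that applies a gain to a
// buffer of audio samples, one GPU thread per sample.
// Assumes NVIDIA's CUDA toolkit; compile with: nvcc gain.cu -o gain
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void apply_gain(float *samples, float gain, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)
        samples[i] *= gain;                         // one sample per thread
}

int main() {
    const int n = 1 << 20;                          // about a million samples
    const size_t bytes = n * sizeof(float);

    float *host = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i)
        host[i] = 0.25f;

    float *dev;
    cudaMalloc((void **)&dev, bytes);
    cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover every sample.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    apply_gain<<<blocks, threads>>>(dev, 2.0f, n);

    cudaMemcpy(host, dev, bytes, cudaMemcpyDeviceToHost);
    printf("first sample after gain: %f\n", host[0]);

    cudaFree(dev);
    free(host);
    return 0;
}
```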

AMD and Intel want to take this trend a step further and incorporate a GPU on the same chip as the central processing unit (CPU), possibly by late 2008 or early 2009.

Ever since its acquisition of graphics card maker ATI in July of 2006, AMD has made clear it would fuse GPU and CPU capabilities on one chip. Now Intel has announced plans to do the same thing, with a next-generation architecture the company is calling Nehalem.

It didn't take long for Advanced Micro Devices to point out that its competitor's new processor designs, announced at the Intel Developer Forum, bear an uncanny resemblance to its own, a development that, an AMD statement said, "should be seen as nothing more than proof positive that (we) had it right."

While such agitprop is nothing new for either company, it's a bit odd in this case because the idea of combining the CPU and GPU isn't that new. As many industry analysts have pointed out, the idea can be traced back at least a decade to Sun's ahead-of-its-time MAJC (Microprocessor Architecture for Java Computing), a multi-core, multithreaded processor design from the mid-1990s that Sun eventually abandoned.

GPUs have become much more programmable over the years, and in many non-graphics applications they can deliver higher sustained performance than their CPU brethren.

It makes sense to incorporate that performance into the heart of the computer. In general, integrating a graphics processor onto the CPU yields better performance per watt, explains Mercury Research's Dean McCarron. It could also deliver higher levels of mobile-graphics performance, because the graphics unit gets more direct access to the computer's memory, straight from the processor socket.

And AMD says an added GPU on the die -- a small block of silicon on which a circuit is etched -- could tackle a batch of parallel computing tasks while the CPU takes on something else, like number crunching.
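In code, that division of labor might look like the following hedged sketch, again assuming CUDA: the CPU hands a batch of parallel work to the GPU, goes off to crunch its own numbers, and only waits on the GPU when it actually needs the results. The example is illustrative, not a description of either company's design:

```cuda
// Illustrative only: CPU and GPU working in parallel. Kernel launches
// in CUDA are asynchronous, so the host thread is free to do other
// work while the GPU grinds through its batch.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= 1.001f;
}

int main() {
    const int n = 1 << 20;
    float *dev;
    cudaMalloc((void **)&dev, n * sizeof(float));
    cudaMemset(dev, 0, n * sizeof(float));

    // Hand the parallel batch to the GPU and return immediately.
    scale<<<(n + 255) / 256, 256>>>(dev, n);

    // Meanwhile, the CPU takes on something else: plain number crunching.
    double acc = 0.0;
    for (long i = 0; i < 10000000; ++i)
        acc += i * 0.5;

    // Pay the cost of waiting on the GPU only when its results are needed.
    cudaDeviceSynchronize();
    printf("cpu-side result: %f\n", acc);

    cudaFree(dev);
    return 0;
}
```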

In spite of its rival's grumblings, Intel maintains there are fundamental differences in the way the two companies approach the silicon marriage of CPU and GPU. Responding to AMD's copycat claims, CTO Justin Rattner says Nehalem is simply the natural evolution of another of Intel's mainstays: integrated graphics. Indeed, the new Nehalem architecture will see Intel's integrated graphics finally moving from the chipset onto the actual processor die, according to Rattner.

"One obvious reason you do that is because you want the graphics unit close to the memory connection," Rattner said. "If you leave it out of the chipset, now the graphics unit is across the link and through that chip and out to the memory and back. You're taking the very long way around, in other words."

Rattner claims AMD's CPU/GPU approach is significantly more specialized, much like IBM's Cell architecture, and entails surrounding a CPU with a palette of specialized graphics cores to tackle graphics-intensive problems.

According to Rattner, Intel's bet is that developers ultimately want to work within a familiar architecture and then tweak it for more specialized applications, such as graphics or physics.

"If you have to (integrate the CPU and GPU in a single architecture), which base would you rather start with?" he asked. "Would you rather start with this odd thing that grew up around that traditional graphics pipeline (the GPU), or would you rather start with the most widely-known, widely-used architecture around ... and then make enhancements?"

AMD Chief Technology Officer Phil Hester says his company's special-purpose approach will eventually become more general: modules will be mixed and matched to create multi-core processors better suited to general-purpose applications.

Not everyone is convinced.

"The theoretical hardware integration benefits -- lower communication latency, a shared internal bus, shared caches, etc. -- aren't going to be realized in practice," Epic Games founder Tim Sweeney said in an interview with Wired News. "The CPU and GPU remain two architecturally separate units, and programmers are forced to treat them as if they're a million clock cycles latent, whether they are or not," says Sweeney.

In other words, the increased power of these next-generation CPUs doesn't necessarily translate into a free ride for programmers.

But then, that's true of almost every advance in chip technology, including the move to multi-core. So even if your next chip is ultra-powerful at rendering graphics, you still might have to wait awhile for the software to catch up.