
What’s Next For Intel? No One Knows

This important point seems obvious in retrospect:

In addition to the CEO, the Board of Directors of enterprises whose core products are engineered must have enough of an engineering background to make a clear-eyed assessment of the best path forward.

Usually, the importance of having technologists run technology companies finds expression in the form of CEO selection. Microsoft suffered a lost decade during Steve Ballmer’s tenure as CEO. The story may be apocryphal, but Microsoft lore held that Ballmer’s signature accomplishment at Procter & Gamble had been to design packaging that literally crowded competitors out of store shelves. In any case, he was a marketer, not a technologist, and Microsoft found its footing again after appointing Satya Nadella to replace him as CEO.  

Boeing lost its way under CEOs whose backgrounds favored finance over engineering, with Jim “Prince Jim” McNerney referring to longtime engineers and skilled machinists as “phenomenally talented assholes” and encouraging their ouster from the company. The disastrous performances of the 787 and 737 MAX are widely attributed to Boeing’s embrace of financial as opposed to aeronautical engineering.

For Intel’s part, the beginning of the end appears to date back to the CEOs appointed after Paul Otellini (May 2005-May 2013): Brian Krzanich (May 2013-June 2018) and especially ex-CFO Bob Swan (June 2018-January 2021) recorded tenures marred by acquisitions of dubious merit, blunders in product development, and a loss of Intel’s historic lead in semiconductor fabrication.

The commentary around Gelsinger’s ouster quickly coalesced into two camps, as summarized by Dr. Ian Cutress:

John Carmack made his presence known in the (1) camp with this tweet:

When Intel first announced that Pat Gelsinger would return as CEO, I was surprised to hear that he even wanted the role. Intel had been foundering for years, and Gelsinger was in a position to know just how deep a hole they’d dug for themselves. He tried, and apparently failed, to set expectations with the public and with the board as to what a long and difficult road lay ahead. Breaking apart CPU product development and chip fabrication, as AMD did with Global Foundries almost 15 years ago, was the right thing to do. Lobbying for the enactment of CHIPS, and soliciting federal subsidies to re-shore semiconductor manufacturing, also was the right thing to do. It was going to take a long time, and some serious politicking: on the one hand, layoffs seemed inevitable and necessary; on the other, Members of Congress being asked to support a chipmaker with billions of taxpayer dollars don’t want to hear about the need for layoffs.

It turned out to be too difficult an optimization problem to solve in the time allotted. To many, it doesn’t seem reasonable for Intel’s Board to have expected a turnaround in the time Gelsinger had at the helm, and it is disqualifying for them to oust him without a succession plan.

I am no fan of Intel. In its heyday, Intel indulged in anticompetitive practices that put to shame anything Microsoft attempted in the 1990s, and never got the regulatory scrutiny it deserved. Before TransMeta, there was Intergraph. AMD and NVIDIA eventually prevailed in antitrust settlements, and it is genuinely shocking to recall that as recently as 2016, Intel was paying $300M per quarter to NVIDIA as part of a $1.5B private antitrust settlement. How quickly the mighty have fallen!

At the same time, Intel’s steadfast commitment to executing on its core technical roadmap – improving the x86 architecture without breaking backward compatibility – enabled them to bring volume business models that positively disrupted not only the PC industry (1980s), but also the workstation industry (1990s) and the HPC and data center markets (2000s). They democratized computing in a way that few other companies can claim. For that, the company deserves our gratitude and respect, and we all should be pulling for its turnaround and a brighter future.

Unfortunately for Intel and the board, it is not at all clear that ousting Pat Gelsinger will lead to those favorable outcomes.

Reflections On The Downfall of Intel

“If you put $100 into Intel 25 years ago, your investment would still be worth $100.”

https://x.com/TrungTPhan/status/1819174107646513173

Intel recently suffered the largest decline in its stock price in 50 years, an ignominious prelude to a period that has been, and, according to Intel CEO Pat Gelsinger, will continue to be, very difficult for the company. Intel’s market capitalization is now less than it was 25 years ago. Investors have sued, alleging that Intel deliberately concealed the problems that led to the collapse.

What Happened? How could Intel, a company whose meteoric rise decades ago helped coin the term Silicon Valley, have fallen so far from grace? The answer to this question is complex, and I can only scratch the surface in a blog post, but I think the root causes can be traced to 1) Intel does not understand software, 2) Intel owns its fabs, with an honorable mention to 3) Intel does not integrate acquisitions well. An early draft of this post added a numbered point 4) Intel missed the boat on AI, but that follows from points 1) and 3).

1. Intel Does Not Understand Software

Intel is the epitome of a hardware company that does not understand software. For decades, they spent a great deal of R&D money trying to build disruptive hardware that advanced the state of the art, only to have those products fail to achieve market success – even when they collaborated with Microsoft, as they did with Itanium. In the 1990s, Intel won a pitched battle for general-purpose CPU clock cycles by designing the Pentium Pro, a RISC-like implementation to execute their CISC architecture.

The following table summarizes Intel’s forays into computer architecture, organized by decade.

Era    Product     Comments
1980s  i432        Too CISC
1980s  i860        Too RISC; too tight a tie between architecture and implementation
1990s  Itanium     Slow x86 emulation; success was predicated on advancements in the state of the art of compiler technology; not a good fit with processes in Microsoft’s Systems division
2000s  Larrabee    Intel’s offering to compete with CUDA, Cell, etc.
2010s  Xeon Phi    Larrabee without texture units
2010s  dGPU/Gaudi  Belated attempt to re-enter the discrete GPU business. Raja Koduri was recruited from AMD and recently departed.

Within the x86 realm, Intel’s blindness to software did find expression, but their oversights tended to be more correctable. By the 2010s, their x86 business seemed to be humming, with a “tick-tock” tempo alternating between updates to the microarchitecture to improve instructions per clock (often adding new instructions) and a die shrink with bug fixes and higher performance. More recently, Intel abandoned “tick-tock,” apparently without bothering to replace it with a similarly cogent strategy for chip design and evolution. Finally, Intel badly botched the introduction of AVX512 into the mainstream x86 architecture.

1a. Intel Botched The AVX512 Rollout

Historically, updates to x86 SIMD instruction sets have had small impacts on the die area (and therefore manufacturing costs), because ever since the 80486 (c. 1989), CPUs have been mostly SRAM; and it has been easy to support backward compatibility by ensuring that every newer chip supports all of its predecessors’ features, often in more efficient and higher-performance ways.

It now seems clear that AVX512 was more disruptive than any other SIMD instruction set extension.

I get it: AVX512 is a beast. It not only doubled the SIMD width again over AVX (making the SIMD registers and operations 64 bytes wide, the same size as a cache line), it also added predication in the form of new mask registers with a rich set of instructions to manipulate them. The instruction set extensions were so rich that Intel chose to separate the initial rollout into at least 6 separate feature bits, depending on how you count. But there are ways for hardware designers to pursue area/performance tradeoffs, as AMD has shown.
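To make the predication model concrete, here is a sketch using AVX512 intrinsics; the function and the predicate are hypothetical, and AVX512F hardware (with a compiler flag such as -mavx512f) is assumed:

#include <immintrin.h>

/* Hypothetical example: add b[] into a[] only in lanes where a[i] > 0,
   using an AVX512 mask register for predication. */
void masked_add16( float *dst, const float *a, const float *b )
{
    __m512 va = _mm512_loadu_ps( a );   /* 64-byte (16-lane) vectors */
    __m512 vb = _mm512_loadu_ps( b );

    /* One bit per lane: set where a[i] > 0.0f */
    __mmask16 k = _mm512_cmp_ps_mask( va, _mm512_setzero_ps(), _CMP_GT_OQ );

    /* Predicated add: masked-off lanes pass 'va' through unchanged */
    _mm512_storeu_ps( dst, _mm512_mask_add_ps( va, k, va, vb ) );
}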

Given Intel’s historic performance in adding new x86 features, it’s not a stretch to say that the recent bobbles with AVX512 have contributed to the company’s woes. AVX512 is the first ISA extension I’m aware of that was rescinded in the field through microcode updates. (I am not an operating system architect, but I can only imagine the difficulties that heterogeneous ISA capabilities visit on an OS scheduler. Perhaps shipping devices with some cores that were AVX512-capable and some that were not is another manifestation of Intel’s poor software discernment.)

I suspect that with AMD having added AVX512 to all cores, with varying performance characteristics, Intel eventually will be forced to follow in AMD’s footsteps once again, as they did on the 64-bit x86 architecture.

2. Intel Owns Its Fabs

Intel was founded as a foundry or “fab”: a company whose primary business was making semiconductor chips. In Intel’s case, their biggest business was memory chips (DRAMs, or Dynamic Random Access Memories), which happened to be the type of chip that fabs used to test their latest processes. Moore’s Law, the famous observation about the exponential increases in transistor density, is named for Intel co-founder Gordon Moore.

Until the mid-1980s, manufacturing x86 chips was a side business for Intel. When they made a strategic decision to focus on the x86 business, and for many years afterward, it was considered a competitive advantage for Intel to own its foundry. They were at the forefront of innovation through the 1990s and 2000s, relentlessly improving transistor densities in keeping with Moore’s Law. By vertically integrating chip design and production, they could closely co-design chips and the processes used to manufacture them, and trade secrets became a reasonable alternative to patents for protection of intellectual property.

Through the 1990s, the x86 business grew so quickly that it could fund the incredibly expensive construction of new fabs that implemented the latest manufacturing processes. For much of that time, Intel had so much manufacturing capacity it was a proximate cause of their monopoly on the x86 chip business: there literally was not enough capacity elsewhere on the planet to service demand.

The challenge with fabs, though, is that because they are expensive to build, the way to maximize the return on that investment is to keep them running long after whatever process they were designed to target has become obsolete. (Retooling fabs to manufacture chips with denser geometries seems to be prohibitively difficult.) With the market growing as fast as it did in the 1990s, the problem of utilizing depreciated factory equipment was less pressing, because the newer factories had more capacity. But as Intel was pursuing this strategy, the industry was continuing to develop an alternative business model where so-called “fabless” semiconductor companies could design chips, then contract with fabs to manufacture those chips. TSMC, the Taiwan Semiconductor Manufacturing Company, is now the most famous of these contract firms; other examples include Samsung and (once upon a time) IBM. The benefit of a company making a pure fab play was obvious: as manufacturing processes advanced, the fabs could simply reduce pricing for outdated processes, and customers who did not need the latest, most expensive process would generate demand.

Most graphics chip companies, including NVIDIA, were fabless. The business and technical relationships between fabs and their biggest customers are sensitive and incredibly important. The fabs must rigorously respect the IP rights of their customers, since often they are making chips for companies that are directly competing in the marketplace.

Until the late 2000s, AMD also owned its fabs; but AMD was never able to use its fabs as a competitive advantage. Since fabs like TSMC had processes in place to cooperate closely with their customers (and their customers, of course, had experts who could coordinate with fabs on production issues), having the chip designers and the factories making the chips under the same roof offered only a muted benefit that did not offset the downside risks of having unused, outdated factory capacity.

I actually know an executive who left AMD for NVIDIA in the early 2000s – exactly because AMD still owned its fabs. AMD later was able to divest itself of its fabs in the Global Foundries deal, which began in 2008 and concluded in 2012.

At some point in the last decade (I would say more than 5 years ago, but perhaps less than 10), Intel lost its technology lead over TSMC. They no longer had the best fabrication process in the world, having taken too long to adopt EUV (Extreme Ultraviolet) lithography, famously embodied in ASML’s tin-vaporizing machines. If Intel’s CPU business were fabless, they would be able to access the best fabrication technology to service the business.

Back To The Future

As CEO, Gelsinger made Intel’s future direction clear: he is reversing the course set almost 40 years ago, when Intel pivoted from a memory chip company (a fab) to an x86 company. He traveled all over the country advocating for passage of the CHIPS Act to re-shore semiconductor manufacturing, and has vowed to retake the lead in semiconductor fabrication. Intel is making steady progress, but as their latest quarterly results show, the transition will be a painful one, not least because Intel historically has been much better at making chips for itself than making chips for other companies. Gelsinger is widely expected to lead a divestiture that would echo AMD’s Global Foundries deal, formalizing the division of labor between Intel’s CPU and fab businesses.

3. Intel Does Not Integrate Acquisitions Well

Contrast Intel’s acquisition of Altera with AMD’s acquisition of Xilinx: nine years ago, Intel acquired Altera, only to decide recently to spin it off again as an independent entity. Xilinx, meanwhile, is so fully integrated into AMD that many of CEO Lisa Su’s most trusted lieutenants came to AMD via Xilinx.

Or contrast Intel’s acquisition of Nervana with AMD’s acquisition of nod.ai. Intel spent $400M acquiring Nervana, then utilized none of its personnel or intellectual property to advance its business or technical objectives. AMD acquired nod.ai, whose former CEO now reports directly to Vamsi Boppana, the ex-Xilinx executive now in charge of AMD’s AI roadmap.

AMD uses acquisitions to increase revenue, broaden the diversity of revenue sources, and deepen its bench of executive management talent. Intel used acquisitions to stage bonfires fueled by their investors’ money.

The Utility of Futility

Fatalism Can Be Useful Sometimes

Introduction

A wise man once told me: “Scientists build in order to learn. Engineers learn in order to build.” One of the most exciting experiences for an engineer is realizing that new discoveries or technologies enable new possibilities, but part of an engineer’s job also is to save work by eliminating possibilities. For example, it sometimes comes in handy to know that nothing can travel faster than the speed of light in a vacuum.

I call this principle “The Utility of Futility,” and it is an ethos that more software engineers would do well to embrace. As a species, software engineers are incorrigible optimists, and we work in a profession where 90% solutions can be deceptively easy to develop. These characteristics may lead us to overlook opportunities to exploit the utility of futility. But before exploring some costly mistakes that could have been avoided by embracing this ethos, let’s review some related areas where system design was informed by a recognition of what’s not possible.

Speed Bumps

Our first story begins with a (metaphorical) speed bump.

Work on CUDA began in earnest in early 2005, and when I joined the team, the driver consisted of a few hundred lines of code. Most of that code implemented a handle allocator that would ‘allocate’ fixed-length memory buffers (e.g., context structures) that then could be referenced through integer handles. Internally, the driver then would translate these handles into pointers through pointer arithmetic on the buffer from which the index had been allocated.
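A minimal sketch of that handle allocator scheme (the names and sizes are hypothetical):

#define MAX_CONTEXTS 16

typedef struct CUctx_st {
    int inUse;
    /* ... driver state for the context ... */
} CUctx_st;

static CUctx_st g_contexts[MAX_CONTEXTS];  /* fixed-length buffer of contexts */

typedef unsigned int CUcontext;            /* handle = index into g_contexts */

static CUctx_st *ctxFromHandle( CUcontext hCtx )
{
    if ( hCtx >= MAX_CONTEXTS || ! g_contexts[hCtx].inUse )
        return NULL;                       /* handle validation */
    return &g_contexts[hCtx];              /* translate handle to pointer */
}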

Early on, we replaced these integer handles with so-called “opaque pointers,” replacing e.g.

typedef unsigned int CUcontext;

with:

typedef CUctx *CUcontext;

Note that the typedef does not declare the context structure – it declares a pointer to same. C/C++ clients of CUDA know that there is a context structure, but they do not know what it contains.

When this minor refactoring was done, it caused a stir and prompted an internal discussion on our development team. To some, the opaque handles seemed more secure, because the driver had to run some code to translate them into pointers to the driver’s internal structures. And developers who reverse-engineered the structure’s layout and then took advantage by, say, hard-coding offsets from the structure pointer into their own applications, would be risking their own applications’ continued functionality. Such code definitely breaks compatibility with future versions of CUDA, among other things. Why not put a little speed bump in the path of such developers?

Applying the utility of futility, we decided that the benefits of the speed bump were outweighed by the additional complexity needed to implement a fixed-length allocator and handle validation. Any developers determined to reverse-engineer the layout of the CUDA context structure would be able to do so.

One can consider this particular “utility of futility” story to occupy a gray area. Most do. Intel is famed for backward compatibility, but every time they ship a new instruction set, they break compatibility in a subtle way: the new instructions (previously-invalid opcodes) execute and have architectural side effects, instead of signaling invalid-opcode exceptions! But any developer who ships software that relies on that behavior would elicit little sympathy: about the same amount as any developer who reverse-engineered the layout of an internal structure in CUDA.

The Leaking Nanny

Software that runs in data centers must be robust. If a server running hundreds of virtual machines loses power, it must resume running them after regaining power, having lost as little work as possible. Such robustness involves heroics like journaling disk traffic to solid state drives that have enough capacitance built into their power supplies to post all their pending writes in the event of a power failure. The hypervisor software has to be able to restart all those virtual machines, in as close a state as possible to whatever they were doing before the server lost power.

I once worked at a utility computing vendor that ran a management process whose memory usage would steadily increase, and no one could figure out why. There was a memory leak somewhere in the code, and the code was big and complicated and written in a garbage-collecting language that made it difficult to diagnose such issues. Eventually, the excess memory usage caused the server to fail.

The “utility of futility” solution to this problem: instead of fixing the memory leak, the vendor simply created a watchdog that monitored this process’s memory usage and, when it became too much, killed the process. Remember, this process had been built to be robust in the face of power failure, so getting summarily executed by a peer process is also a recoverable event.
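On Linux, such a watchdog takes only a few dozen lines of C. This is a sketch: the process ID comes from the command line, and the threshold is arbitrary.

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

/* Return the resident set size of a process, in pages (Linux /proc). */
static long rssPages( pid_t pid )
{
    char path[64];
    long size = 0, resident = -1;
    snprintf( path, sizeof(path), "/proc/%d/statm", (int) pid );
    FILE *f = fopen( path, "r" );
    if ( f ) {
        if ( fscanf( f, "%ld %ld", &size, &resident ) != 2 )
            resident = -1;
        fclose( f );
    }
    return resident;
}

int main( int argc, char *argv[] )
{
    if ( argc < 2 )
        return 1;
    pid_t pid = (pid_t) atoi( argv[1] );
    long limitPages = 1L << 20;  /* ~4 GB with 4K pages - arbitrary threshold */
    for ( ; ; ) {
        if ( rssPages( pid ) > limitPages ) {
            kill( pid, SIGKILL );  /* summary execution... */
            break;                 /* ...recovery is the victim's own job */
        }
        sleep( 60 );
    }
    return 0;
}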

If the stopgap measure is using a feature that the system must deliver in any case, it needn’t stay a stopgap.

Exit On Malloc Failure

QEMU, the hardware emulator that enables virtualization for HVM guests on Xen, features an interesting engineering compromise: its internal memory allocator, the equivalent of malloc(), exits on failure. As a result, code reviews that check the return value from this function are rejected – the function either succeeds, or does not return at all because the whole process exited. The reason: gracefully handling out-of-memory situations introduces too many possibilities for (presumably rare) difficult-to-diagnose errors, and therefore security risk. Since QEMU instances and their clients have to be robust in the same ways as the preceding system (e.g. recovering machines in their latest-known states before an adverse event such as power failure), the “utility of futility” favored having malloc() exit rather than doing a prohibitively expensive and error-prone security analysis.
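The pattern is easy to sketch. (QEMU itself gets this behavior from glib’s g_malloc(), which aborts on failure; this standalone version is illustrative.)

#include <stdio.h>
#include <stdlib.h>

/* Exit-on-failure allocator: because this function cannot return NULL,
   callers need no error paths for out-of-memory conditions. */
void *xmalloc( size_t size )
{
    void *p = malloc( size );
    if ( ! p ) {
        fprintf( stderr, "out of memory allocating %zu bytes\n", size );
        exit( EXIT_FAILURE );
    }
    return p;
}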

Memory Probes

As a young software engineer who cut his teeth on microcomputers with no memory protection whatsoever (IBM PCs running MS-DOS, Macs running the original MacOS), I was excited to work on Windows, a platform with some semblance of memory protection (don’t laugh – before Apple bought NeXT for its UNIX-like operating system, the MacOS was not any more secure than MS-DOS). When you ran your program under a debugger, invalid memory references were flagged immediately.

But the Windows API had something more: the functions IsBadReadPtr() and IsBadWritePtr(), which could check the validity of a memory location. As a budding API designer, I saw these functions as the perfect opportunity to elevate my parameter validation game: if my caller passed an invalid memory range to one of my functions, I could return an error rather than just having the program crash in my function.

The problem with this API is that even 16-bit Windows was a multitasking operating system. Memory could become invalid as a consequence of code running elsewhere in the system. Even if you wanted to build a function that “validated” a memory range before using it, one could contrive a scenario – say, a context switch at an inopportune time – where the memory was subsequently invalidated by other code running in the system. If Microsoft had recognized the utility of this futility, they would not have built this API, since all it did was give a false sense of security.
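A sketch makes the race obvious (the function and its caller are hypothetical):

#include <windows.h>
#include <string.h>

/* Why probe-then-use is futile: */
BOOL CopyFromCaller( void *dst, const void *src, size_t bytes )
{
    if ( IsBadReadPtr( src, bytes ) )
        return FALSE;          /* src looked valid at probe time... */
    /* ...but a context switch can occur right here, and code running
       elsewhere in the system can invalidate src between the probe
       and the use... */
    memcpy( dst, src, bytes ); /* ...so this can still crash. */
    return TRUE;
}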

Note: In Windows NT, the structured exception handling (SEH) feature did, in fact, enable robust memory validation. Using SEH, the memory’s validity is evaluated at the time it is referenced, and not before, as when using a memory probe API. But in the intervening years, a consensus has developed among API designers that the costs of such memory validation outweigh the benefits. It is left as an exercise for the student to determine whether APIs that crash when you pass in NULL parameters are a manifestation of the utility of futility!

Microsoft WDDM: A Missed Opportunity

This utility of futility story is going to require more background. A lot more background.

One of the signature achievements for graphics in Windows Vista (c. 2007) was to move most of the graphics software stack into user mode. Or I should say, back into user mode. For Windows NT 4.0 (c. 1996), Microsoft had moved GDI (the Graphics Device Interface) into kernel mode, reportedly after Michael Abrash buttonholed Bill Gates at a party (never a robust process for decision-making), and presumably because Abrash wanted to continue writing optimized software rasterizers and believed overall performance would be better if they were running in kernel mode. Wrong on all counts, and fodder for another blog to be written someday.

By the time Windows XP shipped (c. 2001), it was abundantly clear that moving GDI into kernel mode had been a catastrophic mistake, because the entirety of display drivers got moved into kernel mode with it, and by then, graphics drivers included things like pixel shader compilers that were waaay too unstable and used waaaay too much stack to run in kernel mode. (In Windows NT, kernel stacks are limited to 12K on x86, or 24K on x86-64 – big enough for reasonable kernel applications, but not for things like shader compilers that should not run in kernel mode at all.) In fact, when I ported NVIDIA’s graphics driver to x86-64, one of the things I had to do to get the compiler running was to spawn another thread and delegate work onto it, just to buy another kernel stack. Fortunately, the shader compiler didn’t seem to need more than 2 kernel stacks, or I would’ve been tempted to build a kernel stack usage nanny that spawned threads on an as-needed basis just to emulate a larger stack!

By the time Windows XP shipped, there was widespread consensus that most of the graphics driver had to be moved back to user mode. But at Microsoft, the kernel team still harbored a deep distrust of graphics hardware vendors, fueled by a mid-1990s era of incredibly buggy hardware, operated by poorly written drivers that had to try to work around the buggy hardware. Back then, dozens of hardware vendors had been competing for OEMs’ business, and schedule slips could be fatal; as a result, many hardware bugs were left in, as long as they could be papered over by driver work (provided those driver workarounds did not have too much performance impact). The bulk of this activity was occurring on Microsoft’s Windows 95 platform, which was built on a completely separate code base from Windows NT. Cries from the NT kernel team, who wanted robust hardware and drivers, went unheard by hardware developers who were more concerned about their companies’ continued existence. The number of OEMs was daunting, as was the number of graphics IHVs: graphics companies such as S3, ATI, Cirrus Logic, Tseng Labs, Matrox, Chips and Technologies, Oak Technology, Number Nine, and Trident were selling to OEMs such as Acer, AST, Compaq, Dell, Gateway 2000, HP, IBM, NCR, NEC, and Packard Bell. Both of these businesses were competitive, with new entrants funded either by large companies seeking to diversify, or startups seeking to parlay niche expertise into market share. Consumer electronics titans Samsung and Sharp entered the PC business, for example, while startups like 3Dfx, 3Dlabs, Rendition, and NVIDIA entered the graphics chip business.

Suffice it to say that in that competitive environment, graphics chip companies were in no mood to slip schedule just to make their hardware more robust for a workstation platform whose unit sales were a fraction of the consumer platform’s – even if the workstation platform represented the future.

By the early 2000s, the competitive landscape had shifted, for graphics IHVs at least. Companies that couldn’t deliver competitive performance or features were acquired, went out of business, or were relegated to the margins. Consolidation eventually reduced the major players to a handful: Intel, NVIDIA and ATI accounted for most unit sales. These companies all had the wherewithal to build robust hardware and drivers, but between ongoing fierce competition and vastly more complicated hardware, the vendors did little to earn back the trust of the Windows NT kernel team after losing it in the 1990s.

To understand the landscape, it’s important also to understand the organizational tension between the NT kernel team and the multimedia team that owned the Windows graphics stack. Much of the NT kernel team, led by the brilliant and cantankerous operating system architect Dave Cutler, had been recruited to Microsoft from Digital Equipment Corporation in 1987; in contrast, the multimedia team that owned the 3D graphics drivers had developed in the Windows 95 organization and been reorganized into the NT organization in 1997. So, as the multimedia team redesigned the graphics stack to move most code back into user mode, they were doing so under the watchful eye of skeptical kernel architects who did not particularly trust them or the vendors whose capabilities were being exposed by the multimedia team.[1]

Even the hardware interfaces had changed so much that they would’ve been unrecognizable to graphics chip architects of the mid-1990s. Instead of submitting work to the hardware by writing to memory-mapped registers (MMIO), the drivers allocated memory that could be read directly by the graphics chips (via direct memory access or DMA), filled those buffers with hardware commands, then dispatched that work to the graphics chips[2]. Given that the NT architecture required that hardware be accessed only from kernel mode, management of these “command buffers” presented a challenge to the multimedia team. For performance and platform security, the bulk of the code to construct these command buffers had to run in user mode; but in keeping with the NT architecture, the command buffers could only be dispatched from kernel mode.

To avoid extra copying, Microsoft designed the system so that hardware-specific commands would be written directly into these buffers by the user mode driver, since one vendor’s idea of a “draw triangle” command may differ from that of another. These commands would be queued up until the command buffer was full, or had to be submitted for some other reason; the system then would do a “kernel thunk” (transition from user to kernel mode), where the kernel mode driver would validate the buffer before submitting it to the hardware.

For those familiar with the NT architecture, the flaw in this design should be obvious, and is somewhat related to the preceding memory probe “utility of futility” story: since Windows is a multitasking operating system, the buffer can be corrupted during validation by the kernel mode driver. No amount of validation by the kernel mode driver can prevent corruption by untrusted user mode code between when the kernel mode driver is done with the validation, and when the hardware reads and executes the commands in the buffer.

It is, frankly, incredible to me that this platform vulnerability was not identified before the WDDM design was closed. The NT kernel team may not like having to trust graphics hardware, but as long as buffers can be corrupted by user mode code before the hardware reads them, the only way to build a robust platform is to have the hardware validate the commands.

Another way to protect from this race condition would be to unmap the buffer so user mode code wouldn’t be able to change it, but editing the page tables and propagating news of the newly-edited page tables (“TLB invalidations”) would be too costly.

Conclusion

As you explore design spaces in software architecture, if you can prove that an interface is making promises it is not in a position to keep, don’t be afraid to invoke the Utility of Futility to simplify the system.


[1] Talking with members of the NT kernel team was illuminating, because they had a different relationship with hardware than did the multimedia team; they literally had fixed the Pentium FDIV bug by trapping and emulating the FDIV (floating point divide) instruction. But since FDIV is infrequently executed, emulating it incurred a modest performance penalty that would go unnoticed by end users without the aid of measurement tools. Even if graphics hardware were designed to be trapped and emulated, like the CPU instruction set, trapping and emulating graphics functionality in the NT kernel would incur a large-enough performance penalty that the resulting product would be unusable.

[2] These are commands such as “move this 2D block of pixels from here to there,” or “draw the following set of triangles.” The WDDM (the Windows Display Driver Model) architecture had many other features, such as supporting multitasking so many applications could do 3D rendering into the Windows desktop concurrently, but those details are not relevant to this “utility of futility” discussion.

A Paean To Structure-Based Interfaces

When the time came for us to gather requirements and start designing CUDA 2.0 (c. 2008), I was surprised to hear from our product manager that the key Next Biggest Thing in CUDA was… drum roll… 3D textures[1]!

The reason this priority was surprising is that adding texturing in the first place had been somewhat controversial. Why have graphics features in a general purpose computing ecosystem[2]?

Early versions of CUDA hardware did not have caches to mitigate the overhead of uncoalesced memory traffic, though, so applications that really needed misaligned reads had no choice but to use the texturing hardware. As an added bonus, the cache resources were separate, so by reading inputs via the texture pipeline, you were enlisting hardware resources that otherwise would have gone unused.
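For example, a kernel of that era could route its reads through the texture pipeline using CUDA’s texture-reference API (long since deprecated and removed); this is a sketch:

texture<float, 1, cudaReadModeElementType> texInput;  /* file-scope texture reference */

__global__ void scaleKernel( float *out, int n, float k )
{
    int i = blockIdx.x*blockDim.x + threadIdx.x;
    if ( i < n )
        out[i] = k * tex1Dfetch( texInput, i );  /* read via the texture cache */
}

/* Host code binds device memory to the texture reference before launch:
       cudaBindTexture( NULL, texInput, devPtr, n*sizeof(float) );
*/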

But… 3D texturing? Okay. That would require us to implement extensive, backward-compatible revisions to all of the CUDA array APIs, the texturing APIs, and the memcpy APIs. CUDA 1.0 already had a plethora of memcpy functions, especially at the driver API level, where we’d opted to try to maintain type safety between CPU and GPU pointers.

Adding new functionality in a backward compatible way is a long-held tradition in API design. For example, flags words are a good way for interface designers to future-proof their designs; or in a pinch, replacement functions may be provided that enable new functionality. Since those functions did not exist when legacy applications were being developed, the interface designer knows they can only be called by newer applications expecting the new behavior. Sometimes we get both, even in close quarters: the driver API function cuEventCreate() takes a flags word that later was used for innovations such as blocking synchronization, but whoever wrapped the driver API in the CUDA Runtime omitted it from cudaEventCreate(), so NVIDIA later added the replacement interface with the awkward name cudaEventCreateWithFlags()!
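The contrast is visible in a few lines of code (a sketch; error checking omitted):

#include <cuda.h>
#include <cuda_runtime.h>

void createBlockingEvents( void )
{
    /* Driver API: the flags word was there from the start, so blocking
       synchronization could be added without a new entry point. */
    CUevent hEvent;
    cuEventCreate( &hEvent, CU_EVENT_BLOCKING_SYNC );

    /* CUDA runtime: cudaEventCreate() takes no flags, so the capability
       had to be retrofitted via a replacement function. */
    cudaEvent_t event;
    cudaEventCreateWithFlags( &event, cudaEventBlockingSync );
}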

The reason API designers have to be careful with this type of thing is because applications can rely on existing behaviors, a lesson that most interface designers learn through hard experience. Mine was acquired at Microsoft in the multimedia division…

Aside: How To Compatible

When I was Direct3D development lead in 1998 or so, we worked with Intel to enable a new software rasterizer that benefited from the new MMX instruction set. The reason we had to expose it as a new rasterizer, instead of simply replacing the incumbent RGB rasterizer with a faster one, was because applications would subtly break when using the new rasterizer, and no matter how hard Intel tried, they couldn’t just replace the old, slower, serial RGB rasterizer with a new, faster one. So, we added the MMX rasterizer as an option; that way, we had both the new MMX rasterizer and backward compatibility (old applications would continue to work). The only downside was that newer applications had to specifically ask for it, but Intel was okay with that, as a measure of accountability for not being able to develop a drop-in replacement for the RGB rasterizer. Such compatibility measures are called “opt-ins” because the application is opting into the new behavior: if a legacy application were ported to the newest interfaces without any changes, it still would get the old RGB rasterizer. A change to the source code, not just a recompile, was needed to get the new MMX rasterizer.

But when we simply started enumerating the MMX rasterizer to applications, we found a lot of compatibility breaks. Many applications ran much more slowly, and others crashed or otherwise didn’t work at all. When we debugged the problem, we found that we had to add a few more levels of opt-in to keep legacy applications working. You see, Direct3D always had reported 2 or 3 rasterizers: the “Ramp” rasterizer that ran quickly; the “RGB” rasterizer that was more functional, but slower; and, if hardware was available, the “HAL” rasterizer made hardware-acceleration available[3]. But by adding the MMX rasterizer, we’d increased the number of rasterizers in the API from 3 to 4 if hardware acceleration was available; and several different compatibility breaks became possible:

  • Some applications had statically declared a 3-element array of rasterizer handles that was now being overflowed by our 4-element response, causing the application to crash;
  • Some applications were assuming that if 3 rasterizers were enumerated, then hardware must be available;
  • Some applications were further assuming that if 3 rasterizers were enumerated, then the third must be the HAL[4].

The fix was to make enumeration of the MMX rasterizer an implicit opt-in if the application was accessing Direct3D using a new interface. We also wrote better sample code – that was where the statically declared arrays had come from[5].

After being burned a few times by promising that existing applications will continue to work on new operating systems (or new versions of Direct3D) and delivering disappointing results, interface designers start to learn tactics to make everyone’s jobs easier[6], not just their own, but also the lives of developers who are using their interfaces. With that in mind, let’s take a close look at the 3D memcpy APIs that were added in CUDA 2.0.

An API Design Headache

Set aside the complexities of 3D texturing for a moment and focus on the difficulties presented by just copying the memory from here to there. For starters, CUDA has three (3) different memory types: host memory[7], device memory, and CUDA arrays. Since CUDA arrays come in 2D and 3D variants, we also needed to support 1D, 2D and 3D memory copies. And because CUDA arrays’ layout is opaque to developers, the memory copy interface must support offset values (X, Y and Z).

Designing a set of distinct memory copy functions that covered the full Cartesian product of possibilities would have been an API design and development nightmare, accompanied by a developer education headache[8]. Between 3 source memory types, 3 destination memory types, and 3 dimensionalities for the memcpy’s, twenty-seven (27) memory copy functions would have had to be designed, implemented, and documented. Just coming up with a heuristic for naming would be a chore, reminiscent of the naming problem confronted by the designers of SIMD intrinsics.

For example, a function to copy a 2D slice of device memory into a 3D CUDA array might look something like this:

CUresult cuMemcpy2D_2DDto3DA( CUarray dstArray, size_t offsetX, size_t offsetY, size_t offsetZ, CUdeviceptr src, size_t srcPitch, size_t WidthInBytes, size_t Height );

The offset parameters are needed for CUDA arrays because the layout is opaque to developers. In this interface design, callers who are copying from a subset of a 2D array in device memory are expected to do the address arithmetic to specify the correct base pointer (for type T, this could be computed with an expression such as ((int8_t *) base)+srcY*srcPitch+srcX*sizeof(T) – perfectly intuitive)[9].

An alternative design – a single entry point, with parameters designed to be ignored if not relevant (say, the height parameter if a 1D memory copy is being requested) – would be more tractable. Such a function might look like this:

CUresult cuMemcpy3D( void *dstHost, CUdeviceptr dstDevice, size_t dstPitch, CUarray dstArray, size_t dstXInBytes, size_t dstY, size_t dstZ, const void *srcHost, CUdeviceptr srcDevice, size_t srcPitch, CUarray srcArray, size_t srcXInBytes, size_t srcY, size_t srcZ, size_t WidthInBytes, size_t Height, size_t Depth );

This function has a dizzying number of parameters (17, in fact), but it does cover all of the use cases. The memory types of the source and destination would be inferred from whichever of srcHost/srcDevice/srcArray and dstHost/dstDevice/dstArray was non-NULL. (If more than one are specified, the function should fail.) The offset parameters are needed for CUDA arrays, since the layout is opaque to developers; the API designer then would have to decide whether to respect them if they are non-zero for host and device memory, or specify that they are ignored unless the participating memory type is a CUDA array. Finally, specifying a height of 0 and/or a depth of 0 naturally describes a 1D or 2D memory copy.

The problem with this function is that a single call to it occupies at least five lines of code to specify the 17 parameters. This interface is cumbersome at best! But, a better alternative to this mega-function is available, and that’s what we wound up shipping in CUDA: all of the parameters are collected together into a single structure that enabled the API client to separately describe the source and destination memory ranges, along with the location and size of the copy itself. The resulting structure had about two dozen members:

typedef struct CUDA_MEMCPY3D_st {
    size_t srcXInBytes;         /**< Source X in bytes */
    size_t srcY;                /**< Source Y */
    size_t srcZ;                /**< Source Z */
    size_t srcLOD;              /**< Source LOD */
    CUmemorytype srcMemoryType; /**< Source memory type (host, device, array) */
    const void *srcHost;        /**< Source host pointer */
    CUdeviceptr srcDevice;      /**< Source device pointer */
    CUarray srcArray;           /**< Source array reference */
    void *reserved0;            /**< Must be NULL */
    size_t srcPitch;            /**< Source pitch (ignored when src is array) */
    size_t srcHeight;           /**< Source height (ignored when src is array; may be 0 if Depth==1) */

    size_t dstXInBytes;         /**< Destination X in bytes */
    size_t dstY;                /**< Destination Y */
    size_t dstZ;                /**< Destination Z */
    size_t dstLOD;              /**< Destination LOD */
    CUmemorytype dstMemoryType; /**< Destination memory type (host, device, array) */
    void *dstHost;              /**< Destination host pointer */
    CUdeviceptr dstDevice;      /**< Destination device pointer */
    CUarray dstArray;           /**< Destination array reference */
    void *reserved1;            /**< Must be NULL */
    size_t dstPitch;            /**< Destination pitch (ignored when dst is array) */
    size_t dstHeight;           /**< Destination height (ignored when dst is array; may be 0 if Depth==1) */

    size_t WidthInBytes;        /**< Width of 3D memory copy in bytes */
    size_t Height;              /**< Height of 3D memory copy */
    size_t Depth;               /**< Depth of 3D memory copy */
} CUDA_MEMCPY3D_v2;

To call this API, typically the developer declares one of these structures on the stack, then passes the structure to the API itself[10]:

CUresult cuMemcpy3D( const CUDA_MEMCPY3D *cp );

Soon after I checked in the first draft of this API, I got an irate email from a fellow NVIDIAn who thought it was way too verbose. The exact wording is lost in the mists of time, but I remember a lot of heat! What he didn’t understand from a superficial reading of the API was that calling this function would only be about as complicated as the operation being requested.

For example, a simple 1D memcpy from host to device memory may be implemented as follows:

CUresult
cuMemcpyHtoD_via3D( CUdeviceptr dst, const void *src, size_t bytes )
{
    CUDA_MEMCPY3D cp = {0};
    cp.dstMemoryType = CU_MEMORYTYPE_DEVICE;
    cp.srcMemoryType = CU_MEMORYTYPE_HOST;
    cp.dstDevice = dst;
    cp.srcHost = src;
    cp.WidthInBytes = bytes;
    return cuMemcpy3D( &cp );
}

The declaration of cp zero-initializes the structure using a C programming construct that was valid in the 1970s, when the First Edition of the White Bible came out. As long as the interface has defined reasonable defaults to be zero (such as the source and destination pointers and offsets), callers can use this idiom to write compact, intuitive code.

A more complicated memory copy, like the 2D memcpy function we declared earlier, could be implemented as follows:

CUresult cuMemcpy2D_2DDto3DA( CUarray dstArray, size_t offsetX, size_t offsetY, size_t offsetZ, CUdeviceptr src, size_t srcPitch, size_t WidthInBytes, size_t Height )
{
    CUDA_MEMCPY3D cp = {0};
    cp.dstMemoryType = CU_MEMORYTYPE_ARRAY;
    cp.dstArray = dstArray;
    cp.dstXInBytes = offsetX;
    cp.dstY = offsetY;
    cp.dstZ = offsetZ;
    cp.srcMemoryType = CU_MEMORYTYPE_DEVICE;
    cp.srcDevice = src;
    cp.srcPitch = srcPitch;
    cp.WidthInBytes = WidthInBytes;
    cp.Height = Height;
    return cuMemcpy3D( &cp );
}

The structure-based interface delivers the best of both worlds: the caller can express any valid operation, but the function invocation is only as complicated as the operation requested! The main downside is that when implementing such an interface, parameter validation can be Byzantine in complexity. As a result, function call latency can become a concern, as small operations may spend more time validating the call than performing the requested operation. NVIDIA resolved this issue with CUDA graphs, which can record operations for later playback and enable parameter validation to be done at the recording stage.

With benefit of hindsight, there is one change to the interface that I wish I had thought to make before we shipped. When I designed the interface, I knew that assigning zero to the structure was valid K&R C from the 1970s – handy when zero is a good default value for most parameters! But if I’d eaten my own dogfood just a bit more, I might have thought to put the memory types at the beginning of the structure. They always are needed – you can’t specify a memory copy without specifying the types of the memory participating – and the developer education would’ve been even more intuitive, with the destination and source memory types specified at the site of the structure declaration. This idiom takes advantage of another subtlety of K&R C, namely that if only the first few members of a structure are initialized, the remainder of the structure is zero-initialized. Here’s a rewrite of our 2D-device-to-3D-array memcpy function that does just that:

CUresult cuMemcpy2D_2DDto3DA( CUarray dstArray, size_t offsetX, size_t offsetY, size_t offsetZ, CUdeviceptr src, size_t srcPitch, size_t WidthInBytes, size_t Height )
{
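  /* Assumes the hypothetical reordering described above, with
     dstMemoryType and srcMemoryType as the first two members. */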
  CUDA_MEMCPY3D cp = {CU_MEMORYTYPE_ARRAY, CU_MEMORYTYPE_DEVICE};
  cp.dstArray = dstArray;
  cp.dstXInBytes = offsetX;
  cp.dstY = offsetY;
  cp.dstZ = offsetZ;
  cp.srcDevice = src;
  cp.srcPitch = srcPitch;
  cp.WidthInBytes = WidthInBytes;
  cp.Height = Height;
  return cuMemcpy3D( &cp );
}

Named Parameters

C++20 gives us an even more intuitive way to access such interfaces, first by enabling the parameter structure to be declared inline with the function invocation (not always what the developer wants, of course), and second by enabling the structure members to be named as they are initialized. In C++20, echoing the C++ requirement that constructors of embedded members be invoked in the order of appearance of said members, the named structure members must be specified in order of appearance. (Note that this example, like the previous one, assumes the reordered structure with the memory types as its first members.) The resulting wrapper function isn’t much more intuitive-looking than the 1970s-era K&R edition, but your mileage may vary.

CUresult cuMemcpy2D_2DDto3DA( CUarray dstArray, size_t offsetX, size_t offsetY, size_t offsetZ, CUdeviceptr src, size_t srcPitch, size_t WidthInBytes, size_t Height )
{
  CUDA_MEMCPY3D cp = {
    .dstMemoryType = CU_MEMORYTYPE_ARRAY,
    .dstArray = dstArray,
    .dstXInBytes = offsetX,
    .dstY = offsetY,
    .dstZ = offsetZ,
    .srcMemoryType = CU_MEMORYTYPE_DEVICE,
    .srcDevice = src,
    .srcPitch = srcPitch,
    .WidthInBytes = WidthInBytes,
    .Height = Height};
  return cuMemcpy3D( &cp );
}

Next: we’ll take a look at applying this API design methodology to BLAS.


[1] For reference, CUDA 1.1 added streams and events, which have proven to be useful and future-proof over the last 15 years.

[2] They must not have minded that much, because cubemaps and mipmapped textures were added sometime after I left NVIDIA. Maybe OptiX needed them.

[3] With benefit of hindsight, it seems ridiculous, but in the mid-1990s some game developers wanted to keep rasterization on the CPU, where they could retain complete control. They prided themselves on their ability to write optimized software renderers, and some were loath to get rid of their most valuable IP. Reportedly it took an in-person visit by 3Dfx to convince John Carmack that the novel visibility methods he’d developed for the Quake engine could be combined with hardware-accelerated rasterization to make an even better product.

[4] “Hardware abstraction layer,” a term that I believe originated with Dave Cutler’s original Windows NT kernel,  whose HAL enabled the NT kernel to be portable across CPU architectures. Cutler’s HAL ran in kernel mode and abstracted operations like mapping page tables, and ours ran in user mode and abstracted operations like drawing triangles.

[5] We had shipped several versions of DirectX before we realized that sample code should be written by the best developers available, since that is the code that developers copy and paste to build their own applications.

[6] Interface designers must account for their clients’ needs, or no one will use their APIs. The metric is ROI: the interface must deliver a return on the investment that justifies the cost of learning the interface. The first version of Direct3D that used “execute buffers” failed in this mission – it was too difficult to use, and the promised benefits of execute buffers were never delivered – and almost caused the API to be cancelled.

[7] The CUDA driver treats page-locked and pageable memory differently, but those differences generally are transparent to the API client.

[8] As it happens, the CUDA runtime did expose multiple entry points to cover some of the different use cases, for example, cudaMemcpy2D() and cudaMemcpy2DFromArray().

[9] When interface designers create work for their clients, you may hear that referred to as “reverse delegation.” Sometimes it is the best choice for an interface, especially if different clients may want to do the operation differently. In the case of computing offsets into 2D linear arrays in device memory, we decided to take offset parameters and do the pointer math on behalf of our client.

[10] Another decision we had to make for this function was whether to embed the CUstream parameter for asynchronous variations. For orthogonality with the existing APIs for asynchronous memcpy, I kept the synchronous and asynchronous variants in separate functions, cuMemcpy3D() and cuMemcpy3DAsync().

CUDA Graphs, ROI, and API Adoption

CUDA 10 adds a new API called “CUDA Graphs” that is immediately familiar to graphics API designers: it is a scene graph API for compute. Scene graph APIs enable developers to describe geometry at a “higher”™ level, in ways that express the relationships between, say, rooms and doorways within a castle or the arms and legs of a 3D character. The idea is that with this additional information, the API implementor (in this case, NVIDIA) can write code that will traverse the scene graph (say, rendering the characters with their limbs animated) more efficiently than code written by the developer. Either that, or the scene graph API is sufficiently easier to learn than writing the scene graph code oneself that developers can achieve faster time-to-market by learning and using the scene graph API.
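For concreteness, here is a minimal sketch of the intended workflow using stream capture; the kernels are hypothetical, error checking is omitted, and a CUDA 10.1-or-later toolkit is assumed:

#include <cuda_runtime.h>

__global__ void kernelA( void );
__global__ void kernelB( void );

void captureAndLaunch( cudaStream_t stream )
{
    cudaGraph_t graph;
    cudaGraphExec_t graphExec;

    cudaStreamBeginCapture( stream, cudaStreamCaptureModeGlobal );
    kernelA<<<256, 256, 0, stream>>>();   /* recorded, not executed */
    kernelB<<<256, 256, 0, stream>>>();
    cudaStreamEndCapture( stream, &graph );

    cudaGraphInstantiate( &graphExec, graph, NULL, NULL, 0 );  /* validate once */
    cudaGraphLaunch( graphExec, stream );                      /* replay cheaply */
}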

I am skeptical that CUDA Graphs will achieve adoption outside NVIDIA’s SDK samples.

API designers drive adoption by maximizing the return on investment, where the return is efficient, working code and the investment is developer time. APIs that are not easy to learn are disadvantaged because every developer who writes or maintains the code must invest in learning the API. APIs that don’t deliver a compelling performance advantage must be *very* easy to learn, conferring an expressive advantage (i.e., faster development times).

CUDA adoption has been driven by delivering huge performance gains (the return) despite a steep learning curve (the investment). (It makes for an interesting thought-experiment to wonder why CUDA has succeeded and other manycore platforms have not. Although this blog post does not touch on the issue, customer investments must be considered in addition to developer investments.)

An early API (in fact, it was created in the 1970s, long before the term “API” had been invented) that delivers high ROI is BLAS, the Basic Linear Algebra Subprograms. Originally written in FORTRAN, the motivations for this library were twofold: to “provide names and argument lists that might become widely used and recognized for some of the basic operations of computational linear algebra,” and “to improve efficiency of math software.” BLAS code is reasonably performance- and platform-portable. As the underlying platforms evolved, the same BLAS code benefited transparently from assembly language hand-coding to cache blocking to SIMD instruction sets. There was no need to update the API client code as the implementation changed underneath. BLAS has achieved widespread adoption in numerical code, amplifying developers’ expressive power and enabling them to leverage the development effort invested by others in its implementation. At this point, BLAS gets an inordinate amount of attention from hardware vendors, making it unlikely that developers can match its performance without exploiting a priori knowledge of their application requirements. It takes time to learn, but delivers a considerable return on that investment.
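To make the ROI concrete, here is what a C client of BLAS looks like (a sketch assuming a CBLAS implementation such as OpenBLAS):

#include <cblas.h>

/* C = A*B for row-major matrices. The identical call has been backed,
   over the decades, by hand-coded assembly, cache blocking, and SIMD
   instruction sets, with no change to client code. */
void gemmExample( int M, int N, int K,
                  const double *A, const double *B, double *C )
{
    cblas_dgemm( CblasRowMajor, CblasNoTrans, CblasNoTrans,
                 M, N, K,
                 1.0, A, K,   /* alpha, A, lda */
                 B, N,        /* B, ldb */
                 0.0, C, N ); /* beta, C, ldc */
}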

On the other end of the spectrum, an API that has high ROI by minimizing developer investment is malloc()/free(). Learning first-hand the difficulty of writing a fast, robust memory allocator has been an inflection point in many junior developers’ careers – it’s harder than it looks. Other APIs that deliver a high return with minimal investment: the thread synchronization APIs built into operating systems. They are not hard to learn and, for most developers, impossible to implement.

In the early days (DirectX 2.0-3.0), Direct3D had a scene graph API called the “retained mode,” but the last version shipped in 1996. No one was using it, despite heroic evangelism efforts by its developers. Developers could use “immediate mode” APIs to implement their own scene graphs more efficiently – both in terms of developer time and in terms of high-performance implementations of the operations they needed. As an added bonus, by writing the scene graph traversal themselves, developers kept all the IP in-house (e.g., their visibility algorithm) and, if there was a bug, they could fix it in their code on their own schedule.

Since game developers co-design their content development tools with the runtime, a great deal of intellectual property is encapsulated in the scene graph traversal. In a sense, 3D scene graph API designers were aspiring to co-opt developers’ core IP – never a winning proposition for a platform.

I suspect that CUDA developers will come to similar conclusions with the CUDA Graphs. No one will use them unless they deliver a return on investment in the form of higher performance, or greater expressiveness commensurate with the effort to learn the APIs. Higher performance will be difficult to achieve since CUDA gives developers ready access to the underlying tools used by the CUDA Graphs.

One possible opportunity for NVIDIA: perhaps CUDA Graphs will be an efficient way to enable concurrent execution of kernels that weren’t designed to run in streams? CUDA streams are like const correctness – it is difficult to retrofit code to use them because they must be plumbed into interfaces from top to bottom. An alternative to revisiting interfaces top-to-bottom is to add a “current stream” API (as CUBLAS did), but current-anything APIs interoperate poorly and tend to be inefficient at changing the current-thing. More importantly, the current-thing state must be saved and restored across interfaces.
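CUBLAS’s current-stream pattern looks like this in practice (a sketch; error checking omitted). Note the save/restore dance across the interface boundary:

#include <cublas_v2.h>

void saxpyOnStream( cublasHandle_t handle, cudaStream_t stream,
                    int n, const float *alpha, const float *x, float *y )
{
    cudaStream_t saved;
    cublasGetStream( handle, &saved );  /* save the current stream... */
    cublasSetStream( handle, stream );
    cublasSaxpy( handle, n, alpha, x, 1, y, 1 );
    cublasSetStream( handle, saved );   /* ...and restore it afterward */
}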

So one path to adoption for CUDA Graphs may be an efficient way to enable concurrent execution of kernels that weren’t designed to use streams. But in general, like immediate-mode graphics APIs, most developers will be able to more quickly write their own code expressing the dependencies in their application than it would take to learn and use the CUDA Graphs APIs. And developer-authored code will run at least as fast, paying tribute to the First Law Of CUDA Development.

Unless CUDA Graphs deliver a high ROI, they will go the same way as other features that Seemed Like A Neat Idea At The Time, like dynamic parallelism and managed memory.

Don’t Move The Data!

NVIDIA just delivered their first Volta-enabled DGX-1 systems – great news for those who need the additional compute power of GV100 versus GP100:

                  GP100        GV100
FP32 Compute      10.6 TFLOPS  15.0 TFLOPS
FP64 Compute      5.30 TFLOPS  7.50 TFLOPS
Memory Bandwidth  720 GB/s     900 GB/s

Wait, you say, that’s an interesting qualifier. Who doesn’t “need the additional compute power…?” Did someone hack into Nick’s blog account and post on his behalf? Or has he become a Luddite in his dotage?

Nope, no, I still think more compute is generally better; but it is past time to question the architecture of these systems with huge, discrete GPUs connected to the world by buses. The problem with DGX-1 is that those GPUs are hungry! They need to be fed! And they can only sip data through the tiny soda straw known as the PCI Express bus.

For perspective, let’s compare these chips to G80, the first CUDA-capable GPU. Let’s set the stage by observing that G80 was the largest ASIC NVIDIA could feasibly design and fabricate in 2006, straining the limits of contemporary fabrication technology – a classic “win” chip. It had 684M transistors, a theoretical maximum performance of 384GFLOPS for single precision, and no support at all for double precision. GP100 and GV100 respectively have 22x and 31x more transistors, and 27x and 39x more single precision performance than G80. But the bandwidth to deliver data to and from these GPUs has not been increasing commensurately with that performance.

Here’s a table for all 3 GPUs – G80, GP100 and GV100 – that highlights the FLOPS/byte of bandwidth for device memory (attached to the GPU), NVLINK (NVIDIA’s proprietary GPU-GPU interconnect), and PCI Express:

                    G80         GP100       GV100
GFLOPS (SP)         384         10600       15000
GPU↔memory          84 GB/s     720 GB/s    900 GB/s
  FLOP/Byte         4.5         14.7        16.67
GPU↔GPU (NVLINK)    n/a         20 GB/s     20 GB/s
  FLOP/Byte         n/a         530         750
CPU↔GPU (PCIe)      3.1 GB/s    3.1 GB/s    3.1 GB/s
  FLOP/Byte         124         3419        4839

The 3.1GB/s figure comes from dividing the available PCIe bandwidth by the number of GPUs in the system. Two 16-lane PCIe 3.0 connections are about 25 GB/s observed, and there are 8 GPUs.

As the number of FLOPS per byte of I/O diverges, the number of workloads that benefit from more FLOPS diminishes. Googling around for literature on FLOPS/byte, I ran across this 2011 presentation by Peter Kogge entitled “Hardware Evolution Trends of Extreme Scale Computing.” For anyone in the GPU business, the first sign that something’s amiss crops up in Slide 3, which cites “1 byte/FLOP” as the “classical goal.” Even G80’s device memory fell well short of that goal with 1 byte/4.5FLOPS. I prefer this framing because it adopts the viewpoint of scarcity (bytes/FLOP – getting data in and out for processing) rather than abundance (FLOPS/byte – having lots of processing power to bring to bear on data once it is in hand).

The presentation is from 2011, but still very relevant: after reviewing Moore’s Law and the rise and fall of Dennard scaling, and the preeminent importance of power dissipation in modern computing, the concluding slide reads in part:

  • World has gone to multi-core to continue Moore’s Law
  • Pushing performance another 1000X will be tough
  • The major problem is in energy
  • And that energy is in memory & interconnect
  • We need to begin rearchitecting to reflect this …
  • DON’T MOVE THE DATA!

“DON’T MOVE THE DATA” has been good advice to everyone who’s had the data for decades (in 1992 I wrote a Dr. Dobb’s Journal article that focused on hand-coding x87 assembly to keep intermediate results in registers)… but the advice has more currency now.

Moving The Data on CPUs

The data/compute conundrum finds expression on modern multi-core CPUs, too. Each core on a modern x86 CPU has an ILP (instruction level parallelism) of 5, meaning it can detect parallelism opportunities between non-dependent instructions and execute up to 5 instructions in a single clock cycle. Latency to the L3 cache is about 50 clock cycles. So a CPU core can perform hundreds of FLOPS on data in registers in the time it takes the L3 to service a load: conservatively, 2 of the 5 pipelines can do 8 FLOPS per instruction via AVX, or 16 FLOPS per clock, which works out to some 800 FLOPS over those 50 cycles. And that’s assuming the data was in cache!

As an aside, this observation helps explain why “optimized” numerical Python code is still dead slow. Because Python is interpreted, numerical code relies on a library called Numpy that wraps vectorized implementations of operations such as element-wise addition or multiplication between arrays. But for arrays that don’t fit in cache (and to some extent, even for arrays that do), it is very inefficient to make multiple passes over the data when the computation could have been fused into a single pass. The code spends all of its time moving data, and very little time processing it.
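The effect is easy to reproduce in plain C (function and array names are illustrative): the unfused version below traverses the arrays twice, plus a temporary; the fused version moves the data once.

#include <stddef.h>

// Unfused, Numpy-style: tmp = a + b, then out = tmp * c.
// Two full passes over the data, plus traffic for the temporary.
void unfused( const float *a, const float *b, const float *c,
              float *tmp, float *out, size_t n )
{
    for ( size_t i = 0; i < n; i++ ) tmp[i] = a[i] + b[i];
    for ( size_t i = 0; i < n; i++ ) out[i] = tmp[i] * c[i];
}

// Fused: one pass, no temporary array – the data is moved once.
void fused( const float *a, const float *b, const float *c,
            float *out, size_t n )
{
    for ( size_t i = 0; i < n; i++ ) out[i] = (a[i] + b[i]) * c[i];
}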

DON’T MOVE THE DATA!

A Gift From Heaven: Deep Learning

Which workloads, pray tell, require endless FLOPS per byte of I/O? Or turn it around and ask, which workloads still thrive when there is barely any I/O per FLOP? NVIDIA hasn’t been shy about trumpeting its solution to this problem: deep learning! Training a deep learning network entails refining floating point weights that roughly represent neurons that “learn” as they are trained on the data. As long as the weights can reside in device memory, only a modest amount of I/O is needed to keep the GPU busy. In retrospect, NVIDIA is extremely fortunate that deep learning cropped up. Without it, it’s not clear what workload could soak up all those FLOPS without the GPUs starving. The importance of machine learning as a workload helps explain why GV100 contains purpose-built hardware for machine learning, in the form of the TensorCore. But that hardware actually exacerbates the GPU starvation problem, by increasing FLOPS without increasing bandwidth.

NVIDIA probably isn’t comfortable betting the farm on a single workload – especially one whose main customers are enterprises that can invest in their own machine learning hardware, and one that is attracting VC money for application-specific hardware. How do you hedge? How can NVIDIA relieve the bottleneck? Unless some workload materializes that is as compute-intensive (per byte of I/O) as machine learning, NVIDIA must seek out ways to address their GPUs’ I/O bottleneck.

I/O: NVIDIA’s Strategic Landscape

The problem is that NVIDIA’s strategic options are constrained by business and legal challenges. According to the terms of their 2011 settlement with Intel: 1) they do not have a license to Intel’s industry-leading cache coherency protocol technology, and 2) they do not have a license to build x86 CPUs, or even x86 emulators.

NVIDIA has done what they can with the hand they were dealt: they built GPUDirect to enable fellow citizens of the bus (typically InfiniBand controllers) to access GPU memory without CPU intervention; they built NVLINK, a proprietary cache coherency protocol. They have licensed NVLINK to IBM for the POWER architecture and signaled a willingness to license it to ARM licensees. The problem is that POWER and ARM64 are inferior to Intel’s x86, whose high-end CPU performance is unmatched and whose “uncore” enables fast, cache coherent access across sockets. NVIDIA itself, though an ARM licensee, has announced that it will not be building a server-class ARM chip.

I’m not sure why NVIDIA announced they would not be building their own ARM to drive their GPUs, because that seems like an obvious way for them to own their destiny. It may be that NVIDIA concluded that ARM64 cores simply will never deliver enough performance to drive their GPUs. That’s too bad, because there is a lot of low-hanging fruit in NVIDIA’s driver stack. If they made the software more efficient, it could either run faster on the same hardware or run at the same speed on lesser hardware – like ARM64 cores.

Not being able to coordinate with Intel on the cache coherency protocol has cost NVIDIA big-time in at least one area: peer-to-peer GPU traffic. Intel could, but chooses not to, service peer-to-peer traffic between NVIDIA GPUs at high performance (Intel and NVIDIA give different stories as to the reason, and these conversations happen indirectly because the two companies do not seem to have diplomatic relations). As things stand, if you have a dual-CPU server (such as NVIDIA’s own DGX-1) with cache coherency links between the CPUs, any peer-to-peer GPU traffic must be carefully routed past the CPUs, taking care not to cross the cache coherency link. If Intel could license QPI to Altera, they could license it to NVIDIA. Failing to do so is a matter of choice and a by-product of the two companies’ respective positions in the business and legal landscapes.

As things stand, NVIDIA is dependent on Intel to ship great CPUs with good bus integration, and peer-to-peer-capable GPU servers have to be designed to steer traffic around the QPI link. The announcement that NVIDIA would not build ARM64 SoCs was made in 2014; now that the competitive landscape has evolved (I can remember when Intel’s market capitalization was 12x NVIDIA’s; it is now only about 1.7x), it would not surprise me if NVIDIA revisited that decision.

One Path Forward: SoCs

One partial solution to the interconnect problem is to build a System on a Chip (SoC): put the CPU and GPU on the same die. Intel and AMD have been building x86 SoCs for many years; it is Intel’s solution for the value PC market, and AMD has behaved as if its life depended on SoCs since 2006, when it acquired GPU vendor ATI. NVIDIA’s Tegra products are ARM SoCs. The biggest downside of SoCs is that the ratio of CPU/GPU performance is fixed years before the hardware becomes available, causing workloads to suffer if they are more CPU- or GPU-intensive than the SoC was designed to address. And if the device doesn’t have enough performance, scaling performance across multiple chips may be more difficult because GPUs require such high bandwidth. A conspicuous success story for big SoCs has been the gaming console market, where the target workload is better understood and, in any case, game developers will code against whatever hardware is in the console.

So I suspect that as workloads tap out the available FLOPS and the bandwidth-to-FLOPS ratio evens out, big SoCs will start to make more sense. In sizing the CPU/GPU ratio, hardware designers can build the biggest possible GPU that won’t starve on the available bandwidth.

SoCs are just a stopgap, though. As the laws of physics continue to lower the boom, the importance of system design will continue to increase, as Kogge pointed out in his 2011 presentation. The fundamental problem of the speed of light isn’t going away… ever.

Managed Memory and Segmentation

At the GPU Technology Conference this year, I ran into an old colleague from NVIDIA and the topic of managed memory came up. He related that earlier in the conference, the Q&A session after a Dell-sponsored presentation on managed memory had suffered a serious decline in the level of discourse. Neither of us had been in attendance, but apparently one questioner after another stood up and asked the presenters, in effect, “Where are you going with this?”

He made it sound like the presenters thought they were going to be pelted with rotten fruit!

Why the controversy? Managed memory is supposed to make CUDA programming simpler. It is intended to eliminate the need to copy data back and forth between CPU and GPU memory. If it were performance-neutral like, say, C++ lambdas, then managed memory would be a welcome addition to CUDA. The problem is that any feature that presents a risk of degrading performance will be viewed with skepticism by the CUDA programming community – because no one programs CUDA for fun.

What is it about CUDA that makes it so difficult to automatically manage residency of memory?

Reflecting on the answer to this question brought me back to a conversation I had at a different GTC, so long ago that I’m not sure which year it occurred. I had just met Daniel Moth, the Program Manager at Microsoft for C++ AMP. Once it was firmly established that we were fellow travelers, charting the technical roadmaps for competing data parallel programming environments, he had a question.

“Tell me one thing,” he asked. “Why do you need streams and events?”

I had to think for a minute. Why indeed? I’d added the feature in CUDA 1.1, to cover new hardware that could DMA host memory concurrently with kernel execution; but it was already clear that the new abstractions were future-proof to multiple kernels executing concurrently, and even coordinating execution between multiple GPUs.

“Streams are like CPU threads,” I told him stupidly, quoting from the original design document I’d written in 2007. “Operations that are done in different streams can happen concurrently. And you need events to coordinate execution between streams.”

“But we don’t need that stuff in C++ AMP. The stuff that can be done in parallel, we just do it in parallel.”

After a few minutes’ conversation, the key difference emerged and I finally had it.

“Oh,” I cried. “CUDA has a flat address space!”

C++ AMP does not.

CUDA’s address space causes more trouble than is widely appreciated. Because pointers can be stored in device memory, any CUDA kernel can attempt to access any address. In the early days of CUDA, when paging was not supported (every byte of virtual memory was backed by physical memory) and systems with multiple GPUs were rare (and certainly not for sale in the public cloud), having a flat address space seemed to make sense. That impression was bolstered by the hardware design community’s ideological commitment to linear address spaces, which had taken root after a divisive debate contrasting linear address spaces with segmentation.

Linear Addressing versus Segmentation

Segmentation is the idea that memory should be modeled as a set of discrete buffers with base pointers and lengths, rather than assigning an address (like a PO Box) to each byte of memory. Segmented memory is accessed via a segment/offset tuple instead of a single address, a paradigm that is implemented at the hardware level. Intel’s x86 architecture was segmented from the beginning (the 8086, introduced in 1978). It provided for 4 segments to be accessible at any given time: the segment registers CS, DS, SS, and ES were for code, data, stack, and “extra” data, respectively. Each segment register had a base address and a length, and most machine instructions implicitly referenced a segment that represented a sensible default. The PUSH and POP instructions that operate on the stack implicitly referenced the stack segment (SS). Loads and stores from memory implicitly used DS, the data segment, unless that default was overridden by a “segment prefix.” For example, the SS: prefix could be used to operate on stack memory.

The problem with segments was that they made code difficult to compose: even simple operations like function calls were complicated by potential differences between the segment register settings needed by the caller and callee. The callee could save and restore its segment registers at the subroutine boundary, but that hurt performance. More typically, developers would select a “memory model” with fixed segmentation usage that was appropriate for their application. So-called “large” memory models would just specify a segment:offset tuple for every address; under MS-DOS, this amounted to a cheesy way to enable 20-bit addressing with 32-bit addresses, or 1M of memory with 4G worth of address width. It also hurt performance since every load and store needed a segment override.

Segmentation introduced difficult, but solvable, problems for developers of individual applications; but even 25 years ago, it was clear that plugin architectures like OLE automation would play a central role in future software development. Being able to load code and data dynamically into an application and have it “just work,” without having to worry about segments, was of paramount importance. The ability for libraries to efficiently access their callers’ data, and process it on their behalf, overrode the concerns that buggy code could corrupt data that happened to be accessible.

Segmentation and flat addressing can be reconciled by enabling large segment offsets and having the operating system map all the segments to cover the same address range. This usage was anticipated when the Intel i386 was introduced in 1985, and implemented in 32-bit multitasking operating systems like UNIX (or Microsoft’s long-lost Xenix), and later, OS/2 and Windows NT. This paradigm was so popular, and the need for segmentation support in hardware so unclear, that AMD mostly did away with segment registers when they revised x86 to enable 64-bit addressing in the early aughts.

When I wrote the specification for CUDA textures, with a clear separation between memory and views on the memory (CUDA arrays and texture/surface references, respectively), it quickly became clear that CUDA arrays were effectively segmentation. A CUDA kernel can’t access just any CUDA array; the CUDA driver must predeclare the CUDA arrays to be accessed by a kernel. Coupled with other per-launch parameters, such as the amount of shared memory and the number of registers needed, a CUDA kernel launch more closely resembles a container launch than a subroutine call.
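A sketch of that separation in source code, using the texture reference API as it existed at the time (the blur kernel and sizes are illustrative): the kernel can only sample the CUDA array through the predeclared reference, never through a raw pointer.

// Memory (CUDA array) versus view (texture reference): the kernel
// cannot dereference arbitrary addresses into the array; the driver
// must bind the array to the predeclared reference before launch.
texture<float, 2, cudaReadModeElementType> tex;

__global__ void blur( float *out, int w, int h )
{
    int x = blockIdx.x*blockDim.x + threadIdx.x;
    int y = blockIdx.y*blockDim.y + threadIdx.y;
    if ( x < w && y < h )
        out[y*w+x] = 0.25f*( tex2D(tex, x-1.0f, y) + tex2D(tex, x+1.0f, y) +
                             tex2D(tex, x, y-1.0f) + tex2D(tex, x, y+1.0f) );
}

void launchBlur( float *d_out, cudaArray *array, int w, int h )
{
    cudaChannelFormatDesc desc = cudaCreateChannelDesc<float>();
    cudaBindTextureToArray( tex, array, desc );   // predeclare the "segment"
    dim3 block(16,16), grid((w+15)/16, (h+15)/16);
    blur<<<grid, block>>>( d_out, w, h );
}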

A key reason segmentation was an abject failure for general-purpose computer architectures was the high cost of “switching segments” on a per-instruction basis. On x86, instructions such as LDS (load data segment) were costly; instruction prefixes to change the segments being operated on by a given instruction added complexity; and naïve systems that kept segment:offset tuples for all pointers essentially wasted addressing bits. Now that we have 64-bit addressing, it is possible to envision having page tables play the role of segments (by introducing a byte-granular limit to page table size), as argued in this blog post. For now, however, there is a decisive consensus in favor of flat address spaces.

What does all this have to do with managed memory?

By implementing segmentation on a per-kernel basis instead of a per-machine-instruction basis, GPU computing technologies get many of the benefits of segmentation, without the costs that hindered adoption on the CPU side. Kernels may take slightly longer to launch than they would otherwise, but the cost of a kernel launch is high enough that the additional cost of segmentation is negligible. And if each kernel launch predeclares the needed segments, the system can infer residency requirements, ensure coherency, and identify parallelism opportunities, much in the same manner that superscalar CPUs use real-time dependency analysis to identify which instructions can execute in parallel.

What do you mean by “infer residency requirements,” you ask? You guessed it: managed memory!

What do you mean by “identify parallelism opportunities,” you ask? You guessed it: automatic CUDA streams!

What about coherency? Not much would change here. The CUDA driver already uses software mechanisms to enforce coherency, for example, by inserting cache-invalidate instructions into the command stream before launching kernels that read from texture. In a segmented memory architecture, read-only segments can be copied where they are needed, then discarded without having to worry about propagating changes to the data. Writeable segments could be copied back wholesale, or using dirty bit optimizations.

So, it is not hard to imagine a GPU computing technology that uses segmentation to manage memory rather than a flat address space. In fact, we do not have to imagine one: C++ AMP is already here – and for CUDA, programs that used only CUDA arrays would have the properties needed to automate residency and parallelization. As a side note, the WDDM display driver model introduced in Windows Vista embraced a segmented memory architecture for paging.

Let’s review some of the deficiencies in managed memory, as discussed in my previous blog. It attempts to infer residency requirements based on memory accesses – which hurts performance and breaks the First Law of CUDA Development. It breaks the useful ability to infer the “owner” (CPU or which GPU) of a given address in the Unified Virtual Address Space. The semantics of multi-engine and multi-GPU memory management are complicated, and require hinting. Even if we set aside warranted skepticism about whether the hinting will be future-proof (I have my doubts), it introduces enough complexity that managed memory does not compare favorably to static, affinitized allocations like those of CUDA 1.0, or to segment-based architectures like C++ AMP.

Managed Memory: Belated Comments on Implementation

Managed memory is a new-ish CUDA feature that aspires to do away with the need to explicitly copy CPU memory to and from GPU memory. Introduced in CUDA 6.0, its initial implementation was unusably slow. (For example, copying managed memory from GPU to CPU memory ran at 512MB/s, 25x slower than an asynchronous memcpy.)
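The promise is seductive. A minimal sketch (illustrative kernel; error checking omitted): allocate once with cudaMallocManaged(), and both CPU and GPU dereference the same pointer with no explicit copies.

__global__ void scale( float *p, int n, float k )
{
    int i = blockIdx.x*blockDim.x + threadIdx.x;
    if ( i < n ) p[i] *= k;
}

void scaleManaged( int n )
{
    float *p;
    cudaMallocManaged( &p, n*sizeof(float) );        // one allocation...
    for ( int i = 0; i < n; i++ ) p[i] = (float) i;  // ...CPU writes it...
    scale<<<(n+255)/256, 256>>>( p, n, 2.0f );       // ...GPU reads and writes it...
    cudaDeviceSynchronize();
    float sum = 0.0f;
    for ( int i = 0; i < n; i++ ) sum += p[i];       // ...CPU reads it back.
    cudaFree( p );
}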

When they undertook to build the managed memory feature, NVIDIA had many different implementation strategies they could have pursued. As best I can tell, here is a summary of the strategy they chose:

  1. Upon allocation of managed memory, the CUDA driver allocates device memory, plus a pageable range of CPU memory at the same virtual address range.
  2. The CUDA driver uses dirty page bits to track which 4K pages were “touched” by the CPU.
  3. Upon kernel launch, the CUDA driver unmaps the managed memory from the CPU and copies the dirty pages from the CPU to the GPU. Unmapping the CPU memory removes the risk of write-after-read hazards, where the CPU corrupts managed memory before the GPU is able to copy it.
  4. While CUDA kernels run, the device memory copy of the managed memory is the only valid one.
  5. Upon CPU/GPU synchronization, the CPU buffer is made accessible again, but is not copied wholesale from GPU memory. It is possible the GPU’s hardware does not have the same dirty bit tracking facilities as the CPU, or perhaps NVIDIA just thought it would be preferable to copy device memory back to the CPU “on demand.”
  6. Copying managed device memory back to host memory is prompted by page faults: when the CPU attempts to access a page of managed memory, the CUDA driver handles the page fault by copying the 4K of GPU memory to CPU memory.

The application I used to investigate NVIDIA’s managed memory implementation is only about 60 lines of code. The key component is a function usPerLaunch that allocates a specified amount of managed memory, launches a NULL kernel, synchronizes with the GPU, then optionally “touches” the managed memory to force the CUDA driver to copy it back to host memory. (In an earlier version of this test, I confirmed that CUDA lazily copies only “dirty” pages in the other direction, as NVIDIA claims in its documentation.)

Here is the function in question – it may be found in cudahandbook/concurrency/managedOverhead.cu:

const size_t pageSize = 4096;
template<bool bTouch>
double
usPerLaunch( int cIterations, size_t cPages=0 )
{
    cudaError_t status;
    double microseconds, ret;
    chTimerTimestamp start, stop;
    void *p = 0;

    // cuda() is the handbook's error-checking macro; on failure it
    // records status and jumps to the Error: label.
    cuda(Free(0) );
    if ( cPages ) {
        cuda(MallocManaged( &p, cPages*pageSize ) );
    }

    chTimerGetTime( &start );
    for ( int i = 0; i < cIterations; i++ ) {
        NullKernel<<<1,1>>>();
        cuda(ThreadSynchronize() );
        if ( bTouch && 0 != p ) {
            // Touch one byte per page to force the driver to copy
            // each 4K page back from device to host memory.
            for ( size_t iPage = 0; iPage < cPages; iPage++ ) {
                ((volatile unsigned char *) p)[iPage*pageSize] |= 1;
            }
        }
    }
    chTimerGetTime( &stop );

    microseconds = 1e6*chTimerElapsedTime( &start, &stop );
    ret = microseconds / (double) cIterations;
    cudaFree( p );
Error:
    return (status) ? 0.0 : ret;
}

I ran this program on a Haswell-based Windows 7 machine on two NVIDIA GPU boards: the NVIDIA GeForce GTX 970 and Titan X (GM200 and GP100, respectively). Although both are large “win” chips, I would expect similar test results to hold true across all Maxwell and Pascal GPUs, since they seem to have implemented a hardware interface that improved managed memory performance.


Launch time (µs)   Memory (KB)   Bandwidth (MB/s)
   47                  0            n/a
  105                  4            39
  104                  8            78
  115                 16           143
  134                 32           244
  213                 64           307
  381                128           344
  649                256           404
 1247                512           420
 2221               1024           472
 4712               2048           445
 8458               4096           496
17041               8192           492
33992              16384           494

Table 1. GM200 results.

Launch time (µs)   Memory (KB)   Bandwidth (MB/s)
   39                  0             0
   47.15               4             7
   49.86               8           164
   57.84              16           283
   59.04              32           555
   64.73              64          1012
   79.08             128          1657
   98.41             256          2664
  137.15             512          3823
  205.56            1024          5101
  391.91            2048          5351
  745.81            4096          5624
 1543.91            8192          5433
 3114.83           16384          5386

Table 2. GP100 launch results.

“Better,” however, does not mean “good.” The most important thing to note is that these kernel launch times are VERY SLOW. You can measure synchronous and asynchronous kernel launch times with the nullKernelSync.cu and nullKernelAsync.cu programs in the same directory. On this machine, those times are 46.35 and 3.25 microseconds, respectively. (In fairness, results likely would be better under Linux, especially for synchronous kernel launch. On Windows 7, launching a CUDA kernel always requires the driver to have the operating system perform a user-kernel transition, or “kernel thunk.” Sadly, no amount of editing can get around the fact that CUDA kernels and OS kernels are completely different things, and some sentences must refer to both!)

On the Maxwell machine, whatever mechanism NVIDIA is using to copy managed memory back from the GPU has a maximum performance of less than 500MB/s. That’s a nonstarter. It is more than 25x slower than the bus bandwidth. Pascal has improved things, but is still less than half the performance of a PCI Express 3.0 link. A CUDA kernel reporting results via mapped pinned memory would achieve much higher performance.

Superficially, NVIDIA’s implementation makes sense, assuming there is one CPU and one GPU and that the application isn’t doing any fancy tricks with CPU/GPU concurrency. The main mistake in their implementation was failing to speculatively copy extra pages back from the GPU to the CPU in Step 6, an oversight that seems to have been remedied in subsequent releases. The cost of servicing a page fault is dominated by interrupt handling, not by copying the 4K page, so it makes sense to copy more pages per fault until the overhead of the additional copying becomes non-negligible.

Less clear, however, is the optimal behavior of managed memory in a system with multiple GPUs. Does a managed memory buffer get allocated for each GPU? When a kernel is launched on GPU 0, do the other GPUs get copies of the managed memory? Which memory ranges are valid for which GPUs as kernels are executing? And it seems clear that managed memory can’t possibly retain the property that the “owning” device can be inferred from a UVA address, by e.g. calling cudaPointerGetAttributes().

The paradigm also breaks for applications that perform memory copies and kernel processing concurrently.

I submit that the APIs needed to “enlighten” the managed memory subsystem to do the right thing are at least as complicated as simply writing the CUDA code to explicitly allocate and copy memory.
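For a taste of what that “enlightenment” looks like, consider the hinting APIs CUDA later added. cudaMemAdvise() and cudaMemPrefetchAsync() are the real APIs (they require Pascal-class hardware for full effect); the wrapper function below is an illustrative sketch.

// Sketch: the hints needed to make managed memory perform are
// arguably as much code as an explicit cudaMemcpy() would be.
void hintManagedBuffer( float *p, size_t bytes, int device, cudaStream_t stream )
{
    // Declare that the buffer will mostly be read...
    cudaMemAdvise( p, bytes, cudaMemAdviseSetReadMostly, device );
    // ...that it should preferentially live on this GPU...
    cudaMemAdvise( p, bytes, cudaMemAdviseSetPreferredLocation, device );
    // ...and copy it there before the kernel needs it.
    cudaMemPrefetchAsync( p, bytes, device, stream );
}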

More on managed memory soon.

Ten Years Later: CUDA Succeeded Despite…

After posting a list of reasons why CUDA succeeded, it seems worthwhile to reflect on some of its apparent vulnerabilities, and why CUDA has been successful despite those issues.

CUDA Succeeded Despite…

1. Being Proprietary.

NVIDIA builds the hardware and software to run CUDA applications and has never licensed the technology to anyone else. Conventional wisdom in the industry holds that proprietary software technologies are doomed to failure – they don’t get shepherded well by a single owner, and they don’t gain adoption by developers. But by making CUDA software portable to everything from Linux to Windows to MacOS, and making CUDA hardware available in a broad range of products from SoCs (Tegra) to high-end servers (DGX-1), NVIDIA has staved off the risks they incurred by going it alone.

2. Explicit Memory Management.

It’s every new CUDA programmer’s rite of passage: As if allocating and copying input and output data to and from device memory weren’t enough trouble, developers also explicitly manage shared memory to facilitate data interchange between threads.
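For the uninitiated, the ritual looks something like this sketch (illustrative reduction kernel; error checking omitted) – explicit device allocation, explicit copies in and out, and explicitly managed shared memory:

__global__ void sumBlocks( const float *in, float *out, int n )
{
    __shared__ float s[256];                 // explicitly managed shared memory
    int i = blockIdx.x*blockDim.x + threadIdx.x;
    s[threadIdx.x] = (i < n) ? in[i] : 0.0f;
    __syncthreads();
    for ( int d = blockDim.x/2; d > 0; d >>= 1 ) {  // tree reduction in shared memory
        if ( threadIdx.x < d ) s[threadIdx.x] += s[threadIdx.x+d];
        __syncthreads();
    }
    if ( 0 == threadIdx.x ) out[blockIdx.x] = s[0];
}

void hostSide( const float *h_in, float *h_out, int n, int blocks )
{
    float *d_in, *d_out;
    cudaMalloc( &d_in, n*sizeof(float) );           // allocate device memory
    cudaMalloc( &d_out, blocks*sizeof(float) );
    cudaMemcpy( d_in, h_in, n*sizeof(float), cudaMemcpyHostToDevice );   // copy in
    sumBlocks<<<blocks, 256>>>( d_in, d_out, n );
    cudaMemcpy( h_out, d_out, blocks*sizeof(float), cudaMemcpyDeviceToHost ); // copy out
    cudaFree( d_in );
    cudaFree( d_out );
}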

Fortunately for NVIDIA, due to the First Law of CUDA Development, developers haven’t been fazed by the need to learn these idiosyncrasies.

3. Limited Cache Coherency.

Some rules of thumb have been internalized by hardware designers to such a degree that they are not so much sound engineering practices, but religious edicts. One such rule is that caches have to be coherent. All the time. In hardware.

But CUDA is pervaded by violations of this tenet. Device memory is not coherent with host memory. Shared memory effectively resides in a separate address space, so isn’t coherent in the same sense as an L1 cache. Constant and texture memory are not coherent with device memory, and when changes are made to the memory, the illusion of coherence is maintained via software invalidation. As with explicit memory management, developers are willing to treat the lack of cache coherency as a cost of doing business – as long as they get the performance they crave.

4. Limited PC market share.

Discrete GPUs only occupy about 25% of PC market share by unit volume, and NVIDIA competes with AMD in that space. NVIDIA’s limited market share helps explain why CUDA has had limited success achieving developer adoption in packaged PC software, even when there’s a good fit with the software requirements.

Put yourself in the shoes of an engineering director at (say) Adobe. “Port this code to CUDA,” says NVIDIA, “and it will run much faster… on 18% of your potential customers’ machines.” Even that proposition is sketchy when accounting for the costs and benefits of supporting the full range of CUDA GPUs extant.

But for vertical applications (think HPC), CUDA developers build data centers with thousands of identical servers. And for embedded applications (think automotive), every GPU in a given design win has identical properties. In both cases, developers have a fixed hardware target to develop against, and they get a compelling return on the engineering investment of the CUDA port.

In the longer term, companies like Adobe and Autodesk should be able to gain the same benefits by transitioning to cloud-provisioned GPU platforms.

Ten Years Later: Why CUDA Succeeded

CUDA first became available about 10 years ago, so it seems like a good time to take note of its success and reflect on why it has been successful.

1. GPUs are not CPUs.

What I mean by this is not just that you don’t have to recompile your app (this point gets its own bullet later in this article), but that core operating system changes are not needed for GPU support. GPUs are complicated peripherals, but when the rubber meets the road, they are still just peripherals. They hang off the bus, get enumerated by the OS, get a driver loaded, and go. Proponents of competing technologies such as the Cell processor or Larrabee (now Xeon Phi) would have you believe otherwise, but GPUs have been served well by the flexibility and platform portability that comes with being a “dumb peripheral.”

2. GPUs are everywhere.

Jensen Huang has said the GPU had a “day job.” NVIDIA had an established, high-volume market for their ASICs. The overlap in requirements between a big, fast graphics chip and a general-purpose manycore processor was significant, but it wasn’t obvious to all that the incremental cost would be worth it. I personally had lunchtime arguments with senior graphics architects at NVIDIA who didn’t want to spend 10% die area on compute (the estimated hardware cost of adding support for scatter/gather and shared memory) because it would put them at a disadvantage running graphics benchmarks against AMD (at the time, it was known as ATI). Fortunately for NVIDIA, those skeptics were overruled and the business risk turned out to be justified.

Another way to look at it: though NVIDIA was weighing a 10% die area risk, technologies like Cell and Larrabee/Xeon Phi, or companies like Ageia and other coprocessor vendors, were incurring a 100% die area risk. They did not have an established market to fall back on if things didn’t work out.

3. GPUs are compellingly faster than the CPU.

Shortly after one of our first, best customers for CUDA received his first CUDA-capable GPU, he contacted NVIDIA with a question. He had gotten a sample workload ported, and, he said, it looked like it was working. The problem? He wanted to know how it could be so fast!

The senior people at NVIDIA had long known GPU performance was going to be amazing. Shortly after I joined NVIDIA in 2002, I had lunch with a senior NVIDIA architect and asked him what he was working on. “NV50,” he said. (Mind you, this conversation occurred before NV30 had taped out.) “It will unify vertex and pixel shader processing. We’ll have room to build a chip with about a teraFLOPS of processing power, but we’ll spend half the area on graphics so it will have peak performance of about 500 GFLOPS.” Later, in an internal company email, the same architect said NV50 was going to “make the CPU look like a toy.”

His prediction turned out to be amazingly accurate, considering it was made four years and two major architectural revisions in advance. NV50 turned into G80, the first CUDA-capable chip, and had 384 GFLOPS of peak performance – within spitting distance of his casual lunchtime conjecture.

Remember that when CUDA first shipped, Intel’s floating point capabilities were much more limited than they are today. The SIMD width was only 128 bits (Skylake currently supports 512), and Intel had only recently widened the actual execution unit (singular – modern Intel CPUs have multiple SIMD execution units) to a full 128 bits. Before the Core 2 Duo, one generation after another of Intel CPUs had supported SSE as two micro-ops (“high” and “low”) for the 64-bit-wide execution unit, limiting instruction throughput. In fact, CUDA may have prompted Intel to dramatically improve their floating point capabilities.

Today, it is still true that for suitable workloads, GPUs are compellingly faster than CPUs. Intel has doubled the SIMD width in their processors twice, and also doubled the number of SIMD execution units, but in that time, NVIDIA has increased the number of transistors in their “win” GPU by 30x (from 684M to 21B), with a commensurate increase in performance. NVIDIA GPUs, by the way, still benefit from Dennard scaling because they target much lower clock rates than CPUs. In 2006, G80 ran at <600 MHz, while the latest GPU (V100) runs at 1455 MHz. NVIDIA also has led CPU vendors in advancing their instruction set support, being the first to add FP16 and fused multiply-add support. For these reasons, NVIDIA has held off Intel’s attempts to close the performance gap over the last 10 years.

4. CUDA has a low barrier to entry.

On the hardware side, this point goes hand in hand with how the GPUs already had an established, high-volume market. A CUDA GPU could be had for well under $1000, and as an added bonus you got to play World of Warcraft on a badass gaming card. Later, CUDA GPUs found their way into laptops. Still later, CUDA GPUs can be rented on an hourly basis in the cloud with a credit card.

So the barrier to entry to acquire hardware always has been low. The same is true of Intel CPUs – they are inexpensive and everywhere. But unlike Intel, which charges for its vectorizing compilers, NVIDIA wisely chose not to charge for the toolchain. CUDA has always been free to download, and NVIDIA has never charged royalties to use it.

It’s hard to beat free, and when it came to hardware, it was hard to beat a GPU. With such a low barrier to entry, it is no wonder developers flocked to it.

5. CUDA is as easy to program as SSE/AVX.

I devote a whole chapter to this point in The CUDA Handbook, but it bears repeating. The portions of an application that are most amenable to CUDA acceleration are, for the most part, the same as for SIMD instruction set optimization. In either case, only a small portion of the application – certainly less than 10%, and in some applications, as little as 2% – needs to be ported to yield a benefit. So the question becomes, which technology gives the biggest return on the engineering investment?

Let’s pause for a moment to reflect on two things. First, Intel had a 10-year head start on NVIDIA in building compilers for their respective target technologies (SSE versus CUDA). For Intel, that investment was in vectorizing compilers – compilers that examine scalar code and emit executable code that uses SIMD instructions. Second, despite that head start, the investment has delivered a limited return – partly because, as already mentioned, only small parts of an application actually benefit from SIMD optimizations, but also because vectorizing compilers have never fulfilled their promise. See, for example, this GDC 2015 presentation by Andreas Fredriksson. The game development company where he works avoids vectorizing compilers because an innocent-seeming change can cause the vectorization to break – a potentially catastrophic setback when most games have to be done in time for the holiday season (“This is what will happen two days before gold.”) Instead, they use compiler intrinsics: functions with names like _mm_add_ps() that operate on special types with names like __m128. With few exceptions, these functions have direct analogs to machine instructions (in the case of _mm_add_ps(), the SSE instruction is ADDPS). From an engineering standpoint, intrinsics enable developers to take advantage of new instructions without worrying about register allocation, instruction scheduling, or the intricacies of the ABI – a particular challenge on x86-64.
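A flavor of the intrinsics style (the intrinsics are real; the function is an illustrative sketch): adding two float arrays 4 at a time, with the developer handling the scalar cleanup for the edge case.

#include <stddef.h>
#include <xmmintrin.h>

// Add two float arrays with SSE intrinsics. The developer handles
// the 4-at-a-time processing and the scalar cleanup loop.
void addFloats( const float *a, const float *b, float *out, size_t n )
{
    size_t i = 0;
    for ( ; i+4 <= n; i += 4 ) {
        __m128 va = _mm_loadu_ps( a+i );
        __m128 vb = _mm_loadu_ps( b+i );
        _mm_storeu_ps( out+i, _mm_add_ps( va, vb ) );  // compiles to ADDPS
    }
    for ( ; i < n; i++ )            // scalar cleanup for the edge case
        out[i] = a[i] + b[i];
}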

In stark contrast, CUDA lets you write scalar-looking code that alludes to the parallelism by referencing built-in variables such as threadIdx and blockIdx. I’d call the memory management issues a wash – in CUDA, you have to allocate and copy to and from device memory, but SIMD instructions have alignment restrictions and do everything 4 or 8 or 16 things at a time in a way that makes it difficult to deal with edge cases. I admit to being biased, but I have written a great deal of both types of code and I consider CUDA at least as easy to target.
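Compare the CUDA rendering of the same operation – scalar-looking code, with the parallelism implied by threadIdx and blockIdx, and no cleanup loop:

__global__ void addFloats( const float *a, const float *b, float *out, int n )
{
    int i = blockIdx.x*blockDim.x + threadIdx.x;
    if ( i < n )                    // one thread per element
        out[i] = a[i] + b[i];
}

// launch: addFloats<<<(n+255)/256, 256>>>( d_a, d_b, d_out, n );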

6. CUDA has superior performance portability.

Performance portability is the idea that code will not just run correctly, but deliver high performance against a variety of platforms. For CUDA, performance portability within a given GPU generation is a given, as long as applications launch enough thread blocks to saturate the largest GPU. Performance portability across GPU generations is a bit sketchier, but has held up over time. Even features like FMAD (fused multiply-add) were added seamlessly, and always had native compiler support. NVIDIA has changed architectures and instruction sets with high frequency, but masks those architectural differences with a sophisticated mix of driver and compiler software.

On multicore CPUs, developers pursue performance along two axes: multithreading and SIMD. For multithreading, major operating systems have very different operations to manage threads and synchronization. Mutexes, semaphores, and events were all built into Windows; condition variables were in Linux, and added to Windows in Windows Vista. Windows also added reader-writer locks, mutexes that can accommodate multiple threads when the resource is being accessed in a read-only manner. When you add in the instruction-level support for thread synchronization (“interlocked exchange” or “compare and swap” primitives can be used to implement any number of thread synchronization primitives – especially the so-called “lockless” data structures), the number and variety of options for developers is overwhelming. No wonder process-level parallelism (i.e. eschewing threads entirely) has become a popular method of leveraging multicore CPUs!

On the SIMD side, Intel has added instructions about every 2 years, and increased the SIMD width twice since 1999. But software developers can’t immediately use new instructions without qualification. For one thing, since only new CPUs include the new instructions, applications must test which instruction set level is available, and run the corresponding code path. Applications must support “downlevel” hardware that corresponds to the installed base owned by their target users (notably, this calculation is different for a supercomputing data center as opposed to a consumer application such as Photoshop). One interesting data point: CCP, the company that makes the popular online game EVE Online, did not start requiring SSE2 on EVE clients until 2011. SSE2 first became available in 2001!

So for every instruction set innovation – notably AVX, AVX2, and now AVX-512 – new code must be written, along with detection code to ensure the “best” code paths are executed on the various flavors of CPU. If intrinsics are the developer tool of choice, the development burden grows linearly with the number of supported instruction set permutations: if you want both SSE and AVX implementations, you write twice as much code, and so on.

But even that understates the burden of supporting a plethora of instruction sets, because we haven’t yet accounted for the QA burden. The QA department can’t get away with just running the code on CPUs that support all of the available instruction sets; they have to make sure the code is tested on CPUs that don’t support all of the target instruction sets. Otherwise, the QA process will overlook bugs in the detection code – the code that decides which code path to run, depending on CPU capabilities. Unless you are testing on hardware that doesn’t support the latest instructions, an SSE2 instruction (say) may find its way into your SSE code paths. And because newer CPUs also support the older instructions, they will run that buggy code just fine. But on older CPUs, when they encounter the instruction they don’t support, they throw an exception and the application crashes.
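Detection code adds its own surface area to test. A sketch of the dispatch pattern, assuming GCC or Clang on x86 (__builtin_cpu_supports() is the real compiler built-in; the per-ISA implementations are hypothetical):

#include <stddef.h>

void addFloatsAVX( const float*, const float*, float*, size_t );    // hypothetical
void addFloatsSSE( const float*, const float*, float*, size_t );    // hypothetical
void addFloatsScalar( const float*, const float*, float*, size_t ); // hypothetical

// Dispatch to the "best" code path the CPU supports. Every branch
// here is a code path QA must exercise on hardware that lacks the
// newer instructions, or bugs in this logic go undetected.
void addFloatsDispatch( const float *a, const float *b, float *out, size_t n )
{
    if ( __builtin_cpu_supports( "avx" ) )
        addFloatsAVX( a, b, out, n );
    else if ( __builtin_cpu_supports( "sse" ) )
        addFloatsSSE( a, b, out, n );
    else
        addFloatsScalar( a, b, out, n );
}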

Efforts to address the performance portability of multithreading and SIMD have been desultory at best. If you take the intersection of threading primitives across operating systems, you get something that resembles C++’s std::thread – useful only to the simplest of parallel applications. For SIMD, rather than vectorizing compilers, the technologies that offer the best prospect at performance portability are domain-specific languages like Halide – which also has a CUDA implementation.

7. You don’t have to recompile your app.

The siren song of parallel technologies has echoed through the years: “Just recompile your app!” The marketing folks would have you believe that all the latent benefits of parallelism will be laid bare by their magical compilers. The problem is that 95+% of the application won’t benefit at all, so much of that porting effort is for naught. Think about the millions of lines of code in a flagship application from a company like Adobe or Autodesk. Do you really think the engineering manager of such an application is excited at the prospect of having to port and re-test millions of lines of code that implement the user interface, file parsing, and other portions that won’t run any faster? What about interoperability with the installed base of third party plug-ins? The last time mainstream developers undertook full ports of their applications, it was for 64-bit addressing.

With CUDA, developers port the small percentage of an application that can benefit. The rest of the application stays the same. If it runs on systems without CUDA hardware, QA managers have to test both code paths, and make sure to test the variety of CUDA hardware that may run the application. It is nontrivial, but it’s a much smaller pill to swallow than having to recompile the entire application.

 

There you have it. As a final note, notice that whether the list is prioritized from top to bottom or the other way around, CUDA GPUs’ status as a peripheral (not a CPU) is a central reason they have been so successful.