CUDA first became available about 10 years ago, so it seems like a good time to take note of its success and reflect on why it has been successful.
1. GPUs are not CPUs.
What I mean by this is not just that you don’t have to recompile your app (this point gets its own bullet later in this article), but that core operating system changes are not needed for GPU support. GPUs are complicated peripherals, but when the rubber meets the road, they are still just peripherals. They hang off the bus, get enumerated by the OS, get a driver loaded, and go. Proponents of competing technologies such as the Cell processor or Larrabee (now Xeon Phi) would have you believe otherwise, but GPUs have been served well by the flexibility and platform portability that comes with being a “dumb peripheral.”
2. GPUs are everywhere.
Jensen Huang has said the GPU had a “day job.” NVIDIA had an established, high-volume market for their ASICs. The overlap in requirements between a big, fast graphics chip and a general-purpose manycore processor was significant, but it wasn’t obvious to all that the incremental cost would be worth it. I personally had lunchtime arguments with senior graphics architects at NVIDIA who didn’t want to spend 10% die area on compute (the estimated hardware cost of adding support for scatter/gather and shared memory) because it would put them at a disadvantage running graphics benchmarks against AMD (at the time, it was known as ATI). Fortunately for NVIDIA, those skeptics were overruled and the business risk turned out to be justified.
Another way to look at it: though NVIDIA was weighing a 10% die area risk, technologies like Cell and Larrabee/Xeon Phi, or companies like Ageia and other coprocessor vendors, were incurring a 100% die area risk. They did not have an established market to fall back on if things didn’t work out.
3. GPUs are compellingly faster than the CPU.
Shortly after one of our first, best customers for CUDA received his first CUDA-capable GPU, he contacted NVIDIA with a question. He had gotten a sample workload ported, and, he said, it looked like it was working. The problem? He wanted to know how it could be so fast!
The senior people at NVIDIA had long known GPU performance was going to be amazing. Shortly after I joined NVIDIA in 2002, I had lunch with a senior NVIDIA architect and asked him what he was working on. “NV50,” he said. (Mind you, this conversation occurred before NV30 had taped out.) “It will unify vertex and pixel shader processing. We’ll have room to build a chip with about a teraFLOPS of processing power, but we’ll spend half the area on graphics so it will have peak performance of about 500 GFLOPS.” Later, in an internal company email, the same architect said NV50 was going to “make the CPU look like a toy.”
His prediction turned out to be amazingly accurate, considering it was made four years and two major architectural revisions in advance. NV50 turned into G80, the first CUDA-capable chip, and had 384 GFLOPS of peak performance – within spitting distance of his casual lunchtime conjecture.
Remember that when CUDA first shipped, Intel’s floating point capabilities were much more limited than they are today. The SIMD width was only 128 bits (Skylake currently supports 512 bits), and Intel had only recently widened the actual execution unit (singular – modern Intel CPUs have multiple SIMD execution units) to a full 128 bits. Before the Core 2 Duo, one generation of Intel CPUs after another had executed SSE instructions as two micro-ops (“high” and “low”) on a 64-bit-wide execution unit, limiting instruction throughput. In fact, CUDA may have prompted Intel to dramatically improve their floating point capabilities.
Today, it is still true that for suitable workloads, GPUs are compellingly faster than CPUs. Intel has doubled the SIMD width in their processors twice, and also doubled the number of SIMD execution units, but in that time, NVIDIA has increased the number of transistors in their “win” GPU by 30x (from 684M to 21B), with a commensurate increase in performance. NVIDIA GPUs, by the way, still benefit from Dennard scaling because they target much lower clock rates than CPUs. In 2006, G80 ran at <600 MHz, while the latest GPU (V100) runs at 1455 MHz. NVIDIA also has led CPU vendors in advancing their instruction set support, being the first to add FP16 and fused multiply-add support. For these reasons, NVIDIA has held off Intel’s attempts to close the performance gap over the last 10 years.
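As a small illustration of that instruction set support, here is a hedged sketch of two trivial kernels: one using the single-precision fused multiply-add (fmaf()), and one using the packed half-precision FMA intrinsic __hfma2() from cuda_fp16.h, which requires compute capability 5.3 or later. The kernel names are made up.

#include <cuda_fp16.h>

// Single precision: fmaf() maps to one fused multiply-add instruction.
__global__ void saxpy(float a, const float *x, const float *y,
                      float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = fmaf(a, x[i], y[i]);     // a*x + y with a single rounding
}

// Half precision: __hfma2() performs two FP16 fused multiply-adds at once
// (compute capability 5.3 and later).
__global__ void haxpy(__half2 a, const __half2 *x, const __half2 *y,
                      __half2 *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = __hfma2(a, x[i], y[i]);
}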
4. CUDA has a low barrier to entry.
On the hardware side, this point goes hand in hand with the fact that GPUs already had an established, high-volume market. A CUDA GPU could be had for well under $1000, and as an added bonus you got to play World of Warcraft on a badass gaming card. Later, CUDA GPUs found their way into laptops. Still later, CUDA GPUs could be rented by the hour in the cloud with a credit card.
So the barrier to entry to acquire the hardware has always been low. The same is true of Intel CPUs, of course – they are inexpensive and everywhere. But unlike Intel, which charges for its vectorizing compilers, NVIDIA wisely chose not to charge for the toolchain. CUDA has always been free to download, and NVIDIA has never charged royalties to use it.
It’s hard to beat free, and when it came to hardware, it was hard to beat a GPU. With such a low barrier to entry, it is no wonder developers flocked to it.
5. CUDA is as easy to program as SSE/AVX.
I devote a whole chapter to this point in The CUDA Handbook, but it bears repeating. The portions of an application that are most amenable to CUDA acceleration are, for the most part, the same as for SIMD instruction set optimization. In either case, only a small portion of the application – certainly less than 10%, and in some applications, as little as 2% – needs to be ported to yield a benefit. So the question becomes, which technology gives the biggest return on the engineering investment?
Let’s pause for a moment to reflect on two things. First, Intel had a 10-year head start on NVIDIA in building compilers for their respective target technologies (SSE versus CUDA). For Intel, that investment was in vectorizing compilers – compilers that examine scalar code and emit executable code that uses SIMD instructions. Second, despite that head start, that investment has delivered a limited return – partly because, as already mentioned, only small parts of an application actually benefit from SIMD optimizations, but also because vectorizing compilers have never fulfilled their promise. See, for example, this GDC 2015 presentation by Andreas Fredriksson. The game development company where he works avoids vectorizing compilers because an innocent-seeming change can cause the vectorization to break – a potentially catastrophic setback when most games have to be done in time for the holiday season (“This is what will happen two days before gold.”). Instead, they use compiler intrinsics: functions with names like _mm_add_ps() that operate on special types with names like __m128. With few exceptions, these functions have direct analogs to machine instructions (in the case of _mm_add_ps(), the SSE instruction is ADDPS). From an engineering standpoint, intrinsics enable developers to take advantage of the new instructions without worrying about register allocation, instruction scheduling, or the intricacies of the ABI (an especial challenge on x86-64).
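To make that concrete, here is a minimal sketch of what intrinsics code looks like – a hypothetical routine that adds two float arrays four elements at a time with _mm_add_ps(). It assumes the element count is a multiple of 4 and the pointers are 16-byte aligned; handling the leftover elements is exactly the kind of edge case discussed below.

#include <xmmintrin.h>   // SSE intrinsics: __m128, _mm_add_ps(), etc.
#include <cstddef>

// Hypothetical example: out[i] = a[i] + b[i], four floats per iteration.
// Assumes n is a multiple of 4 and all pointers are 16-byte aligned.
void add_arrays_sse(const float *a, const float *b, float *out, size_t n)
{
    for (size_t i = 0; i < n; i += 4) {
        __m128 va   = _mm_load_ps(a + i);    // MOVAPS
        __m128 vb   = _mm_load_ps(b + i);    // MOVAPS
        __m128 vsum = _mm_add_ps(va, vb);    // ADDPS
        _mm_store_ps(out + i, vsum);         // MOVAPS
    }
}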
In stark contrast, CUDA lets you write scalar-looking code that alludes to the parallelism by referencing built-in variables such as threadIdx and blockIdx. I’d call the memory management issues a wash – in CUDA, you have to allocate and copy to and from device memory, but SIMD instructions have alignment restrictions and do everything 4 or 8 or 16 things at a time in a way that makes it difficult to deal with edge cases. I admit to being biased, but I have written a great deal of both types of code and I consider CUDA at least as easy to target.
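For comparison, here is the same operation sketched in CUDA. The kernel body is scalar-looking code – one element per thread, with the parallelism expressed through threadIdx and blockIdx – and the host function below it shows the allocate/copy/launch/copy-back pattern mentioned above. The function names are made up for illustration.

#include <cuda_runtime.h>

// One element per thread; the grid supplies the parallelism.
__global__ void add_arrays(const float *a, const float *b, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                       // scalar-looking edge-case handling
        out[i] = a[i] + b[i];
}

// Host side: allocate device memory, copy inputs, launch, copy result back.
void add_arrays_cuda(const float *a, const float *b, float *out, int n)
{
    float *dA, *dB, *dOut;
    size_t bytes = n * sizeof(float);
    cudaMalloc(&dA, bytes);
    cudaMalloc(&dB, bytes);
    cudaMalloc(&dOut, bytes);
    cudaMemcpy(dA, a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, b, bytes, cudaMemcpyHostToDevice);
    add_arrays<<<(n + 255) / 256, 256>>>(dA, dB, dOut, n);
    cudaMemcpy(out, dOut, bytes, cudaMemcpyDeviceToHost);
    cudaFree(dA);  cudaFree(dB);  cudaFree(dOut);
}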
6. CUDA has superior performance portability.
Performance portability is the idea that code will not just run correctly, but deliver high performance across a variety of platforms. For CUDA, performance portability within a given GPU generation is a given, as long as applications launch enough thread blocks to saturate the largest GPU. Performance portability across GPU generations is a bit sketchier, but it has held up over time. Even features like FMAD (fused multiply-add) were added seamlessly, and always had native compiler support. NVIDIA has changed architectures and instruction sets frequently, but masks those architectural differences with a sophisticated mix of driver and compiler software.
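What “enough thread blocks” means in practice is often handled with the grid-stride loop idiom, sketched below with a hypothetical kernel: the same code saturates a small laptop GPU or a large data-center GPU, because the launch sizes the grid to whatever device it finds.

#include <cuda_runtime.h>
#include <algorithm>

// Grid-stride loop: each thread handles multiple elements when the grid
// is smaller than the problem, so one kernel serves GPUs of any size.
__global__ void scale_array(float *data, float alpha, int n)
{
    for (int i = blockIdx.x * blockDim.x + threadIdx.x;
         i < n;
         i += blockDim.x * gridDim.x)
        data[i] *= alpha;
}

void launch_scale(float *dData, float alpha, int n)
{
    int device = 0, smCount = 0;
    cudaGetDevice(&device);
    cudaDeviceGetAttribute(&smCount, cudaDevAttrMultiProcessorCount, device);
    // Enough blocks to cover the data, but no more than a generous
    // multiple of the multiprocessor count of the GPU that is present.
    int blocks = std::min((n + 255) / 256, smCount * 32);
    scale_array<<<blocks, 256>>>(dData, alpha, n);
}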
On multicore CPUs, developers pursue performance along two axes: multithreading and SIMD. For multithreading, major operating systems offer very different primitives to manage threads and synchronization. Mutexes, semaphores, and events were all built into Windows; condition variables were native to Linux and were added to Windows in Windows Vista. Windows also added reader-writer locks, which let multiple threads access a resource concurrently as long as they are only reading it. When you add in the instruction-level support for thread synchronization (“interlocked exchange” or “compare and swap” instructions can be used to implement any number of thread synchronization primitives – especially the so-called “lockless” data structures), the number and variety of options for developers are overwhelming. No wonder process-level parallelism (i.e. eschewing threads entirely) has become a popular method of leveraging multicore CPUs!
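As one illustration of how much can be built on those instruction-level primitives, here is a deliberately naive sketch of a spinlock implemented with nothing but compare-and-swap, written against C++’s portable std::atomic (real code would add backoff, or simply use the platform’s native lock):

#include <atomic>

// Naive spinlock built on compare-and-swap alone; for illustration only.
class SpinLock {
    std::atomic<int> state{0};   // 0 = unlocked, 1 = locked
public:
    void lock() {
        int expected = 0;
        // Try to change 0 -> 1; on failure, reset expected and spin.
        while (!state.compare_exchange_weak(expected, 1,
                                            std::memory_order_acquire))
            expected = 0;
    }
    void unlock() { state.store(0, std::memory_order_release); }
};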
On the SIMD side, Intel has added instructions about every 2 years, and increased the SIMD width twice since 1999. But software developers can’t immediately use new instructions without qualification. For one thing, since only new CPUs include the new instructions, applications must test which instruction set level is available, and run the corresponding code path. Applications must support “downlevel” hardware that corresponds to the installed base owned by their target users (notably, this calculation is different for a supercomputing data center as opposed to a consumer application such as Photoshop). One interesting data point: CCP, the company that makes the popular online game EVE Online, did not start requiring SSE2 on EVE clients until 2011. SSE2 first became available in 2001!
So for every instruction set innovation – notably AVX, AVX2, and now AVX-512 – new code must be written, along with detection code to ensure the “best” code paths are executed on the various flavors of CPU. If intrinsics are the developer tool of choice, the development burden grows linearly with the number of supported instruction set permutations: if you want both SSE and AVX implementations, you write twice as much code, and so on. But even that understates the burden of supporting a plethora of instruction sets, because we haven’t yet accounted for the QA burden. The QA department can’t get away with just running the code on CPUs that support all of the available instruction sets; they have to make sure the code is tested on CPUs that don’t support all of the target instruction sets. Otherwise, the QA process will overlook bugs in the detection code – the code that decides which code path to run, depending on CPU capabilities. Unless you test on hardware that doesn’t support the latest instructions, an SSE2 instruction (say) may find its way into your SSE code paths. Because newer CPUs also support the older instructions, they will run that buggy code just fine; but when an older CPU encounters an instruction it doesn’t support, it faults and the application crashes.
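A sketch of what that detection code might look like, using GCC/Clang’s __builtin_cpu_supports() builtin (the add_arrays_* routines are hypothetical, each compiled in its own translation unit with the matching -m flag):

#include <cstddef>

// Hypothetical ISA-specific implementations of the same routine.
void add_arrays_avx2(const float *a, const float *b, float *out, size_t n);
void add_arrays_sse2(const float *a, const float *b, float *out, size_t n);
void add_arrays_scalar(const float *a, const float *b, float *out, size_t n);

// Runtime dispatch: pick the best code path the CPU actually supports.
// (MSVC code would use __cpuid() instead of __builtin_cpu_supports().)
void add_arrays_dispatch(const float *a, const float *b, float *out, size_t n)
{
    if (__builtin_cpu_supports("avx2"))
        add_arrays_avx2(a, b, out, n);
    else if (__builtin_cpu_supports("sse2"))
        add_arrays_sse2(a, b, out, n);
    else
        add_arrays_scalar(a, b, out, n);
}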
Efforts to address the performance portability of multithreading and SIMD have been desultory at best. If you take the intersection of threading primitives across operating systems, you get something that resembles C++’s std::thread – useful only to the simplest of parallel applications. For SIMD, rather than vectorizing compilers, the technologies that offer the best prospects for performance portability are domain-specific languages like Halide – which, notably, also has a CUDA backend.
7. You don’t have to recompile your app.
The siren song of parallel technologies has echoed through the years: “Just recompile your app!” The marketing folks would have you believe that all the latent benefits of parallelism will be laid bare by their magical compilers. The problem is that 95+% of the application won’t benefit at all, so much of that porting effort is for naught. Think about the millions of lines of code in a flagship application from a company like Adobe or Autodesk. Do you really think the engineering manager of such an application is excited at the prospect of having to port and re-test millions of lines of code that implement the user interface, file parsing, and other portions that won’t run any faster? What about interoperability with the installed base of third party plug-ins? The last time mainstream developers undertook full ports of their applications, it was for 64-bit addressing.
With CUDA, developers port the small percentage of an application that can benefit. The rest of the application stays the same. If the application must also run on systems without CUDA hardware, QA managers have to test both code paths, and make sure to test the variety of CUDA hardware that may run the application. That is nontrivial, but it’s a much smaller pill to swallow than having to recompile the entire application.
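A sketch of that two-path structure: at startup, the CUDA runtime’s cudaGetDeviceCount() tells you whether a usable CUDA device (and driver) is present, and the application picks its path accordingly. The worker routines here are hypothetical.

#include <cuda_runtime.h>

// Hypothetical worker routines: one CUDA-accelerated, one plain CPU.
void process_on_gpu(const float *in, float *out, int n);
void process_on_cpu(const float *in, float *out, int n);

void process(const float *in, float *out, int n)
{
    int deviceCount = 0;
    cudaError_t err = cudaGetDeviceCount(&deviceCount);

    // Fall back to the CPU path if no CUDA device or driver is present.
    if (err == cudaSuccess && deviceCount > 0)
        process_on_gpu(in, out, n);
    else
        process_on_cpu(in, out, n);
}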
There you have it. As a final note, notice that whether the list is prioritized from top to bottom or the other way around, CUDA GPUs’ status as a peripheral (not a CPU) is a central reason they have been so successful.