Why you should care about RISC-V
If you haven’t heard of RISC-V (pronounced “risk five”), it’s an open (open-source, open-hardware, open-design) processor instruction set architecture and family of core designs, created at the University of California, Berkeley. The specification defines 32-bit, 64-bit, and 128-bit variants, although only 32- and 64-bit designs exist in practice. The news is full of stories about major hardware manufacturers (Western Digital, NVIDIA) evaluating or adopting RISC-V cores for their products.
But why should you care? You can’t just go to the local electronics boutique and buy a RISC-V laptop or blade server. RISC-V commodity hardware is either scarce or expensive, and the ecosystem is still early in its development, not yet ready for enterprise workloads. Yet it’s still something the average professional should be aware of, for a number of reasons.
By now everyone has heard about the Meltdown and Spectre vulnerabilities, and the related “bugs” users have been finding in Intel and AMD processors. This post is not about how hard CPU design is (it’s hard, even harder than you realize). The fear created by these bugs was not just that there was a problem in the design, but that users of these chips had no insight into how these “black boxes” worked, no way to review code that was outside their control, and no way to audit these processors for other security issues. We’re at the mercy of the manufacturers to assure us there are no more bugs left (ha!).
The advantage of an open core here is that a company can audit the internal workings of a processor, at least in theory. If a bug is found by one chip manufacturer using a RISC-V core, the fix can be shared with other manufacturers. And certainly, if there are bugs to be exploited, the black hats and white hats will be able to find them (and fix them) that much sooner.
And what if you do want to try a RISC-V system today? Support for 64-bit RISC-V cores with the common extensions (M, A, F, and D: integer multiply/divide, atomic operations, and single- and double-precision floating point, which together with the base integer ISA make up the ‘G’ set) was added to the GNU C Library (glibc) in version 2.27, which means (for example) that Fedora 28 contains RISC-V support. Bootable images are available, which run in the QEMU emulator (standard in Fedora) or on real hardware (such as the SiFive HiFive Unleashed board).
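Those extension letters follow a mechanical naming scheme (an ISA string such as “rv64gc”), which you can decode programmatically. Below is a simplified sketch of that convention; the `decode_isa` helper is hypothetical, not part of any toolchain, and it ignores version suffixes and the longer Z*/S* extension names:

```python
import re

# Single-letter RISC-V extension meanings, per the ISA naming convention.
EXTENSIONS = {
    "i": "base integer ISA",
    "m": "integer multiply/divide",
    "a": "atomic operations",
    "f": "single-precision floating point",
    "d": "double-precision floating point",
    "c": "compressed instructions",
}

def decode_isa(isa):
    """Decode a simple ISA string like 'rv64gc' into (register width, extensions).

    Hypothetical illustration of the naming scheme only: real ISA strings
    may also carry version numbers and named (Z*/S*) extensions.
    """
    m = re.match(r"rv(\d+)([a-z]+)$", isa.lower())
    if not m:
        raise ValueError("expected an ISA string like 'rv64gc'")
    xlen = int(m.group(1))  # register width: 32, 64, or 128
    # 'g' is shorthand for the general-purpose set IMAFD
    # (newer spec revisions also fold in Zicsr/Zifencei).
    letters = m.group(2).replace("g", "imafd")
    return xlen, [EXTENSIONS[ch] for ch in letters]

xlen, exts = decode_isa("rv64gc")
print(xlen)   # 64
print(exts)   # base integer ISA ... compressed instructions
```

So the Fedora baseline described above (64-bit ‘G’ plus compressed instructions) is the string “rv64gc” you will see in toolchain and kernel configuration.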
A team of volunteers (of which I am one) is currently working on building the latest Fedora packages for RISC-V on a large number of emulators and a small number of hardware systems, such as this one (mine):
But are there downsides to choosing an open core? Well, there are trade-offs that anyone should be aware of when choosing any core. Here are a few:
- More flexibility for you. If you need to integrate a core into a custom ASIC for your hardware, with custom peripherals, an open core gives you a good base core to work from. However…
- More work for you. A core is just a core; you need to add everything else (serial ports, DDR interfaces) yourself.
- A wider range of options and configurations. You get to decide which extensions and peripherals your core will have, which minimizes space and cost of each implementation. However…
- A fragmented ecosystem is possible. If you customize your core too much, you might need to customize the tools to match, and sharing code and designs becomes more complicated. Distributions like Fedora standardize on a set of common extensions that manufacturers can include to ensure compatibility.
- An open design means anyone can audit the design for security. However…
- An open design means everyone must audit the design for security. Perhaps an ecosystem for audits and auditing will arise.
- An open design can be cheaper on a per-core basis, due to the lack of licensing costs and freely available tooling. However…
- An open design can be more expensive due to the lack of a robust ecosystem to drive engineering costs down.
So, like all things engineering… YMMV.
In summary… any time something new comes along, in this case a new processor core and a new way of thinking about the intellectual property behind it, users get more choices about where they want to put their efforts, resources, and risks. For me, having (and supporting) a new architecture gives me an opportunity to hone my skills as well as revisit old decisions about how platforms can be used portably.