This is the Compute Blade, and I'm test driving it in a four-node cluster:
I'm testing the Dev version, and @Merocle from Uptime Lab sent four Blades, a 3D-printed 4-bay case (a metal 1U rackmount enclosure is in the works), and two fan modules.
He's been testing 40 of these in a rack at JetBrains for months, and they're about to go live on Kickstarter.
But why build a cluster with these Blades? And what good are they if you can't even buy a Compute Module 4 from Raspberry Pi? Do any alternative compute modules work? I'll get to ALL those questions in this blog post.
Or, if you're more into visual learning, check out my video on the Compute Blade:
Compute Blade Overview
Last year I posted a video on an early alpha version of the board. Ivan has redesigned almost everything since then. And it looks gorgeous! The blade has an M.2 slot and is powered via a 1 Gbps PoE port on the front. The Dev model has extras like a TPM module, USB and HDMI ports, and physical switches for WiFi and Bluetooth.
Above the Ethernet port on the front there are a bunch of LEDs, a button, and a couple of neopixels. I'll cover those later.
On the opposite end there's a fan header. There's a basic fan board that just holds a 40mm fan in place, or... if you're lucky like me, you have a one-of-a-kind 'overengineered edition' fan controller (pictured below). It has another Raspberry Pi chip on it—in this case the tiny RP2040 microcontroller—and it measures airflow temperatures and adjusts the fan speeds accordingly. It also has more neopixels on it.
As far as just getting air to flow over the Pi goes, yeah... it's definitely overkill.
Both these fan modules slide into the back of the custom 1U blade chassis, and the Compute Blades slide in the front.
You might've also observed the sleek red heatsinks. They work amazingly well, but take a look underneath—they're probably a nightmare to machine. I'm not sure if the heatsinks will make it to mass production, but they work and look great. The Pis stayed under 42°C after ten minutes of stress-ng on all 16 CPU cores.
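If you want to run the same kind of soak test on your own blades, here's a rough sketch of the idea. It assumes stress-ng is installed and that your OS exposes the SoC temperature at the usual Raspberry Pi OS sysfs path:

```python
# Rough sketch of a ten-minute thermal soak on one blade (assumes stress-ng is
# installed and the standard Raspberry Pi OS thermal sysfs path).
import subprocess
import time

def cpu_temp_c():
    # The SoC reports its temperature in millidegrees Celsius here.
    with open("/sys/class/thermal/thermal_zone0/temp") as f:
        return int(f.read().strip()) / 1000

# Load all four cores on this blade for ten minutes.
stress = subprocess.Popen(["stress-ng", "--cpu", "4", "--timeout", "600s"])

try:
    while stress.poll() is None:
        print(f"CPU temperature: {cpu_temp_c():.1f}°C")
        time.sleep(10)
finally:
    stress.wait()
```

Run it on each node (or push it out with Ansible) and see whether any blade creeps past a temperature you're comfortable with.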
Even without heatsinks, these blades supply plenty of power and cooling for stable overclocking. Ivan's been running and testing forty of them for months in the lab where he works, with no downtime (though one Pi was drowned and did not come back to life).
The TPM and Dev versions both come with an integrated Infineon TPM 2.0 module. TPM stands for Trusted Platform Module, and it can be used for secure embedded computing—especially paired with a Zymbit module, which I'll talk about later. This chip stores encryption keys and secure passwords so someone couldn't steal a blade and get your data.
Ivan went a step further and placed the chip under the Compute Module for better security. Even if someone got physical access to the blade, they couldn't break into the TPM without unplugging the Compute Module. That'd turn off power to the chip and (ideally) lock all your data.
Secure Computing is more complicated than this, and the Raspberry Pi isn't perfect, but the Compute Module does offer some improvements for trusted boot and TPM that I'll touch on more in a future video / blog post.
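If you want to confirm the OS actually sees that TPM, here's a quick check I'd start with. It assumes the SPI TPM device tree overlay (e.g. dtoverlay=tpm-slb9670) is enabled and uses the standard Linux sysfs path:

```python
# Quick sanity check that Linux sees the TPM (standard kernel sysfs path;
# enabling the SPI TPM device tree overlay in /boot/config.txt is not shown).
from pathlib import Path

tpm_class = Path("/sys/class/tpm")
tpms = sorted(tpm_class.glob("tpm[0-9]*")) if tpm_class.exists() else []

if tpms:
    for tpm in tpms:
        print(f"TPM visible as /dev/{tpm.name}")
else:
    print("No TPM detected - check the device tree overlay")
```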
Continuing the theme of turning the Raspberry Pi enterprise-grade, these blades also have two features that fit right in with other racked equipment:
The pull tab at the front is hinged so it can press the front button. And the LEDs indicate SSD activity, power, and Pi activity, plus there are front and top-mounted neopixels you can program to do whatever you want. You can also turn off all the LEDs in software if you want.
This demo Python script displays CPU temperature using different colors, and allows the LED to be used for locating the blade. If you have a bunch of these in a rack somewhere, finding a particular Blade might be tricky. So you can trigger the neopixel, then when you find the right Blade, press the button to dismiss it.
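I won't paste the whole demo here, but a minimal sketch of the same idea might look something like this, using the Adafruit CircuitPython NeoPixel library and gpiozero. The pin numbers and the locate-flag file are placeholders I made up, not the Blade's documented pinout:

```python
# Minimal sketch of a "temperature + locate" neopixel, in the spirit of the
# demo script. Pin numbers and the flag file are placeholders, NOT the Blade's
# documented pinout.
import os
import time
import board                  # Adafruit Blinka
import neopixel               # pip install adafruit-circuitpython-neopixel
from gpiozero import Button   # pip install gpiozero

LOCATE_FLAG = "/tmp/locate-blade"  # touch this file remotely to find the blade
pixel = neopixel.NeoPixel(board.D18, 1, brightness=0.2)  # assumed data pin
front_button = Button(20)                                # assumed button GPIO

def cpu_temp_c():
    with open("/sys/class/thermal/thermal_zone0/temp") as f:
        return int(f.read()) / 1000

# Pressing the front button dismisses the locate beacon.
front_button.when_pressed = lambda: os.path.exists(LOCATE_FLAG) and os.remove(LOCATE_FLAG)

while True:
    if os.path.exists(LOCATE_FLAG):
        pixel[0] = (0, 0, 255)  # blue "find me" beacon
    else:
        # Shade from green (cool) to red (hot) between 40°C and 80°C.
        ratio = min(max((cpu_temp_c() - 40) / 40, 0), 1)
        pixel[0] = (int(255 * ratio), int(255 * (1 - ratio)), 0)
    time.sleep(1)
```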
Why Compute Blade?
So there's more to this board than meets the eye, but... why? What would you use these things for?
Ivan's original motivation was to get a bunch of ARM computers running for Continuous Integration testing at JetBrains. They build tons of software for developers, and they need to test it all on Macs, PCs, and yes, even Raspberry Pis!
He's running forty Blades in 2U. That's:
- 160 ARM cores
- 320 GB of RAM
- (up to) 320 terabytes of flash storage
...in 2U of rackspace.
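If you want to sanity-check that, it's just the per-blade CM4 specs multiplied out (assuming 8 GB modules and 8 TB NVMe drives, the biggest you can reasonably buy):

```python
# Back-of-the-envelope math for a 40-blade 2U setup (assumes 8 GB CM4s and
# 8 TB M.2 NVMe drives).
blades = 40
print(blades * 4, "ARM cores")    # 4 cores per CM4 -> 160
print(blades * 8, "GB of RAM")    # 8 GB per CM4 -> 320
print(blades * 8, "TB of flash")  # 8 TB NVMe per blade -> 320
```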
That's actually useful for some people. Like if you want a relatively low-power ARM cluster for testing or research. Considering they're only burning a few watts each, you could have 160 ARM cores under 200 watts in 2U, with 40 NVMe drives!
Another advantage of running multiple smaller machines instead of a few large ones is resource isolation. If you host lots of small apps, it's more secure to isolate them on their own hardware. Many modern security problems are due to people running more and more services on one system, sharing the same memory and CPU.
For me, these blades make learning easier. I test open source projects like Kubernetes and Drupal. K3s, in particular, runs great on Pi clusters, and I have a whole open source pi-cluster setup that I've been working on for years. It has built-in monitoring so you can see your cluster health in real-time, and there are example Drupal and database deployments built in.
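If you'd rather poke at cluster health from code instead of a dashboard, here's a sketch using the official Kubernetes Python client against a K3s cluster. It assumes you've copied the K3s kubeconfig (normally /etc/rancher/k3s/k3s.yaml) to ~/.kube/config on your workstation:

```python
# Sketch: list node health on a K3s Pi cluster with the official Kubernetes
# Python client (pip install kubernetes).
from kubernetes import client, config

config.load_kube_config()  # reads ~/.kube/config
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    ready = next(
        (c.status for c in node.status.conditions if c.type == "Ready"),
        "Unknown",
    )
    arch = node.status.node_info.architecture  # should be arm64 on a CM4
    print(f"{node.metadata.name:20} ready={ready:6} arch={arch}")
```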
I've also tested clustering software like Ceph, which I also have in that pi-cluster project, so go check that out on GitHub even if you just have regular old Pis.
It's just more fun to do this stuff with physical computers, running right next to me on my desk.
And sure, I could run some VMs on a PC, but that doesn't give me bare metal control and physical networking. And performance per watt isn't bad at all if you're running certain workloads like web services. My cluster uses less than 30 watts running four NVMe drives under 100% load, and it's quietly sitting here on my desk.
But just running a bunch of Pis in a cluster is old news. Tons of people are running Pi clusters. The Blade, though? It takes Pi clustering up a notch. Ivan sent over some other accessories he's been testing.
This is a ZYMKEY 4, which is an additional hardware security module that plugs into the partial GPIO header on the blade.
The ZYMKEY has encrypted storage, tamper sensors, and a real-time clock built in, and it turns the Blade into a fully secure compute node.
Ivan also made a custom board using Zymbit's HSM4 security module. Using that, he made a demo where, if you pull out the Blade, it can react by doing things like automatically destroying sensitive data.
Other Blades
The rest of the world isn't standing still, though. Pine64 launched their own blade, too. I haven't had time to fully test it out yet, but I did throw both the SOQuartz and a Compute Module 4 on it to see how it performs.
The integrated PoE circuit had a bit of coil whine sometimes, and none of the images I downloaded for the SOQuartz would give me working HDMI or NVMe yet, so I swapped over to a Compute Module 4. My eMMC version worked fine, with HDMI, networking, and NVMe all present. But a Lite CM4 didn't work; it would just hang at the rainbow screen when it started booting.
So Pine64's Blade seems functional, but it's definitely more barebones and doesn't seem to be fully supported yet. If the Compute Blade gives you a slice of Pi, the SOQuartz blade feels like it came out a little... half-baked.
Other CM4-compatible Modules
And I know how hard it is to find a Raspberry Pi right now. I get it. Just looking at rpilocator.com, it's pretty bleak.
But there are four other Compute Module clones you can buy now. All of them say they're pin-compatible with the Compute Module 4.
And I have three of them to test. I actually ordered a BPI CM4, too, but it's still stuck somewhere between China and my house.
But I do have these other clones: BigTreeTech's CB1, Pine64's SOQuartz, and Radxa's CM3. They're all meant to be drop-in replacements, though the CB1 doesn't support PCI Express, so I didn't test it on this board. Check out my Live stream from October, where I tested out the CB1 and talked more about the Pi shortage.
But the SOQuartz does have PCI Express, so I tested it. I actually did a whole video on it and the CM3 over a year ago! Back then, it was hard to even get the boards to boot! Have things improved since then?
Well... a little. A lotta Raspberry Pi clones take the approach of 'throw hardware at the wall, and see what sticks.'
But if spec sheets were everything, Raspberry Pi would've been just a tiny footnote in computing history. The big difference is in support, and Raspberry Pi has that in spades, especially with their Raspberry Pi OS. Even Orange Pi started getting in that game with their own custom OS last year.
If I head over to Pine64's download page for the SOQuartz, it's a mess. There are six different OSes listed, and the page doesn't recommend any. In fact, it says right on the page the first three images don't even work!
I get that Pine64 is community-based, but anyone besides a developer who comes into the Pine64 ecosystem and expects to be productive is in for a rough ride.
That said, after reading this blog post, it looked like I might have the best experience with Armbian. So I looked on Armbian's website, and to my surprise, the SOQuartz wasn't even listed. So I kept searching and found that for some reason the recommended Armbian download was hosted on a forum (www.t95plus.com) that wasn't even related to either Pine64 or Armbian.
It's not even apparent how that image was built! It felt sketchy, but I tried downloading it anyway. And... it wouldn't finish. It got to 250 MB and stalled, and a few more attempts didn't get any further.
So I switched gears and tested Plebian Linux instead.
Plebian's goal is to get vanilla Linux running without any hacky Rockchip patches. This time the download worked, and it actually booted right up, which was a nice surprise at this point. But it doesn't support HDMI or WiFi yet. And even though I could see my NVMe drive with lspci, it seems like the OS can't use it.
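If you hit something similar, a quick way to see whether the kernel actually bound the nvme driver to anything (standard Linux sysfs and /dev paths; your image may differ) is something like:

```python
# Check whether the nvme driver bound to a controller and whether a block
# device showed up (standard Linux sysfs and /dev paths).
from pathlib import Path

nvme_class = Path("/sys/class/nvme")
controllers = sorted(nvme_class.glob("nvme*")) if nvme_class.exists() else []
block_devs = sorted(Path("/dev").glob("nvme*n*"))

print("NVMe controllers bound by the driver:", [c.name for c in controllers] or "none")
print("NVMe block devices:", [b.name for b in block_devs] or "none")
# Visible in lspci but nothing here often means the kernel is missing the NVMe
# driver (CONFIG_BLK_DEV_NVME) or the PCIe link never came up properly.
```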
So it's a bit of a mess, but at least I can say the SOQuartz does run on the Compute Blade; it's just a matter of software support.
The Radxa CM3 is still giving me trouble flashing an OS, so I couldn't test it out yet. Maybe I'm just unlucky, but it's definitely not all rainbows and butterflies with CM4 clones.
If you do still wanna use one, splurge on the Dev version of the Compute Blade. microSD and HDMI access are invaluable for debugging.
So for production use, I don't recommend clones yet. They're slower, and they don't work out of the box like a Pi does. Even though it pains me to say this, hold out for Compute Module 4s. Raspberry Pi said stock should improve through 2023—let's hope that's true.
And I asked Ivan if there was any way he could get a batch of CM4s to sell on Kickstarter for early backers, but he said it would be months, even with a bulk order.
How to buy a Compute Blade (or 20)
Regardless, the Compute Blade is a great way to run Pis in clusters—in fact it's my favorite so far. It's satisfying sliding these things in and watching them run in a rack. Ivan's working on a metal 1U rackmount enclosure too, but I don't have a clue how much it would cost.
If you're just tinkering with some Raspberry Pis, the price is a bit steep. But if you have specific needs for dense ARM compute nodes, or you just want the coolest Pi board on the market, the Compute Blade is worth a look.
It's been fun watching the design of these blades evolve from the first proof-of-concept version all the way to production, watching Ivan tweak every single part of this board until it became what it is today.
It'll launch on Kickstarter this week, with three models:
- A basic version for $60
- A TPM version for $69
- and the Dev version for $90
...though those prices aren't a hundred percent final yet. Refer to the Compute Blade Kickstarter for all the details, or browse the Compute Blade website for even more, including a build log!
Comments
Thanks Jeff, really great information and it's really great to see you recovering after your surgery.
I believe there is a typo in the last header. "Hot" should be "How"
Oops! I fixed that, thanks.
Those board prices don't include the CM4 in the middle? Ouch. As a compute product it seems underpowered, though I guess there is a niche for it.
Jeff, I wish it was easier to buy into Pi-Clustering. I've got quite a bit of x86 hardware in my lab, but I'm the kind of person who likes to have their hands in all of the cookie jars.
I'd absolutely buy into the CM4 platform w/4 nodes, but seeing how impossible it is to acquire the CM4 boards has kinda left me high and dry. Hopefully things will turn around soon, and pricing will come down.
By chance, do you know of any affordable ARM-based appliances out there in the wild that are for sale? I'd love to break into the architecture, but I don't want to be bound by cloud providers like Oracle with Free-tiers and etc. I'd want to run it local, ya know?
Very nice article, thanks. =)
Hi, I'm the Plebian guy. Both NVMe and HDMI should definitely work on the SOQuartz Blade. I'll re-test it again when I get some sleep but for now make sure you use a monitor that can accept a 1080p mode, there's a problem in mainline kernels with the screen resolutions it accepts. (We're getting 4K fixed as we speak, just hasn't been merged yet) As for NVMe, that should Just Work, but I can look into it again. If it doesn't work, please open an issue on the Plebian github with dmesg output, though I understand if you don't care enough to re-test it and do that.
As for why the wiki says the images don't work, that's because some frustrated user error type of person edited that in. No clue if it's actually true, I can't be bothered to check those images, I already do enough free work as it is.
Thank you for your work! The entire experience with Plebian was a lot nicer than most other distros I've tested for these SBCs. So keep it up :)
It's so frustrating that for a year now it's been "No raspi availability, oh the clones don't work, wait for a Pi." At some point we have to acknowledge the reality that most of the cool stuff Jeff shows is vaporware most of us will never be able to use.
Wooo this is so cool! I am part of the Scientific Python team and I might order some to explore and set up a CI service we could use for the projects. If it's good for JetBrains, should be good for us too. Also cannot be worse than relying on flaky GitHub Actions, Azure and other providers haha.
I have recently been considering moving my old 2xE5 Xeon server running Proxmox to an ARM64 platform. I still feel like a full-size ATX mobo and compatible ARM64 CPU would be more what I'm looking for, though. Just sick of the amount of power I am using for what I run.