Building a tiny 6-drive M.2 NAS with the Rock 5 model B

As promised in my video comparing SilverTip Lab's DIY Pocket NAS (express your interest here) to the ASUSTOR Flashstor 12 Pro, this blog post outlines how I built a 6-drive M.2 NAS with the Rock 5 model B.

The Rockchip RK3588 SoC on the Rock 5 packs an 8-core CPU (4x A76, 4x A55, in a 'big.LITTLE' configuration). This SoC powers a PCIe Gen 3 x4 M.2 slot on the back, which is used in this tiny 6-drive design to make a compact, but fast, all-flash NAS:

6-bay Rock 5 NAS

Pair that with the built-in 2.5 Gbps Ethernet port on the Rock 5, and... could this little package compete against commercial offerings like those from QNAP and ASUSTOR? It's certainly a lot more compact:

ASUSTOR 12-bay M.2 Flashstor NAS with Rock 5 model B compact 6-bay SATA NAS

Watch the video linked at the top of this post to find out. And if you're interested in a Pocket NAS-style device (the one I tested is just a prototype), express your interest here!

The rest of this blog post details how I set up OpenMediaVault on the Rock 5 to test SMB sharing performance on my network.

Preparing the Rock 5 for OMV (Armbian setup)

  1. Download Armbian 23.02 Bullseye CLI (minimal) (go to 'Archived versions for reference and troubleshooting') – the specific version I downloaded was Armbian_23.02.2_Rock-5b_bullseye_legacy_5.10.110_minimal.img.xz
  2. Flash the image to a microSD card with Etcher (or from the command line; see the sketch after this list)
  3. Insert the microSD card and boot the Rock 5
  4. Log into the Rock 5 via SSH and follow the first time setup (initial login is root / 1234)
  5. Install git: apt update && apt install -y git
  6. Get the fan set up (see separate instructions below)
  7. Run updates: sudo apt update && sudo apt upgrade -y
  8. Reboot: sudo reboot
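
If you'd rather skip Etcher, a minimal command-line sketch for step 2 looks like this (the /dev/sdX device name is a placeholder; confirm yours with lsblk before writing anything):

    # Decompress the Armbian image and write it straight to the microSD card.
    # WARNING: /dev/sdX is a placeholder - double-check the device with lsblk first!
    xz -dc Armbian_23.02.2_Rock-5b_bullseye_legacy_5.10.110_minimal.img.xz \
      | sudo dd of=/dev/sdX bs=4M status=progress conv=fsync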

Set up the PWM fan

WINSINN fan on top of 6-bay Rock 5 model B NAS

  1. Clone the fan control software to the Rock 5: git clone https://github.com/XZhouQD/Rock5B_Naive_Pwm_Fan.git
  2. Switch to root user: sudo su
  3. Enter the fan control directory: cd Rock5B_Naive_Pwm_Fan
  4. Copy the fan control binary: cp fan_pwm /usr/local/bin/.
  5. Make it executable: chmod +x /usr/local/bin/fan_pwm
  6. Set up the fan control systemd service: cp fan_pwm.service /etc/systemd/system/.
  7. Reload systemd: systemctl daemon-reload
  8. Start the fan service and enable it at system boot: systemctl start fan_pwm && systemctl enable fan_pwm
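
With those steps done, a quick sanity check (using the service name from the repo above) confirms the fan service is running and will start at boot:

    # Verify the fan control service is active and enabled at boot
    systemctl status fan_pwm
    systemctl is-enabled fan_pwm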

As an alternative to the PWM fan control app detailed above, you could instead use https://github.com/pymumu/fan-control-rock5b.

Install OMV

The last step is to install OpenMediaVault, a nice NAS-style management UI and ecosystem that works great on Arm boards like the Rock 5 model B.

  1. Install OMV: sudo wget -O - https://github.com/OpenMediaVault-Plugin-Developers/installScript/raw/master/install | sudo bash
  2. After rebooting, access the IP address of the Rock 5, and log into OpenMediaVault with the default credentials admin / openmediavault.
  3. In OMV, to create an SMB share for testing (make sure you click 'Apply' when it pops up in the UI after each step!):
    1. Go to Storage > Software RAID, and add a new software RAID device. I chose three drives and RAID 5, but you can choose what you want.
    2. While the RAID volume is syncing, go to File Systems, and add a new one; I chose EXT4 for mine.
    3. After the file system is created, it is not mounted automatically. You have to click on it and then click the 'Play' button to mount it.
    4. Go to 'Shared Folders' and create a new one; I created one with the defaults called 'test'.
    5. Go to Services > SMB/CIFS > Settings, and check the 'Enabled' checkbox. Click 'Save' at the bottom of the settings page.
    6. Go to Services > SMB/CIFS > Shares, and add a new SMB Share. Select the Shared Folder you created earlier, and configure permissions as you see fit. I enabled full public read/write access for testing.
  4. Wait for OMV to finish syncing the RAID 5 array (you can monitor progress under Storage > Software RAID).
  5. Once the array is finished syncing, the 'State' should read "clean".
  6. On another computer on the network, access the SMB share. On my Mac, in the Finder, I chose Go > Connect to Server (⌘ K), then entered the address smb://[ip address of Rock 5]/test

Log in using a user account on the system, and you can now copy files to and from the SMB share to your heart's content!
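
If you're testing from a Linux client instead of a Mac, a rough equivalent is to mount the share with cifs-utils (a sketch only; the share name 'test' and guest access match the setup above, and the mount point is arbitrary):

    # Mount the OMV share on a Linux client (requires the cifs-utils package)
    sudo mkdir -p /mnt/rock5-test
    sudo mount -t cifs //IP_OF_ROCK5/test /mnt/rock5-test -o guest,vers=3.0
    # ...copy files to and from /mnt/rock5-test...
    sudo umount /mnt/rock5-test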

On the Rock 5 model B, I was getting around 100 MB/sec write speeds, and 200 MB/sec read speeds, using a 3-drive RAID 5 volume over my 2.5 Gbps network. Write speeds were an improvement over the Raspberry Pi CM4 NAS I built last year, but not double or triple the speed as I was hoping. And SMB read speeds could hit about 1.9-2.1 Gbps but still couldn't saturate the Ethernet connection. So... good, but not as marked an improvement over a slower and older Pi as I was hoping.
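
If you want to figure out whether numbers like these are limited by the network, the disks, or SMB itself, it helps to test each layer on its own. A rough sketch (assuming iperf3 is installed on both machines; the array mount path on the Rock 5 is a placeholder):

    # Raw network throughput: run the server on the Rock 5...
    iperf3 -s
    # ...and the client on the other machine
    iperf3 -c IP_OF_ROCK5

    # Raw array write speed, measured locally on the Rock 5 (path is a placeholder)
    sudo dd if=/dev/zero of=/srv/ARRAY_MOUNT/testfile bs=1M count=4096 oflag=direct status=progress
    sudo rm /srv/ARRAY_MOUNT/testfile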

For more details, and a full comparison, watch the full video on the Pocket NAS and Flashstor 12 Pro.

Comments

I noticed you use `sudo su` but you may also use `sudo -i` for this.
In my experience both options get the job done.

This is a superb little project!
I'm running a Pi 4 8GB in an Argon EON NAS case. It's a cool-looking case, but running the drives over USB kills performance, and it only allows RAID via a software hack.

All this is nice, but as long as those boards basically only support Armbian (with its load of hacky-patchy that-golden-binary-we-can't-get-to-compile-anymore linux-5.15-actually-3.15-with-cherrypicked-patches-and-special-non-mainstreamed-voodoo kind of things), it always feels *very* cool yet ultimately *very* underwhelming. Any news of a true SBBR ARM board out there?

This is really not a critique of what you do as projects; it's more desperation at how cool things could be if only ARM board manufacturers stopped with the whole "we'll hack uboot and whatever kernel we have now to get a barely working armbian version" approach, then dropping support as soon as the board stops making money... When you follow standards (like UEFI and ACPI) and you mainstream your board support (or follow other boards' specifications to avoid having to write 100k LoC in the kernel every time), working on those boards becomes much less of a risk and much more pleasant.

This ties into some of your other projects as well, like the RPi clusters... Can't deploy or redeploy anything automatically, can't use RPi-compatible replacement modules (at least not without manually remaking a boot drive specifically for the new board) when, say, an RPi shortage makes prices soar.

On the other hand, one can find crappy Celeron boards that sip little power too, support EFI/ACPI, can PXE-boot, and are at least as powerful as an RPi 4, without all the shenanigans needed to boot, get hardware-accelerated decoding, get the latest kernel with all features working 100%, or get enough PCIe lanes to run a project like this. Sure, the form factor is not nearly as cool, but at least it will run and be updated without loss of functionality for years to come, and it has all the bells and whistles to be easily redeployable.

There are a few Arm vendors committed to full and standard SystemReady support (like Ampere), but so far no SBC vendors. I've been asking Raspberry Pi about this and it seems to not be the highest priority, likely due to the way boot works through the GPU from Broadcom. Hopefully we can make it so someday, because it would be nice to enjoy some of the standardization x86 enjoys.

Armbian has additional patches, but in general it stays close to mainline, clean and reproducible. There are of course SoC vendor kernels, which are "just something", and some 3rd-party voodoo-magic kernels that are hacked even dirtier (provided unofficially). Those are the exception rather than a property of Armbian (which is first and foremost a build tool). Vendors' creations don't come close.

Software support is too complex for the few engineers that board designers (building around an SoC they might know little about) can afford out of their profit margins... and without 3rd-party software support (Armbian, the community, paid 3rd-party projects that develop some key parts, like Collabora and Bootlin) it would already look pretty terrible and useless... UEFI and other standards? Which of those parties has a commercial interest in them? Ampere servers are playing a much different game than diverse custom hardware. We agree it's possible, and many things have been done, but without proper interest, things usually idle at a best-effort amateur level.

Would you consider running a 4-drive RAID on a LattePanda Sigma? It has 4 M.2 slots in a small form factor and seems to have the speeds. One M.2 needs to be the shorter 2230 size, but I would still like to see how that compares. Especially since it's half the price of the Flashstor and, I think, a little more expensive than the Rock 5.

A 6-drive M.2 NAS off of a single M.2 connection? I see no space or mention of a PCIe switch, so... is he using a PCIe-to-6x-USB bridge and then going from each USB port back to an M.2 NVMe connection? Or are these M.2 drives USB-only?

These are M.2 SATA SSDs, so the card that's being used is a SATA controller that has 6 SATA ports off the M.2 PCIe bus connection.

This is probably some ASM1166-based M.2 to 6-port SATA adapter card. I have one ordered from Taobao, but you might find AliExpress easier. If you want a name-brand card in PCIe form factor, SilverStone's ECS06 is also based on the ASM1166.
https://www.silverstonetek.com/en/product/info/expansion-cards/ECS06/

On a side note, these adapters are known to ship with firmware incompatible with Intel 600 series chipsets, which requires a firmware reflash on another machine before they'll work. You can get the firmware from the ECS06's product page.
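
If you want to confirm what's on the other end of that M.2 slot, a quick check from a shell on the Rock 5 (assuming an ASMedia-based controller like the ASM1166) would be something like:

    # Confirm the SATA controller enumerated over PCIe on the Rock 5
    lspci | grep -i -e sata -e asmedia
    # ...and that the drives behind it show up
    lsblk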

*And SMB read speeds could hit about 1.9-2.1 Gbps but still couldn't saturate the Ethernet connection.*

Of course it couldn't. First, 2.5 GbE by specification can only deliver about 2358 Mbps of usable throughput, because of TCP/IP headers and line coding. Continuing the math: 2358 Mbps is a base-10 figure, not base-2, so for Linux, which uses base-2 units, that works out to about 2287 Mibps.

And Samba is one hell of a protocol when it comes to additional commands, which eat a bit of the channel (unless you are testing with one big 30 GiB+ file).

In the case of writes: what was the CPU load during the tests? What about the local speed of the M.2 SATA drives? What about the local SATA speed on the Rock 5 itself?

Awesome! Is there a case/enclosure that would naturally fit such a design? All the examples I can think of seem humongous compared to this beauty! )

I forget, did you test NFS on this? There is a CM3588 board out now that has four NVMe slots and 2.5 Gbps Ethernet. The NVMe can break down to four Gen 3 x1 links, which certainly isn't great, but should be decent for only a single 2.5 GbE connection. Just looking for options for my lab so I might ditch the HP DL360e Gen8 (noise, heat, power draw). Still trying to decide if 2.5 GbE is going to be enough for everything I want to test; I have 10 GbE now, so this would be somewhat of a step backwards.