iKOOLCore R2 Max Review: Using this mini PC as a network or storage appliance

by LaptopLightHouse.com


Disclosure: Some links on this page are monetized by the Skimlinks, Amazon, Rakuten Advertising, and eBay affiliate programs, and Liliputing may earn a commission if you make a purchase after clicking on those links. All prices are subject to change, and this article only reflects the prices available at time of publication.

The iKOOLCore R2 Max is a mini PC that launched late last year as a small, versatile computer designed for use as a networking appliance or general purpose computer. It supports up to an Intel Core 3 N355 Twin Lake processor and 32GB of RAM.

While the iKOOLCORE R2 I reviewed in 2023 was a palm-sized PC with two 2.5 Gb LAN ports, the new model is much larger and features four high-speed networking ports including two 10 Gigabit Ethernet (GbE) ports and two 2.5 GbE ports. It also supports two SSDs instead of one. And while the R2 has a fan inside the case, the R2 Max is designed to run passively without one.

The R2 Max is available in several different configurations. Prices start at $309 for a barebones model with an Intel N150 quad-core processor based on Intel Twin Lake architecture, or $409 for a model with an Intel Core 3 N355 octa-core Twin Lake chip.

When the company first started selling the R2 Max in October it was available with Intel N100 and Core i3-N305 Alder Lake-N processor options and iKOOLCORE offered a choice of fanless models or ones with fans for active cooling. Since that time, the company has switched to only selling models with Twin Lake chips. And moving forward all R2 Max units will be fanless models.

But iKOOLCORE arranged to send me demo units for testing before making those changes. So the two units featured in this review both have Intel N100 processors, with one having a fan for active cooling, while the other is passively cooled.

While the company no longer sells R2 Max units with this processor, performance should be very similar to what you’d get from a model with a newer Intel N150 processor.

The starting prices mentioned above are for barebones computers that ship without memory, storage, or an operating system. But iKOOLCORE also sells the R2 Max with 8GB, 16GB, or 32GB of DDR5-4800 memory and up to a 2TB M.2 2242 SSD. At time of publication the SSDs are out of stock though, which means you’ll need to supply your own storage as well as an operating system.

Both of the demo units I received came with 8GB of memory and a single 128GB NVMe drive for storage. The mini PCs were provided to Liliputing for free, with no requirement that the computers be returned upon completion of the review. This review is not sponsored by iKOOLCORE, and the company did not modify or approve the content of this article in any way.

Design

The R2 Max is housed in a 157 x 118 x 40 mm (6.18 x 4.65 x 1.57 inches) rectangular aluminium case. Running across the top are heatsink-style fins, and they serve exactly that purpose: improving cooling.

The actively cooled model also includes dual 4010 (40 mm) fans on the left side of the top, which can be seen through the fins when looking down on the device.

On the front of the R2 Max on the left are a pair of USB Type-A 3.2 Gen 1×1 (5 Gbit/s) ports. On the right is the power button.

The right side has a Genesys Logic micro SD card slot, and the left side is portless.

The rear has four Ethernet ports. The first two are Marvell (Aquantia Corp) AQC113C-B1-C 10 GbE ports and the second two are Intel I226-V 2.5 GbE ports. Next to the Ethernet ports is an HDMI 2.0 port. The last port on the rear is a USB Type-C 3.2 Gen 2×1 (10 Gbit/s) port which supports Power Delivery and Alternate mode (DP 1.4) with 4K@60Hz.

There is also a power jack on the rear for the included 12V 4A (48W) power adapter from Guangdong Sangu Technology.

On the bottom of the R2 Max are two access plates.

The one marked DDR provides access to the single SODIMM slot that the N100 processor supports. Included with each device was a single stick of 8 GB DDR5-4800 SODIMM memory (HMCG66MEBSA092N). Whilst only up to 16 GB is officially supported, Linux appears to work with 32 GB, whereas Windows has been known to be unstable with 32 GB.

The access plate marked SSD covers two M.2 NVMe slots. Both are PCIe 2.0 x1, which means a maximum speed of 500 MB/s. The review model included a Samsung M.2 2242 PM991a MZ-ALQ128B 128 GB NVMe drive (MZALQ128HCHQ-00BL2). However, each NVMe slot supports both 22 × 42 mm (2242) and 22 × 80 mm (2280) cards.

Also included in the box is a bilingual user manual in English and Chinese.

Testing Methodology

The review models came with an Intel Processor N100 Alder Lake-N mobile processor consisting of four cores (without hyperthreading) with a maximum turbo frequency of 3.40 GHz and a TDP of 6 watts. The processor also includes an iGPU with Intel UHD Graphics, which has a maximum dynamic frequency of 750 MHz and 24 execution units.

The N100 has been available for more than two years, and it has become popular as a low-cost, low-power processor for mini PCs that offers acceptable performance for many different use cases. So I don’t believe running the usual selection of benchmarks would really highlight anything special about the R2 Max mini PC.

Instead, since this mini PC is marketed as a “Next-Gen 10G Firewall Gateway Server,” I wanted to see how well suited it is for use as a network appliance given its two 10 Gbps ports. However, I was also interested in seeing whether it could be used as a storage appliance, given that it has two M.2 NVMe slots and a micro SD card slot.

It is pretty easy to verify the marketing claim by installing some router software and testing its performance. One issue that could be encountered is getting the 10 GbE ports to work. For example, earlier releases of Debian and Ubuntu do not have the drivers built into the kernel, so it is necessary to first download, build and install them before the ports will work. Some router OS software built on FreeBSD (e.g. pfSense) makes this issue harder to resolve; in that case, virtualisation is the best route to take. Support for other router OS software such as OpenWRT is available as an image downloadable from iKOOLCORE. Fortunately the iKOOLCORE R2 Max Wiki covers this together with other drivers and UEFI/BIOS updates.
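
As a quick sanity check before building anything, commands along these lines show whether the in-kernel Aquantia/Marvell driver is already available and bound to the 10 GbE ports (a minimal sketch; the interface name is an example and will differ per system):

  # is the "atlantic" module shipped with the running kernel?
  modinfo atlantic | head -n 3
  # which kernel driver is each Ethernet controller actually using?
  sudo lspci -k | grep -A 3 Ethernet
  # check the negotiated link speed on a 10 GbE interface
  sudo ethtool enp7s0 | grep Speed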

Considering these issues, and with two devices at my disposal, I could run one as a dedicated NAS and the other as a server with a virtualised storage appliance for the NAS and a virtualised network appliance for the router.

As the R2 Max doesn’t include any eMMC storage, in order to keep my dedicated NAS as simple as possible I decided to install Debian as the OS on a micro SD card. This would allow both my NVMe drives to be allocated as NAS storage managed by Debian. The NAS storage could be shared over SMB or accessed through SSH.

For my server, the virtualization platform I chose was Proxmox as it is widely used. For the router I opted for pfSense, simply because I’ve not encountered problems when running it on Proxmox. And my preferred option for the NAS was TrueNAS Scale running as a virtual storage appliance.

Then, to give the configuration some realistic purpose, I decided to put the media management and streaming application Jellyfin on both devices. By connecting both devices to a network with internet access I could simulate a WAN, and by interconnecting the devices I would create a local network or LAN.

The drawback to this design was that the only connections available to the LAN would be through the remaining 2.5 GbE ports. So I put a 10 GbE switch on the WAN side, allowing me to test PCs connecting both directly and through the virtualised router.

For testing I wanted to look at the performance of both the dedicated and virtualised NAS, when accessed directly by the console on each device, together with the performance obtained from PCs accessing through the switch. By comparing direct access against access through the virtualised router I hoped to see any performance overhead from the router functionality.

I wanted to run Windows 11 on one of the test PCs and Linux on the other to see if a client’s OS created any performance degradation. Given each would be equipped with 10 GbE ports, the networking throughput should be equivalent from a hardware perspective.

Testing Installation

To identify the two R2 Max devices, I named the one with the fans as “active” and the other one as “passive”. On each device I removed the pre-supplied 128 GB M.2 2242 NVMe drive and installed two KLEVV CRAS C910 4 TB M.2 2280 NVMe drives specifically for this review.

On “passive” I installed Debian 12 (bookworm) onto an old 32 GB micro SD card. After installing “zfsutils-linux” I created a zpool striped across both NVMe drives, followed by several ZFS file systems including one for the NAS and one for Jellyfin. To keep wear and tear on the micro SD card down, I moved the “/var/log” directory across to its own ZFS file system and also created a ZVOL to swap on.
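
The ZFS setup boils down to a handful of commands along these lines (a sketch only; the “nas” and “jellyfin” dataset names are assumptions, and the device names will vary):

  # create a zpool striped across both NVMe drives
  sudo zpool create r2max /dev/nvme0n1 /dev/nvme1n1
  # file systems for the NAS share and for Jellyfin
  sudo zfs create r2max/nas
  sudo zfs create r2max/jellyfin
  # move /var/log onto its own dataset to spare the micro SD card
  sudo zfs create -o mountpoint=/var/log r2max/log
  # create a ZVOL and use it as swap
  sudo zfs create -V 1G r2max/swap
  sudo mkswap /dev/zvol/r2max/swap
  sudo swapon /dev/zvol/r2max/swap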

For networking I assigned static IP addresses to the two 10 GbE ports, one in a range suitable for the WAN side and the other for the LAN port, which was going to be connected directly to a 10 GbE port on “active”.
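
On Debian this is just a couple of stanzas in /etc/network/interfaces; the addresses below are placeholders, not the ones actually used:

  auto enp7s0
  iface enp7s0 inet static
      address 192.168.1.10/24
      gateway 192.168.1.1
  auto enp8s0
  iface enp8s0 inet static
      address 10.0.0.1/24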

Finally, I installed Jellyfin on “passive”, binding Jellyfin’s cache, log, lib and share directories to a ZFS file system dedicated to Jellyfin. I also installed Samba for SMB sharing.

On “active” I installed Proxmox 8.3.0 using the installer’s ZFS option to stripe the installation across the two NVMe drives. Then in Proxmox I created a network bridge for a single 10 GbE port which would connect to the “WAN” switch and another network bridge with the 2.5 GbE ports and remaining 10 GbE port. I connected this 10 GbE port to “passive”.
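
In Proxmox terms that amounts to two bridge definitions in /etc/network/interfaces, roughly like the sketch below (the addresses and the mapping of physical ports to bridges are illustrative):

  auto vmbr0
  iface vmbr0 inet static
      address 192.168.1.20/24
      gateway 192.168.1.1
      bridge-ports enp7s0
      bridge-stp off
      bridge-fd 0

  auto vmbr1
  iface vmbr1 inet static
      address 10.0.0.2/24
      bridge-ports enp8s0 enp1s0 enp2s0
      bridge-stp off
      bridge-fd 0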

I then created a virtual machine for pfSense 2.7.2, allocating 2 cores and 2 GB of memory, and a virtual machine for TrueNAS 24.10 (which runs on Debian 12), initially allocating 2 cores and 4 GB of memory. However, TrueNAS Scale has a minimum hardware requirement of 8GB of RAM, so I had to replace the original 8 GB RAM with a stick of 16 GB Crucial memory (CT16G56C46S5) and allocate 8 GB to the NAS VM. For NAS storage, I also added a 6 TB “hard drive” created on the local ZFS zpool from the Proxmox installation.
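
Creating the two VMs from the Proxmox command line looks something like this (a sketch; the VM IDs, disk sizes and storage name are illustrative, and the install ISOs still need attaching):

  # pfSense router VM (ID 100 assumed): 2 cores, 2 GB RAM, one NIC on each bridge
  qm create 100 --name pfsense --ostype other --cores 2 --memory 2048 \
    --net0 virtio,bridge=vmbr0 --net1 virtio,bridge=vmbr1 --scsi0 local-zfs:16
  # TrueNAS Scale VM (ID 101 assumed): 2 cores, 8 GB RAM, boot disk plus a 6 TB data disk
  qm create 101 --name truenas --ostype l26 --cores 2 --memory 8192 \
    --net0 virtio,bridge=vmbr0 --net1 virtio,bridge=vmbr1 \
    --scsi0 local-zfs:32 --scsi1 local-zfs:6144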

I next created a container with 2 cores and 4 GiB of memory now that I had some surplus. On it I installed Debian 12 and mounted the storage provided by a TrueNAS dataset that I had created specifically for media. Finally, I installed Jellyfin in the container using the standard “debuntu” shell script.
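
The container creation follows the same pattern (again a sketch; the container ID, template file name and mount details are illustrative, and the article doesn’t specify which protocol was used to mount the TrueNAS dataset, so NFS is shown as one option):

  # Debian 12 container (ID 102 assumed) with 2 cores and 4 GiB of memory
  pct create 102 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
    --cores 2 --memory 4096 \
    --net0 name=eth0,bridge=vmbr0,ip=dhcp --net1 name=eth1,bridge=vmbr1,ip=dhcp
  # inside the container: mount the TrueNAS media dataset
  apt install nfs-common && mount -t nfs 10.0.0.3:/mnt/tank/media /mnt/media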

The following diagram shows the completed setup:

Testing Scenarios & Commands

As outlined in the testing methodology above, one area I wanted to test was the performance of a NAS when shared over SMB and/or accessed through SSH:

An example of this would be mounting the NAS from “active” on the “passive” device using SMB (cifs-utils).
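
With “cifs-utils” installed, that mount is a one-liner along these lines (the server address, share name and mount point are placeholders):

  sudo apt install cifs-utils
  sudo mkdir -p /mnt/active-nas
  sudo mount -t cifs //10.0.0.3/nas /mnt/active-nas -o username=linuxium,uid=$(id -u),gid=$(id -g)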

Considering all the various testing permutations I mapped out the following scenarios:

Given I proposed testing from two PCs running either Windows or Linux, the two fundamental measurements I wanted to capture were the speed of the NAS and the speed of transferring a file from the test PCs to the NAS.

For Windows, I typically use CrystalDiskMark (CDM) to measure a drive’s speed. On Linux I use “fio”, the “flexible I/O tester”, which is a command-line tool. When looking around for a graphical equivalent of CDM I stumbled across KDiskMark (KDM), which is available as a package in Fedora and is making its way into Debian, where it is currently in “Trixie”.

Given I was running standard Debian as the base OS on both “passive” and “active”, I had no desktop environment. So prior to installing KDM on both devices, I used “tasksel” to install the “Qt desktop”.

It was during initial testing that I encountered issues with KDM. Firstly, it wasn’t possible to install it on TrueNAS Scale as “Package management tools are disabled on TrueNAS appliances”. And of course, manually running it returned the error “usr/bin/kdiskmark: error while loading shared libraries: libQt5Widgets.so.5” because it needs “Qt 5” to be installed.

But the real issue was that I could not use it to test a NAS if it was mounted on a device’s file system.

I also encountered a second issue when running KDM on “passive” using the default settings of 5 runs with a 1 GiB file. The results showed values beyond the capability of the drives. For example, in the above run it returned 3,917.24 MB/s for the sequential read speed.

Now the R2 Max uses ASM1182e PCIe packet switches to split a single PCIe Gen 3 lane into two PCIe Gen 2 lanes, allowing each lane to be allocated to an NVMe drive. The specification for PCIe 2.0 x1 results in a top speed of 500 MB/s. Running “lspci” on the devices confirms this speed limitation from the ASM1182e chips, as it shows “LnkSta: Speed 5GT/s (downgraded), Width x1 (downgraded)”.

As KDM uses “fio” I decided to write my own drive benchmark tool which I called LinuxiumDiskMark (LDM) and ran as the command “ldiskmark”. I based it on the same “fio” usage as KDM but made the tool text-based and extended it to accept any mount points for testing rather than just block devices. This would allow me to run LDM on the “active” NAS as a local command and also run it on any Linux device with mounted file systems.
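
For reference, a sequential read test of the kind KDM and LDM run boils down to a single “fio” invocation along these lines (a sketch; the target directory is an example and the exact job options will differ):

  fio --name=seq-read --directory=/r2max/nas --size=1G \
      --rw=read --bs=1M --iodepth=8 --ioengine=libaio --numjobs=1 --group_reporting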

Comparing the LDM results with the earlier KDM ones shows a massive difference between the read speeds whilst the write speeds are relatively similar. The sequential read speed was now reported as 537.4 MB/s.

Both benchmarks were run using a 1 GiB test file. However, reading and writing an NVMe drive is affected by many things including NAND flash type, the number of NAND chips, the drive’s controller and DRAM cache to name just a few.

To improve the accuracy in both benchmarking and also when measuring transfer speeds, I needed to use a sufficiently large file to saturate any cache without making it too large so that it takes ages to complete.

Based on the options provided by both CDM and KDM, the best viable file size is 4 GiB, as this is just large enough to observe the performance when copying/transferring the file. Anything smaller can finish too quickly to see the statistics flashed on the screen.

I created the initial 4 GiB file for testing using the “dd” command by taking data from “/dev/random”. On the Windows and Linux test PCs, and also on the “active” device, I placed a copy of this file in the “$HOME” directory. However, as “$HOME” on the “passive” device is located on the micro SD card, I put the file in “/var/log”. I also placed the test file in /var/log on TrueNAS Scale as “$HOME” was located on the NAS storage.
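
Creating the file was a single “dd” command, something like:

  dd if=/dev/random of=$HOME/4GiB bs=1M count=4096 status=progress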

So, having established the tools on Windows and Linux for testing the NAS drive speeds, an obvious tool for measuring transfer speeds is “scp”. Not only is it available on both OSes, but it conveniently shows the transfer speed. Another tool available on Linux that shows the transfer speed is “dd”, although it is only shown at the end of its execution. On Windows, File Explorer can be used to transfer files, but it only displays the speed during the actual transfer. As both of these tools are representative of real-world usage, I decided to include them in the test plan.
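
As an example of the two Linux tools in action (the IP address and paths are placeholders), a push to the NAS over SSH and a copy to a locally mounted NAS look like this:

  # scp prints the transfer rate when it completes
  scp $HOME/4GiB linuxium@10.0.0.3:/r2max/nas/4GiB-scp
  # dd prints the rate at the end of the copy to an SMB-mounted NAS
  dd if=$HOME/4GiB of=/mnt/active-nas/4GiB-dd bs=1M status=progress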

Finally, for each of the scenarios, I documented test commands to use each tool when connected to NAS data either locally, through SMB, or accessed by ssh or http where possible. My rationale was that testing would now be just a matter of running the commands and recording the results.

Test Results

Before looking in detail at the test results for the various scenarios, I want to start with some basic performance observations about storage speeds, thermals and networking for both the “passive” and “active” devices.

First, let’s look at the root file system on both devices. Since I installed Debian on the micro SD card on the “passive” device, the root “/” directory can be used with both KDM and LDM to show the sequential read and write speeds of this “ext4” file system.

Both tools report the sequential speed of the micro SD card to be around 20 MB/s. The speed is relatively slow compared to what SD cards are now capable of, but this is because the Genesys Logic (05e3:0751) micro SD card reader is attached to a USB 2.0 hub and so is limited to 480 Mbit/s, or 60 MB/s.

NVMe drive speeds not only depend on the physical hardware being used, but also on the file system and RAID level.

So, for the root “/” directory on “active” (or “rpool/ROOT/pve-1” for KDM which is the same physical location), the results are similar between the tools for this ZFS file system and indicate a sequential read speed of around 430 MB/s and a sequential write speed of around 400 MB/s.

Turning to thermals, the “passive” model obviously relies solely on convection cooling through its finned aluminium case.

Running a Linux stress test across all four cores for 20 minutes was just long enough to see the CPU temperature level out at its maximum average temperature of 68°C with an occasional peak of 69°C. The CPU frequency was 2900 MHz for the duration of the test.

There is no cooling provided to the NVMe drive bay. With the cover off, it looks like there is plenty of clearance between the top of a drive and the bottom of the cover.

However, it is not deep enough to allow drives with heatsinks to fit as they end up protruding out of the bay.

During the stress test, despite no load being placed on the drives, they heated up at a similar rate to the CPU. For the “passive” model both drives reached 69°C, which is very close to the 70°C maximum operating temperature recommended by manufacturers like Western Digital and Samsung. However, it is unlikely you would run the CPU at 100% for a prolonged time and create such high temperatures.

At idle, the top of the “passive” device is quite warm to the touch, sitting at around 49.6°C. During the stress test the top of the device increased in temperature to 57.1°C.

The twin 40 mm PWM fans on the “active” model certainly help with cooling as they lower the maximum temperature in a stress test by 20°C.

A maximum average temperature of 48.5°C was recorded, with an occasional peak of 49°C and a single peak of 50°C. The CPU frequency, however, was the same as for the “passive” model at 2900 MHz.

Being PWM fans, they ramp up and down as and when required. They are not silent at idle, however, producing a low whirring sound. When running faster, the maximum noise I recorded was 38.5 dBA on my sound meter next to the device.

Like the CPU temperatures, the NVMe drives were also slightly cooler during the stress test. They reached a maximum of 60.5°C with the temperature fluctuating rapidly between 60°C and 61°C. Again, this is without load but with the help of the fans the temperature increase was minimised.

While idle, the top of the “active” device has a temperature of around 41.5°C. During the stress test, the top’s temperature only increased slightly, to an observed maximum of 43.5°C, indicating that the fans are very effective in keeping the device cool.

I also performed some network throughput testing and confirmed that all the Ethernet ports functioned at their intended speeds once the correct MTU was applied to the 10 GbE ports. I compiled and installed iperf3 version 3.18 to ensure I didn’t encounter any software issues affecting throughput performance on Linux. The above picture shows that the connection between “active” and “passive” runs at 10 GbE, achieving 9.90 Gbits/sec.

Further information on the exact hardware configuration can be gleaned from the two “inxi” command outputs below:

linuxium@r2max-passive:~$ sudo inxi -Fzy-1
System: Kernel: 6.1.0-31-amd64 arch: x86_64 bits: 64 Console: pty pts/2 Distro: Debian GNU/Linux 12 (bookworm)
Machine: Type: Desktop Mobo: iKOOLCORE model: R2Max v: 1.0.1 serial: N/A UEFI: iKOOLCORE v: 1.0.1 date: 07/21/2024
CPU: Info: quad core model: Intel N100 bits: 64 type: MCP cache: L2: 2 MiB
Speed (MHz): avg: 700 min/max: 700/3400 cores: 1: 700 2: 700 3: 700 4: 700
Graphics: Device-1: Intel Alder Lake-N [UHD Graphics] driver: i915 v: kernel
Display: x11 server: X.org v: 1.21.1.7 driver: X: loaded: modesetting unloaded: fbdev,vesa dri: iris gpu: i915 tty: 156x41
API: OpenGL Message: GL data unavailable in console for root.
Audio: Device-1: Intel Alder Lake-N PCH High Definition Audio driver: snd_hda_intel
API: ALSA v: k6.1.0-31-amd64 status: kernel-api
Server-1: PulseAudio v: 16.1 status: active (root, process)
Network: Device-1: Intel Ethernet I226-V driver: igc
IF: enp1s0 state: down mac: <filter>
Device-2: Intel Ethernet I226-V driver: igc
IF: enp2s0 state: down mac: <filter>
Device-3: Aquantia AQC113C NBase-T/IEEE 802.3an Ethernet [Marvell Scalable mGig] driver: atlantic
IF: enp7s0 state: up speed: 10000 Mbps duplex: full mac: <filter>
Device-4: Aquantia AQC113C NBase-T/IEEE 802.3an Ethernet [Marvell Scalable mGig] driver: atlantic
IF: enp8s0 state: up speed: 10000 Mbps duplex: full mac: <filter>
RAID: Device-1: r2max type: zfs status: ONLINE level: linear raw: size: 7.44 TiB free: 7.41 TiB zfs-fs: size: 7.31 TiB free: 7.28 TiB
Components: Online: 1: nvme0n1 2: nvme1n1
Drives: Local Storage: total: raw: 7.48 TiB usable: 7.34 TiB used: 44.66 GiB (0.6%)
ID-1: /dev/nvme0n1 vendor: Lexar model: SSD NM790 4TB size: 3.73 TiB
ID-2: /dev/nvme1n1 vendor: Lexar model: SSD NM790 4TB size: 3.73 TiB
ID-3: /dev/sda type: USB vendor: Generic model: STORAGE size: 29.81 GiB
Partition: ID-1: / size: 27.74 GiB used: 15.24 GiB (54.9%) fs: ext4 dev: /dev/sda2
ID-2: /boot/efi size: 511 MiB used: 5.8 MiB (1.1%) fs: vfat dev: /dev/sda1
ID-3: /var/log size: 50 GiB used: 4.02 GiB (8.0%) fs: zfs logical: r2max/log
Swap: ID-1: swap-1 type: partition size: 1024 MiB used: 286.5 MiB (28.0%) dev: /dev/zd0
Sensors: System Temperatures: cpu: 47.0 C mobo: N/A
Fan Speeds (RPM): N/A
Info: Processes: 249 Uptime: 8d 1h 21m Memory: 7.5 GiB used: 1.93 GiB (25.7%) Init: systemd target: graphical (5) Shell: Sudo inxi: 3.3.26
linuxium@r2max-passive:~$
linuxium@r2max-active:~$ sudo inxi -Fzy-1
System: Kernel: 6.8.12-4-pve arch: x86_64 bits: 64 Console: pty pts/2 Distro: Debian GNU/Linux 12 (bookworm)
Machine: Type: Desktop Mobo: iKOOLCORE model: R2 Max v: 1.0.1 serial: N/A UEFI: iKOOLCORE v: 1.0.1 date: 07/21/2024
CPU: Info: quad core model: Intel N100 bits: 64 type: MCP cache: L2: 2 MiB
Speed (MHz): avg: 1967 min/max: 700/3400 cores: 1: 1646 2: 1873 3: 3029 4: 1321
Graphics: Device-1: Intel Alder Lake-N [UHD Graphics] driver: i915 v: kernel
Display: x11 server: X.org v: 1.21.1.7 driver: X: loaded: modesetting unloaded: fbdev,vesa dri: iris gpu: i915 tty: 156x41
API: OpenGL Message: GL data unavailable in console for root.
Audio: Device-1: Intel Alder Lake-N PCH High Definition Audio driver: snd_hda_intel
API: ALSA v: k6.8.12-4-pve status: kernel-api
Server-1: PulseAudio v: 16.1 status: active (root, process)
Network: Device-1: Intel Ethernet I226-V driver: igc
IF: enp1s0 state: down mac: <filter>
Device-2: Intel Ethernet I226-V driver: igc
IF: enp2s0 state: down mac: <filter>
Device-3: Aquantia AQC113C NBase-T/IEEE 802.3an Ethernet [Marvell Scalable mGig] driver: atlantic
IF: enp7s0 state: up speed: 10000 Mbps duplex: full mac: <filter>
Device-4: Aquantia AQC113C NBase-T/IEEE 802.3an Ethernet [Marvell Scalable mGig] driver: atlantic
IF: enp8s0 state: up speed: 10000 Mbps duplex: full mac: <filter>
IF-ID-1: bonding_masters state: N/A speed: N/A duplex: N/A mac: N/A
IF-ID-2: fwbr100i0 state: up speed: 10000 Mbps duplex: unknown mac: <filter>
IF-ID-3: fwbr100i1 state: up speed: 10000 Mbps duplex: unknown mac: <filter>
IF-ID-4: fwbr101i0 state: up speed: 10000 Mbps duplex: unknown mac: <filter>
IF-ID-5: fwbr101i1 state: up speed: 10000 Mbps duplex: unknown mac: <filter>
IF-ID-6: fwbr102i0 state: up speed: 10000 Mbps duplex: unknown mac: <filter>
IF-ID-7: fwbr102i1 state: up speed: 10000 Mbps duplex: unknown mac: <filter>
IF-ID-8: fwln100i0 state: up speed: 10000 Mbps duplex: full mac: <filter>
IF-ID-9: fwln100i1 state: up speed: 10000 Mbps duplex: full mac: <filter>
IF-ID-10: fwln101i0 state: up speed: 10000 Mbps duplex: full mac: <filter>
IF-ID-11: fwln101i1 state: up speed: 10000 Mbps duplex: full mac: <filter>
IF-ID-12: fwln102i0 state: up speed: 10000 Mbps duplex: full mac: <filter>
IF-ID-13: fwln102i1 state: up speed: 10000 Mbps duplex: full mac: <filter>
IF-ID-14: fwpr100p0 state: up speed: 10000 Mbps duplex: full mac: <filter>
IF-ID-15: fwpr100p1 state: up speed: 10000 Mbps duplex: full mac: <filter>
IF-ID-16: fwpr101p0 state: up speed: 10000 Mbps duplex: full mac: <filter>
IF-ID-17: fwpr101p1 state: up speed: 10000 Mbps duplex: full mac: <filter>
IF-ID-18: fwpr102p0 state: up speed: 10000 Mbps duplex: full mac: <filter>
IF-ID-19: fwpr102p1 state: up speed: 10000 Mbps duplex: full mac: <filter>
IF-ID-20: tap100i0 state: unknown speed: 10000 Mbps duplex: full mac: <filter>
IF-ID-21: tap100i1 state: unknown speed: 10000 Mbps duplex: full mac: <filter>
IF-ID-22: tap101i0 state: unknown speed: 10000 Mbps duplex: full mac: <filter>
IF-ID-23: tap101i1 state: unknown speed: 10000 Mbps duplex: full mac: <filter>
IF-ID-24: veth102i0 state: up speed: 10000 Mbps duplex: full mac: <filter>
IF-ID-25: veth102i1 state: up speed: 10000 Mbps duplex: full mac: <filter>
IF-ID-26: vmbr0 state: up speed: 10000 Mbps duplex: unknown mac: <filter>
IF-ID-27: vmbr1 state: up speed: 10000 Mbps duplex: unknown mac: <filter>
RAID: Device-1: rpool type: zfs status: ONLINE level: linear raw: size: 3.62 TiB free: 3.33 TiB zfs-fs: size: 3.51 TiB free: 3.21 TiB
Components: Online: 1: nvme0n1p3
Drives: Local Storage: total: raw: 3.64 TiB usable: 3.51 TiB used: 11.58 GiB (0.3%)
ID-1: /dev/nvme0n1 vendor: KLEVV model: CRAS C910 M.2 NVMe SSD 4TB size: 3.64 TiB
Partition: ID-1: / size: 3.22 TiB used: 8.39 GiB (0.3%) fs: zfs logical: rpool/ROOT/pve-1
Swap: Alert: No swap data was found.
Sensors: System Temperatures: cpu: 42.0 C mobo: N/A
Fan Speeds (RPM): N/A
Info: Processes: 320 Uptime: 5d 3h 18m Memory: 15.36 GiB used: 12.24 GiB (79.7%) Init: systemd target: graphical (5) Shell: Sudo inxi: 3.3.26
linuxium@r2max-active:~$

Note that when running the above commands there were two replacement NVMe drives installed in “passive” and only one NVMe drive installed in “active”. I’ll explain why in the results below.

Finally, I want to explain the command scripts used for testing, as this should make understanding the results a little easier.

When developing the test scripts, I noticed some curious behaviour from CrystalDiskMark in certain circumstances. Typically, the default number of tests that CDM runs is 5. The rationale is to improve accuracy by running more tests, but the trade-off is obviously an increase in the total time taken to run. However, I saw that sometimes each subsequent test run reported unreasonably higher performance and was obviously being affected by caching or some form of buffering.

In the above example, the speed after the first test, i.e. 673.15 MB/s is the closest to the true read speed. After the third test the read speed has virtually doubled to 1238.78 MB/s which is not increased accuracy but a false reading.

So, for all instances of running CDM, I ran the tool with a single test run as this would at least provide a reading close to the true speed.

For the “scp” command scripts, I have to use an IP address with the command to be able to see the statistics. Where the “scp” command is simply copying a local file to a local drive/directory (for example when a NAS has been mapped [Windows] or mounted [Linux] locally) I use “localhost” as the IP address. So for Windows, a command might look like “scp localhost:/C:/Users/linuxium/4GiB P:4GiB-scp” or for a Linux command, “scp localhost:/home/linuxium/4GiB 4GiB-scp”.

This first scenario represents one of the more typical ways of using a NAS. For Windows, the NAS is shared over Samba (SMB) and mapped as a local drive on the Windows PC. File Explorer can then be used to copy files to and from the NAS and other local drives. Alternatively, users might use “scp” to transfer files between locations if the NAS has not been mapped locally. Linux access would be very similar, except that instead of mapping the NAS, it would be mounted on a local mount point.

What the results show is a wide variation for read and write speeds that differ, not only by command and how the NAS has been implemented, but also by OS.

Ask the obvious question of “How fast is the NAS on the ‘passive’ and ‘active’ device?” and you’ll struggle to work out a simple answer.

Unfortunately there were issues encountered when running the tests, including major and minor crashes.

An example of a minor crash occurred when running LDM on “active” whilst routing through pfSense. The “active” NAS crashed as the Out Of Memory (OOM) killer was invoked by the Proxmox kernel. I’ll show more details later when discussing the results for mounting the “active” NAS on the “active” file system.

During testing I experienced two major crashes. The first one occurred trying to use CDM to measure the “active” NAS. This is marked by a note “[1]” at the top of the third column in the image above.

Unlike with a minor crash, the entire device became unresponsive, making it impossible to access via the console or by “ssh”. Cycling the power was the only option. However, the device would not reboot and dropped into the “initramfs” shell.

Interestingly, the NVMe drives were not corrupted, and the NAS could successfully be loaded on another computer. I was able to create a snapshot and save it to another drive. However, I was a bit too cavalier when swapping out the NAS drives for new ones: rather than checking the drive names, I used control-r to rerun a “wipefs” command and promptly wiped my snapshot drive together with one of the NAS drives. As I now had two clean but mismatched drives, rather than mess around with “testdisk” I decided to cut my losses by reinstalling everything on one of the original but now clean drives.

Rather than disabling the OOM killer, or lowering the chance of having KVM killed by lowering its process OOM score adjustment, I opted to stop the Jellyfin container, thinking it would give me a further 4 GiB of memory headroom. This didn’t actually fix the problem, as I still suffered a minor crash when running LDM.
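
For reference, the OOM score adjustment approach I decided against would be a couple of commands on the Proxmox host, roughly as follows (the VM ID is an assumption):

  # find the PID of the TrueNAS VM's KVM process (VM ID 101 assumed)
  PID=$(cat /run/qemu-server/101.pid)
  # tell the OOM killer to avoid this process
  echo -1000 > /proc/$PID/oom_score_adj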

Another intriguing phenomenon that occurred in a few places was having to wait for the command prompt to reappear even though the command had finished, as indicated by the statistical information appearing on the screen. An example of this type of buffering can be seen in the results for the Linux “scp” command when run over SMB on the “active” device. I timed this specific occurrence for both the read and write results.

The “scp localhost:/home/linuxium/4GiB 4GiB-scp” command returned a write speed of 841.9 MB/s, but I then had to wait 26.42 seconds for the prompt to reappear.

If the wait time is taken into consideration, then the true write speed is around 136 MB/s. After the read speed of 730.0 MB/s was reported, I only had to wait a much shorter 6.92 seconds, indicating the true read speed was around 335 MB/s. If you were not observing the command actually running (e.g. when running it in an automated script) then such anomalies would be missed. This is yet another reason to be wary of just looking at numbers without context.
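
One way to catch this kind of anomaly, even in an automated script, is to wrap the transfer in “time” and derive the effective rate from the elapsed wall-clock time rather than trusting the rate scp prints:

  time scp localhost:/home/linuxium/4GiB 4GiB-scp
  # effective rate = file size / "real" time, rather than the figure scp reports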

It is worth considering the second scenario’s results together with the network-accessed NAS results above in order to get some clarity. This second scenario covers local access to a NAS from the console, i.e. when a terminal is attached to the device through the HDMI port.

Obviously for this scenario only Linux commands can be run on the console. I expected the “dd” results to be the most consistent, and that does seem to be the case on the “passive” device. Going through the router doesn’t seem to affect the “dd” results on “passive”. LDM, i.e. “fio”, doesn’t seem to be a particularly accurate tool for benchmarking the NAS speeds. It gives an indication of the write speed, but the read speed throws out different numbers depending on the access infrastructure. Contrary to “dd”, “scp” seems to speed up when going through a router, so it is likely being affected by buffering.

Windows “file copy” (as undertaken by File Explorer) is interesting as through SMB on “active” the results are very similar to “dd” on Linux. However, on “passive” the equivalent results do not align.

It also looks like “scp” is faster on Linux than on Windows, yet “file copy” is faster than “scp” on Linux. The biggest takeaway (which is more of a confirmation) is that the NAS on “passive” is faster than the NAS on “active” and that benchmark results alone are meaningless unless a full description of how they were obtained is provided.

As mentioned earlier, one usage I was interested in testing was mounting the NAS from one device on the other device and accessing it there. In order to establish whether the networking required to support that configuration had any impact on the transfer speeds, I first had to establish the performance impact of mounting the NAS locally. In the first instance it could be accessed through the network as before.

One consistent but negative aspect revealed by the testing was the crashing of the “active” NAS due to the OOM killer being invoked.

It did not happen in the same place with the same command every time. However, it did happen when “processing” a 4 GiB file, and occurred from both Windows and Linux. In fact it made testing the “active” device so unreliable that I abandoned testing and just used it as a Jellyfin server to listen to music.

With a locally mounted NAS, it could be accessed directly from the console, thus removing any network impact completely.

Looking at the local results first, not surprisingly the read speeds dropped, but the write speeds were not so badly affected. Obtaining results for the “active” NAS would have been helpful but really the OOM issue needs to be solved before realistically considering this configuration.

Continuing the annoying trend of somewhat confusing results, for Linux access of the “passive” NAS mounted on the “active” device, “dd” writes were faster and “scp” reads slower than when accessing the basic “passive” NAS directly. It is worth remembering that the speed of the “passive” NAS hasn’t yet been established, but the speed of the “active” file system upon which it is mounted was shown earlier to have a read speed of around 430 MB/s and a write speed of around 400 MB/s.

The same speed increase for “dd” and decrease for “scp” was also seen when accessing locally.

So, given all these results just how fast is the NAS on both the “passive” and “active” devices? The dd command on Linux seems to be the most consistent when measured either locally or over “ssh”. I’ve taken all the relevant results and put them into a spreadsheet.

Based on some of the observations discussed above, I can eliminate some results from the spreadsheet (coloured in red): namely all the slow read speeds for “passive” when mounted on the micro SD card, and all the fast write results for “passive” when mounted on the “active” file system. Similarly, I can eliminate all the fast write results for “active” when mounted on the “active” file system. I can then average the remaining speeds to give an average for each scenario type, and by taking the average of these I can derive the average NAS speeds.

For the “passive” NAS, the average read speed and write speed are the same at 368 MB/s. The “active” NAS is slower, with an average read speed of 215 MB/s and write speed of 231 MB/s.

None of these speeds are going to saturate the two 10 GbE ports, and the “active” NAS will not see any degradation when accessed through the 2.5 GbE ports.

Just as a validation test, I briefly installed Windows on each device and ran CrystalDiskMark. On the left is the result for the “passive” device, which gave a sequential read speed of around 451.33 MB/s and a sequential write speed of around 435.23 MB/s. On the right are the “active” results, which were similar but slightly slower: a sequential read speed of 447.90 MB/s and a sequential write speed of 431.21 MB/s.

However, I am concerned about the integrity of running a NAS on these devices. Part way through my testing I had to power down all my equipment, as some external electrical work was planned which meant my power would be off for several hours. I shut down both “active” and “passive” and no errors were observed. When power was restored, I went about starting everything up. The “active” device booted without incident; however, “passive” booted without its NAS.

Both NVMe drives failed to become ready with an error of “CSTS=0x1”. Searching the web for further information proved fruitless. But one advantage of having the OS on a different drive from the NAS was that I could at least boot the system. I pulled the drives from the device and installed them into my Linux testing PC.

They were recognised without issue, and I was able to import the zpool.

I then added a pair of new 4 TB Lexar NM790 M.2 2280 PCIe Gen 4×4 NVMe SSDs and, this time remembering to check my drive names, created a zpool on them and took a snapshot of the existing zpool.

Having restored the snapshot to the new drives I discovered that there was a compatibility issue, so I had to repeat the process with the appropriate fix.

Then having removed the snapshot from the restored zpool I put the drives back into the “passive” device and imported the zpool which was now running on two new drives.

Whilst no data was lost after this crash, I’m not sure what caused it or whether I was just lucky not to lose anything. Obviously, this is where creating the zpool as a “mirror” would help, so that all the data is duplicated across both NVMe drives. I have subsequently powered off and rebooted both “passive” and “active” without incident.
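
Creating a mirrored pool is the same “zpool create” command with the “mirror” keyword added, at the cost of halving the usable capacity:

  sudo zpool create r2max mirror /dev/nvme0n1 /dev/nvme1n1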

Further Observations

I mentioned that when I recovered from the system freeze that resulted in a failure to reboot, I reinstalled all the software on “active” on just a single drive.

I did some retesting to see if the results differed from earlier testing using two drives. The retests were based on the scenario of mounting the “passive” NAS on the “active” file system and then accessing it from the networked test PCs. Both the Windows and Linux tests returned very similar results, indicating I would not have to retest everything again. It looks like the only gain from using two drives on “active” was the increased storage with no improved drive performance.

During the time when I wasn’t running tests, I used Jellyfin on both the “passive” and “active” devices alternately to stream music for entertainment. I didn’t encounter any problems on either device and playback was flawless. The only inconvenience was having to stop Jellyfin on “active” prior to testing and restart it when I wanted to listen to more music, until I abandoned testing on “active” altogether.

Power Consumption

When booted into Windows I checked the power settings for PL1, PL2, PL4 and tau.

On both devices the value of PL1 was 30 W, PL2 was 33 W and tau was 28 seconds. On “passive” the value of PL4 was set to 56 W, whereas on “active” the value was increased to 78 W.

I measured power consumption of both devices by connecting each to a power meter that was in turn connected to a UPS power outlet to provide clean power.

The power consumption was taken when the devices were installed with the test configurations described in the previous sections.

“Passive:”

  • Idle – 15.5 watts
  • Stress – 27.0 watts

“Active:”

  • Idle – 18.5 watts
  • Stress – 26.0 watts

On “active”, idle means both VMs and the CT are running, just not being used.

Conclusion

The R2 Max is very well made and feels of high quality when handled. It is minimalistic in design to the point of being too bare: it needs labels for its ports. That’s especially important for the Ethernet ports, as there’s nothing to indicate which are the 10 GbE and which are the 2.5 GbE ports.

Cooling is satisfactory on the “passive” model, but the fan in the actively cooled model helps keep temperatures lower. The trade-off is that the computer’s small fans make some noise.

Whilst the R2 Max comes with a micro SD card port which can be successfully used for the OS, I would have preferred to have onboard eMMC instead.

From an operational perspective, having two 10 GbE ports and two 2.5 GbE ports is great if using the R2 Max as a router. However, the slow performance of the NVMe drives means the R2 Max can’t fully utilise the networking capabilities if used as a NAS.

The R2 Max Wiki points out that Intel’s Alder Lake-N and Twin Lake processors only support 9 PCIe lanes. It makes the following point about the NVMe support: “The two SSDs share a PCIe 3.0 x1 bandwidth (one of which uses an ASM118X chip to share the x1 bandwidth), with a theoretical data transfer speed of 8GT/s, which equals 985MB/s (megabytes per second)”.

I find that statement rather misleading, so it is better to refer to the product specification, which clarifies the actual NVMe performance as: “Storage: 2 x M.2 2242/2280 NVMe SSD, PCIe 2.0 x 1”. This is further confirmed by the kernel’s message buffer, which states for each NVMe drive: “4.000 Gb/s available PCIe bandwidth, limited by 5.0 GT/s PCIe x1 link”.

However, what isn’t clear is why the NVMe drives, rather than the 2.5 GbE ports, were put behind the ASM1182e chips. Had the R2 Max been configured the other way around, each NVMe drive would get the full bandwidth of a PCIe 3.0 x1 lane, increasing the theoretical maximum data transfer speed to 985 MB/s. Moving the 2.5 GbE ports behind the ASM1182e chips would not affect their performance, as each would still get a PCIe 2.0 x1 lane with a theoretical maximum data transfer speed of 500 MB/s, which is more than a 2.5 GbE port can use.

My main concern is the issue I encountered with the zpool on the NVMe drives not being recognised, which subsequently forced a ZFS recovery from a snapshot. I don’t know if this is related in some way to the use of the ASM1182e chip, as I’ve not seen this problem before.

Another concern is the Out-Of-Memory killer stepping in to stop the NAS VM when transferring the 4 GiB test file. This isn’t a particularly large file, given that many ISO files exceed this size, like the current Windows 24H2 ISO. As a result, I wouldn’t recommend using the R2 Max as a NAS.

It is a shame the R2 Max is not available from Amazon, as it would make a very good network appliance. It is only available for purchase from iKOOLCORE.

I would like to thank iKOOLCORE for providing the review units.
