When you read my blog articles you may get the idea that everything I do just happens to be right and that I succeed at every attempt. This article is here to remind you that I also often fail trying to do things that were supposed to be great ‘on paper’. Some call it experience … but the problem with experience is that you get it just after you needed it.
Out of Space
While I was relatively happy with my earlier backup box – the Perfect NAS Solution described here – it had one drawback. Space … the lack of it. I did not want to invest in an 8 TB NVMe SSD – so I used a 4 TB NVMe SSD 2280 and a 2 TB NVMe SSD 2230 as this AMD Ryzen based box had only two M.2 slots for storage … and a 4 TB 2230 SSD is also very expensive.
% df -g /data
Filesystem 1G-blocks Used Avail Capacity  Mounted on
data/data       7311 4833  2478     66%   /data

% zpool list data
NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
data  7.27T  4.72T  2.55T        -         -     1%    64%  1.00x    ONLINE  -

As I got 2 TB and 4 TB drives I created two independent ZFS pools on them and then created the needed datasets for the needed directories under the /data dir … but as time passed some of them grew too much – like /data/download for example … so I started to manually move these datasets between these SSDs … and that started to require too much micromanagement that I did not want to waste time on.
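Just to show what that micromanagement looked like – below is a minimal sketch of moving one dataset from one pool to the other with ZFS send/receive – the ssd2 and ssd4 pool names are hypothetical placeholders and not the real ones from that box:

# snapshot the dataset on the pool that ran out of space (hypothetical ssd2 pool)
zfs snapshot -r ssd2/download@move

# replicate it with all its properties to the larger pool
zfs send -R ssd2/download@move | zfs receive ssd4/download

# keep the old mountpoint and get rid of the source copy
zfs set mountpoint=/data/download ssd4/download
zfs destroy -r ssd2/download

Doable – but doing that every time one directory outgrows its SSD gets old fast.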
Perfect Hardware for the Job
After checking what is available on the market I decided to get a new small box – this time Intel N100 (or N150) based – with multiple M.2 slots. I was able to find a computer even smaller than the AMD Ryzen box that has not four M.2 slots but FIVE of them.
The fifth one is hidden inside and nicely fits the 2 TB NVMe SSD 2230 that I already had – I also got four 2 TB 2280 ones and 16 GB of RAM as seen in the picture above. The system is not fanless – but a small occasional fan does not hurt much – the AMD Ryzen box also had a fan that almost never started – and it had a much larger TDP than the 6 W TDP of the Intel N100 CPU.
This little gem even has an additional USB-A port inside – so you can fit a Lexar S47 32 GB USB pendrive there – not to mention TWO Intel 2.5 GbE network cards.
Here is how the Maiyunda M1S compares against the previous GenMachine solution.
Keep it Cool
The only thing I was worried about was cooling of the NVMe SSDs – keeping them from getting too hot – and while the box has two 2.5 GbE ports (supported on FreeBSD by the igc(4) driver) – speeds like 50-70 MB/s are more than enough for my needs (assuming the LAN is used) – as I am also used to 10-11 MB/s when WiFi is involved … and I came prepared when it comes to cooling.
I attached heatsinks to all of the SSDs – the internal one got a smaller heatsink (and it ran hotter) but the top ones got a really nice piece of aluminum on them – attached using a 15 W/mK silicone thermal pad.
Huge Metal Fan
While I prefer passively cooled solutions – it is not always possible to get all the features at a decent price in a fanless package.
After tweaking various BIOS settings I came to the Hardware Monitor section for the thermal related stuff … and it seems that even that small Intel N100 with its 6 W TDP can get REALLY hot. After messing with the settings of the internal fan – and keeping it running all the time at about 3000 RPM – I settled on about 60°C.
The temperatures reported with sensors(8) were high but not problematic. As you can expect the internal NVMe SSD was a little warmer.
# sensors

BATTERY/AC/TIME/FAN/SPEED
------------------------------------
dev.cpu.0.cx_supported: C1/1/0
dev.cpu.0.cx_usage: 100.00% last 6353us
dev.cpu.0.freq: 800
hw.acpi.cpu.cx_lowest: C1
powerd(8): running

SYSTEM/TEMPERATURES
------------------------------------
hw.acpi.thermal.tz0.temperature: 27.9C (max: 110.1C)
dev.cpu.0.temperature: 68.0C (max: 105.0C)
dev.cpu.1.temperature: 67.0C (max: 105.0C)
dev.cpu.2.temperature: 67.0C (max: 105.0C)
dev.cpu.3.temperature: 67.0C (max: 105.0C)

DISKS/TEMPERATURES
------------------------------------
smart.nvme0.temperature: 74.0C
smart.nvme1.temperature: 39.0C
smart.nvme2.temperature: 38.0C
smart.nvme3.temperature: 39.0C
smart.nvme4.temperature: 38.0C
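The sensors command used above is a separate tool – but roughly the same numbers can be pulled straight from sysctl(8) and smartctl(8) – a minimal sketch below, assuming the smartmontools package is installed for the NVMe part:

#!/bin/sh
# crude temperature watch - CPU cores via coretemp(4) sysctls
# and NVMe composite temperatures via smartctl(8)
while true
do
  sysctl dev.cpu.0.temperature dev.cpu.1.temperature \
         dev.cpu.2.temperature dev.cpu.3.temperature
  for D in 0 1 2 3 4
  do
    smartctl -a /dev/nvme${D} | grep -i '^Temperature:'
  done
  echo
  sleep 10
done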
I am Speed

This is how all the disks look when listed with the lsblk(8) command.
# lsblk -d
DEVICE         SIZE MODEL
da0             29G Lexar USB Flash Drive
nda0           1.9T WD PC SN740 SDDPTQE-2T00
nda1           1.9T ADATA SX8200PNP
nda2           1.9T ADATA SX8200PNP
nda3           1.9T ADATA SX8200PNP
nda4           1.9T ADATA SX8200PNP
-               10T TOTAL SYSTEM STORAGE

A quick ‘benchmark’ of the NVMe SSD drives using diskinfo(8) is shown below.
# for I in 0 1 2 3 4; do diskinfo -vt nda${I}; echo; done | grep -e nda -e side
nda0
        outside: 102400 kbytes in 0.137933 sec = 742389 kbytes/sec
        inside:  102400 kbytes in 0.136937 sec = 747789 kbytes/sec
nda1
        outside: 102400 kbytes in 0.136848 sec = 748275 kbytes/sec
        inside:  102400 kbytes in 0.135698 sec = 754617 kbytes/sec
nda2
        outside: 102400 kbytes in 0.136665 sec = 749277 kbytes/sec
        inside:  102400 kbytes in 0.135783 sec = 754144 kbytes/sec
nda3
        outside: 102400 kbytes in 0.190700 sec = 536969 kbytes/sec
        inside:  102400 kbytes in 0.135555 sec = 755413 kbytes/sec
nda4
        outside: 102400 kbytes in 0.136825 sec = 748401 kbytes/sec
        inside:  102400 kbytes in 0.135868 sec = 753673 kbytes/sec
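Keep in mind that diskinfo(8) only does simple sequential reads – if I wanted a more serious number I would probably reach for fio from packages instead – a minimal hedged sketch of a read-only random read test (safe to run on a disk that already holds data):

# random 4k reads for 10 seconds against one of the drives (read-only test)
fio --name=randread-nda0 --filename=/dev/nda0 --readonly \
    --rw=randread --bs=4k --ioengine=posixaio \
    --runtime=10 --time_based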
ZFS Part

So … I had the system running – I had the drives attached – I created the ZFS pool … and for the first time I decided that ZFS native encryption is good enough – so I did not use geli(8) this time. The plan was to use a RAID5 (raidz) setup here – so I would have some redundancy again.
# zpool create data raidz nda0 nda1 nda2 nda3 nda4

# zpool list data
NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
data  9.31T   912K  9.31T        -         -     0%     0%  1.00x    ONLINE  -

# zpool status data
  pool: data
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        data        ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            nda0    ONLINE       0     0     0
            nda1    ONLINE       0     0     0
            nda2    ONLINE       0     0     0
            nda3    ONLINE       0     0     0
            nda4    ONLINE       0     0     0

errors: No known data errors

# zfs set recordsize=1m data
# zfs set compression=zstd data
# zfs set atime=off data
# zfs set mountpoint=none data
# zfs create -o encryption=on -o keyformat=passphrase -o keylocation=prompt data/data
# zfs set mountpoint=/data data/data
# zfs mount -a

… but one of the NVMe SSD 2280 drives came broken – lots of read/write errors and an entirely ‘broken’ S.M.A.R.T. report.
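Side note on the encryption part – as data/data uses a passphrase key it will not mount on its own after a reboot – a minimal sketch of unlocking it by hand, assuming the dataset name used above:

# load the passphrase key (asks for it interactively) and mount the dataset
zfs load-key data/data
zfs mount data/data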
Broken Drive and Resilver
That allowed me to test the ZFS resilver on these drives – you can see for yourself how it went below.
# zpool status
  pool: data
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid.  Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J
  scan: scrub repaired 768K in 00:02:50 with 0 errors on Sun May 18 13:29:47 2025
config:

        NAME                      STATE     READ WRITE CKSUM
        data                      DEGRADED     0     0     0
          raidz1-0                DEGRADED     0     0     0
            nda0                  ONLINE       0     0     0
            13389973369551797347  UNAVAIL      0     0     0  was /dev/nda1
            nda2                  ONLINE       0     0     0
            nda3                  ONLINE       0     0     0
            nda4                  ONLINE       0     0     0

errors: No known data errors

# zpool replace data 13389973369551797347 /dev/nda1

# zpool status data
  pool: data
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Wed May 21 20:33:21 2025
        198G / 198G scanned, 3.95G / 197G issued at 1011M/s
        806M resilvered, 2.00% done, 00:03:15 to go
config:

        NAME                        STATE     READ WRITE CKSUM
        data                        DEGRADED     0     0     0
          raidz1-0                  DEGRADED     0     0     0
            nda0                    ONLINE       0     0     0
            replacing-1             DEGRADED     0     0     0
              13389973369551797347  UNAVAIL      0     0     0  was /dev/nda1/old
              nda1                  ONLINE       0     0     0  (resilvering)
            nda2                    ONLINE       0     0     0
            nda3                    ONLINE       0     0     0
            nda4                    ONLINE       0     0     0

errors: No known data errors

# zpool status data
  pool: data
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Wed May 21 20:33:21 2025
        198G / 198G scanned, 95.0G / 197G issued at 1.22G/s
        19.1G resilvered, 48.16% done, 00:01:23 to go
config:

        NAME                        STATE     READ WRITE CKSUM
        data                        DEGRADED     0     0     0
          raidz1-0                  DEGRADED     0     0     0
            nda0                    ONLINE       0     0     0
            replacing-1             DEGRADED     0     0     0
              13389973369551797347  UNAVAIL      0     0     0  was /dev/nda1/old
              nda1                  ONLINE       0     0     0  (resilvering)
            nda2                    ONLINE       0     0     0
            nda3                    ONLINE       0     0     0
            nda4                    ONLINE       0     0     0

errors: No known data errors

# gstat -p -I 1s
dT: 1.009s  w: 1.000s
 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
    0   1171   1171 274041  0.451      0      0  0.000   41.3| nda0
    0   1235      0      0  0.000   1235 275531  0.362   36.9| nda1
    0   1164   1164 274283  0.407      0      0  0.000   38.4| nda2
    0   1159   1159 274295  0.410      0      0  0.000   38.5| nda3
    0   1161   1161 274263  0.409      0      0  0.000   38.4| nda4
    0      2      2     16  0.901      0      0  0.000    0.2| da0

dT: 1.002s  w: 1.000s
 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
    0   1165   1132 265890  0.450     31    148  0.038   39.9| nda0
    2   1214      0      0  0.000   1212 265882  0.352   36.1| nda1
    1   1172   1137 265942  0.407     33    152  0.038   37.7| nda2
    1   1166   1133 265902  0.408     31    148  0.042   37.7| nda3
    1   1171   1129 265698  0.411     40    180  0.041   37.7| nda4
    0      0      0      0  0.000      0      0  0.000    0.0| da0

# zpool status data
  pool: data
 state: ONLINE
  scan: resilvered 39.6G in 00:02:50 with 0 errors on Wed May 21 20:36:11 2025
config:

        NAME        STATE     READ WRITE CKSUM
        data        ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            nda0    ONLINE       0     0     0
            nda1    ONLINE       0     0     0
            nda2    ONLINE       0     0     0
            nda3    ONLINE       0     0     0
            nda4    ONLINE       0     0     0

errors: No known data errors

So it went pretty fast while the temperatures remained close to what you have seen earlier.
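As this box will mostly just sit there and receive backups – it is also worth letting FreeBSD scrub the pool regularly – a minimal sketch using the stock periodic(8) knobs in /etc/periodic.conf – the 35 days threshold below is just an example value:

# /etc/periodic.conf - scrub ZFS pools from the daily periodic(8) run
daily_scrub_zfs_enable="YES"
daily_scrub_zfs_pools="data"
# start a new scrub only when the last one is older than 35 days
daily_scrub_zfs_default_threshold="35"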
The Ugly
… but after some changes in the BIOS – like disabling the integrated GPU – I needed to reset the BIOS settings altogether … and this is the part where things went south really fast.
After the reset I went into the BIOS to set the fan to run at about 3000 RPM again … but that setting seems to be entirely gone now … and that small box – while doing nothing – reached temperatures as high as 90°C … which I consider really bad – I was not even able to hold the computer case without gloves.
When I entered the BIOS this is what I saw.
I was out of ideas and really disappointed – so I took a picture of that BIOS and created a ‘send item back and get money back’ request on the Aliexpress portal – where I got it. To be honest I did not expect much – more like a long battle to prove that something was really wrong.
My ‘problem report’ was not long – just a simple description of what was wrong:
“Hello. The FAN will just not start and I get some crazy temperatures like 89/82 Celsius – this Mini PC is hot as fuck – and its doing nothing – its crazy for a 6W TDP Intel N100 CPU.”
Get Money Back
I was trying to be polite – while I was also very angry because such high temperatures on a 6W TDP CPU are insane …
… and to my surprise the request was accepted the next day – they offered a free return shipment of the hardware and promised to send all the money back – at least that part of the story was successful.
Summary
When you want to do something ambitious – there is a bigger chance that you will fail … and I failed here … or should I say the hardware failed me.
Some say that if you learn from a failure then it is not really a failure but a valuable lesson – and I treat this one exactly like that.
Next plans? I already ordered another Intel N100 box (actually an N150 as they ran out of N100 devices) with 4-5 M.2 slots – and I will share how that story went in another article …
EOF