This year, I embarked on a project to assemble my very own home storage server. The result is a robust 32 TB system, meticulously built with open-source software to safeguard both my personal and business data. The total investment came to $1,263: $531 for the server components and $732 for four high-capacity hard drives. While comparable in price to pre-built, off-the-shelf storage servers, my DIY NAS delivers more power and far greater customization.
In this article, I’ll detail my journey through the part selection process, highlight the missteps I encountered, and offer actionable recommendations for anyone considering building their own NAS server.
Before and after: My 2022 homelab TrueNAS server construction
For those who prefer a visual walkthrough, I’ve also created a detailed video explanation on YouTube.
Background
Why Build a NAS Server?
NAS, or Network-Attached Storage, is fundamentally a server dedicated to storing data and ensuring its accessibility to other devices on your network. But why dedicate an entire server to data storage? Every computer, after all, inherently stores data.
I’ve found significant advantages in separating data storage from my primary computing systems. Upgrading my workstation and laptop every few years used to mean a cumbersome data migration. A dedicated NAS streamlines this, virtually eliminating data migrations and making it easy to share files across all my devices.
Furthermore, my data volume is substantial. Classifying myself as a data hoarder, I meticulously archive every digital photograph, every email from the past two decades, and the source code for all my personal projects. Currently, this archive occupies 8.5 TB and is still growing.
My most significant data source is my extensive DVD and Blu-Ray collection. Reluctant to rely solely on streaming services for access to my favorite content, I continue to purchase physical media. Upon acquiring a new disc, I immediately create a raw image rip and a streamable video file. The combination of the raw ISO and streamable MP4s for a single disc can consume up to 60 GB of storage.
I continue to purchase physical DVDs and Blu-Rays for content I anticipate re-watching.
What is a Homelab?
“Homelab” has become a popular term in recent years, describing a dedicated space within your home for experimenting with IT hardware and software typically found in professional office or data center environments. A homelab serves as an invaluable practice ground for honing professional skills or simply a playground for exploring intriguing technologies. For enthusiasts interested in server technology, a NAS is a common and practical homelab project.
Why Build Your Own NAS Server?
If you’re new to the homelab concept or lack experience in PC building, my initial advice is to reconsider building your own NAS server. Off-the-shelf solutions offer similar functionality with a considerably gentler learning curve.
Prior to constructing my homelab NAS, I relied on a 4-disk Synology DS412+ for seven years. Frankly, my Synology was an exceptional investment, providing a user-friendly introduction to NAS servers. For those uncertain about the NAS concept, I would wholeheartedly recommend starting with a Synology device.
My 10 TB Synology DS412+ served reliably for seven years.
However, a few months ago, my Synology failed to boot, accompanied by an ominous clicking sound. This incident served as a stark reminder of my dependence on a single device. Synology servers are not designed for user repair; if a component fails after the warranty expires, the whole server has to be replaced. Compounding this, my use of Synology’s proprietary storage format meant my data would be inaccessible without another Synology system. (Edit: A commenter on Hacker News pointed out the possibility of recovering a Synology Hybrid RAID volume from a non-Synology system.)
Fortunately, cleaning and reseating the disks revived my old Synology, but this close call prompted a critical decision. I resolved to transition to TrueNAS, attracted by its open-source nature and open storage format. Moving to a DIY NAS offered greater control and flexibility.
TrueNAS and ZFS
TrueNAS (formerly FreeNAS) stands out as a leading operating system for storage servers. Its open-source nature and nearly two-decade history have solidified its reputation as a dependable choice for a NAS server.
TrueNAS leverages ZFS, a filesystem architected specifically for storage servers. Unlike traditional filesystems such as NTFS or ext4, which sit atop a separate volume layer that handles low-level disk I/O, ZFS manages the entire stack, from file-level logic down to disk I/O. This comprehensive control grants ZFS superior power and performance compared to other filesystems, making it ideal for a robust NAS.
Key features of ZFS include:
- Aggregating multiple physical disks into a unified filesystem.
- Automatic data corruption repair.
- Point-in-time data snapshots (similar to macOS Time Machine).
- Optional data encryption and compression.
Prior to this project, my experience with ZFS was non-existent, making its exploration a compelling aspect of the build.
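To give a flavor of what working with ZFS looks like, here's a minimal Python sketch that scripts a dated snapshot through the zfs command-line tool. The tank/data dataset name is hypothetical, and TrueNAS has its own built-in periodic snapshot tasks, so treat this only as an illustration of the CLI underneath.

```python
"""Minimal sketch: create a dated ZFS snapshot by shelling out to the zfs CLI.

Assumes a hypothetical dataset named tank/data and that the zfs tool is on
the PATH, as it is on a TrueNAS system.
"""
import datetime
import subprocess

DATASET = "tank/data"  # hypothetical pool/dataset name


def take_snapshot(dataset: str) -> str:
    """Create a snapshot named after today's date, e.g. tank/data@2022-05-01."""
    snapshot = f"{dataset}@{datetime.date.today().isoformat()}"
    subprocess.run(["zfs", "snapshot", snapshot], check=True)
    return snapshot


if __name__ == "__main__":
    print(f"Created snapshot {take_snapshot(DATASET)}")
```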
Storage Planning
Estimating My Storage Capacity Needs
When initially setting up my Synology NAS, I installed three 4 TB drives, leaving the fourth slot vacant. This provided 7 TB of usable space with Synology Hybrid RAID. Three years later, nearing capacity, I added a fourth drive, expanding usable space to 10 TB.
For my new NAS build, I adopted a similar growth strategy, aiming for an initial 20 TB of usable storage with expansion capability up to 30 TB.
While ZFS doesn’t currently support adding drives to an existing raidz pool, this feature (raidz expansion) is actively being developed. Hopefully, it will land in TrueNAS by the time I need to grow my pool.
Update (2025-01-25): This feature is now implemented in the latest ZFS version, though I haven’t yet tested it in TrueNAS.
Many Small Disks or Fewer Large Disks?
ZFS’s resilience against disk failures relies on redundant data storage, which complicates capacity planning. Usable storage isn’t simply the sum of disk capacities.
ZFS organizes disks into “pools,” and storage efficiency increases with pool size. For instance, two 10 TB drives must mirror each other, so only half the total capacity is usable. Conversely, five 4 TB drives in raidz1 dedicate only one disk’s worth of space to redundancy, providing 14 TB of usable storage. Despite identical total disk space, the smaller drives offer a 40% increase in usable space.
Choosing between fewer large disks or more small disks is a key decision. Smaller drives are typically cheaper per TB but more expensive to operate, since each additional spindle draws power: two 4 TB drives consume roughly twice the power of a single 8 TB drive.
Prioritizing a minimal physical footprint for my NAS server, I opted for fewer, larger drives.
raidz 1, 2, or 3?
ZFS offers varying redundancy levels: raidz1, raidz2, and raidz3, differing primarily in robustness. raidz1 tolerates single-disk failure, raidz2 two simultaneous failures, and raidz3 up to three.
Increased robustness comes at the cost of usable storage. With five 4 TB drives, usable storage varies as follows:
ZFS Type | Usable Storage | % of Total Capacity |
---|---|---|
raidz1 | 15.4 TB | 77.2% |
raidz2 | 11.4 TB | 57.2% |
raidz3 | 7.7 TB | 38.6% |
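As a rough sanity check on those figures, usable capacity is approximately the raw capacity minus one disk’s worth of space per level of parity. The sketch below ignores ZFS metadata, padding, and the TB-vs-TiB distinction, so it slightly overestimates the table’s numbers.

```python
def raidz_usable_tb(num_disks: int, disk_tb: float, parity: int) -> float:
    """Approximate usable capacity: raw capacity minus the parity disks.

    Ignores ZFS metadata and padding overhead, so it slightly overestimates.
    """
    return (num_disks - parity) * disk_tb


# Five 4 TB drives at each raidz level (compare with the table above).
for parity in (1, 2, 3):
    print(f"raidz{parity}: ~{raidz_usable_tb(5, 4.0, parity):.0f} TB usable")
# raidz1: ~16 TB, raidz2: ~12 TB, raidz3: ~8 TB
```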
I selected raidz1 for my NAS. With a limited number of disks, the probability of simultaneous failures is low.
It’s crucial to remember that ZFS is not a backup solution. While ZFS protects against drive failure, it doesn’t mitigate threats like accidental deletion, malware, or theft. I employ restic to back up critical data to encrypted cloud storage.
ZFS’s value lies in avoiding a restore from cloud backups after a single drive failure; recovery from backups would still be necessary if multiple drives failed at once. The 20% storage sacrifice for raidz2 wasn’t justified for my setup.
With larger disk pools, adopting raidz2 or raidz3 becomes more prudent. For a 20-disk pool, I would likely opt for raidz2 or raidz3.
Preventing Concurrent Disk Failures
The probability of simultaneous disk failures may seem negligible. Backblaze’s data indicates annual failure rates of 0.5-4% for quality drives. A 4% annual risk translates to a 0.08% weekly chance. Two simultaneous failures might occur once every 30,000 years – seemingly reassuring, right?
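Here’s the back-of-the-envelope arithmetic behind that estimate, which only holds if disk failures were truly independent events:

```python
annual_failure_rate = 0.04                       # pessimistic 4% per drive per year
weekly_failure_rate = annual_failure_rate / 52   # chance a given drive fails this week

# Chance that two specific drives both fail in the same week, naively
# assuming the failures are independent of each other.
both_fail_same_week = weekly_failure_rate ** 2
years_between_events = 1 / both_fail_same_week / 52

print(f"Weekly failure chance per drive: {weekly_failure_rate:.3%}")
print(f"Expected time between simultaneous failures: {years_between_events:,.0f} years")
# Roughly 0.077% per week, and on the order of 30,000 years between simultaneous failures.
```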
However, disk failures aren’t statistically independent. A failing disk significantly elevates the failure risk for its neighbors, especially for disks of the same model, manufacturing batch, and workload history.
Furthermore, ZFS pool rebuilding intensifies strain on surviving disks. A disk near its lifespan under normal use might fail under the added stress of a rebuild process.
To mitigate the risk of concurrent failures, I took precautions. I chose two distinct disk models from different manufacturers and purchased them from separate vendors to reduce the likelihood of receiving disks from the same manufacturing batch. While the impact is uncertain, the minimal cost increase made it worthwhile.
Purchasing the same disk model from two vendors aimed to reduce the chance of receiving disks from the same batch.
How I Chose Parts
Motherboard
The motherboard size was my initial consideration. I appreciated the compact size of my Synology DS412+, and I had never built with a mini-ITX motherboard, so this project presented an ideal opportunity to try one.
I chose the ASUS Prime A320I-K for several reasons:
- Four SATA ports, accommodating four direct disk connections.
- Radeon graphics support, negating the need for a dedicated graphics card.
- Affordability at $98.
The ASUS Prime A320I-K features onboard graphics in a mini-ITX format.
Warning: I regret this motherboard choice. Further details are provided below.
I also considered a B450 board, which is similar but almost twice the price; its primary advantage is better overclocking support, which is unnecessary for a NAS.
CPU
From my research, ZFS isn’t CPU-intensive. Testing TrueNAS on a budget Dell OptiPlex 7040 mini PC confirmed minimal CPU usage, suggesting a low-power CPU would suffice.
My CPU selection criterion was Radeon graphics support for utilizing the A320 motherboard’s onboard HDMI output.
The AMD Athlon 3000G is economical and includes integrated graphics.
The AMD Athlon 3000G emerged as the ideal choice. Priced at $105, it offered great value, Radeon graphics, and respectable CPU benchmarks.
Case
For my previous VM server build, I used a Fractal Design case. Fractal Design is my favorite case brand, so I returned to it for this NAS project.
I selected the Fractal Design Node 304 Black, a compact mini-ITX case. Its cube-like design and six drive bays, allowing for initial capacity plus future expansion, appealed to me.
The Fractal Design Node 304 Black mini-ITX case accommodates six disks.
Disk (Data)
With six drive bays in the case, I decided to start with four 8 TB disks, providing 22.5 TB of usable raidz1 storage. Future expansion with a fifth disk would yield 30.9 TB, and a sixth, 37 TB.
In the 8 TB range, most drives operate at 7200 RPM or higher, with some reaching 10k RPM. For my NAS, speeds beyond 7200 RPM were unnecessary as network bandwidth is the bottleneck. 10k RPM drives would increase noise and power consumption without practical performance gains.
Initially, I consulted Backblaze’s hard drive stats to identify reliable drives, but the best-performing models were pricier. Paying $400 per drive for a 0.5% annual failure rate seemed disproportionate to the marginal reliability gain.
Avoiding shingled magnetic recording (SMR) was crucial: ZFS performs poorly on SMR drives, so I limited my search to CMR (conventional magnetic recording) models.
I selected the Toshiba N300 and Seagate IronWolf, both well-regarded in the TrueNAS forums and on Reddit. Priced at $180-190 each, they offered excellent value for the capacity I needed.
Toshiba N300 (left) and Seagate IronWolf (right) data drives
Disk (OS)
TrueNAS requires a dedicated OS disk, but its demands are minimal: the boot device needs only a handful of gigabytes, and OS disk reads and writes are infrequent in typical operation.
The Kingston A400 120 GB M.2 SSD is a great value at only $32.
The Kingston A400 120 GB M.2 SSD, priced at a mere $32, was an excellent, space-saving choice for the OS disk. M.2 drives simplify installation and eliminate cabling.
Memory
My research frequently cited the “rule” of 1 GB of RAM per TB of disk space for ZFS. However, ZFS developer Richard Yao has clarified that this rule is a myth. While features like data deduplication are RAM-intensive, ZFS functions effectively with limited memory for a home NAS.
Shopping for RAM felt mundane. There are no reliable RAM benchmarks or meaningful user reviews, so my selection process was simple:
- Reviewing RAM sticks compatible with the ASUS A320I-K motherboard.
- Filtering for 32 GB or 64 GB options in two sticks.
- Filtering for trusted brands (Corsair, Crucial, G.SKILL, Kingston, Samsung, Patriot, Mushkin, HyperX).
- Filtering for options under $150.
This led to the CORSAIR Vengeance LPX 32GB CMK32GX4M2A2400C14 (2 x 16GB) at $128, a suitable choice.
The CORSAIR Vengeance LPX 32GB CMK32GX4M2A2400C14 (2 x 16GB) is compatible with the A320I-K motherboard and reasonably priced for 32 GB.
Power Supply Unit (PSU)
Power capacity wasn’t critical; PCPartPicker estimated my system at 218 W. A 300-400 W PSU would have sufficed, but semi-modular options were scarce at lower wattages, so I chose the 500 W EVGA 110-BQ-0500-K1.
The EVGA 110-BQ-0500-K1 is a semi-modular PSU, providing ample power for this NAS build.
90-Degree SATA Cables
90-degree SATA cables were a novel purchase for me. There wasn’t enough clearance between the motherboard’s SATA ports and the PSU for standard SATA connectors, so slim 90-degree cables solved the problem.
The tight PSU-motherboard clearance necessitated slim 90-degree SATA cables.
What’s Missing?
Several components were intentionally omitted from this build for reasons of cost, complexity, or space.
Graphics Card (GPU)
Given space limitations and motherboard port constraints, a dedicated graphics card was undesirable. I opted for a motherboard and CPU combination supporting onboard graphics.
Host Bus Adaptor (HBA)
Many NAS builds include a host bus adaptor (HBA) to increase motherboard disk support via a PCI slot.
Many HBAs also need their firmware reflashed before they work well with ZFS, a reportedly fiddly process that deterred me. The ASUS A320I-K’s four SATA ports sufficed for my immediate needs, and I reserved the PCI slot for a future HBA if required.
ECC RAM
Researching TrueNAS builds revealed frequent recommendations for ECC RAM (error-correcting code RAM) to prevent data corruption. Ultimately, I decided against ECC and used standard consumer-grade RAM.
While data corruption is a real concern, I’ve spent 30 years computing without ECC RAM and without noticeable corruption, so I believe consumer-grade RAM is adequate for a home NAS, unlike a high-load, multi-user server environment.
SLOG Disk
Many ZFS setups include a separate SSD, the SLOG (Separate Intent Log), for better write performance.
SSDs offer significantly faster write speeds than spinning disks. ZFS can quickly log incoming writes to the SLOG SSD, acknowledge the write to the application, and then flush the data to the main storage pool later. This can improve synchronous write speeds considerably.
However, port and drive bay limitations led me to omit a SLOG disk. Adding one would mean sacrificing either the PCI slot or a drive bay, and prioritizing future storage expansion seemed more important.
Parts List
Category | Component | Price Paid |
---|---|---|
CPU | AMD Athlon 3000G | $105.13 |
Motherboard | ASUS Prime A320I-K* | $97.99 |
Graphics | Onboard graphics (motherboard native support) | $0 |
Disk (OS) | Kingston A400 120GB | $31.90 |
Memory | CORSAIR Vengeance LPX 32GB CMK32GX4M2A2400C14 (2 x 16GB) | $127.99 |
Power | EVGA 110-BQ-0500-K1 500W 80+ Bronze Semi-Modular | $44.99 |
Case | Fractal Design Node 304 Black | $99.99 |
SATA cables | Silverstone Tek Ultra Thin Lateral 90 Degree SATA Cables (x2) | $22.30 |
Total (without disks) | | $530.29 |
Disk (Storage) | Toshiba N300 HDWG480XZSTA 8TB 7200 RPM (x2) | $372.79 |
Disk (Storage) | Seagate IronWolf 8TB NAS Hard Drive 7200 RPM (x2) | $359.98 |
Total | | $1,263.06 |
* Caveat: This motherboard might require a BIOS update for AMD Athlon 3000G CPU compatibility. Details below.
Compared to Off-the-Shelf Products
For comparison, here are similarly priced off-the-shelf NAS solutions:
Product | 2022 Budget NAS | Synology DS920+ | QNAP TS-473A-8G-US |
---|---|---|---|
Disk bays | 6 | 4 | 4 |
RAM | 32 GB | 4 GB | 4 GB |
Max RAM | 32 GB | 8 GB | 8 GB |
CPU benchmark | 4479 | 3002 | 4588 |
Price | $530.29 | $549.99 | $549 |
While the total cost is comparable, my DIY NAS offers significantly more value: 8x the RAM and freedom from a closed-source, vendor-specific OS.
Build Photos
All components in their retail boxes ready for assembly.
Motherboard installation in the Fractal Design mini-ITX case was straightforward.
M.2 SSD installation is incredibly simple – one screw and it’s done, no wires or rails needed.
This case is unusual in that the PSU’s rear faces the inside of the case, so an extension cable routes power from the case’s external power inlet to the PSU.
The tight space between motherboard SATA ports and PSU necessitated special slim 90-degree SATA cables.
All motherboard components connected except for the CPU fan.
The finished DIY NAS server setup.
Building the Server with TinyPilot
I’m the creator of TinyPilot, a Raspberry Pi-based device for managing servers remotely. This project marked my third server build using TinyPilot and my first with the new TinyPilot Voyager 2.
The TinyPilot Voyager 2 managed the installation process, eliminating the need for direct keyboard, mouse, and monitor connections.
Building with the Voyager 2 was enjoyable and efficient. Managing the installation via web browser, from BIOS access to mounting the TrueNAS installer image, was seamless. No physical keyboard or monitor connections were needed for this build.
TinyPilot enabled mounting the TrueNAS installer ISO, streamlining the OS installation process without USB drives or direct peripherals.
One limitation I encountered was BIOS upgrading. TinyPilot supports .img and .iso disk images but not direct file sharing, so for the ASUS BIOS .CAP file, I resorted to a USB drive, deviating from a purely TinyPilot-driven build. Future TinyPilot updates aim to address this scenario.
Is This BIOS Version Incompatible? Or Am I an Idiot?
Upon assembly, the system powered on but displayed no video output.
Initial panic set in – had I misread the motherboard’s onboard video capabilities? Standard diagnostics (reseating the RAM, reseating the CPU, checking cables) yielded no change.
Google searches revealed ASUS Prime A320I-K BIOS upgrade requirements for Athlon 3000G compatibility. I vaguely recalled this warning during part selection but dismissed it, assuming BIOS updates were routine. I overlooked the challenge of BIOS updating without a compatible CPU.
Fortunately, the Ryzen 7 CPU from my 2017 homelab VM server was compatible with the ASUS Prime A320I-K. Borrowing the CPU and GPU from that server, I successfully booted the new system and accessed the BIOS.
Parts from my older 2017 homelab VM server facilitated the BIOS upgrade process.
Surprisingly, even with borrowed parts, the motherboard reported BIOS version 2203, which ASUS claimed was compatible with the AMD Athlon 3000G. Nevertheless, I updated to the latest BIOS, version 5862.
The ASUS Prime A320I-K CPU compatibility page specifies Athlon 3000G support starting from BIOS version 2203.
After upgrading to 5862 and reinstalling the Athlon 3000G, the system still showed no video output. Then I realized the HDMI cable was mistakenly plugged into the DisplayPort output.
The design similarity between DisplayPort and HDMI connectors can lead to accidental misplugs.
Was the parts-borrowing process necessary? Two possibilities arose:
- Human error: HDMI cable in DisplayPort, corrected post-BIOS upgrade.
- ASUS inaccuracy: Athlon 3000G incompatibility with BIOS 2203 despite ASUS’s claim.
While self-blame is the usual explanation, ASUS’s BIOS flakiness suggested the fault could be on their side. Regardless, booting the NAS without borrowed parts was a relief.
The moment of achieving the first boot screen with the Athlon 3000G installed.
Performance Benchmarks
Surprisingly, finding robust NAS benchmarking tools proved difficult. Tools exist for benchmarking local disk I/O, but they don’t reflect real-world network usage, which is what matters for a NAS; they would miss network bottlenecks entirely.
I devised a rudimentary benchmark. I generated two sets of random file data with dummy_file_generator, then used robocopy to measure read and write speeds between my desktop and the NAS. This wasn’t rigorous – no isolated network, no shutting down other processes. I ran the same tests against my old Synology DS412+ for comparison.
File sets: 20 GiB of 1 GiB files, and 3 GiB of 1 MiB files. Averaged over three trials, across encrypted and unencrypted volumes.
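The exact tools were Windows-specific, but here's a rough Python equivalent of a single write trial, assuming the NAS share is mounted at a hypothetical /mnt/nas path: generate a 1 GiB file of random data and time the copy.

```python
"""Rough sketch of one write-throughput trial against a mounted NAS share.

The /mnt/nas mount point is hypothetical; the real benchmark used
dummy_file_generator and robocopy on Windows.
"""
import os
import shutil
import time

NAS_PATH = "/mnt/nas"        # hypothetical mount point of the NAS share
FILE_SIZE = 1024 ** 3        # one 1 GiB test file
CHUNK = 64 * 1024 ** 2       # write the random data in 64 MiB chunks

with open("testfile.bin", "wb") as f:
    for _ in range(FILE_SIZE // CHUNK):
        f.write(os.urandom(CHUNK))

start = time.time()
shutil.copy("testfile.bin", NAS_PATH)
elapsed = time.time() - start
print(f"Write throughput: {FILE_SIZE / 1024 ** 2 / elapsed:.1f} MiB/s")
```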
Performance capped at 111 MiB/s (931 Mbps), suspiciously close to 1 Gbps, suggesting my network hardware is the limit (the switch, desktop, and NAS all have 1 Gbps Ethernet ports).
Read Performance
For unencrypted volumes, surprisingly, my 7-year-old Synology outperformed the new TrueNAS build. Synology was 31% faster reading small files and 10% faster with large files.
Synology’s lead vanished with encryption. Synology read speeds plummeted by 67-75% on encrypted volumes, whereas TrueNAS was unaffected. TrueNAS then outperformed Synology by 2.3x (small files) and 3x (large files) on encrypted volumes, which is more representative of my typical usage.
Write Performance
Synology’s read performance advantage didn’t extend to writes. Even unencrypted, TrueNAS was 77% faster on small files, with comparable performance on 1 GiB files.
Encryption decimated Synology’s write performance. With encryption, TrueNAS was 5.2x faster on small files and 3.2x faster on large files, making it clearly superior for encrypted storage.
Power Consumption
Using a Kill A Watt P4460 meter, I measured power consumption for both NAS systems:
Measurement | Synology DS412+ | 2022 NAS |
---|---|---|
Idle | 38 W | ~60 W |
Load | 43 W | |
The new NAS consumes about 60% more power, which was a bit unexpected. At $0.17/kWh, it costs roughly $7.20 per month to run.
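The running cost follows from simple arithmetic on the measured idle draw (using a round 60 W here, which lands close to the ~$7.20 figure above):

```python
idle_watts = 60          # approximate idle draw of the new NAS
price_per_kwh = 0.17     # my electricity rate in $/kWh

kwh_per_month = idle_watts / 1000 * 24 * 30
print(f"~${kwh_per_month * price_per_kwh:.2f} per month")  # ~$7.34
```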
I’m not sure what accounts for the extra power draw, but PSU inefficiency is one possibility. Synology presumably uses a PSU sized for the system, while my 500 W PSU may be inefficient when running at a small fraction of its rated capacity.
Final Thoughts
Motherboard
My main issue with the ASUS Prime A320I-K was the CPU compatibility confusion described above, though user error may have played a part.
Beyond that, the BIOS was underwhelming. The built-in upgrade utility was broken: it reported the BIOS as up to date when it was several versions behind, so I had to update manually from a downloaded file on a thumb drive.
ASUS EZ Flash incorrectly reported BIOS 2203 as the latest version, while website version 5862 necessitated manual update.
Also, I missed that the A320I-K maxes out at 32 GB of RAM, which limits future memory upgrades.
Fixing the Realtek Networking Driver
My Ethernet adapter would intermittently fail under heavy network load. Thanks to /u/trevaar on Reddit, I learned that the FreeBSD driver for the A320I-K’s Realtek NIC has stability issues; the fix is to load the official Realtek driver via these TrueNAS Tunables:
- System > Tunables in TrueNAS web dashboard.
- Add these settings:
Variable | Value | Type |
---|---|---|
if_re_load | YES | loader |
if_re_name | /boot/modules/if_re.ko | loader |
Case
The Fractal Design Node 304 was disappointing. My prior Fractal Design experiences were positive, but this case felt awkward to work with, with sparse documentation and unintuitive mechanisms.
Being my first mini-ITX build, size constraints might explain some issues, but the overall experience was less than ideal.
CPU
The Athlon 3000G is more than sufficient; TrueNAS reports 99% CPU idle time over the past month:
TrueNAS uses minimal CPU for basic NAS operations.
Its Radeon graphics, which eliminated the need for a dedicated GPU, were the key factor. At $105, it was a cost-effective choice.
Disk (Data)
It’s too early to judge long-term disk reliability, but the drives have performed well so far.
Noise is negligible; I only notice the disks during performance benchmarks, especially file deletions.
Power Supply Unit (PSU)
The 60 W idle power consumption raises questions about PSU efficiency. A lower-capacity PSU (300-400 W) might reduce idle power draw.
Disk (OS)
The Kingston A400 performs adequately. TrueNAS places little load on the OS disk, and with 90 GB still free, an even smaller drive would have sufficed.
Disk activity is minimal; the weekly reads from scheduled error checks are the main activity.
TrueNAS OS disk activity is minimal after boot.
TrueNAS
I’m running TrueNAS Core 13 (FreeBSD-based), the more mature version. TrueNAS Scale (Debian-based) offers broader hardware and software compatibility.
Synology’s web UI is unmatched in elegance and intuitiveness. TrueNAS, while functional, is a usability downgrade, feeling command-line oriented.
Synology’s web interface (left) is significantly more user-friendly than TrueNAS (right) for NAS server management.
Creating volumes and network shares in TrueNAS was less intuitive, requiring navigation across disconnected menus without clear guidance. Synology’s UI provides a smoother, guided workflow for NAS server configuration.
Third-party app installation is also more complex in TrueNAS. Even Plex Media Server, which is available as a pre-configured plugin, took significant effort to install compared to Synology’s simple wizard-driven process.
Despite the usability challenges, I’m committed to TrueNAS for its open-source nature and platform independence. For less ideologically driven users, Synology remains a strong recommendation.
ZFS
ZFS is powerful, but beyond RAID, I haven’t used most of its features.
Snapshots are available but haven’t been necessary, given my restic backups. Encrypted snapshots, however, are intriguing for securely backing up data I rarely need to access.
Overall
Overall, the new NAS is satisfactory, and the build process was educational. Without prior Synology experience, the DIY NAS journey might have been overwhelming. Synology provided a valuable introduction, and now I’m ready to explore ZFS and TrueNAS’s more advanced features.
Video
[YouTube video link about the NAS build]
2.5-Year Update
As of November 2024, after 2.5 years of using the NAS, here are some updated reflections.
Still Happy with the NAS
I’m still happy with the NAS. I miss Synology’s user experience, but TrueNAS gives me a greater sense of control.
One of My Toshiba N300 Disks Started Clicking
After 18 months, one of my Toshiba N300 disks began clicking. SMART tests showed no errors, but I replaced it preemptively with an 8 TB Seagate IronWolf. No issues since.
Switched to a Rack-Mounted Chassis
A year after the build, I bought a server rack and began migrating my infrastructure into it.
For the NAS, I purchased a Sliger CX3701 10-bay server chassis. I’d recommend it for HBA-based setups, but if the single PCI slot goes to graphics or 10 GbE instead, the mini-ITX motherboard’s four SATA ports limit you to four drives.
Switched to TrueNAS Scale
Because development focus has shifted to TrueNAS Scale, I switched from TrueNAS Core. Scale is based on Debian Linux; Core is based on FreeBSD.
I’ve noticed minimal difference, aside from a slightly improved web UI in Scale and a more comfortable terminal, since I’m more familiar with Linux than FreeBSD.
Added a 10 Gbps Fiber NIC
Moving to the server rack included a 10 Gbps switch, so I added a 10 Gbps NIC to the NAS.
I initially ran into compatibility issues: three different NICs failed to work.
I suspected the motherboard, so I upgraded to a Gigabyte B550I Aorus Pro (see below), which resolved compatibility with a Mellanox ConnectX-3 EN CX311A NIC.
Configuring the 10 Gbps NIC in TrueNAS was also challenging: the system kept defaulting to the onboard LAN, and moving my static IP to the 10 Gbps NIC clashed with an IP address that Kubernetes was using. I had to disable the onboard LAN in the BIOS and change the IP address before the NAS would run at 10 Gbps.
Switched to Gigabyte B550I Aorus Pro Motherboard
I replaced the ASUS Prime A320I-K with the Gigabyte B550I Aorus Pro AX to fix the 10 Gbps NIC compatibility issue.
Minor improvements:
- Pros: Integrated I/O shield, upward-facing SATA ports (no right-angle connectors needed), easier RAM/CPU slot access, convenient fan pin placement.
- Cons: Confusing M.2 slot (poor documentation), slower BIOS boot times.
Regret: Mini-ITX Limits Expansion
My biggest regret is choosing a mini-ITX case and motherboard.
Mini-ITX motherboards have a single PCI slot and typically only four SATA ports (I couldn’t find one with more ports and onboard graphics). Using an HBA to support more than four disks consumes the one PCI slot, leaving no room for other upgrades like 10 GbE.
With the 10 Gbps network card installed, I’m now limited to four disks unless I replace the chassis and motherboard.
For a future build, I would choose a rack-mounted chassis with six to eight 3.5″ drive bays and a motherboard with multiple PCI slots or at least eight SATA ports for a more expandable NAS.
Thanks to the Blogging for Devs Community members for early post feedback.