For half a decade, virtual machines (VMs) have been central to my software development workflow. Each project runs in its own dedicated VM, which eliminates dependency conflicts and TCP port collisions and makes development significantly more reliable.
Three years ago, I expanded on this strategy by building a homelab server dedicated to VM hosting, documented in “Building a VM Homelab in 2017”. The investment proved invaluable, accelerating development tasks and bolstering overall system stability.
However, recent project growth and increased resource demands began to strain the original server, and early build decisions that were adequate at first started to reveal their limitations. Recognizing the need for more performance and scalability, I built a completely new homelab VM server for 2020, designed to handle modern development workloads.
[Image: Components for the new VM homelab build: CPU, motherboard, RAM, SSD, PSU, GPU, case, and CPU coolers.]
Get to the Build Details Directly!
If the background and rationale aren’t your focus, you can skip directly to the detailed server build.
The Rationale Behind a Dedicated VM Server
Initially, I used VirtualBox to run VMs directly on my Windows desktop. This worked for a while, but system reboots became increasingly disruptive.
Frequent forced reboots due to Windows Updates, necessary restarts for software installations, and occasional operating system crashes meant my suite of development VMs was interrupted three to five times monthly. This inconsistency hampered workflow and productivity.
A dedicated VM server mitigates most reboot-related disruptions. Running a streamlined host OS makes crashes and mandatory reboots infrequent, ensuring greater uptime and stability for development environments.
Understanding the “Homelab” Concept
“Homelab” is a popular term for servers built at home rather than in corporate or data center settings. Homelab servers are functionally identical to professional servers but are deployed in a home environment. They make excellent low-pressure environments for practicing server technologies and configurations before applying those skills professionally.
Why Not Opt for Cloud Computing?
Cloud-based servers offer a seemingly simpler solution, eliminating hardware maintenance. However, cloud resources comparable to my homelab server’s capabilities are prohibitively expensive: equivalent VM resources on AWS EC2 instances would cost an estimated $6,000+ per year, a significant barrier when the goal is always-on availability for development and testing.
[Image: AWS pricing calculator estimating over $6,000 per year for EC2 instances comparable to the homelab server.]
Cloud costs could be reduced by dynamically starting and stopping instances, but that introduces undesirable workflow friction. A local VM server keeps 10-20 VMs running continuously, readily available without constant cost-management concerns: always-on accessibility that keeps the development workflow fluid.
Lessons from the Previous Build
My 2017 server provided valuable service, but three years of operation highlighted key areas ripe for improvement. These insights informed the design and component selection for the new server.
1. Prioritize Local Storage
My Synology NAS, with 10.9 TB of capacity, initially seemed like ample network storage. The logic was to minimize local server disk space, relying primarily on the NAS.
[Image: Synology NAS showing 10.9 TB of capacity, the network storage the VMs initially relied on.]
This decision proved to be a critical misstep.
Firstly, VM operation became strictly dependent on the NAS. Synology’s frequent OS updates, each requiring a reboot, meant shutting down the entire VM fleet hosted on NAS storage, reintroducing exactly the reboot-related downtime I had aimed to avoid by moving away from desktop-hosted VMs.
[Image: Synology DSM upgrade screen; frequent storage-server OS updates meant shutting down every VM hosted on the NAS.]
Secondly, random disk access over the network was significantly slower. Backend Python and Go development, which dominated during the first build, didn’t rely heavily on disk I/O, but my shift to frontend web development changed the landscape. Modern web frameworks, often built on Node.js, involve projects with tens to hundreds of thousands of JavaScript files, and Node.js builds perform intense random disk access, a worst-case scenario for network storage.
2. Choose Superior VM Management Software
For the initial server, VM management software evaluation narrowed down to Kimchi and VMware ESXi. While VMware ESXi was clearly more refined and feature-rich, Kimchi’s open-source nature and perceived agility were appealing.
[Image: Kimchi’s web UI showing a list of virtual machines.]
However, Kimchi development essentially ceased shortly after my installation.
[Image: Commit activity on the Kimchi repository, which dropped off sharply shortly after I started using it.]
Kimchi’s limitations became increasingly apparent. Basic operations like cloning or shutting down VMs often took multiple attempts, and frustrating UI bugs, such as disappearing or shifting buttons, further complicated management. This unreliability underscored the need for a more robust, actively maintained VM management solution.
3. Plan for Comprehensive Remote Administration
[Image: The old VM server tucked into a corner, illustrating the physical inaccessibility that made remote administration a priority.]
The server’s location, tucked away in a corner, was convenient for space management but problematic when physical access was needed.
While Kimchi’s software shortcomings might suggest simply changing VM management software, the lack of remote administration capabilities became a larger issue. The server, a standard PC without a dedicated monitor or keyboard, relied on SSH and the web interface for management 99% of the time. However, the 1% of instances requiring physical access—server boot failures or host OS reinstallation—became major inconveniences. Moving the server to a desk, disconnecting desktop peripherals, troubleshooting, and then re-establishing the original setup was time-consuming and disruptive.
For the new build, a virtual console with physical-level access from power-on was a priority. Solutions like Dell’s iDRAC or HP’s iLO were considered.
[Image: Dell’s iDRAC interface, one of the remote management options considered for the new build.]
Component Selection Rationale
CPU
The original VM server used a Ryzen 7 1700 CPU. With 8 cores and 16 threads, it was a high-performance consumer CPU at the time. However, online homelab communities (/r/homelab) often favored enterprise-grade hardware.
[Image: A /r/homelab comment from /u/pylori dismissing the original build’s consumer parts, reflecting the community’s preference for enterprise-grade hardware.]
To align with perceived “enterprise” standards and maximize performance, a dual-CPU system was chosen for the 2020 build.
To optimize cost-effectiveness, used CPUs released 4-8 years prior were targeted. Performance benchmarks from PassMark and used CPU pricing on eBay were compared.
The Intel Xeon E5-2600 v3 family offered the best performance-per-dollar ratio. The E5-2680 v3 was selected, with a PassMark score of 15,618 at a used price of around $130, or roughly 120 PassMark points per dollar.
[Image: cpubenchmark.net listing for the Intel Xeon E5-2680 v3, showing a PassMark score of 15,618.]
With two E5-2680 v3 CPUs, the new build more than doubled the processing power of the previous Ryzen 7 server (PassMark score: 14,611), promising better performance for resource-intensive tasks and for hosting many VMs at once.
Motherboard
Choosing a dual-CPU system limited motherboard options. Only a handful of motherboards support two LGA 2011-v3 CPUs, with prices ranging from $300 to $850, exceeding initial budget expectations.
The SuperMicro MBD-X10DAL-I-O was selected at $320, the most affordable of the dual-CPU options, though still significantly more expensive than the previous build’s motherboard.
Memory
Server RAM selection lacked readily available reviews and benchmarks compared to consumer RAM.
The Crucial CT4K16G4RFD4213 64 GB kit (4 x 16 GB) was chosen on brand trust. 64 GB doubled the previous build’s 32 GB, anticipating future RAM demands from running many VMs simultaneously.
Storage
M.2 SSDs were preferred for their performance and clean installation, but the MBD-X10DAL motherboard lacked M.2 support.
A 1 TB Samsung 860 EVO SATA SSD was chosen. At the standard allocation of 40 GB per VM, 1 TB accommodates roughly 25 VMs, which seemed like ample initial space, and SATA SSDs balance performance and cost while leaving room for future expansion.
Power
PSU selection prioritized brand reliability. The Corsair CX550M 550W 80 Plus Bronze was chosen.
While component wattage totaled around 400 W, the 550 W PSU offered extra headroom for minimal additional cost. Semi-modular cabling was a key feature for cleaner cable management, which improves airflow and aesthetics inside the case.
Fans
The dual-CPU setup presented cooling challenges due to the limited space between CPU sockets on the MBD-X10DAL.
Cooler Master Hyper 212 coolers were selected for their slim profile, allowing side-by-side installation and effective cooling in the constrained space between the two sockets.
Case
A discreet, non-flashy case was preferred as the server is located in an office corner.
The Fractal Design Meshify C Black was chosen for its positive reviews, simple design, and quiet operation. Its understated look suits a home office, and its build quality provides good airflow and component protection.
Graphics
For a headless server, a high-end GPU wasn’t necessary. However, a basic GPU is needed for initial setup and occasional debugging.
The MSI GeForce GT 710 was selected as a low-cost, functional option for basic display output: enough for initial setup, BIOS access, and occasional troubleshooting on a server intended for headless operation.
Remote Administration
Remote administration solutions were surprisingly expensive. Dell iDRAC required a costly enterprise license and Dell-specific hardware. KVM over IP solutions ranged from $600 to $1,000.
[Image: Purchase page for the Raritan Dominion KVM over IP, priced between $500 and $1,000.]
A DIY KVM over IP device, TinyPilot, built using a Raspberry Pi, was developed as a cost-effective remote administration solution.
TinyPilot captures the server’s HDMI output and forwards keyboard and mouse input through a web browser, providing physical-level access remotely, including OS installation, BIOS configuration, and troubleshooting. TinyPilot is open source and is also available for purchase as pre-assembled units.
My 2020 Server Build: Component List and Costs
| Category | Component | Price Paid |
|---|---|---|
| CPU | Intel Xeon E5-2680 v3 (x2, used) | $264.82 |
| Motherboard | SuperMicro MBD-X10DAL-I-O | $319.99 |
| Disk | Samsung 860 EVO (1 TB) | $149.99 |
| Memory | Crucial CT4K16G4RFD4213 64 GB (4 x 16 GB) | $285.99 |
| Power | Corsair CX550M 550W 80 Plus Bronze | $79.99 |
| Graphics | MSI GeForce GT 710 | $44.99 |
| Case | Fractal Design Meshify C Black | $84.99 |
| CPU Fans | Cooler Master Hyper 212 (x2) | $72.98 |
| Remote administration | TinyPilot (KVM over IP) | $65.00 |
| Total | | $1,368.74 |
[Image: Cable management inside the Meshify C case, using the built-in Velcro straps and rubber dividers.]
The Meshify C case excelled in cable management with built-in Velcro straps and rubber dividers, simplifying cable organization and improving the overall build aesthetics.
[Image: Installing CPUs, RAM, and coolers on the motherboard.]
[Image: The completed homelab VM server in its new home, ready for VM hosting and development tasks.]
VM Management with Proxmox
Proxmox VE (https://www.proxmox.com/en/) was chosen as the VM management platform.
[Image: The Proxmox dashboard, showing a VM overview, resource utilization, and server status.]
After the disappointing experience with Kimchi, Proxmox, with its 12-year history, offered a more mature and reliable solution. While not as visually polished as ESXi, Proxmox is a significant improvement over Kimchi in both features and stability.
Proxmox’s scriptability is a major advantage. Its CLI makes it straightforward to automate VM creation from templates and then hand software installation off to Ansible, so a new VM can be deployed with a simple script. ESXi, by contrast, lacked comparable CLI-driven automation and often required manual interaction with the web UI.
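As a minimal sketch of what that scripting can look like: the VM IDs, name, resource values, and Ansible playbook below are hypothetical placeholders, while `qm clone`, `qm set`, and `qm start` are Proxmox's standard CLI commands for these operations.

```bash
#!/usr/bin/env bash
# Sketch: clone a new VM from a template with Proxmox's qm CLI.
# IDs, name, and the Ansible playbook are hypothetical placeholders.
set -euo pipefail

TEMPLATE_ID=9000   # ID of an existing VM template (assumption)
NEW_ID=130         # ID for the new VM
NEW_NAME="dev-box"

# Full clone, so the new VM doesn't share storage with the template.
qm clone "$TEMPLATE_ID" "$NEW_ID" --name "$NEW_NAME" --full

# Adjust resources for this project's needs.
qm set "$NEW_ID" --memory 4096 --cores 2

qm start "$NEW_ID"

# Once the VM is up, Ansible can take over software installation, e.g.:
# ansible-playbook -i inventory/dev provision.yml --limit "$NEW_NAME"
```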
Proxmox’s initial learning curve can be steep. Installation, in particular, was made easier with resources like Craft Computing’s installation tutorial. However, once familiar, Proxmox proves to be user-friendly and efficient for VM management.
Performance Benchmarks
Performance improvements were benchmarked against the old VM server across common workflows. Benchmarks compared three scenarios:
- 2017 Server (NAS): VMs on network storage (typical setup).
- 2017 Server (SSD): VMs on local SSD (limited VMs).
- 2020 Server: All VMs on local SSD (new standard).
Note: Benchmarks are not rigorous, representing single samples for each workflow without normalized conditions.
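For anyone reproducing these single-sample numbers, wrapping each step in the shell's `time` builtin is enough; for example, timing a full clone on the Proxmox host (VM IDs hypothetical):

```bash
# Single-sample timing of a full clone from a template (IDs hypothetical).
time qm clone 9000 131 --name bench-clone --full
```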
Provisioning a New VM
Provisioning a new VM from a standard Ubuntu 18.04 template involved:
- Cloning from the template.
- Booting the VM.
- Changing the hostname.
- Rebooting.
- Running `apt update && apt upgrade`.
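For reference, the in-guest half of those steps looks roughly like this; the hostname is a hypothetical placeholder, and the clone itself happens on the Proxmox host as shown earlier:

```bash
# Inside the freshly cloned VM: rename the host, reboot, then update packages.
# "dev-box" is a hypothetical placeholder hostname.
sudo hostnamectl set-hostname dev-box
sudo reboot

# After the VM comes back up:
sudo apt update && sudo apt upgrade -y
```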
[Image: VM provisioning times for the 2017 server (NAS and SSD) versus the 2020 server, showing a large speedup on the new build.]
The new server significantly accelerated VM provisioning: total provisioning time dropped from 15 minutes to under 4.
Skipping the package-upgrade step narrowed the gap slightly, but the new server still handily beat NAS storage (under 2.5 minutes versus 8). SSD-to-SSD cloning was actually slightly slower on the new server, likely because the old server’s M.2 SSD outpaces the new server’s SATA drive, which shows how much storage type matters for disk-bound operations like cloning.
VM Boot Time
Boot time, measured from power-on to login prompt, was significantly improved.
[Image: VM boot times for the 2017 server (NAS and SSD) versus the 2020 server.]
VMs on the old server booted in 48 seconds from NAS storage and 32 seconds from local SSD; the new server booted VMs in just 18 seconds.
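These boot timings were simple wall-clock measurements from power-on to login prompt. As a rough in-guest proxy on systemd-based distributions like Ubuntu, `systemd-analyze` breaks down startup time (note it measures kernel plus userspace startup, not the full power-on-to-login interval):

```bash
# Report how long the last boot took (kernel + userspace).
systemd-analyze time

# List the units that contributed most to boot time.
systemd-analyze blame | head
```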
What Got Done End-to-End Tests
Automated end-to-end tests for the What Got Done journaling app, involving backend compilation, frontend build, Docker containerization, and browser automation, represented a diverse and resource-intensive workflow.
[Image: What Got Done end-to-end test times on the 2017 (SSD) and 2020 servers, showing minimal difference.]
Surprisingly, performance was similar between the two servers. Cold starts (which download Docker images) were slightly slower (2%) on the new server. With Docker images cached locally, the new server was only 6% faster, suggesting that disk I/O and browser interaction, not CPU, were the primary bottlenecks for this workflow.
Is It Keto Build
Building the Is It Keto website using Gridsome, a Vue static site generator, was another common workflow.
[Image: Is It Keto build times on the 2017 (SSD) and 2020 servers, with the new server slightly slower.]
Build times were unexpectedly slower on the new server. The workload appeared CPU-bound on the old server, yet doubling CPU resources didn’t help, and moving the files to a RAM disk changed nothing, suggesting the build is CPU-bound but poorly parallelized, leaving most of the new server’s 24 cores (48 threads) idle. The old Ryzen’s faster per-core performance likely explains its slight edge: for workloads that don’t scale across cores, core count isn’t the deciding factor.
Zestful Model Training
Training the Zestful recipe ingredient parsing API, a highly CPU-intensive workflow, showed significant performance gains.
[Image: Zestful model training times on the 2017 (SSD) and 2020 servers, with training time roughly halved on the new build.]
Model training time was halved on the new server, demonstrating the value of its 24 cores (48 threads) for highly parallelizable, CPU-intensive tasks. That said, this workflow runs only a few times per year, so the dual-CPU setup shines brightest on one of the rarest workloads.
Reflections on the Build
Consumer Hardware is Valid
Despite community preferences for enterprise hardware, consumer hardware remains a viable option for homelab servers.
Server hardware’s primary advantages are better compatibility with server software and greater reliability for user-facing services. Compatibility matters less than it used to, as Linux kernel updates have steadily improved support for consumer CPUs. And reliability is less critical for a development server, where an occasional crash is far less disruptive than in production. For most homelab purposes, consumer hardware is a cost-effective and often sufficient choice.
Dual-CPU Costs and Benefits
Building a dual-CPU system was an interesting experience but potentially not worth the added complexity and cost for this specific use case.
CPU usage monitoring in Proxmox revealed that CPU load rarely exceeded 11%, indicating significant over-provisioning.
[Image: Maximum CPU usage on the new server never exceeding 11% of capacity over several months.]
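The Proxmox web UI graphs these numbers, but they are also scriptable. A minimal sketch, assuming a node named `pve` (hypothetical) and Proxmox's standard `pvesh` API client:

```bash
# Query the Proxmox host's current status via the local API and print the
# CPU load. "pve" is a hypothetical node name; substitute your own.
pvesh get /nodes/pve/status --output-format json \
  | python3 -c "import json,sys; s=json.load(sys.stdin); print('CPU load: %.1f%%' % (s['cpu']*100))"
```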
The dual-CPU requirement significantly increased motherboard cost and narrowed the field: only a few dual LGA 2011-v3 boards exist, which constrained feature selection. For future builds, a single high-performance consumer CPU is likely the more practical and cost-effective choice for a development-focused homelab.
Remote Administration’s Flexibility
TinyPilot’s remote administration capabilities drastically improved server management flexibility.
The virtual console eliminated any reluctance to modify BIOS or network settings, since recovery was always achievable remotely. That freedom encourages experimenting with different operating systems and configurations. Without TinyPilot, I likely would have stuck with a “good enough” solution like ESXi rather than exploring and adopting a better-suited platform like Proxmox.
One Year Later Update (2021-12-05)
After a year of using the new server, a retrospective update was shared based on user feedback.
CPU – Overkill
Dual E5-2680 v3 CPUs proved to be excessive.
[Image: CPU usage on the new server rarely exceeding 50% of capacity over a year of use.]
CPU usage never approached sustained 100% and exceeded even 50% only a handful of times, confirming that a single CPU would have been adequate.
SSD – Insufficient
The 1 TB Samsung SSD filled up, necessitating the purchase of a 2 TB Samsung 870 EVO for a total of 3 TB.
[Image: Server disk usage at 85% of capacity, prompting the storage expansion.]
The default 40 GB VM disk allocation proved limiting, especially with Docker images consuming significant disk space, and frequent `docker system prune --all` runs became necessary. The lesson: budget more generous per-VM disk allocations, or a larger initial SSD, when VMs will run containerized workloads.
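When a VM starts running low on disk, Docker's own accounting usually explains where the space went. Both commands below are standard Docker CLI:

```bash
# Show how much space images, containers, volumes, and build cache consume.
docker system df

# Reclaim space: removes stopped containers, unused networks, all unused
# images (not just dangling ones), and build cache. Asks for confirmation.
docker system prune --all
```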
RAM – Slightly Under-provisioned
64 GB RAM was mostly sufficient but occasionally required VM shutdowns to free up memory.
[Image: RAM usage frequently reaching the server’s 64 GB capacity under peak workloads.]
Another 64 GB RAM kit was purchased, bringing the total to 128 GB and eliminating the memory pressure that occasionally forced VM shutdowns.
Proxmox – Continued Satisfaction
Proxmox remained the preferred VM manager. A license was purchased to support the project, though it wasn’t clear what features it added. One caveat: Proxmox licenses are priced per CPU socket, so the dual-CPU setup doubled the cost, further reinforcing the earlier reflection on CPU over-provisioning.
Updated Parts List (2021-12-05)
| Category | Component | Price Paid |
|---|---|---|
| CPU | Intel Xeon E5-2680 v3 (x2, used) | $264.82 |
| Motherboard | SuperMicro MBD-X10DAL-I-O | $319.99 |
| Disk | Samsung 860 EVO (1 TB) | $149.99 |
| Disk | Samsung 870 EVO (2 TB) | $239.99* |
| Memory | Crucial CT4K16G4RFD4213 64 GB (4 x 16 GB) | $285.99 |
| Memory | Crucial CT4K16G4RFD4213 64 GB (4 x 16 GB) | $164.11* |
| Power | Corsair CX550M 550W 80 Plus Bronze | $79.99 |
| Graphics | MSI GeForce GT 710 | $44.99 |
| Case | Fractal Design Meshify C Black | $84.99 |
| CPU Fans | Cooler Master Hyper 212 (x2) | $72.98 |
| Remote administration | TinyPilot (KVM over IP) | $65.00 |
| Total | | $1,772.84 |
* Purchased a year after the original build.