Homelab 2019: Overview

As I started to build my own homelab, I thought: why not share it? Some of it is still work in progress, like the 10G NICs, and I am still debating with myself on some of the other networking gear.

What are my goals for the homelab?

  • Provide sufficient resources for testing; the aim is to host more or less the VMware SDDC stack (or, in some instances, components for the SaaS flavor of our products)
  • Keep noise to a minimum – it must be “silent” from 0.3 m away
  • Keep power consumption down – I want to spend less than a Euro per day (see the quick estimate after this list)
  • Re-use existing hardware where possible
  • Everything must fit into my office in terms of space
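
To put a number on the power goal, here is a minimal back-of-the-envelope sketch in Python; the electricity price of 0.30 EUR/kWh is my assumption (roughly a German household tariff at the time), so adjust it to your own rate:

```python
# Back-of-the-envelope power budget for the "less than a Euro per day" goal.
# The electricity price is an assumption (0.30 EUR/kWh) -- plug in your own.
PRICE_PER_KWH = 0.30   # EUR per kWh (assumed)
BUDGET_PER_DAY = 1.00  # EUR

kwh_per_day = BUDGET_PER_DAY / PRICE_PER_KWH   # ~3.33 kWh per day
avg_watts = kwh_per_day / 24 * 1000            # ~139 W continuous draw

print(f"Average draw must stay below {avg_watts:.0f} W")
```

So roughly 140 W of continuous draw is the ceiling for the entire lab.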

On a high level, my plan is to build a resource cluster to host my extended monitoring and management workloads (i.e. everything that is not required to run this lab) and some nested environments for the actual tests. As supporting infrastructure, I have a standalone host that will run the core management components and double as a jump box.

In somewhat more detail, the workload distribution should look like this:

and the final vCenter layout is going to look like this:

Resource cluster

The question is always how to build the lab in the first place:

Small boxes

Smaller nodes like NUCs have a cheaper price point per unit, but max out on resources fairly quickly. The total price then climbs with the number of devices required, and personally I see two drawbacks:

  • Each device adds more overhead to my environment (cabling, cooling, system overhead)
  • Scalability is very limited – essentially you can only scale out, which again adds overhead and can be a PITA if you are using, for instance, vSAN and just need one more resource

Big boxes

Okay, so small boxes don’t fit my needs – what about “big boxes”? Real big boxes like servers can pack in tons of resources, but:

  • they would not meet the requirement of being quiet
  • I expect the power consumption would be too high
  • they take up a lot of space

The middle ground (BYO)

As I couldn’t find a suitable product – the Supermicro options don’t seem to be easy on the ears – I went the “build your own” route. Here I opted for the middle ground: a decent amount of RAM, but still consumer hardware that can be tweaked to lower power consumption and noise emission. Finally, this is what I ended up with:

In terms of RAM I opted to go with 64 GB per host, as this reduces the overhead per system and allows for a higher consolidation ratio. The downside: it drives up the cost per host if I ever want to expand to a third node.
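
To illustrate why fewer, bigger hosts help with consolidation, here is a hypothetical comparison; the 16 GB per-host overhead figure (ESXi itself plus vSAN and management agents) is a placeholder assumption, not a measured value:

```python
# Hypothetical comparison: a fixed per-host overhead eats into usable RAM,
# so fewer, larger hosts waste less of the same total capacity.
OVERHEAD_GB = 16  # assumed ESXi + vSAN + management overhead per host

for hosts, ram_gb in [(2, 64), (4, 32)]:
    usable = hosts * (ram_gb - OVERHEAD_GB)
    print(f"{hosts} hosts x {ram_gb} GB -> {usable} GB usable of {hosts * ram_gb} GB total")
# 2 hosts x 64 GB -> 96 GB usable of 128 GB total
# 4 hosts x 32 GB -> 64 GB usable of 128 GB total
```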

With six CPU cores per host, a decent amount of compute is available, so I can easily enable things like vSAN dedup/compression and still have enough CPU power left to host some VMs on top of nested hypervisors.

For the storage layer I had the choice between a NAS solution and vSAN as HCI. I opted for vSAN/HCI, as it gives me more flexibility in how I assign resources (e.g. I would not need to provide extra FTT to a nested ESXi) and I can simply add capacity disks as needed. The major downside is the extra resource consumption per host.
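
For context on the FTT remark, here is a rough sketch of the vSAN capacity math under the default RAID-1 mirroring policy; it ignores witness components, slack space and dedup/compression, so treat it as an approximation only:

```python
# Simplified vSAN capacity math: with RAID-1 mirroring, each level of FTT
# (failures to tolerate) stores one additional full replica of the data.
def raw_gb_needed(vm_gb: float, ftt: int = 1) -> float:
    return vm_gb * (ftt + 1)

print(raw_gb_needed(100, ftt=1))  # 200.0 -> a 100 GB VM consumes 200 GB raw
print(raw_gb_needed(100, ftt=0))  # 100.0 -> throwaway nested-lab VMs can run at FTT=0
```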

List of components

So what does a host look like in more detail?

  • CPU: 1* Intel Core i5 8400 6x 2.80GHz
  • RAM: 4* 16GB G.Skill Aegis DDR4-2400 DIMM 
  • Mainboard: GIGABYTE Q370M D3H GSM PLUS
  • NIC (PCI1): <not yet decided on>
  • ESXi boot disk: 1* 120GB Kingston A400 2.5″ SATA
  • vSAN cache device: 1* 250GB Samsung 970 Evo M.2 2280 PCIe 3.0 x4 (Node 2 has a WD Black 250GB NVMe)
  • vSAN capacity device: 1* 512GB SanDisk X600 2.5″ SATA (Node 2 has a SanDisk Ultra 3D 512GB)
  • Power Supply: 1* 400 Watt be quiet! Straight Power 10 Non-Modular 80+ Gold
  • Case: Thermaltake Core V21

And of course a photo of it all – excuse the mess in the other compartments of this fine piece of IKEA furniture 🙂

Standalone host (existing)

The idea is to leave the management host on all the time, so the workloads deployed there should be kept to a minimum – but they still need to cover the essentials required to start the rest of the lab.

Fewer hardware components in the box keep power consumption to a minimum, and at the end of the day I have bills to pay, so I decided to stick with what I already had in terms of investment.

I also opted to use my NUC in a workstation setup (i.e. not with ESXi as the OS) so I can have a jump box with my tools on it – it makes life so much easier.

List of components

  • System: 1* Intel NUC-Kit i3-7100U 2.4GHz HD620 NUC7I3BNH
  • RAM: 2* 8 GB
  • Boot disk: 1* 512GB Samsung 840 Evo SATA
  • Disk: 1* Crucial MX500 512GB M.2

Summary

This is just a first overview; as I build this out over the next few weeks, I will try to add more content on networking and storage, as well as on deployment and management of the components.