Do you want to hear about my homelab?
Heck yes
Well, I started with a lowly single Dell 2950 back in the day. I was working for VMware at the time, so naturally, I used that.
Since then I've upgraded to a c6100 with an R710, and I'm not even sure how I got to where I am right now, but I have an R630, an R330, an FX2s with 3x FC630 nodes and an FD332 (currently empty), and a PowerVault something-or-other for storage, which I desperately want to upgrade.
I finally decided on disks to buy for my FD332: I'm going with used Intel DC S4500s at 1.92 TB. I have yet to purchase any, and the FD332 can take 16(?) disks. I'm going SATA SSD for this and moving away from the RAID 6 I'm using on the PowerVault (that model is basically a slightly modified R510, with 2x 2.5" disks in the back for the OS and 12x 3.5" in the front for storage). The current primary storage for my VMs (still on VMware) is 6x 4TB WD Red Plus, and I have an additional 6x 8TB of WD Red Plus for media content (mostly large files, 90%+ reads).
I have over 30 virtual machines, currently all on one FC630, for all kinds of self-hosted stuff. I've noticed they're starting to run incredibly slowly, especially after upgrading from the c6100 nodes over to the FC630, so I picked up the FD332 to build a new storage array for the OS data, which will be all-flash.
I decided on all-flash early in my thought process, but struggled with deciding which drive to buy. I want all of the drives to at least be the same make/model for consistency. I finally landed on the Intel DC S4500, because the performance is quite good and the endurance is 1 DWPD, so even bought used they should have a lot of life left. I picked 1.92 TB because of my space requirements: if I do some version of RAID 6, I'll lose at least two drives to parity, and RAID 60 would be four. So all of my data needs to live on the 12-14 drives that remain, and it should comfortably fit in 20 TB with some room to grow.
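The rough capacity and endurance math I'm working from looks something like the sketch below. It assumes a fully populated 16-bay FD332 and the S4500's 1 DWPD rating spread over a 5-year warranty period (my assumption, not something I've double-checked on the spec sheet):

```python
# Rough capacity/endurance math for a 16-bay FD332 full of 1.92 TB S4500s.
# Assumptions: all 16 bays populated, 1 DWPD rated over a 5-year warranty.
DRIVE_TB = 1.92
BAYS = 16

raid6_usable = (BAYS - 2) * DRIVE_TB   # single RAID 6 group: 2 drives of parity
raid60_usable = (BAYS - 4) * DRIVE_TB  # two striped RAID 6 groups: 4 drives of parity

# Endurance: one full drive write per day, every day, for 5 years.
tbw_per_drive = DRIVE_TB * 1 * 365 * 5

print(f"RAID 6 usable:   {raid6_usable:.1f} TB")   # ~26.9 TB
print(f"RAID 60 usable:  {raid60_usable:.1f} TB")  # ~23.0 TB
print(f"Rated endurance: ~{tbw_per_drive:.0f} TB written per drive")  # ~3504 TB
```

Either way that leaves comfortably more than the ~20 TB I actually need.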
I'm currently evaluating alternatives to VMware. I looked at OpenStack for a bit but found it too restrictive for my homelab. Right now I'm looking at XCP-ng, which shows promise, but the GUI is clunky and there's still a nontrivial number of things you need to drop to the CLI for. I'll probably look at Proxmox next.
I'm not in a hurry to populate the new storage, because I plan to set it up on whatever hypervisor I move to after VMware, and I haven't made that choice yet. Once I do, the new storage will need to be in place before I make the leap. I'll basically export the VMs from VMware, then import them into the new hypervisor, placing them on the "new" flash storage as I go.
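The shape of that step would be roughly the sketch below. It's written against Proxmox purely as an example (again, I haven't actually picked the target yet), and the host name, VM names/IDs, and the "flash-pool" storage name are placeholders, not my real setup:

```python
import os
import subprocess

# Sketch only: export each VM from VMware with ovftool, then import the OVF
# into a Proxmox node with qm importovf, landing the disks on the new flash
# storage. Assumes it runs on the Proxmox host with ovftool installed and the
# source VMs powered off. Names, VMIDs, and "flash-pool" are placeholders.
ESXI_HOST = "esxi01.lab.local"                     # placeholder source host
TARGET_STORAGE = "flash-pool"                      # placeholder storage on the FD332
VMS = {"dc01": 101, "exch01": 102, "sql01": 103}   # placeholder VM name -> new VMID

os.makedirs("export", exist_ok=True)

for name, vmid in VMS.items():
    # Export the VM from VMware to an OVF package on local disk.
    subprocess.run(
        ["ovftool", f"vi://root@{ESXI_HOST}/{name}", f"export/{name}.ovf"],
        check=True,
    )
    # Import the OVF into Proxmox, placing its disks on the flash storage.
    subprocess.run(
        ["qm", "importovf", str(vmid), f"export/{name}.ovf", TARGET_STORAGE],
        check=True,
    )
```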
Among my VMs, I have a full Windows Active Directory setup with Exchange and a dedicated MS SQL server. I had a remote desktop server for a while, and I maintain a small handful of gaming server VMs. There's more, but I don't want to detail every system I run.
The current standard spec is 2x Intel Xeon E5-2618L v4, 256GB of RAM, and 2x 480GB Intel DC S4500 drives for the bare-metal OS. I have 2 of the 3 FC630 nodes fully spec'd out, or very close to it (the VMware node is using a set of Samsung SSDs for its OS, since I hadn't decided on the S4500 yet and had them lying around; these will be replaced when I move away from VMware and need to reformat the node). The R630 is almost the same, but was built before the FC630s and only has 128GB of RAM; it's using an 8-drive RAID 6 array of Corsair 500GB SSDs. The PowerVault is my oldest server and has a pair of 300-ish GB spinning disks for the OS, plus the two six-drive WD Red Plus arrays for my main storage.
All of this is connected to a Cisco Catalyst 4948. The FX2s is 10Gbit-linked, while everything else has a number of 1GbE ports, some aggregated, some just multi-link/multi-path for iSCSI.
I have two main gateways; the one that serves the servers is a SonicWall firewall. I used SonicWall a lot at work when I bought it, and although I've since changed jobs, it's still a decent piece of gear, so it stays. I also have a full Ubiquiti network running the main access for the people at home, so they won't be disturbed by my activities. It includes a UDM Pro, an Enterprise 48 PoE switch, and a handful of U6 Pro access points with one U6 Mesh to fill in a gap: four Ubiquiti access points in total.

On the lab side I have a Cisco WLC 2504 wireless controller with a pair of 2802i access points, powered by a 48-port Catalyst 3750-X PoE switch that is 10G-linked to the 4948. I have a Cisco 2911 router I've been meaning to set up as a phone server; I have a collection of Cisco 7940/7960G phones, and I recently acquired a couple of 8841 phones to use with it as well.
My current project is upgrading my Exchange server from 2010 to 2019.
Physically, almost everything is in a 42RU rack in my basement. It's a complete mess right now. I need to finish the in-wall Ethernet runs before I can clean it up, and get some new patch cables to tidy the wiring. I also need to physically relocate some servers: my FX2s is currently on a table, since the c6100 was taking up too much space in the rack when I got it, and the c6100 has since been decommissioned. I'm kind of waiting to decom the PowerVault before I start racking everything properly. Idk.
My hero