OK, so this is the requisite ‘home lab’ post… I have struggled with this decision for a while, and finally gave in.
For years I have been trying to do ‘more with less’ … getting by with some limited hands-on time in labs at VMworld, or using the VMware SE Lab (if you don’t already know about this, you should ask your local VMware SE), and whatever online training I could find. From time to time, I would also stand up some product using VMware Workstation or Fusion to make sure I knew my way around the interface, but that’s it.
The problem, of course, is that VMware really can’t be felt or completely understood unless you are in a multi-host environment. So, as I gear up for my re-certification for VMware (not to mention all the EMC and Cisco products I will be learning soon), I finally decided to invest in my home lab. Of course part of me objects to the fact that this is necessary, but after 15 years, this really seems to be the only way to acquire a solid, broad, and deep understanding of a given set of products. When I first became a Microsoft certified instructor (15 years ago) I lived for 6 weeks with 5 desktops on my dining room table (my wife was very understanding). The foundational Windows knowledge I built during that time has served me well, but it would appear that 15 years (at least for me) is the limit on that kind of investment, and it is time to dive in again.
So, I have become something of a student of Newegg.com (love the site), and have put together for myself a set of three systems plus two NAS boxes to use as an ESX cluster running vSphere 4.1. I went ahead and got some fancy cases with nifty blue lights (really, if you’re going to geek it up, go whole hog!). The complete gear list is below – each system is built as follows:
- Western Digital Caviar Black WD1001FALS 1TB 7200 RPM SATA 3.0Gb/s 3.5″ Internal Hard Drive
- Crucial 16GB (4 x 4GB) 240-Pin DDR3 SDRAM DDR3 1333 (PC3 10600) Desktop Memory
- AMD Phenom II X4 965 Black Edition Deneb 3.4GHz Socket AM3 125W Quad-Core Processor
- Diablotek PHD Series PHD750 750W ATX12V / EPS12V Power Supply
- SAPPHIRE 100293L Radeon HD 5570 1GB 128-bit DDR3 PCI Express 2.1 x16 HDCP Ready Video Card
- MSI 790FX-GD70 AM3 AMD 790FX ATX AMD Motherboard
- Antec Nine Hundred Black Steel ATX Mid Tower Computer Case
Let’s just say that I’m about as happy as I could be without actual servers (see the picture above). I already have ESX installed, and soon will be installing vCenter Server, after Geek Week… I will post an update on that later this week.
Great looking lab, love the geeked out blue lights 🙂
I’m looking into doing something similar, so my question is what did the full environment cost?
Each server cost about $1200, which is up there for a system I built myself, but I wanted some decent systems I wouldn’t become frustrated with, at least not soon. I have built systems before, and you can certainly go cheaper, but for what I wanted I decided to spend a bit more.
Did you use ESXi? 4.1, 4.0 …
I am having trouble getting VMware installed on a default system. There are a couple of differences in my system: I have the X6 AMD, a 790GTO video card, and a 2TB SATA HD. I have read enough articles that I am going to order an Intel Pro NIC card; VMware doesn’t seem to support onboard NICs. I’m trying to figure out if there are any custom configs in the BIOS. I’ve tried several configurations and no luck. Can you explain what you disabled and left enabled in the BIOS?
r/Milo
I actually didn’t have to do much… I left hardware virtualization enabled, turned off all the ports I don’t need (parallel, serial, etc.), and everything worked fine with the exception of the onboard NICs. I have Realtek NICs on my motherboard, and those are explicitly not supported. Both ESX and ESXi will usually recognize Intel and Broadcom onboard NICs pretty well, though of course you should check the VMware compatibility guides at http://www.Vmware.com/support.
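As a quick sanity check before ordering a new card, it can help to see which Ethernet devices the host actually detects and which vendors they come from. This is only a minimal sketch: the two device lines below are illustrative sample output (not from my systems), and on a real ESX console you would capture the list yourself with `lspci`.

```shell
# On the ESX console you would capture the real list with:
#   lspci | grep -i ethernet
# Here we filter illustrative sample output instead (these lines are made up):
cat <<'EOF' > /tmp/nics.txt
02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B Gigabit
03:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection
EOF

# Vendors that commonly appear on the compatibility guide (Intel, Broadcom)
# tend to work; Realtek onboard NICs generally do not.
grep -Ei 'intel|broadcom' /tmp/nics.txt
```

A grep like this only tells you the vendor, not whether the specific chipset is supported, so always confirm the exact device against the VMware compatibility guide before buying.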