Whether you are looking to migrate to vSphere for the first time, or migrate your vSphere workloads to a new environment, understanding how to ‘size’ the environment properly can be tricky, but it doesn’t have to be daunting or overwhelming. Over time, lots of tools have become available to assist with this process, and though they don’t all provide exactly the same functionality or output, any of them may be used to help you along.
Generally speaking, sizing is an exercise in which one gathers several metrics related to the capacity and utilization of a given application, environment, or set of workloads. This is done to help understand how much the new infrastructure or services needed to host or move those workloads will cost. It is important to note that sizing is only a small part of an overall migration or upgrade project, and the new capacity represents only a small fraction of that project’s overall cost. Nonetheless, it is a critical part – after all, without new infrastructure to migrate to, nothing would happen, would it?
Final disclaimer – I certainly don’t have a lock on this, but I seek here to share the lessons and experiences I have learned, in the hopes of helping you out with your migration project(s).
What are you sizing for?
I am primarily focused on infrastructure, and highly virtualized infrastructure at that. Therefore, I am looking to gather generalized capacity, utilization, and performance metrics for a wide variety of applications running in virtual machines. Since new infrastructure for a highly virtualized environment usually means new servers, I would like to have a clear understanding of the CPU, memory, and disk space consumed by those virtual machines. Furthermore, since I am at this time primarily focused on hyper-converged infrastructure (HCI), I like to have a good understanding of the performance requirements on the storage – both in terms of IOPS and throughput (MB/s). Of course, a holistic analysis would also include network utilization in terms of packets per second (PPS) and throughput.
The nature of the workloads in question, and the type of infrastructure needing to be replaced, will largely define the type of data you need to gather. For example, replacing a storage array would require a much closer look at underlying storage metrics: front-end IOPS, types of protection and RAID levels, back-end IOPS, growth trends, replication requirements, etc. Relocating a large database for analytics might require a much closer analysis of the database read and write behavior, buffering and caching requirements, maximum transactions per second, etc.
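To illustrate the kind of storage math involved in that closer look, here is a minimal Python sketch of the classic back-end IOPS estimate from front-end IOPS and RAID write penalty. The penalty values are the commonly cited ones; the workload numbers are made up purely for illustration:

```python
def backend_iops(frontend_iops: float, read_pct: float, raid_penalty: int) -> float:
    """Estimate back-end (disk-facing) IOPS from front-end (host-facing) IOPS.

    Each front-end read costs one back-end I/O; each front-end write costs
    `raid_penalty` back-end I/Os (commonly 2 for RAID 1/10, 4 for RAID 5,
    6 for RAID 6).
    """
    reads = frontend_iops * read_pct
    writes = frontend_iops * (1 - read_pct)
    return reads + writes * raid_penalty

# Hypothetical workload: 5,000 front-end IOPS, 75% reads, on RAID 5.
print(backend_iops(5000, 0.75, 4))  # 3750 reads + 1250 * 4 writes = 8750.0
```

This is why a write-heavy workload on parity RAID can need far more back-end capability than the front-end numbers alone suggest.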
Know Your Inputs
By this I mean those variables you will use to 1) understand the current environment and 2) extrapolate for the new environment. I am thinking about relocating the virtual machines currently running on old physical infrastructure to new physical infrastructure. The only constants here are the workloads being moved – the physical infrastructure is going to change. Therefore, I need to gather information about the workloads, since they will be moving.
- number of virtual machines to be relocated
- configured and consumed CPU per virtual machine
- configured and consumed memory per virtual machine
- configured and consumed disk space per virtual machine
- IOPS and disk throughput per virtual machine
- PPS and network throughput per virtual machine
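Once those inputs are in hand, the extrapolation step is mostly arithmetic. Here is a hypothetical Python sketch that rolls per-VM figures up into a rough host count; the host specs, consolidation ratio, headroom, and spare-host figures are illustrative assumptions for the example, not sizing recommendations:

```python
import math
from dataclasses import dataclass

@dataclass
class VM:
    vcpus: int
    mem_gb: float   # consumed memory
    disk_gb: float  # consumed disk space

def estimate_hosts(vms, host_cores=32, host_mem_gb=512,
                   vcpu_per_core=4.0, headroom=0.2, ha_spares=1):
    """Rough host count from per-VM inputs.

    vcpu_per_core: assumed vCPU:physical-core consolidation ratio.
    headroom:      fraction of capacity held back for growth and bursts.
    ha_spares:     extra hosts for N+1 style availability.
    All defaults here are illustrative assumptions, not guidance; a real
    sizing would also layer on storage capacity, IOPS, and throughput.
    """
    usable_cores = host_cores * (1 - headroom)
    usable_mem = host_mem_gb * (1 - headroom)
    hosts_cpu = math.ceil(sum(vm.vcpus for vm in vms) / vcpu_per_core / usable_cores)
    hosts_mem = math.ceil(sum(vm.mem_gb for vm in vms) / usable_mem)
    return max(hosts_cpu, hosts_mem) + ha_spares

# Hypothetical inventory: 200 small VMs.
fleet = [VM(vcpus=4, mem_gb=12.0, disk_gb=80.0) for _ in range(200)]
print(estimate_hosts(fleet))  # 9 hosts: 8 driven by CPU, plus 1 spare
```

Note that the answer is driven by whichever resource runs out first – in this made-up fleet it is CPU, but a memory-heavy fleet would flip that.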
Collecting / Gathering Data
As I mentioned above, there are any number of tools to be used for gathering data – and this post is not meant to be a comprehensive survey. However, here are the top tools I tend to use with customers when examining an existing vSphere environment for workload relocation – either to a new on-prem cluster, or to VMware Cloud on AWS.
LiveOptics is my tool of choice – originally built by the presales team at Dell for understanding existing server assets, it has since grown to perform data gathering not only on physical systems, but on virtual systems as well, and also storage (block and file). It’s a great, free, all-around tool for gathering and visualizing information about your current environment – though there are some noticeable gaps. For example, it doesn’t really do any kind of capture / analysis on networking gear or infrastructure. However, for my purposes in evaluating a vSphere environment, it’s great. Most importantly, it allows you to collect data over a period of time to get a good representation of actual performance and utilization.
RVTools has been around for years, and is well known in the VMware ecosystem as a great tool for running reports on your environment. Purpose-built by a long-time VMware customer in the Netherlands, it is a favorite among vSphere administrators due to its ease of use and comprehensive data set. My only possible knock on RVTools is that it gathers a point-in-time snapshot, rather than gathering data over a period of time – though it does include utilization and performance data.
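To show the kind of roll-up such an export enables, here is a small Python sketch that sums a few columns from an RVTools-style vInfo CSV export. The column names used (`Powerstate`, `CPUs`, `Memory`, `In Use MiB`) vary between RVTools versions, so treat them as assumptions and check the header row of your own export:

```python
import csv
import io

def summarize_vinfo(csv_text: str) -> dict:
    """Sum vCPUs, configured memory, and in-use disk from a vInfo-style export.

    Column names here follow recent RVTools exports but differ across
    versions -- adjust to match your own export's header row.
    Only powered-on VMs are counted.
    """
    totals = {"vms": 0, "vcpus": 0, "mem_mib": 0, "in_use_mib": 0}
    for row in csv.DictReader(io.StringIO(csv_text)):
        if row.get("Powerstate") != "poweredOn":
            continue  # skip powered-off and suspended VMs
        totals["vms"] += 1
        totals["vcpus"] += int(row["CPUs"])
        totals["mem_mib"] += int(row["Memory"])
        totals["in_use_mib"] += int(row["In Use MiB"])
    return totals

# Tiny, made-up export for illustration.
sample = """VM,Powerstate,CPUs,Memory,In Use MiB
web01,poweredOn,4,8192,40960
db01,poweredOn,8,32768,204800
old01,poweredOff,2,4096,10240
"""
print(summarize_vinfo(sample))
```

Filtering out powered-off VMs up front matters – counting stale, powered-off machines is one of the easiest ways to oversize a new environment.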
Of course you can use vRealize Operations (vROps) for this – and if you already have it installed, then you don’t even need to run anything! You just have to get the data in a format you can manipulate. As many of us know, that can be difficult with vROps: it is an extremely mature and powerful product, and with all that maturity and power can come complexity.
I have written a simple set of instructions in the following article to allow you to run an inventory report in vROps – I hope soon to write another demonstrating how to gather all the appropriate performance information as well.
Finally, of course, there is vCenter itself – after all, it is the source for all the data gathered by the tools above! It is easy to get a list of virtual machines and their configurations out of vCenter; gathering and correlating performance information is somewhat less straightforward.
In my next post, I will talk a bit about what to do with your data once it has been collected.