VM Teleportation

This is a live update of VMworld 2010 Session TA8051

I am sitting in the VM Teleportation session that Chad Sakac is holding, and it’s pretty cool. I am somewhat disappointed that he showed the same demos as in yesterday’s VMworld Supersession. On the other hand, he actually went into some details behind the technology.

VPLEX is essentially the child of EMC’s acquisition of YottaYotta, and provides storage virtualization and federation services for both EMC and non-EMC gear. The basic idea is allowing a storage device or volume (LUN) to be present, active, and not only readable but writable in multiple locations. This idea has been around for a while, pioneered by companies such as Datacore and later IBM.

(As a side note, I spent some time as an employee at Datacore.)

The current implementation of VPLEX allows for federated storage either with synchronous mirroring or without advanced mirroring. This second scenario is pretty interesting… You can have up to 8 VPLEX engines (each with two directors in an HA configuration), and they all share a global cache index. This means a read or a write coming in to a given engine could be requesting access to a block that is owned by a different engine in another location. If the requested block is not owned by the current engine and the volume is not already synchronously mirrored, the VPLEX engine can take control of the block in question to complete the write.
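To make the block-ownership idea concrete, here is a toy sketch of a global cache index that transfers ownership of a block so the local engine can complete a write. All of the names and structures are hypothetical illustrations of the concept; the real VPLEX internals are not public.

```python
class Engine:
    """A hypothetical VPLEX-style engine that owns a set of blocks."""
    def __init__(self, name):
        self.name = name
        self.owned_blocks = set()

class GlobalCacheIndex:
    """Maps each block to the engine that currently owns it."""
    def __init__(self):
        self.owner = {}  # block_id -> Engine

    def register(self, engine, block_id):
        self.owner[block_id] = engine
        engine.owned_blocks.add(block_id)

    def write(self, engine, block_id):
        current = self.owner.get(block_id)
        if current is not None and current is not engine:
            # Block is owned by another engine and the volume is not
            # mirrored: transfer ownership so the write completes locally.
            current.owned_blocks.discard(block_id)
        self.register(engine, block_id)

index = GlobalCacheIndex()
site_a, site_b = Engine("site-a"), Engine("site-b")
index.register(site_a, 42)
# A write arriving at site-b for a block owned by site-a transfers ownership:
index.write(site_b, 42)
```

The point of the sketch is that only the index lookup and ownership transfer need to cross sites, not the data itself, which is what makes the non-mirrored case workable at all.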

NOTE: Current best practices dictate synchronous mirroring between sites.

Using a volume that is not mirrored (meaning the data is NOT present in both locations), it is therefore possible to perform a vmotion between locations, and the VPLEX engines will perform a background data move post-vmotion. This obviously degrades performance of the workload until all the blocks have been copied to the target engine(s).
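The post-vmotion background move can be pictured as a lazy block copy: the VM is already running at the target site while blocks trickle over, and until the copy finishes, access to uncopied blocks pays a remote penalty. This is a purely illustrative sketch, not EMC's implementation.

```python
def background_migrate(source, target, block_ids):
    """Copy blocks from source to target, yielding progress after each one."""
    for i, block_id in enumerate(block_ids, start=1):
        target[block_id] = source[block_id]  # one remote fetch per block
        yield i / len(block_ids)

# Four blocks living at the source site; the target starts empty.
source = {b: f"data-{b}" for b in range(4)}
target = {}
progress = list(background_migrate(source, target, sorted(source)))
# progress -> [0.25, 0.5, 0.75, 1.0]; target now holds every block
```

In a real system the copy would be throttled and interleaved with live I/O, which is exactly where the performance degradation mentioned above comes from.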

This is pretty fascinating to me, as the technologies I have had experience with previously have used synchronous cache coherency to ensure availability of the active/active LUN. In this case, the controller receiving the write commits the data to its local write cache and simultaneously sends the data to the write cache of the remote array. Acknowledgement of the committed write from the remote controller must be received before an ACK is sent to the application server. While this provides a high degree of availability, it is bandwidth-intensive and doubles your storage utilization…
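That synchronous write path can be sketched in a few lines: commit locally, mirror to the remote cache, and only acknowledge the host once the remote commit comes back. Class and method names here are invented for illustration.

```python
class Controller:
    """A hypothetical array controller with a write cache and a mirror peer."""
    def __init__(self, name, peer=None):
        self.name = name
        self.cache = {}
        self.peer = peer

    def remote_commit(self, block_id, data):
        """Commit a mirrored write and return the remote ACK."""
        self.cache[block_id] = data
        return True

    def write(self, block_id, data):
        self.cache[block_id] = data                            # local commit
        remote_ack = self.peer.remote_commit(block_id, data)   # mirror write
        return remote_ack                  # ACK the host only after both land

local = Controller("array-a")
remote = Controller("array-b")
local.peer = remote
host_ack = local.write(7, "payload")
# host_ack is True only once both write caches hold the data
```

The cost is visible even in the toy: every host write is gated on a round trip to the remote site, which is why bandwidth and latency between sites matter so much.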

Now, you may be thinking: but you just wrote that best practices dictate synchronous mirroring… That's true, but Chad demonstrated a vmotion without advanced mirroring, and it still works! This has enormous potential for organizations that have less-than-desirable network connections or can't pre-provision the storage ahead of time. I am all for making these advanced technologies available to more companies.

If you think about it (and as Chad pointed out), the sweet spot is vmotion between 2 clusters (not HA)… Synchronous mirroring is preferred due to the performance impact on the application. Furthermore, while there is an obvious application for stretched clusters and HA across sites, the HA admission control algorithm is not designed to understand whether ESX hosts are in different locations… Host affinity can help, but is not perfect. I will try to fill in more detail on this in another post.

Preferred sites in VPLEX Metro

VPLEX allows the administrator to select a “preferred” site. Based on the preference rules, VPLEX decides whether or not to fail a volume over to a given site in the event of an outage.
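A preferred-site rule is essentially a tie-breaker: when the sites can no longer see each other, the volume stays active at the preferred site and is suspended at the other, avoiding a split-brain write scenario. A minimal sketch of that decision, with hypothetical names:

```python
def surviving_site(preferred, sites_up):
    """Return the site that keeps the volume active after an outage,
    or None if no site survived."""
    if preferred in sites_up:
        # Preferred site is reachable: it always wins the tie-break.
        return preferred
    # Preferred site is down entirely: fail over to a surviving site.
    return sites_up[0] if sites_up else None

# Partition with both sites alive: the preferred site keeps the volume.
assert surviving_site("site-a", ["site-a", "site-b"]) == "site-a"
# Preferred site lost: the volume fails over to the survivor.
assert surviving_site("site-a", ["site-b"]) == "site-b"
```

The interesting design point is that the rule is evaluated independently at each site, so both sides reach the same answer without needing to communicate during the outage.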

As far as I am concerned, the implications here are huge. EMC and VMware are working together to figure out how to use this technology to create stretched clusters that leverage HA reliably. Furthermore, EMC is working to produce an SRA for VMware SRM. Together, these technologies are creating the foundation for a significantly higher level of availability over much larger distances than we have seen before. While VPLEX Geo is still in development, just seeing a virtual machine move via vmotion over 2300 kilometers is astounding. This is going to bring long-distance vmotion to reality much sooner than I thought, and customers will start to look for ways to leverage the technology as part of their availability plans.

See TA8218 for additional perspectives…

– Posted using BlogPress…

Location:Howard St,San Francisco,United States
