This post was jointly authored by Tom Twyman (VMware) and Johnson Cauthon (Pureport).
This does not represent an official support statement by either VMware or Pureport. The testing scenarios described were attempted simply to validate expected functionality – all configurations were completed by following public documentation.
As customers expand their environments and continue to adopt cloud services, it is increasingly common for them to take a multi-cloud stance. This enables them to embrace services from multiple cloud vendors, adopting the service(s) most appropriate for the use case at the time. However, this also presents a problem for the customer – how does one adopt services from multiple cloud providers, and allow traffic to flow between those services in a high-throughput, low-latency, and secure fashion without requiring all that traffic to ‘trombone’ or ‘hairpin’ through the customer datacenter for routing? The answer is a multi-cloud network fabric – in this blog we will examine how we can accomplish this using Pureport MultiCloud Fabric.
The diagram above represents a reference architecture for a customer that desires the ability to use the Pureport Multicloud Fabric to enable east-west communications between multiple cloud providers without requiring the traffic to route back to their on-premises routing infrastructure. In this diagram we see the customer has provisioned environments for Google Cloud Platform, Amazon Web Services, and VMware Cloud on AWS. Pureport provides connectivity to all the hyperscalers via a network fabric in a regional cross-connect facility.
Relevant networking information depicted above is as follows:
Google Cloud Platform:
- VPC with CIDR 172.24.0.0/16
- Virtual machine with 172.16.0.2 provisioned
AWS Native:
- VPC with CIDR 172.28.0.0/16
- EC2 instance with 172.28.0.240 provisioned
VMC on AWS:
- Management CIDR: 10.2.0.0/16 (used by vCenter, ESXi, and NSX)
- Compute Segment of 10.3.2.0/24 (with DHCP)
- Virtual machines provisioned:
- Photon OS: 10.3.2.100 (DHCP)
- CentOS with WordPress: 10.3.2.101
AWS ‘Linked’ VPC:
- VPC with CIDR 184.108.40.206/16
Customer Prem Location (Raleigh, NC – connected via Pureport VPN):
- DMZ network 172.16.1.0/24
- Pingable endpoint: 172.16.1.2
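When planning a fabric like this, it is worth sanity-checking that no two environments advertise overlapping prefixes (which would make BGP route selection ambiguous across the fabric) and that each endpoint actually falls inside its environment's CIDR. A minimal sketch using Python's standard ipaddress module, with the prefixes listed above:

```python
import ipaddress
from itertools import combinations

# CIDRs from the environments described above
networks = {
    "GCP VPC": ipaddress.ip_network("172.24.0.0/16"),
    "AWS Native VPC": ipaddress.ip_network("172.28.0.0/16"),
    "VMC Management": ipaddress.ip_network("10.2.0.0/16"),
    "VMC Compute": ipaddress.ip_network("10.3.2.0/24"),
    "Customer DMZ": ipaddress.ip_network("172.16.1.0/24"),
}

# No prefix should overlap another across the fabric
for (name_a, net_a), (name_b, net_b) in combinations(networks.items(), 2):
    assert not net_a.overlaps(net_b), f"{name_a} overlaps {name_b}"

# Each endpoint should land inside its environment's advertised prefix
assert ipaddress.ip_address("172.28.0.240") in networks["AWS Native VPC"]
assert ipaddress.ip_address("10.3.2.101") in networks["VMC Compute"]
assert ipaddress.ip_address("172.16.1.2") in networks["Customer DMZ"]
print("no overlapping prefixes; all endpoints in range")
```

Note that the VMC compute segment (10.3.2.0/24) is intentionally carved outside the management CIDR here; if a compute segment overlapped 10.2.0.0/16, the overlap check above would flag it.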
Note that although Microsoft Azure was not included in this testing, there is every reason to believe it would function equivalently to the other cloud platforms included in this test.
Viewing the Pureport Multicloud Fabric
Below we see an overview of the entire network in the customer’s Pureport console.
- Cloud capacity for AWS and VMware Cloud on AWS is provisioned in us-west-2 (Oregon), while the network point-of-presence (POP) is in Seattle.
- This results in network latency of about 15 ms between cloud environments (see ping tests below).
- Similar deployments in other cloud regions will yield different results, based on proximity of POP to cloud datacenters.
- For example, a similar setup in us-east-1 (N. VA) would likely yield response times closer to 2-3 ms.
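To see why POP proximity dominates, it helps to separate propagation delay from everything else. Light in fiber travels roughly 200 km per millisecond, so the propagation floor for a round trip is about 2 × distance / 200. A quick sketch (the distances are rough straight-line assumptions, not actual fiber routes):

```python
# Rough RTT floor from fiber propagation delay alone.
# Signal speed in fiber is ~2/3 the speed of light, i.e. ~200 km per ms.
KM_PER_MS = 200.0

def rtt_floor_ms(distance_km: float) -> float:
    """Round-trip propagation delay in ms, ignoring hops and queuing."""
    return 2 * distance_km / KM_PER_MS

# Approximate straight-line distances (assumptions for illustration)
print(f"Seattle POP -> us-west-2 (Oregon): ~{rtt_floor_ms(250):.1f} ms floor")
print(f"POP co-located with us-east-1:     ~{rtt_floor_ms(30):.1f} ms floor")
```

The observed 15 ms cloud-to-cloud RTT sits well above the ~2–3 ms propagation floor for the Seattle-to-Oregon path, so most of the latency comes from the hops between the POP and the cloud on-ramps rather than raw distance; a POP adjacent to the cloud region (as in N. VA) brings the end-to-end figure down accordingly.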
BGP routing table viewed from customer-prem-facing Pureport routing gateway:
BGP routing table viewed from the Google-facing Pureport routing gateway:
BGP routing table viewed from Pureport gateway attached to the AWS Native environment:
BGP routing table viewed from Pureport gateway attached to the VMC SDDC:
Connect Your SDDCs with VMware Transit Connect
VMware Cloud on AWS provides multiple connectivity options for a customer’s software-defined datacenter (SDDC). For any single SDDC, either a VPN connection or a Direct Connect virtual interface (VIF) may be configured. However, if a customer needs to connect multiple VMware Cloud SDDCs together, or wants high-throughput, low-latency connectivity to other native AWS VPCs or even other cloud providers, they may also leverage VMware Transit Connect.
VMware Transit Connect was released earlier this year as a method to provide connectivity to a VMware-managed AWS Transit Gateway (vTGW) at native attachment speeds up to 50 Gbps. Gilles Chekroun has blogged extensively about Transit Connect here.
A VMC SDDC may be connected to the vTGW via an SDDC Group – follow the instructions in VMware’s documentation to create your SDDC Group. You may find the documentation here: https://docs.vmware.com/en/VMware-Cloud-on-AWS/services/com.vmware.vmc-aws-operations/GUID-C957DBA7-16F5-412B-BB72-15B49B714723.html
Note that any SDDCs that are members of the SDDC group are automatically connected to a VMware-managed Transit Gateway behind the scenes. This is depicted in the graphic above as the green-colored TGW icon. This allows for seamless east-west connectivity between SDDCs in the cloud via a high-speed, low-latency connection at up to 50 Gbps.
Provide External Connectivity over Direct Connect
To allow connectivity to the rest of the environment – including other VPCs in AWS or other cloud providers – the Transit Connect must be connected to an AWS Direct Connect Gateway, as seen in the environment diagram above. To complete this task a Direct Connect and Direct Connect Gateway must be provisioned in the customer’s AWS account – this can be accomplished natively in the customer’s AWS console or via the Pureport console.
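If you prefer the command line over the AWS console for this step, the Direct Connect Gateway can be created with the AWS CLI. A sketch is below – the gateway name and ASN are illustrative placeholders, and the Amazon-side ASN must be a private ASN that does not collide with ASNs already in use elsewhere on the fabric:

```shell
# Create a Direct Connect Gateway in the customer account.
# Name and Amazon-side ASN are placeholders; pick a private ASN
# (64512-65534) not already used by your fabric or on-prem routers.
aws directconnect create-direct-connect-gateway \
    --direct-connect-gateway-name multicloud-dxgw \
    --amazon-side-asn 64512
```

The command returns the new gateway's ID, which you will need when associating the VMware Transit Connect in the steps that follow.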
Once the DX and DXGW have been provisioned, the VMware Transit Connect may be connected to the external DXGW. To connect your Transit Connect and SDDC group to your Direct Connect Gateway, view your SDDC Group in the VMware Cloud Console.
Click on “Direct Connect Gateway” and then click “Add Account.”
Enter the ID of your DXGW, as well as the AWS account ID, and the network prefixes you will allow for outgoing communication from your SDDC:
Your new connection to the DXGW will show as “REQUESTED.”
In your AWS console, you will see the pending request for the Direct Connect Gateway:
You will need to accept the request – click the hyperlink for the DXGW showing the pending request.
Highlight the request, and click “Accept.”
The proposal will appear, and you will need to confirm the advertised routes – click “Accept Proposal.”
Once accepted, the state will change to “associating.”
Back in your VMware cloud console, the connectivity status for your Transit Connect will change to “CONNECTED.”
And you should see any local networks in the SDDC advertised on the “Routing” tab:
Once BGP converges over the Direct Connect Gateway, you should begin to see networks advertised from the network provider – in this case, we see the networks from the ‘customer’ datacenter, as well as from the native Google and AWS VPCs:
From the customer datacenter, we run a quick test to our WordPress blog running in VMC on AWS:
(Screen shot of WordPress site from customer prem location)
From the Photon OS running in the VMware Cloud SDDC, we can also ping all the other environments, including the customer datacenter – which in our test environment is actually connected via a VPN from Raleigh, NC – while the cloud environments were created in Oregon (AWS) and Seattle, WA (Google). This is reflected in the significantly longer ping times to the 172.16.1.0/24 network shown below:
Note that all the response times for cloud-to-cloud services are only 15-16 milliseconds, while ping tests to the customer datacenter reflect a much longer RTT – closer to 100 ms. By disabling the VPN to the customer datacenter in NC, we see the routes updated appropriately in the SDDC Group routing table:
If we once again test ICMP RTT with ping to the other cloud services, we see the RTT remains 15-16 ms, confirming that traffic remains routed east-west within the Pureport multicloud fabric, and does not have to be routed back on prem:
As you can see, leveraging a multi-cloud fabric from Pureport provides a robust and high-performance network fabric in the cloud. This alleviates concerns around cross-cloud routing, as all east-west traffic between cloud providers remains in the provider network fabric, rather than returning on premises for routing. A multicloud fabric is an excellent addition to any customer’s hybrid cloud strategy and journey.