Hello everyone. In this video, we will see how you can build a hybrid cloud solution with FlexPod and NetApp Cloud Volumes ONTAP (CVO) in GCP. The NetApp Data Fabric enables customers to move resources securely from their on-premises data center to any public cloud where NetApp has a cloud offering, and for this video we are using GCP. In this demo, we will start by creating the on-prem FlexPod configuration using a Cisco Intersight workflow. We will then deploy a Connector, which is an instance deployed in any cloud platform or on-prem environment; the Connector enables Cloud Manager to manage resources and processes within the public cloud environment. We will use a Cisco Intersight workflow and HashiCorp Terraform to establish a NetApp SnapMirror replication relationship between the on-prem and cloud instances. We will also examine in detail how we can orchestrate and automate the data replication and disaster recovery solution between the FlexPod data center and Cloud Volumes ONTAP.

This is the logical topology of the solution. The replication of workload data from FlexPod to CVO is handled by NetApp SnapMirror, and the overall process is orchestrated using Cisco Intersight Cloud Orchestrator for both the on-prem and cloud environments. Cisco Intersight Cloud Orchestrator consumes the Terraform resource provider for NetApp Cloud Manager to carry out the operations related to CVO deployment and to establish the data replication relationship.

Let's begin with Cisco Intersight. Here I have imported a workflow which will create an SVM and an NFS LIF, and a policy is added to the SVM within the on-prem FlexPod. On clicking Execute, Intersight presents a wizard that collects the user input parameters. As you can see, the workflow execution is in progress. Once it completes successfully, let's log in to ONTAP System Manager to check the SVM that was created by the workflow.

Before deploying the Connector in NetApp Cloud Manager, let's log in to GCP and make sure we have a VPC, a subnet, and a service account with the required roles attached to deploy the Connector. Let's log in to Cloud Manager, click Add Connector, and select GCP. Enter the connector name, project name, and the service account details created in the previous step. Let's enter the location details such as region, zone, VPC, and subnet. In the network settings, we have disabled the public IP and entered the HTTP proxy value. We have set the firewall policy to allow HTTP, HTTPS, and SSH from anywhere. Let's quickly review the details. It will take around 5 to 7 minutes to deploy the Connector; you can click Show Details to view the progress. As you can see, the Connector is deployed.

Now let's go back to Cisco Intersight and pick another workflow to set up the SnapMirror replication relationship between FlexPod and Cloud Volumes ONTAP, also called CVO. As you execute the workflow, user input details such as volume size, data center, cluster, NFS LIF IP, and so on are populated into the wizard. We will also need to provide our on-prem and GCP details. Finally, we can execute the workflow; it takes a couple of minutes to complete. This workflow creates a workspace in Terraform Cloud, which configures CVO in GCP and then creates the SnapMirror replication relationship between the on-prem cluster and CVO. Let's go back to Cloud Manager and navigate to the timeline for a detailed update, which takes a few minutes to complete. Once CVO is created, we can see that the replication process has already started. Once the replication is successful, we can check the status in Terraform Cloud.
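For readers who want to reproduce the Terraform portion outside of Intersight Cloud Orchestrator, the run reduces to a standard Terraform workflow. The sketch below is illustrative only: the working directory, variable name, and plan file name are assumptions rather than values from the demo, and the actual resource definitions come from the NetApp Cloud Manager (netapp-cloudmanager) provider configuration that the workflow supplies to Terraform Cloud.

```
# Illustrative only: directory and variable names are placeholders.
# The configuration inside is assumed to define the CVO-on-GCP and
# SnapMirror resources from the NetApp Cloud Manager provider.
cd cvo-gcp-snapmirror/

# Supply the Cloud Manager refresh token expected by the provider
# (exposed here as a Terraform input variable named for illustration).
export TF_VAR_cloudmanager_refresh_token="<refresh-token>"

terraform init                  # download provider plugins
terraform plan -out=cvo.plan    # preview CVO deployment and SnapMirror setup
terraform apply cvo.plan        # deploy CVO in GCP and create the relationship
```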
Let's mount the volume to a VM that is running on the on-prem FlexPod. We will use sample test data, which will be replicated to CVO in the cloud. We are going to ensure data integrity by running a checksum, and you will see us do the same on the target volume by the end of this video. To simulate a disaster, we are going to stop the SVM through NetApp System Manager. We will verify the same in NetApp Cloud Manager, where you can see Cloud Manager losing connectivity to the on-prem FlexPod. We are going to break the replication relationship and promote the destination Cloud Volumes ONTAP to production. We will navigate to the CVO working environment, and we can see our destination volume. Let's edit the volume to make it accessible using a custom export policy, and copy the ready-to-use mount command. The next step is to mount the target volume, which is the volume replicated to the CVO, to a VM running in the GCP setup. We can mount the volume the same way we did for the on-prem instance, and you will see the test file that was replicated from the source, or on-prem, instance. As you can see, the checksum matches between the on-prem and cloud instances, which shows the data is identical between the two.

To summarize, we used Terraform to automate the deployment of CVO in GCP and created a SnapMirror replication relationship between on-prem and CVO. Later, we broke the replication relationship and promoted the CVO to production to access the mirrored data from the VM running in GCP. We have also validated the data integrity between on-prem and CVO in the cloud. Customers can further expand any of the tasks created in this demo to meet the business outcomes they are looking for. Thank you.
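The mount and checksum verification shown in the video follow standard NFS steps. A minimal sketch is below; the LIF IP, export path, mount point, and file name are placeholders, not the values used in the demo.

```
# On the GCP VM: mount the promoted CVO volume over NFS.
# The LIF IP and export path come from the ready-to-use mount command
# shown in Cloud Manager (placeholders here).
sudo mkdir -p /mnt/cvo_vol
sudo mount -t nfs <cvo-nfs-lif-ip>:/dest_vol /mnt/cvo_vol

# Compare the checksum of the replicated test file against the value
# recorded earlier on the on-prem FlexPod volume.
md5sum /mnt/cvo_vol/testfile
```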
With the launch of FlexPod XCS, customers will have the ability to connect their on-prem infrastructure to AWS, Azure, and GCP. Learn how to connect your on-prem environment to a CVO instance in GCP using Cisco Intersight.