On the left-hand side we have the first broker of cluster one. This cluster is running Confluent version 7.2.1. On the NFS client side, that is, on the brokers, we have installed the new kernel with the custom changes we have made, and on the NFS server side we are using NetApp Cloud Volumes ONTAP (CVO); the mount is done via NFS version 4.1. On the right-hand side we have the first broker of cluster two. This is again running Confluent version 7.2.1, but this time no kernel changes are installed on the broker side, and on the back end, on the NFS server side, we are using a custom NFS server instead of the NetApp CVO instance. This mount is done via NFS version 3.

Let's quickly check the mount types on both brokers of the respective clusters. On the left-hand side, cluster one, we can see that the mount is done via NFS version 4. On the right-hand side, on broker one of cluster two, we can see that the mount is done via NFS version 3.

As previously mentioned, for this demonstration we are going to quickly create a new topic called demo-topic. So let's quickly run the command. The topic is now created with a replication factor of 2 and 4 partitions. We can see it is created on cluster one, and we will go ahead and create the same topic on the second cluster. Here as well the demo topic has been created with the same replication factor of 2 and 4 partitions.

Now that the topics are created, we will run the command to get the topic description. This command tells us the partition distribution of the newly created topic across the cluster. Let's run it for cluster one. After pulling up these details we can see that broker one acts as the leader for partition three of the new topic, and broker one also holds a replica of partition zero. As we created four partitions, each of the four brokers of this cluster is the leader for one partition or another. On cluster two, broker one acts as the leader for partition two of this topic and also holds a replica of partition one. We will go ahead and verify the same on disk: on broker one of the first cluster, on the left-hand side, partition zero and partition three reside, and on the right-hand side, for the second cluster, broker one has partition one and partition two.

Now we are going to push some messages into this newly created topic on both clusters, and for that purpose we will use Kafka's built-in kafka-producer-perf-test tool. We will push approximately 3 million messages into the demo topic on each cluster. The command for the first cluster is on the left-hand side, executed from broker one; we can see that the number of records is 3 million. The command for the second cluster is on the right-hand side, again run from broker one and again pushing 3 million records.

After the data has been produced for the topic, we quickly reset both the left-hand and right-hand windows. Before we proceed, we will quickly perform a health check on broker one of each cluster. Broker one is at the IP address ending in 160, and Kafka is running on the default port 9092.
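For reference, a minimal sketch of how the mount type could be checked on each broker; the Kafka log path used in the filter is an assumption, not shown in the demo:

  # Show NFS mounts and the negotiated protocol version (look for vers=4.1 or vers=3)
  nfsstat -m
  # Or filter the mount table for the Kafka data directory (path is an assumption)
  mount | grep /var/lib/kafka/data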
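The topic creation and description steps described above would look roughly like the following with the standard Kafka CLI tools; the bootstrap address is a placeholder and the script names assume the Confluent Platform packaging:

  # Create the demo topic with replication factor 2 and 4 partitions
  kafka-topics --bootstrap-server <broker1>:9092 --create --topic demo-topic --replication-factor 2 --partitions 4
  # Show the partition-to-broker distribution (leaders, replicas, ISR)
  kafka-topics --bootstrap-server <broker1>:9092 --describe --topic demo-topic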
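A sketch of the producer perf-test invocation used to push roughly 3 million messages; only the record count is taken from the demo, and the record size and unthrottled throughput are assumptions:

  kafka-producer-perf-test --topic demo-topic --num-records 3000000 --record-size 1024 --throughput -1 --producer-props bootstrap.servers=<broker1>:9092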
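The health check itself is a simple TCP connect to the Kafka listener; the host names below are placeholders for the brokers whose addresses end in 160 and 198:

  telnet <cluster1-broker1> 9092
  telnet <cluster2-broker1> 9092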
So we'll just quickly telnet to it, and we get a positive response: it connects. On the host ending in 198, that is, broker one of cluster two, we also get a positive response and it is able to connect successfully.

Now that we have created a topic, populated messages into it using the producer perf test, and performed a health check on broker one of each cluster, what we will do next is start our steps towards partition reassignment. When we initially described the topic we could see a certain partition distribution over the cluster; what we aim to do is redistribute those partitions across the cluster for this particular topic.

We will now run the command to generate the proposed partition reassignment configuration for us. As you can see, it has already been generated on the left-hand side, and we will go ahead and do the same on the right-hand side for cluster two. This shows us two things: the current partition replica assignment and the proposed partition reassignment configuration. If you compare the current and the proposed on the left-hand side, as one example, you can see that for the demo topic, partition three is currently held on broker one and broker three of this four-node cluster, but the proposed partition reassignment suggests placing the replicas of partition three on brokers two and four. This means that partition three will be deleted from broker one and transferred to broker two. You can draw other examples as well if you compare the current and the proposed partition assignment configurations, and similarly on the right-hand side.

We will now take copies of this proposed partition reassignment, as we will need this JSON as an input to the partition reassignment command that actually performs the redistribution.

Now that we have everything ready, we will go ahead and start the actual partition reassignment on broker one of cluster one by running the command. Here it is, and we can see that it has started the partition reassignment; you can read the output there. The reassignment JSON file we copied earlier has been taken as the input. Let's do the same on broker one of cluster two. Here again I have provided the proposed partition reassignment JSON as input, and here also the reassignment has started.

We will quickly run the verify command to check the status. For cluster one, upon running the command, we can see that the reassignment is complete for all four partitions, and we run the same command on cluster two as well. Here it also says that the reassignment of the partitions is complete. But let's see what has happened under the hood. To check that, we will run the watch command on the demo topic's partition directories. On the left-hand side we can see that the old partitions have been cleanly deleted and now partition zero and partition two reside on this broker. But on the right-hand side we can see that it was not able to perform a clean delete: instead there is an extra directory left over from the old partition, which was marked for deletion but could not be removed. We also see .nfs files, the result of the NFS silly rename that occurs whenever a file that is still open is deleted over NFS.
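A sketch of how the proposed reassignment could be generated; the contents of the topics-to-move JSON and the broker ID list are assumptions based on the four-node cluster and single topic described in the demo:

  # topics-to-move.json contains: {"version": 1, "topics": [{"topic": "demo-topic"}]}
  kafka-reassign-partitions --bootstrap-server <broker1>:9092 --generate --topics-to-move-json-file topics-to-move.json --broker-list "1,2,3,4"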
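The proposed JSON copied from the previous step is then fed back in to execute and verify the reassignment; the file name here is a placeholder:

  # Start the reassignment using the copied proposal
  kafka-reassign-partitions --bootstrap-server <broker1>:9092 --execute --reassignment-json-file reassignment.json
  # Check progress until every partition reports the reassignment as complete
  kafka-reassign-partitions --bootstrap-server <broker1>:9092 --verify --reassignment-json-file reassignment.json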
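Watching the topic's log directories makes the difference visible; the log path below is an assumption (the Confluent default data directory), and on the NFSv3 cluster you would expect leftover partition directories marked for deletion along with .nfsXXXX silly-rename files:

  watch -n 1 'ls -la /var/lib/kafka/data/ | grep demo-topic'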
But now let's check out the actual problem and get back to the health check. Let's quickly run the health check on broker one of cluster one, on the left-hand side, and yes, it connects successfully; the broker is still alive. But when we do the same on broker one of the second cluster, it has completely crashed and we are not able to connect.
Watch how silly rename is no longer an issue for Kafka workloads with NFS Datastores. Learn about functional validation of the silly rename issue in Apache Kafka workloads running on NFS storage.