How to rebind Mirror or Performance Drives back to S2D Cache device

A few days ago a friend of mine asked me if I had any idea how to get his SSDs and HDDs bound to the NVMe cache devices, as he had added a lot of disks to his S2D cluster over the last 7 months. The normal behavior is that any new disk is automatically bound to the cache.


He noticed things were getting slower and slower. So we ran this command to check the bind list of drives against the NVMe cache devices, and he noticed that only 6 of 27 drives per node were bound to the cache devices. This means that all IO to the unbound drives bypasses the cache, which is not good.
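The command itself was in a screenshot that isn’t reproduced here, so as a hedged starting point: S2D cache devices show up in the standard Storage cmdlets with Usage set to Journal, so a quick per-node inventory of cache vs. capacity drives could look roughly like this (a sketch using stock cmdlets, not necessarily the exact command we ran):

```powershell
# Sketch: inventory cache (Journal) vs. capacity drives on an S2D node.
# Cache devices report Usage = Journal; capacity drives are typically Auto-Select.
Get-PhysicalDisk |
    Sort-Object Usage, FriendlyName |
    Format-Table FriendlyName, SerialNumber, MediaType, Usage,
        @{Label = 'Size(GB)'; Expression = { [math]::Round($_.Size / 1GB) }}
```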

That will give you an output like this:

What you should see is that, with 2 NVMe cache devices, every other SSD and HDD binds to one NVMe device: half of the disks on one NVMe device and the other half on the other. So if you have 20 drives per server, you should see all 20 disks in the list, bound across the 2 NVMe cache devices.

Now if this is not the case, you will need to rebind them. In many cases you can do this without moving any roles off the nodes. This requires that you have disks in the same slots in the enclosures and that you have enough space to fail the data over. Note that this was done on a 2-node cluster.
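Before retiring anything, it’s worth confirming the pool is healthy and no repair jobs are running. A minimal pre-flight check with the standard Storage cmdlets (my own sketch, not commands from the original post) might be:

```powershell
# Sketch: pre-flight checks before rebinding any drive.
Get-StoragePool -IsPrimordial $false |
    Format-Table FriendlyName, HealthStatus, OperationalStatus
Get-VirtualDisk |
    Format-Table FriendlyName, HealthStatus, OperationalStatus
# Should return nothing (or only completed jobs) before you continue.
Get-StorageJob
```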

  • Let’s find the disks that are missing. In the screenshot above you will see the physical disks in the first number series and the cache device in the second. So if, say, disk 10 is not bound, it would for instance have been bound as 10:1.



  • Let’s say you want to rebind disk 10 in both enclosures; you would run the following commands to remove and rebind. Now, I don’t have 10 disks in this server, but let’s imagine 🙂
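The exact commands were in a screenshot that didn’t survive into this copy. One commonly used way to force S2D to re-bind a capacity drive — an assumption on my part, not confirmed by the post — is to retire the disk, let the repair drain, and then reset it so S2D treats it like a newly added drive and auto-binds it to the cache:

```powershell
# Assumption: retire-and-reset to trigger auto-binding. '<serial>' is a placeholder
# for the serial number of the disk you want to rebind.
$disk = Get-PhysicalDisk -SerialNumber '<serial>'

# Retire the disk so Storage Spaces stops allocating to it and repairs elsewhere.
$disk | Set-PhysicalDisk -Usage Retired

# Wait for the resulting repair jobs to finish before touching the disk again.
while (Get-StorageJob | Where-Object JobState -ne 'Completed') {
    Start-Sleep -Seconds 30
}

# Reset the disk and return it to automatic use; S2D should re-admit it
# and bind it to a cache device, just like a newly added drive.
$disk | Reset-PhysicalDisk
$disk | Set-PhysicalDisk -Usage AutoSelect
```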


  • Now the disks you just ran this on should be bound to the cache. If you have more disks, work through them again from top to bottom. Do not start on the next disk until all storage jobs are done.
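Since waiting for storage jobs is the easiest step to get wrong, a small helper that blocks until all jobs have finished (my own sketch, not from the original post) can be dropped in between each disk:

```powershell
# Sketch: block until all storage jobs have completed before the next disk.
function Wait-StorageJobs {
    while ($true) {
        $running = Get-StorageJob | Where-Object { $_.JobState -ne 'Completed' }
        if (-not $running) { break }
        $running | Format-Table Name, JobState, PercentComplete
        Start-Sleep -Seconds 60
    }
}
Wait-StorageJobs
```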


I’d like to thank my good friend Vidar Friis for providing this solution after hitting this issue on his production cluster.
