Troubleshooting failed VirtualDisk on a Storage Spaces Direct Cluster

In this guide I will explain what you can do to fix a failed virtual disk in a Failover Cluster. In S2D, ReFS writes some metadata to the volume when it mounts it. If it cannot do this for some reason, the cluster will move the virtual disk from node to node, trying to mount it on each one in turn. After the last host has failed to mount it, the cluster gives up: you will see the state below in the event log, and the virtual disk will be in a Failed state.
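The original screenshot of the event is not reproduced here, but you can pull the relevant cluster events yourself. A minimal sketch, assuming the standard Failover Clustering operational log on Windows Server 2016:

```powershell
# Query recent Failover Clustering events on the node that last owned the disk.
# The log name below is the standard one; the message filter is an assumption
# and may need adjusting for your volume's resource name.
Get-WinEvent -LogName "Microsoft-Windows-FailoverClustering/Operational" -MaxEvents 50 |
    Where-Object { $_.Message -match "Cluster Virtual Disk" } |
    Format-List TimeCreated, Id, Message
```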


If you also look in your ReFS event log, you will see entries like the following.
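Since the ReFS events were originally shown as pictures, here is a hedged sketch of how to read them from PowerShell. The provider name is an assumption and may differ by build; `Get-WinEvent -ListProvider *ReFS*` will show what is registered on your nodes:

```powershell
# List recent ReFS events from the System log. 'Microsoft-Windows-ReFS' is the
# usual provider name, but verify it on your own build first.
Get-WinEvent -FilterHashtable @{ LogName = 'System'; ProviderName = 'Microsoft-Windows-ReFS' } -MaxEvents 20 |
    Format-List TimeCreated, Id, Message
```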


Now let’s run a PowerShell command on one of the nodes to look at the virtual disk.
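The output screenshot is missing, but the standard Storage cmdlet for this check is `Get-VirtualDisk`. A sketch, with the expected states as an assumption (a disk in this condition typically reports an unhealthy, detached status, though exact values vary by failure):

```powershell
# Show the health of every virtual disk in the pool. Look for HealthStatus
# "Unhealthy" and OperationalStatus "Detached" on the failed volume.
Get-VirtualDisk |
    Format-Table FriendlyName, HealthStatus, OperationalStatus, DetachedReason -AutoSize
```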


Now let’s run some commands to fix this issue.
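The exact commands in the original screenshots did not survive; the sequence below is a hedged reconstruction using standard Storage and FailoverClusters cmdlets, with `Volume1` and its resource name as placeholders for your own:

```powershell
# Placeholder names -- substitute your own volume and cluster resource names.
$vd  = "Volume1"
$res = "Cluster Virtual Disk ($vd)"

# Take the failed disk out of the cluster so the Storage subsystem can work on it.
Remove-ClusterSharedVolume -Name $res
Remove-ClusterResource -Name $res -Force

# Reattach the virtual disk and kick off a repair.
Connect-VirtualDisk -FriendlyName $vd
Repair-VirtualDisk -FriendlyName $vd

# Watch the repair/regeneration jobs until they complete.
Get-StorageJob
```

As one of the comments below notes, you can wait for the repair storage job to finish before re-adding the disk to the cluster, though it reportedly works without waiting as well.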



Now the virtual disk should look like this in Failover Cluster Manager.


So now we can add the virtual disk back as a Cluster Shared Volume.
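A sketch of this step, assuming the disk was removed from the cluster earlier and that the resource name follows the usual "Cluster Virtual Disk (…)" pattern:

```powershell
# Make the disk a cluster resource again, then promote it to a CSV.
# The resource name is a placeholder -- check Get-ClusterResource for yours.
Get-ClusterAvailableDisk | Add-ClusterDisk
Add-ClusterSharedVolume -Name "Cluster Virtual Disk (Volume1)"
```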



And in Failover Cluster Manager it should look normal again. You can now start up your VMs.
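If you prefer to bring the VM roles back from PowerShell rather than the GUI, a minimal sketch (the `GroupType` filter assumes Windows Server 2016 or later):

```powershell
# Start every virtual machine role in the cluster.
Get-ClusterGroup |
    Where-Object { $_.GroupType -eq 'VirtualMachine' } |
    Start-ClusterGroup
```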




3 thoughts on “Troubleshooting failed VirtualDisk on a Storage Spaces Direct Cluster”

  • January 26, 2018 at 4:47 pm

    Thanks! This is the only reference I have found to this problem.
    I would have wished the events were in text, not pictures – so I found this a bit late; searching for the errors using Google does not surface this article, because they are pictures, I guess.

    Nevertheless – THANKS!
    At least I got back a bit of my trust in S2D after losing complete access to a volume with no explanation.

  • January 1, 2018 at 4:18 pm

    I did not check the storage jobs to see if any were running. Yes, you can probably wait if a repair job is running, but it works without waiting as well.


  • December 31, 2017 at 1:12 am

    Many thanks for this post! It is very “simple”, but very good!

    Just before re-adding the virtual disk to the cluster, you can wait for the Repair storage job to finish.

    Best regards,
    Philippe G.

