Build Your Own DIY Home or Lab Storage Spaces Direct Cluster, Ordered from eBay — Part 1

So I built a 2-node S2D cluster a while ago at home on some old HP G6 nodes I got cheap. But I have decided to get rid of that and set up a new 3-node cluster with bits I can find on eBay, reusing disks I already have. This will be a multi-part blog post, written as the parts are ordered and arrive and during the build process. I will provide a step-by-step guide to building, installing, configuring, monitoring and troubleshooting Storage Spaces Direct, including the switch config.

Read more

Replacing a Failed Disk in a Storage Spaces Direct Pool

So yesterday I had to replace a failed HDD in one of our S2D clusters. After inserting the replacement drive and removing the failed drive from the cluster, I ran Get-PhysicalDisk and noticed I had no disks with CanPool = True. This is normal, as S2D will detect the new disk and add it to the storage pool itself, then rebalance the pool correctly.
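A quick way to confirm the pool has claimed the new drive is a couple of standard Storage cmdlets (a minimal sketch; run from any cluster node):

```powershell
# List any disks that are still unclaimed - on a healthy S2D cluster this
# should come back empty, since new disks are claimed automatically
Get-PhysicalDisk | Where-Object CanPool -Eq $true

# Watch the repair/rebalance jobs S2D kicks off after the disk is added
Get-StorageJob
```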

Read more

DataON S2D-3212 HyperConverged Cluster

Updated 27 Feb

We have been testing Storage Spaces Direct for a while on our eBay cluster, running development and some production systems on it, such as the second Exchange node, a mediation server and our VMM server.

We have been looking to replace our current Hyper-V solution, which consists of HP BL465c G8 and BL490 G7 blade servers attached to an HP P2000 G3 MSA over iSCSI. This has become slower and slower as we have set up more virtual machines. It was a 12-disk shelf with 11 disks active and one spare. One 15k disk gives about 170 IOPS, for a whopping 1870 IOPS at max speed. Under normal load it would use about 1200-1500 IOPS, so not a lot of spare IOPS. We had one per cluster.
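The back-of-the-envelope math above (active spindles times per-disk IOPS, with the spare contributing nothing) can be sketched like this; the 170 IOPS figure for a 15k drive is the rule of thumb used in the text:

```python
# Rough aggregate IOPS estimate for a disk shelf: only active spindles
# contribute, hot spares do not.
def shelf_iops(total_disks: int, spares: int, iops_per_disk: int) -> int:
    return (total_disks - spares) * iops_per_disk

max_iops = shelf_iops(total_disks=12, spares=1, iops_per_disk=170)
print(max_iops)  # 1870 - mostly consumed by the 1200-1500 IOPS normal load
```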

Most of you know what S2D (Storage Spaces Direct) is; if you don't, go look at Cosmos Darwin's post over at TechNet to get some good insight into S2D.

What I am going to focus on in this blog post is the new DataON hyper-converged server. Back at Ignite 2016, DataON released their first offering, the S2D-3110 all-flash solution, pumping out 2.6 million IOPS in a 1U form factor. Read more

How to Replace an NVMe Caching Device on a Storage Spaces Direct Cluster

After my initial failure replacing an NVMe caching card, where I hit a bug in the Windows Server 2016 build I was on, I replaced another one today. We had started our cluster out with Intel 750 drives, and these NVMe PCIe cards are only rated for 70 GB of writes per day, so I decided to replace them with the Intel DC P3600. The first replacement failed, as can be seen here.

Read more

Troubleshooting a Failed VirtualDisk on a Storage Spaces Direct Cluster

In this guide I will explain what you can do to fix a failed virtual disk in a failover cluster. In S2D, ReFS writes some metadata to the volume when it mounts it. If it can't do this for some reason, the virtual disk will jump from node to node until it has tried to mount on the last host. Then it will fail, you will get this state in the event log, and the virtual disk will be in a Failed state.
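A first step is to find the unhealthy virtual disk and attempt a repair with the standard Storage cmdlets (a sketch; the volume name here is a hypothetical example):

```powershell
# List virtual disks that are not reporting Healthy
Get-VirtualDisk | Where-Object HealthStatus -Ne Healthy

# Attempt a repair of the affected virtual disk
# ("Volume01" is a placeholder - use your own FriendlyName)
Repair-VirtualDisk -FriendlyName "Volume01"
```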

Read more

Troubleshooting Performance Issues on Your Windows Storage and Storage Spaces Direct

In this guide I will give you a quick overview of how to troubleshoot your Storage Spaces, covering ordinary storage pools with and without tiering as well as Storage Spaces Direct. I will base the troubleshooting on an issue we had with our test Storage Spaces Direct cluster.

What happened was that we started to experience really bad performance on the VMs. Response times were going through the roof; we saw response times of 8000 ms on normal OS operations. We traced it down to faulty SSD drives: Kingston V310 consumer SSDs. These do not have power-loss protection, and that's a problem, as S2D (and Windows storage in general) wants to write to a safe place. The caching on these Kingston drives worked for a while, but after too much writing it failed. You can read all about SSDs and power-loss protection here.

Read more