Hi everyone, I am back with another Storage Spaces Direct post. This time I would like to talk to you about planning S2D clusters, from a 2-node up to a 16-node cluster, and what you need to think about when doing your planning. This will be a multi-part series covering 4-node clusters, adding additional nodes, All-Flash, Hybrid, MRV and the big one: a 16-node setup.
Storage Spaces
Patching Storage Spaces Direct
Hello Everyone
I have been starting over and over on this blog post since November 2017, which coincided with me changing jobs, which I did again in April 2018 to work solely with S2D, VMM, Azure Stack and other datacenter products. So I thought it was time to get this done.
Storage Spaces Direct monitoring with Barton Glass
A while ago I gave you a first look at Dataon MUST, Dataon's monitoring system that comes with their S2D servers.
Today I want to give you an insight into a new offering that is coming, Barton Glass. Barton Glass is built by Barton Systems, a member of the Cronos group and 2016 Microsoft Partner of the Year in Belgium.
Build Your own DIY home or lab Storage Spaces Direct Cluster ordered from eBay Part 1
So I built a 2-node S2D cluster a while ago at home on some old HP G6 nodes I got cheap, but I have decided to get rid of that and set up a new 3-node cluster with bits I can find on eBay, reusing disks I already have. This will be a multi-part blog post written as the parts are ordered, arrive and get built. I will provide a step-by-step guide to building, installing, configuring, monitoring and troubleshooting Storage Spaces Direct, including the switch config.
Replacing a failed disk in a Storage Spaces Direct pool failed
So yesterday I had to replace a failed HDD in one of our S2D clusters. After replacing the drive and removing the failed drive from the cluster, I ran Get-PhysicalDisk and noticed I had no disks with CanPool = True. This is normal, as S2D detects the new disk, adds it to the storage pool and rebalances the pool automatically.
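For reference, this is a rough sketch of the checks I usually run after a swap like this, assuming the in-box Storage cmdlets on Server 2016 (the property list is just what I find useful, not a requirement):

# List every physical disk with the properties that matter after a swap.
# On an S2D cluster a healthy replacement should quickly go to CanPool = False
# once it has been claimed by the pool.
Get-PhysicalDisk |
    Select-Object FriendlyName, SerialNumber, MediaType, CanPool, HealthStatus, OperationalStatus |
    Sort-Object FriendlyName | Format-Table -AutoSize

# Confirm the S2D pool itself is healthy after the new disk has been added.
Get-StoragePool -IsPrimordial $false | Select-Object FriendlyName, HealthStatus, OperationalStatus

# Watch the rebalance/repair jobs that kick off once the disk is in the pool.
Get-StorageJob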
Dataon S2D-3212 HyperConverged Cluster
Updated 27 February
We have been testing Storage Spaces Direct for a while on our eBay cluster, running development and some production systems, such as the second Exchange node, a mediation server and our VMM server.
We have been looking to replace our current Hyper-V solution, which consists of HP BL465c G8 and BL490 G7 blade servers attached to an HP P2000 G3 MSA over iSCSI. This has become slower and slower as we have set up more virtual machines. The MSA was a 12-disk shelf with 11 active disks and one spare. One 15K disk gives about 170 IOPS, so at best that is a whopping 1,870 IOPS. Under normal load it would use about 1,200-1,500 IOPS, so not a lot of spare IOPS. We had one per cluster.
Most of you know what S2D (Storage Spaces Direct) is; if you don't, go look at Cosmos Darwin's post over at TechNet to get some good insight into S2D.
What I am going to focus on in this blog is the new Dataon HyperConverged server. Back at Ignite 2016, Dataon released their first offering, the S2D-3110 all-flash solution, pumping out 2.6 million IOPS in a 1U form factor.
How to replace an S2D cache device: from SSD to NVMe
A friend of mine asked me about this a while ago, as he had set up his S2D cluster with SSDs and HDDs only, so the SSDs became the journal drives (caching drives). Now he wanted to swap those SSDs out for NVMe disks he had available. Yesterday he did the swap and it worked great.
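If you want to see what your own cluster is using as cache before attempting something similar, here is a minimal sketch using the in-box Storage and FailoverClusters cmdlets on Server 2016 (nothing here is specific to his setup):

# Cache devices show up with Usage = Journal, so this lists the current caching
# drives on every node along with their media and bus type.
Get-PhysicalDisk | Where-Object Usage -eq 'Journal' |
    Select-Object FriendlyName, SerialNumber, MediaType, BusType, HealthStatus |
    Format-Table -AutoSize

# The cluster-wide cache settings (cache state and the cache mode used for
# SSD and HDD capacity drives) can be checked with:
Get-ClusterStorageSpacesDirect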
Bug in KB3206632 affecting the storage pool for DPM with Modern Backup Storage
After System Center 2016 came out, I went straight on to setting up DPM 2016 with Modern Backup Storage. This had been running fine for a while, though the DPM console was a bit slow to respond after doing a synchronization.
How to replace an NVMe caching device on a Storage Spaces Direct cluster
After my initial failure replacing an NVMe caching card, where I hit a bug in the 2016 build I was on, I replaced another one today. We started our cluster out with Intel 750 drives, and those NVMe PCIe cards are only rated for about 70 GB of writes per day, so I decided to replace them with the Intel DC P3600. The first attempt failed, as can be seen here.
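For context, the general retire-and-remove flow for the old caching device looks roughly like this. This is only a sketch; the serial number and pool name below are placeholders, not values from our cluster:

# Identify the old caching card by serial number and mark it as retired
# so S2D stops using it (the serial number here is a placeholder).
$old = Get-PhysicalDisk -SerialNumber 'PLACEHOLDER-SERIAL'
$old | Set-PhysicalDisk -Usage Retired

# Wait for the repair/rebalance jobs to finish before touching the hardware.
Get-StorageJob

# Once the jobs are done, remove the device from the pool, then pull the card.
Remove-PhysicalDisk -PhysicalDisks $old -StoragePoolFriendlyName 'S2D Pool'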
Troubleshooting Storage Spaces Direct
Over the last few weeks we have been having some issues with our Storage Spaces Direct test/dev cluster. To start off, I will explain what happened and what went wrong.
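Before getting into the details, these are the first-pass health checks I start with on any misbehaving S2D cluster. It is a sketch using the standard Storage and FailoverClusters cmdlets on Server 2016, not a full troubleshooting guide:

# Health of the virtual disks and the S2D pool.
Get-VirtualDisk | Select-Object FriendlyName, HealthStatus, OperationalStatus
Get-StoragePool -IsPrimordial $false | Select-Object FriendlyName, HealthStatus, OperationalStatus

# Any running or queued repair/rebalance jobs.
Get-StorageJob

# Per-disk state across all nodes; look for anything that is not Healthy/OK.
Get-PhysicalDisk |
    Select-Object FriendlyName, SerialNumber, HealthStatus, OperationalStatus, Usage |
    Sort-Object HealthStatus

# Cluster nodes and the Health Service view of the storage subsystem.
Get-ClusterNode
Get-StorageSubSystem Cluster* | Get-StorageHealthReport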