I’m not going to Ignite this year, and I’m a bit glad, as I have a huge task at home fixing things up after a water leak in the roof. But I’m also sad as I see people on Twitter heading for Ignite, thinking to myself, stop posting those posts 🙂
I wanted to share the sessions I will be trying to watch live and the ones I will watch on demand afterwards. I will separate them by solution and post one list per day, as the live-streaming schedule is not yet available for those of us not going to Ignite.
I have been quiet on the blog front for a while. When we got home from our summer holiday in early August, we found lots of water on the 2nd-floor bathroom floor, which had run down a crack in the floor into the living room underneath. So I have spent almost every free hour since then figuring out where the water was coming from by removing walls, and finding a lot of rot in the construction where the water came in. The water damage in the living room also kicked off a total renovation of that room, and as the insurance does not cover everything, we will need to do some of the work ourselves.
But today I found some time to write this blog post for you, and it’s about upgrading your S2D cluster to Windows Server 2019 🙂
Hi everyone, I am back with another Storage Spaces Direct post. This time I would like to talk about planning S2D clusters, from a 2-node up to a 16-node cluster, and what you need to think about when doing your planning. This will be a multi-part series covering 4 nodes, adding additional nodes, All-Flash, Hybrid, MRV, and the big one: a 16-node setup.
So I have been working on this case for about 4 weeks, where a client has been having issues with CSVs becoming unhealthy and cache drives going into Lost Communication during backup. We have an MS support case open, but as the client does not have Premier Support, it’s taking a while to get through 1st line and get it escalated.
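When troubleshooting symptoms like this, the first thing I grab is a health snapshot from one of the cluster nodes. This is just a minimal sketch using the standard Storage Spaces and cluster cmdlets, not the exact commands from the support case:

```powershell
# Run on any node of the S2D cluster.

# Overall health of the clustered storage subsystem
Get-StorageSubSystem Cluster* | Get-StorageHealthReport

# Physical disks that are not healthy
# (e.g. cache drives showing Lost Communication)
Get-PhysicalDisk | Where-Object HealthStatus -ne 'Healthy' |
    Select-Object FriendlyName, SerialNumber, MediaType, OperationalStatus, HealthStatus

# Virtual disks backing the CSVs, and any repair jobs in flight
Get-VirtualDisk | Select-Object FriendlyName, OperationalStatus, HealthStatus
Get-StorageJob
```

Correlating the timestamps of these states with the backup window is usually what narrows down whether the backup load itself is triggering the disconnects.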
I have been starting over and over on this blog post since November 2017, which coincided with me changing jobs, which I did again in April 2018 to work solely with S2D, VMM, Azure Stack, and other datacenter products. So I thought it was time to get this done.
So a while ago I started a blog post series that I had hoped to update on a regular basis. That did not happen, as the new job was quite hectic and I was not able to get the funding I needed for the new S2D lab. So I ended up getting the old HP DL380 G6 servers from my previous job.
After the Nordic Infrastructure Conference I was approached by a company about an exciting new job working with S2D, VMM, SCOM, and Azure Stack as the main focus. After some interviews I decided to join CTGlobal, which I started with in April. So now I thought it was time to revive this series. Since then my S2D cluster has been running very well, upgraded to Insider Build 17083, which is the last Insider Build supported on VMM 1801. A new blog post is coming as soon as the VMM team releases a build for RS5 / Windows Server 2019.
4 weeks ago I started my new job at CTGlobal here in Norway, where I will be focusing on S2D, VMM, OMS, Azure Stack, and other datacenter solutions. As we are deploying S2D with VMM, we wanted to build an easy-to-use but robust way of configuring VMM and deploying the physical hosts for S2D, configuring them all the way until we create the cluster and enable S2D. Deploying a host requires BMC deep discovery of the host, which detects everything from disks and controllers to network cards and so on. In my home lab I have Mellanox CX3 cards and use them for both host management and SMB, so I am using a SET switch for this and have configured it in the script.
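For reference, a converged SET (Switch Embedded Teaming) setup like the one in my lab boils down to something like the sketch below. The adapter and vNIC names here are assumptions for illustration, not the exact names my script uses:

```powershell
# Minimal sketch: create a SET switch over two Mellanox CX3 ports and
# add host vNICs for management and SMB. Names are hypothetical.
New-VMSwitch -Name 'SETswitch' `
    -NetAdapterName 'SLOT 3 Port 1', 'SLOT 3 Port 2' `
    -EnableEmbeddedTeaming $true -AllowManagementOS $false

# Host vNICs: one for management, two for SMB (one per physical port)
Add-VMNetworkAdapter -ManagementOS -Name 'Mgmt' -SwitchName 'SETswitch'
Add-VMNetworkAdapter -ManagementOS -Name 'SMB1' -SwitchName 'SETswitch'
Add-VMNetworkAdapter -ManagementOS -Name 'SMB2' -SwitchName 'SETswitch'

# Enable RDMA on the SMB vNICs so S2D east-west traffic uses it
Enable-NetAdapterRdma -Name 'vEthernet (SMB1)', 'vEthernet (SMB2)'
```

Doing the same thing through VMM means modelling this as a logical switch with an uplink port profile, which is exactly what the deployment script automates.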