Yesterday I was deploying a new HA VMM setup for a client where we are going to bare-metal deploy all the physical hosts. Because the migration tool from VMware to Hyper-V requires VMM 2016 LTSB, we were limited to the 2016 GA branch, so I installed VMM 2016 with Update Rollup 6.
I’m not going to Ignite this year, and I’m a bit glad, as I have a huge task at home fixing things up after a water leak in the roof. But I’m sad when I see people on Twitter heading for Ignite, thinking to myself: stop posting those posts 🙂
I wanted to share the sessions I will try to watch live and the ones I will watch on demand afterwards. I will separate them by solution and post them per day, as the schedule for live streaming is not yet available for those of us not going to Ignite.
I have been quiet on the blog front for a while. When we got home from our summer holiday in early August, we found lots of water on the 2nd floor bathroom floor, which had run down a crack in the floor into the living room underneath. I have spent almost every free hour since then figuring out where the water was coming from by removing walls, and finding a lot of rot in the construction where the water came in. The water damage also kicked off a total renovation of the living room, and as the insurance does not cover everything, we will need to do some of the work ourselves.
But today I found some time to write this blog post for you, and it’s about upgrading your S2D cluster to Windows Server 2019 🙂
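As a quick preview of the process: once every node has been upgraded to Windows Server 2019, the cluster functional level and the storage pool version still have to be raised manually. A minimal sketch of those final steps, run from any cluster node:

```powershell
# After ALL nodes are running Windows Server 2019:
# raise the cluster functional level (note: this step is irreversible)
Update-ClusterFunctionalLevel

# Then upgrade the S2D storage pool to the new on-disk version
Get-StoragePool | Where-Object IsPrimordial -eq $false | Update-StoragePool
```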
Lately we have seen a lot of Event ID 5120 errors with a status code of STATUS_IO_TIMEOUT or STATUS_CONNECTION_DISCONNECTED while rebooting a node.
Here is a statement from Microsoft about the issue and what to do when rebooting a node.
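The short version of that guidance: drain the node and put its drives into storage maintenance mode before rebooting, then reverse the steps afterwards. A sketch of the sequence, with a hypothetical node name:

```powershell
$node = "S2D-Node01"  # hypothetical node name

# Drain all roles off the node before maintenance
Suspend-ClusterNode -Name $node -Drain -Wait

# Put the node's drives into storage maintenance mode
Get-StorageFaultDomain -Type StorageScaleUnit |
    Where-Object FriendlyName -eq $node |
    Enable-StorageMaintenanceMode

# ...reboot / patch the node, then reverse the steps:
Get-StorageFaultDomain -Type StorageScaleUnit |
    Where-Object FriendlyName -eq $node |
    Disable-StorageMaintenanceMode
Resume-ClusterNode -Name $node -Failback Immediate
```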
So VMM 1807 was just released yesterday, and there are not a lot of new features in this version; I think the VMM team is focusing on support for RS5 (Windows Server 2019) and new features for it.
This release is an upgrade from 1801 to 1807, so to install 1807 you will need to have 1801 already installed.
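Before starting, it is worth confirming which version is actually installed. A minimal sketch, assuming the VMM PowerShell module is available on the management server (property name per my recollection of the module):

```powershell
# Connect to the local VMM management server and read its product version
Import-Module VirtualMachineManager
$vmm = Get-SCVMMServer -ComputerName localhost
$vmm.ProductVersion   # confirm this is a 1801 build before applying 1807
```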
In this post we will talk about how to extend your existing S2D cluster and what to think about when doing so.
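Mechanically, adding a node is a single cmdlet; the planning around it is the hard part. A minimal sketch, with a hypothetical node name:

```powershell
# Add the new node; S2D claims its eligible drives automatically
Add-ClusterNode -Name "S2D-Node05"   # hypothetical node name

# Optionally trigger a rebalance of existing data across the enlarged pool
Get-StoragePool -FriendlyName "S2D*" | Optimize-StoragePool

# Monitor the rebalance jobs
Get-StorageJob
```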
In my previous post we built a 4-node S2D cluster based on the following hardware.
Hi everyone, I am back with another Storage Spaces Direct post. This time I would like to talk about planning S2D clusters, from a 2-node up to a 16-node cluster, and what you need to think about when doing your planning. This will be a multi-part series covering 4 nodes, adding additional nodes, all-flash, hybrid, MRV, and the big one: a 16-node setup.
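One example of why node count matters in the planning: the resiliency types available to you depend on it (two-way mirror at two nodes, three-way mirror at three or more, dual parity at four or more). A sketch of creating a three-way mirror volume on a cluster with four or more nodes, with illustrative names:

```powershell
# With 4+ nodes you can choose between mirror and parity resiliency;
# here a three-way mirror volume on the S2D pool (names are illustrative)
New-Volume -StoragePoolFriendlyName "S2D*" `
           -FriendlyName "Volume01" `
           -FileSystem CSVFS_ReFS `
           -ResiliencySettingName Mirror `
           -Size 1TB
```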
So I have been working on this case for about 4 weeks, where a client has been having issues with CSVs becoming unhealthy and cache drives going into Lost Communication during backup. We have an MS support case, but as the client does not have Premier Support, it’s taking a while to get through 1st line and have it escalated.
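While the case works its way through support, the symptoms can at least be tracked from PowerShell. A minimal sketch of the kind of checks that surface them:

```powershell
# Any drives not reporting OK? (Lost Communication shows up here)
Get-PhysicalDisk | Where-Object OperationalStatus -ne "OK" |
    Select-Object FriendlyName, SerialNumber, OperationalStatus, HealthStatus

# Current faults as reported by the S2D Health Service
Get-StorageSubSystem Cluster* | Debug-StorageSubSystem

# State of the Cluster Shared Volumes
Get-ClusterSharedVolume | Select-Object Name, State
```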
I have been starting over and over on this blog post since November 2017, which coincided with me changing jobs, which I did again in April 2018 to work solely with S2D, VMM, Azure Stack and other datacenter products. So I thought it was time to get this done.
So a while ago I started a blog post series I had hoped to update on a regular basis. That did not happen, as the new job was quite hectic and I was not able to get the funding I needed for the new S2D lab. So I ended up getting the old HP DL380 G6 servers from my previous job.
After the Nordic Infrastructure Conference I was approached by a company about an exciting new job focusing on S2D, VMM, SCOM and Azure Stack. After some interviews I decided to join CTGlobal, which I started with in April, so I thought it was time to revive this series. Since then my S2D cluster has been running very well, and I have upgraded it to Insider Build 17083, which is the last Insider Build supported by VMM 1801. A new blog post is coming as soon as the VMM team releases a build for RS5 / Windows Server 2019.