Welcome back for Part 5 of this System Center VMM series. In this post I will write about Logical Switches.
If you do not know what a Logical Switch is, it is basically the same as a Standard Switch created with New-VMSwitch on a Windows Server OS, except that it is defined and created by VMM so that you can manage it from VMM. Starting with VMM 2019, you can also convert a Standard Switch into a Logical Switch.
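For comparison, this is what creating such a Standard Switch directly on a host looks like; a minimal sketch, where the switch and adapter names are just placeholders for your environment:

```powershell
# Create a Standard (external) Switch on the host itself, outside of VMM.
# "vSwitch01" and "Ethernet 2" are hypothetical names - adjust to your host.
New-VMSwitch -Name "vSwitch01" -NetAdapterName "Ethernet 2" -AllowManagementOS $true
```

A switch created this way shows up in VMM as a Standard Switch; the point of a Logical Switch is that VMM owns the definition and can push it consistently to all hosts.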
As I wrote in my previous post about DPM 2019, a few days ago the System Center team published a post on the Windows Server blog about the upcoming 2019 release this month.
The VMM 2019 release brings quite a few new features, most of them related to software-defined technologies like SDN and S2D. I will go through the S2D bits and one cool new tenant feature that has been requested a lot.
Since mid-August there has been a rebalance issue with the storage in a Windows Server 2019 S2D storage pool. Whenever you rebooted a node, S2D would kick off a repair, moving data off the "down" node. If a node was down for hours, this resulted in a lot of data being moved off that node.
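You can watch this behavior happen on a cluster node with the built-in storage cmdlets; a small sketch, assuming you run it on one of the S2D nodes:

```powershell
# List the repair/rebalance jobs S2D kicks off after a node goes down
Get-StorageJob |
    Sort-Object Name |
    Format-Table Name, JobState, PercentComplete, BytesProcessed, BytesTotal

# Check virtual disk health while the repair is running
Get-VirtualDisk |
    Format-Table FriendlyName, HealthStatus, OperationalStatus
```

If you see long-running repair jobs with large BytesTotal values right after a reboot, you are looking at exactly this data movement.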
A while back, Microsoft changed their recommendation for Priority Flow Control for clusters running RDMA with SMB3 in any form, whether that is old-fashioned Storage Spaces with Scale-Out File Servers or Storage Spaces Direct. With RoCE RDMA, the recommendation has always been to use Data Center Bridging (DCB) and Priority Flow Control (PFC) for SMB3 traffic.
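A typical DCB/PFC setup for SMB Direct over RoCE looks something like the following; this is a hedged sketch only, run per node, where priority 3, the 50% bandwidth reservation, and the adapter names "NIC1"/"NIC2" are common conventions and assumptions, not values from this post - your switch fabric must be configured to match:

```powershell
# Tag SMB Direct traffic (TCP/RDMA port 445) with 802.1p priority 3
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3

# Enable PFC only for the SMB priority; disable it for everything else
Enable-NetQosFlowControl -Priority 3
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7

# Reserve bandwidth for the SMB traffic class (assumed 50%)
New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS

# Apply DCB settings to the RDMA-capable adapters (hypothetical names)
Enable-NetAdapterQos -Name "NIC1","NIC2"
```

Remember that PFC is end-to-end: the same priority must be enabled on the physical switches, or RoCE will behave badly under congestion.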
Just before Christmas, I was asked how one should patch a cluster that has not been patched in over a year. The answer is a bit tricky, but I will guide you through it and cover the pros and cons of the two approaches that were tried on two different clusters.
Yesterday I was deploying a new HA VMM setup for a client where we are going to bare-metal deploy all the physical hosts. As we were limited to VMM 2016 GA, because the migration tool from VMware to Hyper-V required VMM 2016 LTSB, I installed 2016 with Update Rollup 6.
I am not going to Ignite this year, and I am a bit glad, as I have a huge task at home fixing things up after a water leak in the roof. But it is sad to see people on Twitter heading for Ignite. I keep thinking to myself: stop posting those posts 🙂
I wanted to share the sessions I will try to watch live, and the ones I will watch on demand afterwards. I will separate them by solution and post them per day, as the live-streaming schedule is not yet available for those of us not going to Ignite.
I have been quiet on the blog front for a while. When we got home from summer holiday in early August, we found lots of water on the 2nd-floor bathroom floor, which had run down a crack in the floor into the living room underneath. I have spent almost every free hour since then figuring out where the water was coming from by removing walls, and finding a lot of rot in the construction where the water came in. The water damage in the living room also kicked off a total renovation of the living room, and as the insurance does not cover everything, we will need to do some of the work ourselves.
But today I found some time to write this blog post for you, and it's about upgrading your S2D cluster to 2019 🙂