Hello all, back with another issue I stumbled upon this week. I am working on redeploying a 2019 S2D cluster to 2022 and started on the first host to be redeployed with VMM. The deployment process worked fine, but when running my checks, including the Test-RDMA.ps1 script, something was off.
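For reference, this is roughly how I run the RDMA validation per adapter. The values below are placeholders from my lab; the interface index, remote IP, and diskspd path all depend on your environment:

```powershell
# Download Test-RDMA.ps1 from the Microsoft SDN repo first, then run it
# once per RDMA-enabled adapter. Find the interface index with Get-NetAdapter.
# All values here are placeholders.
.\Test-RDMA.ps1 -IfIndex 12 -IsRoCE $true `
    -RemoteIpAddress 192.168.100.12 -PathToDiskspd C:\Tools\Diskspd
```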
So a few days ago I was patching firmware and drivers on an S2D cluster for a client that is Bare Metal Deployed from VMM. For those not familiar with Bare Metal Deployment, it deploys a VHDX file to the C:\ drive and changes the boot order with bcdboot so that the host boots from the VHDX file.
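To see what that ends up looking like, here is a sketch of checking the boot entry on such a host; the drive letters and VHDX path are placeholders:

```powershell
# Show the default boot entry. On a Bare Metal Deployed host the
# device/osdevice lines point at the VHDX instead of a plain partition,
# e.g. vhd=[C:]\VHDs\host.vhdx (path is a placeholder).
bcdedit /enum {default}

# If the entry is lost, it can usually be recreated from inside the
# mounted VHDX with bcdboot (drive letters are placeholders):
# bcdboot W:\Windows /s S: /f UEFI
```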
Firmware upgrades are something I have done a million times and never had any boot issues with before. But it happened on the 2nd node I was upgrading. No matter what I tried with the recovery options from the repair boot menu, there was no fixing it.
VMM 2022 is out and allows you to run Windows Server 2022 and Azure Stack HCI 21H2, which you want to upgrade to to get the new features if you run either of those two OSes. I upgraded my lab last week and ran into some issues. The VMM 2022 upgrade itself went perfectly.
But upgrading my 2019 cluster agents gave an error, and the same happened when adding new physical hosts to VMM.
So I was setting up an Azure Stack HCI lab today and came across an error message when running the Set-MocConfig command.
My cluster has static IP addresses on everything, and for some reason that requires the Cloud Service IP CIDR to be set statically as well. And you can't pass it in through a variable like $cloudServiceCidr = "IP Address"; you need to specify it inline in the command.
The error message I got was:
Cloud Service IP CIDR is required when using static IP assignment.
It works if you add -cloudServiceCidr 10.0.0.49 at the end of the command, after stable.
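Put together, the working command looked roughly like this. The directories are placeholders from my lab, and I am quoting the parameters as I used them; check Get-Help Set-MocConfig for your module version:

```powershell
# Sketch of the MOC configuration that worked for me. Note the CIDR
# is passed inline at the end, not through a variable.
Set-MocConfig -workingDir "C:\ClusterStorage\Volume01\workingDir" `
    -imageDir "C:\ClusterStorage\Volume01\Images" `
    -cloudConfigLocation "C:\ClusterStorage\Volume01\Config" `
    -skipHostLimitChecks -cloudServiceCidr 10.0.0.49
```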
Hey everyone, another Failover Cluster issue I came across lately. I wanted to share this one since I could not find any good resources online for it. So here we go.
A client contacted me a few weeks back about a node that would not come online in the cluster again after a reboot. No matter what, it simply refused to rejoin the cluster. The only useful error messages I could find under Failover Clustering were these:
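When a node refuses to rejoin, the first things I pull are the cluster debug log and the Failover Clustering event channels. A minimal sketch; the node name and output folder are placeholders:

```powershell
# Generate the cluster debug log for the failing node (last 30 minutes)
Get-ClusterLog -Node "NODE2" -TimeSpan 30 -Destination "C:\Temp"

# List recent errors from the FailoverClustering operational channel
Get-WinEvent -LogName "Microsoft-Windows-FailoverClustering/Operational" -MaxEvents 50 |
    Where-Object LevelDisplayName -eq "Error"
```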
Hey everyone, I have a bit of a strange one this time. About two months ago I was patching my Azure Stack HCI test cluster with the July patches, using Cluster Aware Updating via WAC. It went through fine, the VMs were up, and all seemed good. Nothing indicated any issues.
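For context, after a CAU run I normally double-check the result from PowerShell as well, not just in WAC. A quick sketch; the cluster name is a placeholder:

```powershell
# Show the most recent Cluster Aware Updating run with per-node results
Get-CauReport -ClusterName "HCICluster01" -Last -Detailed
```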
Hello friends, it’s been a while since my last post. With covid and everything, my inspiration has not been the best, and with all my spare time going into renovating the house there has not been much time left. But there will be more posts coming soon.
I got the chance to borrow some Lenovo MX1021 Azure Stack HCI nodes to play with and test. And with the public preview of the 21H2 release coming soon, I wanted to start getting some real hardware experience. So stay tuned for some Azure Stack HCI blog posts coming soon.
So a few weeks ago I was upgrading a 2016 cluster to 2019 when suddenly the VMM service crashed out of nowhere. I noticed the client had tried to change a VM by increasing the size of its disk, and the operation failed by losing connection to the VMM server.
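For reference, the kind of change the client attempted can also be done from the VMM shell. A sketch with placeholder names; in practice you would pick the specific disk rather than the first one:

```powershell
# Expand a VM's virtual disk through VMM; VM name and size are placeholders.
$vm = Get-SCVirtualMachine -Name "VM01"
$disk = Get-SCVirtualDiskDrive -VM $vm | Select-Object -First 1
Expand-SCVirtualDiskDrive -VirtualDiskDrive $disk -VirtualHardDiskSizeGB 200
```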