Last year I had an issue where nodes would suddenly drop out of the Azure portal and no longer show up under the cluster or as an Azure resource. In some cases the cluster also reports that it has not connected recently.
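A quick way to see that state from the cluster itself, before going through the fixes, is to query the registration and Arc status from one of the nodes. A minimal sketch, assuming the built-in AzureStackHCI module on the node (the exact output fields vary a bit between OS versions):

# Run on one of the cluster nodes
Get-AzureStackHCI                  # shows RegistrationStatus and ConnectionStatus for the cluster
Get-AzureStackHCIArcIntegration    # shows the Arc status per node, i.e. whether each node still exists as an Azure resource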
So let's go through the scenarios and how to fix them.
Hello all, back with another issue I stumbled upon this week. I am working on redeploying a 2019 S2D cluster to 2022 and started with the first host to be redeployed via VMM. The deployment itself worked fine, but when I ran my checks, including the Test-RDMA.ps1 script, something was off.
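For reference, this is roughly how I invoke the check; the interface index, remote IP, and diskspd path are just example values, and the parameter names are from the copy of Test-RDMA.ps1 in the microsoft/SDN diagnostics repo:

# Find the ifIndex of the SMB/storage adapter you want to test
Get-NetAdapter | Sort-Object ifIndex | Format-Table Name, ifIndex, Status
# Test RDMA traffic from that adapter toward the partner node (example values)
.\Test-RDMA.ps1 -IfIndex 12 -IsRoCE $true -RemoteIpAddress 192.168.100.12 -PathToDiskspd C:\Tools\Diskspd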
So a few days ago I was patching firmware and drivers on an S2D cluster for a client, a cluster that is bare-metal deployed from VMM. For those not familiar with Bare Metal Deployment, it deploys a VHDX file to the C:\ drive and updates the boot configuration so that the host boots from that VHDX file (native VHDX boot).
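To make the boot-from-VHDX part concrete, on a bare-metal-deployed host you can see the boot entry pointing at the VHDX with bcdedit (the path below is just an example):

# Show the default boot entry - on a VMM bare metal deployed host, device/osdevice point at the VHDX
bcdedit /enum {default}
#   device      vhd=[C:]\VHDs\HyperV-Host.vhdx
#   osdevice    vhd=[C:]\VHDs\HyperV-Host.vhdx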
Firmware upgrades are something I have done a million times without ever having boot issues before. This time it happened on the second node I was upgrading: no matter what I tried with the recovery options from the repair boot menu, there was no fixing it.
VMM 2022 is out and adds support for Windows Server 2022 and Azure Stack HCI 21H2, so you will want to upgrade to it to get the new features if you run either of those two operating systems. I upgraded my lab last week and ran into some issues. The VMM 2022 upgrade itself went perfectly.
But upgrading the agents on my 2019 cluster gave an error, and the same happened when adding new physical hosts to VMM.
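For context, this is the step that failed; updating an agent is normally a one-liner from the VMM PowerShell module. A sketch, with example host and Run As account names:

# Update the VMM agent on a managed host (host name and Run As account are examples)
$cred = Get-SCRunAsAccount -Name "VMM Host Admin"
Get-SCVMMManagedComputer -ComputerName "s2d-node01.contoso.local" | Update-SCVMMManagedComputer -Credential $cred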
So I was setting up an Azure Stack HCI lab today and came across an error message when running the Set-MocConfig command.
My cluster has static IP addresses on everything, and that apparently means the Cloud Service IP CIDR also has to be set statically. And you can't pass it via a variable like $cloudservicecidr = "IP Address"; you need to specify the value directly in the command line.
The error message I got was:
Cloud Service IP CIDR is required when using static IP assignment.
It works if you add -cloudServiceCidr 10.0.0.49 at the end of the command, after "stable".
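So the working command ends up looking roughly like this. Everything except -cloudServiceCidr simply mirrors whatever your existing Set-MocConfig line already contains (the paths and the $vnet object here are placeholders from my lab); the fix is that the CIDR value is written directly on the line:

# $vnet is the MOC network settings object created earlier in the deployment
$csvPath = "C:\ClusterStorage\Volume01"
Set-MocConfig -workingDir "$csvPath\workingDir" -vnet $vnet -imageDir "$csvPath\imageStore" -skipHostLimitChecks -cloudConfigLocation "$csvPath\cloudStore" -catalog aks-hci-stable-catalogs-ext -ring stable -cloudServiceCidr 10.0.0.49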
So last week I had to add a new address space to a vNet, as I needed a separate subnet for Private Endpoints. I added the address space, configured the subnet, and set up the private endpoints.
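The change itself was nothing special; in Az PowerShell it is roughly the equivalent of this (vNet name, resource group, and prefixes are example values):

# Add a second address space and a Private Endpoint subnet to an existing vNet
$vnet = Get-AzVirtualNetwork -Name "vnet-prod" -ResourceGroupName "rg-network"
$vnet.AddressSpace.AddressPrefixes.Add("10.20.0.0/24")
Add-AzVirtualNetworkSubnetConfig -Name "snet-private-endpoints" -VirtualNetwork $vnet -AddressPrefix "10.20.0.0/27"
$vnet | Set-AzVirtualNetwork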
When I started testing, I could not reach the private endpoints. I could see the traffic flowing in the NSG logs and from other vNets through the Azure Firewall, but I could not figure it out. I asked a few MVP friends, and the answer was that this is a limitation in Azure.
This week we have been deploying a new environment in Azure for a client, with a secured vWAN hub running Azure Firewall. The vWAN hub is connected to a Cisco SD-WAN appliance that connects all of the client's physical locations. We configured two new Domain Controllers and opened up the traffic between the Azure DCs and the on-premises DCs. We could reach the Azure DCs, but not the other way around.
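To illustrate the symptom, a simple port test in both directions shows it clearly (host names and the port are examples):

# From an on-premises DC toward the Azure DC - this worked
Test-NetConnection -ComputerName azr-dc01.contoso.local -Port 389
# From the Azure DC toward an on-premises DC - this timed out
Test-NetConnection -ComputerName onprem-dc01.contoso.local -Port 389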
Hey everyone, here is another Failover Cluster issue I came across recently. I wanted to share this one as I could not find any good resources online about it. So here we go.
A client contacted me a few weeks back about a node that would not come online in the cluster again after a reboot. It would not rejoin the cluster at all, and looking at the cluster gave no obvious reason why. The only useful error messages I could find under Failover Clustering were these.
Hey everyone, I have a bit of a strange one this time. About two months ago I was patching my Azure Stack HCI test cluster with the July patches, using Cluster-Aware Updating via WAC. It went through fine, the VMs were up, and all seemed good. Nothing indicated any issues.
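As a side note, if you want to double-check a CAU run outside WAC afterwards, the built-in report cmdlet shows the per-node results (the cluster name here is an example):

# Show the most recent Cluster-Aware Updating run and its per-node results
Get-CauReport -ClusterName "hci-cluster01" -Last -Detailed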