So a client patched their two-node S2D cluster this Sunday. When the second node came back up, it would not join the cluster, and the client started troubleshooting without realizing that the quorum File Share Witness was not working: someone had managed to delete the folder that hosted the File Share Witness. At first the node that was patched and rebooted first kept working, but at some point both nodes were rebooted over and over again.
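In a situation like this, checking the witness before anything else can save hours. A minimal sketch, run from one of the nodes (the resource name "File Share Witness" is the default; the share path is hypothetical):

```powershell
# Show the configured quorum type and witness
Get-ClusterQuorum

# The witness appears as a cluster resource; it should be Online
Get-ClusterResource -Name "File Share Witness" | Format-List Name, State

# Verify the share path itself is still reachable from this node
Test-Path "\\witness-server\S2D-Witness"   # hypothetical share path
```

If the resource is Failed or the path test fails, fix the share (or reconfigure the witness with Set-ClusterQuorum) before chasing node-join errors.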
Let’s say your organization wants to set up a solution against a third-party web application hosted in the cloud, like an accounting system. And your organization has a rule that this must be Single Sign-On using your domain login credentials. You already have Azure AD Connect set up with password sync and have all the users synced to Azure AD. And then you realize that the provider does not have a finished application with a guide in the Enterprise Application gallery. So what to do then?
A few days ago a friend of mine asked me if I had any idea how to get his SSDs and HDDs attached to the NVMe cache devices, as he had added a lot of disks to his S2D cluster over the last 7 months. The normal behavior is that any new disk is automatically bound to the cache.
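A quick way to see how the bindings currently look, plus one commonly suggested way to force a rebind — a hedged sketch, assuming a Server 2016+ S2D cluster with the FailoverClusters module:

```powershell
# Cache devices show up with Usage = Journal; capacity disks as Auto-Select
Get-PhysicalDisk | Sort-Object Usage |
    Format-Table FriendlyName, MediaType, Usage, Size -AutoSize

# Toggling the cache state forces S2D to rediscover and rebind
# capacity disks to the cache devices (disruptive - test outside production first)
Set-ClusterStorageSpacesDirect -CacheState Disabled
Set-ClusterStorageSpacesDirect -CacheState Enabled
```

Toggling the cache state drains and rebuilds the cache, so plan for reduced performance while it runs.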
A while ago I gave you a first look into Dataon MUST, Dataon’s monitoring system that comes with their S2D servers.
Today I want to give you an insight into a new offering that is coming: Barton Glass. Barton Glass is built by Barton Systems, a member of the Cronos group and 2016 Microsoft Partner of the Year in Belgium.
So I built a two-node S2D cluster a while ago at home on some old HP G6 nodes I got cheap. But I have decided to get rid of that and set up a new three-node cluster with bits I can find on eBay, reusing disks I already have. This will be a multipart blog post, written as the ordered parts come in and during the build process. I will provide a step-by-step guide to building, installing, configuring, monitoring, and troubleshooting Storage Spaces Direct, including the switch config.
After patching both our S2D clusters today, I have hit the same error after resuming nodes and failing back roles. This happens after installing KB4038782.
Update: after installing the October CU10 patch, rebooting an S2D node no longer causes this issue (after the initial post-update boot).

The physical disks stay in maintenance mode.
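A sketch of how to spot and clear the stuck disks by hand, assuming the Server 2016 Storage module (disks in this state report an OperationalStatus of "In Maintenance Mode"):

```powershell
# List disks that are stuck in maintenance mode
Get-PhysicalDisk | Where-Object OperationalStatus -like "*Maintenance*" |
    Format-Table FriendlyName, OperationalStatus, HealthStatus

# Manually take them out of maintenance mode
Get-PhysicalDisk | Where-Object OperationalStatus -like "*Maintenance*" |
    Disable-StorageMaintenanceMode
```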
I wanted to enable Application Insights on my WordPress web app in Azure. Normally you can do this by going into the web app and installing it the first time you open Application Insights. That way of doing it does not work here.
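One workaround is to create the Application Insights resource yourself and wire the instrumentation key into the web app as an app setting. A hedged sketch using the Az PowerShell modules; the resource group, app, and region names are hypothetical, and note that Set-AzWebApp -AppSettings replaces the full settings collection, so include any existing settings too:

```powershell
# Create the Application Insights resource by hand (Az.ApplicationInsights module)
$ai = New-AzApplicationInsights -ResourceGroupName "my-rg" `
    -Name "my-wordpress-ai" -Location "westeurope"

# Point the web app at it via the well-known app setting
# (warning: this overwrites all existing app settings - merge yours in)
Set-AzWebApp -ResourceGroupName "my-rg" -Name "my-wordpress-app" -AppSettings @{
    "APPINSIGHTS_INSTRUMENTATIONKEY" = $ai.InstrumentationKey
}
```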
So I had to dip my toes into PowerShell this week, something I need to work a lot more on in the future. So expect to see more scripts popping up soon 🙂
What I had to do was create a script to roll out an application to many folders under a directory. This particular program does not “scale” quite well with the number of users, so we needed to manually scale out many instances of the same service on the same server. This was set up on multiple servers as well, for redundancy. We created a folder on a share with the needed files and the config file.
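The core of such a rollout can be sketched in a few lines of PowerShell. The share and root paths below are hypothetical stand-ins, assuming one subfolder per service instance:

```powershell
# Hypothetical paths: a source share holding the files + config,
# and a root directory with one subfolder per service instance
$source = "\\fileserver\deploy\MyService"
$root   = "D:\Services"

# Copy the new files into every instance folder under the root
Get-ChildItem -Path $root -Directory | ForEach-Object {
    Copy-Item -Path (Join-Path $source "*") -Destination $_.FullName -Recurse -Force
}
```

Run the same script on each redundant server, or wrap it in Invoke-Command to push it to all of them at once.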
So as I am starting a new job in less than two months, I thought it was time to move this site from a virtual machine running on my current employer’s S2D cluster to Azure, and I decided to share my way there. I started googling how to do this. There were some guides here and there: some older ones, and one from docs.microsoft.com that did not move everything. So I started with one and got a timeout error. I tried another; it did not work either.
So we are having some issues with a ReFS volume going offline on a single-server storage pool when too much data is written to the volume in the morning. At the moment we have not figured out why. The disks show OK, Get-PhysicalDisk and Get-VirtualDisk report healthy, and everything says it’s healthy. The logs only show ReFS taking the volume offline due to a write error.
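For anyone chasing something similar, this is roughly how we pull the health view and the ReFS events together — a sketch assuming Server 2016, with the ReFS event provider name an assumption on my part:

```powershell
# Storage health from the pool's point of view (skip the primordial pool)
Get-StoragePool -IsPrimordial $false | Get-PhysicalDisk |
    Format-Table FriendlyName, HealthStatus, OperationalStatus

# Recent ReFS events from the System log (provider name assumed)
Get-WinEvent -FilterHashtable @{
    LogName = 'System'; ProviderName = 'Microsoft-Windows-ReFS'
} -MaxEvents 20 | Format-Table TimeCreated, Id, Message -Wrap
```

Correlating the timestamps of the ReFS write-error events with the morning write bursts is what pointed us at load, rather than hardware, as the trigger.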