A while ago I gave you a first look at Dataon MUST, Dataon's monitoring system that ships with their S2D servers.
Today I want to give you some insight into a new offering that is coming: Barton Glass. Barton Glass is built by Barton Systems, a member of the Cronos group and the 2016 Microsoft Partner of the Year in Belgium.
So I built a 2-node S2D cluster at home a while ago on some old HP G6 nodes I got cheap. I have decided to get rid of that and set up a new 3-node cluster with bits I can find on eBay, reusing disks I already have. This will be a multi-part blog post, written as the parts are ordered, arrive, and go through the build process. I will provide a step-by-step guide to building, installing, configuring, monitoring, and troubleshooting Storage Spaces Direct, including the switch config.
So yesterday I had to replace a failed HDD in one of our S2D clusters. After swapping in the new drive and removing the failed one from the cluster, I ran Get-PhysicalDisk and noticed I had no disks with CanPool = True. This is normal: S2D detects the new disk and adds it to the storage pool automatically, then rebalances the pool.
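You can check this state from PowerShell. A minimal sketch using the standard Storage cmdlets (assuming the default `S2D*` pool name that Enable-ClusterS2D creates):

```powershell
# Empty output here is expected once S2D has claimed the new disk:
Get-PhysicalDisk | Where-Object CanPool -Eq $true

# Confirm the replacement disk is now a member of the S2D pool:
Get-StoragePool S2D* | Get-PhysicalDisk | Sort-Object FriendlyName

# Watch the pool rebalance onto the new disk:
Get-StorageJob
```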
Updated 27 Feb
We have been testing Storage Spaces Direct for a while on our eBay cluster, running development and some production systems on it, such as the second Exchange node, a mediation server, and our VMM server.
We have been looking to replace our current Hyper-V solution, which consists of HP BL465c G8 and BL490 G7 blade servers attached to an HP P2000 G3 MSA over iSCSI. It has become slower and slower as we have set up more virtual machines. The MSA was a 12-disk shelf with 11 active disks and one spare. One 15k disk gives about 170 IOPS, for a whopping 1,870 IOPS at best. Under normal load it used about 1,200 to 1,500 IOPS, so there was not much headroom. We had one per cluster.
Most of you know what S2D (Storage Spaces Direct) is; if you don't, go look at Cosmos Darwin's post over at TechNet for some good insight into S2D.
What I am going to focus on in this blog is the new Dataon hyper-converged server. Back at Ignite 2016, Dataon released their first offering, the S2D-3110 all-flash solution, pumping out 2.6 million IOPS in a 1U form factor.
In this blog post I will show you how to set up a Microsoft VPN connection with the new NPS Extension for Azure AD MFA.
This is a new service that the Microsoft NPS team just released; it adds an extension to the Windows Network Policy Server.
When using the NPS extension for Azure MFA, the authentication flow includes the following components:
This is copied from https://docs.microsoft.com/nb-no/azure/multi-factor-authentication/multi-factor-authentication-nps-extension
- NAS/VPN Server receives requests from VPN clients and converts them into RADIUS requests to NPS servers.
- NPS Server connects to Active Directory to perform the primary authentication for the RADIUS requests and, upon success, passes the request to any installed extensions.
- NPS Extension triggers a request to Azure MFA for the secondary authentication. Once the extension receives the response, and if the MFA challenge succeeds, it completes the authentication request by providing the NPS server with security tokens that include an MFA claim, issued by Azure STS.
- Azure MFA communicates with Azure Active Directory to retrieve the user’s details and performs the secondary authentication using a verification method configured to the user.
The following diagram illustrates this high-level authentication request flow:
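Setting this flow up on the NPS server itself mostly comes down to installing the extension and running the configuration script it ships with. A sketch, assuming the default install path (the script prompts for your Azure AD credentials and tenant ID):

```powershell
# Run on the NPS server after installing the NPS Extension for Azure MFA.
# The script creates a self-signed client certificate and registers it
# with Azure AD for your tenant.
cd "C:\Program Files\Microsoft\AzureMfa\Config"
.\AzureMfaNpsExtnConfigSetup.ps1
```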
About a week ago, another person in our IT department installed KB3211320 (Update for Windows Server 2016 for x64-based Systems) on our DPM servers.
A friend of mine asked me about this a while ago, as he had set up his S2D cluster with only SSDs and HDDs, so the SSDs became the journal (caching) drives. Now he wanted to replace those SSDs with NVMe disks he had. Yesterday he did the swap and it worked great.
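One way to sanity-check a swap like this is to look at the `Usage` column, where S2D cache devices show up as `Journal`. A sketch with the standard Storage and Health Service cmdlets:

```powershell
# After the swap, the journal (cache) drives should be the NVMe devices:
Get-PhysicalDisk | Select-Object FriendlyName, MediaType, BusType, Usage

# And the cluster storage subsystem should report healthy:
Get-StorageSubSystem Cluster* | Get-StorageHealthReport
```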
After System Center 2016 came out, I went straight to setting up DPM 2016 with Modern Backup Storage. This had been running fine for a while, though the DPM console was a bit slow to respond after a synchronization.
After my initial failure replacing an NVMe caching card, where I hit a bug in the 2016 build I was on, I replaced another one today. We had started our cluster out with Intel 750 drives, and these NVMe PCIe cards are only rated for 70 GB of writes per day, so I decided to replace them with the Intel DC P3600. The first failure can be seen here.
Now this is a cool new feature Microsoft has come up with. It allows you to manage your on-premises servers from the Azure portal. All you need to do is install a gateway server on your local network, configure a few steps in Azure, and install a small program, and you are almost good to go.