After the high level details in the previous post, this post is on the infra setup for SFB. First comes the storage spaces setup that is going to host the SQL Server VMs (which requires Hyper-V) as well as a few FE servers, apart from providing tiered storage for the entire infra. A checklist helps avoid mistakes, and my checklist for Hyper-V and storage spaces is as below.

  • Redundant power supply is connected for all hardware like server, storage, switch etc.
  • OOB port like iDRAC is connected to a 100Mbps or higher switch port, with port speed and full duplex mode set at the switch end
  • Management port is connected to a 1G switch port with full duplex and port speed set on the switch side
  • In case of a two node cluster setup, a 1G cross cable between the servers is connected for Heartbeat communication; in a cluster of more than two nodes, an isolated VLAN is used for the same
  • In case of a two node cluster setup, a 10G cross cable between the servers is connected for LiveMigration traffic; otherwise an isolated VLAN
  • System BIOS is configured for Hyper-V related settings like processor virtualization, execute disable bit, I/O DMA and global SR-IOV enabled
  • Logical Processor is disabled at the BIOS level
  • NUMA optimized BIOS settings like Cluster on Die are configured as per vendor recommendation
  • BIOS boot mode is set to UEFI
  • Two 600 GB local hard disks in RAID 1 are configured for OS installation, with the OS drive size not exceeding 300GB
  • Remaining space from the above RAID 1 virtual disk is configured as the “D” drive with “dump” as the volume label
  • Remaining local hard disks are configured in RAID 10 for the VM VHD drive
  • System BIOS and firmware for PS, NIC, HBA, RAID controller, disk drives, SSDs etc. are up to date as per the vendor recommendation for a solution like storage spaces
  • Default password of the OOB port like iDRAC has been changed

OS Deployment

  • Windows Server 2012 R2 Standard edition server core is deployed as OS.
  • Hostname is changed as planned and system is rebooted.
  • Patch update completed through WSUS server
  • System is domain joined under dedicated Hyper-V OU like “OU=Hyper-V Servers,OU=Member Servers,DC=pkpnotes,DC=com” and system rebooted
  • All drivers like chipset, Network, RAID controller and HBA are updated
  • Any firmware that needs to be deployed from the OS is installed and the system rebooted
  • SCOM and SCVMM agents are deployed for the system
  • Dell OMSA is deployed into the machine and firewall port configured to allow remote access
  • VHD drive is formatted and drive letter “V” is assigned to it
  • A folder named “Virtual Machines” is created inside the V: drive for placing the VMs
  • All NICs are renamed in the format of <purpose like mgmt, VM, SOFS>-<Slot and Port ID in system>-<patch panel ID>-<Switch ID>-<Module ID>-<Switch Port Number>-<VLANID>
  • NICs to be used for VM traffic are connected to redundant 10G switches and SR-IOV is enabled on them. Balance the number of SR-IOV virtual functions and VMQs as per the planned deployment.
  • NUMA spanning is disabled for Hyper-V (a sketch for these host settings follows this list)
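
These host-level settings can be scripted as well; a minimal sketch, assuming the Hyper-V role is already installed and the adapter name below is an example following the naming convention above:

Set-VMHost -NumaSpanningEnabled $false                  # disable NUMA spanning for Hyper-V
Enable-NetAdapterSriov -Name "VM-S41-SW1M7P13-V50"      # enable SR-IOV on a VM-traffic NIC
Get-NetAdapterSriov | Format-Table Name, NumVFs         # review the SR-IOV virtual functions available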

Cluster

  • Cluster computer object is pre-created in case the user does not have computer account creation permission in the Hyper-V OU
  • In case the VMs need to be clustered, a virtual switch is created from the Heartbeat NIC
  • Heartbeat IP address and subnet mask are assigned. Systems are able to ping each other's Heartbeat IP.
  • One or two 10G connections for the LiveMigration network are configured with IP and subnet mask, and systems are able to ping each other's LiveMigration IP
  • An external file share witness is configured to be used for cluster quorum; this is a must for a two node cluster and recommended for any setup with an even number of cluster nodes greater than two (see the example after this list)
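
Configuring the witness is a single cmdlet once the cluster is formed; a minimal sketch, where \\LSPFS01\HVWitness is a hypothetical file share reachable by all nodes:

Set-ClusterQuorum -NodeAndFileShareMajority "\\LSPFS01\HVWitness"   # file share witness for quorum
Get-ClusterQuorum                                                   # verify the quorum configuration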

Storage Spaces and SOFS

  • If the DAS box requires an enclosure management software to check the storage hardware and logs, it should be deployed on both the nodes.
  • Two 10G dedicated and isolated networks among the servers are configured for SMB multichannel communication
  • Jumbo frame is enabled for these VLANs, both at the switch side and on the network adapters of the servers.
  • NICs used for this SMB multichannel communication should be optimized for low latency communication, e.g. maximum RSS queues, with SR-IOV and VMQ disabled.
  • SMB multichannel constraint is created to force the SOFS SMB traffic to take this dedicated connectivity (see the sketch after this list)
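
The constraint is created on the SMB clients (the Hyper-V hosts consuming the SOFS shares); a minimal sketch, where the server name and interface aliases are examples following the conventions used later in this post:

New-SmbMultichannelConstraint -ServerName LSPSOFS -InterfaceAlias "SOFS1-S42-SW1M7P14-V52","SOFS2-S51-SW2M7P14-V53"
Get-SmbMultichannelConstraint    # confirm the constraint on each Hyper-V host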

During the initial phase of the infra build, a significant amount of time goes into completing patch updates unless you are using an up to date build ISO. If not, time can be saved by using a WIM file created beforehand from a VM that is fully patched and sysprepped. Having a folder with the required drivers, BIOS and firmware specific to the server hardware, as well as a script to automate the server configuration until you can RDP to the server, would be of additional benefit. All that is required is to mount the VHD file and use the “DISM.exe /Capture-Image” option to convert the mounted VHD folder to a WIM. The size of the WIM file can be reduced significantly by doing a clean-up of the OS with “dism /online /Cleanup-Image /StartComponentCleanup /ResetBase” as well as using the “/Compress:max” option with “DISM.exe /Capture-Image” (a capture sketch follows the steps below). Depending on how you are going to access this WIM file, from a locally connected USB drive or over a network share, boot to an appropriate WinPE environment command prompt. For setting up the server using the custom WIM file:

  • Using diskpart complete the hard disk configuration
    • Select disk 0, to select the disk for OS installation. In case of storage spaces, the disks coming from the JBOD can alter the disk number. Using “list disk”, check for the disk number whose size matches the virtual disk created out of the local disk RAID on the server
    • Clean
    • Convert GPT
    • Create part efi size=100
    • Format quick fs=fat32 label=System
    • Assign letter=S
    • Create part pri size=307200
    • Format quick fs=ntfs label=Windows
    • Assign letter=C
    • Exit
  • Apply the WIM file content to the C drive using the “DISM.exe /Apply-Image /ImageFile:<full path to the custom wim file> /Index:1 /ApplyDir:c:\” command
  • Create the required EFI files for the server hardware to complete UEFI boot. “bcdboot c:\windows /s s: /f UEFI”
  • Exit from the WinPE command prompt and restart the machine to boot the system into the customised OS
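
For reference, the capture side described above boils down to two commands; a sketch with example paths, where the clean-up runs inside the reference VM before sysprep and the capture runs against the sysprepped VHD mounted (here as F:) on another machine:

Dism /Online /Cleanup-Image /StartComponentCleanup /ResetBase
DISM.exe /Capture-Image /ImageFile:D:\Deploy\WS2012R2-Custom.wim /CaptureDir:F:\ /Name:WS2012R2-Custom /Compress:max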

Most of the OOB management consoles like iDRAC provide the option to obtain the MAC addresses of the network adapters in the system. It helps to create a PowerShell script that renames the network interfaces based on their MAC address, assigns the IP addresses and applies the required configuration like enabling or disabling SR-IOV. This script can be injected into the VHD file before creating the WIM file, or the WIM file can be mounted with “DISM.exe /Mount-Wim”, the script added, and the image unmounted with the save option so the WIM file carries the required PowerShell script. Having this script helps configure the system from the OOB console and then take RDP access to complete the remaining configuration. One example of a simple script:

Rename-Computer -NewName LSPHV01

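# Disable DHCP and dynamic DNS registration on all adapters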
Get-NetAdapter|Set-NetIPInterface -Dhcp Disabled
Get-NetAdapter|Set-DnsClient -RegisterThisConnectionsAddress $false
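# Rename adapters based on their MAC address (replace the placeholders with the MACs taken from the OOB console)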
Get-NetAdapter|? MacAddress -eq <MAC_of_Management_NIC>|Rename-NetAdapter -NewName MGMT
Get-NetAdapter|? MacAddress -eq <MAC_of_VM_NIC_to-SW1>|Rename-NetAdapter -NewName VM-S41-SW1M7P13-V50
Get-NetAdapter|? MacAddress -eq <MAC_of_SOFS_NIC_to-SW1>|Rename-NetAdapter -NewName SOFS1-S42-SW1M7P14-V52
Get-NetAdapter|? MacAddress -eq <MAC_of_VM_NIC_to-SW2>|Rename-NetAdapter -NewName VM-S52-SW2M7P13-V50
Get-NetAdapter|? MacAddress -eq <MAC_of_SOFS_NIC_to-SW2>|Rename-NetAdapter -NewName SOFS2-S51-SW2M7P14-V53
Get-NetAdapter|? MacAddress -eq <MAC_of_1G_crosscable>|Rename-NetAdapter -NewName HB-1G_CC
Get-NetAdapter|? MacAddress -eq <MAC_of_10G_crosscable>|Rename-NetAdapter -NewName HB-10G_CC
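# Management NIC: static IP, default gateway, DNS servers, suffix and registration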
Get-NetAdapter MGMT | New-NetIPAddress -IPAddress 10.10.10.5 -PrefixLength 24 -DefaultGateway 10.10.10.1
Get-NetAdapter MGMT | Set-DnsClientServerAddress -ServerAddresses 10.10.10.100,10.10.10.200
Get-NetAdapter MGMT | Set-DnsClient -ConnectionSpecificSuffix pkpnotes.com -RegisterThisConnectionsAddress $true
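# SOFS NICs: tune for low-latency SMB (SR-IOV and VMQ off, RSS on, SMB bindings enabled, jumbo frames)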
Get-NetAdapter SOFS* | Disable-NetAdapterSriov
Get-NetAdapter SOFS* | Disable-NetAdapterVmq
Get-NetAdapter SOFS* | Enable-NetAdapterRss
Get-NetAdapter SOFS* | Enable-NetAdapterBinding -ComponentID ms_msclient
Get-NetAdapter SOFS* | Enable-NetAdapterBinding -ComponentID ms_server
Get-NetAdapter SOFS* | Set-NetAdapterAdvancedProperty -DisplayName "Jumbo Packet" -DisplayValue "9014 Bytes"
Get-NetAdapter SOFS1-S42-SW1M7P14-V52 | New-NetIPAddress -IPAddress 172.16.3.3 -PrefixLength 24
Get-NetAdapter SOFS2-S51-SW2M7P14-V53 | New-NetIPAddress -IPAddress 172.16.4.3 -PrefixLength 24

Add-Computer can help domain join the machine to a predefined OU: Add-Computer -DomainName pkpnotes.com -OUPath "OU=Hyper-V Servers,OU=Member Servers,DC=pkpnotes,DC=com" -DomainCredential pkpnotes\prasanta

After the cluster completion, storage pools are to be created for the CSV and SOFS share requirements, along with the SOFS cluster role. I have two DAS enclosures and I am going to combine half of the SSDs and HDDs from each enclosure to form a pool, two pools in total. For more information on storage spaces, please refer to my earlier posts in the “Is storage expensive” series.

Update-StorageProviderCache -DiscoveryLevel full

$st_enc = Get-StorageEnclosure
$st_enc0_disk = $st_enc[0] |Get-PhysicalDisk
$st_enc1_disk = $st_enc[1] |Get-PhysicalDisk
$st_SSD_mediacnt = ($st_enc0_disk |? MediaType -EQ SSD |measure).Count
$st_HDD_mediacnt = ($st_enc0_disk |? MediaType -EQ HDD |measure).Count
$p0_ssddisk_enc0 = $st_enc0_disk | ? MediaType -EQ SSD|select -First ($st_SSD_mediacnt/2)
$p0_ssddisk_enc1 = $st_enc1_disk | ? MediaType -EQ SSD|select -First ($st_SSD_mediacnt/2)
$p0_hdddisk_enc0 = $st_enc0_disk | ? MediaType -EQ HDD|select -First ($st_HDD_mediacnt/2)
$p0_hdddisk_enc1 = $st_enc1_disk | ? MediaType -EQ HDD|select -First ($st_HDD_mediacnt/2)
$p0_physicaldisk = $p0_ssddisk_enc0 + $p0_ssddisk_enc1 + $p0_hdddisk_enc0 + $p0_hdddisk_enc1
$p1_ssddisk_enc0 = $st_enc0_disk | ? MediaType -EQ SSD|select -Last ($st_SSD_mediacnt/2)
$p1_ssddisk_enc1 = $st_enc1_disk | ? MediaType -EQ SSD|select -Last ($st_SSD_mediacnt/2)
$p1_hdddisk_enc0 = $st_enc0_disk | ? MediaType -EQ HDD|select -Last ($st_HDD_mediacnt/2)
$p1_hdddisk_enc1 = $st_enc1_disk | ? MediaType -EQ HDD|select -Last ($st_HDD_mediacnt/2)
$p1_physicaldisk = $p1_ssddisk_enc0 + $p1_ssddisk_enc1 + $p1_hdddisk_enc0 + $p1_hdddisk_enc1

New-StoragePool -EnclosureAwareDefault $true -StorageSubSystemFriendlyName "Clustered Storage Spaces*" -FriendlyName Pool0 -OtherUsageDescription "SFBS Storage Space with Tier Pool 0" -ProvisioningTypeDefault Fixed -ResiliencySettingNameDefault Mirror -WriteCacheSizeDefault 10GB -PhysicalDisks $p0_physicaldisk
New-StoragePool -EnclosureAwareDefault $true -StorageSubSystemFriendlyName "Clustered Storage Spaces*" -FriendlyName Pool1 -OtherUsageDescription "SFBS Storage Space with Tier Pool 1" -ProvisioningTypeDefault Fixed -ResiliencySettingNameDefault Mirror -WriteCacheSizeDefault 10GB -PhysicalDisks $p1_physicaldisk

$P0_SSDTier=New-StorageTier -StoragePoolFriendlyName Pool0 -FriendlyName Pool0-SSDtier -MediaType SSD
$P0_HDDTier=New-StorageTier -StoragePoolFriendlyName Pool0 -FriendlyName Pool0-HDDtier -MediaType HDD
$P1_SSDTier=New-StorageTier -StoragePoolFriendlyName Pool1 -FriendlyName Pool1-SSDtier -MediaType SSD
$P1_HDDTier=New-StorageTier -StoragePoolFriendlyName Pool1 -FriendlyName Pool1-HDDtier -MediaType HDD

New-Volume -StoragePoolFriendlyName Pool0 -FriendlyName P0-SFBS-FE -FileSystem CSVFS_NTFS -ProvisioningType Fixed -StorageTiers $P0_SSDTier,$P0_HDDTier -StorageTierSizes 200GB,600GB -NumberOfColumns ($st_SSD_mediacnt/2)
get-item C:\ClusterStorage\Volume1|Rename-Item -NewName P0-SFBS-FE

New-Volume -StoragePoolFriendlyName Pool0 -FriendlyName P0-SFBS-BE -FileSystem CSVFS_NTFS -ProvisioningType Fixed -StorageTiers $P0_SSDTier,$P0_HDDTier -StorageTierSizes 200GB,1.5TB -NumberOfColumns ($st_SSD_mediacnt/2)
get-item C:\ClusterStorage\Volume1|Rename-Item -NewName P0-SFBS-BE

New-Volume -StoragePoolFriendlyName Pool0 -FriendlyName P0-SFBS-TEMPDB -FileSystem CSVFS_NTFS -ProvisioningType Fixed -StorageTiers $P0_SSDTier -StorageTierSizes 70GB -NumberOfColumns ($st_SSD_mediacnt/2)
get-item C:\ClusterStorage\Volume1|Rename-Item -NewName P0-SFBS-TEMPDB

New-Volume -StoragePoolFriendlyName Pool1 -FriendlyName P1-SFBS-FE -FileSystem CSVFS_NTFS -ProvisioningType Fixed -StorageTiers $P1_SSDTier,$P1_HDDTier -StorageTierSizes 200GB,600GB -NumberOfColumns ($st_SSD_mediacnt/2)
get-item C:\ClusterStorage\Volume1|Rename-Item -NewName P1-SFBS-FE

New-Volume -StoragePoolFriendlyName Pool1 -FriendlyName P1-SFBS-BE -FileSystem CSVFS_NTFS -ProvisioningType Fixed -StorageTiers $P1_SSDTier,$P1_HDDTier -StorageTierSizes 200GB,1.5TB -NumberOfColumns ($st_SSD_mediacnt/2)
get-item C:\ClusterStorage\Volume1|Rename-Item -NewName P1-SFBS-BE

New-Volume -StoragePoolFriendlyName Pool1 -FriendlyName P1-SFBS-TEMPDB -FileSystem CSVFS_NTFS -ProvisioningType Fixed -StorageTiers $P1_SSDTier -StorageTierSizes 70GB -NumberOfColumns ($st_SSD_mediacnt/2)
get-item C:\ClusterStorage\Volume1|Rename-Item -NewName P1-SFBS-TEMPDB

The CSVs for the back end and SQL TempDB virtual disks would be consumed locally by the SQL servers. I need to share the FE virtual disks so that the other frontend servers running on the standalone Hyper-V machines can consume them for their CsUser drive.

mkdir C:\ClusterStorage\P0-SFBS-FE\SHARES\FE_VHD1
New-SmbShare -Name FE_VHD1 -Path C:\ClusterStorage\P0-SFBS-FE\SHARES\FE_VHD1 -ContinuouslyAvailable $true -FullAccess pkpnotes\prasanta,pkpnotes\lhv01$,pkpnotes\lhv02$,pkpnotes\lhv03$,pkpnotes\lhv04$ -ScopeName LSPSOFS
Set-SmbPathAcl -ShareName FE_VHD1

mkdir C:\ClusterStorage\P1-SFBS-FE\SHARES\FE_VHD2
New-SmbShare -Name FE_VHD2 -Path C:\ClusterStorage\P1-SFBS-FE\SHARES\FE_VHD2 -ContinuouslyAvailable $true -FullAccess pkpnotes\prasanta,pkpnotes\lhv01$,pkpnotes\lhv02$,pkpnotes\lhv03$,pkpnotes\lhv04$ -ScopeName LSPSOFS
Set-SmbPathAcl -ShareName FE_VHD2

The scope name is the name given to the SOFS role in the cluster.
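
For completeness, the SOFS role referenced here is created on the cluster with a single cmdlet; a minimal sketch, assuming LSPSOFS as the role name:

Add-ClusterScaleOutFileServerRole -Name LSPSOFS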

Since Skype for Business Server requires a server GUI OS with desktop experience, the VHDX file of a fully patched and sysprepped VM was used to create the required VMs for the infra with the help of a small script like the one in my earlier post. Once the VMs are deployed, for the ones getting their VHDX file over SOFS, the SMB multichannel connection is checked to confirm that (a verification sketch follows the list):

  • They are using the interface designated for SOFS SMB communication, as pinned by the SmbMultichannelConstraint
  • SMB multichannel is in use to provide multipath and network connection resilience
  • RSS is in use for these connections
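
A minimal sketch of these checks, run on a Hyper-V host consuming the SOFS share:

Get-SmbMultichannelConstraint                   # constraint pins SOFS traffic to the SOFS* interfaces
Get-SmbMultichannelConnection | Format-Table ServerName, ClientIpAddress, ServerIpAddress, ClientRSSCapable
Get-SmbConnection                               # confirm the shares in use and the SMB dialect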
