SFBS Infra

Hyper-V, the platform of choice for Lync, has been working well and will continue to be used for SFBS. However, the new design has to improve on the current IOPS to meet the increase in demand and reduce network latency further, while allowing the existing infra to be reused. Storage Spaces with tiering has been doing great in the enterprise and hence was the natural choice for the SFBS infra. Load balancing and network routing were remodelled to improve network latency further.

Current Lync 2013 infra

With the augmentation of a Storage Spaces cluster, the revised infra for SFBS built on the existing Lync 2013 infra looks as below.

All SQL backends were consolidated onto the two Storage Spaces cluster nodes; I believe this even helps with licensing as well. The high availability option for SQL was changed from mirroring to AlwaysOn. The overall VM allocation for the SFBS setup at one site is as below. However, this transition, both the infra remodel and the move from Lync to SFBS, has to be done keeping in mind that Lync is already in use and there should not be any disruption of service.

VM Role Host vCPU RAM(GB) Storage
sfbsfe101 FE lsphv01 8 32 P1FECSV
sfbsfe102 FE lsphv02 8 32 P2FECSV
sfbsfe103 FE lhv01 8 32 VHD1
sfbsfe104 FE lhv02 8 32 VHD2
sfbsfe105 FE lhv03 8 32 VHD1
sfbsfe106 FE lhv04 8 32 VHD2
sfbsfe107 FE lhv01 8 32 VHD1
sfbsfe108 FE lhv02 8 32 VHD2
sfbsfe109 FE lhv03 8 32 VHD1
sfbsfe110 FE lhv04 8 32 VHD2
sfbsfe201 FE lsphv01 8 32 P1FECSV
sfbsfe202 FE lsphv02 8 32 P2FECSV
sfbsfe203 FE lhv01 8 32 VHD1
sfbsfe204 FE lhv02 8 32 VHD2
sfbsfe205 FE lhv03 8 32 VHD1
sfbsfe206 FE lhv04 8 32 VHD2
sfbsfe207 FE lhv01 8 32 VHD1
sfbsfe208 FE lhv02 8 32 VHD2
sfbsfe209 FE lhv03 8 32 VHD1
sfbsfe210 FE lhv04 8 32 VHD2
sfbsdb101 Backend lsphv01 8 32 P1CSV
sfbsdb102 Backend lsphv02 8 32 P2CSV
sfbsdb201 Backend lsphv01 8 32 P1CSV
sfbsdb202 Backend lsphv02 8 32 P2CSV
sfbsmon01 Monitoring lsphv01 8 32 P1CSV
sfbsmon02 Monitoring lsphv02 8 32 P2CSV
sfbsedg01 Edge lhv01 8 16
sfbsedg02 Edge lhv02 8 16
sfbsedg03 Edge lhv03 8 16
sfbsedg04 Edge lhv04 8 16
sfbswac01 WAC lhv01 4 8
sfbswac02 WAC lhv02 4 8
sfbsmed01 Mediation lhv03 8 16
sfbsmed02 Mediation lhv04 8 16
sfbsprc01 PRC lhv01 8 32 VHD1
sfbsprc02 PRC lhv02 8 32 VHD2
sprcdb01 PRCDB lsphv01 8 32 P1CSV
sprcdb02 PRCDB lsphv02 8 32 P2CSV
sfbsvis01 VIS lhv03 8 16
sfbsvis02 VIS lhv04 8 16

P1CSV and P2CSV are the CSVs from storage pool1 and pool2 respectively; the SQL VMs running on the Storage Spaces nodes get their storage locally from these CSV volumes. The same applies to the two FE servers running on the Storage Spaces nodes. For the remaining VMs running on the four standalone Hyper-V nodes, the VM disk volumes were accessed over SOFS: the VHD1 portion of the share comes from the Pool1 storage space and VHD2 from Pool2. Two 10G SOFS communication channels provide redundant connectivity between the standalone Hyper-V systems and the Storage Spaces cluster nodes. SMB Multichannel automatically kicks in to distribute traffic across these two links instead of using them in active-passive mode, and an SMB Multichannel constraint was applied to force SMB traffic over this 10G connectivity.
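
As a rough sketch (the SOFS name and interface aliases below are placeholders, not the actual deployment values), the constraint can be applied on each standalone Hyper-V host with the SMB PowerShell cmdlets:

# Restrict SMB connections towards the SOFS to the two 10G interfaces only,
# so Multichannel keeps both links active instead of spilling onto other NICs.
New-SmbMultichannelConstraint -ServerName "sfbs-sofs" -InterfaceAlias "SOFS-10G-1","SOFS-10G-2"

# Verify which connections SMB Multichannel has established
Get-SmbMultichannelConnection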

DNS-based load balancing works fine for SIP traffic, but for web traffic an external load balancer is required to distribute the traffic among the front end servers in an enterprise pool while checking their service availability. For this, the HLB keeps monitoring port reachability on these servers; if for some reason a server is not listening, it marks it as down and stops routing client traffic towards it. This is something DNS round robin cannot do, and hence DNS round robin should not be used for web traffic in any large Lync/SFBS enterprise pool deployment.
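
For reference, DNS load balancing for the SIP side is just one A record per front end against the pool FQDN (zone, pool name and IPs below are placeholders):

# One A record per front end for the pool FQDN; clients fall back to the
# next address on their own if a front end is unreachable.
Add-DnsServerResourceRecordA -ZoneName "contoso.com" -Name "sfbspool01" -IPv4Address "10.10.10.103"
Add-DnsServerResourceRecordA -ZoneName "contoso.com" -Name "sfbspool01" -IPv4Address "10.10.10.104"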

In a hardware load balancer (HLB) environment, traffic hits the virtual IP created on the HLB for the Lync/SFBS pool and, depending on server availability, is routed to one of the FE servers. For the load balancing to complete, the FE server has to send its reply to the client through the HLB so that the HLB can process the traffic and change the source IP back to the VIP; otherwise the client would see traffic coming from a different IP, that of the FE server rather than the VIP, and would discard the packet. This is why the default gateway of the FE servers has to be the IP address of the HLB rather than the L3 switch. In the Hyper-V environment we are using 10G NICs with SR-IOV to provide a low-latency network; however, the HLB bandwidth and concurrency, which apart from the hardware capacity is capped by the license entitlement, becomes the bottleneck. We did not want SIP traffic to suffer under this arrangement for the sake of web traffic alone, and hence the default gateway was pointed to the server farm L3 switch gateway IP. For situations like this the HLB can operate in NAT mode: all communication from the client is source NATed (SNAT) to an IP owned by the HLB. Since this IP is in the same subnet where the SFBS infra is deployed, the FE server replies to the NATed HLB IP address without using the default gateway, and the flow required for load balancing completes. In this mode the server sees all requests coming from the HLB-owned IP, not the actual client address; the HLB can add an extra header (such as X-Forwarded-For) to supply the actual client IP address, but the backend application has to be able to make use of it.

The HLB used for the Lync infra has a global NAT option only, and since it is shared with other applications that require the actual client IP in their logs, using this NAT option was not feasible. Hence, between the HLB and the FE servers, a layer of Linux HAProxy was introduced to serve SFBS web traffic while meeting the requirement of not using the HLB as the default gateway on the FE servers.
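
A minimal sketch of what that HAProxy layer looks like, with L4 (TCP mode) balancing and basic health checks; the bind address and server IPs are placeholders, and the server names follow the table above:

defaults
    mode tcp
    timeout connect 5s
    timeout client  180s
    timeout server  180s

# Internal web services VIP for the pool; the HLB forwards web traffic here.
# HAProxy connects to the front ends from its own IP in the same subnet,
# so FE replies return to HAProxy without any default gateway change.
listen sfbs_web_internal
    bind 10.10.10.50:443
    balance roundrobin
    server sfbsfe103 10.10.10.103:443 check
    server sfbsfe104 10.10.10.104:443 check
    server sfbsfe105 10.10.10.105:443 check
    server sfbsfe106 10.10.10.106:443 check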

This design also helped at locations that do not have an HLB. Linux HAProxy with keepalived can meet the requirement of a highly available VIP, though in active-passive mode.
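
For those locations, a keepalived sketch along these lines (interface name, VRID and VIP are placeholders) keeps the HAProxy VIP floating between two nodes in active-passive mode:

# Health check: fail over the VIP if the haproxy process is not running
vrrp_script chk_haproxy {
    script "pidof haproxy"
    interval 2
}

vrrp_instance SFBS_WEB {
    state MASTER            # BACKUP on the second node
    interface eth0
    virtual_router_id 51
    priority 101            # lower value (e.g. 100) on the backup node
    advert_int 1
    virtual_ipaddress {
        10.10.10.50/24
    }
    track_script {
        chk_haproxy
    }
}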

With no cookie-based affinity requirement from Lync 2013 onwards, L4 load balancing is sufficient, even though HAProxy does support L7 load balancing. For mobility, routing traffic from the internal BYOD segment out to the external reverse proxy (the HLB in the public segment) was a pain, both from security concerns and for troubleshooting. In SFBS, that traffic is served internally on a new VIP to which the external web services name resolves. The same public certificate was used, either in SSL bridging mode on the HLB or assigned to the 4443 IIS interface in the non-HLB setup.
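
As a sketch of that arrangement in the non-HLB case (names and addresses are again placeholders), the external web services FQDN resolves internally to a VIP that simply forwards 443 to the 4443 site on the front ends, where the public certificate is bound:

# BYOD/mobility clients resolve the external web services FQDN to this
# internal VIP; TCP passthrough to the FE external web site on 4443.
listen sfbs_web_external_internal
    bind 10.10.10.51:443
    mode tcp
    balance roundrobin
    server sfbsfe103 10.10.10.103:4443 check
    server sfbsfe104 10.10.10.104:4443 check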
