Lync 2013 on Virtual Infra

The attempt to migrate from LCS 2005 to OCS did not go through, as the increase in hardware count could not be justified on the basis of new but yet-to-be-proven A/V capabilities. Down the line, product discontinuation forced us to upgrade in order to keep the IM service running in the enterprise. Lync 2010, with its virtualization support, came to the rescue and we could accommodate the Lync infrastructure in the Hyper-V environment already hosting other IT services. The migration went through successfully and the initial observations were all good.

There are two types of products: one where users need to be constantly told or educated how to use it, and the other where users get delighted and start propagating the solution on their own. With no push from us, user concurrency, which used to be around 20% with LCS 2005, grew above 50% with Lync 2010 in no time. Enhancements in IM and presence, application and desktop sharing in both peer-to-peer and conference mode, coupled with ease of use, were all contributors.

The infrastructure, which was working fine initially, started getting stressed by this increase in usage, mostly due to the 4 vCPU per VM limit in Hyper-V at that time and disk I/O. Fortunately, Windows Server 2012 had just been released by then, which helped us prepare for the next migration. The increased vCPU count with the 2012 Hyper-V infrastructure and additional disks from storage helped contain the situation, but I knew it was not adequate. In Q1 2014, for the Lync 2013 migration, we could secure some funding for a dedicated infrastructure, though barely sufficient for a few servers and an entry-level storage array.

This post is about the planning and deployment of Lync 2013 on a dedicated virtual infrastructure comprising 4 servers and a 10G iSCSI storage array. Learnings from the earlier Lync 2010 deployment and product enhancements, including Hyper-V in 2012 R2, certainly helped a lot here.

Design considerations

  • Taking the product team guidance for deploying Lync Server 2013 on virtual servers, “Planning a Lync Server 2013 Deployment on Virtual Servers”, as the reference is important
  • Presence information is crucial in real-time communication, and a lot of SQL transactions go on for Lync Server to collect and distribute it for all users. With the new front end architecture of Lync Server 2013, the SQL back end database is no longer the real-time data store of a pool; rather, it is the SQL Express based local configuration store on the front end servers. So disk IOPS is crucial for the Lync servers.
    • A two-disk RAID-0 can roughly provide the write IOPS of a four-disk RAID-10 (mirroring halves write IOPS) but without fault tolerance. So if fault tolerance can be shifted elsewhere, the number of disks required can be reduced.
    • IOPS assurance measures like a dedicated LUN, fixed VHDs, etc. (see the sketch after this list)
    • A Hyper-V replica of the Lync server VM to a high-capacity NL-SAS disk on a base machine can cover a RAID-0 failure and serve as a backup strategy as well (a second sketch follows this list)
    • SQL tempdb does not take much space, but it can impact performance if it is not sized properly and the IOPS from storage are low
  • Network usage of Lync servers is high
    • 10G network card with Single-root I/O virtualization (SR-IOV)
    • Dual-switch connectivity for network redundancy, and hence NIC teaming inside the VM
    • When using a 1Gb NIC, as for the edge servers, care must be taken to check for known VMQ issues with that type of NIC and to disable VMQ if required (see the sketch after this list)
  • Information about a particular user is kept on three Front End Servers in a pool, so during maintenance at most two servers of a pool can be taken off, i.e. no more than two front end servers of a pool should sit on one base machine
  • SQL Server Express on the front end servers can use a maximum of 4 CPUs, so the per-core SPEC value of the processor matters, and a minimum 8-core CPU is required since NUMA spanning has to be disabled at the Hyper-V level (see the sketch after this list)
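
A minimal sketch of the host-side settings called out above, using the Hyper-V and NetAdapter PowerShell modules on the 2012 R2 base machines; the NIC name, VM name, LUN drive letter and VHDX path below are hypothetical placeholders:

    # Disable NUMA spanning at the Hyper-V host level (a VM picks up its virtual NUMA topology at start-up)
    Set-VMHost -NumaSpanningEnabled $false

    # Disable VMQ on the 1Gb NIC backing the edge server switch, if the NIC/driver combination has known VMQ issues
    Disable-NetAdapterVmq -Name "1G-Edge"

    # Create a fixed-size VHDX for a front end server on the dedicated RAID-0 LUN
    New-VHD -Path "E:\VMs\ECLYNCFE01\ECLYNCFE01-Data.vhdx" -SizeBytes 200GB -Fixed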
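
And a sketch of enabling Hyper-V Replica for a front end VM towards the adjacent base machine, assuming Kerberos authentication over port 80; the host and VM names are hypothetical, and the replica storage location on the receiving side points at the high-capacity NL-SAS backed LUN:

    # On the receiving base machine: accept replication and keep replica files on the NL-SAS backed LUN
    Set-VMReplicationServer -ReplicationEnabled $true -AllowedAuthenticationType Kerberos `
        -ReplicationAllowedFromAnyServer $true -DefaultStorageLocation "R:\Replica"

    # On the source base machine: enable replication of the front end VM and start the initial copy
    Enable-VMReplication -VMName "ECLYNCFE01" -ReplicaServerName "HYPERV02.example.com" `
        -ReplicaServerPort 80 -AuthenticationType Kerberos
    Start-VMInitialReplication -VMName "ECLYNCFE01"

With these considerations in place, the deployment came out as follows: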

  • Two Lync 2013 pools
  • Each base machine hosts
    • Two front end (FE) servers from each pool
    • One Edge server
    • One SQL back end server; with SQL database mirroring that makes two per pool and four in total
    • SQL Server tempdb on an SSD drive, 4 data files of 4GB each (a sketch follows the iSCSI example below)
    • Other VMs like the monitoring database, archiving, mediation, Office Web Apps (WAC) and witness servers, spread adequately across the 4 base machines
    • Replicas of VMs from the adjacent base machine, stored on a dedicated high-capacity storage LUN for backup purposes
  • All VMs used are Generation 2
  • SR-IOV is enabled on the 10G NIC used for VM communication, globally at the BIOS level and at the NIC level (a sketch follows the iSCSI example below)
  • Each FE server is attached to two VM switches, with NIC teaming enabled inside the VM
  • The Hyper-V base machines are installed with the Server Core version of the 2012 R2 operating system, to minimise
    • Resources used by the OS
    • Resources wasted on various GUI applications after logging into the system; most of the time the operations team RDPs into a server and just leaves the session open
    • The attack surface
    • Maintenance overhead, with a reduced number of patches to install
  • iSCSI offload is enabled at the base machine level
    • Using the NIC management tool, BACS4 here, ensure that Offload iSCSI Connections is enabled. Note that BACSCPL.cpl is available on Server Core as well.

    • Parameters like the IP address are set there too, and Jumbo Frames are enabled (Jumbo Frames need to be enabled end-to-end, at the switch and storage level as well)

  • MPIO with the vendor DSM is used to establish multipath to the storage (a sketch follows the iSCSI example below)
  • iSCSI sessions can be established from the NIC management tool, the OS iSCSI control panel (iscsicpl.exe, which is available on Server Core) or using PowerShell:

    # Target IQN and the four portal IPs of the two storage controllers
    $ECLYNCSAN="iqn.1984-05.com.dell:powervault.md3600i.6f01faf000d7e7a500000000526a6740"
    $ECLYNCSAN_CTRL0_IP0="172.16.11.100"
    $ECLYNCSAN_CTRL0_IP1="172.16.22.100"
    $ECLYNCSAN_CTRL1_IP0="172.16.11.101"
    $ECLYNCSAN_CTRL1_IP1="172.16.22.101"

    # For each Broadcom iSCSI offload adapter (driver "bxois"), register all four target portals
    # and connect a persistent, multipath-enabled session through each of them
    gwmi CIM_SCSIController | where DriverName -eq "bxois" | select PNPDeviceID | %{
        $SCSIAdapter = $_.PNPDeviceID + "_0"
        New-IscsiTargetPortal -TargetPortalAddress $ECLYNCSAN_CTRL0_IP0 -InitiatorInstanceName $SCSIAdapter
        New-IscsiTargetPortal -TargetPortalAddress $ECLYNCSAN_CTRL1_IP0 -InitiatorInstanceName $SCSIAdapter
        New-IscsiTargetPortal -TargetPortalAddress $ECLYNCSAN_CTRL0_IP1 -InitiatorInstanceName $SCSIAdapter
        New-IscsiTargetPortal -TargetPortalAddress $ECLYNCSAN_CTRL1_IP1 -InitiatorInstanceName $SCSIAdapter
        Connect-IscsiTarget -NodeAddress $ECLYNCSAN -TargetPortalAddress $ECLYNCSAN_CTRL0_IP0 -InitiatorInstanceName $SCSIAdapter -IsMultipathEnabled $TRUE -IsPersistent $TRUE
        Connect-IscsiTarget -NodeAddress $ECLYNCSAN -TargetPortalAddress $ECLYNCSAN_CTRL1_IP0 -InitiatorInstanceName $SCSIAdapter -IsMultipathEnabled $TRUE -IsPersistent $TRUE
        Connect-IscsiTarget -NodeAddress $ECLYNCSAN -TargetPortalAddress $ECLYNCSAN_CTRL0_IP1 -InitiatorInstanceName $SCSIAdapter -IsMultipathEnabled $TRUE -IsPersistent $TRUE
        Connect-IscsiTarget -NodeAddress $ECLYNCSAN -TargetPortalAddress $ECLYNCSAN_CTRL1_IP1 -InitiatorInstanceName $SCSIAdapter -IsMultipathEnabled $TRUE -IsPersistent $TRUE
    }

    When the iSCSI connection is made through NIC hardware offload, the InitiatorInstanceName shown by the Get-IscsiSession cmdlet looks something like

    EBDRV\L4SC&PCI_168E14E4&SUBSYS_100814E4&REV_10\5&e0e7541&0&30054400_0

    whereas with the OS (software) iSCSI initiator it would look like ROOT\ISCSIPRT\0000_0
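
Coming back to the tempdb layout on SSD mentioned above, a sketch of what the data file configuration could look like, run with Invoke-Sqlcmd from the SQL Server PowerShell tools against the back end instance; the instance name, drive letter and file names are hypothetical:

    # Grow the primary tempdb data file to 4GB, then add three more 4GB files on the SSD volume (S: here)
    $sqlInstance = "ECLYNCSQL01"
    Invoke-Sqlcmd -ServerInstance $sqlInstance -Database master -Query "ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 4096MB);"
    2..4 | %{
        Invoke-Sqlcmd -ServerInstance $sqlInstance -Database master -Query "ALTER DATABASE tempdb ADD FILE (NAME = tempdev$($_), FILENAME = 'S:\TempDB\tempdev$($_).ndf', SIZE = 4096MB);"
    }

Relocating the existing tempdev/templog files to the SSD volume additionally needs ALTER DATABASE tempdb MODIFY FILE with the new FILENAME and a SQL service restart.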
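
For the SR-IOV and in-guest teaming points above, a minimal sketch using the Hyper-V and NetLbfo cmdlets; the switch, physical NIC, VM and guest adapter names are hypothetical:

    # On the base machine: two external switches on the SR-IOV capable 10G ports (IOV must be enabled at creation)
    New-VMSwitch -Name "VMSW-10G-A" -NetAdapterName "10G-Port1" -EnableIov $true -AllowManagementOS $false
    New-VMSwitch -Name "VMSW-10G-B" -NetAdapterName "10G-Port2" -EnableIov $true -AllowManagementOS $false

    # Give the FE VM one vNIC per switch, assign SR-IOV virtual functions and allow teaming of the vNICs in the guest
    Add-VMNetworkAdapter -VMName "ECLYNCFE01" -SwitchName "VMSW-10G-A" -Name "NIC-A"
    Add-VMNetworkAdapter -VMName "ECLYNCFE01" -SwitchName "VMSW-10G-B" -Name "NIC-B"
    Set-VMNetworkAdapter -VMName "ECLYNCFE01" -IovWeight 100 -AllowTeaming On

    # Inside the guest: team the two vNICs; switch-independent mode, since the two uplinks land on different switches
    New-NetLbfoTeam -Name "LyncTeam" -TeamMembers "Ethernet","Ethernet 2" -TeamingMode SwitchIndependent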
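
And for the MPIO point, only the generic Windows side is sketched here; the vendor DSM itself comes with the storage vendor's package (the Modular Disk Storage Manager for the PowerVault MD3600i in this case):

    # Add the Multipath I/O feature on the Server Core base machine
    Install-WindowsFeature -Name Multipath-IO

    # After installing the vendor DSM package, list the MPIO-managed disks and their paths
    mpclaim -s -d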


Post deployment and user movement, perfmon data showed a positive outcome.
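
As a rough equivalent of those perfmon views, counters along these lines can be sampled with Get-Counter (the counter paths are the standard ones; the FE computer name is a hypothetical placeholder):

    # Sample typical disk, CPU and network counters on a front end server every 15 seconds
    Get-Counter -ComputerName "ECLYNCFE01" -SampleInterval 15 -MaxSamples 4 -Counter @(
        "\PhysicalDisk(*)\Avg. Disk Queue Length",
        "\PhysicalDisk(*)\Avg. Disk sec/Transfer",
        "\Processor(_Total)\% Processor Time",
        "\Network Interface(*)\Bytes Total/sec"
    )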
