Using Linux based LoadBalancer for SFBS

As stated earlier, in this deployment of SFBS the default gateway on the frontend (FE) servers is an L3 switch, as against the hardware load balancer (HLB) used in the Lync 2013 deployment. This ensures that neither the license cap nor the hardware capability of the HLB becomes a bottleneck for SFBS traffic, with the FE servers connected through teamed 10G network interfaces. As using NAT on the HLB was not an option, an intermediate layer was required for the SFBS web services. A Linux based solution was chosen mainly for the feasibility of running it on the same Hyper-V infrastructure that hosts the SFBS servers, and of monitoring it through SCOM.

There were two sets of deployments:

  • For large locations with an existing HLB
    • In this deployment, multiple Linux VMs running HAProxy provide the load balancing layer, and the HLB in turn provides the high availability layer for them
  • For new branch offices that would go with an Enterprise pool but do not have an HLB
    • This deployment uses two Linux VMs running HAProxy and keepalived to maintain a highly available virtual IP (VIP) for the web service DNS record.

Both deployments serve two listener addresses for each SFBS pool:

IP1 as resolved for internal web FQDN

  • Port 443 for internal Lync / SFB users
  • Port 80 for IP phones to acquire the certificate chain, as the certificates used on the FE servers are from an internal certificate authority (CA)

IP2 as resolved for external web FQDN

In this deployment of SFBS we decided to remove the complexity of routing traffic from internal mobile clients on the BYOD Wi-Fi network to the public-segment leg of the reverse proxy. Of course, with TMG discontinued, there was a need to change the reverse proxy solution. With no native load balancing capability and an additional dependency on ADFS, the Web Application Proxy (WAP) role was not a choice. A similar L4-only solution with SSL bridging was used as the reverse proxy.

  • Port 443 mapped to port 4443, i.e. the external web service port of the FE servers

CentOS 7 was my choice because of my Red Hat background and the simplicity of updating and fetching additional packages. Like Windows Server Core, the minimal ISO here helps deploy a system sufficient for the requirement while reducing the attack surface and the maintenance overhead from unwanted software. A 1GB Gen2 VM with 4GB of SSD-only storage is serving well for this deployment of above 100K users. During the deployment, ensure the “Enable Secure Boot” option is unchecked before starting the VM for OS installation.

It makes life easier if network configuration such as IP address, default gateway, DNS and domain name, as well as time zone and NTP server details, are configured during installation through the installer GUI.

After installation, reboot the system and log in with the root password set during installation.

In case the environment uses a proxy to connect to the internet, /etc/yum.conf must be configured with the proxy details, including authentication details if the proxy requires them. We need the Linux machines to get updates and packages from the nearest internet mirror of the CentOS repository.

vi /etc/yum.conf
Add proxy, proxy_user and proxy_password details after the “installonly_limit=5” line.
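A minimal sketch of the added lines — the proxy host, port and credentials below are placeholder values for illustration, to be replaced with those of your environment:

```ini
# /etc/yum.conf — proxy settings (placeholder values)
proxy=http://proxy.example.com:8080
proxy_user=yumuser
proxy_password=yumpassword
```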



It’s a good idea to first complete the OS upgrade with the “yum upgrade” command and restart the machine. Once done, we are ready to install the packages required to make the machine a load balancer.

  • For large locations with an existing HLB (without the keepalived requirement)
    yum install tcpdump haproxy policycoreutils-python


  • Locations without an HLB
    yum install tcpdump haproxy keepalived policycoreutils-python


Here HAProxy is the component that provides the load balancing capability. Keepalived, where required, maintains one or more virtual IP addresses between the two HAProxy nodes, so that in the event one node is not available, the IP address is served by the other available node. policycoreutils-python is needed for basic operation of a Security-Enhanced Linux (SELinux) system; we require its “semanage” command as discussed below. Having tcpdump, a network capture tool like Netmon or Wireshark, helps in troubleshooting issues.

CentOS 7 comes with the SELinux kernel security module, and I am not going to disable it, for security reasons. In the default policy configuration, HAProxy is authorized for the default HTTP ports, and for SFBS we require additional ports, such as connecting to the FE servers on 4443. To see the current port objects, filtering for the http_port_t type:

semanage port -l | grep '^http_port_t'

To add port 4443 to the http_port_t type:

semanage port -a -t http_port_t -p tcp 4443

For deployments that require keepalived to maintain high availability of the virtual IP for web services access, it needs to be configured as below. This section can be skipped for locations where an HLB provides high availability among the HAProxy nodes.

vi /etc/keepalived/keepalived.conf

global_defs {
    router_id haproxy1              # haproxy2 on the backup node
}

vrrp_script haproxy {
    script "killall -0 haproxy"     # succeeds while the haproxy process is alive
    interval 2
    weight 2
}

vrrp_instance 50 {
    virtual_router_id 50
    advert_int 5
    authentication {
        auth_type PASS
        auth_pass sfbs123
    }
    priority 101                    # 100 on the backup node
    state MASTER                    # BACKUP on the backup node
    interface eth0
    virtual_ipaddress {
        10.20.30.49                 # internal web FQDN
        10.20.30.59                 # external web FQDN
    }
    track_script {
        haproxy
    }
}

In this example, 10.20.30.xx is the IP segment, .49 is the IP address for the SFBS internal web FQDN and .59 for the external web FQDN.
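The failover behaviour keepalived provides can be sketched in a few lines of Python — a hypothetical simplification, not keepalived's actual code: among reachable nodes, the one with the highest effective VRRP priority owns the VIPs, and the tracked script (`weight 2`) raises a node's priority while haproxy is running on it:

```python
def vip_owner(nodes):
    """Return the node that owns the VIPs: among reachable nodes, the one
    with the highest effective VRRP priority wins. Hypothetical sketch of
    keepalived's behaviour, not its real implementation."""
    live = [n for n in nodes if n["reachable"]]

    def effective(n):
        # track_script with `weight 2` adds 2 while the haproxy check succeeds
        return n["priority"] + (2 if n["haproxy_running"] else 0)

    return max(live, key=effective)["name"] if live else None

nodes = [
    {"name": "haproxy1", "priority": 101, "reachable": True, "haproxy_running": True},
    {"name": "haproxy2", "priority": 100, "reachable": True, "haproxy_running": True},
]
print(vip_owner(nodes))               # haproxy1 (effective 103 vs 102)

nodes[0]["haproxy_running"] = False   # haproxy dies on the master
print(vip_owner(nodes))               # haproxy2 takes over (102 vs 101)
```

This is why the weight (2) must be larger than the priority gap (1): a master whose haproxy process has died must fall below the healthy backup.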

As one node will own the IP addresses until it goes down, we need to configure the kernel to allow an application (HAProxy in this case) to bind to a port and IP address that is not currently present on the system.

vi /etc/sysctl.conf

Add the following line, then apply it with “sysctl -p”:

net.ipv4.ip_nonlocal_bind = 1
The HAProxy configuration is the same for both types of deployment, except that on a system without keepalived, the IP addresses to listen on for the SFBS web services are the addresses assigned to the system itself.

vi /etc/haproxy/haproxy.cfg

global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    stats socket /var/lib/haproxy/stats

defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option                  http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

# Status page on the actual system IP; access it with the userid/password
# defined here to see the current status of HAProxy.
listen stats
    bind :8080
    mode http
    stats enable
    stats uri /stats
    stats realm HAProxy\ Statistics
    stats auth admin:password
    stats admin if TRUE

# Internal web services: ports 80 and 443 pass through to the same ports on
# the FE servers (a server line without a port keeps the client's destination
# port). Replace <FE0x-IP> with the addresses of your FE servers.
listen SFBS_int
    bind 10.20.30.49:443
    bind 10.20.30.49:80
    mode tcp
    option persist
    balance roundrobin
    stick-table type ip size 10240k expire 30m
    stick on src
    server SFBSFE01 <FE01-IP> weight 1 check port 80 inter 5000 rise 2 fall 3
    server SFBSFE02 <FE02-IP> weight 1 check port 80 inter 5000 rise 2 fall 3
    server SFBSFE03 <FE03-IP> weight 1 check port 80 inter 5000 rise 2 fall 3
    server SFBSFE04 <FE04-IP> weight 1 check port 80 inter 5000 rise 2 fall 3
    option redispatch
    option abortonclose
    maxconn 40000

# External web services: listen on 443 and forward to 4443 on the FE servers.
listen SFBS_ext
    bind 10.20.30.59:443
    mode tcp
    option persist
    balance roundrobin
    stick-table type ip size 10240k expire 30m
    stick on src
    server SFBSFE01 <FE01-IP>:4443 weight 1 check port 4443 inter 5000 rise 2 fall 3
    server SFBSFE02 <FE02-IP>:4443 weight 1 check port 4443 inter 5000 rise 2 fall 3
    server SFBSFE03 <FE03-IP>:4443 weight 1 check port 4443 inter 5000 rise 2 fall 3
    server SFBSFE04 <FE04-IP>:4443 weight 1 check port 4443 inter 5000 rise 2 fall 3
    option redispatch
    option abortonclose
    maxconn 40000
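The `balance roundrobin` plus `stick on src` combination used above can be sketched in a few lines of Python — a hypothetical illustration, not HAProxy's implementation: the first request from a client IP is assigned the next server in rotation, and subsequent requests from the same IP stick to that server (until the stick-table entry expires, which this sketch omits):

```python
from itertools import cycle

class StickyBalancer:
    """Sketch of `balance roundrobin` + `stick on src`: new client IPs get
    the next server in rotation; known IPs stick to their server."""
    def __init__(self, servers):
        self._ring = cycle(servers)   # round-robin rotation over the servers
        self._stick = {}              # stick-table: client IP -> server
    def route(self, client_ip):
        if client_ip not in self._stick:
            self._stick[client_ip] = next(self._ring)
        return self._stick[client_ip]

lb = StickyBalancer(["SFBSFE01", "SFBSFE02", "SFBSFE03", "SFBSFE04"])
print(lb.route("10.20.30.101"))   # SFBSFE01
print(lb.route("10.20.30.102"))   # SFBSFE02
print(lb.route("10.20.30.101"))   # sticks to SFBSFE01
```

Stickiness matters here because the SFBS web services keep session state per FE server; `option redispatch` covers the case where the stuck server goes down.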

For the external web service, we listen on port 443 and send the requests to port 4443 on the FE servers. At this stage we are done with the configuration, and the Linux firewall needs to be configured to allow traffic coming in to the ports HAProxy is listening on. First we define a firewall service profile for HAProxy, telling the system to allow incoming traffic on ports 80 and 443, as well as port 8080 for status monitoring.

vi /etc/firewalld/services/haproxy.xml

<?xml version="1.0" encoding="utf-8"?>
<service>
  <short>haproxy</short>
  <description>HAproxy for SFBS traffic</description>
  <port protocol="tcp" port="80"/>
  <port protocol="tcp" port="443"/>
  <port protocol="tcp" port="8080"/>
</service>

By default, the network interfaces will be in the public zone, which can be verified with the firewall-cmd command:

firewall-cmd --get-active-zones

For that zone, we need to add the haproxy firewall service permanently:

firewall-cmd --permanent --zone=public --add-service=haproxy

For the deployment with keepalived, we additionally need to permit the multicast traffic that is used as the heartbeat check between the two systems; VRRP advertisements are sent to the multicast address 224.0.0.18.

firewall-cmd --permanent --zone=public --add-rich-rule='rule family="ipv4" destination address="224.0.0.18" protocol value="ip" accept'

Reload the firewall so the permanent rules take effect:

firewall-cmd --reload

We need to enable these services so that they are started automatically at system boot (omit keepalived on deployments that do not use it):

systemctl enable haproxy keepalived
systemctl start haproxy keepalived


3 Replies to “Using Linux based LoadBalancer for SFBS”

  1. i-zhishi says:

    if SFB FE Other Port:


    if SFB Edge:

    intranet or internet about udp,tcp

    how to Linux LB (Haproxy+Keepalived) to apply SFB FE,Edge?

  2. i-zhishi says:

    I’d like to know how to achieve Lync internal and external access load balancer via Linux LB (Haproxy+Keepalived). Any suggestions will be appreciated.

    I understand the traffic flow is:
    Lync FE Edge internal interface
    TCP port….
    UDP port …
    Internet ———–>Edge external interface
    TCP port…
    UDP port…

    1. Prasanta Kumar Panda says:

      Hi, Lync clients are designed to make use of DNS load balancing. Instead of using any kind of hardware / software load balancer, just follow the TechNet guide on configuring a DNS record with multiple IPs of your Edge servers. This is the recommended approach and gives the best performance. It is the HTTP/HTTPS traffic which suffers from DNS round robin records, and hence the need for a load balancer to detect the state of the backend servers and route the traffic appropriately. This article is meant for that.
