Feature | Microsoft NLB | Edgenexus ADC |
Load Balancing architecture | Microsoft NLB is deployed on every server in the cluster and works by assigning a virtual IP address (VIP) to the network adapter of each cluster member. Traffic is sent to the VIP, received by all cluster members, accepted by one and dropped by the rest. Microsoft NLB supports two configurations: unicast mode and multicast mode. Unicast mode replaces the existing MAC address of every cluster member with a new cluster MAC address shared by all nodes. Multicast mode adds the cluster MAC address to the node's adapter but leaves the original in place. With both methods the nodes share an IP and MAC address, so that when a client asks “who has this IP address” (an ARP request), all nodes respond.
* Unicast mode aims to be simple and has the advantage of working across routers without problems. However, it has the negative side effect of flooding switch ports. Because MS-NLB masks the MAC address on outgoing cluster traffic, switches never learn which ports cluster members are attached to, so traffic destined for the cluster is flooded out of all ports. This effectively turns a switch into a hub as far as cluster traffic goes, which can cause network issues with busy clusters. It can be overcome by adding static ARP entries on the switch (if supported), but that quickly becomes a management nightmare. Another drawback of unicast mode is that cluster members cannot communicate directly with each other without adding a second NIC.
* Multicast mode attempts to address switch flooding by using IGMP multicast support, which tells the switch to direct cluster traffic only to those ports with cluster members attached. However, this assumes the switch supports IGMP snooping and has it enabled. Many routers and Layer 3 switches also do not support this mode, because the ARP replies associate a unicast IP with a multicast MAC, which may or may not be against standards depending on whether you ask Microsoft or Cisco. No IGMP switch support means switch flooding, and no IGMP router support means no cluster access from outside the subnet unless a static ARP entry is used.
* Implementing NLB in a virtualised environment adds complexity. The only platform I can speak to from experience is VMware ESX. It supports both modes; however, unicast is not recommended. By default, unicast does not work because the virtual switches learn MAC addresses despite the cluster masking outbound traffic, which breaks clustering. This can be overcome by disabling the Notify Switches option, but that in turn breaks operations such as vMotion. Multicast works, but is subject to the same problems mentioned above, made more complex by the many different physical/virtual topologies. | ADC also assigns a VIP for inbound client traffic but, as an external load balancer, does not rely on the back-end servers having any knowledge of the VIP or receiving all inbound traffic. This removes the issues NLB faces from having to co-exist on every back-end server in order to cluster them. ALB-X deploys equally well in virtualised environments as an external load balancer virtual machine as it does as an external hardware load balancer. Furthermore, ADC functions as a local client to all the servers being clustered. This provides several advantages:
– Improved security: communication between the ADC and back-end servers can be on private IP networks that are firewalled from direct external access. The VIP(s) are only directly accessed on the ADC, creating a safer security zone for the servers.
– Improved performance: ADC functions as an application proxy, which enables protocols like HTTP to scale better by off-loading TCP connection management from the back-end servers. High-latency, low-bandwidth, concurrent client-side connections are terminated by ADC. On the server side, ADC can maintain multiplexed HTTP sessions over a few TCP connections, reducing the resources servers need to maintain client sessions (see the sketch after this row). |
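To make the proxy pattern in the performance point concrete, here is a minimal Python sketch of an external load balancer that terminates client TCP connections and reuses a small pool of warm server-side sockets. The back-end addresses, listening port and pooling policy are illustrative assumptions, and the single-read request handling is deliberately naive; this is a sketch of the technique, not the ADC's implementation.

```python
import queue
import socket
import socketserver

BACKENDS = [("10.0.0.11", 80), ("10.0.0.12", 80)]  # hypothetical private pool
POOL: "queue.Queue[socket.socket]" = queue.Queue()

def get_backend_socket() -> socket.socket:
    """Reuse a pooled server-side connection, or open a new one."""
    try:
        return POOL.get_nowait()
    except queue.Empty:
        # Sketch only: a real ADC picks a server per its balancing algorithm.
        return socket.create_connection(BACKENDS[0])

class ProxyHandler(socketserver.BaseRequestHandler):
    def handle(self):
        # Client side: the slow, high-latency connection terminates here.
        request = self.request.recv(65536)
        if not request:
            return
        upstream = get_backend_socket()
        try:
            upstream.sendall(request)        # one fast LAN hop to the server
            response = upstream.recv(65536)  # naive single read, sketch only
            self.request.sendall(response)
            POOL.put(upstream)               # keep the server-side socket warm
        except OSError:
            upstream.close()

if __name__ == "__main__":
    with socketserver.ThreadingTCPServer(("0.0.0.0", 8080), ProxyHandler) as srv:
        srv.serve_forever()
```

The design point is that the expensive part of each client session, its TCP connection, never reaches the servers; only short, warm LAN exchanges do.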
Load Balancing algorithm and Session Persistence | Microsoft NLB load-balances incoming client requests by directing a selected percentage of new requests to each cluster host; the load percentage is set in the Network Load Balancing Properties dialog box for each port range to be load-balanced. The algorithm does not respond to changes in the load on each cluster host (such as CPU load or memory usage). However, the mapping is modified when the cluster membership changes, and load percentages are renormalized accordingly. The load-balancing algorithm assumes that client IP addresses and port numbers (when client affinity is not enabled) are statistically independent. This assumption can break down if a server-side firewall is used that proxies client addresses with one IP address and, at the same time, client affinity is enabled. In this case, all client requests will be handled by one cluster host and load balancing is defeated. | ADC provides an extensive range of load balancing algorithms and session persistence options to cope with the wide-ranging requirements of multi-vendor applications and service delivery expectations. ADC can distribute load evenly and recognise server performance dynamically without the need for server-side probes. ADC deploys a range of health monitoring techniques to ensure the availability of the servers and services used in load balancing decisions; this is a key benefit of an external load balancer. Where applications require session persistence, ADC offers a number of options to achieve it for many different applications, above and beyond the Microsoft environment. |
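The affinity failure mode described above can be shown with a short sketch. This is not Microsoft's actual hashing algorithm, just a stand-in static hash, but it demonstrates why enabling client affinity behind a proxying firewall pins every request to a single host:

```python
import hashlib

def nlb_pick(hosts, client_ip, client_port, affinity=True):
    """Stand-in for NLB-style static hashing (not Microsoft's algorithm)."""
    # With affinity on, only the source IP feeds the hash.
    key = client_ip if affinity else f"{client_ip}:{client_port}"
    digest = hashlib.sha256(key.encode()).digest()
    # Membership changes alter len(hosts), renormalizing the mapping.
    return hosts[int.from_bytes(digest[:4], "big") % len(hosts)]

hosts = ["node1", "node2", "node3"]
# All requests proxied through one firewall IP land on the same node...
print({nlb_pick(hosts, "203.0.113.7", p) for p in range(1024, 1032)})
# ...whereas hashing IP and port spreads them, as independence assumes.
print({nlb_pick(hosts, "203.0.113.7", p, affinity=False) for p in range(1024, 1032)})
```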
HTTP/HTTPS support | Microsoft NLB is fundamentally a Layer 4 clustering solution and does not have any application support for HTTP or HTTPS (SSL). MS-NLB relies on the servers to provide the application-level intelligence and support; for example, there is no SSL offload. | ADC incorporates an HTTP parser and the ability to offload SSL from back-end servers. This improves server-side performance and security. When SSL certificates are managed on the ADC rather than on each server, operations are simplified and scaling is further improved (see the sketch after this row). |
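As a rough illustration of SSL offload, the sketch below terminates TLS at the load balancer and forwards plain HTTP to a back-end inside the firewalled zone. The certificate paths and back-end address are made-up placeholders, and the one-read forwarding is simplified; it shows the pattern, not Edgenexus's implementation.

```python
import socket
import ssl

# Hypothetical paths: the certificate and key live on the load balancer,
# not on each back-end server, which simplifies certificate management.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("/etc/lb/vip.crt", "/etc/lb/vip.key")

BACKEND = ("10.0.0.11", 80)  # plain HTTP inside the private zone

with socket.create_server(("0.0.0.0", 443)) as listener:
    with ctx.wrap_socket(listener, server_side=True) as tls_listener:
        while True:
            client, _ = tls_listener.accept()  # TLS handshake happens here
            with client, socket.create_connection(BACKEND) as upstream:
                upstream.sendall(client.recv(65536))  # decrypted request
                client.sendall(upstream.recv(65536))  # encrypted back to client
```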
Acceleration: compression, caching | Microsoft NLB does not provide compression or caching techniques to accelerate web applications. | ADC compresses HTTP traffic to improve download times and reduce data centre bandwidth. By caching server responses intelligently, web page performance improves further, as content is served by ADC rather than by the back-end servers. Caching is included in ADC and is an upgrade option for ALB. |
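A compact sketch of the two acceleration techniques together, using a fixed-TTL cache keyed by URL plus gzip compression. A real ADC would honour Cache-Control headers and negotiate Accept-Encoding, so treat the policy here as an assumption for illustration:

```python
import gzip
import time

CACHE_TTL = 60.0  # hypothetical fixed TTL; real caches honour Cache-Control
_cache = {}       # url -> (expires_at, gzipped_body)

def serve(url, fetch_backend):
    """Answer from cache when fresh, compressing whatever goes out."""
    now = time.monotonic()
    hit = _cache.get(url)
    if hit and hit[0] > now:
        return hit[1]                 # served by the load balancer, no backend hit
    body = fetch_backend(url)         # backend touched only on miss or expiry
    compressed = gzip.compress(body)  # smaller payload, faster download
    _cache[url] = (now + CACHE_TTL, compressed)
    return compressed

# Demo with a stub backend that counts how often it is called:
calls = []
def backend(url):
    calls.append(url)
    return b"<html>" + b"x" * 10_000 + b"</html>"

first = serve("/index.html", backend)
second = serve("/index.html", backend)
print(len(calls), len(first) < 10_000)  # 1 True: one backend hit, compressed
```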
Layer 7 intelligence | Microsoft NLB is fundamentally a Layer 4 clustering solution and does not have any Layer 7 application intelligence. | ADC is an intelligent application proxy with the capability to provide traffic management rules that improve application delivery and security. The flightPATH rules engine is licensed in ADC and is an upgrade option for ALB. |
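flightPATH's own rule syntax is not shown here; the sketch below is a generic condition/action engine of the kind a Layer 7 proxy evaluates per request, with hypothetical rules (path-based pool selection, header injection, blocking) for illustration:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    condition: Callable[[dict], bool]  # tested against the parsed request
    action: Callable[[dict], None]     # adjusts routing or headers

rules = [
    # Steer Outlook Web App traffic to a dedicated pool (hypothetical name):
    Rule(lambda r: r["path"].startswith("/owa"),
         lambda r: r.update(pool="exchange-cas")),
    # Inject a header for old clients:
    Rule(lambda r: "legacy-agent" in r["headers"].get("User-Agent", ""),
         lambda r: r["headers"].update({"X-Upgrade-Notice": "true"})),
    # Block requests for .php content outright:
    Rule(lambda r: r["path"].endswith(".php"),
         lambda r: r.update(pool=None, status=403)),
]

request = {"path": "/owa/auth", "headers": {"User-Agent": "Mozilla"}, "pool": "default"}
for rule in rules:
    if rule.condition(request):
        rule.action(request)
print(request["pool"])  # exchange-cas
```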
Scaling | Microsoft NLB is a clustering solution limited to supporting Microsoft applications only and reliant on server capacity and interoperability. | ADC supports any TCP-based application and is vendor independent. ADC scales to very high performance and does not depend directly on the capacity and interoperability of the back-end servers. |
Limitations from Microsoft | Reference: http://technet.microsoft.com/en-us/library/ff625247.aspx
There are several limitations associated with deploying WNLB with Microsoft Exchange:
* WNLB can’t be used on Exchange servers where mailbox DAGs are also being used, because WNLB is incompatible with Windows failover clustering. If you’re using an Exchange 2010 DAG and you want to use WNLB, you need to have the Client Access server role and the Mailbox server role running on separate servers.
* Due to performance issues, we don’t recommend putting more than eight Client Access servers in an array that’s load balanced by WNLB.
* WNLB doesn’t detect service outages. WNLB only detects server outages by IP address. This means if a particular Web service, such as Outlook Web App, fails, but the server is still functioning, WNLB won’t detect the failure and will still route requests to that Client Access server. Manual intervention is required to remove the Client Access server experiencing the outage from the load balancing pool.
* WNLB configuration can result in port flooding, which can overwhelm networks.
* Because WNLB only performs client affinity using the source IP address, it’s not an effective solution when the source IP pool is small. This can occur when the source IP pool is from a remote network subnet or when your organization is using network address translation.
If you have more than eight Client Access servers in a single Active Directory site, your organization will need a more robust load balancing solution. Although there are robust software load balancing solutions available, a hardware load balancing solution provides the most capacity. For more information about Exchange 2010 server load balancing solutions, see Microsoft Unified Communications Hardware Load Balancer Deployment. External load balancers support very high traffic throughput and can be configured to load balance in many ways. Most external load balancer vendors have detailed documentation about how their product works with Exchange 2010. The simplest way to configure these load balancers is to create a fallback list of the affinity methods that will be applied by the load balancer. For example, the load balancer will try cookie-based affinity first, then SSL session ID, and then source IP affinity (see the sketch after this row). |
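The affinity fallback list described at the end of the Microsoft guidance (cookie, then SSL session ID, then source IP) can be sketched as follows. The request field names and cookie name are illustrative assumptions, not any vendor's API:

```python
def persistence_key(req):
    """Walk the fallback affinity list; return the first usable method."""
    for method, value in (
        ("cookie", req.get("cookies", {}).get("lb-affinity")),  # hypothetical cookie
        ("ssl-session", req.get("ssl_session_id")),
        ("source-ip", req.get("client_ip")),
    ):
        if value:
            return method, value
    return "none", None

# A plain HTTP client falls through to source-IP affinity:
print(persistence_key({"client_ip": "198.51.100.9"}))
# An HTTPS client without a cookie sticks by SSL session ID instead:
print(persistence_key({"ssl_session_id": "ab12", "client_ip": "198.51.100.9"}))
```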