AVANU WebMux Network Traffic Manager, Network Load Balancer

Traditionally, web pages are designed for network links that exhibit little packet loss and low latency, such as high-speed fixed-line and digital subscriber line (DSL) links. They often fail to deliver adequate performance over the lower-performing links common in the mobile world. Users connecting over bandwidth-constrained links with high latency and packet loss experience slower page loads, inflating application response times. Latency varies across network types: a fixed access connection such as DSL typically exhibits lower latency than a wireless network, and bandwidth varies just as widely between them. Networks with different performance characteristics produce different results for the same web application, so when mobile users access applications designed for faster fixed-line users, the results fall short of expectations. Much of this can be mitigated by a load balancer with an optimized TCP stack on both the front-end and back-end connections.

Behavioral Impact

Many studies demonstrate the behavioral impact of poorly performing applications. The classic response-time thresholds hold that a page should respond within 100 msec to feel instantaneous; after about 1 second the user's flow of thought is interrupted, and after about 10 seconds the user's attention is lost and they turn to other tasks. Poorly performing applications therefore translate directly into lost revenue.

TCP Insufficiencies

HTTP web applications sit on top of the Transmission Control Protocol (TCP). TCP, however, was standardized in the early 1980s and has many performance-related inadequacies. It was designed on the assumption that applications run on networks without significant latency problems: when TCP was created, the focus was on congestion control mechanisms, not latency. The request/response nature of HTTP, by contrast, makes it very sensitive to high latency, and almost all networks today exhibit it to some degree. Latency also varies widely, since so many factors contribute to it.

TCP does a great deal of work under the covers to provide reliable delivery. Unlike UDP, its reliable, ordered delivery requires significant overhead, which degrades application performance. TCP is the dominant transport protocol in use today, so this overhead is unavoidable if you want to communicate over the Internet.

TCP Request/Receive Behavior

A single TCP connection does not multiplex parallel requests: it is essentially a stream of bytes with no internal structure, so multiple parallel sessions require multiple TCP connections. Its 3-way handshake adds a full round trip of delay before the first byte of user data is sent (the sketches at the end of this section illustrate both costs). TCP offers a reliable stream that handles packet loss well, but it does not guarantee timely delivery; its orientation is reliability, not latency or performance.

Latency is the Biggest Problem

Latency is the biggest problem in application performance. If you expect a high-speed web browsing experience, the only natural option is to shorten the round-trip time (RTT). Beyond a certain threshold, adding bandwidth does not reduce page load time, because latency dominates (the back-of-envelope calculation below makes this concrete). We can buy bandwidth, but to cut latency we must shorten the cable, which is expensive unless we use a Content Delivery Network (CDN). A CDN, however, cannot be used for everything: it handles dynamic content poorly and poses challenges for SSL certificate management.
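To see the handshake tax directly, the sketch below times connect() separately from the first request/response round trip. It is a minimal illustration using only the Python standard library; the endpoint is a placeholder, not a recommendation, and this is not a rigorous benchmark.

```python
# Minimal sketch of TCP handshake cost vs. first-byte cost.
# HOST/PORT are placeholders; substitute any reachable HTTP server.
import socket
import time

HOST, PORT = "example.com", 80  # placeholder endpoint

# Time the 3-way handshake: connect() returns once SYN/SYN-ACK/ACK
# completes, so this is roughly one round trip before any user data flows.
start = time.perf_counter()
sock = socket.create_connection((HOST, PORT), timeout=5)
handshake = time.perf_counter() - start

# Time one HTTP request/response over the now-open connection:
# at least one more round trip (plus server think time) to first byte.
request = f"GET / HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n".encode()
start = time.perf_counter()
sock.sendall(request)
first_bytes = sock.recv(4096)  # blocks until the first response bytes arrive
ttfb = time.perf_counter() - start
sock.close()

print(f"handshake: {handshake * 1000:.1f} ms, "
      f"time to first byte: {ttfb * 1000:.1f} ms")
```

On a high-latency mobile link, the handshake figure alone approaches the full RTT, paid before a single byte of the page is requested.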
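Because one TCP byte stream cannot interleave independent requests, clients historically worked around it by opening several connections per host. The sketch below shows that workaround under assumed names: the host and paths are hypothetical, standing in for the resources of a single page.

```python
# Sketch: parallel requests require parallel TCP connections, since a
# single TCP stream is one ordered byte sequence with no internal framing.
import concurrent.futures
import http.client

HOST = "example.com"             # placeholder host
PATHS = ["/", "/a", "/b", "/c"]  # hypothetical resources on one page

def fetch(path: str) -> int:
    # Each call opens its own TCP connection, paying its own 3-way handshake.
    conn = http.client.HTTPConnection(HOST, timeout=5)
    conn.request("GET", path)
    status = conn.getresponse().status
    conn.close()
    return status

# Four workers, four sockets: the parallelism lives in the connection
# count, not inside any single TCP session.
with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
    for path, status in zip(PATHS, pool.map(fetch, PATHS)):
        print(path, status)
```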
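Finally, the back-of-envelope model below shows why buying bandwidth stops helping once RTT dominates. All numbers are illustrative assumptions (50 objects of 20 KB each, two RTTs per object, no connection reuse), not measurements.

```python
# Toy model: total load time for a page of many small objects fetched
# sequentially over fresh connections. Every constant here is an assumption.
OBJECTS = 50          # resources on the page
OBJECT_KB = 20        # average object size in kilobytes
RTTS_PER_OBJECT = 2   # handshake (1 RTT) + request/response (1 RTT)

def load_time_ms(rtt_ms: float, bandwidth_mbps: float) -> float:
    # kilobits divided by megabits-per-second yields milliseconds
    transfer_ms = OBJECTS * OBJECT_KB * 8 / bandwidth_mbps  # serialization time
    latency_ms = OBJECTS * RTTS_PER_OBJECT * rtt_ms         # round-trip cost
    return transfer_ms + latency_ms

for label, rtt in [("DSL ~20 ms RTT", 20), ("mobile ~150 ms RTT", 150)]:
    for mbps in (5, 50):
        print(f"{label}, {mbps} Mbps: {load_time_ms(rtt, mbps):.0f} ms")
```

Under these assumptions a tenfold bandwidth increase on the mobile link cuts load time from roughly 16.6 to 15.2 seconds, because the round-trip term dwarfs the transfer term. That is exactly why shortening the RTT, or reusing connections as a load balancer with an optimized TCP stack does, pays off more than buying bandwidth.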
