high availability, load balancing, and proxying for TCP and HTTP-based applications. It is particularly suited to web sites subject to very high loads that need persistence or Layer 7 processing. Supporting tens of thousands of connections is clearly realistic with today's hardware. Its mode of operation makes its integration into existing architectures very easy and low-risk, while still offering the possibility of not exposing fragile web servers directly to the Net.
Currently, two major versions are supported:
- version 1.3 – content switching and extreme loads
This version brought many new features and improvements over 1.2, among which:
  - content switching, to select a server pool based on any request criterion
  - ACLs, to write content switching rules
  - a wider choice of load-balancing algorithms for better integration
  - content inspection, allowing unexpected protocols to be blocked
  - transparent proxying under Linux, which allows connecting directly to the server using the client's IP address
  - kernel TCP splicing, to forward data between the two sides without copying, in order to reach multi-gigabit data rates
  - a layered design separating sockets, TCP and HTTP processing, for more robust and faster processing and easier evolution
  - a fast and fair scheduler, allowing better QoS by assigning priorities to some tasks
  - session rate limiting for colocated environments
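As a rough illustration of the content switching and ACL features above, a 1.3-style configuration might route requests to different server pools based on the URL path. The section names, addresses and paths here are hypothetical, so this is a sketch rather than a reference configuration:

```
# Hypothetical example: route static content and application traffic
# to separate backends using ACLs (names and addresses are made up).
frontend http-in
    bind :80
    mode http
    # ACL matching requests whose path begins with /static or /images
    acl is_static path_beg /static /images
    # Content switching rule: send matching requests to the static pool
    use_backend static-servers if is_static
    default_backend app-servers

backend static-servers
    server stat1 192.168.0.10:80 check

backend app-servers
    balance roundrobin
    server app1 192.168.0.20:80 check
    server app2 192.168.0.21:80 check
```

The `acl` line defines a named condition, and `use_backend ... if` applies it as a content switching rule; any request criterion can be matched this way.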
- version 1.2 – opening the way to very high traffic sites
The same as 1.1 with some new features, such as poll/epoll support for very large numbers of sessions, IPv6 on the client side, application cookies, hot reconfiguration, advanced dynamic load regulation, TCP keepalive, source hashing, weighted load balancing, an rbtree-based scheduler, and a nice Web status page. This code is in deep feature freeze and now receives critical fixes only.
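Several of the 1.2 features listed above (cookie-based persistence, weighted load balancing, and the Web status page) can be combined in a single proxy section. The snippet below is a hedged sketch using made-up server names and addresses, not a configuration taken from the documentation:

```
# Hypothetical example combining weighted balancing, cookie
# persistence and the status page (names/addresses are made up).
listen web-farm 0.0.0.0:80
    mode http
    balance roundrobin
    # Insert a persistence cookie so a client sticks to one server
    cookie SERVERID insert indirect
    # web1 gets twice the traffic of web2 thanks to its weight
    server web1 192.168.0.10:80 cookie s1 weight 2 check
    server web2 192.168.0.11:80 cookie s2 weight 1 check
    # Expose the Web status page on a dedicated URI
    stats uri /haproxy-stats
```

The `weight` parameter skews the round-robin distribution, while the inserted `SERVERID` cookie keeps each client pinned to the server that first answered it.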