I am trying to understand at what point I would need to implement an Asynchronous HTTP Server to overcome request blocking scalability issues, so let me outline a theoretical (and simplistic) scenario.
I have a maximum of 100 concurrent users. Each is likely to have at least 2 browser window/tab instances open at any time. If my understanding of the ICEfaces technology is correct, this translates into 200 'blocked' application-server request-processing threads, each awaiting a server-triggered event.
This is understandably quite a resource drain. However, if I were to configure a cluster of two JBoss/Tomcat nodes, each with the default maximum of 200 threads, would I still need to implement the Async HTTP Server (assuming I am not interested in inter-node render broadcasts)?
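For context, the per-node thread cap in this scenario is the one set on Tomcat's standard (blocking) HTTP connector. A minimal sketch, assuming Tomcat's stock server.xml and illustrative values only:

```xml
<!-- server.xml: standard blocking HTTP connector on one Tomcat node.
     maxThreads caps the request-processing threads; each blocked
     ICEfaces push connection holds one of these threads open, so
     200 concurrent blocked connections would exhaust this pool. -->
<Connector port="8080" protocol="HTTP/1.1"
           maxThreads="200"
           acceptCount="100"
           connectionTimeout="20000" />
```

With two such nodes behind a load balancer, the cluster-wide capacity would nominally be 400 threads, though threads are also needed for ordinary (non-blocked) requests.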
Please ignore. I found a wealth of info in posting http://www.icefaces.org/JForum/posts/list/2840.page that I had previously missed. Whilst it doesn't address my specific query (which is something I will need to resolve on my own), it has given me greater insight into ICEfaces blocking, etc.
Another thing to note is that ICEfaces 1.6 features multi-window asynchronous connectivity to the same host (sort of like a multiplexing connection architecture), so you'll only use one blocking connection per client.