Messages posted by: maratb
Profile for maratb -> Messages posted by maratb [13]

ted.goddard wrote:
Another fix would be to make the InheritableThreadLocal behavior switchable via web.xml parameter (since in retrospect the convenience of InheritableThreadLocal is not worth the other complications).

I thought about that at first as well. However, won't this break the asynchronous architecture of IceFaces by preventing user-spawned child threads from accessing PersistentFacesState when they need to request an asynchronous render back to the web browser?

So while that would be the easiest thing to do, I think it would be the wrong way to go about it.

Btw, you could flip what I did around and instead provide a way for the user to register threads that should have PersistentFacesState in their thread locals, for example via a factory pattern:

 Thread IceFacesThreadFactory.create(Class<? extends Thread> threadClass);
 Thread IceFacesThreadFactory.create(Class<? extends Runnable> runnableClass);

The implementation of these methods would register the created thread instance with some kind of registry controlled by the IceFaces framework, which PersistentFacesState.isInheritable() would then consult to see whether the thread it is being executed in is registered.

So this way, by default no thread gets its thread locals populated with PersistentFacesState, and only explicitly registered user threads get it.
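A minimal sketch of how such a registry could look. To be clear, everything here is hypothetical, not actual IceFaces API: the class name IceFacesThreadFactory comes from my signatures above, the factory is simplified to take a Runnable instance, and a real integration would need to register the thread before its inheritable thread locals are consulted.

```java
import java.util.Collections;
import java.util.Map;
import java.util.WeakHashMap;

// Hypothetical sketch of the opt-in registry idea; not actual IceFaces API.
final class IceFacesThreadFactory {
    // Weak keys so registered threads can be collected once they die.
    private static final Map<Thread, Boolean> registry =
            Collections.synchronizedMap(new WeakHashMap<Thread, Boolean>());

    private IceFacesThreadFactory() {}

    // Wraps a Runnable in a Thread that is allowed to see
    // PersistentFacesState in its thread locals.
    static Thread create(Runnable runnable) {
        Thread t = new Thread(runnable);
        registry.put(t, Boolean.TRUE);
        return t;
    }

    // What PersistentFacesState.isInheritable() would consult: only
    // explicitly registered threads get the state propagated.
    static boolean isRegistered(Thread t) {
        return registry.containsKey(t);
    }
}
```

Threads created any other way would simply never show up in the registry, so they would get nothing inherited.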

Let me first head off the likely first reaction to this post, namely that this is an application bug.

You can take a look at the link below


for a discussion where an app developer already raised this issue but it fell on deaf ears.

Basically I have a web app that uses straight Hibernate, the c3p0 database connection pool, and Ehcache. I'm sure this is quite a typical setup for a web app.

I've been experiencing memory bloat in my web app, which would ultimately lead to a PermGen OutOfMemoryError. I'm still on IceFaces 1.7.2, though looking at 1.8.2 it appears that the code in question is still the same, so the bloat is probably still there.

The bloat is due to PersistentFacesState's use of InheritableThreadLocal AND the fact that the application code uses third-party libraries that spawn child threads in the context of JSF/servlet request processing. Those child threads inherit a reference to PersistentFacesState via Thread.inheritableThreadLocals.

Even though Thread.inheritableThreadLocals's entries are of type ThreadLocal.ThreadLocalMap.Entry, which extends WeakReference<ThreadLocal>, the way it is implemented still allows for a memory leak in the specific scenario I described. A child thread that keeps a Thread.inheritableThreadLocals entry pointing to PersistentFacesState, but never touches that ThreadLocal again, will not release Entry.value to the garbage collector, because ThreadLocal.ThreadLocalMap only attempts to expunge stale entries during subsequent map operations such as ThreadLocal.ThreadLocalMap.get().
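To see the propagation half of this in isolation, here is a minimal standalone demonstration (my own toy code, not IceFaces source): a thread constructed while an InheritableThreadLocal is set copies the parent's value into its own inheritableThreadLocals in the Thread constructor, and holds it for as long as the thread lives.

```java
// Toy demonstration of InheritableThreadLocal propagation; not IceFaces code.
class InheritDemo {
    // Stand-in for PersistentFacesState's static InheritableThreadLocal.
    static final InheritableThreadLocal<Object> STATE = new InheritableThreadLocal<Object>();

    // Returns true if a child thread spawned now sees the parent's value.
    static boolean childInherits() {
        STATE.set(new Object()); // simulates the state set during request processing
        final Object parentValue = STATE.get();
        final boolean[] same = new boolean[1];
        // A "third-party" child thread spawned during the request: it copies
        // the parent's inheritable thread locals at construction time.
        Thread child = new Thread(new Runnable() {
            public void run() {
                same[0] = (STATE.get() == parentValue);
            }
        });
        try {
            child.start();
            child.join();
        } catch (InterruptedException e) {
            return false;
        }
        return same[0];
    }

    public static void main(String[] args) {
        System.out.println(childInherits()); // prints "true"
    }
}
```

If the child thread is long-lived (a pooled worker, for instance), that inherited reference sticks around with it.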

So here is one way to fix this bloat.

The basic idea is to allow filtering out ThreadLocal values during child thread initialization by overriding the InheritableThreadLocal.childValue() method. How that decision is made can be implementation specific.

The attached code provides the following: given a collection of class names representing the child thread classes spawned by third-party frameworks, take the current stack trace and try to locate each class name from that collection in it. If found, do not propagate this ThreadLocal's value reference to the child thread's Thread.inheritableThreadLocals.

I hope people find this useful. This worked out quite nicely for me. I'm attaching YourKit snapshots from before and after the fix. You can see that in the second memory snapshot PersistentFacesState is no longer the biggest object in memory.

Before the fix: http://www.flickr.com/photos/65203168@N05/5938864379/in/photostream

After the fix: http://www.flickr.com/photos/65203168@N05/5938864437/in/photostream

Here is the link to the code below in a readable format:


 ------------- PersistentFacesState.java
 package com.icesoft.faces.webapp.xmlhttp;

 public class PersistentFacesState implements Serializable {
     private static InheritableThreadLocal localInstance = new InheritableThreadLocal() {
         protected Object childValue(Object parentValue) {
             if (parentValue == null) {
                 return parentValue;
             }
             PersistentFacesState state = (PersistentFacesState) parentValue;
             return state.isInheritable() ? state : null;
         }
     };

     public boolean isInheritable() {
         // The right-hand side was lost in the forum rendering; in the
         // attached code it reads the semicolon-separated class-name list
         // configured in web.xml. getConfiguredClassNames() is a
         // hypothetical stand-in for that elided lookup.
         String value = getConfiguredClassNames();
         if (value == null) {
             return true;
         }
         StackTraceElement[] elements = Thread.currentThread().getStackTrace();
         String[] classNames = value.split(";");
         for (int i = 0; i < classNames.length; ++i) {
             String rhsClassName = classNames[i];
             for (int j = 0; j < elements.length; ++j) {
                 String lhsClassName = elements[j].getClassName();
                 if (lhsClassName.equals(rhsClassName)) {
                     return false;
                 }
             }
         }
         return true;
     }
 }

------------- web.xml

Can the IceFaces committers please comment on this?

Have you addressed this already? When 1.7.2 comes out, do I need to go back and apply this fix again, or will it be taken care of?

I don't know if this has been addressed already, but I've been experiencing a deadlock where the container's session invalidation and IceFaces' own session invalidation lock each other up.

As a result the container becomes completely unresponsive and your web app is effectively dead.

Here are the stack traces of the two deadlocked threads.

Found one Java-level deadlock:
"ContainerBackgroundProcessor[StandardEngine[Catalina]]":
waiting to lock monitor 0x08d525e4 (object 0xaadfc0d0, a java.util.HashMap),
which is held by "Session Monitor"
"Session Monitor":
waiting to lock monitor 0x08d525a4 (object 0xaadf35b0, a com.icesoft.faces.util.event.servlet.ContextEventRepeater),
which is held by "ContainerBackgroundProcessor[StandardEngine[Catalina]]"

Java stack information for the threads listed above:
"ContainerBackgroundProcessor[StandardEngine[Catalina]]":
at com.icesoft.faces.webapp.http.servlet.SessionDispatcher.notifySessionShutdown(SessionDispatcher.java:161)
- waiting to lock <0xaadfc0d0> (a java.util.HashMap)
at com.icesoft.faces.webapp.http.servlet.SessionDispatcher.access$300(SessionDispatcher.java:21)
at com.icesoft.faces.webapp.http.servlet.SessionDispatcher$Listener.sessionDestroyed(SessionDispatcher.java:227)
at com.icesoft.faces.util.event.servlet.ContextEventRepeater.sessionDestroyed(ContextEventRepeater.java:326)
- locked <0xaadf35b0> (a com.icesoft.faces.util.event.servlet.ContextEventRepeater)
at org.apache.catalina.session.StandardSession.expire(StandardSession.java:702)
- locked <0xac2bfef0> (a org.apache.catalina.session.StandardSession)
at org.apache.catalina.session.StandardSession.isValid(StandardSession.java:592)
at org.apache.catalina.session.ManagerBase.processExpires(ManagerBase.java:682)
at org.apache.catalina.session.ManagerBase.backgroundProcess(ManagerBase.java:667)
at org.apache.catalina.core.ContainerBase.backgroundProcess(ContainerBase.java:1316)
at org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.processChildren(ContainerBase.java:1601)
at org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.processChildren(ContainerBase.java:1610)
at org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.processChildren(ContainerBase.java:1610)
at org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.run(ContainerBase.java:1590)
at java.lang.Thread.run(Thread.java:595)
"Session Monitor":
at com.icesoft.faces.util.event.servlet.ContextEventRepeater.sessionDestroyed(ContextEventRepeater.java:321)
- waiting to lock <0xaadf35b0> (a com.icesoft.faces.util.event.servlet.ContextEventRepeater)
at org.apache.catalina.session.StandardSession.expire(StandardSession.java:702)
- locked <0xac3131b8> (a org.apache.catalina.session.StandardSession)
at org.apache.catalina.session.StandardSession.expire(StandardSession.java:660)
at org.apache.catalina.session.StandardSession.invalidate(StandardSession.java:1111)
at org.apache.catalina.session.StandardSessionFacade.invalidate(StandardSessionFacade.java:150)
at com.icesoft.faces.webapp.http.servlet.SessionDispatcher.notifySessionShutdown(SessionDispatcher.java:176)
- locked <0xaadfc0d0> (a java.util.HashMap)
at com.icesoft.faces.webapp.http.servlet.SessionDispatcher.access$300(SessionDispatcher.java:21)
at com.icesoft.faces.webapp.http.servlet.SessionDispatcher$Monitor.shutdown(SessionDispatcher.java:257)
at com.icesoft.faces.webapp.http.servlet.SessionDispatcher$Monitor.shutdownIfExpired(SessionDispatcher.java:262)
at com.icesoft.faces.webapp.http.servlet.SessionDispatcher$Listener$1.run(SessionDispatcher.java:205)

Found 1 deadlock.

The reason your app is dead is that all threads are now stuck here, due to the deadlock above:

"TP-Processor25" daemon prio=1 tid=0x09341f28 nid=0x5952 waiting for monitor entry [0xa8ed3000..0xa8ed4130]
at com.icesoft.faces.util.event.servlet.ContextEventRepeater.sessionCreated(ContextEventRepeater.java:309)
- waiting to lock <0xaadf35b0> (a com.icesoft.faces.util.event.servlet.ContextEventRepeater)
at org.apache.catalina.session.StandardSession.tellNew(StandardSession.java:397)
at org.apache.catalina.session.StandardSession.setId(StandardSession.java:369)
at org.apache.catalina.session.ManagerBase.createSession(ManagerBase.java:829)
at org.apache.catalina.session.StandardManager.createSession(StandardManager.java:291)
at org.apache.catalina.connector.Request.doGetSession(Request.java:2312)
at org.apache.catalina.connector.Request.getSession(Request.java:2075)
at org.apache.catalina.connector.RequestFacade.getSession(RequestFacade.java:833)
at org.apache.catalina.connector.RequestFacade.getSession(RequestFacade.java:844)
at org.apache.jasper.runtime.PageContextImpl._initialize(PageContextImpl.java:144)
at org.apache.jasper.runtime.PageContextImpl.initialize(PageContextImpl.java:122)
at org.apache.jasper.runtime.JspFactoryImpl.internalGetPageContext(JspFactoryImpl.java:107)
at org.apache.jasper.runtime.JspFactoryImpl.getPageContext(JspFactoryImpl.java:63)
at org.apache.jsp.index_jsp._jspService(index_jsp.java:44)
at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:803)
at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:393)
at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:320)
at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:266)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:803)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:263)
at org.apache.jk.server.JkCoyoteHandler.invoke(JkCoyoteHandler.java:190)
at org.apache.jk.common.HandlerRequest.invoke(HandlerRequest.java:283)
at org.apache.jk.common.ChannelSocket.invoke(ChannelSocket.java:767)
at org.apache.jk.common.ChannelSocket.processConnection(ChannelSocket.java:697)
at org.apache.jk.common.ChannelSocket$SocketConnection.runIt(ChannelSocket.java:889)
at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:690)
at java.lang.Thread.run(Thread.java:595)

I'm attaching a fix for this. Basically, SessionDispatcher.notifySessionShutdown()'s call to Session.invalidate() ends up calling back into notifySessionShutdown() via ContextEventRepeater, and the threads all happily lock up on SessionDispatcher.notifySessionShutdown() -> synchronized(SessionMonitors).
By using a thread local I make sure that SessionDispatcher.notifySessionShutdown() will unwind if it sees that it is in such a call loop.
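The guard can be sketched like this. It is a simplified standalone illustration of the thread-local re-entrancy check, not my actual SessionDispatcher patch; the invalidateSession callback stands in for the Session.invalidate() call that loops back through the container's listeners.

```java
// Toy illustration of a thread-local re-entrancy guard; not the real patch.
class ShutdownGuard {
    // Flags whether the current thread is already inside
    // notifySessionShutdown(), so the re-entrant call triggered via
    // ContextEventRepeater unwinds instead of taking the lock chain again.
    private static final ThreadLocal<Boolean> inShutdown = new ThreadLocal<Boolean>() {
        protected Boolean initialValue() { return Boolean.FALSE; }
    };

    private final Object sessionMonitorsLock = new Object();
    private int invocations = 0;

    void notifySessionShutdown(Runnable invalidateSession) {
        if (inShutdown.get()) {
            return; // re-entered on the same thread: unwind, breaking the cycle
        }
        inShutdown.set(Boolean.TRUE);
        try {
            synchronized (sessionMonitorsLock) {
                invocations++;
                // Session.invalidate() may call straight back into this
                // method via the container's session listeners.
                invalidateSession.run();
            }
        } finally {
            inShutdown.set(Boolean.FALSE);
        }
    }

    int getInvocations() { return invocations; }
}
```

With the guard in place, the nested call returns immediately instead of recursing, and the lock is only ever taken once per thread per shutdown.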

Please review and let me know if this is something you're going to include, or whether it's already been addressed differently. I see that you have this JIRA, http://jira.icefaces.org/browse/ICE-3268, but I haven't had time to check whether it's the same issue.
Do you see any exceptions on the server side?

Also, try to set a breakpoint at com.icesoft.faces.webapp.http.core.RequestVerifier.java:43 and see if this is the place your error is coming from. If so, then make sure that your browser is allowed to keep cookies around.
The IceFaces runtime is running the icefaces.org site, so it can't be the culprit :)

If you're using Facelets and load up the whole UI model on first load, and you have lots of screens, then you're going to have a startup penalty. Check out the latest component-showcase with Facelets to see how you can mitigate that with late Facelets binding.

Also, once you go big, load balance your deployment. There are some potential issues you'll come across, but you can either take a look at the post below or figure out how to configure mod_proxy better than I did


Unless you can point out a mistake in my mod_proxy configuration, I think what I found warrants a JIRA, as I cannot see how one can load balance 1.7.1 without the modification I made to the source code.

ken.fyten - Here is a shiny new thread for what I see happening.

I described the issues here http://jira.icefaces.org/browse/ICE-3231.

Before removing my comments there, I urge you to take a closer look at them.

I think hunting for a Facelets bug might get you nowhere; rather, it is the View objects that are leaked, which in turn keeps Facelets' UI component hierarchies around.

I have a fix for this. This issue can be looked at as either a configuration 'feature' of mod_proxy or a bug in IceFaces. Take your pick! :)

I have moved on to CentOS 5.0 (RHEL 5.0) and Apache/2.2.3. I used mod_proxy and friends from here: http://httpd.apache.org/docs/2.2/mod/mod_proxy.html

I still had exactly the same problem. Sessions were blown away pretty quickly, almost as soon as a user issued a request to the web server.

Basically the issue is so brain-dead that it is not even funny that I had to spend 3 days tracking it down!

When mod_proxy is configured for sticky sessions as such:

<Proxy balancer://tomcatCluster>
BalancerMember ajp:// route=node1
BalancerMember ajp:// route=node2
</Proxy>

<Location /component-showcase>
ProxyPass balancer://tomcatCluster/component-showcase stickysession=JSESSIONID
ProxyPassReverse balancer://tomcatCluster/component-showcase
</Location>

then when a request arrives at httpd, mod_proxy modifies the value of the JSESSIONID header by appending '.' + {routename} (e.g. 0123456789ABCDEF.node1) and sends this modified request on to the actual IceFaces servlet

Now remember that the first web screen *always got loaded* (sometimes partially, but still). So by now this IceFaces servlet instance has already got a session ID. Here comes the second request, and it bombs right here ->


if (request.containsParameter("ice.session")) {
--> if (sessionID.equals(request.getParameter("ice.session"))) {
} else {
log.debug("Missmatched 'ice.session' value. Session has expired.");
--> request.respondWith(SessionExpiredResponse);

No such session found, so blow the client session away. Ouch!

So the brain-dead fix is:
if (sessionID.startsWith(request.getParameter("ice.session")))

but one could always dig something up on mod_proxy, I suppose, to force it not to modify the session ID header value, and/or extend this fix to provide a regex for the session ID, so that its value can at least be validated for well-formedness or some such.
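To make the comparison concrete, here is the matching logic in isolation (a toy helper I wrote for illustration, not IceFaces source): exact equality rejects a balancer-decorated session ID, while the startsWith variant tolerates the '.' + routename suffix.

```java
// Toy illustration of the session ID comparison; not IceFaces source.
final class SessionIdMatch {
    private SessionIdMatch() {}

    // The original RequestVerifier-style check: exact equality, which fails
    // once the server-side session ID carries a ".routename" suffix.
    static boolean strictMatch(String serverSessionId, String requestSessionId) {
        return serverSessionId.equals(requestSessionId);
    }

    // The proposed fix: tolerate a trailing route suffix appended by the
    // load balancer.
    static boolean routeTolerantMatch(String serverSessionId, String requestSessionId) {
        return serverSessionId.startsWith(requestSessionId);
    }
}
```

So "0123456789ABCDEF.node1" vs. "0123456789ABCDEF" fails the strict check but passes the route-tolerant one, while an unsuffixed ID still matches either way.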



Thanks for a quick reply! I understand and respect your support priority queue.

I believe, though, that this particular use case does fall under the general load balancing story and will affect the community as a whole.

I only used the officially shipped sample application with a vanilla load balancing setup. So I hope that this question, and the comments related to memory leaks I posted on JIRA, will get higher priority :)

What I'll do in the meantime is try to run a plain JSP app without IceFaces involved. If I see the same issues, then it must be me doing something silly. Otherwise... I hope to hear from you :)


In order to avoid the memory bloat of the IceFaces 1.7.1 runtime, I tried to load balance IceFaces app instances. I've had no luck so far.

My conf:
OS: Linux CentOS 4 (2.6.9-5.0.3.EL)
Httpd: Apache/2.0.52
mod_jk: mod_jk-1.2.26-httpd-2.0.61.so
servlet container: tomcat 6.0.16
icefaces: 1.7.1
application: ${ICE_FACES_HOME}/samples/component-showcase/facelets/ component-showcase.war (component-showcase with facelets)

httpd.conf contents related to mod_jk:

# Load mod_jk module
LoadModule jk_module modules/mod_jk-1.2.26-httpd-2.0.61.so
# Where to find workers.properties
JkWorkersFile /etc/httpd/conf/workers.properties
# Where to put jk shared memory
JkShmFile /var/log/httpd/mod_jk.shm
# Where to put jk logs
JkLogFile /var/log/httpd/mod_jk.log
# Set the jk log level [debug/error/info]
JkLogLevel debug
# Select the timestamp log format
JkLogStampFormat "[%a %b %d %H:%M:%S %Y] "

<VirtualHost *>
JkMount /component-showcase|/* loadbalancer
JkMount /jkmanager|/* jkstatus
</VirtualHost>


workers.properties contents:

worker.list = loadbalancer, node1, node2, jkstatus

worker.node1.port = 8009
worker.node1.host = localhost
worker.node1.type = ajp13
worker.node1.lbfactor = 1

worker.node2.port = 9009
worker.node2.host = localhost
worker.node2.type = ajp13
worker.node2.lbfactor = 1

worker.loadbalancer.type = lb
worker.loadbalancer.balance_workers = node1, node2
worker.loadbalancer.sticky_session = True


I have 2 Tomcat instances running, fronted by httpd, all on the same host. When I try to open the component-showcase Facelets demo, after the first page is loaded I immediately get a 'session expired' exception.

If I add to workers.properties

worker.loadbalancer.sticky_session_force = True

then mod_jk reports that all Tomcat workers are in an error state.

I'd really appreciate it if someone could provide guidance on this issue.

Am I misconfiguring something? I must be, right!? icefaces.org runs fine, and I see it's fronted by httpd; I hope you guys eat your own dog food and run the latest version of your product! :)

Just to raise awareness of how badly memory is being leaked by IceFaces 1.7.1 with Facelets.

Have a look at this Jira:


Basically, as far as I can see, most of the leaks come down to the fact that the server creates more than one server-side View for the same UI view. This leads to MainSessionBoundServlet.views holding stale Views: they aren't removed when the client disposes of the single view it knows about, and the others remain in this map, I suppose, until the server-side session expires.

But because Facelets are used, a single View can take up megabytes of memory, and you don't need a lot of sessions to make the JVM bomb (OutOfMemoryError).
There is another, easier way to "fix" this.

1. Create an instance of an implementation of javax.faces.model.DataModel in your backing bean the first time the bean is instantiated, in support of ice:dataTable with ice:columns.
2. Always reuse this instance whenever you need to recreate your DataModel's content, by using its DataModel.setWrappedData({YourContentCollection}) method.

This way ICE's implementation will only have to deal with a single DataModel instance whose content is dynamically updated, and IceFaces should repaint the table according to the new model's content.
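A sketch of steps 1 and 2 as a backing bean. The bean and method names are made up for illustration, and a minimal stand-in replaces javax.faces.model.ListDataModel so the snippet compiles without a JSF jar on the classpath; the real class exposes the same setWrappedData/getWrappedData contract.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal stand-in for javax.faces.model.ListDataModel, used here only so
// the sketch runs without a JSF jar; the real class has the same
// setWrappedData/getWrappedData contract.
class SimpleListDataModel {
    private List wrapped;
    SimpleListDataModel(List initial) { this.wrapped = initial; }
    public void setWrappedData(Object data) { this.wrapped = (List) data; }
    public Object getWrappedData() { return wrapped; }
}

// Hypothetical backing bean illustrating the two steps: one model instance
// for the bean's lifetime, content refreshed in place.
class OrdersBean {
    // Step 1: create the DataModel once, when the bean is instantiated.
    private final SimpleListDataModel model = new SimpleListDataModel(new ArrayList());

    public SimpleListDataModel getModel() {
        return model; // what ice:dataTable's value attribute would bind to
    }

    // Step 2: refresh by swapping the wrapped collection, never the model.
    public void refresh(List freshRows) {
        model.setWrappedData(freshRows);
    }
}
```

The point is that getModel() always hands IceFaces the same object identity, so the framework never sees a "new" model, only changed contents.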

In my case this works. The only difference is that I only recreate my table when it is hidden/shown as a child of an ice:panelStack. I'd be interested to hear whether this works for people who need to refresh their tables' columns and content without a hide/show. I'd imagine it should, because IceFaces should notice that the backing bean's state has been mucked with and repaint the table.

