Tracker Issue: CF-4203666


100% CPU utilization caused by WeakHashMap.class


Status/Resolution/Reason: Closed/Withdrawn/CannotReproduce

Reporter/Name(from Bugbase): Yvan Jeanmonod / ()

Created: 12/06/2018

Components: Performance

Versions: 2016

Failure Type: Crash

Found In Build/Fixed In Build: 20160007 /

Priority/Frequency: Normal / Not Reproducible

Locale/System: German / Win 2016

Vote Count: 0

Problem Description:
We operate a ColdFusion 2016 server with 7 instances; each instance hosts 20-25 web projects.
At irregular intervals (between 7am and 7/8pm, i.e. during our office hours) the CPU goes up to 100%.
Via FusionReactor we noticed that our requests are blocked in WeakHashMap.class.

The following are the first 4 stacktrace lines of a blocked request:
"ajp-nio-8012-exec-3" Id=148 RUNNABLE 
   java.lang.Thread.State: RUNNABLE
        at java.util.WeakHashMap.put(
        at coldfusion.runtime.StructWrapper.readResolve(

   Locked ownable synchronizers: 
        - java.util.concurrent.ThreadPoolExecutor$Worker@17c386de

It appears that the thread is stuck executing WeakHashMap.put().
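A plausible explanation (an assumption on our part, not confirmed in this report): WeakHashMap is not thread-safe, and unsynchronized concurrent put() calls can corrupt its internal bucket chains, so that a later operation loops forever while RUNNABLE, i.e. at 100% CPU, which matches the thread state shown above. The standard application-level mitigation is to wrap the map in a synchronized view. A minimal sketch (the class name SafeWeakCache and the String/Object key/value types are illustrative, not taken from ColdFusion):

```java
import java.util.Collections;
import java.util.Map;
import java.util.WeakHashMap;

public class SafeWeakCache {
    // WeakHashMap is not thread-safe; wrapping it in a synchronized
    // view serializes access so concurrent put() calls cannot corrupt
    // the bucket chains and send a reader into an infinite loop.
    private final Map<String, Object> cache =
        Collections.synchronizedMap(new WeakHashMap<>());

    public void put(String key, Object value) {
        cache.put(key, value);
    }

    public Object get(String key) {
        return cache.get(key);
    }

    public static void main(String[] args) {
        SafeWeakCache c = new SafeWeakCache();
        c.put("a", 1);
        System.out.println(c.get("a")); // prints 1
    }
}
```

Note that iteration over a synchronized map still requires an explicit synchronized block around the loop; only single calls are protected by the wrapper.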

We already found a bug that refers to a similar problem:
The solution provided in that bug was a patch for CF11. But since we are running version 2016, that can't be our solution.
We assume the bugfix was also carried over into the new version.

Steps to Reproduce:
We found no way to reproduce it.
The intervals are irregular and the origin is different every time.

Actual Result:
100% CPU usage in irregular intervals.

Expected Result:

Any Workarounds:
Restarting the affected instance. Just killing the requests doesn't work:
the killed request goes away, but subsequent requests get queued.



Yvan, can you please clarify how you concluded that the request is blocked, based on the thread state you have shared. Can you share the complete stack trace (rather than just a few lines)? Better still, is it feasible for you to capture and share thread dumps while your CF instances are unresponsive? The stack traces in the thread dump recorded in the other bug you've referenced do not match what you have shared here. Any recent changes which could have caused this?
Comment by Piyush K.
30474 | March 12, 2019 08:11:17 AM GMT
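The thread dumps requested above can be captured externally with the JDK's jstack tool against the ColdFusion JVM's PID, or programmatically from inside the JVM via ThreadMXBean. A minimal sketch of the in-process variant (the class name DumpThreads is illustrative, not part of ColdFusion or FusionReactor):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class DumpThreads {
    // Collects a thread dump of the current JVM, including locked
    // monitors and ownable synchronizers -- the same information the
    // stack traces in this report came from.
    public static String dump() {
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        StringBuilder sb = new StringBuilder();
        for (ThreadInfo info : bean.dumpAllThreads(true, true)) {
            sb.append(info.toString());
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(dump());
    }
}
```

Several dumps taken a few seconds apart while the CPU is pegged would show whether the same threads stay RUNNABLE inside WeakHashMap.put().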
Yvan, are you still facing the issue? If you are, can you share the information requested in my previous note?
Comment by Piyush K.
30600 | April 03, 2019 07:45:27 AM GMT
Closing this, as there isn't sufficient information to reproduce the issue. Please revert with the requested info if this still needs to be looked into.
Comment by Piyush K.
30619 | April 10, 2019 08:03:24 AM GMT