I'm working on speeding up a Celluloid app, and I've noticed that JRuby spends an awful lot of time in ThreadFiber.createThread(). JRuby is supposed to use a thread pool for fibers, but although the live thread count stays roughly stable, the total created thread count balloons rapidly, with monotonically incrementing thread IDs.
This is jruby-1.7.9 and the behavior happens on both openjdk-1.6.0 and openjdk-1.7.0.
Minimum reproduction case: https://gist.github.com/cheald/8626163
Output is something like:
2360 <Thread 1978280> - 99.1/s
2388 <Thread 1978286> - 99.1/s
2380 <Thread 1978294> - 99.1/s
2370 <Thread 1978298> - 99.1/s
2400 <Thread 1978304> - 99.1/s
2360 <Thread 1978312> - 99.1/s
2388 <Thread 1978316> - 99.1/s
2380 <Thread 1978324> - 99.1/s
2370 <Thread 1978330> - 99.1/s
2400 <Thread 1978334> - 99.1/s
2360 <Thread 1978342> - 99.1/s
2388 <Thread 1978346> - 99.1/s
2380 <Thread 1978354> - 99.1/s
2370 <Thread 1978360> - 99.1/s
2400 <Thread 1978366> - 99.1/s
2360 <Thread 1978372> - 99.1/s
2388 <Thread 1978376> - 99.1/s
2380 <Thread 1978384> - 99.1/s
2370 <Thread 1978390> - 99.1/s
2400 <Thread 1978396> - 99.1/s
2360 <Thread 1978400> - 99.1/s
2388 <Thread 1978406> - 99.1/s
2380 <Thread 1978414> - 99.1/s
2370 <Thread 1978420> - 99.1/s
2400 <Thread 1978426> - 99.1/s
2360 <Thread 1978430> - 99.1/s
2388 <Thread 1978436> - 99.1/s
2380 <Thread 1978444> - 99.1/s
2370 <Thread 1978450> - 99.1/s
You'll notice that the thread IDs are monotonically increasing. This behavior is consistent across runs.
VisualVM CPU sampling shows that a tremendous amount of time is being spent in createThread().
VM stats show a ton of threads created, even though the live thread count stays stable.
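A rough programmatic stand-in for the profiler view (an illustrative sketch using the standard java.lang.management.ThreadMXBean, not the gist's code):

require 'java'

# Spin fibers in a loop and periodically report the JVM's live vs. total-started
# thread counts via ThreadMXBean.
bean = java.lang.management.ManagementFactory.thread_mx_bean

100_000.times do |i|
  Fiber.new { :done }.resume
  if (i + 1) % 10_000 == 0
    puts "fibers: #{i + 1}  live threads: #{bean.thread_count}  total started: #{bean.total_started_thread_count}"
  end
end

If fibers were drawing from a pool, the total-started count should level off once the pool is warm; the symptom here is that it keeps climbing roughly in step with the number of fibers created.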
This feels an awful lot like Fibers are not actually using a thread pool, and are instead creating a new short-lived thread for every fiber, which results in an inordinate amount of overhead.
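One way to test that hypothesis directly (again just an illustrative sketch, not code from the gist): record the id of the JVM thread each fiber body actually runs on. With a working pool the set of ids should stay small and repeat; with a fresh thread per fiber it grows without bound.

require 'java'
require 'set'

ids = Set.new
10_000.times do
  # Each fiber reports the id of the underlying JVM thread that executed it.
  ids << Fiber.new { java.lang.Thread.current_thread.get_id }.resume
end
puts "10000 fibers ran on #{ids.size} distinct JVM threads"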

