
When using virtual threads in Java 21 there is a way to limit the number of underlying platform threads by specifying jdk.virtualThreadScheduler.parallelism and jdk.virtualThreadScheduler.maxPoolSize, as described in this answer. If I set these properties to 1, then as I understand it the execution of all virtual threads will be multiplexed onto a single OS thread. Do I still need to use synchronisation mechanisms like ReentrantLock and volatile in that case, or would it be safe to skip them altogether as long as the shared data is only accessed from the virtual threads?
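For concreteness, here is a minimal sketch of the kind of setup I have in mind (the class name SingleCarrierDemo and the counter field are just placeholders I made up for the question):

```java
// Run with the scheduler limited to one carrier thread, e.g.:
//   java -Djdk.virtualThreadScheduler.parallelism=1 \
//        -Djdk.virtualThreadScheduler.maxPoolSize=1 SingleCarrierDemo.java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SingleCarrierDemo {
    static int counter = 0; // shared data, deliberately left unsynchronized

    public static void main(String[] args) {
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 10; i++) {
                executor.submit(() -> {
                    // toString() of a mounted virtual thread shows its carrier, e.g.
                    // VirtualThread[#23]/runnable@ForkJoinPool-1-worker-1
                    System.out.println(Thread.currentThread());
                    counter++; // is this safe when there is only one carrier?
                });
            }
        } // close() waits for the submitted tasks to finish
        System.out.println("counter = " + counter);
    }
}
```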

  • There might be only one OS thread used. But how do you know that it is the same one, on the same CPU core, for the entire lifetime of the JVM? I would guess the jdk.virtualThreadScheduler options won't help you with memory-consistency effects. Commented Feb 4 at 13:09
  • Even if we assume it is possible to write code without any kind of synchronization, I would be very wary because the correctness of this code relies on a condition that is outside of its control. Synchronization is still needed I think, even if there is no true parallelism. Imagine a thread entering a critical section; the runtime can interrupt it in the middle of it. If there is no synchronization in place, another thread may enter the same critical section and make a mess. Commented Feb 4 at 13:17
  • The virtual thread engine, as far as the distribution of virtual threads across carrier/platform threads is concerned, is based upon the ForkJoinPool implementation of Executor. If you look at that one, you might realize that it is impossible to restrict the FJP to using only one thread. Commented Feb 4 at 14:25

2 Answers


Synchronization/locking prevents the instructions in question from being interleaved. Virtual or not doesn't change that.

Remember the old days with a single CPU and multiple threads that were guaranteed never to run instructions in parallel; we still needed synchronization because of CPU context switching.
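As a sketch of the interleaving this is describing (the Inventory class and its methods are made up for illustration): a check-then-act sequence that crosses a blocking call can still be entered by a second virtual thread, because the first one unmounts at the blocking point and yields the single carrier; a ReentrantLock closes that window.

```java
import java.util.concurrent.locks.ReentrantLock;

class Inventory {
    private final ReentrantLock lock = new ReentrantLock();
    private int stock = 1;

    // Unsafe even with a single carrier thread: this virtual thread unmounts at
    // sleep(), another virtual thread passes the same check, and both decrement.
    void buyUnsafe() throws InterruptedException {
        if (stock > 0) {          // check
            Thread.sleep(10);     // blocking call: the virtual thread yields the carrier
            stock--;              // act: stock can end up negative
        }
    }

    // Safe: the lock keeps other virtual threads out of the whole check-then-act.
    void buySafe() throws InterruptedException {
        lock.lock();
        try {
            if (stock > 0) {
                Thread.sleep(10);
                stock--;
            }
        } finally {
            lock.unlock();
        }
    }
}
```

Run a few virtual threads against buyUnsafe() with parallelism set to 1 and stock can still go negative; the locked version cannot.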


4 Comments

So whenever a virtual thread is mounted on a carrier thread, the state of the CPU registers and cache gets completely overwritten with the data from that virtual thread?
@MikhailVasilyev. For sure, registers. As for the CPU cache - it looks like yes, according to the Is processor cache flushed during context switch in multicore? thread. That thread also discusses the volatiles you asked about.
Better be an academic inquiry.... Every few years we are told to stop doing a legitimate optimization of the past. You should really not try to outsmart the JVM. Stick with the JLS contracts. 10 years ago I was stunned that adding an unused param to a recursive quicksort made it faster. Remember the perfectly fine double-checked locking idiom, which stopped working flawlessly when they decided to reorder statements? We are at the mercy of a couple of gurus writing the JIT.
@user2023577 +1 on this. Many hours have been wasted chasing micro-optimizations based on conventional wisdom for applications where it really doesn't matter. The latest was rewriting all synchronized blocks to use ReentrantLocks "to support virtual threads", and now (Java 24) synchronized blocks don't pin VTs to platform threads anymore. Yes, certain high-throughput applications benefited massively from rewrites, especially between the introduction of VTs and now, but it was a waste for most applications.

Do I still need to use synchronisation mechanisms like ReentrantLock and volatile in that case

In your vision of virtual threads you concentrate on Continuation, the concept of a platform Carrier thread picking up a ready-to-go virtual thread and continuing the execution on behalf of the latter. But you seem to miss another concept crucial to virtual threads: Context Switching. When a Carrier thread is dismounted by one virtual thread and mounted by another, all the footprints (Context) of the first one are eliminated from the Carrier and the appropriate state (Context) of the second one is loaded. This includes not only locks and intrinsic synchronization monitors (volatile, I think, is another story, as it is a simpler code-based insertion of memory barriers and does not represent Context state), but also the set of ThreadLocals.
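As a sketch of that per-virtual-thread Context (the class name ThreadLocalDemo is made up): each virtual thread keeps its own ThreadLocal value across unmount and remount, even if every one of them is carried by the same platform thread.

```java
import java.util.ArrayList;
import java.util.List;

public class ThreadLocalDemo {
    static final ThreadLocal<String> NAME = new ThreadLocal<>();

    public static void main(String[] args) throws InterruptedException {
        List<Thread> threads = new ArrayList<>();
        for (int i = 0; i < 3; i++) {
            int id = i;
            threads.add(Thread.ofVirtual().start(() -> {
                NAME.set("vt-" + id);   // part of this virtual thread's Context
                try {
                    Thread.sleep(5);    // unmount: the carrier may run other VTs meanwhile
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                // After remounting, the value is still this virtual thread's own.
                System.out.println(Thread.currentThread() + " -> " + NAME.get());
            }));
        }
        for (Thread t : threads) t.join();
    }
}
```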

Therefore, your next statement

the shared data is only accessed from the virtual threads

is not fully correct, even if you somehow manage to limit the number of Carrier threads to only one (which still might be marginally possible if you configure your own Executor for virtual threads). The virtual threads, even if they are always mounted on the same Carrier, don't share more data than traditional platform ones do.
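And for the volatile part of the question, a sketch (class and field names made up): signalling between two virtual threads goes through the same Java Memory Model rules as between platform threads, single carrier or not.

```java
public class VolatileFlagDemo {
    static volatile boolean ready = false; // without volatile, visibility is not guaranteed
    static int payload = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread producer = Thread.ofVirtual().start(() -> {
            payload = 42;     // ordinary write...
            ready = true;     // ...published by the volatile write
        });
        Thread consumer = Thread.ofVirtual().start(() -> {
            try {
                while (!ready) {
                    Thread.sleep(1); // blocking: unmounts so the producer can run even with one carrier
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            System.out.println("payload = " + payload); // prints 42 once ready is seen
        });
        producer.join();
        consumer.join();
    }
}
```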

2 Comments

By shared data I meant something like a static variable accessed from multiple virtual threads, but I get what you mean. So context switching in the case of virtual threads is essentially the same as with OS threads, including CPU registers, cache, etc. getting overwritten, the only difference being that it only happens once a virtual thread reaches a blocking system call?
@MikhailVasilyev, yes, exactly, context switching in the case of VTs is essentially the same as with platform threads. Under the hood there are, of course, some performance issues, like with ThreadLocals, but at face value it's the same. As for blocking calls - not only there: a Carrier thread, while a VT is mounted on it, is also subject to OS time-slicing, so a VT can be interrupted by the OS scheduler in the midst of CPU-bound activity, but again that's the same as with a platform thread.
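A small sketch of what this comment describes (class name MountObserver made up): the carrier worker is visible in Thread.currentThread().toString() while a virtual thread is mounted, so you can watch it unmount at a blocking call and possibly come back on a different worker.

```java
public class MountObserver {
    public static void main(String[] args) throws InterruptedException {
        Thread vt = Thread.ofVirtual().start(() -> {
            // While mounted, toString() shows the carrier, e.g.
            // VirtualThread[#31]/runnable@ForkJoinPool-1-worker-2
            System.out.println("before sleep: " + Thread.currentThread());
            try {
                Thread.sleep(100); // blocking call: the virtual thread unmounts here
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            // After remounting it may or may not be on the same carrier worker.
            System.out.println("after sleep:  " + Thread.currentThread());
        });
        vt.join();
    }
}
```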
