A Complete Deep Dive into the Java volatile Keyword
Core Capabilities of volatile
Prevents Instruction Reordering
The double-checked locking (DCL) pattern for lazy singletons is a common use case that demonstrates volatile's reordering prevention capability:
```java
public class LazySingleton {

    private static volatile LazySingleton lazyInstance;

    private LazySingleton() {}

    public static LazySingleton fetchInstance() {
        if (lazyInstance == null) {
            synchronized (LazySingleton.class) {
                if (lazyInstance == null) {
                    lazyInstance = new LazySingleton();
                }
            }
        }
        return lazyInstance;
    }
}
```
Object instantiation is split into three underlying steps:
- Allocate heap memory for the object
- Initialize the object's fields by running the constructor
- Assign the memory address to the reference variable
Without volatile, compilers and processors may reorder these steps to 1 → 3 → 2 for performance optimization. In multi-threaded environments, this can lead to threads acquiring a reference to a half-initialized object, causing unexpected runtime errors. Declaring the instance variable as volatile prohibits this unsafe reordering.
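The decomposition above can be annotated directly in code. The following is an illustrative sketch (the `Config` class and field names are invented for this example); run single-threaded it simply prints the initialized value, with comments marking where the reordering hazard would bite:

```java
public class InstantiationSteps {

    static class Config {
        int value;
        Config() {
            value = 42; // step 2: the constructor initializes the fields
        }
    }

    static Config instance;

    public static void main(String[] args) {
        // `instance = new Config()` compiles down to roughly:
        //   1. allocate heap memory for a Config object
        //   2. run the constructor (value = 42)
        //   3. store the object's address into `instance`
        // Without volatile on `instance`, steps 2 and 3 may be reordered, so
        // another thread could see instance != null while value is still 0.
        instance = new Config();
        System.out.println(instance.value); // 42
    }
}
```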
Guarantees Cross-Thread Visibility
Visibility issues occur when one thread modifies a shared variable but other threads cannot see the updated value, a consequence of the per-thread private working memory defined by the Java Memory Model (JMM). Volatile solves this problem efficiently, as demonstrated in the following example:
```java
import java.util.concurrent.TimeUnit;

public class VolatileVisibilityDemo {

    // Deliberately NOT volatile, to demonstrate the visibility bug
    private static boolean isTerminated = false;

    public static void main(String[] args) {
        new Thread("Worker-1") {
            @Override
            public void run() {
                while (!isTerminated) {
                    // Empty loop simulating continuous business processing
                }
                System.out.println(Thread.currentThread().getName() + " exited successfully");
            }
        }.start();

        try {
            TimeUnit.SECONDS.sleep(1);
            System.out.println("Main thread sends termination signal after 1 second");
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        isTerminated = true;
    }
}
```
Without the volatile modifier on isTerminated, the Worker-1 thread will loop infinitely, as it reads the stale cached value of isTerminated from its working memory instead of fetching the updated value from main memory. Adding volatile forces all writes to the variable to flush to main memory immediately, and all reads to pull the latest value from main memory, ensuring the worker thread receives the termination signal correctly.
Does Not Guarantee Atomicity of Compound Operations
Volatile only guarantees atomicity for single read/write operations (including 64-bit long and double types, which are otherwise split into two 32-bit operations on some architectures). It does not guarantee atomicity for compound operations like increment, which are composed of multiple discrete steps:
```java
public class VolatileAtomicityTest {

    volatile int counter;

    public void incrementCounter() {
        counter++; // NOT atomic: read, add, write back
    }

    public static void main(String[] args) throws InterruptedException {
        VolatileAtomicityTest testInstance = new VolatileAtomicityTest();
        for (int i = 0; i < 1000; i++) {
            new Thread(() -> {
                try {
                    Thread.sleep(10);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                testInstance.incrementCounter();
            }).start();
        }
        // Crude wait for all threads to finish; joining each thread would be more reliable
        Thread.sleep(12000);
        System.out.println("Final counter value: " + testInstance.counter);
    }
}
```
The output of this program will almost always be less than 1000, because counter++ is split into three independent steps: read the current counter value, add 1, write the new value back to memory. Volatile cannot guarantee these steps execute as an atomic unit, leading to race conditions. To fix this, use AtomicInteger or synchronized blocks to wrap the increment operation.
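A minimal sketch of the AtomicInteger fix (class and method names are illustrative). `incrementAndGet` performs the read-modify-write as one atomic hardware operation, and joining the threads replaces the fixed sleep, so the final value is reliably 1000:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounterFix {

    private final AtomicInteger counter = new AtomicInteger();

    public void incrementCounter() {
        counter.incrementAndGet(); // a single atomic read-modify-write
    }

    public int current() {
        return counter.get();
    }

    public static void main(String[] args) throws InterruptedException {
        AtomicCounterFix fix = new AtomicCounterFix();
        Thread[] threads = new Thread[1000];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(fix::incrementCounter);
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join(); // deterministic wait, replacing the fixed 12-second sleep
        }
        System.out.println("Final counter value: " + fix.current()); // always 1000
    }
}
```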
Underlying Implementation Principle
Overview of the Java Memory Model (JMM)
The JMM defines the interaction between threads and memory:
- All shared variables (instance fields, static fields, array elements) are stored in main memory, accessible to all threads.
- Each thread has a private working memory that stores copies of shared variables used by the thread.
- All read/write operations on shared variables must be performed in the thread's working memory; threads cannot directly access each other's working memory.
- Variable value transfer between threads must be mediated by main memory: the writing thread flushes its updated working memory copy to main memory, then the reading thread loads the updated value from main memory to its own working memory.
Visibility Implementation Mechanism
Volatile's visibility guarantee is implemented via CPU memory barrier instructions. Memory barriers prohibit compiler and processor instruction reordering, and enforce cache coherence operations.
When writing to a volatile variable, the JVM emits a `lock`-prefixed instruction in the generated machine code, which performs two key operations:
- Flushes the current processor's cache line containing the volatile variable back to system main memory.
- Triggers the MESI cache coherence protocol to invalidate all cache lines holding the same memory address in other processors.
The MESI protocol works by having each processor snoop bus traffic to detect modifications to shared memory addresses. If a processor detects that the memory address corresponding to its local cache line has been modified, it marks its local cache line as invalid, and reloads the latest value from main memory on the next access to that variable.
Reordering Prevention Implementation
To implement volatile's memory semantics, the JMM inserts specific memory barriers during bytecode generation to prohibit unsafe reordering:
- Insert a StoreStore barrier before each volatile write, to prevent reordering of preceding ordinary writes with the volatile write.
- Insert a StoreLoad barrier after each volatile write, to prevent reordering of the volatile write with subsequent volatile read/write operations.
- Insert a LoadLoad barrier after each volatile read, to prevent reordering of the volatile read with subsequent ordinary reads.
- Insert a LoadStore barrier after each volatile read, to prevent reordering of the volatile read with subsequent ordinary writes.
| Memory Barrier Type | Function |
|---|---|
| StoreStore | Blocks reordering of normal write operations before the barrier and volatile write operations after the barrier |
| StoreLoad | Blocks reordering of volatile write operations before the barrier and any volatile read/write operations after the barrier |
| LoadLoad | Blocks reordering of volatile read operations before the barrier and normal read operations after the barrier |
| LoadStore | Blocks reordering of volatile read operations before the barrier and normal write operations after the barrier |
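The barrier placements in the table can be annotated on a volatile write/read pair. This is a single-threaded sketch with illustrative field names; the barriers are inserted automatically by the JVM, not written by hand:

```java
public class BarrierPlacement {

    static int a;               // ordinary field
    static volatile int v;      // volatile field

    static void writer() {
        a = 1;
        // [StoreStore] inserted here: the write to `a` cannot sink below the volatile write
        v = 2;
        // [StoreLoad] inserted here: the volatile write completes before later volatile accesses
    }

    static int reader() {
        int r1 = v;
        // [LoadLoad] inserted here: the volatile read completes before the ordinary read of `a`
        // [LoadStore] inserted here: the volatile read completes before later ordinary writes
        int r2 = a; // across threads, if r1 == 2 then r2 is guaranteed to be 1
        return r1 + r2;
    }

    public static void main(String[] args) {
        writer();
        System.out.println(reader()); // 3
    }
}
```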
Applicable Scenarios
Volatile can only be used safely when the following conditions are met:
- Write operations to the variable do not depend on its current value.
- The variable is not part of any invariant involving other variables.
- The variable state is completely independent of other program state.
Scenario 1: Boolean State Flag
Used for one-time status notifications like service shutdown or initialization completion:
```java
volatile boolean shutdownTriggered;

public void triggerShutdown() {
    shutdownTriggered = true;
}

public void runBusinessLogic() {
    while (!shutdownTriggered) {
        // Execute normal business processing
    }
}
```
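A runnable sketch of the flag in action (class and method names are illustrative): because the flag is volatile, the worker observes the main thread's write and terminates promptly instead of spinning forever:

```java
public class ShutdownFlagDemo {

    private static volatile boolean shutdownTriggered;

    static boolean runDemo() throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (!shutdownTriggered) {
                Thread.onSpinWait(); // normal business processing would run here
            }
        });
        worker.start();
        shutdownTriggered = true;   // volatile write: immediately visible to the worker
        worker.join(5000);          // returns quickly because the worker sees the flag
        return !worker.isAlive();   // true: the worker exited
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("worker exited: " + runDemo());
    }
}
```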
Scenario 2: One-Time Safe Object Publication
Used to safely publish a lazily initialized object so that other threads never observe a half-initialized instance:

```java
public class BackgroundResourceLoader {

    public volatile ResourceHolder sharedResource;

    public void initializeInBackground() {
        // Perform heavy initialization operations
        sharedResource = new ResourceHolder();
    }
}

public class BusinessProcessor {

    private final BackgroundResourceLoader resourceLoader;

    public BusinessProcessor(BackgroundResourceLoader resourceLoader) {
        this.resourceLoader = resourceLoader;
    }

    public void executeTask() {
        while (true) {
            // Use the resource only after initialization completes
            if (resourceLoader.sharedResource != null) {
                processWithResource(resourceLoader.sharedResource);
            }
            // Perform other processing logic
        }
    }
}
```
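The publication guarantee can be exercised end to end. In this self-contained sketch (the `ResourceHolder` contents are invented), a background thread publishes the object through a volatile field, and the reader, once it sees a non-null reference, is guaranteed to see fully initialized fields:

```java
public class SafePublicationDemo {

    static class ResourceHolder {
        final int[] data;
        ResourceHolder() {
            data = new int[] {1, 2, 3}; // stand-in for heavy initialization
        }
    }

    static volatile ResourceHolder sharedResource;

    static int loadAndRead() throws InterruptedException {
        Thread loader = new Thread(() -> sharedResource = new ResourceHolder());
        loader.start();
        // Spin until the volatile reference is published
        while (sharedResource == null) {
            Thread.onSpinWait();
        }
        // The volatile read guarantees the object is fully constructed here
        return sharedResource.data.length;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("resource fields visible: " + loadAndRead()); // 3
    }
}
```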
Scenario 3: Independent Status Observation
Used for periodically publishing observation results for other threads to read, such as sensor data or latest statistics:
```java
public class AuthenticationService {

    public volatile String latestAuthenticatedUser;

    public boolean verifyCredentials(String username, String password) {
        boolean isValid = validateCredentials(username, password);
        if (isValid) {
            activeSessions.add(new Session(username));
            latestAuthenticatedUser = username; // publish the latest observation
        }
        return isValid;
    }

    // validateCredentials, activeSessions, and Session are elided for brevity
}
```
Scenario 4: Volatile Bean Pattern
All fields of the JavaBean are declared volatile, with simple getters and setters that contain no additional logic. Object reference fields must point to effectively immutable objects:
```java
@ThreadSafe
public class Employee {

    private volatile String firstName;
    private volatile String lastName;
    private volatile int employmentYear;

    public String getFirstName() { return firstName; }
    public String getLastName() { return lastName; }
    public int getEmploymentYear() { return employmentYear; }

    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }

    public void setLastName(String lastName) {
        this.lastName = lastName;
    }

    public void setEmploymentYear(int employmentYear) {
        this.employmentYear = employmentYear;
    }
}
```
Scenario 5: Low-Overhead Read-Write Lock Strategy
When read operations far outnumber write operations, combine volatile for fast reads and synchronized for atomic writes to reduce overhead:
```java
@ThreadSafe
public class LowOverheadCounter {

    @GuardedBy("this")
    private volatile int count;

    public int getCurrentCount() {
        return count; // volatile read: no lock needed on the hot read path
    }

    public synchronized int increment() {
        return count++; // the lock makes the read-modify-write atomic
    }
}
```
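The pattern can be stress-tested. This sketch reproduces the counter class above (the harness names are illustrative); because every write goes through the synchronized method, concurrent increments never lose updates, while reads stay lock-free:

```java
public class LowOverheadCounterDemo {

    static class LowOverheadCounter {
        private volatile int count;     // volatile: lock-free reads stay current

        public int getCurrentCount() {
            return count;               // no lock on the read path
        }

        public synchronized int increment() {
            return count++;             // synchronized keeps the write atomic
        }
    }

    static int runStressTest(int threads, int incrementsPerThread) throws InterruptedException {
        LowOverheadCounter counter = new LowOverheadCounter();
        Thread[] writers = new Thread[threads];
        for (int i = 0; i < writers.length; i++) {
            writers[i] = new Thread(() -> {
                for (int j = 0; j < incrementsPerThread; j++) {
                    counter.increment();
                }
            });
            writers[i].start();
        }
        for (Thread t : writers) {
            t.join();
        }
        return counter.getCurrentCount();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("final count: " + runStressTest(8, 1000)); // 8000
    }
}
```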
Scenario 6: Double-Checked Locking for Lazy Singletons
The DCL pattern shown earlier relies on volatile to prevent reordering during object instantiation, ensuring thread-safe lazy initialization with minimal performance overhead.