Thread Safety Mechanisms for Shared Mutable State in Java
Concurrent programming introduces subtle failure modes absent in single-threaded applications. When multiple execution threads manipulate the same mutable data without coordination, race conditions emerge—situations where computational correctness depends on the relative timing of events.
Consider a numeric sequence generator vulnerable to interleaving:
public abstract class SequenceSource {
    private volatile boolean terminated = false;

    public abstract int nextValue();

    public void shutdown() {
        this.terminated = true;
    }

    public boolean isShutdown() {
        return terminated;
    }
}
The volatile modifier ensures visibility of the termination flag across threads, and reads and writes of a boolean field are themselves atomic. A validator task monitors the sequence:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SequenceValidator implements Runnable {
    private final SequenceSource source;
    private final int workerId;

    public SequenceValidator(SequenceSource src, int id) {
        this.source = src;
        this.workerId = id;
    }

    @Override
    public void run() {
        while (!source.isShutdown()) {
            int val = source.nextValue();
            if (val % 2 != 0) {
                System.err.println("Anomaly detected: " + val + " is odd");
                source.shutdown();
            }
        }
    }

    public static void test(SequenceSource src, int concurrency) {
        System.err.println("Initializing validation...");
        ExecutorService pool = Executors.newCachedThreadPool();
        for (int i = 0; i < concurrency; i++) {
            pool.execute(new SequenceValidator(src, i));
        }
        pool.shutdown();
    }
}
An unsafe implementation reveals the hazard:
public class UnsafeSequence extends SequenceSource {
    private int current = 0;

    @Override
    public int nextValue() {
        ++current; // Non-atomic read-modify-write
        ++current;
        return current;
    }

    public static void main(String[] args) {
        SequenceValidator.test(new UnsafeSequence(), 12);
    }
}
If a context switch occurs between the two increments, subsequent threads observe a partially updated state. Even single increment operations require synchronization, as Java increment operators decompose into multiple bytecode instructions rather than executing as indivisible hardware operations.
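The read-modify-write decomposition can be observed directly. In this sketch (class name and iteration counts are illustrative), two threads hammer an unsynchronized int counter; interleaved increment cycles typically lose updates, leaving the total short of the expected 200,000:

```java
public class LostUpdateDemo {
    static int plain = 0; // unsynchronized shared counter

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                plain++; // three steps: read, add one, write back
            }
        };
        Thread a = new Thread(work);
        Thread b = new Thread(work);
        a.start();
        b.start();
        a.join();
        b.join();
        // Two threads, 100,000 increments each; lost updates usually leave this below 200000
        System.out.println("plain = " + plain);
    }
}
```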
To prevent such interleaving, mutual exclusion ensures that only one thread accesses critical code paths at any moment. Java implements this through monitor locks associated with every object instance.
When a thread enters a synchronized method, it automatically acquires the monitor of the instance hosting that method. Other threads attempting to invoke any synchronized method on the same object block until the first thread exits and releases the monitor:
public class SynchronizedSequence extends SequenceSource {
    private int current = 0;

    @Override
    public synchronized int nextValue() {
        ++current;
        Thread.yield(); // Encourage context switching
        ++current;
        return current;
    }
}
Java monitors are reentrant. If a thread already holds a lock and encounters another synchronized region on the same object (including recursive method calls), the acquisition succeeds. The JVM maintains an acquisition count, releasing the lock only when the count reaches zero upon exiting all nested synchronized regions.
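Reentrancy can be demonstrated with a pair of synchronized methods (names here are illustrative): the nested call re-acquires the monitor the thread already holds instead of deadlocking.

```java
public class ReentrantDemo {
    public synchronized void outer() {
        System.out.println("outer: monitor acquired");
        inner(); // same thread, same monitor: acquisition count goes to 2, no deadlock
    }

    public synchronized void inner() {
        System.out.println("inner: reentered without blocking");
    }

    public static void main(String[] args) {
        new ReentrantDemo().outer();
    }
}
```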
For static methods, synchronization occurs on the Class object rather than an instance, protecting static fields across all class instances.
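A static synchronized method is equivalent to a synchronized block on the Class object, as this small sketch (hypothetical class) shows:

```java
public class ClassLevelLock {
    private static int instances = 0;

    // Locks ClassLevelLock.class, not any instance
    public static synchronized void register() {
        instances++;
    }

    // Equivalent explicit form of the same class-level lock
    public static void registerExplicit() {
        synchronized (ClassLevelLock.class) {
            instances++;
        }
    }

    public static synchronized int count() {
        return instances;
    }

    public static void main(String[] args) {
        register();
        registerExplicit();
        System.out.println(count()); // prints 2
    }
}
```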
While intrinsic locks provide automatic acquisition and release, the java.util.concurrent.locks.Lock interface offers explicit control:
import java.util.concurrent.locks.*;

public class ExplicitLockSequence extends SequenceSource {
    private int current = 0;
    private final Lock guard = new ReentrantLock();

    @Override
    public int nextValue() {
        guard.lock();
        try {
            ++current;
            Thread.yield();
            ++current;
            return current;
        } finally {
            guard.unlock(); // Must release in finally block
        }
    }
}
Explicit locks support non-blocking acquisition attempts and timed waits:
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class LockAcquisitionStrategies {
    private final Lock mutex = new ReentrantLock();

    public void tryImmediate() {
        boolean captured = mutex.tryLock();
        try {
            System.err.println("Immediate attempt: " + captured);
        } finally {
            if (captured) mutex.unlock();
        }
    }

    public void tryWithTimeout() {
        boolean captured = false;
        try {
            captured = mutex.tryLock(2, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return;
        }
        try {
            System.err.println("Timed attempt (2s): " + captured);
        } finally {
            if (captured) mutex.unlock();
        }
    }

    public static void main(String[] args) {
        LockAcquisitionStrategies demo = new LockAcquisitionStrategies();
        demo.tryImmediate();
        demo.tryWithTimeout();
        Thread locker = new Thread(() -> {
            demo.mutex.lock();
            System.err.println("Lock held by daemon");
        }, "Locker");
        locker.setDaemon(true);
        locker.start();
    }
}
Atomic operations execute without interruption by thread schedulers. For primitive types other than long and double, reads and writes are atomic. 64-bit long and double variables, however, may be treated non-atomically on some JVM implementations: a single read or write can be split into two separate 32-bit operations, so a thread may observe a value whose halves come from different writes. Declaring such fields volatile guarantees atomic access while also establishing happens-before relationships for visibility.
Visibility guarantees that modifications made by one thread become observable to others. Without synchronization or volatile, threads may cache variable values in registers or local processor caches, observing stale data. Synchronization flushes caches to main memory on exit and refreshes on entry, while volatile forces immediate main memory writes on update and reads from memory on access.
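The classic demonstration is a stop flag: without volatile, the worker's loop may never observe the writer's update; with it, termination is guaranteed. A minimal sketch, with illustrative names:

```java
public class StopFlag {
    // Without volatile, the worker might loop forever on a stale cached 'false'
    static volatile boolean stop = false;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (!stop) {
                // busy-wait until the write becomes visible
            }
            System.out.println("stopped");
        });
        worker.start();
        Thread.sleep(100);
        stop = true; // volatile write: guaranteed visible to the worker
        worker.join();
    }
}
```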
However, volatile alone cannot secure compound actions such as increment-and-get, which require reading, modifying, and writing back. When a field's new value depends on its previous state, synchronization or atomic classes become necessary:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class VisibilityTest implements Runnable {
    private int counter = 0;

    public synchronized void addPair() {
        counter++;
        counter++;
    }

    public int getCount() {
        return counter; // Risky: unsynchronized read may observe intermediate state
    }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            addPair();
        }
    }

    public static void main(String[] args) {
        ExecutorService exec = Executors.newCachedThreadPool();
        VisibilityTest test = new VisibilityTest();
        exec.execute(test);
        exec.shutdown();
        while (true) {
            int val = test.getCount();
            if (val % 2 != 0) {
                System.err.println("Inconsistent state: " + val);
                System.exit(1);
            }
        }
    }
}
The java.util.concurrent.atomic package provides lock-free alternatives for common operations:
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounter implements Runnable {
    private final AtomicInteger value = new AtomicInteger(0);

    public int read() {
        return value.get();
    }

    private void advanceByTwo() {
        value.addAndGet(2); // Atomic compound operation
    }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            advanceByTwo();
        }
    }
}
Atomic classes utilize compare-and-swap (CAS) hardware primitives rather than blocking locks, offering superior throughput under low contention. However, they address specific use cases; general-purpose synchronization typically remains more straightforward and less error-prone.
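The retry loop that atomic classes run internally can be sketched by hand with compareAndSet: read the current value, compute the proposed value, and retry whenever another thread got there first. This hand-rolled equivalent of addAndGet(2) is for illustration only:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasLoop {
    // Hand-rolled equivalent of value.addAndGet(2), built on compareAndSet
    static int addTwo(AtomicInteger value) {
        int prev, next;
        do {
            prev = value.get();      // read the current value
            next = prev + 2;         // compute the proposed value
        } while (!value.compareAndSet(prev, next)); // retry if another thread raced ahead
        return next;
    }

    public static void main(String[] args) {
        AtomicInteger v = new AtomicInteger(0);
        System.out.println(addTwo(v)); // prints 2
        System.out.println(addTwo(v)); // prints 4
    }
}
```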
Rather than synchronizing entire methods, critical sections restrict mutual exclusion to specific code blocks:
synchronized (monitorObject) {
    // Only this block is protected
}
This technique minimizes lock holding duration and improves concurrency by allowing threads to execute non-conflicting portions of methods simultaneously.
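A sketch of the pattern (class and method names are illustrative): thread-confined work proceeds outside the lock, and only the mutation of shared state is guarded.

```java
import java.util.ArrayList;
import java.util.List;

public class NarrowLock {
    private final List<String> shared = new ArrayList<>();

    public void process(String input) {
        String result = input.trim().toUpperCase(); // thread-confined work, no lock needed
        synchronized (this) {
            shared.add(result); // only the shared mutation is guarded
        }
    }

    public synchronized int size() {
        return shared.size();
    }

    public static void main(String[] args) {
        NarrowLock n = new NarrowLock();
        n.process("  hello ");
        System.out.println(n.size()); // prints 1
    }
}
```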
Consider a coordinate tracker requiring both thread-safe updates and slow persistence:
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.TimeUnit;

// A simple immutable value type for positions
record Coordinate(int x, int y) {}

public class CoordinateManager {
    private int x, y;
    private final List<Coordinate> history = Collections.synchronizedList(new ArrayList<>());

    public synchronized Coordinate getSnapshot() {
        return new Coordinate(x, y);
    }

    public void move(int deltaX, int deltaY) {
        Coordinate snapshot;
        synchronized (this) {
            x += deltaX;
            y += deltaY;
            snapshot = new Coordinate(x, y);
        }
        archive(snapshot); // Executed outside synchronized block
    }

    private void archive(Coordinate c) {
        history.add(c);
        try {
            TimeUnit.MILLISECONDS.sleep(100);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
Alternatively, explicit locks can delimit critical sections:
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class LockBasedManager {
    private int x, y;
    private final List<Coordinate> history = Collections.synchronizedList(new ArrayList<>());
    private final Lock mutex = new ReentrantLock();

    public void move(int dx, int dy) {
        Coordinate snapshot;
        mutex.lock();
        try {
            x += dx;
            y += dy;
            snapshot = new Coordinate(x, y);
        } finally {
            mutex.unlock();
        }
        archive(snapshot); // Slow persistence stays outside the critical section
    }

    private void archive(Coordinate c) {
        history.add(c);
    }
}
Synchronization may occur on arbitrary objects, not solely this. This enables independent locks for unrelated operations within the same class:
public class DualLockResource {
    private final Object writeLock = new Object();

    public synchronized void compute() {
        for (int i = 0; i < 3; i++) {
            System.out.println("Computing...");
            Thread.yield();
        }
    }

    public void log() {
        synchronized (writeLock) {
            for (int i = 0; i < 3; i++) {
                System.out.println("Logging...");
                Thread.yield();
            }
        }
    }
}
Here, compute() acquires the instance monitor while log() acquires writeLock, permitting concurrent execution of both methods by different threads.
Thread-local storage provides variable isolation without synchronization. Each thread accessing a ThreadLocal variable receives an independent copy:
public class RequestContext {
    private static final ThreadLocal<SessionContext> context =
            ThreadLocal.withInitial(() -> new SessionContext(Thread.currentThread().getId()));

    public static void increment() {
        context.get().touch();
    }

    public static int getAccessCount() {
        return context.get().getTouches();
    }

    private static class SessionContext {
        private int touches;
        private final long threadId;

        SessionContext(long id) { this.threadId = id; }

        void touch() { touches++; }

        int getTouches() { return touches; }
    }
}
ThreadLocal instances typically remain static final fields. The get() method initializes values per-thread on first access, while set() replaces the thread's current binding. This pattern eliminates sharing entirely, removing contention but restricting data to single-threaded access patterns.
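A minimal demonstration (names are illustrative): two threads each increment a thread-local counter five times, and each observes only its own copy, never the combined total.

```java
public class ThreadLocalDemo {
    // One independent counter per thread; int[] provides a mutable cell
    private static final ThreadLocal<int[]> hits =
            ThreadLocal.withInitial(() -> new int[1]);

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 5; i++) {
                hits.get()[0]++;
            }
            System.out.println(Thread.currentThread().getName() + ": " + hits.get()[0]);
        };
        Thread a = new Thread(task, "A");
        Thread b = new Thread(task, "B");
        a.start();
        a.join(); // run sequentially so output order is deterministic
        b.start();
        b.join();
        // Each thread printed 5: its own copy, never the combined total of 10
    }
}
```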