Concurrency-Safe Maps in Go with sync.Map, RWMutex, and Sharded Implementations
Data races with built-in maps
Go's built-in map type is not safe for concurrent use when at least one goroutine is writing; reads and writes must be synchronized. A simultaneous read and write on the same map triggers an unrecoverable fatal error (not a panic that can be recovered).
func unsafeConcurrentMap() {
	dict := map[int]int{}
	// Concurrent reader
	go func() {
		for {
			_ = dict[1]
		}
	}()
	// Concurrent writer
	go func() {
		for {
			dict[2] = 42
		}
	}()
	select {} // keep goroutines alive
}
// Runtime (eventually):
// fatal error: concurrent map read and map write
Protecting a map with RWMutex
Guarding a map with a sync.RWMutex is the most straightforward concurrency-safe pattern: readers share the lock, writers get exclusive access, and overhead stays low.
// LockedMap wraps a map with a RWMutex for safe access.
type LockedMap struct {
	mu sync.RWMutex
	m  map[string]int
}

func NewLockedMap() *LockedMap {
	return &LockedMap{m: make(map[string]int)}
}

func (lm *LockedMap) Put(k string, v int) {
	lm.mu.Lock()
	lm.m[k] = v
	lm.mu.Unlock()
}

func (lm *LockedMap) Get(k string) (int, bool) {
	lm.mu.RLock()
	val, ok := lm.m[k]
	lm.mu.RUnlock()
	return val, ok
}

func (lm *LockedMap) Delete(k string) {
	lm.mu.Lock()
	delete(lm.m, k)
	lm.mu.Unlock()
}
Example usage with concurrent readers and writers:
func demoLockedMap() {
	store := NewLockedMap()
	var wg sync.WaitGroup
	wg.Add(2)
	go func() {
		defer wg.Done()
		for i := 0; i < 1000; i++ {
			store.Put("k"+strconv.Itoa(i%10), i)
		}
	}()
	go func() {
		defer wg.Done()
		for i := 0; i < 1000; i++ {
			store.Get("k" + strconv.Itoa(i%10))
		}
	}()
	wg.Wait()
}
sync.Map: a built-in concurrent map (Go 1.9+)
sync.Map, added in Go 1.9, is a concurrent map in the standard library that avoids locking on its read fast path for specific concurrency patterns. It does not support the built-in map syntax (indexing, delete, range); all access goes through its methods, with keys and values typed as interface{}.
Construction
var sm sync.Map // zero value is usable; a sync.Map must not be copied after first use
p := new(sync.Map) // pointer form
_ = p
Store
sm.Store("user:1", "Alice")
sm.Store(100, time.Now()) // keys and values are interface{}
Load
if v, ok := sm.Load("user:1"); ok {
	name := v.(string)
	_ = name
}
LoadOrStore
// Returns the existing value if present; otherwise stores the provided value.
actual, loaded := sm.LoadOrStore("cfg:port", 8080)
_ = actual // interface{}
_ = loaded // true if key existed
Range
// Range invokes f for each key/value present when iteration begins.
// It does not guarantee a consistent snapshot.
sm.Range(func(k, v any) bool {
	fmt.Printf("%v => %v\n", k, v)
	// Storing during Range is allowed.
	// Newly added entries may or may not be visited.
	if k == "user:1" {
		sm.Store("seen:user:1", true)
	}
	return true // continue iteration
})
Delete
sm.Delete("user:1")
Additional notes
- Range does not traverse a copy. It avoids visiting any key more than once but does not provide a consistent snapshot relative to concurrent writes.
- Values must be type-asserted on load because the API uses interface{}.
Choosing between RWMutex and sync.Map
- Read-mostly workloads or "write once, read many" caches can benefit from sync.Map due to reduced lock contention.
- Highly concurrent access where goroutines operate on mostly disjoint key sets can also perform well with sync.Map.
- For balanced read/write patterns on smaller maps, a map guarded by RWMutex can be faster and simpler.
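When in doubt, measure with your own workload. As a rough harness, testing.Benchmark can drive a quick read-heavy comparison programmatically outside a test binary; the shard of 100 keys and the parallel read loop here are arbitrary choices, and the numbers vary by machine and Go version:

```go
package main

import (
	"fmt"
	"sync"
	"testing"
)

// lockedInts is a minimal RWMutex-guarded map for the comparison.
type lockedInts struct {
	mu sync.RWMutex
	m  map[int]int
}

// benchReads pre-fills a sync.Map and an RWMutex-guarded map with
// the same 100 keys, runs a parallel read-only benchmark against
// each, and returns ns/op for both.
func benchReads() (syncMapNs, rwMutexNs int64) {
	var sm sync.Map
	lm := &lockedInts{m: make(map[int]int)}
	for i := 0; i < 100; i++ {
		sm.Store(i, i)
		lm.m[i] = i
	}

	smRes := testing.Benchmark(func(b *testing.B) {
		b.RunParallel(func(pb *testing.PB) {
			i := 0
			for pb.Next() {
				sm.Load(i % 100)
				i++
			}
		})
	})
	lmRes := testing.Benchmark(func(b *testing.B) {
		b.RunParallel(func(pb *testing.PB) {
			i := 0
			for pb.Next() {
				lm.mu.RLock()
				_ = lm.m[i%100]
				lm.mu.RUnlock()
				i++
			}
		})
	})
	return smRes.NsPerOp(), lmRes.NsPerOp()
}

func main() {
	s, r := benchReads()
	fmt.Println("sync.Map reads:", s, "ns/op")
	fmt.Println("RWMutex reads: ", r, "ns/op")
}
```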
sync.Map internals (overview)
sync.Map employs a two-level design to reduce lock contention:
- A read-only map (read) is accessed without locking for fast-path loads.
- A dirty map (dirty) is protected by a mutex and holds recently written or unmigrated entries.
- Miss counting during reads eventually promotes the dirty map into the read-only map to rebalance fast-path lookups.
- Writes typically acquire the mutex, updating the dirty map and potentially amending or promoting structures. Loads succeed lock-free when the key is present in read.
This layout reduces contention under read-heavy or disjoint-key workloads while maintaining correctness.
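As a consequence, the same read/write pattern that crashed the plain map at the top of the article is safe with sync.Map. The sketch below mirrors that example with bounded loops so it terminates, and runs cleanly under go run -race:

```go
package main

import (
	"fmt"
	"sync"
)

// safeConcurrentMap runs a concurrent reader and writer against a
// sync.Map; unlike the plain-map version, this never faults.
func safeConcurrentMap() int {
	var dict sync.Map
	var wg sync.WaitGroup
	wg.Add(2)
	// Concurrent reader: Loads take the lock-free fast path once
	// the key has been promoted into the read-only map.
	go func() {
		defer wg.Done()
		for i := 0; i < 100000; i++ {
			dict.Load(1)
		}
	}()
	// Concurrent writer.
	go func() {
		defer wg.Done()
		for i := 0; i < 100000; i++ {
			dict.Store(2, 42)
		}
	}()
	wg.Wait()
	v, _ := dict.Load(2)
	return v.(int)
}

func main() {
	fmt.Println(safeConcurrentMap()) // 42
}
```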
Sharded concurrent maps (third-party)
For generalized concurrent maps with APIs closer to built-in maps and predictable performance across mixed workloads, a sharded implementation can be used. One popular option shards the keyspace and uses per-shard locks to increase parallelism.
Installation
go get github.com/orcaman/concurrent-map/v2
Basic usage (generics, v2)
import (
	cmap "github.com/orcaman/concurrent-map/v2"
)
func demoConcurrentMap() {
	// Create a sharded map with string keys and int values.
	// In v2, New parameterizes the value type; keys are strings.
	m := cmap.New[int]()
	// Insert/overwrite
	m.Set("u:1", 10)
	m.Set("u:2", 20)
	// Read
	if v, ok := m.Get("u:2"); ok {
		fmt.Println("value:", v)
	}
	// Delete
	m.Remove("u:1")
	if _, ok := m.Get("u:1"); !ok {
		fmt.Println("missing key u:1")
	}
}
Key properties of sharded maps:
- Segment (shard) locks allow concurrent operations on different keys with minimal contention.
- APIs often include Set/Get/Remove, with optional Upsert and iterators.
- v2 uses Go generics for typed values (keys are strings by default; custom key types require a custom sharding function); older versions restrict keys to strings and values to interface{}.