Using Spring Cache with Redis
The fundamental principle of caching is to use a global map to store data for reuse throughout the application's lifecycle. Caching suits data that is read frequently but written rarely. Since a cache is essentially a map, it must be managed carefully to prevent memory exhaustion from entries being added but never removed; this is the job of a cache manager. Before the unified caching abstraction existed, developers typically queried data and set cache entries in Redis by hand. That approach works anywhere and anytime with minimal setup, but it tends to clutter business logic.
Introducing Dependencies
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
    <exclusions>
        <exclusion>
            <groupId>io.lettuce</groupId>
            <artifactId>lettuce-core</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-cache</artifactId>
</dependency>
Overview
Spring Cache simplifies development by allowing automatic cache management through annotations. It provides a standardized way to integrate with various cache providers, including Redis. By using annotations like @Cacheable, you can instruct the framework to store method return values in a cache. This is achieved through AOP, which intercepts method calls. The steps are:
- Enable caching with @EnableCaching.
- Add @Cacheable on methods whose results should be cached.
The annotation triggers aspect-oriented programming to check if the cache contains the data. If present, the cached result is returned; otherwise, the method executes and its return value is stored.
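The steps above can be sketched as follows (the class and method names are illustrative, not from the original text):

```java
import org.springframework.cache.annotation.Cacheable;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Service;

// Step 1: enable the caching aspect once, on any configuration class.
@Configuration
@EnableCaching
public class CacheConfig {
}

// Step 2: annotate methods whose results should be cached.
@Service
class CategoryService {

    // The first call executes the method and stores the result under
    // "category::list"; subsequent calls return the cached value directly.
    @Cacheable(value = "category", key = "'list'")
    public String listCategories() {
        return "...result of an expensive database query...";
    }
}
```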
Cache managers group caches into separate regions (cache names) for different business purposes. The same data can be cached in more than one region if needed.
Simple Scenario
Consider a method that retrieves a product from the database. With @Cacheable, the framework adds before-advice and after-returning advice. The before-advice checks for cached entries; if found, it returns immediately. The after-returning advice captures the result and stores it in the cache. By default, data is serialized using JDK serialization, which may not be ideal for interoperability. Therefore, it's common to configure JSON serialization.
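As a sketch of that scenario (ProductService, Product, and ProductRepository are assumed names for illustration):

```java
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class ProductService {

    private final ProductRepository productRepository;

    public ProductService(ProductRepository productRepository) {
        this.productRepository = productRepository;
    }

    // Before-advice checks for a cached entry under "product::<id>" and
    // returns it on a hit; after-returning advice serializes the result
    // (JDK serialization by default, JSON once configured) and stores it.
    @Cacheable(value = "product", key = "#id")
    public Product getProduct(Long id) {
        // Executes only on a cache miss.
        return productRepository.findById(id).orElseThrow();
    }
}
```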
Redis Cache Configuration
There are two ways to configure Redis cache:
- Use the default configuration with properties in application.properties.
- Manually define a RedisCacheConfiguration bean.
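For the property-based route, the standard Spring Boot cache properties apply, for example:

```properties
spring.cache.type=redis
# TTL for cache entries (milliseconds, or a Duration such as 1h)
spring.cache.redis.time-to-live=3600000
# Caching nulls also guards against cache penetration
spring.cache.redis.cache-null-values=true
# Key prefix handling
spring.cache.redis.use-key-prefix=true
```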
When a custom bean is created, the default property-based configuration is overridden. To retain property values, you must load them manually. Using @EnableConfigurationProperties(CacheProperties.class) loads the properties and makes them available for injection.
To set JSON serialization:
RedisCacheConfiguration config = RedisCacheConfiguration.defaultCacheConfig()
        .serializeKeysWith(RedisSerializationContext.SerializationPair
                .fromSerializer(new StringRedisSerializer()))
        .serializeValuesWith(RedisSerializationContext.SerializationPair
                .fromSerializer(new GenericJackson2JsonRedisSerializer()));
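Putting the pieces together, a complete configuration bean might look like the following sketch; it also re-applies the application.properties values (TTL, key prefix, null caching) that would otherwise be lost when a custom bean overrides the defaults:

```java
import org.springframework.boot.autoconfigure.cache.CacheProperties;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.cache.RedisCacheConfiguration;
import org.springframework.data.redis.serializer.GenericJackson2JsonRedisSerializer;
import org.springframework.data.redis.serializer.RedisSerializationContext;
import org.springframework.data.redis.serializer.StringRedisSerializer;

@Configuration
@EnableCaching
@EnableConfigurationProperties(CacheProperties.class)
public class MyCacheConfig {

    @Bean
    public RedisCacheConfiguration redisCacheConfiguration(CacheProperties cacheProperties) {
        // String keys, JSON values.
        RedisCacheConfiguration config = RedisCacheConfiguration.defaultCacheConfig()
                .serializeKeysWith(RedisSerializationContext.SerializationPair
                        .fromSerializer(new StringRedisSerializer()))
                .serializeValuesWith(RedisSerializationContext.SerializationPair
                        .fromSerializer(new GenericJackson2JsonRedisSerializer()));

        // Re-apply the property-driven settings that a custom bean replaces.
        CacheProperties.Redis redisProperties = cacheProperties.getRedis();
        if (redisProperties.getTimeToLive() != null) {
            config = config.entryTtl(redisProperties.getTimeToLive());
        }
        if (redisProperties.getKeyPrefix() != null) {
            config = config.prefixCacheNameWith(redisProperties.getKeyPrefix());
        }
        if (!redisProperties.isCacheNullValues()) {
            config = config.disableCachingNullValues();
        }
        if (!redisProperties.isUseKeyPrefix()) {
            config = config.disableKeyPrefix();
        }
        return config;
    }
}
```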
Cache Consistency
Two common patterns for maintaining cache consistency:
- Double-write: write to the cache whenever the database is updated. This does not guarantee real-time consistency but eventually converges.
- Invalidation: Delete the cache after updating the database. This is simpler and often preferred.
Spring Cache provides annotations for these patterns:
@Cacheable(value = {"category"}, key = "#root.method.name", sync = true)
@CacheEvict(value = "category", allEntries = true) // Removes all entries in the "category" partition
@CachePut() // Supports double-write mode
- @Cacheable: writes the result to the cache only if the key is not already present. The key is evaluated as a SpEL expression; wrapping it in single quotes makes it a string literal rather than an evaluated expression. The stored key takes the form cacheName::key.
- @CacheEvict: deletes cache entries. With allEntries = true, it clears the entire partition.
For multiple cache deletions:
@Caching(evict = {
@CacheEvict(value = "category", key = "'getLevel1Categorys'"),
@CacheEvict(value = "category", key = "'getCatalogJson'")
})
// Equivalent to:
@CacheEvict(value = "category", allEntries = true)
Limitations of Spring Cache
Read Mode
- Cache Penetration: queries for non-existent keys fall through to the database. Spring Cache addresses this by caching null values.
- Cache Breakdown: a burst of concurrent requests for a key that has just expired. Spring Cache does not lock by default, but you can enable local locking with @Cacheable(sync = true). This routes lookups through a synchronized get method in RedisCache that internally calls the non-synchronized get; if the value is absent, it uses a Callable to load the data. The flow is:
public synchronized <T> T get(Object key, Callable<T> valueLoader) {
    ValueWrapper result = this.get(key);
    if (result != null) {
        // Cache hit: return without invoking the loader.
        return (T) result.get();
    }
    // Cache miss: invoke the loader (ultimately the original method) and store the result.
    T value = valueFromLoader(key, valueLoader);
    this.put(key, value);
    return value;
}
The valueLoader is a callback that eventually invokes the original method via CacheOperationInvoker. This local synchronization is sufficient for single JVM instances. For distributed scenarios, a distributed write lock is needed.
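That distributed case could be sketched with Redisson's RLock, for instance (the class and lock names are illustrative):

```java
import org.redisson.api.RLock;
import org.redisson.api.RedissonClient;

public class CatalogService {

    private final RedissonClient redissonClient;

    public CatalogService(RedissonClient redissonClient) {
        this.redissonClient = redissonClient;
    }

    public String loadCatalogWithDistributedLock() {
        // One lock shared by every application instance, not just one JVM.
        RLock lock = redissonClient.getLock("catalog-json-lock");
        lock.lock();
        try {
            // Re-check the cache, then load from the database on a miss.
            return "...check cache, query database, populate cache...";
        } finally {
            lock.unlock();
        }
    }
}
```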
- Cache Avalanche: Many hot keys expire simultaneously. Mitigate by adding random expiration times.
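A minimal sketch of adding random jitter to the TTL in plain Java (the method name and jitter bound are arbitrary choices):

```java
import java.time.Duration;
import java.util.concurrent.ThreadLocalRandom;

public class CacheTtl {

    // Returns the base TTL plus a random jitter of up to maxJitterSeconds,
    // so keys written at the same time do not all expire at the same instant.
    public static Duration withJitter(Duration base, long maxJitterSeconds) {
        long jitter = ThreadLocalRandom.current().nextLong(maxJitterSeconds + 1);
        return base.plusSeconds(jitter);
    }
}
```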
Write Mode (Cache-Database Consistency)
- Read-Write Lock: Suitable for read-heavy workloads.
- Canal: Subscribe to MySQL binlog to capture data changes.
- Frequent reads and writes: query the database directly, or use an MQ to achieve eventual consistency, accepting slight delays.
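The read-write-lock pattern can be sketched in-process with plain Java (a distributed variant would use a Redis-based read-write lock instead; the class is illustrative):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Many readers may hit the cache concurrently, while a writer
// updates the database and refreshes the cache exclusively.
public class GuardedCache {

    private final Map<String, String> cache = new HashMap<>();
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    public String get(String key) {
        lock.readLock().lock();
        try {
            return cache.get(key);
        } finally {
            lock.readLock().unlock();
        }
    }

    public void update(String key, String value) {
        lock.writeLock().lock();
        try {
            // Write the database first (omitted here), then refresh the cache.
            cache.put(key, value);
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```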
Conclusion
Spring Cache is adequate for regular data (frequent reads, infrequent writes, low real-time consistency requirements). For special cases, consider distributed locks, Canal, or queuing mechanisms.
For typical use cases, setting cache expiration times is sufficient.