Distributed Cache in Spring Boot Application with Redis
In a previous post, exception handling in a Spring Boot application was introduced. In this article, application optimization with a Redis cache will be reviewed. There are many ways to optimize an application: a Hibernate second-level cache, query optimization, or the usage of native SQL (for example, via jOOQ) are some of them. The application architecture itself can also be reviewed, which affects how the whole application works. In this post, a distributed Spring Boot cache based on Redis will be implemented. It is one of the simplest ways to achieve optimization.
Candidates for Caching
Not every operation should be cached. A cache brings overhead to a Spring Boot application; in fact, introducing a cache can even degrade performance.
Operations that are good candidates for caching have the following characteristics:
- The operation's response rarely changes.
- The cache overhead is much lower than the execution time of the operation itself.
- There is a high level of contention.
- The operation involves high network latency (e.g., another HTTP call).
- The operation is frequently used.
At first glance, our application does not have good candidates for caching. As you may remember, the initial application is an admin tool for managing product items in an online shopping domain. The number of items is small, and they are rarely accessed.
Let’s imagine that the ItemController GET API serves the customers of the online shop. In this case, some optimizations can be made.
@GetMapping
public Page<Item> find(@PageableDefault(sort = "id") Pageable pageable) {
    return service.findAll(pageable);
}

@GetMapping(value = ENDPOINT_BY_ID)
public Item get(@PathVariable Long id) {
    return service.getOne(id);
}
It doesn’t make sense to cache the ItemController.find method, which returns pages. Say one customer requests the first page, and it gets cached. Such systems usually allow filters to be applied to the results, so a second user requesting the same page with different filters produces a cache miss. Pagination itself causes another problem: when an admin adds an item to the database, the sorting may place it on the first, already cached, page. The new item would not be shown to the user, while the second page (which is not cached) would duplicate an item from the previous page. Essentially, adding a single item requires evicting the whole cache.
On the other hand, ItemController.get is a good candidate for caching. Let’s imagine that some items are frequently viewed by users and that building them requires a lot of computation: a product item can include information about the manufacturer, country of origin, feedback, comments, video, and photos, and some of this data can depend on third-party APIs.
In the described situation, ItemController.find can return pages with only basic information about each product item, while ItemController.get responds with the full set of product item data and can be cached.
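The summary/details split described above can be sketched with two hypothetical DTOs (ItemSummary and ItemDetails are illustrative names, not types from the original application):

```java
import java.util.List;

public class ItemDtos {
    // Lightweight view returned by the paged find() endpoint.
    record ItemSummary(Long id, String title) { }

    // Full, expensive-to-build view returned by get(id) — the cacheable one.
    record ItemDetails(Long id, String title, String manufacturer,
                       String countryOfOrigin, List<String> comments) { }

    public static void main(String[] args) {
        ItemSummary summary = new ItemSummary(1L, "Book");
        ItemDetails details = new ItemDetails(1L, "Book", "ACME", "DE",
                List.of("Great book"));
        System.out.println(summary.title());
        System.out.println(details.manufacturer());
    }
}
```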
Distributed Cache
In this post, a distributed cache backed by Redis will be implemented. A production application typically runs multiple instances of the same service. A distributed cache keeps results consistent: it doesn’t matter which service instance serves the request, the result is always the same. With per-instance in-memory caches, we can easily end up with inconsistent behavior. For example, a user can update a product item and still see the old data after the update; after a page refresh the actual data may be shown, and after yet another refresh the old data may appear again.
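The stale-read problem with per-instance in-memory caches can be shown with a minimal plain-Java sketch (the maps and names here are illustrative, not the application's actual code):

```java
import java.util.HashMap;
import java.util.Map;

public class StaleCacheDemo {
    // Shared "database" and two independent per-instance in-memory caches.
    static final Map<Long, String> database = new HashMap<>();
    static final Map<Long, String> cacheA = new HashMap<>();
    static final Map<Long, String> cacheB = new HashMap<>();

    // Each instance reads through its own local cache.
    static String readVia(Map<Long, String> cache, Long id) {
        return cache.computeIfAbsent(id, database::get);
    }

    public static void main(String[] args) {
        database.put(1L, "Book");
        readVia(cacheA, 1L); // instance A caches "Book"
        readVia(cacheB, 1L); // instance B caches "Book"

        // Instance A handles an update: it refreshes the DB and its own cache only.
        database.put(1L, "Updated Book");
        cacheA.put(1L, "Updated Book");

        // Which data the user sees now depends on which instance serves the request.
        System.out.println(readVia(cacheA, 1L)); // Updated Book
        System.out.println(readVia(cacheB, 1L)); // Book (stale)
    }
}
```

A distributed cache avoids this by giving all instances a single shared store.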
Spring Boot Cache with Redis
Let’s add distributed Redis cache to our Spring Boot Application.
docker-compose.yml
Redis can be installed on a local machine. However, the same approach as for PostgreSQL in How To Add Persistence Layer Into Spring Boot Application will be used.
We should add another service into docker-compose.yml:
  redis:
    container_name: sbs_redis
    image: redis:6.2.3
    ports:
      - 6380:6379
    healthcheck:
      test: [ "CMD", "redis-cli", "ping" ]
      interval: 1s
      timeout: 3s
      retries: 30
    networks:
      - spring-boot-simple
It is very similar to the PostgreSQL setup in How To Add Persistence Layer Into Spring Boot Application. We add Redis to the spring-boot-simple network; it becomes available on port 6380 once all health checks have passed.
Dockerfile
Only small changes are required in the application itself. Basically, we just have to specify the host and port of the running Redis instance.
CMD ["java", "-jar", "-Dspring.datasource.url=jdbc:postgresql://postgres:5432/spring_boot_simple", "-Dspring.redis.host=redis", "-Dspring.redis.port=6379", "/app.jar"]
The same logic applies as for PostgreSQL: redis is the name of a service in docker-compose.yml, and it acts as the host name. Port 6379 is used because the application runs inside the spring-boot-simple network.
To access Redis from outside the Docker network, localhost:6380 has to be used.
build.gradle
We need two capabilities: caching and Redis. As a result, we need two Spring Boot starters.
implementation 'org.springframework.boot:spring-boot-starter-cache'
implementation 'org.springframework.boot:spring-boot-starter-data-redis'
Enable Caching
To enable caching, the @EnableCaching annotation has to be used. Let’s create a configuration class.
@Configuration
@EnableCaching
public class CacheConfiguration {
    public static final String ITEMS_CACHE = "items";
}
ITEMS_CACHE specifies the name of the cache for product items.
The next step is to annotate methods that are expected to be cached.
@Service
@RequiredArgsConstructor
@Transactional(readOnly = true)
public class DefaultItemService implements ItemService {

    private final ProductItemRepository repository;
    private final ProductItemMapper mapper;

    @Override
    public Page<Item> findAll(Pageable pageable) {
        return mapper.map(repository.findAll(pageable));
    }

    @Override
    @Cacheable(ITEMS_CACHE)
    public Item getOne(Long id) {
        return mapper.map(repository.getOne(id));
    }

    @Override
    @Transactional
    public Item create(CreateItemRequest createItemRequest) {
        return mapper.map(repository.save(mapper.map(createItemRequest)));
    }

    @Override
    @Transactional
    @CacheEvict(ITEMS_CACHE)
    public Item update(Long id, UpdateItemRequest updateItemRequest) {
        final var item = repository.getOne(id);
        mapper.map(item, updateItemRequest);
        return mapper.map(repository.save(item));
    }

    @Override
    @Transactional
    @CacheEvict(ITEMS_CACHE)
    public void delete(Long id) {
        final var item = repository.getOne(id);
        repository.delete(item);
    }
}
As you can see, the getOne method is annotated with @Cacheable(ITEMS_CACHE), which enables caching for this method.
@CacheEvict(ITEMS_CACHE) removes the cached entry for a record that was changed or deleted.
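Conceptually, these annotations implement the cache-aside pattern. A rough plain-Java sketch of what the Spring proxy does around getOne and delete (illustrative names only, no Spring involved):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class CacheAsideSketch {
    static final Map<Long, String> itemsCache = new HashMap<>();
    static final Map<Long, String> repository = new HashMap<>();

    // Roughly what @Cacheable does: return the cached value if present,
    // otherwise call the real method and store its result under the key.
    static String getOne(Long id, Function<Long, String> loader) {
        return itemsCache.computeIfAbsent(id, loader);
    }

    // Roughly what @CacheEvict does: run the real method, then drop the key.
    static void delete(Long id) {
        repository.remove(id);
        itemsCache.remove(id);
    }

    public static void main(String[] args) {
        repository.put(1L, "Book");
        System.out.println(getOne(1L, repository::get)); // loads and caches
        System.out.println(getOne(1L, repository::get)); // served from cache
        delete(1L);
        System.out.println(itemsCache.containsKey(1L));  // false
    }
}
```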
That is all that has to be done to enable caching. By default, JDK serialization is used, so we would have to make ProductItem Serializable. However, we will not use this approach.
Here is an example of a cached entity in Redis with default JDK serialization:
127.0.0.1:6379> get "items::1"
"\xac\xed\x00\x05sr\x003com.datamify.spring.boot.simple.service.domain.Item\x1a\xa4m\xa6\xe9!\xb4\xd8\x02\x00\x02L\x00\x02idt\x00\x10Ljava/lang/Long;L\x00\x05titlet\x00\x12Ljava/lang/String;xpsr\x00\x0ejava.lang.Long;\x8b\xe4\x90\xcc\x8f#\xdf\x02\x00\x01J\x00\x05valuexr\x00\x10java.lang.Number\x86\xac\x95\x1d\x0b\x94\xe0\x8b\x02\x00\x00xp\x00\x00\x00\x00\x00\x00\x00\x01t\x00\x04Book"
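The leading \xac\xed\x00\x05 bytes are the JDK serialization stream magic number and version, written by ObjectOutputStream at the start of every stream. This can be verified with plain Java:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;

public class SerializationHeaderDemo {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(1L); // any Serializable value will do
        }
        byte[] data = bytes.toByteArray();
        // STREAM_MAGIC (0xACED) followed by STREAM_VERSION (0x0005)
        System.out.printf("%02x%02x %02x%02x%n",
                data[0] & 0xFF, data[1] & 0xFF, data[2] & 0xFF, data[3] & 0xFF);
        // prints "aced 0005"
    }
}
```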
We are going to change the configuration with the following code:
@Configuration
@EnableCaching
public class CacheConfiguration {

    public static final String ITEMS_CACHE = "items";

    @Bean
    RedisCacheConfiguration redisCacheConfiguration() {
        return RedisCacheConfiguration.defaultCacheConfig()
                .disableCachingNullValues()
                .serializeValuesWith(SerializationPair.fromSerializer(new GenericJackson2JsonRedisSerializer()));
    }

    @Bean
    RedisCacheManagerBuilderCustomizer redisCacheManagerBuilderCustomizer() {
        return RedisCacheManager.RedisCacheManagerBuilder::enableStatistics;
    }
}
Only a few options were changed here; others can also be adjusted when needed. In this configuration we specify a JSON serializer, null values are not cached, and cache statistics are enabled.
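Another option commonly set here is an expiration time. A hedged sketch (not part of the original configuration) extending the same bean with a 10-minute TTL via RedisCacheConfiguration.entryTtl:

```java
@Bean
RedisCacheConfiguration redisCacheConfiguration() {
    return RedisCacheConfiguration.defaultCacheConfig()
            .entryTtl(Duration.ofMinutes(10)) // entries expire automatically after 10 minutes
            .disableCachingNullValues()
            .serializeValuesWith(SerializationPair.fromSerializer(new GenericJackson2JsonRedisSerializer()));
}
```

A TTL bounds how long stale data can live in the cache even if an eviction is missed.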
PostgreSQL Logs
Let’s add statement logging to the PostgreSQL configuration in docker-compose.yml.
services:
  postgres:
    ....
    command: ["postgres", "-c", "log_statement=all", "-c", "log_destination=stderr"]
Use Redis CLI
The Redis CLI can be used from a locally installed Redis or from our running Docker container. To use it from the Docker container:
docker exec -it sbs_redis /bin/bash
redis-cli
Get Product Item by ID
curl -i --location --request GET 'localhost:8080/items/2' \
--header 'Content-Type: application/json'
After the first request, the entity has to be cached. There is a log entry in the PostgreSQL Docker container:
sbs_postgres | 2021-05-24 17:19:41.357 UTC [40] LOG: execute <unnamed>: BEGIN READ ONLY
sbs_postgres | 2021-05-24 17:19:41.357 UTC [40] LOG: execute <unnamed>: select productite0_.id as id1_0_0_, productite0_.title as title2_0_0_ from product_item productite0_ where productite0_.id=$1
sbs_postgres | 2021-05-24 17:19:41.357 UTC [40] DETAIL: parameters: $1 = '2'
sbs_postgres | 2021-05-24 17:19:41.371 UTC [40] LOG: execute S_2: COMMIT
The record is saved into Redis:
127.0.0.1:6379> get "items::2"
"{\"@class\":\"com.datamify.spring.boot.simple.service.domain.Item\",\"id\":2,\"title\":\"Pencil\"}"
127.0.0.1:6379>
On the second execution of the GET request, the result is fetched directly from Redis.
Tests
Tests are based on the same principles as in How To Add Persistence Layer Into Spring Boot Application.
In AbstractIntegrationTest, the Testcontainers configuration is added.
@ContextConfiguration(initializers = {
        AbstractIntegrationTest.RedisContextInitializer.class
})

@Autowired
protected CacheManager cacheManager;

private static final GenericContainer<?> REDIS_CONTAINER;

static {
    REDIS_CONTAINER = new GenericContainer<>("redis:6.2.3").withExposedPorts(6379);
    REDIS_CONTAINER.start();
}

public static class RedisContextInitializer
        implements ApplicationContextInitializer<ConfigurableApplicationContext> {

    @Override
    public void initialize(ConfigurableApplicationContext context) {
        TestPropertyValues.of(
                "spring.redis.host=" + REDIS_CONTAINER.getHost(),
                "spring.redis.port=" + REDIS_CONTAINER.getFirstMappedPort()
        ).applyTo(context.getEnvironment());
    }
}
In RedisContextInitializer, the Spring Boot test properties are overridden based on the started REDIS_CONTAINER. The CacheManager created by Spring Boot is also autowired; it will be used in the tests.
The test itself, in ItemControllerIntegrationTest, takes the following form:
@Test
public void shouldGetItem_FromCache() {
    final HttpEntity<Void> voidEntity = new HttpEntity<>(httpHeaders());
    final HttpEntity<CreateItemDto> createEntity = new HttpEntity<>(
            new CreateItemDto("new item for cache"),
            httpHeaders()
    );

    final ResponseEntity<Item> createdItem = restTemplate.exchange(
            url("/items"),
            HttpMethod.POST,
            createEntity,
            Item.class
    );
    assertThat(createdItem.getStatusCode()).isEqualTo(CREATED);
    final Long itemId = createdItem.getBody().getId();

    final ResponseEntity<Item> foundItem = restTemplate.exchange(
            url("/items/" + itemId),
            HttpMethod.GET,
            voidEntity,
            Item.class
    );
    assertThat(foundItem.getStatusCode()).isEqualTo(HttpStatus.OK);
    assertThat(foundItem.getBody()).isNotNull();
    assertThat(foundItem.getBody().getId()).isEqualTo(itemId);
    assertThat(foundItem.getBody().getTitle()).isEqualTo("new item for cache");

    final Item item = cacheManager.getCache(ITEMS_CACHE).get(itemId, Item.class);
    assertThat(item.getId()).isEqualTo(itemId);

    final ResponseEntity<Void> deletedItem = restTemplate.exchange(
            url("/items/" + itemId),
            HttpMethod.DELETE,
            voidEntity,
            Void.class
    );
    assertThat(deletedItem.getStatusCode()).isEqualTo(HttpStatus.NO_CONTENT);

    final Item itemAfterDelete = cacheManager.getCache(ITEMS_CACHE).get(itemId, Item.class);
    assertThat(itemAfterDelete).isNull();
}
To begin with, the test creates a product item. Then the item is fetched, and the CacheManager is checked to confirm it holds the cached value. After that, the product item is deleted, which should also remove it from the cache.
Summary
To sum up, in this post a distributed Redis cache in a Spring Boot application was introduced.
The whole code can be found on GitHub in the v5.0.0-redis-cache branch.
To run the application locally, the following command can be used:
docker-compose up --build postgres redis app
Originally published at https://datamify.com on May 25, 2021.