Spring Boot Persistence Implementation. Typical Actions

Oleksii Dushenin

In a previous post, the theory behind a Spring Boot persistence layer was reviewed. In this article, those concepts will be added to the Spring Boot application. To begin with, Docker Compose will be used to dockerize PostgreSQL together with the whole application. Secondly, Liquibase will be used as the database migration tool. The model class will be converted into a Hibernate entity, and the custom in-memory repository into a Spring Data repository. Finally, the code will be covered with unit tests (Mockito) and integration tests (Testcontainers).

Docker Compose

Docker Compose allows existing or custom Docker images to be combined for daily development activities. For the purposes of this post, PostgreSQL will be used as the application's RDBMS, and the whole application will be dockerized as well.

The configuration is located in the repository root, in a file called docker-compose.yml.

version: "3.3"

services:
  postgres:
    container_name: sbs_postgres
    image: "postgres:13.3"
    environment:
      POSTGRES_USER: admin
      POSTGRES_PASSWORD: admin
      POSTGRES_DB: spring_boot_simple
    ports:
      - 5433:5432
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U admin"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - spring-boot-simple

  app:
    container_name: spring_boot_simple_persistence
    build:
      context: .
    ports:
      - 8080:8080
    networks:
      - spring-boot-simple
    depends_on:
      postgres:
        condition: service_healthy
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/items"]
      interval: 10s
      timeout: 10s
      retries: 10

networks:
  spring-boot-simple:

Two services will be created:

  • postgres: the PostgreSQL database.
  • app: the Spring Boot application itself.

They are expected to share the spring-boot-simple network.

Docker Compose Network

The network is created with the following lines:

networks:
  spring-boot-simple:

Docker Compose PostgreSQL

The PostgreSQL service is configured as follows:

postgres:
  container_name: sbs_postgres
  image: "postgres:13.3"
  environment:
    POSTGRES_USER: admin
    POSTGRES_PASSWORD: admin
    POSTGRES_DB: spring_boot_simple
  ports:
    - 5433:5432
  healthcheck:
    test: ["CMD-SHELL", "pg_isready -U admin"]
    interval: 10s
    timeout: 5s
    retries: 5
  networks:
    - spring-boot-simple

The official PostgreSQL Docker image is used: postgres:13.3. It is a good idea not to rely on the implicit latest tag (used when no version is specified); pinning a version ensures the same image is always used.

In the environment section, the user credentials and the database name are specified.

The ports section is required in our case because we need the ability to connect to the database from outside Docker. PostgreSQL's port 5432 inside the container is exposed to the host as port 5433 (any other free port could be used).
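
For example, assuming the psql client is installed on the host, a connection can be made like this (the password is admin, as configured above):

psql -h localhost -p 5433 -U admin spring_boot_simple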

The health check allows Docker Compose to validate that the PostgreSQL server is ready.

The networks section attaches the PostgreSQL service to the spring-boot-simple network.

To run PostgreSQL, the following command has to be executed from the repository root (Docker and Docker Compose must be installed):

docker-compose up postgres

Docker Compose Application

The application configuration is similar, but more involved. For PostgreSQL an existing Docker image was used; for our application we are going to build one from a Dockerfile (in the repository root).

FROM gradle:7.0.2-jdk11 AS build
COPY . /workdir
WORKDIR /workdir
RUN gradle clean bootJar --no-daemon

FROM openjdk:11.0.11-jre-slim
COPY --from=build /workdir/build/libs/spring-boot-simple*.jar /app.jar
CMD ["java", "-jar", "-Dspring.datasource.url=jdbc:postgresql://postgres:5432/spring_boot_simple", "/app.jar"]

Dockerfile consists of two parts.

Firstly, the gradle:7.0.2-jdk11 image is used to build the jar file. It is based on Java 11, which is required by our application. The repository files are copied into a working directory in the image, that directory is selected, and the target jar is built with the gradle clean bootJar command.

After that, the openjdk:11.0.11-jre-slim image is used to run the application. The jar built in the previous stage is copied to the image root as /app.jar. Note that --from=build in COPY is required to reference the previous (build) stage.

On container startup, the jar is executed with an overridden database URL. postgres:5432 is used because the database service is called postgres inside docker-compose.yml, so the service name acts as the host. The original PostgreSQL port 5432 is specified because the application runs inside the spring-boot-simple network, where the container port is directly accessible. If you want to access the database from an admin tool such as pgAdmin, jdbc:postgresql://localhost:5433/spring_boot_simple is the correct URL.
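
Before wiring everything together, the image itself can be built standalone as a sanity check (the tag spring-boot-simple here is an arbitrary choice, not used elsewhere in the setup):

docker build -t spring-boot-simple .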

Having the Dockerfile, we can come back to docker-compose.yml.

app:
  container_name: spring_boot_simple_persistence
  build:
    context: .
  ports:
    - 8080:8080
  networks:
    - spring-boot-simple
  depends_on:
    postgres:
      condition: service_healthy
  healthcheck:
    test: ["CMD", "curl", "-f", "http://localhost:8080/items"]
    interval: 10s
    timeout: 10s
    retries: 10

It is almost the same as the postgres service.

The first difference is build.context, which tells Docker Compose that the Dockerfile is located in the current directory.

depends_on specifies that this service depends on the postgres service (that is why the health checks are used): the application container will start only after the postgres container is healthy. As the application's own health check, GET http://localhost:8080/items is used; it will be replaced with Spring Boot Actuator in the future.

To run the application, the following command can be used:

docker-compose up --build postgres app

--build specifies that the image has to be rebuilt. Without this option, the previously built image is reused on every docker-compose up.

Spring Data JPA

To start working with databases, the following dependencies have to be added:

implementation 'org.springframework.boot:spring-boot-starter-data-jpa'
implementation 'org.postgresql:postgresql:42.2.20'

Adding these dependencies has the following effects:

  • Spring JPA auto configuration is enabled.
  • PostgreSQL database is specified as a target one.
  • Spring Data repositories are automatically initialized.

Data Source Configuration

The data source configuration is very simple. The following lines have to be added to application.yaml:

spring:
  datasource:
    url: jdbc:postgresql://localhost:5433/spring_boot_simple
    username: admin
    password: admin
    driver-class-name: org.postgresql.Driver

HikariCP will be used as the connection pool with default settings, which can be overridden in application.yaml.
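
As a minimal sketch, the pool could be tuned via the standard spring.datasource.hikari properties (the values below are illustrative, not recommendations):

spring:
  datasource:
    hikari:
      maximum-pool-size: 10
      minimum-idle: 2
      connection-timeout: 30000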

Disable Open Session In View

The following property disables the Open Session in View pattern, which would otherwise keep the persistence session open for the whole web request:

spring:
  jpa:
    open-in-view: false

Hibernate Entity

In the How to Create Spring Boot Application post, the ProductItem class acted as an entity. To make it work with Hibernate, only a few changes are required.

@Data
@Builder
@NoArgsConstructor
@AllArgsConstructor
@Entity
@Table(name = "product_item")
public class ProductItem {

    @Id
    @GeneratedValue(strategy = SEQUENCE, generator = "product_item_id_generator")
    @SequenceGenerator(name = "product_item_id_generator", sequenceName = "product_item_id_seq", allocationSize = 1)
    private Long id;

    @Column
    private String title;

}

@Entity makes the class work with Hibernate. @Table specifies the database table.

@Column maps the field to a database column.

@Id marks the field as the database id. It will be generated automatically by a PostgreSQL sequence (@GeneratedValue). @SequenceGenerator provides the name of the sequence and the allocation size, i.e. how many ids Hibernate reserves per database call; here it is 1, matching the increment of a SERIAL-backed sequence.

Spring Data Repositories

If you remember, in the How to Create Spring Boot Application post an in-memory repository was created. In order to switch to a Spring Data repository, we only need to extend the correct interface.

@Repository
public interface ProductItemRepository extends JpaRepository<ProductItem, Long> {
}
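
JpaRepository already supplies CRUD and paging methods such as findAll, save and delete. For illustration only, a derived query could be added; findByTitle below is a hypothetical example and not part of the application:

@Repository
public interface ProductItemRepository extends JpaRepository<ProductItem, Long> {

    // Hypothetical example: Spring Data derives the query (WHERE title = ?)
    // from the method name, so no implementation is required.
    Optional<ProductItem> findByTitle(String title);

}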

Transactions

The last thing in the application layer is transaction demarcation, which is done on the service layer.

@Service
@RequiredArgsConstructor
@Transactional(readOnly = true)
public class DefaultItemService implements ItemService {

    private final ProductItemRepository repository;
    private final ProductItemMapper mapper;

    @Override
    public Page<Item> findAll(Pageable pageable) {
        return mapper.map(repository.findAll(pageable));
    }

    @Override
    public Item getOne(Long id) {
        return mapper.map(repository.getOne(id));
    }

    @Override
    @Transactional
    public Item create(CreateItemRequest createItemRequest) {
        return mapper.map(repository.save(mapper.map(createItemRequest)));
    }

    @Override
    @Transactional
    public Item update(Long id, UpdateItemRequest updateItemRequest) {
        final var item = repository.getOne(id);
        mapper.map(item, updateItemRequest);
        return mapper.map(repository.save(item));
    }

    @Override
    @Transactional
    public void delete(Long id) {
        final var item = repository.getOne(id);
        repository.delete(item);
    }

}

As you can see, a read-only transaction is specified at the class level; it is the default transaction specification for all methods. Read-only transactions have better performance.

In methods that perform write operations, the transaction specification is overridden with @Transactional.

Database Migration

The second step for Spring Boot persistence is database management. You should be able to create, change and delete database tables and columns, migrations have to be reproducible, and the state of the database should be known in each environment.

In this application Liquibase will be used. Another alternative is Flyway.

Liquibase can be easily added to the Spring Boot Persistence Layer.

Liquibase Dependencies

Liquibase is configured automatically by Spring Boot. We only need to add a dependency:

implementation 'org.liquibase:liquibase-core:4.3.5'

Liquibase Tasks

In this example, a Liquibase YAML configuration with SQL files is used. However, other formats are available; please check the Liquibase documentation.

There are two main tasks for Liquibase.

  • How to apply DDL (Data Definition Language) statements, e.g. create/update/delete a table.
  • How to insert test data for development purposes.

Liquibase Configuration

Let’s start with the goal. To begin with, we need to configure the Liquibase changelog file, which holds all the information required to perform the migrations. In this project it is located at /resources/changelog/db.changelog-master.yaml; the location is set in application.yml (the Spring Boot default would be /resources/db/changelog/db.changelog-master.yaml).
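
A sketch of that setting, assuming the resource hierarchy shown later in this section (spring.liquibase.change-log is the standard Spring Boot property):

spring:
  liquibase:
    change-log: classpath:changelog/db.changelog-master.yaml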

Secondly, we want to insert test data only in the development environment. As a result, we need some separation of contexts; spring.liquibase.contexts solves this issue.

In application.yml the following properties are added:

spring:
  profiles:
    active: dev
  liquibase:
    contexts: prod

We specified that the active profile is dev; in production the active profile is overridden. The default Liquibase context is prod (any name could be used).
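
For instance, the profile could be overridden via an environment variable when starting the jar (the profile name prod here is an assumption; any profile without a dev Liquibase context works):

SPRING_PROFILES_ACTIVE=prod java -jar /app.jar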

To add test data we need another context. It is configured in the profile-specific configuration application-dev.yaml.

spring:
liquibase:
contexts: prod, dev

The dev context is used as well. As a result, on application startup with the default configuration, both the schema and the test data will be created.

In resources the following hierarchy is created:

resources
  changelog
    migration
      dev
        0001-initial-data.sql
      prod
        0001-initial-schema.sql
    db.changelog-master.yaml

The dev directory holds scripts with test data, while prod holds the production schema.

0001-initial-schema.sql

DROP TABLE IF EXISTS product_item;

CREATE TABLE product_item
(
    id    SERIAL,
    title VARCHAR(255) NOT NULL,
    PRIMARY KEY (id)
);

The product_item table is created; the product_item_id_seq sequence is created automatically because of the SERIAL type.
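
For reference, a SERIAL column is roughly shorthand for the following explicit sequence setup:

CREATE SEQUENCE product_item_id_seq;

CREATE TABLE product_item
(
    id    INTEGER NOT NULL DEFAULT nextval('product_item_id_seq'),
    title VARCHAR(255) NOT NULL,
    PRIMARY KEY (id)
);

ALTER SEQUENCE product_item_id_seq OWNED BY product_item.id;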

0001-initial-data.sql

INSERT INTO product_item (id, title)
VALUES (1, 'Book'),
       (2, 'Pencil');

SELECT setval('product_item_id_seq', 2);

The data are inserted and the current sequence value is set to 2.

The last part is to specify these migrations in /resources/changelog/db.changelog-master.yaml:

databaseChangeLog:
  - changeSet:
      id: 0001-initial-schema
      author: oleksii
      context: prod
      changes:
        - sqlFile:
            comment: Initial Schema
            path: migration/prod/0001-initial-schema.sql
            relativeToChangelogFile: true

  # Test Data
  - changeSet:
      id: 0001-initial-data
      author: oleksii
      context: dev
      changes:
        - sqlFile:
            comment: Initial Test Data
            path: migration/dev/0001-initial-data.sql
            relativeToChangelogFile: true

The scripts will be executed in the order provided.

context relates to the spring.liquibase.contexts configuration: if a changeset's context is not listed in spring.liquibase.contexts, that migration will be skipped.

Running the application against a running PostgreSQL instance will result in the schema being created and populated with data.

Liquibase System Tables

Liquibase creates two meta tables:

  • databasechangelog
  • databasechangeloglock

You can check the structure of these tables.

databasechangelog contains information about applied migrations, while databasechangeloglock is responsible for locking during execution.

As a result, each migration will be applied only once, even in a multi-node deployment.
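
For example, the applied migrations can be inspected with a simple query (id, author, filename, dateexecuted and orderexecuted are standard databasechangelog columns):

SELECT id, author, filename, dateexecuted
FROM databasechangelog
ORDER BY orderexecuted;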

Tests

The next important step for Spring Boot persistence is unit and integration testing.

Unit Tests

In unit tests, interactions with repositories are mocked with Mockito.

@ExtendWith(MockitoExtension.class)
class DefaultItemServiceTest {

    private static final Long ITEM_ID = 1L;

    @Mock
    private ProductItemRepository repository;

    @Mock
    private ProductItemMapper mapper;

    @InjectMocks
    private DefaultItemService service;

    @Test
    void shouldFindAll() {
        final Pageable pageable = PageRequest.of(0, 1);
        final int total = 1;
        final ProductItem productItem = getProductItem();
        final Item item = getItem();

        when(repository.findAll(pageable)).thenReturn(
                new PageImpl<>(
                        List.of(productItem),
                        pageable,
                        total
                )
        );

        when(mapper.map(new PageImpl<>(List.of(productItem), pageable, total))).thenReturn(
                new PageImpl<>(
                        List.of(item),
                        pageable,
                        total
                )
        );

        final Page<Item> items = service.findAll(pageable);
        assertThat(items).hasSize(1);
        assertThat(items.getContent().get(0)).isEqualTo(item);
    }

    @Test
    void shouldGetItem() {
        final ProductItem productItem = getProductItem();
        final Item item = getItem();

        when(repository.getOne(ITEM_ID)).thenReturn(productItem);
        when(mapper.map(productItem)).thenReturn(item);

        final Item foundItem = service.getOne(ITEM_ID);
        assertThat(foundItem).isEqualTo(item);
    }

    @Test
    void shouldCreateItem() {
        final CreateItemRequest createItemRequest = CreateItemRequest.builder()
                .title("title")
                .build();
        final ProductItem productItem = getProductItem();
        final Item item = getItem();

        when(mapper.map(createItemRequest)).thenReturn(productItem);
        when(repository.save(productItem)).thenReturn(productItem);
        when(mapper.map(productItem)).thenReturn(item);

        final Item createdItem = service.create(createItemRequest);
        assertThat(createdItem).isEqualTo(item);
    }

    @Test
    void shouldUpdateItem() {
        final UpdateItemRequest updateItemRequest = UpdateItemRequest.builder()
                .title("title")
                .build();
        final ProductItem productItem = getProductItem();
        final Item item = getItem();

        when(repository.getOne(ITEM_ID)).thenReturn(productItem);
        when(repository.save(productItem)).thenReturn(productItem);
        doAnswer(invocation -> {
            final ProductItem existingItem = invocation.getArgument(0);
            final UpdateItemRequest requestedItem = invocation.getArgument(1);
            existingItem.setTitle(requestedItem.getTitle());
            return null;
        }).when(mapper).map(productItem, updateItemRequest);
        when(mapper.map(productItem)).thenReturn(item);

        final Item updatedItem = service.update(ITEM_ID, updateItemRequest);
        assertThat(updatedItem).isEqualTo(item);
        verify(mapper).map(productItem, updateItemRequest);
    }

    @Test
    void shouldDeleteItem() {
        final ProductItem productItem = getProductItem();

        when(repository.getOne(ITEM_ID)).thenReturn(productItem);

        service.delete(ITEM_ID);
        verify(repository).delete(productItem);
    }

    private ProductItem getProductItem() {
        return ProductItem.builder()
                .id(ITEM_ID)
                .title("product title")
                .build();
    }

    private Item getItem() {
        return Item.builder()
                .id(ITEM_ID)
                .title("product title")
                .build();
    }

}

So, the repository and the mapper are mocked and injected into the service; all interactions with the database are predefined.

Integration Tests

In integration tests, the Spring context is initialized only once for all tests. This is achieved with AbstractIntegrationTest.

@ActiveProfiles("test")
@ContextConfiguration(initializers = AbstractIntegrationTest.DatabaseContextInitializer.class)
@SpringBootTest(classes = SpringBootSimpleApplication.class, webEnvironment = RANDOM_PORT)
public abstract class AbstractIntegrationTest {

    private static final PostgreSQLContainer<?> POSTGRESQL_CONTAINER;

    static {
        POSTGRESQL_CONTAINER = new PostgreSQLContainer<>("postgres:13.3");
        POSTGRESQL_CONTAINER.start();
    }

    @LocalServerPort
    private int port;

    @Autowired
    protected TestRestTemplate restTemplate;

    @Autowired
    protected ObjectMapper objectMapper;

    protected String url(String path) {
        return "http://localhost:" + port + path;
    }

    public static class DatabaseContextInitializer
            implements ApplicationContextInitializer<ConfigurableApplicationContext> {

        @Override
        public void initialize(ConfigurableApplicationContext context) {
            TestPropertyValues.of(
                    "spring.datasource.url=" + POSTGRESQL_CONTAINER.getJdbcUrl(),
                    "spring.datasource.username=" + POSTGRESQL_CONTAINER.getUsername(),
                    "spring.datasource.password=" + POSTGRESQL_CONTAINER.getPassword()
            ).applyTo(context.getEnvironment());
        }

    }

}

The Spring active profile is set to test, with its configuration in application-test.yml.

spring:
  datasource:
    url: will be set in test
    username: will be set in test
    password: will be set in test
  liquibase:
    contexts: prod

We reset the data source properties here; they are set in AbstractIntegrationTest. This eliminates the situation where tests could connect to your local PostgreSQL due to some misconfiguration. Testcontainers will be used to provide the database instead.

The Liquibase context is set to prod; test data will be populated by other means.

Test Web Application

In order to test a Spring Boot web application, the following configuration has to be applied in AbstractIntegrationTest:

@SpringBootTest(classes = SpringBootSimpleApplication.class, webEnvironment = RANDOM_PORT)

It states that our application will be started on a random port.

To find out the port number, the following annotation is used:

@LocalServerPort 
private int port;

The next step is the configuration of a REST client. TestRestTemplate is configured automatically; we only need to inject it:

@Autowired 
protected TestRestTemplate restTemplate;

To build request URLs with the correct port, the following utility method is used:

protected String url(String path) {
    return "http://localhost:" + port + path;
}

Testcontainers Configuration

The following dependencies are added:

implementation platform('org.testcontainers:testcontainers-bom:1.15.3')
testImplementation('org.testcontainers:postgresql')

These introduce the Testcontainers BOM and the PostgreSQL module.

One PostgreSQL container is used for all tests; therefore it is initialized in a static block.

private static final PostgreSQLContainer<?> POSTGRESQL_CONTAINER;

static {
    POSTGRESQL_CONTAINER = new PostgreSQLContainer<>("postgres:13.3");
    POSTGRESQL_CONTAINER.start();
}

After that, we have to specify in our tests how to use this PostgreSQL container.

To do this, the initializer is configured on the test class:

@ContextConfiguration(initializers = AbstractIntegrationTest.DatabaseContextInitializer.class)

It has the following form:

public static class DatabaseContextInitializer
        implements ApplicationContextInitializer<ConfigurableApplicationContext> {

    @Override
    public void initialize(ConfigurableApplicationContext context) {
        TestPropertyValues.of(
                "spring.datasource.url=" + POSTGRESQL_CONTAINER.getJdbcUrl(),
                "spring.datasource.username=" + POSTGRESQL_CONTAINER.getUsername(),
                "spring.datasource.password=" + POSTGRESQL_CONTAINER.getPassword()
        ).applyTo(context.getEnvironment());
    }

}

In this code we simply override the application-test.yml properties based on the container data.

This is the whole configuration for Testcontainers. Each test that extends AbstractIntegrationTest will use the container.

Item Controller Test

To test the item controller, the following test class was created:

@Sql(scripts = {
    "/sql/clean.sql",
    "/sql/product_items.sql"
})
public class ItemControllerIntegrationTest extends AbstractIntegrationTest {

    @Test
    public void shouldGetItem() {
        final HttpEntity<Void> entity = new HttpEntity<>(httpHeaders());

        ResponseEntity<Item> foundItem = restTemplate.exchange(url("/items/1"), HttpMethod.GET, entity, Item.class);
        assertThat(foundItem.getStatusCode()).isEqualTo(HttpStatus.OK);
        assertThat(foundItem.getBody()).isNotNull();
        assertThat(foundItem.getBody().getId()).isEqualTo(1L);
        assertThat(foundItem.getBody().getTitle()).isEqualTo("Test Book");
    }

    @Test
    public void shouldFindAllItems() throws JsonProcessingException {
        final HttpEntity<Void> entity = new HttpEntity<>(httpHeaders());

        ResponseEntity<String> foundItemsPage = restTemplate.exchange(
                url("/items?page=0&size=1"),
                HttpMethod.GET,
                entity,
                String.class
        );
        assertThat(foundItemsPage.getStatusCode()).isEqualTo(HttpStatus.OK);
        assertThat(foundItemsPage.getBody()).isNotNull();

        PageImpl<Item> foundItems = objectMapper.readValue(
                foundItemsPage.getBody(),
                new TypeReference<TestPageImpl<Item>>() {
                }
        );

        assertThat(foundItems).hasSize(1);
        assertThat(foundItems.getContent().get(0).getTitle()).isEqualTo("Test Book");
    }

    @Test
    public void shouldCreateItem() {
        final HttpEntity<CreateItemDto> entity = new HttpEntity<>(
                new CreateItemDto("new item"),
                httpHeaders()
        );

        ResponseEntity<Item> createdItem = restTemplate.exchange(
                url("/items"),
                HttpMethod.POST,
                entity,
                Item.class
        );

        assertThat(createdItem.getStatusCode()).isEqualTo(HttpStatus.CREATED);
        assertThat(createdItem.getBody()).isNotNull();
        assertThat(createdItem.getBody().getTitle()).isEqualTo("new item");
        assertThat(createdItem.getBody().getId()).isNotNull();
    }

    @Test
    public void shouldUpdateItem() {
        final HttpEntity<UpdateItemDto> entity = new HttpEntity<>(
                new UpdateItemDto("updated item"),
                httpHeaders()
        );

        ResponseEntity<Item> updatedItem = restTemplate.exchange(
                url("/items/2"),
                HttpMethod.PUT,
                entity,
                Item.class
        );

        assertThat(updatedItem.getStatusCode()).isEqualTo(HttpStatus.OK);
        assertThat(updatedItem.getBody()).isNotNull();
        assertThat(updatedItem.getBody().getTitle()).isEqualTo("updated item");
        assertThat(updatedItem.getBody().getId()).isNotNull();
    }

    @Test
    public void shouldDeleteItem() {
        final HttpEntity<Void> entity = new HttpEntity<>(httpHeaders());

        ResponseEntity<Item> deletedItem = restTemplate.exchange(
                url("/items/2"),
                HttpMethod.DELETE,
                entity,
                Item.class
        );

        assertThat(deletedItem.getStatusCode()).isEqualTo(HttpStatus.NO_CONTENT);
    }

    private HttpHeaders httpHeaders() {
        final HttpHeaders headers = new HttpHeaders();
        headers.setContentType(APPLICATION_JSON);
        headers.setAccept(List.of(APPLICATION_JSON));
        return headers;
    }

}

The restTemplate from AbstractIntegrationTest is used, and the PostgreSQL container is available.

Test data are inserted into the database with Spring's @Sql capabilities:

@Sql(scripts = {
    "/sql/clean.sql",
    "/sql/product_items.sql"
})

At first, all data are removed from the database because it is shared between all tests. After that, the test data are added.

clean.sql:

DELETE FROM product_item;

product_items.sql:

INSERT INTO product_item (id, title)
VALUES (1, 'Test Book'),
       (2, 'Test Pencil');

SELECT setval('product_item_id_seq', 2);

Dummy Integration Test

This (empty) test was added to verify that the Spring context (and the database) is initialized only once; you can see it in the logs.
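
A minimal sketch of such a test could look like this (the class name is an assumption; the only requirement is extending AbstractIntegrationTest):

public class DummyIntegrationTest extends AbstractIntegrationTest {

    // Intentionally empty: running this test together with
    // ItemControllerIntegrationTest shows in the logs that the Spring
    // context and the PostgreSQL container are created only once.
    @Test
    public void shouldStartContext() {
    }

}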

Run Tests

Tests can be run with:

./gradlew clean test

Summary

To sum up, in this post practical aspects of Spring Boot persistence were implemented: PostgreSQL and the application were dockerized with Docker Compose, Liquibase migrations were added, Hibernate entities and Spring Data repositories were introduced with transaction demarcation, and the code was covered with unit tests (Mockito) and integration tests (Testcontainers).

The whole code can be found on GitHub in the v1.0.0-persistence branch.

In the next post mappers will be refactored.

Originally published at https://datamify.com on May 18, 2021.
