Build and Deploy PetClinic
Deploy each microservice's backing database
Deployment decisions:
- We use MySQL, installed with Helm from the charts in the Bitnami repository.
- We deploy a separate MySQL StatefulSet for each service.
- In each instance, we name the database "service_instance_db".
- The apps connect with the root username "root".
- The Helm installation generates a root password and stores it in a Kubernetes Secret.
- The applications reference that Secret's name to obtain the database credentials.
Preparatory steps
We assume you already have helm installed.
- Add the Helm repository:
helm repo add bitnami https://charts.bitnami.com/bitnami
- Update it:
helm repo update
Deploy the databases
Deploy the databases with a helm install command, one for each app/service:
- Vets:
helm install vets-db-mysql bitnami/mysql \
  --set auth.database=service_instance_db \
  --version 10.3.0
- Visits:
helm install visits-db-mysql bitnami/mysql \
  --set auth.database=service_instance_db \
  --version 10.3.0
- Customers:
helm install customers-db-mysql bitnami/mysql \
  --set auth.database=service_instance_db \
  --version 10.3.0
The databases should be up after ~ 1-2 minutes.
Wait for the pods to be ready (2/2 containers).
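Rather than polling kubectl get pods by hand, you can block until each database reports ready. This is a sketch, assuming the default namespace and the release names used above (Bitnami charts label pods with app.kubernetes.io/instance set to the release name):

```shell
# Block until each MySQL pod reports Ready (assumes default namespace).
for release in vets-db-mysql visits-db-mysql customers-db-mysql; do
  kubectl wait pod \
    --for=condition=ready \
    -l app.kubernetes.io/instance="$release" \
    --timeout=300s
done
```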
Build the apps and Docker images, and push them to an image registry
We assume you already have maven installed locally.
- Compile the apps and run the tests:
mvn clean package
- Build the images (this takes a little over 5 minutes):
mvn spring-boot:build-image
- Publish the images.
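The publish mechanism depends on your registry setup and is not pinned down here. A minimal sketch, assuming the previous step produced images tagged with your registry prefix and that PUSH_IMAGE_REGISTRY (a hypothetical variable) names a registry you are logged in to:

```shell
# Hypothetical push loop; image names mirror those referenced by the
# deployment manifests in manifests/deploy.
for image in petclinic-vets-service petclinic-visits-service \
    petclinic-customers-service petclinic-frontend; do
  docker push "${PUSH_IMAGE_REGISTRY}/${image}:latest"
done
```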
Deploy the apps
The deployment manifests are located in manifests/deploy. The services are vets, visits, customers, and petclinic-frontend. For each service we create a Kubernetes ServiceAccount, a Deployment, and a ClusterIP Service.
vets-service.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vets-service
  labels:
    account: vets-service
---
apiVersion: v1
kind: Service
metadata:
  name: vets-service
  labels:
    app: vets-service
spec:
  ports:
  - name: http
    port: 8080
  selector:
    app: vets-service
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vets-v1
  labels:
    app: vets-service
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vets-service
      version: v1
  template:
    metadata:
      labels:
        app: vets-service
        version: v1
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "8080"
        prometheus.io/path: "/actuator/prometheus"
    spec:
      serviceAccountName: vets-service
      containers:
      - name: vets-service
        image: ${PULL_IMAGE_REGISTRY}/petclinic-vets-service:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
        livenessProbe:
          httpGet:
            port: 8080
            path: /actuator/health/liveness
          initialDelaySeconds: 90
          periodSeconds: 5
        readinessProbe:
          httpGet:
            port: 8080
            path: /actuator/health/readiness
          initialDelaySeconds: 15
        lifecycle:
          preStop:
            exec:
              command: ["sh", "-c", "sleep 10"]
        resources:
          requests:
            cpu: 500m
            memory: 1Gi
          limits:
            memory: 1Gi
        env:
        - name: SPRING_DATASOURCE_URL
          value: jdbc:mysql://vets-db-mysql.default.svc.cluster.local:3306/service_instance_db
        - name: SPRING_DATASOURCE_USERNAME
          value: root
        - name: SPRING_DATASOURCE_PASSWORD
          valueFrom:
            secretKeyRef:
              name: vets-db-mysql
              key: mysql-root-password
visits-service.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: visits-service
  labels:
    account: visits-service
---
apiVersion: v1
kind: Service
metadata:
  name: visits-service
  labels:
    app: visits-service
spec:
  ports:
  - name: http
    port: 8080
  selector:
    app: visits-service
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: visits-v1
  labels:
    app: visits-service
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: visits-service
      version: v1
  template:
    metadata:
      labels:
        app: visits-service
        version: v1
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "8080"
        prometheus.io/path: "/actuator/prometheus"
    spec:
      serviceAccountName: visits-service
      containers:
      - name: visits-service
        image: ${PULL_IMAGE_REGISTRY}/petclinic-visits-service:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
        livenessProbe:
          httpGet:
            port: 8080
            path: /actuator/health/liveness
          initialDelaySeconds: 90
          periodSeconds: 5
        readinessProbe:
          httpGet:
            port: 8080
            path: /actuator/health/readiness
          initialDelaySeconds: 15
        lifecycle:
          preStop:
            exec:
              command: ["sh", "-c", "sleep 10"]
        resources:
          requests:
            cpu: 500m
            memory: 1Gi
          limits:
            memory: 1Gi
        env:
        - name: DELAY_MILLIS
          value: "0"
        - name: SPRING_DATASOURCE_URL
          value: jdbc:mysql://visits-db-mysql.default.svc.cluster.local:3306/service_instance_db
        - name: SPRING_DATASOURCE_USERNAME
          value: root
        - name: SPRING_DATASOURCE_PASSWORD
          valueFrom:
            secretKeyRef:
              name: visits-db-mysql
              key: mysql-root-password
customers-service.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: customers-service
  labels:
    account: customers-service
---
apiVersion: v1
kind: Service
metadata:
  name: customers-service
  labels:
    app: customers-service
spec:
  ports:
  - name: http
    port: 8080
  selector:
    app: customers-service
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: customers-v1
  labels:
    app: customers-service
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: customers-service
      version: v1
  template:
    metadata:
      labels:
        app: customers-service
        version: v1
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "8080"
        prometheus.io/path: "/actuator/prometheus"
    spec:
      serviceAccountName: customers-service
      containers:
      - name: customers-service
        image: ${PULL_IMAGE_REGISTRY}/petclinic-customers-service:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
        livenessProbe:
          httpGet:
            port: 8080
            path: /actuator/health/liveness
          initialDelaySeconds: 90
          periodSeconds: 5
        readinessProbe:
          httpGet:
            port: 8080
            path: /actuator/health/readiness
          initialDelaySeconds: 15
        lifecycle:
          preStop:
            exec:
              command: ["sh", "-c", "sleep 10"]
        resources:
          requests:
            cpu: 500m
            memory: 1Gi
          limits:
            memory: 1Gi
        env:
        - name: SPRING_DATASOURCE_URL
          value: jdbc:mysql://customers-db-mysql.default.svc.cluster.local:3306/service_instance_db
        - name: SPRING_DATASOURCE_USERNAME
          value: root
        - name: SPRING_DATASOURCE_PASSWORD
          valueFrom:
            secretKeyRef:
              name: customers-db-mysql
              key: mysql-root-password
petclinic-frontend.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: petclinic-frontend
  labels:
    account: petclinic-frontend
---
apiVersion: v1
kind: Service
metadata:
  name: petclinic-frontend
  labels:
    app: petclinic-frontend
spec:
  ports:
  - name: http
    port: 8080
  selector:
    app: petclinic-frontend
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: petclinic-frontend-v1
  labels:
    app: petclinic-frontend
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: petclinic-frontend
      version: v1
  template:
    metadata:
      labels:
        app: petclinic-frontend
        version: v1
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "8080"
        prometheus.io/path: "/actuator/prometheus"
    spec:
      serviceAccountName: petclinic-frontend
      containers:
      - name: petclinic-frontend
        image: ${PULL_IMAGE_REGISTRY}/petclinic-frontend:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
        livenessProbe:
          httpGet:
            port: 8080
            path: /actuator/health/liveness
          initialDelaySeconds: 90
          periodSeconds: 5
        readinessProbe:
          httpGet:
            port: 8080
            path: /actuator/health/readiness
          initialDelaySeconds: 15
        lifecycle:
          preStop:
            exec:
              command: ["sh", "-c", "sleep 10"]
        resources:
          requests:
            cpu: 500m
            memory: 1Gi
          limits:
            memory: 1Gi
Apply the deployment manifests:
cat manifests/deploy/*.yaml | envsubst | kubectl apply -f -
The manifests reference the image registry environment variable, and so are passed through envsubst
for resolution before being applied to the Kubernetes cluster.
Wait for the pods to be ready (2/2 containers).
Here is a simple diagnostic command that tails the logs of the customers service pod, showing that the Spring Boot application has come up and is listening on port 8080.
kubectl logs --follow svc/customers-service
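You can also probe the health endpoints directly through a port-forward. A sketch, assuming the default namespace; the actuator paths are the same ones the probes in the manifests use:

```shell
# Forward local port 8080 to the customers service, then probe readiness.
kubectl port-forward svc/customers-service 8080:8080 &
PF_PID=$!
sleep 2
curl -s localhost:8080/actuator/health/readiness   # expect {"status":"UP"}
kill $PF_PID
```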
Test database connectivity
The instructions below are taken from the output of the prior helm install command.
Connect directly to the vets-db-mysql database:
- Obtain the root password from the Kubernetes secret:
export MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace default vets-db-mysql -o jsonpath="{.data.mysql-root-password}" | base64 -d)
- Create, and shell into, a mysql client pod:
kubectl run vets-db-mysql-client \
--rm --tty -i --restart='Never' \
--image docker.io/bitnami/mysql:8.0.37-debian-12-r2 \
--namespace default \
--env MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD \
--command -- bash
- Use the mysql client to connect to the database:
mysql -h vets-db-mysql.default.svc.cluster.local -uroot -p"$MYSQL_ROOT_PASSWORD"
At the mysql prompt:
- Select the database:
use service_instance_db;
- List the tables:
show tables;
- Query vet records:
select * from vets;
Exit the mysql prompt with \q, then exit the pod with exit.
One can similarly connect to and inspect the customers-db-mysql and visits-db-mysql databases.
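As an alternative to an interactive session, a single query can be issued from a throwaway client pod. A sketch, assuming MYSQL_ROOT_PASSWORD is still exported from the vets-db-mysql step above:

```shell
# One-shot query against the vets database from a temporary client pod.
kubectl run mysql-query \
  --rm -i --restart='Never' \
  --namespace default \
  --image docker.io/bitnami/mysql:8.0.37-debian-12-r2 \
  --command -- mysql -h vets-db-mysql.default.svc.cluster.local \
  -uroot -p"$MYSQL_ROOT_PASSWORD" service_instance_db -e 'show tables;'
```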
Summary
At this point you should have all applications deployed and running, connected to their respective databases.
But we cannot access the application's UI until we configure ingress, which is our next topic.