Highly Available WildFly Applications on Kubernetes
Migrating existing applications to Kubernetes can be tricky, especially if your application relies on session replication for high availability. In a multi-node cluster, each node must hold a copy of every user's session state; replication ensures that a user can be routed to any node in the cluster without losing their place.
WildFly/JBoss clusters use JGroups to manage cluster replication. Traditionally, cluster members discover each other using multicast networking. Unfortunately, multicast does not work in Kubernetes. WildFly therefore supports an alternative JGroups discovery strategy that lets cluster members find each other by interrogating the Kubernetes API, looking for other WildFly pods in the same namespace.
Migrating JBoss Cluster Apps to WildFly
KUBE_PING is the protocol used to achieve WildFly clustering in Kubernetes.
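Under the hood, KUBE_PING's discovery amounts to listing the pods in the current namespace through the API server. As a rough illustration, the following reproduces approximately the same request with curl, using the service-account credentials Kubernetes mounts into every pod (the paths are the Kubernetes defaults; KUBE_PING calls the API directly rather than shelling out):
# Roughly the pod-discovery query KUBE_PING performs, run from inside a pod.
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -sS \
  --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer $TOKEN" \
  https://kubernetes.default.svc/api/v1/namespaces/default/pods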
Configure KUBE_PING
The first step in configuring KUBE_PING is to create a WildFly CLI configuration file. This file manipulates the standalone-full-ha.xml file during the image build.
config-server.cli
embed-server --server-config=standalone-full-ha.xml --std-out=echo

### apply all configuration to the server
batch

# move the ee channel onto the tcp stack
/subsystem=jgroups/channel=ee:write-attribute(name=stack,value=tcp)

# rebuild the tcp stack, replacing multicast discovery with KUBE_PING
/subsystem=jgroups/stack=tcp:remove()
/subsystem=jgroups/stack=tcp:add()
/subsystem=jgroups/stack=tcp/transport=TCP:add(socket-binding="jgroups-tcp")
/subsystem=jgroups/stack=tcp/protocol=kubernetes.KUBE_PING:add()
/subsystem=jgroups/stack=tcp/protocol=kubernetes.KUBE_PING/property=namespace:add(value=${env.POD_NAMESPACE:default})
/subsystem=jgroups/stack=tcp/protocol=MERGE3:add()
/subsystem=jgroups/stack=tcp/protocol=FD_SOCK:add()
/subsystem=jgroups/stack=tcp/protocol=FD_ALL:add()
/subsystem=jgroups/stack=tcp/protocol=VERIFY_SUSPECT:add()
/subsystem=jgroups/stack=tcp/protocol=pbcast.NAKACK2:add()
/subsystem=jgroups/stack=tcp/protocol=UNICAST3:add()
/subsystem=jgroups/stack=tcp/protocol=pbcast.STABLE:add()
/subsystem=jgroups/stack=tcp/protocol=pbcast.GMS:add()
/subsystem=jgroups/stack=tcp/protocol=MFC:add()
/subsystem=jgroups/stack=tcp/protocol=FRAG2:add()

# bind the private interface (used by JGroups) to the pod's eth0 NIC
/interface=private:write-attribute(name=nic, value=eth0)
/interface=private:undefine-attribute(name=inet-address)

# multicast is unused, so drop the mping socket binding
/socket-binding-group=standard-sockets/socket-binding=jgroups-mping:remove()
run-batch

### stop embedded server
stop-embedded-server
This script accomplishes a few things:
- Moves the ee channel onto the tcp stack.
- Adds the KUBE_PING configuration block.
- Removes the MPING configuration block.
An environment variable called POD_NAMESPACE is used to locate additional cluster members. Any other pods in this namespace are considered candidates to join the cluster.
The "label" property of KUBE_PING can be used to subdivide clusters further. Only pods with this label will be used within the cluster. This is useful if you have multiple clusters in the same namespace or are running other non-WildFly pods in the namespace.
/subsystem=jgroups/stack=tcp/protocol=kubernetes.KUBE_PING/property=labels:add(value=${env.kubernetes_labels:default})
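To preview which pods a given label value would match, the same selector can be expressed with kubectl (app=wildfly is the label applied by the Deployment later in this post):
# Lists the pods KUBE_PING would consider cluster candidates when the
# labels property is set to app=wildfly.
kubectl get pods -l app=wildfly -n default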
Configure Dockerfile
The following Dockerfile reconfigures the standard WildFly image to use KUBE_PING. It also adds our application WAR. A little later on, I will provide a link to the sample application I used to test session replication.
FROM jboss/wildfly
RUN /opt/jboss/wildfly/bin/add-user.sh admin redhat --silent
ADD configuration/config-server.cli /opt/jboss/
RUN /opt/jboss/wildfly/bin/jboss-cli.sh --file=config-server.cli
RUN rm -rf /opt/jboss/wildfly/standalone/configuration/standalone_xml_history/*
ADD application/cluster.war /opt/jboss/wildfly/standalone/deployments/
EXPOSE 8080 9990 7600 8888
ENV POD_NAMESPACE default
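Assuming the repository layout the ADD lines imply (configuration/config-server.cli and application/cluster.war beside the Dockerfile), the image can be built and spot-checked like this; the tag matches the Deployment manifest below:
# Build the image referenced by the Deployment below.
docker build -t ellin.com/wildfly/demo:latest .

# Optional spot check: the configuration baked into the image should now
# reference KUBE_PING rather than MPING.
docker run --rm ellin.com/wildfly/demo:latest \
  grep -c KUBE_PING /opt/jboss/wildfly/standalone/configuration/standalone-full-ha.xml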
RBAC
KUBE_PING uses the Kubernetes API to discover other nodes in the cluster. A dedicated service account is required, as the default service account does not have enough privileges. The following YAML creates a ServiceAccount, along with a ClusterRole and ClusterRoleBinding that grant it the ability to get and list pods.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jgroups-kubeping-service-account
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: jgroups-kubeping-pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: jgroups-kubeping-api-access
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: jgroups-kubeping-pod-reader
subjects:
- kind: ServiceAccount
  name: jgroups-kubeping-service-account
  namespace: default
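Once this manifest is applied (the testing steps below use rbac.yml as the file name), the grant can be verified with kubectl's built-in authorization check; the --as identity follows the standard system:serviceaccount:&lt;namespace&gt;:&lt;name&gt; form:
# Should print "yes" once the ClusterRoleBinding is in place.
kubectl auth can-i list pods \
  --as=system:serviceaccount:default:jgroups-kubeping-service-account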
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wildfly
  labels:
    app: wildfly
    tier: devops
spec:
  selector:
    matchLabels:
      app: wildfly
      tier: devops
  replicas: 2
  template:
    metadata:
      labels:
        app: wildfly
        tier: devops
    spec:
      serviceAccountName: jgroups-kubeping-service-account
      containers:
      - name: kube-ping
        image: ellin.com/wildfly/demo:latest
        command: ["/opt/jboss/wildfly/bin/standalone.sh"]
        args: ["--server-config", "standalone-full-ha.xml", "-b", "$(pod_ip)", "-bmanagement", "$(pod_ip)", "-bprivate", "$(pod_ip)"]
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
        - containerPort: 9990
        - containerPort: 7600
        - containerPort: 8888
        env:
        - name: pod_ip
          value: "0.0.0.0"
        # POD_NAMESPACE feeds the KUBE_PING namespace property set in config-server.cli
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: kubernetes_labels
          value: app=wildfly
        - name: JAVA_OPTS
          value: -Djdk.tls.client.protocols=TLSv1.2
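Once both replicas are up, JGroups membership can be confirmed from the server logs. A quick check along these lines (the exact log message varies by WildFly version, but a two-member cluster view should appear):
# Look for the cluster view announcement; with two replicas the view
# should list both pod hostnames.
kubectl logs deploy/wildfly | grep -i "cluster view"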
Testing the Configuration
For testing, I used a very simple Stateful Web App available on GitHub. After building the WAR with mvn package, I added it to a new Docker image built using the sample Dockerfile.
After deploying the application, you can visit the page. A count of visits is kept on a per-session basis, and returning to either pod will show the same count.
- Create a cluster (I am using kind).
kind create cluster
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.18.2) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ⢎ ⠁ Starting control-plane 🕹️
- Load the image into kind.
kind load docker-image ellin.com/wildfly/demo:latest
- Apply RBAC
kubectl apply -f rbac.yml
- Apply the Deployment.
kubectl apply -f deployment.yaml
- Get the pod list.
kubectl get pods
NAME                       READY   STATUS    RESTARTS   AGE
wildfly-6f4cf67765-gdrtl   1/1     Running   0          1s
wildfly-6f4cf67765-wjkhf   1/1     Running   0          2s
- Create a port forward to each pod.
kubectl port-forward pod/wildfly-6f4cf67765-gdrtl 8888:8080
and
kubectl port-forward pod/wildfly-6f4cf67765-wjkhf 8889:8080
- Visit
http://localhost:8888/Stateful-Tracker-1.0.0-SNAPSHOT
and
http://localhost:8889/Stateful-Tracker-1.0.0-SNAPSHOT
At this point, the Number of Visits count should stay in sync across both pods. Creating a new session in a private window should start a separate count for that new session.
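The same check can be scripted. A small sketch using curl's cookie jar, assuming both port-forwards from the previous steps are still running:
# Create a session against the first pod and save the session cookie.
curl -s -c cookies.txt http://localhost:8888/Stateful-Tracker-1.0.0-SNAPSHOT/ > /dev/null
# Replay the same session against the second pod; with replication working,
# the visit count continues from the first request instead of restarting at 1.
curl -s -b cookies.txt http://localhost:8889/Stateful-Tracker-1.0.0-SNAPSHOT/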