Dynamic provisioning using EFS
Now that we understand the EFS storage class for Kubernetes, let's create a PersistentVolumeClaim and update the assets container in the assets deployment to mount the resulting volume.

First, inspect the `efspvclaim.yaml` file. It claims 5 GiB of storage from the `efs-sc` StorageClass we created in the earlier step:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
  namespace: assets
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    storage: 5Gi
```
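The claim can be sanity-checked offline. Here is a minimal sketch in Python that mirrors the manifest as a plain dict and converts the `5Gi` request to bytes; `parse_quantity` is a hypothetical helper written for illustration, not part of any Kubernetes library:

```python
# Sketch: mirror the PVC manifest as a plain dict and check its fields.
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "efs-claim", "namespace": "assets"},
    "spec": {
        "accessModes": ["ReadWriteMany"],
        "storageClassName": "efs-sc",
        "resources": {"requests": {"storage": "5Gi"}},
    },
}

def parse_quantity(q: str) -> int:
    """Convert a Kubernetes binary-suffix quantity (e.g. '5Gi') to bytes."""
    suffixes = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30, "Ti": 2**40}
    for suffix, factor in suffixes.items():
        if q.endswith(suffix):
            return int(q[: -len(suffix)]) * factor
    return int(q)  # plain integer quantity, already in bytes

storage = pvc["spec"]["resources"]["requests"]["storage"]
print(parse_quantity(storage))  # 5 * 2**30 = 5368709120 bytes
```

Note the `ReadWriteMany` access mode: EFS supports it, which is what lets multiple Pods on different nodes mount the same volume read-write.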
We'll also modify the assets service in two ways:

- Mount the PVC at the location where the asset images are stored
- Add an init container to copy the initial images to the EFS volume
The following Kustomize patch makes both changes to the assets Deployment:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: assets
spec:
  replicas: 2
  template:
    spec:
      initContainers:
        - name: copy
          image: "public.ecr.aws/aws-containers/retail-store-sample-assets:latest"
          command: ["/bin/sh", "-c", "cp -R /usr/share/nginx/html/assets/* /efsvolume"]
          volumeMounts:
            - name: efsvolume
              mountPath: /efsvolume
      containers:
        - name: assets
          volumeMounts:
            - name: efsvolume
              mountPath: /usr/share/nginx/html/assets
      volumes:
        - name: efsvolume
          persistentVolumeClaim:
            claimName: efs-claim
```
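The init container's `cp -R /usr/share/nginx/html/assets/* /efsvolume` step can be modeled off-cluster. A minimal sketch in Python, where temporary directories stand in for the image's baked-in asset path and the EFS mount (the seeded file names are just examples):

```python
import shutil
import tempfile
from pathlib import Path

# Stand-ins for the container image's baked-in assets and the EFS mount.
image_assets = Path(tempfile.mkdtemp(prefix="image-assets-"))
efs_volume = Path(tempfile.mkdtemp(prefix="efsvolume-"))

# Seed the "image" with a couple of asset files.
(image_assets / "wood_watch.jpg").write_text("jpeg bytes")
(image_assets / "smart_1.jpg").write_text("jpeg bytes")

# Equivalent of the init container's `cp -R .../assets/* /efsvolume`:
# copy everything from the image path into the shared volume.
for entry in image_assets.iterdir():
    if entry.is_dir():
        shutil.copytree(entry, efs_volume / entry.name, dirs_exist_ok=True)
    else:
        shutil.copy2(entry, efs_volume / entry.name)

print(sorted(p.name for p in efs_volume.iterdir()))
# ['smart_1.jpg', 'wood_watch.jpg']
```

This one-time seeding is needed because mounting the empty EFS volume over `/usr/share/nginx/html/assets` would otherwise hide the images shipped in the container image.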
We can apply the changes with `kubectl apply -k` (the exact path to the module comes from your workshop environment). The output should look like this:

```text
namespace/assets unchanged
serviceaccount/assets unchanged
configmap/assets unchanged
service/assets unchanged
persistentvolumeclaim/efs-claim created
deployment.apps/assets configured
```
Now look at the `volumeMounts` in the deployment. Notice that our new volume named `efsvolume` is mounted at `/usr/share/nginx/html/assets`:

```yaml
- mountPath: /usr/share/nginx/html/assets
  name: efsvolume
- mountPath: /tmp
  name: tmp-volume
```
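An easy mistake in patches like this is a `volumeMounts` entry whose name doesn't match any declared volume. A minimal sketch in Python of that cross-check, with the Pod spec abbreviated to just the fields used here:

```python
# Sketch: verify every volumeMounts name resolves to a declared volume.
pod_spec = {
    "containers": [
        {
            "name": "assets",
            "volumeMounts": [
                {"name": "efsvolume", "mountPath": "/usr/share/nginx/html/assets"},
                {"name": "tmp-volume", "mountPath": "/tmp"},
            ],
        }
    ],
    "initContainers": [
        {
            "name": "copy",
            "volumeMounts": [{"name": "efsvolume", "mountPath": "/efsvolume"}],
        }
    ],
    "volumes": [
        {"name": "efsvolume", "persistentVolumeClaim": {"claimName": "efs-claim"}},
        {"name": "tmp-volume", "emptyDir": {"medium": "Memory"}},
    ],
}

declared = {v["name"] for v in pod_spec["volumes"]}
mounted = {
    m["name"]
    for c in pod_spec["containers"] + pod_spec["initContainers"]
    for m in c.get("volumeMounts", [])
}
missing = mounted - declared
print(sorted(missing))  # [] -> every mount is backed by a declared volume
```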
A PersistentVolume (PV) was created automatically for the PersistentVolumeClaim (PVC) we created in the previous step:

```text
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM              STORAGECLASS   REASON   AGE
pvc-342a674d-b426-4214-b8b6-7847975ae121   5Gi        RWX            Delete           Bound    assets/efs-claim   efs-sc                  2m33s
```
Also describe the PersistentVolumeClaim (PVC) created:

```text
Name:          efs-claim
Namespace:     assets
StorageClass:  efs-sc
Status:        Bound
Volume:        pvc-342a674d-b426-4214-b8b6-7847975ae121
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: efs.csi.aws.com
               volume.kubernetes.io/storage-provisioner: efs.csi.aws.com
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      5Gi
Access Modes:  RWX
VolumeMode:    Filesystem
Used By:       <none>
Events:
  Type    Reason                 Age                From                                                                              Message
  ----    ------                 ----               ----                                                                              -------
  Normal  ExternalProvisioning   22m (x2 over 22m)  persistentvolume-controller                                                       waiting for a volume to be created, either by external provisioner "efs.csi.aws.com" or manually created by system administrator
  Normal  Provisioning           22m                efs.csi.aws.com_ip-10-42-11-246.ec2.internal_1b9196ea-2586-49a6-87dd-5ce1d78c4c0d  External provisioner is provisioning volume for claim "assets/efs-claim"
  Normal  ProvisioningSucceeded  22m                efs.csi.aws.com_ip-10-42-11-246.ec2.internal_1b9196ea-2586-49a6-87dd-5ce1d78c4c0d  Successfully provisioned volume pvc-342a674d-b426-4214-b8b6-7847975ae121
```
Now create a new file newproduct.png under the assets directory in the first Pod:
And verify that the file now also exists in the second Pod:
```text
chrono_classic.jpg
gentleman.jpg
newproduct.png <-----------
pocket_watch.jpg
smart_1.jpg
smart_2.jpg
test.txt
wood_watch.jpg
```
As you can see, even though we created the file through the first Pod, the second Pod also has access to it, because both Pods mount the same shared EFS file system.
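That behavior can be pictured with a local stand-in for the shared mount. A minimal sketch in Python, where a single temporary directory plays the role of the EFS file system mounted by both Pods:

```python
import tempfile
from pathlib import Path

# A single shared "EFS" directory, mounted by both Pods.
efs = Path(tempfile.mkdtemp(prefix="efs-"))

# Both Pods mount the same file system at the same path,
# so their views of the assets directory are the same directory.
pod_1_assets = efs
pod_2_assets = efs

# Pod 1 writes a new asset...
(pod_1_assets / "newproduct.png").write_bytes(b"png bytes")

# ...and Pod 2 sees it immediately, because the storage is shared.
print((pod_2_assets / "newproduct.png").exists())  # True
```

Contrast this with an `emptyDir` volume, where each Pod gets its own independent directory and a file written in one Pod would never appear in the other.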