Volumes and Storage

In very broad terms, there are two kinds of storage available in Kubernetes for Pods/containers: persistent and ephemeral volumes. As their names suggest, persistent volumes persist between pod restarts, whereas ephemeral volumes are tied to the lifetime of the pod.

Why should I use either?

If your application needs to put temporary files somewhere like /tmp/, but you don’t actually want to concern yourself with the lifecycle of the files, you can use ephemeral volumes. They are also great for Secret or ConfigMap volumes, since they only exist while the pod exists. More information can be found in the Kubernetes documentation on ephemeral volumes.

On the other hand, if your application needs constant access to files, whether for writing data to storage or for retrieving content such as images, you should consider using persistent storage. This is defined separately from the Deployment.

Persistent vs. ephemeral at a glance:

Managed by: persistent volumes are managed through an external object called a PersistentVolumeClaim (PVC); ephemeral volumes are defined inline in the pod spec.
Lifecycle: a PVC has to be created and deleted separately; ephemeral volumes are created and deleted automatically along with the pod.
Is the data shareable between pods: for persistent volumes this depends on the StorageClass and node placement (nodeSelector); for ephemeral volumes, no.
Cost associated: persistent data is actually stored somewhere; ephemeral data is temporary, so “free”.
Features: persistent volumes offer multiple options depending on the StorageClass; ephemeral volumes offer reads and writes but no backups and no persistence between pods.

Ephemeral Volumes

The Kubernetes docs describe ephemeral volumes in greater detail, but in essence, it is possible to mount the contents of Secret and ConfigMap objects, or to create empty directories for the container to use.
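
As a minimal sketch, mounting a ConfigMap as an ephemeral volume could look like the following. The ConfigMap name app-config, the container name app, and the mount path /config are placeholders, not names from this platform:

spec:
  template:
    spec:
      volumes:
        # Ephemeral volume backed by a ConfigMap; it exists only while the pod exists.
        - name: app-config
          configMap:
            name: app-config   # placeholder ConfigMap in the same namespace
      containers:
        - name: app
          volumeMounts:
            # The keys of the ConfigMap appear as files under /config.
            - name: app-config
              mountPath: /config
              readOnly: true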

Empty directories can be configured in a Deployment by adding volumes under spec.template.spec

spec:
  template:
    spec:
      volumes:
        - name: crunchy-pgadmin4-1
          emptyDir: {}

and by referencing the same name and the desired path under a specific container's volumeMounts. In this example we are mounting the volume defined above into the container “crunchy-pgadmin4” at the path /certs.

spec:
  template:
    spec:
      containers:
        - name: crunchy-pgadmin4
          volumeMounts:
            - name: crunchy-pgadmin4-1
              mountPath: /certs

Persistent Volumes

StorageClasses enable platform administrators to define a self-service option for users to get persistent storage. This is usually done by creating a PersistentVolumeClaim (PVC) with the correct parameters, which then connects to a storage backend through the StorageClass. The StorageClass defines which options are available, for example dynamic resizing, CSI snapshots, or RWX access (ReadWriteMany, where many nodes can mount the same PVC).
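
For illustration, a PVC requesting storage through a StorageClass could look roughly like this. The StorageClass name and the requested size are placeholders; check which StorageClasses exist on the cluster (for example with oc get storageclass) before using them. The claim name data-1-test matches the Deployment example further down:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-1-test
spec:
  accessModes:
    - ReadWriteOnce                  # RWO; see the note on available storage classes below
  resources:
    requests:
      storage: 5Gi                   # placeholder size
  storageClassName: example-storageclass   # placeholder; use a StorageClass available on the cluster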

After creating a PVC, you can connect it to a Deployment, either through the Actions -> Add Storage option in the OpenShift UI when viewing the Deployment, or by editing the YAML.

This works exactly the same way as above, except that in the volumes section we reference the existing PersistentVolumeClaim through the attribute persistentVolumeClaim.claimName. In this example we are mounting the PVC called data-1-test at the path /data/.

spec:
  ...
  template:
    spec:
      volumes:
        - name: data-1-test
          persistentVolumeClaim:
            claimName: data-1-test
      ...
      containers:
        - name: crunchy-pgadmin4
          volumeMounts:
            - name: data-1-test
              mountPath: /data/

You can mix and match emptyDir and PersistentVolumeClaim volumes, but do note that for scaling purposes a PVC created as RWO (ReadWriteOnce) means that all pods mounting it have to run on the same node.
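
As a sketch that reuses the names from the examples above (the scratch volume name tmp-scratch and its mount path are made up), the two volume types can live side by side in the same pod spec:

spec:
  template:
    spec:
      volumes:
        # Persistent data that survives pod restarts (RWO: all pods using it must run on one node).
        - name: data-1-test
          persistentVolumeClaim:
            claimName: data-1-test
        # Scratch space that disappears with the pod.
        - name: tmp-scratch
          emptyDir: {}
      containers:
        - name: crunchy-pgadmin4
          volumeMounts:
            - name: data-1-test
              mountPath: /data/
            - name: tmp-scratch
              mountPath: /tmp/scratch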

Available storage classes

Currently, the Tike container platform only has RWO-type storage available. This is due to licensing limitations with the vSphere CSI driver and the vSAN service. We are actively working on adding other storage options, such as NetApp Trident and nfs-csi-driver, and are looking at multiple other solutions.