Best practice for adding persistent storage to a running workload?

Created a container recently that I thought would be okay with just a state database backing it, but I now realise that I need to add some persistent storage in order to provision custom plugins that survive restarts 🤦

Is there a way to do this that's considered best practice, given that it's already a less-than-ideal scenario, and which poses the least risk of data corruption or disruption to the running workload?

I've seen conflicting tutorials online - some walk you through creating a PV, then a PVC, then binding it to the workload - while others say that the correct approach on modern GKE is to simply create the PVC and let GKE handle the StorageClass mapping and provisioning automatically.

Is that the case with standard (ie non-Autopilot) clusters too?

2 REPLIES

Hello @danielrosehill, once the PVC is created, GKE will dynamically provision the storage using the default StorageClass or a specific one you designate. This is a convenient and efficient way to give your containers persistent storage, as GKE handles the details behind the scenes. After that, you can simply mount the PVC in your pods.
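As a rough sketch (the claim name and size below are placeholders), the claim alone is enough on a Standard cluster as well: GKE ships a default StorageClass (typically `standard-rwo`, backed by the Compute Engine Persistent Disk CSI driver on recent versions), and the disk is generally provisioned only once a pod actually consumes the claim:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: plugin-data            # hypothetical claim name
spec:
  # Omitting storageClassName falls back to the cluster's default
  # StorageClass; set it explicitly if you want a specific one.
  accessModes:
    - ReadWriteOnce            # one node mounts the disk read-write
  resources:
    requests:
      storage: 10Gi            # placeholder size
```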

Instead of using GKE's automatic storage provisioning, you can also rely on external storage provisioning systems, or ones integrated into the cluster (running as workloads alongside your other microservices), such as Longhorn or Ceph. These let you manage the storage directly from a web UI if you expose their respective dashboards 🙂
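If you go that route, the consumption side looks the same: the external system registers its own StorageClass, and you reference it by name in the claim. A sketch assuming Longhorn is installed and exposes its default `longhorn` StorageClass:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: plugin-data            # hypothetical claim name
spec:
  storageClassName: longhorn   # StorageClass registered by the Longhorn install
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi            # placeholder size
```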

No matter how you add a PV to the pod, the pod is going to be restarted / redeployed.  Do you have data in/on the pod's ephemeral storage that will need to survive a restart?
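To make that concrete: mounting the claim means editing the pod template, and any change to the template causes the Deployment controller to roll the pod. A hedged sketch, assuming a single-replica Deployment, a container named `app`, and the `plugin-data` claim from the earlier reply (the image and mount path are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: plugin-host                         # hypothetical workload name
spec:
  replicas: 1                               # single replica to match ReadWriteOnce
  selector:
    matchLabels:
      app: plugin-host
  template:
    metadata:
      labels:
        app: plugin-host
    spec:
      containers:
        - name: app                         # assumed container name
          image: us-docker.pkg.dev/example/repo/app:latest   # placeholder image
          volumeMounts:
            - name: plugin-storage
              mountPath: /var/lib/plugins   # assumed plugin directory
      volumes:
        - name: plugin-storage
          persistentVolumeClaim:
            claimName: plugin-data          # the PVC from the earlier reply
```

Applying this (e.g. with `kubectl apply -f`) triggers a rolling replacement of the pod, so anything currently sitting in the pod's ephemeral filesystem that you want to keep should be copied out first, for example with `kubectl cp`.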
