Persistent volumes in Kubernetes are storage that is made available to pods but whose lifecycle is not tied to any single pod. In most cases, persistent volumes continue to exist after a pod is deleted or restarted. You'd use a persistent volume to store your database or the files for your web server - any data that is valuable. A Persistent Volume Claim is the request by an application or user for such storage.
Kubernetes abstracts away the actual implementation of the volume, i.e. the pod doesn't see whether the volume is backed by NFS, local block storage, iSCSI, or Quobyte. However, applications and users can request certain volume types and behavior:
Storage Classes can be configured by the administrator to offer different storage backends (local block, shared storage, cloud storage…) or cost (all-flash vs. archival). Users can then pick a storage class to request for their persistent volume in their persistent volume claim.
In addition, the Persistent Volume Claim can also request a specific access mode. This access mode tells Kubernetes how data sharing of the persistent volume across pods should happen:
ReadWriteOnce (RWO): The volume can only be mounted by a single node for read and write access.
ReadOnlyMany (ROX): The volume can be mounted by multiple nodes, but read-only. This is often the case for block-storage-backed volumes like iSCSI, which cannot be written to by multiple nodes.
ReadWriteMany (RWX): Shared storage that can be mounted for read and write at the same time across many nodes. This includes shared network file systems such as NFS or Quobyte.
A Persistent Volume Claim is a request from an application or user to create and mount a persistent volume to the pods of the application. The most important parameter is the name, which must be unique in the cluster.
The next part of a Persistent Volume Claim is the description of what type of storage, and how much of it, is desired. This includes the Storage Class name, the access mode, and the capacity.
An example of a Persistent Volume Claim might look like this:
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: quobyte-csi-test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: quobyte-csi
```
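To actually use the claim, a pod references it by name in its volumes section and mounts it into a container. A minimal sketch might look like this (the pod name, image, and mount path are illustrative; the claimName must match the PVC above):

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: quobyte-csi-test-pod      # illustrative pod name
spec:
  containers:
    - name: app
      image: nginx                # illustrative; any image that needs the data
      volumeMounts:
        - name: data
          mountPath: /data        # where the volume appears inside the container
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: quobyte-csi-test   # must match the PVC's metadata.name
```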
Storage classes are defined by the administrator and abstract away the actual implementation of the storage. Users and applications can request specific storage classes in their persistent volume claims. Admins can create different storage classes based, for example, on application (e.g. database volumes), performance (SSD vs. HDD), or cost (fast vs. cheap).
In the storage class definition, the administrator then ties the class to a specific backend (called a provisioner) and a storage endpoint, such as an NFS server. Additional parameters can be set for the different storage provisioners to pass on information such as the requested media type.
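A storage class definition tying the class to a provisioner might look like the following sketch. The provisioner name and parameter key are illustrative assumptions; the exact values depend on your CSI plugin's documentation:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: quobyte-csi              # the name users put in their PVC's storageClassName
provisioner: csi.quobyte.com     # illustrative; must match the name the CSI plugin registers
reclaimPolicy: Delete            # delete the backing volume when the claim is removed
parameters:
  mediaType: "ssd"               # illustrative provisioner-specific parameter
```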
Read more about how you can connect the Quobyte Policy Engine to Kubernetes Storage Classes to provide different types of volumes from the same cluster.
CSI stands for Container Storage Interface, and CSI plugins are responsible for making persistent storage available to your containerized application. In Kubernetes, CSI plugins listen to the API, and when they receive a persistent volume claim that matches their provisioner, the CSI plugin takes care of creating and mounting the requested volume.
How the CSI plugin itself works and is deployed on a cluster depends on the vendor. Plugins are often deployed as a DaemonSet and run either on a single node (as the controller) or on all nodes (e.g. to mount the requested volumes on the nodes).
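Regardless of how the vendor packages the plugin, the driver also announces itself to the cluster through a CSIDriver object. A minimal sketch, assuming the illustrative driver name from above:

```yaml
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: csi.quobyte.com   # illustrative; matches the provisioner name the plugin registers
spec:
  attachRequired: false   # whether volumes must be attached to a node before mounting
  podInfoOnMount: false   # whether pod metadata is passed to the driver on mount
```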
You can learn more about the Quobyte CSI Plugin here, or read our tutorial on how to deploy it on your Kubernetes cluster.
How to set up shared file system (RWX) persistent volumes on Kubernetes with Quobyte
How to connect Kubernetes StorageClasses to the Quobyte Policy Engine
Consolidate your Kubernetes Storage with Quobyte's Multi-tenancy and Self-service.
How to combine flash and HDD in Quobyte for fast and cost-effective Persistent Volumes