Back up Kubernetes MySQL operator clusters


Oracle's MySQL Operator for Kubernetes is a convenient way to automate MySQL database provisioning within your cluster. One of the operator's key features is integrated hands-off backup support that increases your resiliency. Backups copy your database to external storage on a recurring schedule.

This article will help you set up backups to an Amazon S3-compatible object storage service. You will also see how to store backups in Oracle Cloud Infrastructure (OCI) storage or local persistent volumes within your cluster.

Prepare a database cluster

Install the MySQL operator in your Kubernetes cluster and create a simple database instance for testing. Copy the YAML below and save it in mysql.yaml:

apiVersion: v1
kind: Secret
metadata:
  name: mysql-root-user
stringData:
  rootHost: "%"
  rootUser: "root"
  rootPassword: "P@$$w0rd"
 
---

apiVersion: mysql.oracle.com/v2
kind: InnoDBCluster
metadata:
  name: mysql-cluster
spec:
  secretName: mysql-root-user
  instances: 3
  tlsUseSelfSigned: true
  router:
    instances: 1

Use kubectl to apply the manifest:

$ kubectl apply -f mysql.yaml

Wait a few minutes while the MySQL operator provisions your pods. Use kubectl's get pods command to check progress. You should see four running pods: one MySQL router instance and three MySQL server replicas.

$ kubectl get pods
NAME                                    READY   STATUS    RESTARTS   AGE
mysql-cluster-0                         2/2     Running   0          2m
mysql-cluster-1                         2/2     Running   0          2m
mysql-cluster-2                         2/2     Running   0          2m
mysql-cluster-router-6b68f9b5cb-wbqm5   1/1     Running   0          2m

Defining a backup plan

The MySQL operator needs two components to take a backup:

  • A backup schedule which determines when the backup is performed.
  • A backup profile which configures the save location and MySQL export options.

Schedules and profiles are independent of each other. This allows you to run multiple backups on different schedules while sharing the same profile.

Each schedule and profile is associated with a specific database cluster. They are created as nested resources within your InnoDBCluster objects. Each database you create with the MySQL operator needs its own backup configuration.

Backup schedules are defined in your database's spec.backupSchedules field. Each item requires a schedule field that specifies when to run the backup using a cron expression. Here's an example that starts a backup every hour:

apiVersion: mysql.oracle.com/v2
kind: InnoDBCluster
metadata:
  name: mysql-cluster
spec:
  secretName: mysql-root-user
  instances: 3
  tlsUseSelfSigned: true
  router:
    instances: 1
  backupSchedules:
    - name: hourly
      enabled: true
      schedule: "0 * * * *"
      backupProfileName: hourly-backup

The backupProfileName field refers to the backup profile to use. You create this in the next step.
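Because schedules and profiles are decoupled, several schedules can reference the same profile. Here's a minimal sketch with an additional daily schedule; the "daily" entry's name and timing are illustrative:

```yaml
# Sketch: two schedules sharing one backup profile.
# The "daily" entry is illustrative; comments describe each cron expression.
backupSchedules:
  - name: hourly
    enabled: true
    schedule: "0 * * * *"       # minute 0 of every hour
    backupProfileName: hourly-backup
  - name: daily
    enabled: true
    schedule: "30 2 * * *"      # every day at 02:30
    backupProfileName: hourly-backup
```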

Create backup profiles

Profiles are defined in the spec.backupProfiles field. Every profile needs a name and a dumpInstance property that configures the backup operation.

apiVersion: mysql.oracle.com/v2
kind: InnoDBCluster
metadata:
  name: mysql-cluster
spec:
  secretName: mysql-root-user
  instances: 3
  tlsUseSelfSigned: true
  router:
    instances: 1
  backupSchedules:
    - name: hourly
      enabled: true
      schedule: "0 * * * *"
      backupProfileName: hourly-backup
  backupProfiles:
    - name: hourly-backup
      dumpInstance:
        storage:
          # ...

Backup storage is configured per profile in the dumpInstance.storage field. The properties you must provide depend on the type of storage you're using.

S3 Storage

The MySQL operator can upload your backups directly to S3-compatible object storage providers. To use this method, you need to create a Kubernetes secret that contains an AWS CLI-style configuration file with your credentials.

Save the following content to s3-secret.yaml:

apiVersion: v1
kind: Secret
metadata:
  name: s3-secret
stringData:
  credentials: |
    [default]
    aws_access_key_id = YOUR_S3_ACCESS_KEY
    aws_secret_access_key = YOUR_S3_SECRET_KEY

Substitute your own S3 access and secret keys, then use kubectl to create the secret:

$ kubectl apply -f s3-secret.yaml
secret/s3-secret created

Next, add the following fields to your backup profile's storage.s3 section:

  • bucketName – The name of the S3 bucket to which you want to upload your backups.
  • prefix – Set this to apply a prefix to your uploaded files, such as /my-app/mysql. The prefix allows you to create folder structures within your bucket.
  • endpoint – Set this to your service provider’s URL when using third-party S3-compatible storage. You can omit this field if you are using Amazon S3.
  • config – The name of the secret that contains your credentials file.
  • profile – The name of the configuration profile to use from the credentials file. This was set to default in the example above.

Here’s a complete example:

apiVersion: mysql.oracle.com/v2
kind: InnoDBCluster
metadata:
  name: mysql-cluster
spec:
  secretName: mysql-root-user
  instances: 3
  tlsUseSelfSigned: true
  router:
    instances: 1
  backupSchedules:
    - name: hourly
      enabled: true
      schedule: "0 * * * *"
      backupProfileName: hourly-backup
  backupProfiles:
    - name: hourly-backup
      dumpInstance:
        storage:
          s3:
            bucketName: backups
            prefix: /mysql
            config: s3-secret
            profile: default

Applying this manifest will trigger hourly database backups to your S3 account.
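When you target a third-party S3-compatible service rather than Amazon S3, include the endpoint field described above. Here's a sketch; the endpoint URL is a placeholder for your provider's API endpoint:

```yaml
# Sketch: backing up to a third-party S3-compatible provider.
# The endpoint URL is a placeholder; substitute your provider's value.
backupProfiles:
  - name: hourly-backup
    dumpInstance:
      storage:
        s3:
          bucketName: backups
          prefix: /mysql
          endpoint: https://objects.example.com
          config: s3-secret
          profile: default
```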

OCI Storage

The operator supports Oracle Cloud Infrastructure (OCI) object storage as an alternative to S3. It is configured in a similar way. Create a secret for your OCI credentials first:

apiVersion: v1
kind: Secret
metadata:
  name: oci-secret
stringData:
  fingerprint: YOUR_OCI_FINGERPRINT
  passphrase: YOUR_OCI_PASSPHRASE
  privatekey: YOUR_OCI_RSA_PRIVATE_KEY
  region: us-ashburn-1
  tenancy: YOUR_OCI_TENANCY
  user: YOUR_OCI_USER

Then configure the backup profile with a storage.ociObjectStorage stanza:

apiVersion: mysql.oracle.com/v2
kind: InnoDBCluster
metadata:
  name: mysql-cluster
spec:
  secretName: mysql-root-user
  instances: 3
  tlsUseSelfSigned: true
  router:
    instances: 1
  backupSchedules:
    - name: hourly
      enabled: true
      schedule: "0 * * * *"
      backupProfileName: hourly-backup
  backupProfiles:
    - name: hourly-backup
      dumpInstance:
        storage:
          ociObjectStorage:
            bucketName: backups
            prefix: /mysql
            credentials: oci-secret

Change the bucketName and prefix fields to set the upload location in your OCI account. The credentials field should refer to the secret that contains your OCI credentials.

Kubernetes volume storage

Local persistent volumes are a third storage option. This is less robust because your backup data is still in your Kubernetes cluster. However, it can be useful for one-time backups and testing purposes.

First, create a persistent volume and its claim:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: backup-pv
spec:
  storageClassName: standard
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp
 
---

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: backup-pvc
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

This sample manifest is not suitable for production use. You should select an appropriate storage class and volume mounting mode for your Kubernetes distribution.

Next, configure your backup profile to use your persistent volume by adding a storage.persistentVolumeClaim field:

apiVersion: mysql.oracle.com/v2
kind: InnoDBCluster
metadata:
  name: mysql-cluster
spec:
  secretName: mysql-root-user
  instances: 3
  tlsUseSelfSigned: true
  router:
    instances: 1
  backupSchedules:
    - name: hourly
      enabled: true
      schedule: "0 * * * *"
      backupProfileName: hourly-backup
  backupProfiles:
    - name: hourly-backup
      dumpInstance:
        storage:
          persistentVolumeClaim:
            claimName: backup-pvc

The claimName field references the persistent volume claim you created earlier. The MySQL operator will now deposit backup data onto the volume.
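Scheduled backups aside, the operator also accepts one-off backup requests through its MySQLBackup resource, which is handy for the testing scenario mentioned above. A minimal sketch, reusing the cluster and profile names from earlier (check the operator's reference documentation for the exact schema of your version):

```yaml
# Sketch: a one-off backup request referencing an existing cluster and profile.
apiVersion: mysql.oracle.com/v2
kind: MySQLBackup
metadata:
  name: manual-backup
spec:
  clusterName: mysql-cluster
  backupProfileName: hourly-backup
```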

Set backup options

Backups are made using MySQL Shell's dumpInstance utility. By default, this exports a complete dump of your server. The format writes structure and chunked data files for each table. The output is compressed with zstd.

You can pass options to dumpInstance using the dumpOptions field of a backup profile's dumpInstance section:

apiVersion: mysql.oracle.com/v2
kind: InnoDBCluster
metadata:
  name: mysql-cluster
spec:
  # ...
  backupProfiles:
    - name: hourly-backup
      dumpInstance:
        dumpOptions:
          chunking: false
          compression: gzip
        storage:
          # ...

This example disables chunked output, creating one data file per table, and switches to gzip compression instead of zstd. You can find a full reference for the available options in the MySQL documentation.
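Other MySQL Shell dump options can be passed the same way. As an example, threads and excludeSchemas are dumpInstance options that control dump parallelism and schema filtering; verify them against the MySQL Shell documentation for your version:

```yaml
# Sketch: additional dumpInstance options (confirm the option names
# against your MySQL Shell version's documentation).
backupProfiles:
  - name: hourly-backup
    dumpInstance:
      dumpOptions:
        threads: 8                # parallel threads used for the dump
        excludeSchemas: ["test"]  # schemas to leave out of the backup
      storage:
        # ...
```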

Restore a backup

The MySQL operator can initialize new database clusters using previously created dumpInstance output. This allows you to restore your backups straight into your Kubernetes cluster. It's useful in recovery situations or when you're migrating an existing database to Kubernetes.

Database initialization is controlled by the spec.initDB field on your InnoDBCluster objects. Within this stanza, use the dump.storage object to point to the backup location you used earlier. Its syntax matches the equivalent dumpInstance.storage field in backup profile objects.

apiVersion: v1
kind: Secret
metadata:
  name: s3-secret
stringData:
  credentials: |
    [default]
    aws_access_key_id = YOUR_S3_ACCESS_KEY
    aws_secret_access_key = YOUR_S3_SECRET_KEY

---

apiVersion: mysql.oracle.com/v2
kind: InnoDBCluster
metadata:
  name: mysql-cluster-recovered
spec:
  secretName: mysql-root-user
  instances: 3
  tlsUseSelfSigned: true
  router:
    instances: 1
  initDB:
    dump:
      storage:
        s3:
          bucketName: backups
          prefix: /mysql/mysql20221031220000
          config: s3-secret
          profile: default

Applying this YAML file will create a new database cluster initialized with the dumpInstance output in the specified S3 bucket. The prefix field must contain the full path to the dump files within the bucket. Backups created by the operator are automatically saved in time-stamped folders; you need to specify which one to recover by setting the prefix. If you're restoring from a persistent volume, use the path field instead of prefix.
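For a persistent volume restore, the storage stanza points at the claim and a path field replaces prefix. The sketch below assumes the backup-pvc claim created earlier and an illustrative dump folder name; the exact field layout can differ between operator versions, so confirm it against the operator's reference:

```yaml
# Sketch: restoring from a persistent volume instead of S3.
# The dump folder name is illustrative; check the volume for the real name.
initDB:
  dump:
    storage:
      persistentVolumeClaim:
        claimName: backup-pvc
    path: /mysql20221031220000
```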

Overview

Oracle’s MySQL operator automates MySQL database management within Kubernetes clusters. In this article, you learned how to configure the operator’s backup system to store entire database dumps in a persistent volume or object storage bucket.

Using Kubernetes to scale MySQL horizontally adds resiliency, but remote backups are still vital in case your cluster is compromised or data is accidentally deleted. The MySQL operator can restore a new database instance from your backup if it's ever needed, simplifying the disaster recovery process.
