Backup Volumes
Why you need to think about the backup process.
Thanks to its Side Tasks functionality, LayerOps lets you run containers before and after the main container. This makes it possible to build (re)deployment strategies, including the actions to take on persistent data. When you deploy a service with a persistent volume, and therefore data that needs to be backed up, LayerOps lets you handle just about any scenario you can imagine.
Pin your service to a specific instance: you can configure your service to always run on a particular machine to ensure data persistence. See the deployment constraints section.
Set up a backup and restore mechanism: it is strongly recommended to configure a backup system. While LayerOps does not yet provide automatic backups (coming soon), you can set up a Side Task to manage this process.
Here are a few concrete examples.
Side Tasks: preStart: a pre-task, launched before the main container, that checks whether data already exists in the main container's persistent volume; if not, it restores the data from an external S3 service.
Side Tasks: postStart: a post-task, launched after the main container, that runs another container with access to the persistent volume; it compresses the specified data, optionally encrypts it, and sends it to an S3 service external to the platform. This task runs on a schedule.
Backup Configuration with environment variables
Add sideTasks to your service:
sideTasks:
  - dockerConfiguration: # Restore: before the main service starts, check whether data already exists in the $BACKUP_SOURCES paths and restore it from S3 if it does not
      image: registry.nimeops.net/layerops-public/marketplace/backup-volumes
      imageVersion: 2.0.0
      command:
        - /usr/local/bin/restore
    type: preStart
    isLongLived: false
  - dockerConfiguration: # Backup: after the main service is launched, run as a cron job on the schedule defined by $BACKUP_CRON_PERIOD
      image: registry.nimeops.net/layerops-public/marketplace/backup-volumes
      imageVersion: 2.0.0
    type: postStart
    isLongLived: true
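Note that the backup side task does not declare a command: it relies on the image's default command (/usr/local/bin/cron, see the periodic backup section below), whereas the restore side task explicitly overrides it with /usr/local/bin/restore.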
Add the environment variables to your service:
BACKUP_SERVICE_NAME: MY_SERVICE_NAME # name of the service, used to name the backup archive file
BACKUP_SOURCES: "/var/www/" # list of paths to back up (separated by ";")
BACKUP_S3_PROVIDER: Scaleway # must be an S3 provider supported by rclone's S3 storage provider (see https://rclone.org/s3/ )
BACKUP_S3_ENDPOINT: s3.fr-par.scw.cloud # the S3 remote endpoint URL
BACKUP_S3_BUCKET: my-backup-bucket # the S3 bucket name
BACKUP_S3_PATH: nextcloud # (optional) the S3 subdirectory inside the bucket
BACKUP_S3_REGION: fr-par # the S3 remote region
BACKUP_S3_ACCESS_KEY: # (sensitive) the S3 access key
BACKUP_S3_SECRET_KEY: # (sensitive) the S3 secret key
BACKUP_CRON_PERIOD: "0 * * * *" # (optional) if set, overrides the default cron schedule (this example runs every hour)
BACKUP_ENCRYPT_KEY: # (optional) if set, the backup is encrypted with the given key
BACKUP_REMOTE_DIR: # (optional) remote directory name (defaults to "files")
BACKUP_RETENTION: "7" # (optional) backup retention in days (defaults to 7)
BACKUP_COMPRESSION: "targz" # (optional) compression type: zstd (default) or targz
BACKUP_RESTORE_DATE: # (optional) date of the archive to restore, e.g. 2022_12_26_142528 (for 2022_12_26_142528_var_www.tar.gz)
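As an illustration, here is a hypothetical set of values for a service that backs up /var/www/ and a second data directory to a Scaleway bucket every night at 03:00, keeps two weeks of archives, and encrypts them. The service name, paths, bucket and schedule are examples, not defaults:

BACKUP_SERVICE_NAME: nextcloud
BACKUP_SOURCES: "/var/www/;/var/lib/nextcloud/data/" # two paths, separated by ";"
BACKUP_S3_PROVIDER: Scaleway
BACKUP_S3_ENDPOINT: s3.fr-par.scw.cloud
BACKUP_S3_BUCKET: my-backup-bucket
BACKUP_S3_PATH: nextcloud
BACKUP_S3_REGION: fr-par
BACKUP_S3_ACCESS_KEY: "<your access key>"
BACKUP_S3_SECRET_KEY: "<your secret key>"
BACKUP_CRON_PERIOD: "0 3 * * *" # every day at 03:00
BACKUP_RETENTION: "14" # keep two weeks of archives
BACKUP_COMPRESSION: "zstd"
BACKUP_ENCRYPT_KEY: "<your encryption passphrase>"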
Use for periodic backups
- Set up the environment variables (see above)
- Run the container with its default command (/usr/local/bin/cron), as a "long lived" postStart task of the main service (see the cron example below). NB: BACKUP_SOURCES should point to paths that are stored within volumes.
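For example, to run the backup once a day at 03:00 instead of every hour as in the example above, override the schedule (a hypothetical value, adjust to your needs):

BACKUP_CRON_PERIOD: "0 3 * * *" # minute 0, hour 3, every day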
Use for restoration before running the main service
- Set up the environment variables (see above)
- Run the container with the command /usr/local/bin/restore, as a preStart task of the main service (see the example below). NB: BACKUP_SOURCES should point to paths that are stored within volumes.
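If you need to restore a specific archive, set BACKUP_RESTORE_DATE to the timestamp of that backup (the date below is a hypothetical example that follows the naming pattern described above):

BACKUP_RESTORE_DATE: "2022_12_26_142528" # restores 2022_12_26_142528_var_www.tar.gz for a "/var/www/" source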
Examples
For more information on setting up Side Tasks, please refer to the Side Tasks section.
For complete examples of how to configure Side Tasks for backup and restore: