Automate Database Backup for Castopod with BASH
Sergi Rodríguez
15-07-2024 18:16
6 min read
Castopod is the ultimate self-hosted solution for podcasters. Streamline your workflow, syndicate across platforms, and monetize your content - all in one powerful, open-source package.
Here I show you how to automate backing up your Castopod database with a single, simple Bash script that you can run every X hours/days, keeping only the last N copies.
Considerations
- Each run generates a backup file such as backup_20240715_173357.sql.gz, using the year, month, day, hour, minute and second to build the filename.
- The dump is gzipped (it takes up roughly 7 times less space).
- It lets you specify how many backups to keep, so you can add this script to your server's cron jobs (scheduled tasks) and run it every X hours/days.
- You must place this file in the same root directory as Castopod's .env file.
- To run it from the server's cron jobs (edit them with: crontab -e), you can add a line like this for a daily backup:
@daily sh /home/podcast.myserver.com/backup_database.sh > /dev/null
- It took me about 3 hours of work to get this script running with these requirements, even with the help of Claude AI: after 8 iterations of code generation, we finally got it this complete!
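The naming and retention behaviour described above can be sketched as a harmless dry run. This is an illustration only, not the real script: the temporary directory and the fake timestamps are mine, while KEEP_BACKUPS and the backup_*.sql.gz pattern match what the script uses.

```shell
# Hypothetical dry run of the naming and retention logic (no database involved).
KEEP_BACKUPS=5
WORKDIR=$(mktemp -d)

# Simulate 7 timestamped backups (same naming pattern the script generates)
for i in 1 2 3 4 5 6 7; do
    touch -t "2024071${i}0000" "${WORKDIR}/backup_2024071${i}_000000.sql.gz"
done

# Keep only the newest KEEP_BACKUPS files, like the script's cleanup step
ls -t "${WORKDIR}"/backup_*.sql.gz | tail -n +$((KEEP_BACKUPS + 1)) | xargs rm

REMAINING=$(ls "${WORKDIR}"/backup_*.sql.gz | wc -l)
echo "backups kept: ${REMAINING}"
rm -rf "$WORKDIR"
```

Running it removes the two oldest fake backups and leaves exactly 5, which is what the real script does with your SQL dumps.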
Bash script
You can download it here: castopod_backup_database.sh.zip
This is the content:
```bash
#!/bin/bash
#
# caos30 & Claude AI - 2024-07-15
#
# - This script dumps the Castopod database
# - It also lets you specify how many backups to keep
# - So you can run it from your cron jobs
# - The DB connection is taken from the .env Castopod file

# Number of backups to keep
KEEP_BACKUPS=5

# Determine the script's directory
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"

# Function to read variables from .env file
read_env() {
    ENV_FILE="${SCRIPT_DIR}/.env"
    if [ -f "$ENV_FILE" ]; then
        while IFS= read -r line || [[ -n "$line" ]]; do
            if [[ -z "$line" || "$line" == \#* ]]; then
                continue
            fi
            if [[ "$line" =~ ^([^=]+)=(.*)$ ]]; then
                key="${BASH_REMATCH[1]}"
                value="${BASH_REMATCH[2]}"
                value=$(echo "$value" | sed -e 's/^"//' -e 's/"$//')
                eval "${key//./_}='$value'"
            else
                echo "Warning: Malformed line in .env file: $line"
            fi
        done < "$ENV_FILE"
    else
        echo "The .env file does not exist in ${SCRIPT_DIR}"
        exit 1
    fi
}

# Read variables from .env file
read_env

# Assign variables
HOST="${database_default_hostname}"
DB="${database_default_database}"
USER="${database_default_username}"
PASS="${database_default_password}"
PREFIX="${database_default_DBPrefix}"

# Backup file name
BACKUP_FILE="${SCRIPT_DIR}/backup_$(date +%Y%m%d_%H%M%S).sql"
COMPRESSED_BACKUP_FILE="${BACKUP_FILE}.gz"

# Function to perform the backup
do_backup() {
    mysqldump --host="$HOST" --user="$USER" --password="$PASS" \
        --skip-lock-tables --no-tablespaces \
        "$DB" > "$BACKUP_FILE"
}

# Function to compress the backup
compress_backup() {
    gzip -f "$BACKUP_FILE"
}

# Function to cleanup old backups
cleanup_old_backups() {
    local backup_files=($(ls -t "${SCRIPT_DIR}"/backup_*.sql.gz 2>/dev/null))
    local count=${#backup_files[@]}
    if [ $count -gt $KEEP_BACKUPS ]; then
        echo "Cleaning up old backups..."
        for ((i=$KEEP_BACKUPS; i<$count; i++)); do
            echo "Removing old backup: ${backup_files[i]}"
            rm "${backup_files[i]}"
        done
    fi
}

# Change to the script's directory
cd "$SCRIPT_DIR"

# Cleanup old backups before creating a new one
cleanup_old_backups

# Perform the database dump
if do_backup; then
    echo "Backup created successfully: $BACKUP_FILE"
    # Compress the backup
    if compress_backup; then
        echo "Backup compressed successfully: $COMPRESSED_BACKUP_FILE"
        # Note: gzip has already removed the original file,
        # so we don't need to do it manually
    else
        echo "Error compressing the backup."
        exit 1
    fi
else
    echo "Error performing database backup."
    exit 1
fi

# Final cleanup to ensure we don't exceed the limit
cleanup_old_backups

echo "Backup process completed."
```
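To restore a backup, you can stream the compressed dump back into MySQL. The mysql command below is a hypothetical sketch (the filename and the HOST/USER/DB values the script reads from .env are placeholders); the rest is a self-contained check, safe to run without any database, showing that gzip round-trips the SQL intact.

```shell
# Hypothetical restore command (careful: it overwrites the target database):
#   gunzip -c backup_20240715_173357.sql.gz | mysql --host="$HOST" --user="$USER" -p "$DB"

# Safe, database-free check that compression preserves the dump:
TMP=$(mktemp -d)
printf 'CREATE TABLE demo (id INT);' > "${TMP}/backup_test.sql"
gzip -f "${TMP}/backup_test.sql"              # same compression step the script uses
gzip -t "${TMP}/backup_test.sql.gz" && echo "archive OK"
RESTORED=$(gunzip -c "${TMP}/backup_test.sql.gz")
rm -rf "$TMP"
```

It's also a good habit to run gzip -t against a backup now and then, so you find out about a corrupt archive before you actually need it.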