
How would you usually back up your database?

I am using MySQL.

How often do you back up your database?

How do you normally back up your database?

Do you export all the data into SQL or CSV format and keep it in a folder?

Our standard setup is an HA cluster with two databases, one replicating to the other, and the replica is read-only.

We do a full backup once a day and then have a per-customer policy for weeding out old backups. Generally we keep the last 4 daily backups (to survive the weekend ;), the last 4 Sundays, and the last 4 first-Sundays-of-the-month. After that, one or two dumps a year go to the archive to be kept forever.
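The daily-pruning part of a policy like this can be sketched with `find`'s `-mtime` test. Here is a minimal, runnable example against a scratch directory (the `KEEP` value and file names are placeholders; the weekly/monthly rotation would need extra date logic on top):

```shell
#!/bin/sh
# Minimal sketch of age-based pruning: delete backups older than KEEP days.
# Runs against a throwaway scratch directory so it is safe to try out.
KEEP=4
DIR=$(mktemp -d)

touch "${DIR}/recent.sql.gz"                  # mtime = now, survives pruning
touch -t 202001010000 "${DIR}/ancient.sql.gz" # mtime far in the past, pruned

find "${DIR}" -mindepth 1 -mtime +"${KEEP}" -delete

SURVIVORS=$(ls "${DIR}")
echo "${SURVIVORS}"   # only recent.sql.gz is left
rm -rf "${DIR}"
```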

We also keep the replication logs for however long we can afford the disk space. They are also pretty useful as a debugging tool, as they record exactly who changed what and when.

Theoretically all you need is one full backup plus all the replication logs since it to do a point-in-time restore, but more frequent full backups will speed up the restore.
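As a sketch of that point-in-time restore, assuming hypothetical dump/binlog paths and a made-up stop time (the commands are echoed as a dry run rather than executed, since they need a live server):

```shell
#!/bin/sh
# Point-in-time restore sketch: reload the last full dump, then replay the
# binary logs up to just before the failure. All paths, names, and times
# here are hypothetical placeholders; commands are echoed, not executed.
DUMP="/var/local/db_backups/mydb-20240101-0300.sql"
BINLOG="/var/lib/mysql/mysql-bin.000042"
STOP="2024-01-01 14:59:00"

RESTORE_CMD="mysql -u root -p mydb < ${DUMP}"
REPLAY_CMD="mysqlbinlog --stop-datetime='${STOP}' ${BINLOG} | mysql -u root -p"

echo "${RESTORE_CMD}"
echo "${REPLAY_CMD}"
```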

One neat trick with backups is to use InnoDB tables and the --single-transaction parameter for mysqldump; that way the backup won't block the database while it runs.
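A minimal sketch of such a dump follows; the database name and credentials are placeholders, and the command is echoed as a dry run:

```shell
#!/bin/sh
# Consistent, non-blocking dump of an InnoDB database: --single-transaction
# opens one snapshot transaction instead of locking tables; --quick streams
# rows instead of buffering them in memory. Names here are placeholders.
DB="mydb"
DUMP_CMD="mysqldump --single-transaction --quick -u root -p ${DB}"

echo "${DUMP_CMD} > /var/local/db_backups/${DB}.sql"
```

Note that --single-transaction only gives a consistent snapshot for transactional engines like InnoDB; MyISAM tables are still dumped as-is.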


I use a script that uses mysqldump to extract the data/schema to a file for each database. The data is backed up by the normal NetBackup run to tape. You can obviously add further bells and whistles, but this does a simple, basic dump.

#!/bin/sh
# Find out what databases are in mysql and back them up
# Delete old backups


STARTTIME=` date +%Y%m%d-%H%M `

#BACKUP_DIR="/usr/local/db_backups"
BACKUP_DIR="/var/local/db_backups"
LOGFILE="/var/log/db_backups.log"
USER="root"
PASSWD="<password>"
KEEP="7"

(
echo
echo " ---MySQL backups start ${STARTTIME} ---"
#delete any backup written more than ${KEEP} days ago
echo "Removing files over ${KEEP} days old from ${BACKUP_DIR}:"
/usr/bin/find  ${BACKUP_DIR} -mindepth 1 -mtime +${KEEP} -print -delete



echo
echo "Performing today's dumps"
#find each database running in this instance of mysql
for DB in ` echo "show databases;"|mysql -u${USER} -p${PASSWD} mysql |awk 'NR>1 {print $1}' `
do
        #generate a backup file name based on the data base name
        BACKUP_FILE="${BACKUP_DIR}/${DB}-${STARTTIME}.sql"
        echo "Processing database ${DB} into file ${BACKUP_FILE}"
        # dump the database data/schema into the backup file
        mysqldump -u${USER} -p${PASSWD} --add-drop-table ${DB} > ${BACKUP_FILE}
        gzip ${BACKUP_FILE}
done

ENDTIME=` date +%Y%m%d-%H%M `
echo
echo " ---MySQL backups complete ${ENDTIME} ---"
echo
) >>  ${LOGFILE} 2>&1

If you want to do MySQL backups properly, without any downtime, you should replicate the database to a spare server. It doesn't need to be hugely powerful; it just needs to keep up with the write load of your master database. You shouldn't use this server in production. If the replication never catches up, you need a more powerful server. You can check by comparing the log file and position from the output of

 > SHOW MASTER STATUS\G

on the master and

 > SHOW SLAVE STATUS\G

on the slave. I think MySQL 5 will show the lag directly (as Seconds_Behind_Master) in the SHOW SLAVE STATUS output.
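One way to watch that lag from a script, assuming a MySQL version that reports Seconds_Behind_Master (credentials are placeholders; the command is echoed as a dry run):

```shell
#!/bin/sh
# Sketch: extract Seconds_Behind_Master from SHOW SLAVE STATUS\G output.
# awk matches the field name and prints its value; credentials are
# placeholders and the command is echoed rather than executed.
CHECK_CMD="mysql -u root -p -e 'SHOW SLAVE STATUS\G' | awk '/Seconds_Behind_Master/ {print \$2}'"

echo "${CHECK_CMD}"
```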

When you're happy that your slave is keeping up, you can do your backups as follows:

  1. Stop replication with STOP SLAVE; on the slave.
  2. Do a mysqldump --opt on the slave server.
  3. Start replication again with START SLAVE; on the slave.
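The three steps above could be strung together on the slave roughly like this (credentials and the database name are placeholders; the commands are echoed as a dry run):

```shell
#!/bin/sh
# Stop replication, take a consistent dump, restart replication.
# Placeholders throughout; echoed as a dry run rather than executed.
DB="mydb"
STOP_CMD="mysql -u root -p -e 'STOP SLAVE;'"
DUMP_CMD="mysqldump --opt -u root -p ${DB} > /var/local/db_backups/${DB}.sql"
START_CMD="mysql -u root -p -e 'START SLAVE;'"

echo "${STOP_CMD}"
echo "${DUMP_CMD}"
echo "${START_CMD}"
```

In a real script you would want error handling so that a failed dump still restarts replication.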

If you do this, you will have a consistent backup of your databases. This method prevents different databases, or even worse, different tables in the same database, from being out of sync, and it avoids the downtime caused by locking tables for writes while you do the backup.

A nice benefit of this setup is that you have a copy of your database you can use to run long expensive queries on that won't affect your live service.

Couple of random tips:

  • Don't be tempted to do a file based backup of the mysql data files. It's more hassle than it's worth and MySQL dumps are more flexible.
  • Beware mysqldump locking tables during dumping.
  • Beware of inconsistencies in dumps unless you lock every table during the dump.
  • Use mysqldump --opt, as it's usually the fastest way to import the resulting SQL
  • Dump as often as you can. We dump daily because we have 40GB+ of data.
  • Test your dumps on a spare server occasionally to make sure they work.
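Restore-testing a gzipped dump on a spare server can be as simple as piping it back through the client. A sketch with hypothetical file and database names, echoed as a dry run:

```shell
#!/bin/sh
# Restore a gzipped dump into a scratch database on a spare server to
# verify the backup actually loads. Names are placeholders; dry run only.
DUMP_GZ="/var/local/db_backups/mydb-20240101-0300.sql.gz"
RESTORE_CMD="gunzip -c ${DUMP_GZ} | mysql -u root -p mydb_restore_test"

echo "${RESTORE_CMD}"
```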