[HOW TO] Recover Large MailWizz Installation

Jamie Whittingham

Active Member
So we recently decided to migrate our entire database to a new MariaDB Galera Cluster. We were excited by the results we had seen in dev, but we faced one big issue ... how to import a mysqldump file that was over 100 GB in size.
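
For context, the "native mysql command line tool" approach mentioned below is the standard one-shot import, shown here with the user and database names used later in this post:

Code:
mysql -u mailwizz_db_user -p mailwizz_installation < database.sql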

We tried several times to use the native mysql command line tool but always seemed to run into issues, so we decided to break our database.sql file into multiple parts. Now, if you still have your production database server running then this is easy enough, and here is a little script we found online to help with this task.

Code:
#!/bin/bash

# dump-tables-mysql.sh
# Descr: Dump MySQL table data into separate SQL files for a specified database.
# Usage: Run without args for usage info.
# Author: @Trutane
# Ref: http://stackoverflow.com/q/3669121/138325
# Notes:
#  * Script will prompt for password for db access.
#  * Output files are compressed and saved in the current working dir, unless DIR is
#    specified on command-line.

[ $# -lt 3 ] && echo "Usage: $(basename $0) <DB_HOST> <DB_USER> <DB_NAME> [<DIR>]" && exit 1

DB_host=$1
DB_user=$2
DB=$3
DIR=$4

[ -n "$DIR" ] || DIR=.
test -d "$DIR" || mkdir -p "$DIR"

echo -n "DB password: "
read -s DB_pass
echo
echo "Dumping tables into separate SQL command files for database '$DB' into dir=$DIR"

tbl_count=0

for t in $(mysql -NBA -h "$DB_host" -u "$DB_user" -p"$DB_pass" -D "$DB" -e 'show tables')
do
    echo "DUMPING TABLE: $DB.$t"
    mysqldump -h "$DB_host" -u "$DB_user" -p"$DB_pass" "$DB" "$t" | gzip > "$DIR/$DB.$t.sql.gz"
    tbl_count=$(( tbl_count + 1 ))
done

echo "$tbl_count tables dumped from database '$DB' into dir=$DIR"

Save this as a standard .sh file, make it executable, and off you go.
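
For example, assuming you saved it as dump-tables-mysql.sh and reuse the database names from later in this post (the host and output dir are placeholders):

Code:
chmod +x dump-tables-mysql.sh
./dump-tables-mysql.sh localhost mailwizz_db_user mailwizz_installation ./dumps

Each resulting .sql.gz can then be restored on the new server one table at a time, e.g. gunzip < ./dumps/mailwizz_installation.mw_article.sql.gz | mysql -u mailwizz_db_user -p mailwizz_installation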

However, if all you have is the database.sql file generated by the MailWizz Backup Manager, then you will run into the same issues with large files ... so how does one account for this scenario?

Well ... the answer is to split your database.sql file into per-table files, and that can be done by following these simple steps:

1. Install Node.js
2. Install mysqldumpsplit globally, so the command is available on your PATH
Code:
npm install -g mysqldumpsplit
3. Run mysqldumpsplit
Code:
mysqldumpsplit database.sql

This will break the main database.sql file into smaller per-table .sql files which are easier to work with. Now this is a great step forward, but no one wants to sit there importing them by hand in the correct order, so we have written a small PHP script to handle this part.


PHP:
<?php

$database_user = 'mailwizz_db_user';
$database_name = 'mailwizz_installation';

$folder = '/root/sql_playground/';

if ($handle = opendir($folder)) {
    while (false !== ($entry = readdir($handle))) {
        // skip the dot entries, the original dump and this script itself
        if ($entry != "database.sql" && $entry != "script.php" && $entry != "." && $entry != "..") {
            echo "Working on ".$folder.$entry."\n";

            // escape the path once for the shell commands below
            $file = escapeshellarg($folder.$entry);

            // prepend a line disabling foreign key checks, so import order doesn't matter
            shell_exec("sed -i '1i SET FOREIGN_KEY_CHECKS=0;' ".$file);

            // append a line re-enabling foreign key checks
            shell_exec('echo "SET FOREIGN_KEY_CHECKS=1;" >> '.$file);

            // import this table's file (mysql prompts for the password each time)
            shell_exec("mysql -u".$database_user." -p ".$database_name." < ".$file);
        }
    }

    closedir($handle);
}
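
For reference, after the script has touched a file, each per-table dump ends up wrapped like this, which is exactly why the import order stops mattering:

Code:
SET FOREIGN_KEY_CHECKS=0;
-- original CREATE TABLE / INSERT statements for that table ...
SET FOREIGN_KEY_CHECKS=1;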

Edit the vars at the top, save the file as "script.php", and place it next to database.sql.

This script will then process each per-table SQL file, skipping the main database.sql file, and import it into your database.
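
To kick it off, run the script with the CLI PHP binary from inside the folder set in $folder:

Code:
cd /root/sql_playground
php script.php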

Hope this helps
 
Runs like a dream, I'm running it right now:

Code:
Working on /root/sql_playground/sr_april_2_2017-mw_article_to_category.sql
Enter password:
Working on /root/sql_playground/sr_april_2_2017-mw_price_plan_order_note.sql
Enter password:
Working on /root/sql_playground/sr_april_2_2017-mw_company_type.sql
Enter password:
Working on /root/sql_playground/sr_april_2_2017-mw_bounce_server.sql
Enter password:
Working on /root/sql_playground/sr_april_2_2017-mw_customer_company.sql
Enter password:
Working on /root/sql_playground/sr_april_2_2017-mw_list_field.sql
Enter password:
Working on /root/sql_playground/sr_april_2_2017-mw_transactional_email_log.sql
Enter password:
Working on /root/sql_playground/sr_april_2_2017-mw_price_plan.sql
Enter password:
Working on /root/sql_playground/sr_april_2_2017-mw_article.sql
Enter password:
Working on /root/sql_playground/sr_april_2_2017-mw_campaign_temporary_source.sql
Enter password:
Working on /root/sql_playground/sr_april_2_2017-mw_zone.sql
Enter password:
Working on /root/sql_playground/sr_april_2_2017-mw_customer_message.sql
Enter password:
Working on /root/sql_playground/sr_april_2_2017-mw_list_segment_operator.sql
Enter password:
Working on /root/sql_playground/sr_april_2_2017-mw_customer_group.sql
Enter password:
Working on /root/sql_playground/sr_april_2_2017-sr_fb_apps.sql
Enter password:
Working on /root/sql_playground/sr_april_2_2017-mw_campaign_option.sql
Enter password:
Working on /root/sql_playground/sr_april_2_2017-mw_user_group.sql
Enter password:
Working on /root/sql_playground/sr_april_2_2017-mw_currency.sql
Enter password:
Working on /root/sql_playground/sr_april_2_2017-mw_transactional_email.sql
Enter password:
Working on /root/sql_playground/sr_april_2_2017-mw_delivery_server_usage_log.sql

I think I will update the PHP file so it takes the password as a var and passes it through to the mysql command, so you don't have to enter the password for each table.
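
For anyone who wants to try that before an updated script appears, here is a minimal sketch of the idea: export the password through the MYSQL_PWD environment variable, which the mysql client reads instead of prompting. The $database_pass variable is my own addition, not part of the original script:

PHP:
<?php

$database_user = 'mailwizz_db_user';
$database_pass = 'your_password_here'; // hypothetical var, replaces the interactive -p prompt
$database_name = 'mailwizz_installation';

// one split file as an example; in the real script this would sit inside the loop
$file = escapeshellarg('/root/sql_playground/some_table.sql');

// MYSQL_PWD is picked up by the mysql client, so no "Enter password:" prompt appears.
// Trade-off: the password is visible in this process's environment while it runs.
shell_exec('MYSQL_PWD='.escapeshellarg($database_pass)
    .' mysql -u'.escapeshellarg($database_user)
    .' '.escapeshellarg($database_name)
    .' < '.$file);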
 
Thanks again for posting the above. If you test/implement further database solutions, it'd be interesting to see comparisons with amazon aurora (e.g. re mwz speed).
 