Incremental VDS backup with a site on 1C-Bitrix in Yandex.Cloud

I needed to back up a site on "1C-Bitrix: Site Management" 2 times a day (files and the MySQL database) and keep a history of changes for 90 days.

The site runs on a VDS with CentOS 7 and "1C-Bitrix: Web Environment" installed. In addition, I wanted a backup copy of the OS settings.

Requirements:

  • Frequency - 2 times a day;
  • Keep copies for the last 90 days;
  • The ability to get individual files for a specific date, if necessary;
  • The backup must be stored in a data center other than VDS;
  • The ability to access the backup from anywhere (another server, local computer, etc.).

An important point was the ability to create backups quickly, with minimal consumption of extra disk space and system resources.

This is not about a snapshot for quickly restoring the entire system, but about backing up the files and the database along with their change history.

Initial data:

  • VDS on XEN virtualization;
  • OS CentOS 7;
  • 1C-Bitrix: Web environment;
  • Site based on "1C-Bitrix: Site Management", Standard version;
  • The files take up 50 GB and will grow;
  • The database takes up 3 GB and will grow.

The standard backup built into 1C-Bitrix was ruled out immediately. It is suitable only for small sites, because:

  • It makes a complete copy of the site every time, so each copy takes up as much space as the files themselves - in my case 50 GB;
  • The backup runs in PHP, which is unworkable at such volumes: it would overload the server and never finish;
  • And with full copies, storing 90 days of history is, of course, out of the question.

The solution offered by the hoster is a backup disk located in the same data center as the VDS, but on a different server. You can access the disk via FTP and use your own scripts, or, if ISPManager is installed on the VDS, use its backup module. This option is not suitable because it relies on the same data center.

From all of the above, the best choice for me is an incremental backup according to my own scenario in Yandex.Cloud (Object Storage) or Amazon S3 (Amazon Simple Storage Service).

This requires:

  • root access to VDS;
  • installed duplicity utility;
  • account in Yandex.Cloud.

Incremental backup - a method in which only the data that has changed since the previous backup is archived.

duplicity - a backup utility that uses rsync algorithms and can work with Amazon S3.
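As a toy illustration of the incremental idea - copy only what changed since the last run - here is a bash sketch. This is not duplicity's actual mechanism (duplicity stores rsync-style deltas, not whole files), and all paths here are made up for the demo:

```shell
#!/bin/bash
# Toy illustration of incremental backup: copy only files modified
# since the previous run, tracked by a timestamp file.

SRC=./demo_data
DEST=./demo_backup
STAMP=./demo_last_backup.stamp

# Sample data for the demonstration
mkdir -p "$SRC" "$DEST"
echo 'version 1' > "$SRC/a.txt"

backup_changed() {
    if [ -f "$STAMP" ]; then
        # Incremental run: copy only files newer than the timestamp
        find "$SRC" -type f -newer "$STAMP" -exec cp {} "$DEST" \;
    else
        # First run: full copy
        cp "$SRC"/* "$DEST"/
    fi
    touch "$STAMP"
}

backup_changed            # first run: full copy, a.txt is copied
sleep 1
echo 'version 2' > "$SRC/b.txt"
backup_changed            # incremental run: only b.txt is newer than the stamp
```

duplicity does the same comparison at the level of file blocks and wraps the result in encrypted archives, which is why each incremental copy stays small.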

Yandex.Cloud vs Amazon S3

For this task there is no difference for me between Yandex.Cloud and Amazon S3. Yandex supports the main part of the Amazon S3 API, so you can work with it using tools built for S3; in my case, that is the duplicity utility.

The main advantage of Yandex is payment in rubles: with large amounts of data there is no dependence on the exchange rate. In terms of speed, Amazon's European data centers (Frankfurt, for example) are comparable to Yandex's Russian ones. I previously used Amazon S3 for similar tasks; this time I decided to try Yandex.
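Because of this S3 compatibility, the bucket can also be inspected from any machine with standard S3 tooling, which satisfies the "access from anywhere" requirement. A sketch using the AWS CLI (assuming the Yandex access keys are configured via `aws configure`, and that the bucket is named `backup` as in the scripts below):

```shell
# List the contents of a Yandex Object Storage bucket with the AWS CLI.
# --endpoint-url points the client at Yandex instead of Amazon.
aws --endpoint-url=https://storage.yandexcloud.net s3 ls s3://backup/
```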

Setting up Yandex.Cloud

1. You need to create a billing account in Yandex.Cloud. To do this, you need to log in to Yandex.Cloud through your Yandex account or create a new one.

2. Create Cloud.

3. In the "Cloud" create a "Catalog".

4. For the "Catalog" create a "Service account".

5. For the "Service account" create keys.

6. Save the keys, you will need them in the future.

7. For the "Catalog" create a "Bucket"; the backup files will go into it.

8. I recommend setting a limit and selecting "Cold Storage".

Setting up scheduled backups on the server

This guide assumes basic administrative skills.

1. Install the duplicity utility on the VDS

yum install duplicity

2. Create a folder for MySQL dumps; in my case it is /backup_db in the VDS root
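The backup script below assumes that mysqldump can authenticate without prompting for a password, via credentials in /root/.my.cnf. A minimal sketch of that file (the password value is a placeholder):

```
[client]
user=root
password=<your mysql root password>
```

Restrict access with `chmod 600 /root/.my.cnf` so that only root can read it.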

3. Create a folder for bash scripts, /backup_scripts, and create the first script in it, /backup_scripts/backup.sh, which will perform the backup

Script content:

#!/bin/bash


# /backup_scripts/backup.sh

# This condition checks whether a backup is already in progress; if it is, an error message is sent by email (this block is optional)
if [ -f /home/backup_check.mark ];
then

DATE_TIME=`date +"%d.%m.%Y %T"`;

/usr/sbin/sendmail -t <<EOF
From:backup@$HOSTNAME
To:<your email>
Subject:Error backup to YANDEX.CLOUD
Content-Type:text/plain; charset=utf-8
Error backup to YANDEX.CLOUD

$DATE_TIME
EOF

else

# The main block responsible for the backup
# If there is no error, set the mark and start the backup

echo '' > /home/backup_check.mark;


# Delete the database dump files left over from the previous backup

/bin/rm -f /backup_db/*


# Dump all MySQL databases; access is assumed to be configured in /root/.my.cnf

DATETIME=`date +%Y-%m-%d_%H-%M-%S`;

`which mysqldump` --quote-names --all-databases | `which gzip` > /backup_db/DB_$DATETIME.sql.gz


# Export the credentials for uploading to Yandex.

export PASSPHRASE=<make up a passphrase for encrypting the archive>
export AWS_ACCESS_KEY_ID=<key ID obtained from Yandex>
export AWS_SECRET_ACCESS_KEY=<secret key obtained from Yandex>


# Run duplicity to back up the required folders on the server.
# This command creates a full backup once a month and adds incremental backups to it until the next month
# --exclude lists folders to skip; I exclude all Bitrix cache folders
# --include lists folders to back up; in my case these are:
# - /backup_db
# - /home
# - /etc
# In s3://storage.yandexcloud.net/backup, "backup" is the name of the bucket created above

# Technical details and the meaning of some parameters:
# The two lines "--exclude='**'" and "/" make it possible to combine --include and --exclude for different folders above. They first add the whole server "/" to the backup and then exclude everything with "--exclude='**'"
# --full-if-older-than='1M' - create a full copy every month
# --volsize='512' - maximum size of each volume file in the backup, in megabytes
# --log-file='/var/log/duplicity.log' - where to write the log file

`which duplicity` \
    --s3-use-ia --s3-european-buckets \
    --s3-use-new-style \
    --s3-use-multiprocessing \
    --s3-multipart-chunk-size='128' \
    --volsize='512' \
    --no-print-statistics \
    --verbosity=0 \
    --full-if-older-than='1M' \
    --log-file='/var/log/duplicity.log' \
    --exclude='**/www/bitrix/backup/**' \
    --exclude='**/www/bitrix/cache/**' \
    --exclude='**/www/bitrix/cache_image/**' \
    --exclude='**/www/bitrix/managed_cache/**' \
    --exclude='**/www/bitrix/managed_flags/**' \
    --exclude='**/www/bitrix/stack_cache/**' \
    --exclude='**/www/bitrix/html_pages/*/**' \
    --exclude='**/www/bitrix/tmp/**' \
    --exclude='**/www/upload/tmp/**' \
    --exclude='**/www/upload/resize_cache/**' \
    --include='/backup_db' \
    --include='/home' \
    --include='/etc' \
    --exclude='**' \
    / \
    s3://storage.yandexcloud.net/backup



# This command handles cleanup.
# It keeps the 3 most recent full backups and the incremental backups associated with them.
# That leaves me with backups for 3 months, since the first command makes a new full backup every month

`which duplicity` remove-all-but-n-full 3 --s3-use-ia --s3-european-buckets --s3-use-new-style --verbosity=0 --force s3://storage.yandexcloud.net/backup



unset PASSPHRASE
unset AWS_ACCESS_KEY_ID
unset AWS_SECRET_ACCESS_KEY

# Remove the backup-in-progress mark

/bin/rm -f /home/backup_check.mark;

fi

4. Run the script for the first time and check the result: files should appear in the bucket.

`which bash` /backup_scripts/backup.sh

Incremental VDS backup with a site on 1C-Bitrix in Yandex.Cloud

5. Add the script to cron for the root user so that it runs 2 times a day, or as often as you need.

10 4,16 * * * `which bash` /backup_scripts/backup.sh

Data recovery from Yandex.Cloud

1. Create a folder for restored files: /backup_restore

2. Create a bash restore script /backup_scripts/restore.sh

Here is the most common case - recovering a specific file:

#!/bin/bash

export PASSPHRASE=<the encryption passphrase used when backing up>
export AWS_ACCESS_KEY_ID=<key ID obtained from Yandex>
export AWS_SECRET_ACCESS_KEY=<secret key obtained from Yandex>

# 3 examples; uncomment the one you need

# Get the backup status
#`which duplicity` collection-status s3://storage.yandexcloud.net/backup

# Restore index.php from the site root
#`which duplicity` --file-to-restore='home/bitrix/www/index.php' s3://storage.yandexcloud.net/backup /backup_restore/index.php

# Restore index.php from the site root as of 3 days ago
#`which duplicity` --time='3D' --file-to-restore='home/bitrix/www/index.php' s3://storage.yandexcloud.net/backup /backup_restore/index.php

unset PASSPHRASE
unset AWS_ACCESS_KEY_ID
unset AWS_SECRET_ACCESS_KEY

3. Run the script and wait for the result.

`which bash` /backup_scripts/restore.sh

In the /backup_restore/ folder you will find the index.php file that was previously included in the backup.
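Besides single files, the whole backed-up tree can be restored as of a given date. A sketch, assuming the same PASSPHRASE and AWS_* variables are exported as in restore.sh, and that /backup_restore/full is an empty target folder:

```shell
# Restore the entire backup as it was 30 days ago into a local folder.
# duplicity refuses to overwrite a non-empty target unless --force is given.
`which duplicity` restore --time='30D' s3://storage.yandexcloud.net/backup /backup_restore/full
```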

You can make finer adjustments to suit your needs.

The drawback of duplicity

Duplicity has one drawback: there is no way to limit its bandwidth usage. On a normal channel this is not a problem, but on a DDoS-protected channel billed by daily peak speed, I would like to be able to cap it at 1-2 megabits.

Conclusion

Backing up to Yandex.Cloud or Amazon S3 provides an independent copy of your website and OS settings that can be accessed from any other server or local computer. This copy is visible neither in the hosting control panel nor in the Bitrix admin panel, which provides additional security.

In the worst case, you can set up a new server and deploy the site as of any date. Although the most used feature will be retrieving a file for a specific date.

You can use this approach with any VDS or dedicated server and with sites on any engine, not just 1C-Bitrix. The OS can also be something other than CentOS, such as Ubuntu or Debian.

Source: habr.com