Deploy Data Migration Using DM-Ansible

DM-Ansible is a cluster deployment tool developed by PingCAP based on the Playbooks feature of Ansible (an IT automation tool). This guide shows how to quickly deploy a Data Migration (DM) cluster using DM-Ansible.

Prepare

Before you start, make sure that you have prepared the following machines.

  1. Several target machines that meet the following requirements:

    • CentOS 7.3 (64-bit) or later, x86_64 architecture (AMD64)
    • Network connectivity between machines
    • Firewall disabled, or the required service ports opened
  2. A Control Machine that meets the following requirements:

    • CentOS 7.3 (64-bit) or later, with Python 2.7 installed
    • Ansible 2.5 or later installed
    • Access to the Internet
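The Control Machine checks above can be scripted before you continue. The following is a minimal sketch; the command list is an assumption based on the tools used in the later steps, so adjust it to your environment:

```shell
# Pre-flight check on the Control Machine: verify the required tools exist.
check_cmd() {
  # Return 0 if the given command is available on PATH, 1 otherwise.
  command -v "$1" >/dev/null 2>&1
}

for c in python git curl sshpass ansible; do
  if check_cmd "$c"; then
    echo "$c: found"
  else
    echo "$c: MISSING"
  fi
done
```

Any command reported as MISSING is installed in Step 1 (or, for Ansible, in Step 4 via requirements.txt).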

Step 1: Install system dependencies on the Control Machine

Log in to the Control Machine using the root user account, and run the corresponding command according to your operating system.

  • If you use a Control Machine installed with CentOS 7, run the following command:

    yum -y install epel-release git curl sshpass && yum -y install python-pip
  • If you use a Control Machine installed with Ubuntu, run the following command:

    apt-get -y install git curl sshpass python-pip

Step 2: Create the tidb user on the Control Machine and generate the SSH key

Make sure you have logged in to the Control Machine using the root user account, and then perform the following steps.

  1. Create the tidb user.

    useradd -m -d /home/tidb tidb
  2. Set a password for the tidb user account.

    passwd tidb
  3. Configure sudo without password for the tidb user account by adding tidb ALL=(ALL) NOPASSWD: ALL to the end of the sudo file:

    visudo
    tidb ALL=(ALL) NOPASSWD: ALL
  4. Generate the SSH key.

    Execute the su command to switch the user from root to tidb.

    su - tidb

    Create the SSH key for the tidb user account and press the Enter key when Enter passphrase is prompted. After the command completes, the SSH private key file is /home/tidb/.ssh/id_rsa, and the SSH public key file is /home/tidb/.ssh/id_rsa.pub.

    ssh-keygen -t rsa
    Generating public/private rsa key pair.
    Enter file in which to save the key (/home/tidb/.ssh/id_rsa):
    Created directory '/home/tidb/.ssh'.
    Enter passphrase (empty for no passphrase):
    Enter same passphrase again:
    Your identification has been saved in /home/tidb/.ssh/id_rsa.
    Your public key has been saved in /home/tidb/.ssh/id_rsa.pub.
    The key fingerprint is:
    SHA256:eIBykszR1KyECA/h0d7PRKz4fhAeli7IrVphhte7/So tidb@172.16.10.49
    The key's randomart image is:
    +---[RSA 2048]----+
    |=+o+.o.          |
    |o=o+o.oo         |
    | .O.=.=          |
    | . B.B +         |
    |o B * B S        |
    | * + * +         |
    |  o + .          |
    |  o E+ .         |
    |o ..+o.          |
    +----[SHA256]-----+
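If you automate this step, ssh-keygen accepts -N '' for an empty passphrase and -f for the output path, so it can run without prompts. Below is a sketch that uses a temporary directory so it does not touch /home/tidb/.ssh, and verifies that the generated public key file matches the private key:

```shell
# Generate a throwaway RSA key pair non-interactively, then verify that the
# public key file really matches the private key.
tmpdir=$(mktemp -d)
ssh-keygen -t rsa -b 2048 -N '' -f "$tmpdir/id_rsa" -q

# Re-derive the public key from the private key; the base64 part must match.
ssh-keygen -y -f "$tmpdir/id_rsa" | awk '{print $2}' > "$tmpdir/derived"
awk '{print $2}' "$tmpdir/id_rsa.pub" > "$tmpdir/expected"

if cmp -s "$tmpdir/derived" "$tmpdir/expected"; then
  keypair_ok=1
else
  keypair_ok=0
fi
echo "key pair ok: $keypair_ok"
rm -rf "$tmpdir"
```

For the real deployment, point -f at /home/tidb/.ssh/id_rsa (or simply run the interactive command above).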

Step 3: Download DM-Ansible to the Control Machine

Make sure you have logged in to the Control Machine using the tidb user account.

  1. Go to the /home/tidb directory.

    cd /home/tidb

  2. Run the following command to download DM-Ansible.

    wget http://download.pingcap.org/dm-ansible-{version}.tar.gz

    {version} is the DM version that you want to download, such as v1.0.1 or v1.0.2.

    You can check DM's published versions on the DM Release page. You can also replace {version} with latest to download the latest development version that has not been published yet.
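When scripting the download, keeping the version in a shell variable keeps the URL and the later tar commands consistent. A sketch (the actual download line is commented out so it does not hit the network):

```shell
# Build the download URL for a given DM-Ansible version.
version=v1.0.2   # replace with the release you need, or "latest"
url="http://download.pingcap.org/dm-ansible-${version}.tar.gz"
echo "download URL: $url"
# wget "$url"   # uncomment to actually download
```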

Step 4: Install DM-Ansible and its dependencies on the Control Machine

Make sure you have logged in to the Control Machine using the tidb user account.

It is required to use pip to install DM-Ansible and its dependencies; otherwise compatibility issues might occur. Currently, DM-Ansible is compatible with Ansible 2.5 or later.

  1. Install DM-Ansible and the dependencies on the Control Machine:

    tar -xzvf dm-ansible-{version}.tar.gz && mv dm-ansible-{version} dm-ansible && cd /home/tidb/dm-ansible && sudo pip install -r ./requirements.txt

    DM-Ansible and the related dependencies are in the dm-ansible/requirements.txt file.

  2. View the version of Ansible:

    ansible --version
    ansible 2.5.0

Step 5: Configure the SSH mutual trust and sudo rules on the Control Machine

Make sure you have logged in to the Control Machine using the tidb user account.

  1. Add the IPs of your deployment target machines to the [servers] section of the hosts.ini file.

    cd /home/tidb/dm-ansible && vi hosts.ini
    [servers]
    172.16.10.71
    172.16.10.72
    172.16.10.73

    [all:vars]
    username = tidb
  2. Run the following command and input the password of the root user account of your deployment target machines.

    ansible-playbook -i hosts.ini create_users.yml -u root -k

    This step creates the tidb user account on the deployment target machines, configures the sudo rules and the SSH mutual trust between the Control Machine and the deployment target machines.

Step 6: Download DM and the monitoring component installation package to the Control Machine

Make sure the Control Machine is connected to the Internet and run the following command:

ansible-playbook local_prepare.yml

Step 7: Edit the inventory.ini file to orchestrate the DM cluster

Log in to the Control Machine using the tidb user account, and edit the /home/tidb/dm-ansible/inventory.ini file to orchestrate the DM cluster.

If the SSH port of a deployment target machine is not the default 22, configure it using the ansible_port variable. For example:

dm-worker1 ansible_host=172.16.10.72 ansible_port=5555 server_id=101 mysql_host=172.16.10.72 mysql_user=root mysql_password='VjX8cEeTX+qcvZ3bPaO4h0C80pe/1aU=' mysql_port=3306

You can choose one of the following two types of cluster topology according to your scenario:

Option 1: Use the cluster topology of a single DM-worker instance on each node

| Name             | Host IP      | Services                                                |
|------------------|--------------|---------------------------------------------------------|
| node1            | 172.16.10.71 | DM-master, Prometheus, Grafana, Alertmanager, DM Portal |
| node2            | 172.16.10.72 | DM-worker1                                              |
| node3            | 172.16.10.73 | DM-worker2                                              |
| mysql-replica-01 | 172.16.10.81 | MySQL                                                   |
| mysql-replica-02 | 172.16.10.82 | MySQL                                                   |
# DM modules.
[dm_master_servers]
dm_master ansible_host=172.16.10.71

[dm_worker_servers]
dm_worker1 ansible_host=172.16.10.72 server_id=101 source_id="mysql-replica-01" mysql_host=172.16.10.81 mysql_user=root mysql_password='VjX8cEeTX+qcvZ3bPaO4h0C80pe/1aU=' mysql_port=3306
dm_worker2 ansible_host=172.16.10.73 server_id=102 source_id="mysql-replica-02" mysql_host=172.16.10.82 mysql_user=root mysql_password='VjX8cEeTX+qcvZ3bPaO4h0C80pe/1aU=' mysql_port=3306

[dm_portal_servers]
dm_portal ansible_host=172.16.10.71

# Monitoring modules.
[prometheus_servers]
prometheus ansible_host=172.16.10.71

[grafana_servers]
grafana ansible_host=172.16.10.71

[alertmanager_servers]
alertmanager ansible_host=172.16.10.71

# Global variables.
[all:vars]
cluster_name = test-cluster
ansible_user = tidb
dm_version = {version}
deploy_dir = /data1/dm
grafana_admin_user = "admin"
grafana_admin_password = "admin"

{version} is the version number of the DM-Ansible you use. For details about DM-worker parameters, see DM-worker configuration parameters description.

Option 2: Use the cluster topology of multiple DM-worker instances on each node

| Name  | Host IP      | Services                                                |
|-------|--------------|---------------------------------------------------------|
| node1 | 172.16.10.71 | DM-master, Prometheus, Grafana, Alertmanager, DM Portal |
| node2 | 172.16.10.72 | DM-worker1-1, DM-worker1-2                              |
| node3 | 172.16.10.73 | DM-worker2-1, DM-worker2-2                              |

When you edit the inventory.ini file, make sure to distinguish the following variables between instances on the same node: server_id, deploy_dir, and dm_worker_port.

# DM modules.
[dm_master_servers]
dm_master ansible_host=172.16.10.71

[dm_worker_servers]
dm_worker1_1 ansible_host=172.16.10.72 server_id=101 deploy_dir=/data1/dm_worker dm_worker_port=8262 mysql_host=172.16.10.81 mysql_user=root mysql_password='VjX8cEeTX+qcvZ3bPaO4h0C80pe/1aU=' mysql_port=3306
dm_worker1_2 ansible_host=172.16.10.72 server_id=102 deploy_dir=/data2/dm_worker dm_worker_port=8263 mysql_host=172.16.10.82 mysql_user=root mysql_password='VjX8cEeTX+qcvZ3bPaO4h0C80pe/1aU=' mysql_port=3306
dm_worker2_1 ansible_host=172.16.10.73 server_id=103 deploy_dir=/data1/dm_worker dm_worker_port=8262 mysql_host=172.16.10.83 mysql_user=root mysql_password='VjX8cEeTX+qcvZ3bPaO4h0C80pe/1aU=' mysql_port=3306
dm_worker2_2 ansible_host=172.16.10.73 server_id=104 deploy_dir=/data2/dm_worker dm_worker_port=8263 mysql_host=172.16.10.84 mysql_user=root mysql_password='VjX8cEeTX+qcvZ3bPaO4h0C80pe/1aU=' mysql_port=3306

[dm_portal_servers]
dm_portal ansible_host=172.16.10.71

# Monitoring modules.
[prometheus_servers]
prometheus ansible_host=172.16.10.71

[grafana_servers]
grafana ansible_host=172.16.10.71

[alertmanager_servers]
alertmanager ansible_host=172.16.10.71

# Global variables.
[all:vars]
cluster_name = test-cluster
ansible_user = tidb
dm_version = {version}
deploy_dir = /data1/dm
grafana_admin_user = "admin"
grafana_admin_password = "admin"

{version} is the version number of the DM-Ansible you use.

DM-worker configuration parameters description

| Variable name     | Description |
|-------------------|-------------|
| source_id         | DM-worker binds to a unique database instance or a replication group with the primary-secondary architecture. When a primary-secondary switch happens, you only need to update mysql_host or mysql_port; you do not need to update source_id. |
| server_id         | DM-worker connects to MySQL as a secondary database, and this variable is the server ID of that secondary. Keep it globally unique in the MySQL cluster; the value range is 0 ~ 4294967295. In v1.0.2 and later versions, the server ID is automatically generated by DM. |
| mysql_host        | The upstream MySQL host. |
| mysql_user        | The upstream MySQL username ("root" by default). |
| mysql_password    | The upstream MySQL user password. You need to encrypt the password using dmctl. |
| mysql_port        | The upstream MySQL port (3306 by default). |
| enable_gtid       | Whether DM-worker uses GTID to pull the binlog. The prerequisite is that the upstream MySQL has enabled the GTID mode. |
| relay_binlog_name | Specifies the file name from which DM-worker starts to pull the binlog. Used only when the local machine has no valid relay log. In v1.0.2 and later versions, DM pulls the binlog starting from the latest file by default. |
| relay_binlog_gtid | Specifies the GTID from which DM-worker starts to pull the binlog. Used only when the local machine has no valid relay log and enable_gtid is true. In v1.0.2 and later versions, DM pulls the binlog starting from the latest file by default. |
| flavor            | Indicates the release type of MySQL ("mysql" by default). For the official version, Percona, and cloud MySQL, use "mysql"; for MariaDB, use "mariadb". In v1.0.2 and later versions, DM automatically detects the upstream version and fills in this value. |

For details about the deploy_dir configuration, see Configure the deployment directory.

Encrypt the upstream MySQL user password using dmctl

Assuming that the upstream MySQL user password is 123456, run the following command, and then configure the generated string as the mysql_password variable of DM-worker.

cd /home/tidb/dm-ansible/resources/bin && ./dmctl -encrypt '123456'
VjX8cEeTX+qcvZ3bPaO4h0C80pe/1aU=
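In a provisioning script, the encrypted string can be captured into a variable and substituted into an inventory line. The sketch below uses the example ciphertext that appears in this guide's inventory examples as a placeholder; in practice you would capture the output of ./dmctl -encrypt instead:

```shell
# Placeholder ciphertext (the example value used in this guide's inventory).
# In practice, capture it from dmctl:
#   cipher=$(./dmctl -encrypt '123456')
cipher='VjX8cEeTX+qcvZ3bPaO4h0C80pe/1aU='

# Build a DM-worker inventory line with the encrypted password filled in.
line="dm_worker1 ansible_host=172.16.10.72 server_id=101 mysql_host=172.16.10.81 mysql_user=root mysql_password='${cipher}' mysql_port=3306"
echo "$line"
```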

Step 8: Edit variables in the inventory.ini file

This step shows how to make configuration changes to the inventory.ini file.

Configure the deployment directory

Edit the deploy_dir variable to configure the deployment directory.

  • The global variable is set to /home/tidb/deploy by default, and it applies to all services. If the data disk is mounted on the /data1 directory, you can set it to /data1/dm. For example:

    ## Global variables.
    [all:vars]
    deploy_dir = /data1/dm
  • If you need to set a separate deployment directory for a service, you can configure the host variable while configuring the service host list in the inventory.ini file. You must add the alias in the first column to avoid confusion when multiple services are deployed on the same machine.

    dm-master ansible_host=172.16.10.71 deploy_dir=/data1/deploy

Configure the relay log position

When you start DM-worker for the first time, you need to configure relay_binlog_name to specify the position where DM-worker starts to pull the corresponding upstream MySQL or MariaDB binlog.

[dm_worker_servers]
dm-worker1 ansible_host=172.16.10.72 source_id="mysql-replica-01" server_id=101 relay_binlog_name="binlog.000011" mysql_host=172.16.10.81 mysql_user=root mysql_password='VjX8cEeTX+qcvZ3bPaO4h0C80pe/1aU=' mysql_port=3306
dm-worker2 ansible_host=172.16.10.73 source_id="mysql-replica-02" server_id=102 relay_binlog_name="binlog.000002" mysql_host=172.16.10.82 mysql_user=root mysql_password='VjX8cEeTX+qcvZ3bPaO4h0C80pe/1aU=' mysql_port=3306

Enable the relay log GTID migration mode

In a DM cluster, the relay log processing unit of DM-worker communicates with the upstream MySQL or MariaDB to pull its binlog to the local file system.

You can enable the relay log GTID migration mode by configuring the following items. Currently, DM supports MySQL GTID and MariaDB GTID.

  • enable_gtid: to enable the GTID mode. This helps improve the handling of migration topology changes, such as a switch between primary and secondary
  • relay_binlog_gtid: to specify the position where DM-worker starts to pull the corresponding upstream MySQL or MariaDB binlog
[dm_worker_servers]
dm-worker1 ansible_host=172.16.10.72 source_id="mysql-replica-01" server_id=101 enable_gtid=true relay_binlog_gtid="aae3683d-f77b-11e7-9e3b-02a495f8993c:1-282967971,cc97fa93-f5cf-11e7-ae19-02915c68ee2e:1-284361339" mysql_host=172.16.10.81 mysql_user=root mysql_password='VjX8cEeTX+qcvZ3bPaO4h0C80pe/1aU=' mysql_port=3306
dm-worker2 ansible_host=172.16.10.73 source_id="mysql-replica-02" server_id=102 relay_binlog_name=binlog.000002 mysql_host=172.16.10.82 mysql_user=root mysql_password='VjX8cEeTX+qcvZ3bPaO4h0C80pe/1aU=' mysql_port=3306

Global variables description

| Variable name          | Description |
|------------------------|-------------|
| cluster_name           | The name of the cluster, adjustable. |
| dm_version             | The version of DM, configured by default. |
| grafana_admin_user     | The username of the Grafana administrator (admin by default). |
| grafana_admin_password | The password of the Grafana administrator account (admin by default), used by Ansible to import the Dashboard. Update this variable if you have modified the password through the Grafana web interface. |

Step 9: Deploy the DM cluster

When ansible-playbook runs a Playbook, the default concurrency is 5. If there are many deployment target machines, you can increase the concurrency with the -f parameter, for example, ansible-playbook deploy.yml -f 10.

The following example uses tidb as the user who runs the service.

  1. Edit the dm-ansible/inventory.ini file to make sure ansible_user = tidb.

    ansible_user = tidb

    Run the following command. If all servers return tidb, the SSH mutual trust is configured successfully:

    ansible -i inventory.ini all -m shell -a 'whoami'

    Run the following command. If all servers return root, sudo without password for the tidb user is configured successfully:

    ansible -i inventory.ini all -m shell -a 'whoami' -b
  2. Modify kernel parameters, and deploy the DM cluster components and monitoring components.

    ansible-playbook deploy.yml
  3. Start the DM cluster.

    ansible-playbook start.yml

    This operation starts all the components of the DM cluster in order, including DM-master, DM-worker, and the monitoring components. You can also use this command to start a DM cluster after it is stopped.

Step 10: Stop the DM cluster

If you need to stop the DM cluster, run the following command:

ansible-playbook stop.yml

This operation stops all the components of the DM cluster in order, including DM-master, DM-worker, and the monitoring components.

Common deployment issues

Service default ports

| Component    | Port variable     | Default port | Description |
|--------------|-------------------|--------------|-------------|
| DM-master    | dm_master_port    | 8261         | DM-master service communication port |
| DM-worker    | dm_worker_port    | 8262         | DM-worker service communication port |
| Prometheus   | prometheus_port   | 9090         | Prometheus service communication port |
| Grafana      | grafana_port      | 3000         | The port of the web monitoring service, used for external access from the client (browser) |
| Alertmanager | alertmanager_port | 9093         | Alertmanager service communication port |
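Before deploying, you can check that none of these default ports are already in use on a target machine. A minimal sketch using ss; the port list is taken from the table above:

```shell
# Return 0 if nothing is listening on the given local TCP port, 1 otherwise.
port_free() {
  ! ss -ltn 2>/dev/null | awk '{print $4}' | grep -q ":$1$"
}

for p in 8261 8262 9090 3000 9093; do
  if port_free "$p"; then
    echo "port $p: free"
  else
    echo "port $p: IN USE"
  fi
done
```

If a port is reported as IN USE, either free it or customize the port as described below.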

Customize ports

Edit the inventory.ini file and add the related host variable of the corresponding service port after the service IP:

dm_master ansible_host=172.16.10.71 dm_master_port=18261

Update DM-Ansible

  1. Log in to the Control Machine using the tidb account, enter the /home/tidb directory, and back up the dm-ansible folder.

    cd /home/tidb && mv dm-ansible dm-ansible-bak
  2. Download the specified version of DM-Ansible and extract it.

    cd /home/tidb && wget http://download.pingcap.org/dm-ansible-{version}.tar.gz && tar -xzvf dm-ansible-{version}.tar.gz && mv dm-ansible-{version} dm-ansible
  3. Migrate the inventory.ini configuration file.

    cd /home/tidb && cp dm-ansible-bak/inventory.ini dm-ansible/inventory.ini
  4. Migrate the dmctl configuration.

    cd /home/tidb/dm-ansible-bak/dmctl && cp * /home/tidb/dm-ansible/dmctl/
  5. Use Playbook to download the latest DM binary files, which automatically replaces the binary files in the /home/tidb/dm-ansible/resources/bin/ directory.

    ansible-playbook local_prepare.yml