SQL Server Best Practices: Migrating Multiple Instance-level Databases to ApsaraDB RDS for SQL Server

By Wang Jianming, Senior Engineer

In this series on ApsaraDB RDS for SQL Server best practices, we have discussed Using Asymmetric Keys to Implement Column Encryption, Database Recovery Models and Backups, Monitoring Backup and Restore Progress, Using Filegroups to Implement a Cold and Hot Data Isolation Backup Solution, and Migrating to ApsaraDB RDS for SQL Server Using SDK.

Scenario Description

In the article Migrating to ApsaraDB RDS for SQL Server Using SDK, which we covered last month, we implemented the automated migration of an on-premises database, or a user-created database on ECS, to ApsaraDB RDS for SQL Server. That is a database-level migration solution: only one offline database is migrated to the cloud at a time.

However, a user may have multiple SQL Server instances, and each instance may have tens of databases. For medium to large businesses, there could be thousands of databases that must be migrated to the cloud. The traditional solution for database-level migration to the cloud cannot meet the requirements of this scenario and is very inefficient.

To meet the requirements for migrating a large batch of databases to ApsaraDB RDS for SQL Server, we need to simplify the migration steps and improve migration efficiency.

Implementation Analysis

In the article Migrating to ApsaraDB RDS for SQL Server Using SDK, we implemented a solution for migrating a single database to the cloud. Similarly, we can use the following steps to implement a solution for migrating instance-level databases to the cloud:

  1. Upload the full backup files of the databases on the user's offline instances to a folder in OSS.
  2. Traverse the full database backup files in that OSS folder.
  3. Create a migration-to-cloud task for each backup file.
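The three steps above can be sketched in a few lines of Python. The OSS listing is stubbed out with a plain list here, and the helper names and the task dictionary layout are illustrative assumptions, not the exact OpenAPI payload:

```python
# Step 2 and 3 of the flow above, with the OSS folder listing simulated by a list.
BACKUP_EXTS = {"bak", "trn", "log", "diff"}

def list_backup_keys(object_keys):
    """Keep only backup-style files from a folder listing."""
    return [k for k in object_keys if k.rsplit(".", 1)[-1] in BACKUP_EXTS]

def build_migrate_task(rds_instance_id, file_key):
    """One migration task per backup file; the database name is taken
    from the file name prefix (DBName_FULL_timestamp.bak)."""
    file_name = file_key.rsplit("/", 1)[-1]
    db_name = file_name.split("_", 1)[0]
    return {"DBInstanceId": rds_instance_id,
            "DBName": db_name,
            "FileKey": file_key}

# Step 1 (uploading to OSS) has already happened; simulate the listing:
keys = [
    "Migration/OPENAPIDemo/TestMigration_FULL_20180518153544.bak",
    "Migration/OPENAPIDemo/readme.txt",  # not a backup file, filtered out
]
tasks = [build_migrate_task("rm-xxxx", k) for k in list_backup_keys(keys)]
print(tasks)
```

The full script below follows this same shape, with the stubbed listing replaced by `oss2.ObjectIterator` and the task dict replaced by a `CreateMigrateTask` OpenAPI request.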

Input Parameters

According to the preceding analysis, the implementation requires the following six input parameters:

access_key_id: Alibaba Cloud user access key ID
access_key_secret: Alibaba Cloud user access key secret
rds_instance_id: RDS for SQL Server instance ID
oss_endpoint: OSS endpoint address
oss_bucket: OSS bucket name
directory: path to the user's database backup files in OSS; pass in "/" for the root directory
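The only parameter that needs normalization is `directory`, which must end with a trailing slash so that it can be used as an OSS listing prefix. A minimal sketch of the normalization the script applies (the function name is illustrative):

```python
def normalize_directory(arg):
    """Ensure the OSS folder path ends with a trailing slash."""
    return arg if arg.endswith("/") else arg + "/"

print(normalize_directory("Migration/OPENAPIDemo"))  # Migration/OPENAPIDemo/
print(normalize_directory("/"))                      # /
```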

Specific Implementation

Preparations

The preparations for this migration are similar to the steps described in the article Migrating to ApsaraDB RDS for SQL Server Using SDK. Refer to the "Preparations" section of that article for further details.

Source Code for Python SDK

In this article, RDS SDK for Python is used to implement the migration of databases to RDS SQL Server. You can also use RDS SDK for C#, Java, or another language. The following is the detailed code implementation:

#!/usr/bin/env python
# -*- coding: utf-8 -*-

"""***********************************************************************************
# Script name: RDSSQLCreateMigrateTasksBatchly.py
# Author: jianming.wjm@alibaba-inc.com
# Create Date: 2018-05-17 19:27
# Language: Python 2.7.10
# Run platform: Mac OS X 10.12.6

# Purpose:
    This script is for batch migration of a user's offline SQL Server databases
    to Alibaba Cloud RDS SQL Server.
    The user's FULL backup files are already located in a folder of their OSS Bucket.
    This script helps the user migrate all offline databases backed up under that
    OSS Bucket folder to RDS SQL Server.
    We achieve this by calling the Alibaba Cloud RDS OPENAPI.

# Limitation:
    RDS Edition: supports the RDS editions listed below
        '2012', '2012_web', '2012_std', '2012_ent', '2012_std_ha', '2012_ent_ha',
        '2014_web', '2014_std', '2014_ent', '2014_std_ha', '2014_ent_ha',
        '2016_web', '2016_std', '2016_ent', '2016_std_ha', '2016_ent_ha'

# Preparation:
    1. Install Python 2.7.x (I'm using 2.7.10)
    2. pip install aliyun-python-sdk-rds
    3. pip install oss2

# Usage:
    Help: python RDSSQLCreateMigrateTasksBatchly.py -h
    Example:
        python ~/Downloads/RDSSQLCreateMigrateTasksBatchly.py -k <access_key_id> -s <access_key_secret> -i <rds_instance_id> -e <oss_endpoint> -b <oss_bucket> -d <directory>

    Variables description:
        access_key_id: Alibaba Cloud user access key id, e.g. LTAIKeRvKPRwkaU3
        access_key_secret: Alibaba Cloud user access key secret, e.g. BbZ7xhrertQ0dfgMqfAZPByhnp4G2k
        rds_instance_id: RDS SQL instance ID, e.g. rm-2zesz4564ud8s7123
        oss_endpoint: OSS Endpoint address, e.g. oss-cn-beijing.aliyuncs.com
        oss_bucket: OSS Bucket name, e.g. atp-test-on-ecs
        directory: sub folder name under the OSS Bucket, e.g. Migration/OPENAPIDemo

# Output: there are two output sections: one echoes the input variables,
    the other shows the migration requests and responses.

# Modify Author: jianming.wjm@alibaba-inc.com
# Modify Date: 2018-05-19 21:43
***********************************************************************************"""
import json
import os
import sys, getopt
import re
import oss2
import time
from aliyunsdkcore.client import AcsClient
from aliyunsdkrds.request.v20140815 import DescribeMigrateTasksForSQLServerRequest
from aliyunsdkrds.request.v20140815 import DescribeDBInstanceAttributeRequest
from aliyunsdkrds.request.v20140815 import CreateMigrateTaskRequest
def main(argv):
    access_key_id = access_key_secret = rds_instance_id = oss_endpoint = oss_bucket = directory = ''

    # usage help
    try:
        opts, args = getopt.getopt(argv, "hk:s:i:e:b:d:", ["access_key_id=", "access_key_secret=", "rds_instance_id=", "oss_endpoint=", "oss_bucket=", "directory="])
    except getopt.GetoptError:
        print ('%s -k <access_key_id> -s <access_key_secret> -i <rds_instance_id> -e <oss_endpoint> -b <oss_bucket> -d <directory>' % (sys.argv[0]))
        sys.exit(2)

    for opt, arg in opts:
        if opt == '-h':
            print ('%s -k <access_key_id> -s <access_key_secret> -i <rds_instance_id> -e <oss_endpoint> -b <oss_bucket> -d <directory>' % (sys.argv[0]))
            sys.exit()
        elif opt in ("-k", "-K", "--access_key_id"):
            access_key_id = arg
        elif opt in ("-s", "-S", "--access_key_secret"):
            access_key_secret = arg
        elif opt in ("-i", "-I", "--rds_instance_id"):
            rds_instance_id = arg
        elif opt in ("-e", "-E", "--oss_endpoint"):
            oss_endpoint = arg
        elif opt in ("-b", "-B", "--oss_bucket"):
            oss_bucket = arg
        elif opt in ("-d", "-D", "--directory"):
            # make sure the directory ends with a trailing slash
            if arg.endswith("/"):
                directory = arg
            else:
                directory = str("%s/" % arg)

    # show the input parameters
    print ("\n*********************Input variables*************************************\n"
           "access_key_id = %s\naccess_key_secret = %s\nrds_instance_id = %s\noss_endpoint = %s\noss_bucket = %s\ndirectory = %s\n"
           "************************************************************************"
           % (access_key_id, access_key_secret, rds_instance_id, oss_endpoint, oss_bucket, directory))

    # fetch the RDS instance details
    success, rds_details = rds_instnace_details(access_key_id, access_key_secret, rds_instance_id)
    if not success:
        print ("%s" % rds_details)
        sys.exit()

    rds_db_version, rds_engine, rds_region = rds_details["EngineVersion"], rds_details["Engine"], rds_details["RegionId"]

    # fetch the OSS Bucket details
    success, oss_details = oss_bucket_details(access_key_id, access_key_secret, oss_endpoint, oss_bucket)
    if not success:
        print ("%s" % oss_details)
        sys.exit()

    oss_region = oss_details.location

    # supported db version checking
    if rds_engine != 'SQLServer' \
        or rds_db_version not in ['2008r2', '2012', '2012_web', '2012_std', '2012_ent', '2012_std_ha', '2012_ent_ha',
                                  '2014_web', '2014_std', '2014_ent', '2014_std_ha', '2014_ent_ha',
                                  '2016_web', '2016_std', '2016_ent', '2016_std_ha', '2016_ent_ha']:
        print("The RDS engine is not supported; this script is only for the RDS SQL Server engine.")
        sys.exit()

    # RDS & OSS Bucket must be located in the same region
    if not oss_region.endswith(rds_region):
        print("RDS & OSS Bucket are not located in the same region.")
        sys.exit()

    # RDS & OSS Bucket are in the same region
    print ("\n*********************Migration requests**********************************")
    full_migrate(access_key_id, access_key_secret, rds_instance_id, oss_endpoint, oss_bucket, directory, rds_db_version)
    print ("************************************************************************")
def rds_instnace_details(access_key_id, access_key_secret, rds_instance_id):
    request = DescribeDBInstanceAttributeRequest.DescribeDBInstanceAttributeRequest()
    request.set_DBInstanceId(rds_instance_id)
    success, response = _send_request(access_key_id, access_key_secret, request)
    if success:
        if response["Items"]["DBInstanceAttribute"]:
            # e.g. response["Items"]["DBInstanceAttribute"][0]["EngineVersion"]
            # e.g. response["Items"]["DBInstanceAttribute"][0]["RegionId"]
            return True, response["Items"]["DBInstanceAttribute"][0]
        else:
            return False, "Couldn't find the specified RDS [%s]." % rds_instance_id

    return False, response
def full_migrate(access_key_id, access_key_secret, rds_instance_id, oss_endpoint, oss_bucket, directory, rds_db_version):
    """
    This supports full backup file migration.
    """
    # get all backup objects under the sub folder
    key_parts_list, do = oss_list_objects(access_key_id, access_key_secret, oss_endpoint, oss_bucket, directory), 0

    # for each object, create one migration task
    for key_parts in key_parts_list:
        print ("\n--%s. [%s] migrate to your RDS: [%s] and the database name will be: [%s]." % (do, key_parts.file_key, rds_instance_id, key_parts.db_name))
        do += 1

        request = CreateMigrateTaskRequest.CreateMigrateTaskRequest()
        request.set_DBInstanceId(rds_instance_id)
        request.set_DBName(key_parts.db_name)
        request.set_BackupMode("FULL")
        request.set_IsOnlineDB(True)
        if rds_db_version == '2008r2':
            request.set_DBName(key_parts.db_name.lower())
            request.set_OSSUrls(key_parts.sign_url)
        else:
            request.set_OSSUrls("")
            request.set_OssObjectPositions("%s:%s:%s" % (oss_endpoint, oss_bucket, key_parts.file_key))  # OSSEndpoint:OSSBucket:OSSFileKey
            request.set_CheckDBMode("SyncExecuteDBCheck")

        success, response = _send_request(access_key_id, access_key_secret, request)
        if success:
            print response
            print ("--I'm sleeping for 2 seconds....")
            time.sleep(2)
        else:
            print ("OPENAPI Response Error !!!!! : %s" % response)
"""
send request to OPENAPI
and get the response details
"""
def _send_request(access_key_id, access_key_secret, request, region='cn-hangzhou'):
request.set_accept_format('json')
try:
clt = client.AcsClient(access_key_id, access_key_secret, 'cn-hangzhou')
client = client.AcsClient(access_key_id, access_key_secret, region)
response_str = client.do_action_with_exception(request)
response_detail = json.loads(response_str)
return True, response_detail
except Exception as e:
return False, e
class oss_key_parts(object):
    """
    If the whole object file key looks like below:
    Migration/OPENAPIDemo/TestMigration_FULL_20180518153544.bak

    then
    :param str file_key: OSS object file key.
    :param str sub_folder: OSS sub folder name, such as Migration/OPENAPIDemo
    :param str file_name: OSS object file name, such as TestMigration_FULL_20180518153544.bak
    :param str db_name: database name, such as 'TestMigration'
    :param str bak_type: backup type, such as 'FULL'
    :param str date: backup date time, such as '20180518153544'
    :param str ext: backup file extension, such as 'bak'
    """
    def __init__(self):
        self.file_key = ''
        self.sub_folder = ''
        self.file_name = ''
        self.db_name = ''
        self.bak_type = ''
        self.date = ''
        self.ext = ''
        self.sign_url = ''
"""
parse the OSS file key string into oss key parts
and return oss_key_parts object.
"""
def oss_key_parse(file_key):
key_parts = oss_key_parts()
try:
if file_key.find('/') >= 0:
file_key_parts = file_key.rsplit('/', 1)
else:
file_key_parts = file_key
file_key_parts = ['/', file_key]
key_parts.file_key = file_key
key_parts.sub_folder = file_key_parts[0]
key_parts.file_name = file_key_parts[1]
key_list = file_key_parts[1].rsplit('_', 2) key_parts.db_name, \
key_parts.bak_type, \
key_parts.date, \
key_parts.ext = key_list[0], \
key_list[1], \
key_list[2].rsplit('.', 1)[0], \
key_list[2].rsplit('.', 1)[1]
except Exception, e:
pass
return key_partsdef oss_list_objects(access_key_id, access_key_secret, oss_endpoint, oss_bucket, directory):
"""
list all OSS objects under the specified subfolder
and return the objects list.
"""
bucket = oss2. Bucket(oss2. Auth(access_key_id, access_key_secret), endpoint, bucket_name)
key_parts_list = []
# OSS Bucket Root
if directory == '/':
for object_info in oss2. ObjectIterator(bucket, delimiter = '/'):
if not object_info.is_prefix():
key_part = oss_key_parse(object_info.key)
# get object sign_url
key_part.sign_url = bucket.sign_url('GET', object_info.key, 24 * 3600)
if key_part.ext in['bak', 'trn', 'log', 'diff']:
key_parts_list.append(key_part)
else:
print ("Warning!!!!!, [%s] is not backup file, filtered." % (key_part.file_key))
else:
for i, object_info in enumerate(oss2. ObjectIterator(bucket, prefix=directory)):
# have to the backup files, not folder
if not object_info.is_prefix():
if object_info.key.startswith(directory) and object_info.key ! = directory:
# print ("%s" % (object_info.key))
key_part = oss_key_parse(object_info.key)

# get object sign_url
key_part.sign_url = bucket.sign_url('GET', object_info.key, 24 * 3600)
if key_part.ext in['bak', 'trn', 'log', 'diff']:
key_parts_list.append(key_part)
else:
print ("Warning!!!!!, [%s] is not a vaild backup file, filtered." % (key_part.file_key))
if not key_parts_list:
print("There is no backup file on OSS Bucket [%s] under [%s] folder, check please." % (oss_bucket, directory))
return key_parts_list
def oss_bucket_details(access_key_id, access_key_secret, oss_endpoint, oss_bucket):
    try:
        bucket = oss2.Bucket(oss2.Auth(access_key_id, access_key_secret), oss_endpoint, oss_bucket)
        bucket_info = bucket.get_bucket_info()
        # e.g. bucket name: bucket_info.name, region: bucket_info.location
        return True, bucket_info
    except Exception as e:
        return False, e

if __name__ == '__main__':
    main(sys.argv[1:])

The preceding Python script is also available for download.
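A key assumption of the script is the backup file naming convention DBName_BackupType_Timestamp.ext, from which the target database name is inferred. A self-contained sketch of that parsing, equivalent to the script's oss_key_parse helper (the function name here is illustrative):

```python
def parse_backup_key(file_key):
    """Split an OSS file key such as
    Migration/OPENAPIDemo/TestMigration_FULL_20180518153544.bak
    into its sub folder, database name, backup type, timestamp, and extension."""
    folder, _, file_name = file_key.rpartition("/")
    stem, _, ext = file_name.rpartition(".")
    # rsplit from the right so database names containing "_" stay intact
    db_name, bak_type, date = stem.rsplit("_", 2)
    return {"sub_folder": folder, "db_name": db_name,
            "bak_type": bak_type, "date": date, "ext": ext}

parts = parse_backup_key("Migration/OPENAPIDemo/TestMigration_FULL_20180518153544.bak")
print(parts["db_name"], parts["bak_type"], parts["ext"])  # TestMigration FULL bak
```

Because the split works from the right, a file such as My_App_DB_FULL_20180518.bak still yields the database name My_App_DB.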

Implementing the Migration

We will briefly describe how to migrate instance-level databases to the cloud in just one click from the following three perspectives:

  1. View Help
  2. Example
  3. Output result

View Help

You can use -h to see how the script is used:

python ~/Downloads/RDSSQLCreateMigrateTasksBatchly.py -h
~/Downloads/RDSSQLCreateMigrateTasksBatchly.py -k <access_key_id> -s <access_key_secret> -i <rds_instance_id> -e <oss_endpoint> -b <oss_bucket> -d <directory>
Example

The following is a specific example:

python ~/Downloads/RDSSQLCreateMigrateTasksBatchly.py -k LTAIQazXKPRwwErT -s BMkIUhroubQOLpOMqfA09IKlqp4G2k -i rm-2zesz5774ud8s71i5 -e oss-cn-beijing.aliyuncs.com -b atp-test-on-ecs -d Migration/OPENAPIDemo

Output Results

The result output of the preceding command is divided into two parts:

  1. The first part echoes all your input parameters so you can check for input errors.
  2. The second part is the prompt messages, which tell you which backup file is migrated to which database on which instance.

Here is the output for one instance:

*********************Input variables*************************************
access_key_id = LTAIQazXKPRwwErT
access_key_secret = BMkIUhroubQOLpOMqfA09IKlqp4G2k
rds_instance_id = rm-2zesz5774ud8s71i5
oss_endpoint = oss-cn-beijing.aliyuncs.com
oss_bucket = atp-test-on-ecs
directory = Migration/OPENAPIDemo/
************************************************************************
*********************Migration requests**********************************
--0. [Migration/OPENAPIDemo/TestCollation_FULL_20180523225534.bak] migrate to your RDS: [rm-2zesz5774ud8s71i5] and the database name will be: [TestCollation].
{u'DBInstanceId': u'rm-2zesz5774ud8s71i5', u'BackupMode': u'FULL', u'MigrateTaskId': u'106121', u'RequestId': u'67E0DD7F-7219-4F67-AAE7-B27273921303', u'TaskId': u'68244691', u'DBName': u'TestCollation'}
--I'm sleeping for 2 seconds....
--1. [Migration/OPENAPIDemo/TestMigration_FULL_20180523225534.bak] migrate to your RDS: [rm-2zesz5774ud8s71i5] and the database name will be: [TestMigration].
{u'DBInstanceId': u'rm-2zesz5774ud8s71i5', u'BackupMode': u'FULL', u'MigrateTaskId': u'106122', u'RequestId': u'0916CD14-861B-4BF7-A68A-409E3996B0D3', u'TaskId': u'68244695', u'DBName': u'TestMigration'}
--I'm sleeping for 2 seconds....
--2. [Migration/OPENAPIDemo/testdb_FULL_20180523225534.bak] migrate to your RDS: [rm-2zesz5774ud8s71i5] and the database name will be: [testdb].
{u'DBInstanceId': u'rm-2zesz5774ud8s71i5', u'BackupMode': u'FULL', u'MigrateTaskId': u'106123', u'RequestId': u'5835B154-2EE3-4059-BFC4-6F798CDCE546', u'TaskId': u'68244699', u'DBName': u'testdb'}
--I'm sleeping for 2 seconds....
************************************************************************
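Each migration request in the output above returns a JSON document. A short sketch of pulling out the fields worth logging from one such response (the dictionary literal is copied from the first sample response above):

```python
# Sample response from a CreateMigrateTask call, as shown in the output above.
response = {
    "DBInstanceId": "rm-2zesz5774ud8s71i5", "BackupMode": "FULL",
    "MigrateTaskId": "106121", "RequestId": "67E0DD7F-7219-4F67-AAE7-B27273921303",
    "TaskId": "68244691", "DBName": "TestCollation",
}

# MigrateTaskId identifies the task for later progress polling; DBName and
# DBInstanceId say which database is being restored on which instance.
summary = "task %s: restoring [%s] on [%s]" % (
    response["MigrateTaskId"], response["DBName"], response["DBInstanceId"])
print(summary)
```

The MigrateTaskId can later be fed to the DescribeMigrateTasksForSQLServer API (imported but not yet used by the script) to monitor the task's progress.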

Summary

This article shows how to migrate instance-level databases, whether on a user's offline SQL Server instances or on self-created instances on ECS, to the cloud in just one click. This solution significantly improves migration efficiency and simplifies the migration operations, providing a much better migration-to-cloud experience.

Reference: https://www.alibabacloud.com/blog/sql-server-best-practices-migrating-multiple-instance-level-databases-to-apsaradb-rds-for-sql-server_594483?spm=a2c41.12584191.0.0
