SQL Server Best Practices: Migrating Multiple Instance-level Databases to ApsaraDB RDS for SQL Server

By Wang Jianming, Senior Engineer

In earlier articles of this ApsaraDB RDS for SQL Server series, we have discussed Using Asymmetric Keys to Implement Column Encryption, Database Recovery Models and Backups, Monitoring Backup and Restore Progress, Using Filegroups to Implement a Cold and Hot Data Isolation Backup Solution, and Migrating to ApsaraDB RDS for SQL Server Using SDK in Just One Click.

Scenario Description

A user may have multiple SQL Server instances, and each instance may host tens of databases. For medium to large businesses, this can add up to thousands of databases that must be migrated to the cloud. The traditional database-level migration solution cannot meet the requirements of this scenario and is very inefficient.

To migrate a large batch of databases to ApsaraDB RDS for SQL Server, we need to simplify the migration steps and improve the migration-to-cloud efficiency.

Implementation Analysis

  1. Upload the full backup files of the databases on the user's offline instances to a folder in OSS.
  2. Traverse the full database backup files in that OSS folder.
  3. Create a migration-to-cloud task for each backup file.
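The three steps above can be sketched in a few lines. This is a minimal illustration only: the OSS listing is simulated by a plain list of object keys, and the helper name `plan_migrate_tasks` and the endpoint/bucket values are hypothetical; the full script below does the real work with oss2 and the RDS OPENAPI.

```python
OSS_ENDPOINT = "oss-cn-beijing.aliyuncs.com"   # hypothetical endpoint
OSS_BUCKET = "my-backup-bucket"                # hypothetical bucket name

def plan_migrate_tasks(object_keys, directory):
    """Steps 2 and 3: traverse the backup files under `directory` and build one
    migration task position string per file (OSSEndpoint:OSSBucket:OSSFileKey)."""
    tasks = []
    for key in object_keys:
        if not key.startswith(directory) or key == directory:
            continue  # not under the target folder
        if not key.endswith(".bak"):
            continue  # only full backup files qualify
        tasks.append("%s:%s:%s" % (OSS_ENDPOINT, OSS_BUCKET, key))
    return tasks

keys = [
    "Migration/OPENAPIDemo/TestMigration_FULL_20180518153544.bak",
    "Migration/OPENAPIDemo/readme.txt",
]
# one task for the .bak file; readme.txt is filtered out
print(plan_migrate_tasks(keys, "Migration/OPENAPIDemo/"))
```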

Input Parameters

access_key_id: Alibaba Cloud user access key ID
access_key_secret: Alibaba Cloud user access key secret
rds_instance_id: RDS SQL Server instance ID
oss_endpoint: OSS endpoint address
oss_bucket: OSS bucket name
directory: path to the user database backup files in OSS; pass in "/" if they are in the root directory
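The `directory` parameter is normalized before use: "/" means the bucket root, and any other value must end with "/" so it can serve as an OSS key prefix. A small standalone sketch of that rule (the helper name `normalize_directory` is illustrative, not part of the script):

```python
def normalize_directory(arg):
    """Return the directory as an OSS key prefix ending with '/'."""
    if arg == "/":
        return "/"             # OSS bucket root
    if arg.endswith("/"):
        return arg             # already a valid prefix
    return "%s/" % arg         # append the missing trailing slash

print(normalize_directory("Migration/OPENAPIDemo"))  # Migration/OPENAPIDemo/
```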

Specific Implementation


Source Code for Python SDK

#!/usr/bin/env python
# -*- coding: utf-8 -*-

# Script name: RDSSQLCreateMigrateTasksBatchly.py
# Author: jianming.wjm@alibaba-inc.com
# Create Date: 2018-05-17 19:27
# Modify Author: jianming.wjm@alibaba-inc.com
# Modify Date: 2018-05-19 21:43
# Language: Python 2.7.10
# Run platform: Mac OS X 10.12.6

# Purpose:
#   This script is for batch migration of user offline SQL Server databases to Alibaba Cloud RDS SQL Server.
#   The user's FULL backup files are already located in their OSS Bucket folder.
#   This script helps users migrate all the offline databases backed up under that OSS Bucket folder to RDS SQL Server.
#   We achieve this by calling the Alibaba Cloud RDS OPENAPI.

# Limitation:
#   RDS Edition: supports the RDS editions listed below
#     '2008r2',
#     '2012', '2012_web', '2012_std', '2012_ent', '2012_std_ha', '2012_ent_ha',
#     '2014_web', '2014_std', '2014_ent', '2014_std_ha', '2014_ent_ha',
#     '2016_web', '2016_std', '2016_ent', '2016_std_ha', '2016_ent_ha'

# Preparation:
#   1. python 2.7.x installed (I'm using 2.7.10)
#   2. pip install aliyun-python-sdk-rds
#   3. pip install oss2

# Usage:
#   Help: python RDSSQLCreateMigrateTasksBatchly.py -h
#   python ~/Downloads/RDSSQLCreateMigrateTasksBatchly.py -k <access_key_id> -s <access_key_secret> -i <rds_instance_id> -e <oss_endpoint> -b <oss_bucket> -d <directory>
#
#   Variables description:
#     access_key_id:     Alibaba Cloud user access key id, e.g. LTAIKeRvKPRwkaU3
#     access_key_secret: Alibaba Cloud user access key secret, e.g. BbZ7xhrertQ0dfgMqfAZPByhnp4G2k
#     rds_instance_id:   RDS SQL Server instance ID, e.g. rm-2zesz4564ud8s7123
#     oss_endpoint:      OSS Endpoint address, e.g. oss-cn-beijing.aliyuncs.com
#     oss_bucket:        OSS Bucket name, e.g. atp-test-on-ecs
#     directory:         Sub folder name under the OSS Bucket, e.g. Migration/OPENAPIDemo

# Output: there are two output sections, one echoing the input variables and the other
# showing the migration requests and responses.
import json
import os
import sys, getopt
import re
import oss2
import time

from aliyunsdkcore.client import AcsClient
from aliyunsdkrds.request.v20140815 import DescribeMigrateTasksForSQLServerRequest
from aliyunsdkrds.request.v20140815 import DescribeDBInstanceAttributeRequest
from aliyunsdkrds.request.v20140815 import CreateMigrateTaskRequest
def main(argv):
    access_key_id = access_key_secret = rds_instance_id = oss_endpoint = oss_bucket = directory = ''

    # usage help
    try:
        opts, args = getopt.getopt(argv, "hk:s:i:e:b:d:", ["access_key_id=", "access_key_secret=", "rds_instance_id=", "oss_endpoint=", "oss_bucket=", "directory="])
    except getopt.GetoptError:
        print ('%s -k <access_key_id> -s <access_key_secret> -i <rds_instance_id> -e <oss_endpoint> -b <oss_bucket> -d <directory>' % (sys.argv[0]))
        sys.exit(2)

    for opt, arg in opts:
        if opt == '-h':
            print ('%s -k <access_key_id> -s <access_key_secret> -i <rds_instance_id> -e <oss_endpoint> -b <oss_bucket> -d <directory>' % (sys.argv[0]))
            sys.exit()
        elif opt in ("-k", "-K", "--access_key_id"):
            access_key_id = arg
        elif opt in ("-s", "-S", "--access_key_secret"):
            access_key_secret = arg
        elif opt in ("-i", "-I", "--rds_instance_id"):
            rds_instance_id = arg
        elif opt in ("-e", "-E", "--oss_endpoint"):
            oss_endpoint = arg
        elif opt in ("-b", "-B", "--oss_bucket"):
            oss_bucket = arg
        elif opt in ("-d", "-D", "--directory"):
            if arg.endswith("/"):
                directory = arg
            else:
                directory = str("%s/" % arg)

    # show the input parameters
    print ("\n*********************Input variables*************************************\n" \
        "access_key_id = %s\naccess_key_secret = %s\nrds_instance_id = %s\noss_endpoint = %s\noss_bucket = %s\ndirectory = %s\n" \
        % (access_key_id, access_key_secret, rds_instance_id, oss_endpoint, oss_bucket, directory))

    # check RDS & OSS regions to make sure they are located in the same region.
    success, rds_details = rds_instnace_details(access_key_id, access_key_secret, rds_instance_id)
    if not success:
        print ("%s" % rds_details)
        sys.exit()

    rds_db_version, rds_engine, rds_region = rds_details["EngineVersion"], rds_details["Engine"], rds_details["RegionId"]

    success, oss_details = oss_bucket_details(access_key_id, access_key_secret, oss_endpoint, oss_bucket)
    if not success:
        print ("%s" % oss_details)
        sys.exit()

    oss_region = oss_details.location

    # supported db version checking.
    if rds_engine != 'SQLServer' \
        or rds_db_version not in ['2008r2', '2012', '2012_web', '2012_std', '2012_ent', '2012_std_ha', '2012_ent_ha',
                                  '2014_web', '2014_std', '2014_ent', '2014_std_ha', '2014_ent_ha',
                                  '2016_web', '2016_std', '2016_ent', '2016_std_ha', '2016_ent_ha']:
        print ("RDS engine is not supported; this script is only for the RDS SQL Server engine.")
        sys.exit()

    # RDS & OSS Bucket are not under the same region.
    if not oss_region.endswith(rds_region):
        print ("RDS & OSS Bucket are not located in the same region.")
        sys.exit()

    # RDS & OSS Bucket are in the same region.
    print ("\n*********************Migration requests**********************************")
    full_migrate(access_key_id, access_key_secret, rds_instance_id, oss_endpoint, oss_bucket, directory, rds_db_version)
    print ("************************************************************************")
def rds_instnace_details(access_key_id, access_key_secret, rds_instance_id):
    """
    get the RDS instance details: engine, engine version and region.
    """
    request = DescribeDBInstanceAttributeRequest.DescribeDBInstanceAttributeRequest()
    request.set_DBInstanceId(rds_instance_id)
    success, response = _send_request(access_key_id, access_key_secret, request)

    if success:
        if response["Items"]["DBInstanceAttribute"]:
            # response["Items"]["DBInstanceAttribute"][0]["EngineVersion"]
            # response["Items"]["DBInstanceAttribute"][0]["RegionId"]
            return True, response["Items"]["DBInstanceAttribute"][0]
        return False, "Couldn't find the specified RDS [%s]." % rds_instance_id

    return False, response
def full_migrate(access_key_id, access_key_secret, rds_instance_id, oss_endpoint, oss_bucket, directory, rds_db_version):
    """
    this supports full backup file migration.
    """
    # get all backup objects under sub_folder
    key_parts_list, do = oss_list_objects(access_key_id, access_key_secret, oss_endpoint, oss_bucket, directory), 0

    # foreach object
    for key_parts in key_parts_list:
        print ("\n--%s. [%s] migrate to your RDS: [%s] and the database name will be: [%s]." % (do, key_parts.file_key, rds_instance_id, key_parts.db_name))
        do += 1

        # print ("%s" % key_parts.sign_url)
        request = CreateMigrateTaskRequest.CreateMigrateTaskRequest()
        request.set_DBInstanceId(rds_instance_id)
        request.set_DBName(key_parts.db_name)
        request.set_BackupMode("FULL")
        request.set_IsOnlineDB(True)
        if rds_db_version == '2008r2':
            request.set_DBName(key_parts.db_name.lower())
            request.set_OSSUrls(key_parts.sign_url)
        else:
            request.set_OSSUrls("")
            request.set_OssObjectPositions("%s:%s:%s" % (oss_endpoint, oss_bucket, key_parts.file_key))  # OSSEndpoint:OSSBucket:OSSFileKey
            request.set_CheckDBMode("SyncExecuteDBCheck")

        success, response = _send_request(access_key_id, access_key_secret, request)
        if success:
            print response

            print ("--I'm sleeping for 2 seconds....")
            time.sleep(2)
        else:
            print ("OPENAPI Response Error !!!!! : %s" % response)
def _send_request(access_key_id, access_key_secret, request, region='cn-hangzhou'):
    """
    send request to OPENAPI
    and get the response details.
    """
    request.set_accept_format('json')
    try:
        clt = AcsClient(access_key_id, access_key_secret, region)
        response_str = clt.do_action_with_exception(request)
        response_detail = json.loads(response_str)
        return True, response_detail
    except Exception as e:
        return False, e
class oss_key_parts(object):
    """
    OSS object file key parts, e.g. for a whole object file key that looks like:
        Migration/OPENAPIDemo/TestMigration_FULL_20180518153544.bak

    :param str file_key: OSS object file key.
    :param str sub_folder: OSS sub folder name, such as 'Migration/OPENAPIDemo'
    :param str file_name: OSS object file name, such as 'TestMigration_FULL_20180518153544.bak'
    :param str db_name: database name, such as 'TestMigration'
    :param str bak_type: backup type, such as 'FULL'
    :param str date: backup date time, such as '20180518153544'
    :param str ext: backup file extension, such as 'bak'
    """
    def __init__(self):
        self.file_key = ''
        self.sub_folder = ''
        self.file_name = ''
        self.db_name = ''
        self.bak_type = ''
        self.date = ''
        self.ext = ''
        self.sign_url = ''
def oss_key_parse(file_key):
    """
    parse the OSS file key string into oss key parts
    and return an oss_key_parts object.
    """
    key_parts = oss_key_parts()
    try:
        if file_key.find('/') >= 0:
            file_key_parts = file_key.rsplit('/', 1)
        else:
            file_key_parts = ['/', file_key]

        key_parts.file_key = file_key
        key_parts.sub_folder = file_key_parts[0]
        key_parts.file_name = file_key_parts[1]

        key_list = file_key_parts[1].rsplit('_', 2)
        key_parts.db_name, \
        key_parts.bak_type, \
        key_parts.date, \
        key_parts.ext = key_list[0], \
                        key_list[1], \
                        key_list[2].rsplit('.', 1)[0], \
                        key_list[2].rsplit('.', 1)[1]
    except Exception as e:
        pass

    return key_parts


def oss_list_objects(access_key_id, access_key_secret, oss_endpoint, oss_bucket, directory):
    """
    list all OSS objects under the specified sub folder
    and return the objects list.
    """
    bucket = oss2.Bucket(oss2.Auth(access_key_id, access_key_secret), oss_endpoint, oss_bucket)
    key_parts_list = []

    # OSS Bucket root
    if directory == '/':
        for object_info in oss2.ObjectIterator(bucket, delimiter='/'):
            # we need the backup files, not folders
            if not object_info.is_prefix():
                key_part = oss_key_parse(object_info.key)

                if key_part.ext in ['bak', 'trn', 'log', 'diff']:
                    # get object sign_url
                    key_part.sign_url = bucket.sign_url('GET', object_info.key, 24 * 3600)
                    key_parts_list.append(key_part)
                else:
                    print ("Warning!!!!!, [%s] is not a valid backup file, filtered." % (key_part.file_key))
    else:
        for i, object_info in enumerate(oss2.ObjectIterator(bucket, prefix=directory)):
            # we need the backup files, not folders
            if not object_info.is_prefix():
                if object_info.key.startswith(directory) and object_info.key != directory:
                    key_part = oss_key_parse(object_info.key)

                    if key_part.ext in ['bak', 'trn', 'log', 'diff']:
                        # get object sign_url
                        key_part.sign_url = bucket.sign_url('GET', object_info.key, 24 * 3600)
                        key_parts_list.append(key_part)
                    else:
                        print ("Warning!!!!!, [%s] is not a valid backup file, filtered." % (key_part.file_key))

    if not key_parts_list:
        print ("There is no backup file on OSS Bucket [%s] under the [%s] folder, please check." % (oss_bucket, directory))

    return key_parts_list
def oss_bucket_details(access_key_id, access_key_secret, oss_endpoint, oss_bucket):
    """
    get the OSS Bucket details, including its location (region).
    """
    try:
        bucket = oss2.Bucket(oss2.Auth(access_key_id, access_key_secret), oss_endpoint, oss_bucket)
        bucket_info = bucket.get_bucket_info()
        # print ("bucket name: %s, region: %s" % (bucket_info.name, bucket_info.location))
        return True, bucket_info
    except Exception as e:
        return False, e
if __name__ == '__main__':
    main(sys.argv[1:])
Instead of copying the preceding code, you can also download the Python script directly.
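The script's `oss_key_parse` helper relies on the backup file naming convention `<DatabaseName>_<BackupType>_<Timestamp>.<ext>`. A standalone sketch of that parse (the function name `parse_backup_key` is illustrative, not part of the script):

```python
def parse_backup_key(file_key):
    """Split an OSS file key into (db_name, bak_type, date, ext)."""
    file_name = file_key.rsplit('/', 1)[-1]             # drop the folder prefix
    db_name, bak_type, tail = file_name.rsplit('_', 2)  # split from the right
    date, ext = tail.rsplit('.', 1)
    return db_name, bak_type, date, ext

print(parse_backup_key("Migration/OPENAPIDemo/TestMigration_FULL_20180518153544.bak"))
# ('TestMigration', 'FULL', '20180518153544', 'bak')
```

Splitting from the right with `rsplit` keeps database names containing underscores intact, since only the last two `_`-separated segments are the backup type and timestamp.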

Implementing the Migration

  1. View Help
  2. Example
  3. Output Results

View Help

python ~/Downloads/RDSSQLCreateMigrateTasksBatchly.py -h
~/Downloads/RDSSQLCreateMigrateTasksBatchly.py -k <access_key_id> -s <access_key_secret> -i <rds_instance_id> -e <oss_endpoint> -b <oss_bucket> -d <directory>

Example

The following is a specific example:
python ~/Downloads/RDSSQLCreateMigrateTasksBatchly.py -k LTAIQazXKPRwwErT -s BMkIUhroubQOLpOMqfA09IKlqp4G2k -i rm-2zesz5774ud8s71i5 -e oss-cn-beijing.aliyuncs.com -b atp-test-on-ecs -d Migration/OPENAPIDemo

Output Results

  1. The first part shows the input parameters: all the parameters you entered, so that you can check them for input errors.
  2. The second part shows the prompt messages: which backup file is migrated to which database on which instance.

Here is the output of an example run:

*********************Input variables*************************************
access_key_id = LTAIQazXKPRwwErT
access_key_secret = BMkIUhroubQOLpOMqfA09IKlqp4G2k
rds_instance_id = rm-2zesz5774ud8s71i5
oss_endpoint = oss-cn-beijing.aliyuncs.com
oss_bucket = atp-test-on-ecs
directory = Migration/OPENAPIDemo/
*********************Migration requests**********************************

--0. [Migration/OPENAPIDemo/TestCollation_FULL_20180523225534.bak] migrate to your RDS: [rm-2zesz5774ud8s71i5] and the database name will be: [TestCollation].
{u'DBInstanceId': u'rm-2zesz5774ud8s71i5', u'BackupMode': u'FULL', u'MigrateTaskId': u'106121', u'RequestId': u'67E0DD7F-7219-4F67-AAE7-B27273921303', u'TaskId': u'68244691', u'DBName': u'TestCollation'}
--I'm sleeping for 2 seconds....
--1. [Migration/OPENAPIDemo/TestMigration_FULL_20180523225534.bak] migrate to your RDS: [rm-2zesz5774ud8s71i5] and the database name will be: [TestMigration].
{u'DBInstanceId': u'rm-2zesz5774ud8s71i5', u'BackupMode': u'FULL', u'MigrateTaskId': u'106122', u'RequestId': u'0916CD14-861B-4BF7-A68A-409E3996B0D3', u'TaskId': u'68244695', u'DBName': u'TestMigration'}
--I'm sleeping for 2 seconds....
--2. [Migration/OPENAPIDemo/testdb_FULL_20180523225534.bak] migrate to your RDS: [rm-2zesz5774ud8s71i5] and the database name will be: [testdb].
{u'DBInstanceId': u'rm-2zesz5774ud8s71i5', u'BackupMode': u'FULL', u'MigrateTaskId': u'106123', u'RequestId': u'5835B154-2EE3-4059-BFC4-6F798CDCE546', u'TaskId': u'68244699', u'DBName': u'testdb'}
--I'm sleeping for 2 seconds....



Follow me to keep abreast of the latest technology news, industry insights, and developer trends. Alibaba Cloud website: https://www.alibabacloud.com