Alibaba Cloud Tablestore: A Case Study on Backing up a Massive Volume of Structured Data

Requirements

Tablestore Backup and Restoration Solution

Case Study: Tablestore Backup and Restoration Solution

Determining a Backup Plan and Policy
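
The plan used here combines a one-off full backup with continuous incremental backup: a single tunnel of type BaseAndStream delivers both the table's existing data and every subsequent change, and each batch of records is written to OSS as a CSV file.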

Using the Tunnel Service SDK to Write Code
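
The first step is to create the tunnel. The snippet below requests a BaseAndStream tunnel for the table and prints the returned tunnel ID, which the backup worker uses later to connect: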

private static void createTunnel(TunnelClient client, String tunnelName) {
    // BaseAndStream tunnels export the table's existing (full) data first,
    // then keep delivering incremental changes.
    CreateTunnelRequest request = new CreateTunnelRequest(TableName, tunnelName, TunnelType.BaseAndStream);
    CreateTunnelResponse resp = client.createTunnel(request);
    System.out.println("RequestId: " + resp.getRequestId());
    System.out.println("TunnelId: " + resp.getTunnelId());
}
// Serialize byte[] values as Base64 strings and longs as strings so that
// binary primary keys and 64-bit values survive the JSON round trip intact.
this.gson = new GsonBuilder().registerTypeHierarchyAdapter(byte[].class, new ByteArrayToBase64TypeAdapter())
        .setLongSerializationPolicy(LongSerializationPolicy.STRING).create();
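
The ByteArrayToBase64TypeAdapter referenced above is not shown in the original post. A minimal sketch of what it could look like, built on Gson's JsonSerializer/JsonDeserializer interfaces and java.util.Base64:

import java.lang.reflect.Type;
import java.util.Base64;
import com.google.gson.*;

// Hypothetical adapter: encodes byte[] values as Base64 strings in JSON
// on the backup path and decodes them back on the restore path.
public class ByteArrayToBase64TypeAdapter
        implements JsonSerializer<byte[]>, JsonDeserializer<byte[]> {
    @Override
    public JsonElement serialize(byte[] src, Type typeOfSrc, JsonSerializationContext context) {
        return new JsonPrimitive(Base64.getEncoder().encodeToString(src));
    }

    @Override
    public byte[] deserialize(JsonElement json, Type typeOfT, JsonDeserializationContext context)
            throws JsonParseException {
        return Base64.getDecoder().decode(json.getAsString());
    }
}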
// Going from ByteArrayOutputStream to ByteArrayInputStream costs one array copy;
// a pipe or an NIO channel could avoid it.
public void streamRecordsToOSS(List<StreamRecord> records, String bucketName, String filename, boolean isNewFile) {
    if (records.size() == 0) {
        LOG.info("No stream records, skip it!");
        return;
    }
    try {
        CsvWriterSettings settings = new CsvWriterSettings();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        CsvWriter writer = new CsvWriter(out, settings);
        if (isNewFile) {
            // A new backup file starts with the CSV header row.
            LOG.info("Write csv header, filename {}", filename);
            List<String> headers = Arrays.asList(RECORD_TIMESTAMP, RECORD_TYPE, PRIMARY_KEY, RECORD_COLUMNS);
            writer.writeHeaders(headers);
        }
        List<String[]> totalRows = new ArrayList<String[]>();
        LOG.info("Write stream records, num: {}", records.size());
        for (StreamRecord record : records) {
            // One CSV row per record: timestamp, operation type, and the
            // JSON-encoded primary key and attribute columns.
            String timestamp = String.valueOf(record.getSequenceInfo().getTimestamp());
            String recordType = record.getRecordType().name();
            String primaryKey = gson.toJson(
                    TunnelPrimaryKeyColumn.genColumns(record.getPrimaryKey().getPrimaryKeyColumns()));
            String columns = gson.toJson(TunnelRecordColumn.genColumns(record.getColumns()));
            totalRows.add(new String[] {timestamp, recordType, primaryKey, columns});
        }
        writer.writeStringRowsAndClose(totalRows);
        // Upload the finished CSV as a single OSS object.
        ossClient.putObject(bucketName, filename, new ByteArrayInputStream(out.toByteArray()));
    } catch (Exception e) {
        LOG.error("Failed to back up stream records to OSS", e);
    }
}
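
Each stream record thus becomes one CSV row holding the change timestamp, the operation type (PUT, UPDATE, or DELETE), the JSON-encoded primary key, and the JSON-encoded attribute columns; the restore path later parses exactly this layout.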

Executing the Backup Policy and Monitoring the Backup Process
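
With the tunnel in place, the backup job itself is a small standalone program: it builds Tablestore, OSS, and tunnel clients from the configuration, then starts a TunnelWorker that keeps pulling records and handing them to the CSV writer until it is shut down. Progress can be monitored through the worker logs and the tunnel's consumption progress shown in the Tablestore console.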

public class TunnelBackup {
    private final ConfigHelper config;
    private final SyncClient syncClient;
    private final CsvHelper csvHelper;
    private final OSSClient ossClient;

    public TunnelBackup(ConfigHelper config) {
        this.config = config;
        syncClient = new SyncClient(config.getEndpoint(), config.getAccessId(), config.getAccessKey(),
                config.getInstanceName());
        ossClient = new OSSClient(config.getOssEndpoint(), config.getAccessId(), config.getAccessKey());
        csvHelper = new CsvHelper(syncClient, ossClient);
    }

    public void working() {
        TunnelClient client = new TunnelClient(config.getEndpoint(), config.getAccessId(), config.getAccessKey(),
                config.getInstanceName());
        OtsReaderConfig readerConfig = new OtsReaderConfig();
        // The worker pulls records from the tunnel and hands each batch to
        // OtsReaderProcessor, which writes them to OSS via CsvHelper.
        TunnelWorkerConfig workerConfig = new TunnelWorkerConfig(
                new OtsReaderProcessor(csvHelper, config.getOssBucket(), readerConfig));
        TunnelWorker worker = new TunnelWorker(config.getTunnelId(), client, workerConfig);
        try {
            worker.connectAndWorking();
        } catch (Exception e) {
            e.printStackTrace();
            worker.shutdown();
            client.shutdown();
        }
    }

    public static void main(String[] args) {
        TunnelBackup tunnelBackup = new TunnelBackup(new ConfigHelper());
        tunnelBackup.working();
    }
}
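
The post does not show OtsReaderProcessor. A minimal sketch, assuming it implements the Tunnel Service worker callback interface IChannelProcessor and simply forwards each batch to CsvHelper; the constructor mirrors the call above, and the per-batch file naming is a placeholder, not the author's scheme:

import com.alicloud.openservices.tablestore.tunnel.worker.IChannelProcessor;
import com.alicloud.openservices.tablestore.tunnel.worker.ProcessRecordsInput;

// Hypothetical sketch: bridges tunnel callbacks to the CSV/OSS writer.
public class OtsReaderProcessor implements IChannelProcessor {
    private final CsvHelper csvHelper;
    private final String ossBucket;
    private final OtsReaderConfig config;

    public OtsReaderProcessor(CsvHelper csvHelper, String ossBucket, OtsReaderConfig config) {
        this.csvHelper = csvHelper;
        this.ossBucket = ossBucket;
        this.config = config;
    }

    @Override
    public void process(ProcessRecordsInput input) {
        // Each callback delivers one batch of full or incremental records.
        String filename = "FullData-" + System.currentTimeMillis() + ".csv"; // placeholder naming
        csvHelper.streamRecordsToOSS(input.getRecords(), ossBucket, filename, true);
    }

    @Override
    public void shutdown() {
        // Release resources here if needed when the channel is closed.
    }
}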

Restoring the Files
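
Restoration is the reverse path: read a backup file from OSS, parse each CSV row back into a record, and write it into the target table.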

public class TunnelRestore {
    private final ConfigHelper config;
    private final SyncClient syncClient;
    private final CsvHelper csvHelper;
    private final OSSClient ossClient;

    public TunnelRestore(ConfigHelper config) {
        this.config = config;
        syncClient = new SyncClient(config.getEndpoint(), config.getAccessId(), config.getAccessKey(),
                config.getInstanceName());
        ossClient = new OSSClient(config.getOssEndpoint(), config.getAccessId(), config.getAccessKey());
        csvHelper = new CsvHelper(syncClient, ossClient);
    }

    // Read a CSV backup file from OSS and replay its rows into tableName.
    public void restore(String filename, String tableName) {
        csvHelper.parseStreamRecordsFromCSV(filename, tableName);
    }

    public static void main(String[] args) {
        TunnelRestore restore = new TunnelRestore(new ConfigHelper());
        restore.restore("FullData-1551767131130.csv", "testRestore");
    }
}
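
CsvHelper.parseStreamRecordsFromCSV is likewise not shown in the post. As a sketch of the write-back step only, assuming the JSON layout produced during backup, string-typed values, and hypothetical parsePrimaryKeys/parseColumns helpers, a single row could be replayed with the Tablestore PutRow API like this:

import com.alicloud.openservices.tablestore.SyncClient;
import com.alicloud.openservices.tablestore.model.*;

// Hypothetical sketch of the write-back step: rebuild the row from the
// JSON stored in the CSV and put it into the target table.
void restoreRow(SyncClient syncClient, String tableName, String primaryKeyJson, String columnsJson) {
    PrimaryKeyBuilder pkBuilder = PrimaryKeyBuilder.createPrimaryKeyBuilder();
    for (TunnelPrimaryKeyColumn pk : parsePrimaryKeys(primaryKeyJson)) { // assumed helper
        pkBuilder.addPrimaryKeyColumn(pk.getName(), PrimaryKeyValue.fromString(pk.getValue()));
    }
    RowPutChange rowChange = new RowPutChange(tableName, pkBuilder.build());
    for (TunnelRecordColumn col : parseColumns(columnsJson)) { // assumed helper
        rowChange.addColumn(col.getName(), ColumnValue.fromString(col.getValue()));
    }
    syncClient.putRow(new PutRowRequest(rowChange));
}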

Summary
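
Tablestore's Tunnel Service makes it possible to back up a table's full data and its incremental changes through a single BaseAndStream tunnel: a TunnelWorker streams records out, CsvHelper serializes them to CSV files in OSS, and TunnelRestore can later replay those files into a new table.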

Original Source:
