PolarDB for MySQL
Function | Description |
---|---|
Schema Migration | If the target schema does not exist, BladePipe will automatically generate and execute CREATE statements based on the source metadata and the mapping rule. |
Full Data Migration | Migrate data by sequentially scanning data in tables and writing it in batches to the target database. |
Incremental Data Sync | Sync of common DML like INSERT, UPDATE, DELETE is supported. |
Data Verification and Correction | Verify all existing data. Optionally, you can correct the inconsistent data based on verification results. Scheduled DataTasks are supported. |
Subscription Modification | Add, delete, or modify the subscribed tables with support for historical data migration. For more information, see Modify Subscription. |
Position Resetting | Reset positions by file position or timestamp. Allow re-consumption of incremental data logs in a past period or since a specific Binlog file and position. |
Table Name Mapping | Supported mapping rules: keep the name the same as in the Source, convert the name to lowercase, convert the name to uppercase, or truncate a trailing "_digit" suffix from the name. |
DDL Synchronization | Sync of DDL statements to create, delete, and modify tables is supported. |
Metadata Retrieval | Retrieve the target metadata with filtering conditions or target primary keys set from the source table. |
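The table name mapping rules above can be sketched as a small function. This is an illustrative model only, not BladePipe's actual implementation; the rule names are hypothetical:

```python
import re

def map_table_name(name: str, rule: str) -> str:
    """Apply one of the table name mapping rules described above.

    Rule names here are hypothetical, for illustration:
      KEEP      - keep the name the same as in the Source
      LOWERCASE - convert the name to lowercase
      UPPERCASE - convert the name to uppercase
      TRUNCATE  - truncate a trailing "_digit" suffix (e.g. sharded tables)
    """
    if rule == "KEEP":
        return name
    if rule == "LOWERCASE":
        return name.lower()
    if rule == "UPPERCASE":
        return name.upper()
    if rule == "TRUNCATE":
        # "orders_3" -> "orders"; names without a digit suffix are unchanged
        return re.sub(r"_\d+$", "", name)
    raise ValueError(f"unknown rule: {rule}")

print(map_table_name("Orders_12", "TRUNCATE"))   # -> Orders
print(map_table_name("Orders_12", "LOWERCASE"))  # -> orders_12
```

The truncate rule is what makes sharded source tables (`orders_0`, `orders_1`, …) land in a single target table.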
Advanced Functions
Function | Description |
---|---|
Incremental Data Write Conflict Resolution Rule | IGNORE: Ignore conflicts (skip writing), REPLACE: Replace the entire row in case of conflicts. |
Handling of Zero Value for Time | Allow configuring how zero time values are handled for the different time data types, to prevent errors when writing to the Target. |
Online DDL Compatibility | Support GH-OST, PT-OSC, Aliyun DMS Online DDL. |
Scheduled Full Data Migration | For more information, see Create Scheduled Full Data DataJob. |
Whole Database Sync | Support sync of DDLs to create, delete, and modify tables as well as the data. For more information, see Sync Whole Database. |
Custom Code | For more information, see Custom Code Processing, Debug Custom Code and Logging in Custom Code. |
Data Filtering Conditions | Support data filtering using WHERE conditions, with SQL-92 as the SQL language. For more information, see Data Filtering. |
Setting Target Primary Key | Change the primary key to another field to facilitate data aggregation and other operations. |
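The two incremental write conflict resolution rules (IGNORE and REPLACE) can be modeled in a few lines. This is a toy sketch of the behavior against an in-memory table, not BladePipe's actual write path:

```python
def apply_row(table: dict, pk, row: dict, strategy: str = "IGNORE") -> None:
    """Write one row into an in-memory 'table' keyed by primary key,
    resolving conflicts as described above:
      IGNORE  - skip writing when the primary key already exists
      REPLACE - replace the entire existing row on conflict
    """
    if pk in table:
        if strategy == "IGNORE":
            return                # conflict: skip the write
        if strategy == "REPLACE":
            table[pk] = row       # conflict: replace the whole row
            return
        raise ValueError(f"unknown strategy: {strategy}")
    table[pk] = row               # no conflict: plain insert

t = {1: {"id": 1, "name": "old"}}
apply_row(t, 1, {"id": 1, "name": "new"}, "IGNORE")
print(t[1]["name"])  # -> old (conflict was skipped)
apply_row(t, 1, {"id": 1, "name": "new"}, "REPLACE")
print(t[1]["name"])  # -> new (row was replaced)
```

In MySQL terms, these roughly correspond to `INSERT IGNORE` and `REPLACE INTO` semantics.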
Limits
Limit | Description |
---|---|
Character Set | Support utf8, utf8mb4, latin1. Other encodings have not been tested. |
Prerequisites
Prerequisite | Description |
---|---|
Permissions for Account | A privileged account, or a normal account with read and write permissions on the PolarDB for MySQL instance. |
Enable Binlog | In the PolarDB for MySQL console, go to Details > Configuration and Management > Parameter Configuration, and set the value of loose_polar_log_bin to true. |
Parameters
Parameter | Description |
---|---|
parseBinlogParallel | Number of threads for parallel parsing of Binlog in Incremental DataJobs. |
parseBinlogBufferSize | Size of the circular buffer for parsing Binlog in Incremental DataJobs. |
maxTransactionSize | Maximum number of data rows per transaction. If exceeded, the transaction will be split and flushed in parts. |
limitThroughputMb | Limit the throughput (in MB) of incremental Binlog consumption. |
extraDDL | Support synchronization of additional DDL, including PT, GHOST, ALI_DMS, and PT_GHOST. |
needJsonEscape | Escape special characters in JSON to be written to the target database. |
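The maxTransactionSize behavior, splitting an oversized transaction and flushing it in parts, can be sketched as simple chunking. An illustrative model only; the actual flushing logic lives inside BladePipe:

```python
def split_transaction(rows: list, max_transaction_size: int):
    """Split a large transaction into chunks of at most
    max_transaction_size rows, mirroring how maxTransactionSize
    splits and flushes oversized transactions in parts."""
    for i in range(0, len(rows), max_transaction_size):
        yield rows[i:i + max_transaction_size]

big_txn = list(range(10))          # a 10-row transaction
chunks = list(split_transaction(big_txn, 4))
print([len(c) for c in chunks])    # -> [4, 4, 2]
```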
Tips: To modify the general parameters, see General Parameters and Functions.
Prerequisites
Prerequisite | Description |
---|---|
Permissions for Account | A privileged account, or a normal account with read and write permissions on the PolarDB for MySQL instance. |
Port Preparation | Allow the migration and sync node (Worker) to connect to the PolarDB for MySQL port (e.g., 3306). |
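A quick pre-flight check of the port prerequisite can be done from the Worker host. The host name below is a placeholder; this is a generic connectivity check, not a BladePipe tool:

```python
import socket

def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within
    the timeout -- a simple way to verify the Worker can reach the
    database port before creating a DataJob."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (placeholder host): check the database port from the Worker.
# print(can_connect("your-polardb-host", 3306))
```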
Parameters
Parameter | Description |
---|---|
keyConflictStrategy | Strategy for handling primary key conflicts during write in an Incremental DataTask: IGNORE (skip writing the conflicting row) or REPLACE (replace the entire row). |
dstWholeReplace | Convert INSERT and UPDATE operations into full row replacement in the Target. |
mergeMaxInsertSize | When the parallel strategy is set to TABLE_IMPORT_OPTIMIZE, the maximum number of rows merged into one batch for the same table (to improve parallelism). |
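The batching behavior of mergeMaxInsertSize can be sketched as grouping consecutive rows by table, capped at the configured batch size. An illustrative model with made-up data shapes; the real merging logic is internal to BladePipe:

```python
def merge_inserts(rows, merge_max_insert_size: int):
    """Group consecutive (table, data) rows into per-table batches of
    at most merge_max_insert_size rows, as the parameter above does
    under the TABLE_IMPORT_OPTIMIZE parallel strategy."""
    batches = []
    current_table, current = None, []
    for table, data in rows:
        # start a new batch on a table change or when the cap is hit
        if table != current_table or len(current) >= merge_max_insert_size:
            if current:
                batches.append((current_table, current))
            current_table, current = table, []
        current.append(data)
    if current:
        batches.append((current_table, current))
    return batches

rows = [("t1", 1), ("t1", 2), ("t1", 3), ("t2", 4)]
print(merge_inserts(rows, 2))  # -> [('t1', [1, 2]), ('t1', [3]), ('t2', [4])]
```

Larger merged batches mean fewer round trips per table; the cap keeps any single INSERT from growing unbounded.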
Tips: To modify the general parameters, see General Parameters and Functions.