Oracle
Function | Description |
---|---|
Schema Migration | If the target schema does not exist, BladePipe will automatically generate and execute CREATE statements based on the source metadata and the mapping rule. |
Full Data Migration | Migrate data by sequentially scanning data in tables and writing it in batches to the target database. |
Incremental Data Sync | Sync common DML operations, including INSERT, UPDATE, and DELETE. |
Data Verification and Correction | Verify all existing data. Optionally, you can correct the inconsistent data based on verification results. Scheduled DataTasks are supported. |
Subscription Modification | Add, delete, or modify the subscribed tables with support for historical data migration. For more information, see Modify Subscription. |
Position Resetting | Reset positions by file position or timestamp. Allow re-consumption of incremental data logs from a past period or from a specific Redo log file and position. |
Table Name Mapping | Supported mapping rules: keep the name the same as in Source, convert the name to lowercase, convert the name to uppercase, or truncate the "_digit" suffix from the name. |
Metadata Retrieval | Retrieve the target metadata with filtering conditions or target primary keys set from the source table. |
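To illustrate the Schema Migration and Table Name Mapping behavior above, here is a hypothetical sketch (table and column names are invented; the actual generated DDL depends on the source metadata, the chosen mapping rule, and the target database's type system). With the "convert to lowercase" rule, a source table `APP.ORDERS` might yield a target statement like:

```sql
-- Hypothetical DDL generated for source table APP.ORDERS with the
-- lowercase table name mapping rule; types shown unchanged for simplicity.
CREATE TABLE app.orders (
  id     NUMBER        NOT NULL,
  amount NUMBER(10, 2),
  PRIMARY KEY (id)
);
```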
Advanced Functions
Function | Description |
---|---|
Automatic Dictionary Creation | When using offline dictionaries to parse Oracle Redo files, the dictionary is automatically created during DataJob creation. |
Removal of Target Data before Full Data Migration | Remove the existing data in the Target before running the Full Data Migration. Applicable to DataJob reruns and scheduled Full Data Migrations. |
Recreating Target Table | Recreate target tables before running the Full Data Migration. Applicable to DataJob reruns and scheduled Full Data Migrations. |
Incremental Data Write Conflict Resolution Rule | IGNORE: ignore primary key conflicts and skip the write; REPLACE: replace the entire row on a primary key conflict. |
Handling of Zero Value for Time | Allow setting zero value for time to different data types to prevent errors when writing to the Target. |
Scheduled Full Data Migration | For more information, see Create Scheduled Full Data DataJob. |
Custom Code | For more information, see Custom Code Processing, Debug Custom Code and Logging in Custom Code. |
Adding Virtual Columns | Support adding custom virtual columns with fixed values, such as region, ID, etc. |
Setting Target Primary Key | Change the primary key to another field to facilitate data aggregation and other operations. |
Data Filtering Conditions | Support data filtering using WHERE conditions, with SQL-92 as the SQL language. For more information, see Data Filtering. |
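As a sketch of the Data Filtering Conditions above (the column names are illustrative, not from this document), a SQL-92 WHERE condition might look like:

```sql
-- Illustrative filter: only rows matching this condition are migrated and synced.
-- status and region are hypothetical columns on the subscribed table.
status = 'COMPLETED' AND region = 'EU'
```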
Limits
Limit | Description |
---|---|
Incremental Data Sync Performance | Due to LogMiner performance limits and the lack of parallel analysis in BladePipe, incremental sync throughput is benchmarked at about 3,000 records per second. |
Data Types | BLOB and its derived types are not supported. |
Prerequisites
Prerequisite | Description |
---|---|
Permissions for Account | |
Incremental Data Sync Preparation | |
Port Preparation | Allow the migration and sync node (Worker) to connect to the Oracle port (e.g., 1521). |
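The exact account permissions and incremental sync preparation steps are not listed here. As a rough sketch only, a LogMiner-based incremental sync typically requires archived logging, supplemental logging, and LogMiner-related privileges; the statements below are a common baseline (the `bladepipe` account name is an example) and should be verified against your Oracle version and the official permission checklist:

```sql
-- Check that archived logging is on (LogMiner needs it for incremental sync).
SELECT LOG_MODE FROM V$DATABASE;

-- Enable supplemental logging so redo records carry full row images.
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

-- Typical privileges for a LogMiner account (account name is an example).
GRANT CREATE SESSION TO bladepipe;
GRANT SELECT ANY TABLE TO bladepipe;
GRANT EXECUTE ON DBMS_LOGMNR TO bladepipe;
GRANT LOGMINING TO bladepipe;              -- Oracle 12c and later
GRANT SELECT ON V_$DATABASE TO bladepipe;
GRANT SELECT ON V_$LOGMNR_CONTENTS TO bladepipe;
```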
Parameters
Parameter | Description |
---|---|
fullFetchSize | Fetch size for scanning full data. |
eventStoreSize | Cache size for parsed incremental events. |
logminerUser | User name for connecting to Oracle to execute LogMiner SQL. |
logminerPasswd | Password for connecting to Oracle to execute LogMiner SQL. |
logminerConnectType | Connection mode to Oracle (PDB) for executing LogMiner SQL: ORACLE_SID or ORACLE_SERVICE. |
logminerSidOrService | SID or service name for connection to Oracle (PDB) to execute LogMiner SQL. |
parseRedoSqlParallel | Number of threads for parallel parsing of LogMiner data. |
parseRedoSqlBufferSize | Size of the circular buffer for parsing LogMiner data. |
redoFetchSize | Number of rows of LogMiner analyzed data to fetch each time. |
redoOfferTransMaxSize | Maximum number of unconsumed but committed transactions in cache. |
oraMiningSessionPauseSec | Interval between LogMiner sessions, in seconds. |
maxEventCountPerTxInMem | Maximum number of events per transaction in memory. |
logMiningScnStep | Analysis range specified when Oracle LogMiner analyzes redo logs. |
abandonUnCommitTxTimeoutSec | Automatically abandon transactions that have not been committed for longer than the specified timeout. |
restartTxWithDataTimeoutSec | Automatically restart DataJobs for transactions with data changes that have not been committed for longer than the specified timeout. |
oraUseOnlineDic | Whether to use online logs; false means using offline logs, which may put more stress on Oracle. |
oraReleaseIntervalSec | Interval for recreating connection for analysis to release Oracle server resources. |
fallBackScnStep | Distance to keep from the latest Redo log data; 0 means to keep right behind the latest Redo log data. |
sqlCaseConversionEnabled | Whether to enable DDL case conversion (according to the default case rules of the current database). |
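Several of the parameters above (logMiningScnStep, fallBackScnStep) are expressed in SCN units. When tuning them, the current SCN and the SCN ranges of the redo logs can be inspected directly on the source via standard Oracle views (queries shown for reference only):

```sql
-- Current SCN of the database.
SELECT CURRENT_SCN FROM V$DATABASE;

-- SCN ranges covered by the online redo logs, useful when sizing
-- logMiningScnStep and fallBackScnStep.
SELECT GROUP#, SEQUENCE#, FIRST_CHANGE#, NEXT_CHANGE#, STATUS
  FROM V$LOG
 ORDER BY FIRST_CHANGE#;
```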
Tips: To modify the general parameters, see General Parameters and Functions.
Prerequisites
Prerequisite | Description |
---|---|
Permissions for Account | |
Port Preparation | Allow the migration and sync node (Worker) to connect to the Oracle port (e.g., 1521). |
Parameters
Parameter | Description |
---|---|
keyConflictStrategy | Strategy for handling primary key conflicts during write in an Incremental DataTask: IGNORE (skip the conflicting write) or REPLACE (replace the entire row). |
dstWholeReplace | Convert INSERT and UPDATE operations into full row replacement in the Target. |
writeStrategy | Strategy of writing data to the Target. |
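For intuition, the REPLACE option of keyConflictStrategy behaves like an upsert: on a primary key conflict, the whole target row is overwritten. On an Oracle target this is roughly equivalent to a MERGE statement (table and column names are illustrative, not BladePipe's actual implementation):

```sql
-- Sketch of REPLACE semantics as an upsert; :id and :amount are bind
-- variables carrying the incoming row.
MERGE INTO orders t
USING (SELECT :id AS id, :amount AS amount FROM dual) s
   ON (t.id = s.id)
 WHEN MATCHED THEN UPDATE SET t.amount = s.amount
 WHEN NOT MATCHED THEN INSERT (id, amount) VALUES (s.id, s.amount);
```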
Tips: To modify the general parameters, see General Parameters and Functions.