1. Tables with Primary Keys (PKs) of type TimeStamp, and tables of type FileTable, are seeded but not tracked for changes in real time. The workaround for those tables is to add them to a reseeding schedule (a seamless, automatic process, but not a real-time solution). TimeStamp column values are not replicated; instead, SQL Server generates a new unique value on the replica table upon insert/update. (Note: a TimeStamp column does not contain a value directly related to time; it is simply a number that is updated whenever any value in that record changes. TimeStamp values are guaranteed to be unique within a database.)
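As a minimal T-SQL sketch (the table and column names are hypothetical), the behavior of a TimeStamp (rowversion) column can be observed as follows: the value is assigned by SQL Server and changes on every update, which is why it cannot simply be copied to a replica.

```sql
-- Hypothetical example: rowversion (the current name for the timestamp type)
-- values are generated by SQL Server and regenerate on the replica.
CREATE TABLE dbo.Orders (
    OrderId INT IDENTITY PRIMARY KEY,
    Status  VARCHAR(20),
    RowVer  ROWVERSION   -- auto-assigned; unique within the database
);

INSERT INTO dbo.Orders (Status) VALUES ('new');
SELECT RowVer FROM dbo.Orders WHERE OrderId = 1;  -- some generated value

UPDATE dbo.Orders SET Status = 'shipped' WHERE OrderId = 1;
SELECT RowVer FROM dbo.Orders WHERE OrderId = 1;  -- a new, higher value
```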
2. Real-time replication of tables without a Primary Key is supported, but requires selecting CDC (Change Data Capture) under the [Advanced] options in the UI during initial job configuration (specify CDC in the API call parameters if creating new jobs over the API). See prerequisites for more information.
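CDC here is SQL Server's native Change Data Capture feature, which is enabled with documented system procedures. A hedged sketch (the database and table names are hypothetical; consult the product prerequisites for the exact requirements):

```sql
-- Enable CDC at the database level (requires appropriate permissions;
-- SQL Server Agent must be running for the capture jobs).
USE MyDatabase;
EXEC sys.sp_cdc_enable_db;

-- Enable CDC on a table that has no primary key (hypothetical table name).
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'EventLog',
    @role_name     = NULL;   -- no gating role

-- Verify CDC status.
SELECT name, is_cdc_enabled FROM sys.databases WHERE name = 'MyDatabase';
SELECT name, is_tracked_by_cdc FROM sys.tables;
```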
The easiest way to determine whether any of the above limitations apply is to launch a trial CloudBasix instance and initiate one or more test replications.
Alternatively, manually execute the Limitations Assessment Scripts against each database and, if necessary, reach out to Support for help with interpreting the results.
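As a rough, unofficial sketch (these are not the product's Limitations Assessment Scripts), the standard catalog views can help spot tables affected by limitations #1 and #2:

```sql
-- Tables without a primary key (candidates for CDC-based tracking).
SELECT s.name AS schema_name, t.name AS table_name
FROM sys.tables t
JOIN sys.schemas s ON s.schema_id = t.schema_id
WHERE NOT EXISTS (
    SELECT 1 FROM sys.key_constraints kc
    WHERE kc.parent_object_id = t.object_id AND kc.type = 'PK'
);

-- Tables whose primary key includes a rowversion/timestamp column.
SELECT s.name AS schema_name, t.name AS table_name, c.name AS pk_column
FROM sys.tables t
JOIN sys.schemas s        ON s.schema_id = t.schema_id
JOIN sys.indexes i        ON i.object_id = t.object_id AND i.is_primary_key = 1
JOIN sys.index_columns ic ON ic.object_id = i.object_id AND ic.index_id = i.index_id
JOIN sys.columns c        ON c.object_id = ic.object_id AND c.column_id = ic.column_id
JOIN sys.types ty         ON ty.user_type_id = c.user_type_id
WHERE ty.name = 'timestamp';
```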
3. Truncating a table is supported when the replication job is configured with CDC (Change Data Capture), but integration with the CloudBasix API is required. Contact Support for more information.
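For background on why this needs special handling: SQL Server itself refuses to truncate a CDC-enabled table. For a hypothetical CDC-enabled table, the attempt fails with an error along these lines:

```sql
-- dbo.EventLog is hypothetical and assumed to be CDC-enabled.
TRUNCATE TABLE dbo.EventLog;
-- Fails with Msg 4711:
-- "Cannot truncate table 'dbo.EventLog' because it is published for
--  replication or enabled for Change Data Capture."
```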
4. Windows/Active Directory logins (user accounts) are not replicated.
5. MSSQL Server-level logins/users (along with roles) are replicated; however, for security reasons, random passwords are assigned to them. Passwords need to be manually reset on the replica SQL Server after the initial database replication seeding completes. Password changes are also not automatically replicated as part of the schema replication process.
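A minimal sketch of the manual reset on the replica (the login name and password are hypothetical placeholders):

```sql
-- Run against the replica SQL Server after the initial seeding completes,
-- once per replicated SQL login that was assigned a random password.
ALTER LOGIN [app_user] WITH PASSWORD = N'StrongP@ssw0rd-Here';
```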
Justification: CloudBasix is designed to be lightweight and to handle continuous MSSQL Server replication over TCP/IP in cross-region scenarios (over VPN, or with data encrypted in transit via SSL connections), cross-AWS-account scenarios (data/DR artifacts are stored in multiple AWS accounts, so an account-level security breach would not affect business continuity), on-premise to AWS RDS or EC2 MSSQL Server scenarios, and even InterCloud replication scenarios.
6. By design, for security reasons (the default use case is DR), rare complex cases of multiple dropped or renamed table columns might not be automatically replicated as part of the continuous schema replication process (user approval is required if the case is determined to be ambiguous). Complex DB schema changes are usually part of planned releases, and the semi-automatic handling of those rare ambiguous schema changes is performed as part of the planned release rollout. When such a case of multiple dropped or renamed columns occurs, the following "ambiguous schema change detected" error notification will be generated (and reported in the runtime logs for each change tracking process run):
Following the link in the emailed notification opens the respective error logs screen, where an [Apply Schema Changes] button ([Guide Me] in versions 12.82 and earlier) will be enabled:
The following will be reported in runtime logs:
>>ERROR: Ambiguous schema change detected. A column(s) has been renamed or dropped in table(s): [dbo].[Managers].CompanyName2, [dbo].[Managers].CompanyName3 ? To resume change tracking, apply the proper schema change(s):
(1) Click the [Apply Schema Changes] button and follow the instructions (note: if you do not see such a button, the changes have already been applied), or
(2) Execute the scripts below manually, properly adjusted to account for the actual schema changes:
If column1 was dropped, execute:
alter table tablename drop column column1;
(Applicable to Redshift replications only) On the Redshift side, execute 2 statements:
(1) alter table tablename drop column column1
(2) alter table tablename_stage drop column column1
If column1 was renamed to column2, execute:
EXEC sp_rename N'[tablename].[column1]', N'column2', 'OBJECT';
(Applicable to Redshift replications only) On the Redshift side, execute 2 statements:
(1) alter table tablename rename column column1 to column2
(2) alter table tablename_stage rename column column1 to column2
When you click the [Apply Schema Changes] button ([Guide Me] in versions 12.82 and earlier), you will land on an interactive wizard that allows you to drop and/or rename columns without the need to manually execute ALTER queries against the staging data store and, if applicable to the replication type, against Redshift:
1.1 Selected columns to be dropped:
2.1 Map columns to be renamed (one pair at a time, if more than one column is renamed):
2.2 Renamed column confirmation:
7. Continuous replication of Temporal Tables is fully supported in the latest versions. Note: history tables are not required to have Primary Keys (PKs). They are replicated during the initial seeding (PKs are not required at that time), but they are not tracked for changes thereafter. Instead, the replica SQL Server inserts new data into the history tables as changed values are tracked and inserted into the respective replica temporal tables.
If a temporal table field value is changed multiple times within a very short period, and the respective change tracking process run spans a longer period than the timeframe during which those multiple changes occurred, the last registered value change will be recorded, but the prior value changes might not be registered in the respective replica history table. Reseeding the temporal table fixes this, as it reseeds the history table as well (reseeding of a table can be initiated under [Replication]\[Analyze]). Scheduled reseeding of temporal tables, in addition to change tracking, will be supported in the next product version.
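As a hedged illustration (the table and column names are hypothetical), the standard temporal query syntax can be used to compare row versions on the primary and the replica and spot missing intermediate versions in the replica history:

```sql
-- All versions of a row (current + historical); run against both the
-- primary and the replica and compare. dbo.Employees is hypothetical.
SELECT EmployeeId, Salary, ValidFrom, ValidTo
FROM dbo.Employees
FOR SYSTEM_TIME ALL
WHERE EmployeeId = 42
ORDER BY ValidFrom;
```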
See option #2 here: https://cloudbasic.net/documentation/configure-rds-sqlserver-alwayson/sql-server-to-sql-server-replication/
A reference in documentation which will help you to get started with RDS backup into S3 and restore from S3: https://cloudbasic.net/documentation/rds-sql-server-backup-restore-s3/
With the fully-automated replication option (see option #1 here: https://cloudbasic.net/documentation/configure-rds-sqlserver-alwayson/sql-server-to-sql-server-replication/), the encrypted objects will be skipped. You can add the scripts to the Pre-Seeding Action Script sections of the [Advanced] settings (during initial replication configuration), or execute those scripts manually against the replica after the seeding completes.
For security reasons, encrypted objects will be excluded from ongoing DDL change tracking.
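To find which objects will be skipped, the catalog views can be queried; a rough sketch (not a product-supplied script):

```sql
-- Modules created WITH ENCRYPTION have a NULL definition in sys.sql_modules;
-- these are the objects skipped by automated seeding and excluded from
-- ongoing DDL change tracking.
SELECT SCHEMA_NAME(o.schema_id) AS schema_name,
       o.name,
       o.type_desc
FROM sys.sql_modules m
JOIN sys.objects o ON o.object_id = m.object_id
WHERE m.definition IS NULL;
```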
Conversion of the job tracking method for existing change tracking jobs, without data loss or the need to rebuild the jobs, is supported. Contact Support for more information.