JDBC bulkdataload serverless dataflows stuck in deleting state
Serverless Dataflows Occasionally Stuck in Deleting State
Affected Versions: 2.7, 3.0, 3.0.3, 3.0.6
Fix Version: 3.1
This report covers two related issues affecting JDBC bulkdataload dataflows:
Dataflow auto-retry mechanism is not working as expected
Affected Versions: 3.0, 3.0.3, 3.0.6
Fix Version: 3.1
Files with identical names from different folders may not upload correctly, causing some of your data to be lost during S3 ingestion.
Affected Versions: 2.7, 3.0
Fix Version: 3.1
When you upload files with the same name from different folders in your S3 bucket, Amorphic doesn't always process all of them correctly.
For example, if your S3 bucket has these files:
- reports/2025-01-01/daily_sales.csv
- reports/2025-01-02/daily_sales.csv
- reports/2025-01-03/daily_sales.csv

Amorphic sees them all as just daily_sales.csv and treats them as the same file. This means only the last file gets processed successfully; the others get overwritten and lost.
This happens because the system focuses on the filename only and ignores which folder each file came from. When multiple files are processed at the same time, they conflict with each other and some data gets lost.
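The failure mode described above can be sketched in a few lines. This is an illustration only, assuming ingestion stages files keyed by basename alone; the real Amorphic internals are not shown here, and both function names are hypothetical:

```python
import os

def stage_by_basename(s3_keys):
    """Simulate ingestion that keys staged files by basename only.
    Later files silently overwrite earlier ones sharing a name."""
    staged = {}
    for key in s3_keys:
        staged[os.path.basename(key)] = key  # folder part is discarded
    return staged

def stage_by_full_key(s3_keys):
    """Keying by the full object key keeps every file distinct."""
    return {key: key for key in s3_keys}

keys = [
    "reports/2025-01-01/daily_sales.csv",
    "reports/2025-01-02/daily_sales.csv",
    "reports/2025-01-03/daily_sales.csv",
]

print(len(stage_by_basename(keys)))  # 1 -- two files were lost
print(len(stage_by_full_key(keys)))  # 3 -- all files survive
```

Keying by the full object key, as in the second function, is essentially what the 3.1 fix guarantees: files in different folders can never collide.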
A fix is available in Amorphic version 3.1 that will ensure all your files are processed correctly, even when they have the same name.
What you can do right now:
- Rename files so each has a unique filename (for example, daily_sales_2025-01-01.csv).

What's available in version 3.1:
- All files are processed correctly, even when they share the same name across different folders.
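One way to apply the rename approach (unique filenames like daily_sales_2025-01-01.csv) is to fold the parent folder into the filename before upload. A minimal sketch, assuming keys follow the reports/&lt;date&gt;/&lt;name&gt; layout used in the example; the helper name is hypothetical:

```python
import os

def unique_name(s3_key):
    """Build a collision-free filename by appending the parent
    folder name (here, the date) to the base filename."""
    folder, filename = os.path.split(s3_key)
    stem, ext = os.path.splitext(filename)
    suffix = os.path.basename(folder)  # e.g. "2025-01-01"
    return f"{stem}_{suffix}{ext}"

print(unique_name("reports/2025-01-01/daily_sales.csv"))
# daily_sales_2025-01-01.csv
```

Any renaming scheme works, as long as the resulting basenames are unique across all folders you ingest from.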
Parquet file uploads fail when you change column data types in Redshift datasets.
Affected Versions: 2.7, 3.0, 3.0.3, 3.0.6
Fix Version: 3.1
Datalab updates fail after custom R image changes because the backend image for the required R configurations is corrupted
Affected Versions: 3.0, 3.0.3, 3.0.6
Fix Version: 3.1