Version 3.2

Amorphic 3.2 delivers a comprehensive set of new features, enhancements, and bug fixes focused on advancing data cataloging, geospatial analytics, AI-powered data discovery, and platform flexibility. This release introduces 6 major new features—including SAML group-to-tag access mapping, native ArcGIS Online integration, advanced relationship graph visualizations for ArcGIS, and full geospatial data support—accompanied by over 20 enhancements such as improved dataflow validation, enhanced code template management, more flexible OAuth2 datasource authentication, and upgrades to AI services. Additional improvements strengthen auditability, execution monitoring, user experience, and integration with both spatial and external data systems. Overall, version 3.2 brings greater automation, connectivity, and visibility, enabling users to manage, discover, and secure their data assets more intuitively and efficiently across the Amorphic platform.

Features (06)

  • [CLOUD-6377] - Introduction of SAML group-to-tag access mapping – Users logging in via Single Sign-On (SSO), such as through Entra ID (Azure AD) or any other SAML-supported provider, will now have Amorphic tag access automatically assigned based on their SSO group membership, provided a corresponding SAML tag mapping exists. These mappings can be configured on the admin/tags page of Amorphic.

  • [CLOUD-6228] - AI Driven Search in Catalog - This feature introduces AI-powered contextual search within Catalog, enabling users to find relevant data based on meaning rather than exact keywords by understanding the intent behind natural language queries. This feature is available only in AI-enabled environments, and users must also enable Catalog in the Manage AI Services section under AI Space. Users can choose Semantic Search directly from the search bar while running a query or set it as the default option in the System Settings section of the Administration panel under Application Management.

  • [CLOUD-6212] - Introducing Native ArcGIS Online Datasource Integration – Amorphic now supports ArcGIS Online as a native datasource, enabling metadata cataloging and optional data ingestion for ArcGIS items such as Dashboards, Web Maps, Web Experiences, and Feature Services. The integration includes API Key and OAuth2 authentication, supports multiple target destinations (S3, Redshift, Lake Formation, etc.), and provides schema detection for spatial data along with batch processing and governed access for analytics.

  • [CLOUD-6140] - Capture and Visualize Relationships between ArcGIS Catalog Assets - Introduced the ability to capture and visualize relationships between ArcGIS catalog assets within Amorphic. A new Dependency Graph tab has been added under Catalog for ArcGIS assets, allowing users to explore how assets such as Hub Site Applications, Hub Pages, Web Maps, Web Experiences, Feature Services and Dashboards are interconnected. Users can select an ArcGIS catalog asset to view its relationships through an interactive graphical interface. The depth parameter enables control over the number of levels displayed in the graph. This enhancement expands the ArcGIS Catalog beyond simple item listing to include a comprehensive visualization of dependencies between resources.

  • [CLOUD-6124] - Support for GeoSpatial Data Storage – This feature introduces full geospatial data capabilities in Amorphic, allowing users to create spatial datasets from ArcGIS Online, SQL Server, or direct file uploads. Users can run spatial queries for advanced analysis, visualize results on interactive maps, and search or preview spatial datasets directly within the data catalog.

  • [TRACE][CLOUD-5413] - Amorphic Observability Solution Trace Implementation - Trace provides comprehensive usage observability across Amorphic resources, enabling visibility into which users are creating or utilizing specific assets. It also includes robust compliance tracking and reporting features, supporting multiple industry standards such as HIPAA, NIST, CIS, and AWS Foundational Security Best Practices.

Enhancements (23)

  • [CLOUD-6450] - Execution History Support for Advanced Dataload Dataflows (Full Load) – Advanced Dataload now includes execution history tracking for Full Load dataflows, providing improved visibility into past runs. Users can view key details such as start and end times, record counts, errors, and overall status, enabling better monitoring, troubleshooting, and validation of data migrations.

  • [CLOUD-6359] - Enhancements to Code Templates - Replaced the previous iteration of code templates with more reusable and generalized scripts that restore their intended purpose as adaptable, cross-functional resources. Templates now clearly demonstrate their use cases and can be easily adapted across different ETL jobs and users.

  • [CLOUD-6351] - Enhancements to Bulk Load Dataflows: Spatial Data Support for SQL Server – Bulk-load dataflows using SQL Server sources now support ingestion of spatial datasets into S3 Athena and Redshift. Users can enable this by selecting Spatial Dataset during dataset configuration in dataflows and specifying the spatial and CRS columns. This enhancement ensures that spatial data is correctly stored, queried, and visualized in the target systems.

  • [CLOUD-6343] - Enhanced Dataset Selection Validation in Dataflows – Code changes have been implemented to ensure that the same dataset name cannot be selected more than once within a single Dataflow. This enhancement applies to both JDBC Bulk Load and JDBC Advanced Load sources, preventing dataset name conflicts during task registration and avoiding scenarios where the dataflow could remain stuck in a running state.

  • [CLOUD-6316] - Improvements to AI Services: Automated Model Syncing, OpenAI Support, and Dataset Ingestion Support for Chat Sessions -

    • Introduced an automatic sync for AI models on a recurring 15-day schedule to ensure the latest models are consistently available. Additionally, introduced support for OpenAI model invocation through Bedrock.
    • Introduced a new API endpoint that enables ingestion of previously uploaded files from chat sessions into existing datasets, allowing seamless integration and reuse of existing data within the platform. [API-Only]
  • [CLOUD-6274] - Enhanced OAuth2 Authentication for External API Datasources – The platform now supports creating datasources using OAuth2 where ClientAuthentication can be passed in the request body, in addition to the previously supported header-based method. This enhancement provides greater flexibility when configuring external API connections/datasources.
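
The two supported options correspond to the standard OAuth2 client-authentication methods: credentials sent in an HTTP Basic Authorization header versus credentials sent in the token request body. The sketch below illustrates that general distinction with Python's requests library; the token URL and credentials are placeholders, and the exact field names in the Amorphic datasource configuration may differ.

```python
# Illustrative only: the generic OAuth2 client-credentials exchange that the two
# Amorphic options map to. Endpoint URL, client ID, and secret are placeholders.
import requests

TOKEN_URL = "https://auth.example.com/oauth2/token"  # hypothetical provider endpoint
CLIENT_ID = "my-client-id"
CLIENT_SECRET = "my-client-secret"

# Option 1: client authentication in the request header (HTTP Basic).
resp_header = requests.post(
    TOKEN_URL,
    data={"grant_type": "client_credentials"},
    auth=(CLIENT_ID, CLIENT_SECRET),  # sent as an Authorization header
)

# Option 2: client authentication in the request body, now also supported when
# configuring Amorphic External API datasources.
resp_body = requests.post(
    TOKEN_URL,
    data={
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    },
)

print(resp_header.status_code, resp_body.status_code)
```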

  • [CLOUD-6272] - Improved File Extension Handling for External API Datasources dataflows – Files ingested from External API datasources that were previously assigned the generic .others extension will now default to the .json extension when the dataset file type is set to Others. This update improves usability by providing a more meaningful and recognizable file format, making file management and previewing easier and more intuitive.

  • [CLOUD-6226,6281] - SQL AI Improvements: Workbooks, Spatial Query Support, End-to-End Query Execution, Execution Controls, and Smarter Error Recovery - A new workbook component has been introduced to persist chat sessions with SQL AI, providing a structured and reusable way to manage interactions. SQL AI now supports spatial data queries, enabling advanced location-based and geospatial analyses directly within chat workflows. The component also offers full query execution support, allowing users to run and manage SQL queries with AI assistance. Additionally, enhanced error handling has been added: when a query fails, the AI will prompt the user to retry, and if approved, it will regenerate the query using the context of the encountered error. A new system configuration, SQL AI Auto Run Configuration, controls whether natural-language queries in workbooks execute automatically. When enabled, queries run instantly for a smoother workflow; when disabled, users gain greater control by choosing when queries should execute.

  • [CLOUD-6224] - Improved Agents Support and System Agent Creation in AWS Regions with Cross-Region Inference - Fixed an issue where System Agent creation could fail due to the unavailability of Bedrock models in certain Regions. The system now gracefully handles such cases and provides clear instructions for verifying SCP restrictions and recreating System Agents from the UI.

  • [CLOUD-6215] - Added schedule and global-flag support for Bulk Dataload instances/entities - JDBC Bulk Load datasources now support scheduled entity creation, along with the ability to create global entities and convert existing entities into global entities. This enhancement enables entities to be shared and reused across multiple datasources.

  • [CLOUD-6214] - Shared Kafka Cluster Support for Scheduled Advanced Dataloads – JDBC Advanced Dataload datasources now support the use of shared Kafka clusters with scheduled execution. Clusters can be pre-created from the Entities page and configured as either global (usable across all datasources) or datasource-specific (restricted to a single datasource), enabling more efficient resource reuse and easier scheduling management.

  • [CLOUD-6205] - Added pagination support for the data-quality-check runs list call - Introduced pagination, filtering, and attribute projection for listing data-quality-check runs within datasets. Users can now project specific attributes such as startTime, endTime, status, message, and actions, and sort the results based on these fields. These enhancements improve usability and performance when managing large sets of data-quality-check runs.
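
A hypothetical sketch of how such a list call could be invoked is shown below; the endpoint path, header, and query-parameter names are assumptions for illustration only, while the projected attributes (startTime, endTime, status, message, actions) come from the release note above. Consult the Amorphic API reference for the exact contract.

```python
# Hypothetical sketch: listing data-quality-check runs with pagination,
# projection, and sorting. Endpoint path and parameter names are assumed.
import requests

AMORPHIC_API = "https://<amorphic-host>/api"          # placeholder base URL
HEADERS = {"Authorization": "Bearer <access-token>"}  # placeholder auth

params = {
    "offset": 0,
    "limit": 25,  # page size
    "projection": "startTime,endTime,status,message,actions",
    "sortBy": "startTime",
    "sortOrder": "desc",
}

resp = requests.get(
    f"{AMORPHIC_API}/datasets/<dataset-id>/data-quality-checks/runs",  # assumed path
    headers=HEADERS,
    params=params,
)
print(resp.json())
```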

  • [CLOUD-6201] - Chat Management Enhancement: Smarter Chat Titles for Faster Navigation and Better Organization - Enhanced chat management with automatic title generation based on conversation content, reducing manual effort and improving overall readability. Chat listings are now sorted by last modified time, making it easier for users to quickly locate and access recent discussions. These improvements streamline navigation and create a more organized chat experience.

  • [CLOUD-6180] - Resource statistics collection for AI core components - Resource statistics for AI core components are now included under Overview on the Home page. The components included are AI Agents, Knowledge Bases, and Guardrails. Users can now see these AI core component resources in a graphical format along with other resources.

  • [CLOUD-6143] - Improvements in Data Glossaries - Glossaries in Amorphic have been enhanced with the following improvements:

    • Bi-directional Linking: Previously, glossary terms could only be linked to dataset columns through the catalog’s schema section. This enhancement allows users to attach glossary terms directly from the dataset interface as well, making the linking process bi-directional and more intuitive.
    • Importing Glossaries and Terms: Users can now import glossaries and terms into Amorphic using structured JSON files, enabling faster and easier setup of the semantic layer within the application (an illustrative file sketch follows this list).
    • User Notifications: Support for user activity notifications has been extended to include glossary-related actions. Users will now receive alerts for glossary updates and linking operations.
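
Below is a minimal, hypothetical sketch of what a glossary import file could look like; the field names are illustrative assumptions only, and the authoritative JSON schema is defined in the Amorphic documentation.

```python
# Hypothetical illustration of a glossary import payload. Field names are
# assumptions for illustration; consult the product docs for the real schema.
import json

glossary_import = {
    "GlossaryName": "Customer Data",
    "Description": "Business terms related to customer records",
    "Terms": [
        {"TermName": "Customer ID", "Definition": "Unique identifier assigned to each customer"},
        {"TermName": "Churn", "Definition": "A customer who has stopped using the service"},
    ],
}

# Write the structure to a file that can be uploaded through the import option.
with open("customer_glossary.json", "w") as fh:
    json.dump(glossary_import, fh, indent=2)
```
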
  • [CLOUD-6136] - Support for custom actions on catalog search results - Enabled users to perform custom data actions on datasets they have access to. These actions can be used across tools like Playground, Amorphic BI App, Tableau, and Power BI using SQL queries. Added guidance on how datasets can be connected and utilized within these tools for enhanced analytical flexibility.

  • [CLOUD-6135] - Support for markdown format for all description metadata fields across Datasets and Datasources - Added Markdown support for datasets and datasources resource descriptions, allowing rich text formatting while maintaining compatibility with existing plain text and search functionality in catalog. This enhancement will allow users to create more expressive and well-structured descriptions in datasets and datasources to improve readability and context.

  • [CLOUD-6127] - Resource Sync Support for HCLS Components – Added support to automatically synchronize HCLS resources between Amorphic and AWS. Any HCLS resource created or deleted directly from the AWS console (with the appropriate tags applied) will now be reflected in Amorphic, ensuring consistent state and reducing manual reconciliation.

  • [CLOUD-6105] - Advanced Dataload dataflows for target location S3Athena now support the JSON data format - Advanced Dataload now fully supports JSON data for S3Athena targets, improving data ingestion for dataflows. This enhancement streamlines integrating JSON data into S3Athena, allowing direct querying and analysis of semi-structured data while maintaining Advanced Dataload's existing framework, data integrity, and performance.

  • [CLOUD-6057] - Support to Create Datasets (both views and regular datasets) from Query results - With this enhancement, users can now use results from queries run in the Playground and upload the results as a file into an existing dataset or create a new dataset — either internal or view type — based on the query output, with S3 Athena or Redshift as the target location. This streamlines data reuse and helps create datasets faster by reducing the time spent on dataset registration and file upload.

  • [CLOUD-6032] - Data profiling for Hudi and Delta Lake datasets - With this enhancement, Hudi and Delta Lake datasets now support data profiling — a feature that was previously unavailable for these dataset types.

  • [CLOUD-6030] - Improved Handling to Prevent Files from Getting Stuck in Processing Status – The Reload dataset file processing flow has been updated to ensure that files no longer remain indefinitely in the Processing state when an internal failure occurs. The system now detects such errors and automatically transitions the file to a stable state, preventing stuck executions and removing the need for manual repair.

  • [CLOUD-6027] - Enhanced Data Validation with Consolidated Error Reporting – The data validation process now reports all detected issues in a single response instead of stopping at the first failure. Errors such as missing values, invalid data types, and column-level mismatches are grouped together, allowing users to review and resolve all problems in one upload attempt, reducing back-and-forth correction cycles.

Bug Fixes (04)

  • [CLOUD-6287] - User Removal Failure from Access Tags – Resolved an issue where a user could not be removed from a tag if they had access to an Insights dashboard both through the tag and individually via direct dashboard assignment. The removal process now works as expected in all cases.

  • [CLOUD-6279] - Fixed Incorrect Partial Failure Status for Successful Dataflows – Resolved an issue where the status of a dataflow was incorrectly marked as partial failure even when all tables were successfully ingested. The backend logic has been corrected to ensure the proper status is now reported.

  • [CLOUD-6273] - RBAC Validation failure for HCLS Omics Resources – Resolved an issue where RBAC permissions were not being enforced during the creation of Health Omics resources, resulting in resources being created every time regardless of access rights. RBAC validation is now correctly applied before resource creation.

  • [CLOUD-6181] - Resolved Bedrock Access Failure for Pre-AI Datalabs and ETL Jobs in AI-Enabled Environments - This fix ensures that all ETL Jobs and Datalabs created in AI enabled Amorphic environments are automatically updated with the required Bedrock permissions without the need for any manual steps, ensuring seamless access to Bedrock models and services from within the users’ analytic workloads.

API Only Features (04)

  • [CLOUD-6358] - Dataset Data Consumption API with Advanced Filtering & Pagination – A new API has been introduced that allows users to retrieve dataset records directly, without requiring a download step or additional processing. The API supports advanced query filtering with logical operators (AND, OR, NOT), range-based conditions, and column-level filtering, enabling precise and efficient data retrieval. Pagination is built in for large datasets, and the response format is standardized with clearer error handling for smoother integration.
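
A hedged sketch of how such a call might look is shown below. The endpoint path, headers, and payload field names are assumptions for illustration; only the described capabilities (logical operators, range conditions, column-level filtering, and pagination) come from the release note.

```python
# Hypothetical sketch of consuming the new dataset data API. Path, headers, and
# payload field names are assumed; adapt to the published API contract.
import requests

AMORPHIC_API = "https://<amorphic-host>/api"          # placeholder base URL
HEADERS = {"Authorization": "Bearer <access-token>"}  # placeholder auth

payload = {
    "Columns": ["customer_id", "region", "order_total"],  # column-level filtering
    "Filter": {                                           # logical + range conditions
        "AND": [
            {"Column": "region", "Operator": "eq", "Value": "EMEA"},
            {"Column": "order_total", "Operator": "between", "Value": [100, 500]},
        ]
    },
    "Limit": 100,  # pagination controls
    "Offset": 0,
}

resp = requests.post(
    f"{AMORPHIC_API}/datasets/<dataset-id>/data",  # assumed path
    headers=HEADERS,
    json=payload,
)
resp.raise_for_status()
print(resp.json())
```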

  • [CLOUD-6261] - Enhancements in Data Profiling in Amorphic - Data profiling has been enhanced by separating auto-generated AI suggestions (e.g., PII detection, data classification) from the core data profiling operation (e.g., min/max, missing values), and by adding the ability to trigger data profiling for a particular dataset from the dataset side, making the process more intuitive.

  • [CLOUD-6167] - Enhancements to Guardrails APIs - Introduced a dedicated API to retrieve the default guardrail (GET /ai/guard-rails/default) and updated the listing API to support fetching component-specific guardrails (GET /ai/guard-rails?component=component-name). This update improves flexibility in managing and retrieving guardrail configurations.
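
A minimal sketch of calling these endpoints is shown below; the base URL, authentication header, and the example component name are placeholders for your Amorphic deployment.

```python
# Minimal sketch using the guardrail endpoints named in this release note.
import requests

BASE_URL = "https://<amorphic-host>/api"              # placeholder
HEADERS = {"Authorization": "Bearer <access-token>"}  # placeholder

# Retrieve the default guardrail.
default_gr = requests.get(f"{BASE_URL}/ai/guard-rails/default", headers=HEADERS)
print(default_gr.json())

# List guardrails scoped to a specific component (component name is illustrative).
component_grs = requests.get(
    f"{BASE_URL}/ai/guard-rails",
    headers=HEADERS,
    params={"component": "chat"},
)
print(component_grs.json())
```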

  • [CLOUD-6134] - Introduction of Bedrock Flows in Amorphic - This feature introduces Bedrock Flows as a new type of data pipeline within the application, enabling orchestration of AI-driven workflows. Bedrock Flows allow users to design and execute generative AI pipelines alongside existing data workflows for advanced automation and intelligence. Native node support for features such as Knowledge Base nodes and Lambda nodes is available in this new form of data pipeline, along with utility nodes such as storage, collector, and iterator nodes.

UI Features/Enhancements (03)

  • [UI-1890] - Enhanced Catalog Interface - The catalog interface has been updated for improved discovery and search. Updates include a streamlined search experience with keyword and semantic search, an enhanced filter section with organized metrics and clearer controls, improved asset details navigation with smooth animations and a new full-page details screen, better handling of search parameters and URL state management, optimized asset type tabs for easier switching, and an improved empty state with clearer guidance for new users. The redesign also adds support for new ArcGIS datasource asset types (Web Experience, Web Map, Feature Service, Hub Site Application, and Hub Page), an interactive dependency graph visualization for ArcGIS assets that shows lineage relationships, and an enhanced details screen with full-page layout and improved information architecture.

  • [UI-1921] - Improvements to main menu navigation - Navigation has been redesigned with a new services sidebar that organizes services by new categories, keyboard shortcuts (press 'k' to open search), a search overlay for quick access across services, improved mobile menu with accordion-style navigation and better accessibility, category-based organization for faster navigation, and enhanced focus management and keyboard navigation throughout the menu system.

  • [UI-1951] - Improved Governance with Default and Component Guardrails - Introduced an organization-level default guardrail to ensure consistent safety and content filtering across all components. Admins can now easily set a default guardrail from the guardrail details page, reducing manual configuration effort. Component-specific guardrail assignments provide tighter governance, improving compliance and creating a more streamlined user experience.

Cross-Account-Role Updates (01)

  • [CLOUD-6526] - Cross Account Role Permission Changes v3.2 - Updated the cross-account role with conditions on various services for fine-grained access control, added the ability to tag and untag AWS resources for improved management, and removed obsolete permissions.

Known Issues (06)

  • [CLOUD-6533] - Occasional failures may be experienced during Redshift dataflow creation - In rare cases, dataflow creation with Redshift as the target (when Create Dataset = True) may fail due to a temporary network issue during table creation. The dataflow may appear in a registration-failed state. Retrying the same creation usually succeeds on the next attempt.

  • [CLOUD-6528] - Automated File Processing Fails for Reload Datasets Ingested from S3 via Scheduled Runs – When files are ingested into a Reload-type dataset through a scheduled execution from an S3 datasource, they may remain stuck in the Pending state unless the ingestion is triggered manually. This is due to a defect where the process does not detect the end-of-transfer marker from the S3 connector, preventing automatic processing from starting.

  • [CLOUD-6525] - No Results Displayed When Filtering Agents by Status on the Listing Page: The user interface fails to display any results when attempting to filter agents by their status on the agent listing page. This issue may occur when agents are filtered by statuses such as "READY" or "FAILED." While the filter functionality is active, it does not return any matching results, which could potentially confuse users.

  • [CLOUD-6507] - Cost tag activation status update and budget actions are skipped when the tag key contains an underscore (_) character - Cost tag activation status and related budget actions may not update correctly when the tag key contains an underscore (_) character.

  • [CLOUD-6523] - Dataflow for ArcGIS datasources remains stuck in “Running” state – The Glue job used for ArcGIS ingestion requires ENIs, and each ENI consumes an IP from the subnet. In some cases, AWS does not automatically clean up unused ENIs left behind from previous Glue jobs or connections, which results in IPs not being released. Over time, the subnet may run out of available IPs, causing new Glue jobs to fail. When this happens, the ArcGIS ingestion job fails silently, leaving the dataflow execution stuck in a running state.

  • [CLOUD-6543] - SQL AI Sync Job Execution Failure Due to Missing Tenant Parameters - This issue occurs when tenant-related SSM parameters are deleted during simultaneous enable/disable operations of SQL AI within the "Manage AI Services" interface, leading to execution failures of SQL AI sync jobs due to missing parameters.

User Actions/Notice (02)

  • Omics Analytics is being deprecated starting with v3.3. Users can continue to use existing Omics Annotation and Variant Stores created in earlier versions, but new Stores cannot be created in v3.3 and later. Before upgrading to v3.3, users must delete all existing Omics Annotation and Variant Stores. For more details, refer to the AWS Documentation.

  • Newer Amazon Bedrock models, such as Claude 3.7 (and subsequent versions), leverage cross-Region inference. This means that an inference request initiated from your Amorphic region will be dynamically routed to one of several predefined destination regions. If even one destination region is restricted by a Service Control Policy (SCP), the inference request fails entirely, even if other regions are permitted. All AI components of Amorphic that rely on these models will fail if SCPs restrict any of the regions required for cross-Region inference. To prevent this, users must ensure that the SCPs set at their organization allow Bedrock API actions in every Region where the target models are hosted. For more details, refer to the AWS Documentation.
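
As an illustration only, the sketch below shows one common SCP pattern (expressed here as a Python dict) that exempts Bedrock actions from a Region restriction; the Region list and statement structure are examples and must be adapted to your organization's policies and to the destination Regions used by your inference profiles.

```python
# Illustrative only: an SCP-style region-restriction statement that keeps Bedrock
# invocation working for cross-Region inference. The approved-Region list is an
# example; alternatively, list every inference-profile destination Region there
# instead of exempting Bedrock via NotAction.
import json

scp_statement = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOutsideApprovedRegions",
            "Effect": "Deny",
            "NotAction": [
                "bedrock:*"  # exempt Bedrock so cross-Region inference is not blocked
            ],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {"aws:RequestedRegion": ["us-east-1", "us-west-2"]}
            },
        }
    ],
}

print(json.dumps(scp_statement, indent=2))
```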

Amorphic CICD (01)

  • [CLOUD-6329] - Extend Amorphic CICD with Guardrail support - Amorphic now supports Guardrails for use with agents in the application, and Amorphic CICD now supports creating guardrails using templates.

Version 3.1

Amorphic 3.1 advances AI governance and usability with centralized AI model management, guardrails, and a new Chats experience. This release introduces 15 major features, 51 enhancements, and 26 bug fixes. Focus areas include AI safety and management, richer ingestion/connectivity (Dropbox, SharePoint, Snowflake, Advanced Data Load), developer productivity (Code Templates, Amorphic SDK), monitoring/insights, and stronger governance and cost controls.

Features (15)

  • [CLOUD-5990] - Introduced Amorphic Job Code templates - The Code Templates feature and the ETL Shared Library address the challenge of enabling non-technical users to create and execute jobs in Amorphic without extensive scripting knowledge, while reducing the template maintenance burden and providing a scalable solution for diverse client requirements. Users can now create custom code templates or use predefined system templates, and these templates can be attached to a job, which is then registered with a single click. Templates can be created during job setup or from scratch, while the ETL Shared Library offers ready-to-use methods for faster development.

  • [CLOUD-5683] - Introduced AI Models - Introduced the AI Models component in Amorphic that allows users to sync available models, enable or disable them, and assign models for use across different components in the platform. This centralizes model management and simplifies control over how AI is used within Amorphic. Added support for AI models with inference profiles in Amorphic; users can now configure a default inference option to be used when both on-demand and inference-profile options are available, ensuring flexibility and consistency in model execution.

  • [CLOUD-5928] - Introduced AI guardrails - This feature provides comprehensive AI safety controls through content filtering, policy enforcement, and real-time validation. It offers guard rail creation/management, multi-tier security policies (CLASSIC/STANDARD), PII protection, custom word filters, topic blocking, and component-specific configurations. Includes chat protection, audit logging, and cross-region support. This is available under the AI management section.

  • [CLOUD-5926] - Introduced Chats component in AI Space - Introduced a unified consumption layer in Amorphic to interact with models, knowledge bases, and agents via the Chats section. Users can now chat with AI, upload files to ask questions, summarize content, and take notes directly within conversations. This provides a seamless way to explore insights, manage knowledge, and collaborate in one place.

  • [CLOUD-5924] – Added support for Dropbox in SaaS Datasource - Amorphic now supports Dropbox as a new SaaS datasource. Users can securely connect their Dropbox accounts, select files or folders, and seamlessly migrate data into Amorphic datasets for further processing and analysis

  • [CLOUD-5903] - Introduced job run metrics and metric summaries - Amorphic now introduces custom logging for ETL jobs, enabling users to capture and track specific job run metrics. These metrics are automatically compiled into a comprehensive execution summary, providing enhanced insights into job performance and outcomes. With this feature, users gain greater visibility into their ETL processes, making it easier to monitor efficiency, troubleshoot issues, and optimize workflows. This functionality is readily available in jobs without any additional package configuration.

  • [CLOUD-5843] - Introduced AI Agents (Data Labeling, Summarizing & Error Diagnosis) - System Agents in Amorphic are pre-baked agents provided out of the box in the application, letting users interact with their application data residing within Amorphic components. System Agents are designed to perform atomic tasks, and there are currently three:

    • Data Labeller Agent - designed to generate labels for unstructured textual data, powered with capabilities to approve AI generated labels for dataset files.
    • Summarizer Agent - designed to generate concise summaries for unstructured textual data.
    • Error Diagnosis Agent - designed to debug issues with ETL job runs and Data Pipeline executions. It follows a multi-agent architecture that brings together the capabilities of Bedrock Agents as well as Strands Agents to help optimize log and code analysis.
  • [CLOUD-5553] – Introduced JDBC Metadata-Only Datasource - Amorphic now supports a new JDBC datasource type: Metadata Only, allowing users to ingest database metadata from JDBC sources directly into the Amorphic catalog. This enables cataloging of table schemas and structures without importing the full dataset, making it easier to manage and explore database assets efficiently

  • [CLOUD-5840] - Introduced RAG engine management with metrics - Amorphic introduces the new RAG Engine Management Panel under AI Services, providing comprehensive oversight of the underlying RAG engine. Available to all AI-enabled accounts, this service offers visibility into key RAG metrics and supporting backend components, ensuring better monitoring and control. With this enhancement, administrators can more effectively manage and optimize RAG operations, improving both performance and reliability across AI-driven workflows.

  • [CLOUD-5815] – Added support for Snowflake in JDBC Metadata Datasource - Amorphic now adds support for Snowflake as a JDBC metadata datasource, enabling seamless integration with your Snowflake environment. With this capability, you can discover and view Snowflake data assets directly within Amorphic Catalog, improving unified data discovery, governance, and accessibility.

  • [CLOUD-5763] – Added support for SharePoint in SaaS Datasource - Amorphic now supports SharePoint as a SaaS datasource, allowing users to securely connect their SharePoint account and ingest files into Amorphic datasets. This enhancement makes it easier to centralize and manage enterprise content stored in SharePoint, enabling downstream analytics, processing, and collaboration

  • [CLOUD-5753] - Integrated Knowledge bases in Amorphic - Introduced Knowledge Base functionality in AI Space where users can create a knowledge base with their domains and datasets. Users can sync files from the source, query over the files, and view detailed metrics. Sync metrics include insights such as number of files indexed, deleted, and other key stats.

  • [CLOUD-5620] – Introduced JDBC Advanced Data Load Datasource - Amorphic now supports a new JDBC Advanced Data Load datasource, designed to ingest large volumes of data from supported databases such as MariaDB and IBM Db2i/AS400. This feature enables multi-schema, multi-table ingestion with configurable cluster settings, auto-scaling, and transformation rules.

  • [CLOUD-5501] - Introduced Event based Triggers in Amorphic - Added a new "Event Trigger" schedule type to enable event-based execution of targets. Users can trigger ETL jobs and Data Pipelines automatically upon the successful completion of a file upload to a dataset. An "ingestion" event type has also been added that triggers targets upon completion of dataset ingestion jobs. Supports datasets with S3, JDBC, and External API data sources.

  • [CLOUD-5489] - Introduced File-Level Access Control in Datasets - Amorphic now supports file-level access control within datasets. This enhancement allows users to share and restrict access to specific files inside a dataset. On the consumption side, ETL jobs and DataLabs can now access only the files shared with them under Dataset-Restricted Read Access, providing more granular security and flexibility.

Enhancements (49)

  • [CLOUD-6012] - Support for .xlsm/.xls extension for xlsx S3 type Dataset - With this enhancement, under xlsx file type in S3-type datasets, we now support files with .xlsm and .xls extensions along with .xlsx, allowing greater flexibility when working with various Excel formats.

  • [CLOUD-6008] - Support for Additional Worker Types in ETL Jobs - Amorphic now extends support for multiple WorkerTypes in ETL jobs, including Standard, G.1X, G.2X, G.025X, G.4X, G.8X, and Z.2X. With this enhancement, users can leverage the newly added G.4X and G.8X worker types to achieve higher performance and improved scalability for demanding data processing workloads. This update provides greater flexibility in configuring ETL jobs, enabling users to optimize resource allocation based on workload size and complexity.

  • [CLOUD-5991] - Added Toolkit for Amorphic Analytics - Amorphic now introduces a comprehensive utility toolkit delivered as a system shared library for ETL jobs. This library is packaged and prepared by Amorphic for direct consumption, covering most of the common ETL operations needed in analytics workflows. The toolkit includes utilities for data type conversions, column name cleaning, join operations, lookup transformations, UID generation, SQL operations, and union operations. By consolidating these essential functions into a single package, the toolkit simplifies development, streamlines integration, and accelerates ETL processes across the platform.

  • [CLOUD-5969] - Enhanced Lineage Generation w.r.t access control and performance - This update enhances lineage generation in Amorphic by improving performance, adding stricter access controls, and introducing new metadata fields to indicate resource access status. Users will now only see resources they are authorized to access, with clear indicators such as access type and restrictions. These improvements make lineage insights faster, more secure, and easier to understand, strengthening both usability and governance across the platform.

  • [CLOUD-5961] - Added Resource-Specific Access Requests Retrieval Functionality - This enhancement adds the ability to retrieve access requests for a specific resource (dataset, job, dashboard, etc.) through the access requests feature. The enhancement allows users with owner/editor permissions on a resource to query and view all pending access requests specifically for that resource, improving the access request management workflow.

  • [CLOUD-5960] - Enhanced permissions for update and manage operations on Iceberg datasets - Enhanced the permissions for update and manage operations on Apache Iceberg datasets across all consumption mediums within our platform. This includes comprehensive support for Iceberg table operations such as VACUUM, OPTIMIZE, MERGE, UPDATE, DELETE, and maintenance commands across Playground, ETL Jobs, Notebooks, Studios, and other Iceberg dataset consumption points.

  • [CLOUD-5937] - Support Dataprofiling for iceberg datasets - Introduced an enhancement to our data profiling capabilities: native support for Iceberg datasets. This new integration allows users to seamlessly profile their data stored in Iceberg tables, gaining deeper insights into its quality, structure, and characteristics. This expansion ensures that our powerful data profiling tools can now be directly applied to this increasingly popular open table format, streamlining data governance and analysis workflows for a broader range of modern data architectures.

  • [CLOUD-5934] - Consolidated Data Load Throttling Alarms for Improved Efficiency - This enhancement optimizes the data load throttling alarm system by consolidating two separate CloudWatch alarms into a single, more efficient alarm configuration. The change eliminates the need for a separate "within limit" alarm and improves the responsiveness and reliability of the automatic data load throttling feature.

  • [CLOUD-5930] - Introduced Templates for Knowledge Base and Guardrails - Amorphic now includes system-defined templates for both Knowledge Base and Guardrails. With this enhancement, users can quickly create Knowledge Base entries and Guardrails directly from pre-built templates, simplifying the setup process and reducing manual effort. These templates provide a standardized starting point, ensuring consistency while allowing customization as needed. This update improves efficiency and accelerates the configuration of Knowledge Base and Guardrails within the platform.

  • [CLOUD-5925] – Added support for additional transformations & filters in JDBC Bulk Data Load – Introduced a new “Capture Old Values” transformation rule for both CDC and Full Load dataflows. This enhancement allows capturing the previous values of all columns before updates, enabling improved change tracking and auditability.

  • [CLOUD-5923] - Enhanced Insights in Dashboards for Dataflows, Jobs, Data Pipelines, and Datasets – Expanded Insights within dashboards to display richer metadata. Users can now view additional details such as Datasource Name, Datasource Type, and Ingestion Type for dataflows; Job Type for jobs; Execution Time for data pipelines; and Domain, File Type, and Target Location for datasets, improving visibility and monitoring across the platform.

  • [CLOUD-5921] - Added Admin Page Configuration for Advanced Data Load Datasources – Added a new admin page configuration that allows administrators to enable or disable the creation of Advanced Data Load datasources. This provides greater governance and cost control, as these datasources can be resource-intensive.

  • [CLOUD-5916] – Ext-API Datasource enhancements – Extended support for External API datasources with multiple authentication mechanisms, three pagination types, custom headers and body, data preview, and test connection capabilities. These enhancements provide users with greater flexibility and control when configuring and validating external API integrations.

  • [CLOUD-5914] – S3 datasource Ingestion enhancements for Large File Volumes – Optimized S3 datasource ingestion to efficiently handle S3 sources with a large number of files. The enhancements include Bloom filter–based deduplication and batch processing, reducing the risk of out-of-memory errors and improving overall scalability and reliability of S3 ingestion jobs.

  • [CLOUD-5889] – Enhanced Activity Logs to include Access Sharing actions – Enhanced activity logs to capture share and revoke actions on resources, including details of associated users and tags. This provides improved visibility and auditability of access changes across the platform.

  • [CLOUD-5860] - Enhanced Cost Explorer feature to handle rate limit exceptions - Cost reports are now more reliable. If AWS temporarily blocks requests due to high traffic, the system will automatically retry so scheduled runs don’t fail.

  • [CLOUD-5852] - Added Catalog Engine/OS management with metrics - Added support to fetch OpenSearch cluster metrics in order to enhance monitoring and observability. Introduced a new feature to retrieve key metrics such as CPU utilization, JVM memory pressure, node count, and cluster health status.

  • [CLOUD-5841] - Enhanced Dataprofiling notifications - Previously, far too many notifications were sent for timed-out data profiling jobs, which made them a hassle for end users to track and risked crucial information going completely unnoticed. The process has been simplified so that only a single notification is sent to dataset owners and system admins for each data profiling job.

  • [CLOUD-5830] - CDC Dataflows with S3 Athena Support - CDC dataflows now support S3-Athena as a target, in addition to the earlier full load support. This allows more flexible query and analysis options for incremental data.

  • [CLOUD-5826] – Bulk Dataload Flow Enhancements – Enhancement to bulk dataload flows to support editing of shared instances. Users can now change the underlying entity from one to another during dataflow updates without recreating the entire flow.

  • [CLOUD-5819] – External API Datasource Enhancements – Enhanced External API datasources to support updating query parameters and applying runtime schedule overrides. Query parameters now also support dynamic date placeholders, enabling ingestion from APIs with relative date filters.

  • [CLOUD-5813] - Enhanced Data Quality Checks Failure messages - The status of the auto constraint suggestion job is now fetched after the user starts the job. Additionally, for failed custom DQ constraints, an error message is returned in the response for every failed constraint.

  • [CLOUD-5809] - Support for uploading multiple documents in SQL AI - In Train SQL AI, users can now upload multiple files within a single training document. Previously, there was a one-to-one mapping, but now users can upload multiple documents in one training document.

  • [CLOUD-5808] – Support for AWS HealthLake in Ireland Region – Enhanced support for AWS HealthLake (HCLS) by adding compatibility with the Europe (Ireland) region (eu-west-1), following its recent availability from AWS. This ensures Amorphic HCLS workloads can now seamlessly operate with HealthLake in eu-west-1.

  • [CLOUD-5800] - Support for more columns on stl_load_errors redshift system table - The previous implementation had limited column support when querying the stl_load_errors Redshift system table, restricting visibility into data loading failures and error details. This enhancement expands column coverage to include additional error metadata, query context, and diagnostic information available in the system table.

  • [CLOUD-5756] - Added a consolidated single flag for all AI services - A centralized control model for AI has been introduced. A main AI flag now determines overall availability of AI features, while an application-level configuration provides flexibility to enable or restrict AI for individual Amorphic components. This ensures consistent governance, while still allowing tailored use of AI where it creates the most value.

  • [CLOUD-5744] - Updated resource level cost metrics with Amorphic resource names - Cost reports now show clear, user-friendly resource names instead of long AWS ARNs. This makes it easier to understand and analyze costs, with names matching what you see in Amorphic’s UI.

  • [CLOUD-5737] – Dataflow Metadata field enhancements – Enhanced dataflows by aligning mandatory field requirements with datasets. Fields such as Description and Keywords, which are optional in datasets, are no longer enforced as mandatory in dataflows, ensuring consistency and flexibility in metadata management.

  • [CLOUD-5684] - SQLAI Error message improvements - This update enhances error messaging in SQL AI for invalid inputs. Users will now see clearer, more descriptive error details along with resource-based examples to guide them toward correct prompt usage. These improvements make it easier to understand and resolve errors, reducing confusion and helping users generate valid queries more efficiently. This enhancement significantly improves the overall user experience when working with SQL AI.

  • [CLOUD-5681] - Introduced heartbeat feature to keep lambdas alive - Introduced a heartbeat mechanism that keeps the Lambda continuously active by sending a heartbeat signal, preventing inactive-state errors and internal server errors and ensuring seamless execution.

  • [CLOUD-5672] - User deletion enhancements w.r.t Amorphic BI - Enhancements were made to fetch QuickSight-registered users who also have owner/editor access in BI through Amorphic, for user resource transfer during user deletion.

  • [CLOUD-5658] – Added SSL support and improved error handling in Bulk Data Load feature – Enhanced Bulk Data Load to support SSL flags for PostgreSQL and SQL Server databases, enabling secure connections during data ingestion. Additionally, improved error messages for dataflow failures to provide clearer troubleshooting guidance.

  • [CLOUD-5657] – Improved Dataset Deletion Handling for HCLS Stores – Enhanced resource sync logic to prevent accidental deletion of active HCLS resource datasets. If a dataset’s associated resource is active, the deletion is skipped.

  • [CLOUD-5655] - Catalog Improvements and introduced Stewardship - The data catalog has been enhanced with several key features to improve usability, governance, and collaboration. Timebound queries are now supported, allowing users to filter results by specific time ranges. Users can search across datasets they own for quicker access to relevant assets. Data steward functionalities have been implemented, enabling labeling and updating of custom metadata. Additionally, users can now add comments to assets for better context sharing and collaboration. To ensure accountability, audit logs have been added to track changes to assets.

  • [CLOUD-5636] - Added user option to download multiple files in a dataset - With this enhancement, users can download multiple files by limit, date range, or file list. Supports both analytic and non-analytic datasets with size checks.

  • [CLOUD-5633] - Enhanced Cost Tagging to Support Application-Level, Resource-Level Tags - Enhanced cost management tags with automatic scope-based tagging capabilities. Users can now define application-level and resource-level tags that automatically apply to relevant resources upon creation, improving cost visibility and governance.

  • [CLOUD-5609] – HealthImaging Store Deletion Enhancements – Enhanced HealthImaging functionality so that when a store is deleted, its associated image sets are automatically removed as well. This ensures proper clean-up during both manual deletions and auto-termination workflows.

  • [CLOUD-5555] - Support for more resources in budgets stop resources action - Jobs and DataPipelines will now automatically stop when budget limits are exceeded. You’ll get email reports showing which resources were stopped, helping prevent cost overruns without manual intervention.

  • [CLOUD-5534] - Introduced cost-related guardrails for Data profiling feature - This update adds cost guardrails for Data Profiling and enables DPU adjustments and timeout updates for ad-hoc runs and at the job level (for scheduled runs). It reduces job runtime limits from 48 hours and introduces UI warnings for changes to timeouts and other attributes.

  • [CLOUD-5526] - Support for attaching access Tags to Cost Tags for Multi-User Access - Previously, the platform supported assigning multiple users to the same Cost Tag but lacked support for Access Tags. This feature adds support for attaching Access Tags to Cost Tags, enabling management access for multiple users simultaneously, improving scalability, and simplifying access control management.

  • [CLOUD-5511] - Introduced flag to disable self-registration for Amorphic - Introduced a new flag to disable self-registration in Amorphic. When enabled, this prevents users from registering themselves into the platform.

  • [CLOUD-5504] - Enhanced DynamoDB data validation to eliminate temporary table creation - Previously, the data validation implementation for DynamoDB target locations created temporary DynamoDB tables as part of the validation process, resulting in unnecessary resource creation. The validation logic has been enhanced to apply custom data validation rules instead of creating temporary tables, while maintaining validation accuracy.

  • [CLOUD-5500] - Support for creating schedules with the same name for different resources - With this enhancement, users can now create schedules with identical names, provided they are for different resources.

  • [CLOUD-5497] - Enhanced resource deletion handling for lineage - Lineage metadata for deleted resources is now preserved. This enhancement improves historical tracking and auditing, allowing you to query past lineage states and access metadata for resources regardless of their current existence on the platform.

  • [CLOUD-5492] - Allowed usage of Bedrock Models in Datalabs - Amorphic now supports the usage of Bedrock models within DataLabs and ETL jobs, enabling users to seamlessly integrate the capabilities of Amazon Bedrock into their data workflows.

  • [CLOUD-5434] - Support for Glue Version 5.0 for ETL jobs - Amorphic now supports ETL jobs with Glue version 5.0. Newly created jobs will be created with this latest version of Glue by default, ensuring that the application remains in sync with the latest updates from AWS.

  • [CLOUD-5308] - Optimized DynamoDB scan operations in Access Grants Report - This enhancement improves the performance of the Access Grants report, making it faster and more efficient. Optimized data retrieval reduces delays, ensuring a smoother experience while minimizing unnecessary system resource usage.

  • [CLOUD-5275] – Added Unified API for Bulk Load Entity Updates – Enhanced JDBC datasource management by enabling full edit support for replication instances. Both shared and dedicated Entities(Instances) can now be updated seamlessly through a single, unified API, simplifying administration and reducing operational overhead.

  • [CLOUD-5170] - Support for .whl files in Spark ETL jobs - Amorphic platform now supports .whl file dependencies for PySpark ETL jobs, enabling simplified library packaging and distribution. This enhancement improves dependency management and provides greater deployment flexibility for Python packages and custom modules in Spark workflows.

Bug Fixes (26)

  • [CLOUD-6100] - Prevent uploads of incorrect file types for S3 datasets - Fixed an issue where S3-type datasets allowed uploads of files with mismatched formats. Validation has been added to ensure uploaded files match the defined FileType.

  • [CLOUD-6070] – Fixed Max Recursion Depth Error for Large Table Selections – Resolved an issue in JDBC Bulk Data Load dataflows where selecting a large number of tables caused a "Max recursion depth" error, ensuring stable ingestion for dataflows with many tables.

  • [CLOUD-6025] - User deletion failing when the user has a view in 'create failed' state - Fixed the "Key error" runtime error when deleting resources in the 'create-failed' state by updating the user deletion process to safely handle missing metadata (like DatasetId) for failed views. The process now logs a warning and skips the problematic view, allowing the deletion to complete successfully.

  • [CLOUD-6016] – Fixed Data Loss for S3 Ingestion of Identically Named Files – Resolved an issue in the S3 datasource ingestion where files with identical names from different directories could overwrite each other, causing data loss and processing failures. Now, unique target keys are generated, and thread-safe processing ensures reliable ingestion

  • [CLOUD-6013] - Resource Sync Improvements for ETL Jobs - Resource synchronization for ETL jobs now continues even if one of the user's roles lacks the required permissions. The system checks for other valid roles to complete the sync, preventing unnecessary failures. With this enhancement, resource sync is more reliable and resilient, providing a smoother experience when managing ETL jobs in Amorphic.

  • [CLOUD-5968] - User unable to stop DQ check schedule execution - Resolved an issue where users were unable to stop Data Quality (DQ) check schedule execution, which previously failed with the error: "Schedule execution cannot be stopped in RUNNING state." The queue name has also been fixed to ensure unhindered access.

  • [CLOUD-5965] - Parquet file processing fails for Redshift datasets after schema datatype modifications - Parquet file loads in Redshift now use explicit column mapping. This ensures data lands in the right columns even after schema changes, preventing failures and improving data integrity.

  • [CLOUD-5964] - Datalabs studio deletion failure with CustomR Image - This fix addresses an issue where Amorphic Datalabs studios created before January 2025 could become stuck in a Deleting state when a CustomR image was attached. The problem was caused by faulty error handling during the deletion process. A fix has now been deployed for new studios, ensuring proper error handling and smooth deletion. This update improves stability and reliability when managing Datalabs environments.

  • [CLOUD-5954] – Fixed Invalid/Expired PAT Token Error in External Amorphic metadata Sync – Resolved an issue where an invalid or expired PAT token during external Amorphic metadata datasource sync displayed unclear errors. Now, a clear message indicates that the token is invalid or expired, guiding users to take corrective action.

  • [Amorphic-BI] [CLOUD-5910] – Fixed HCLS Datasets Listing in BI App – Resolved an issue in the Amorphic BI vertical where HCLS datasets were not appearing. Users can now successfully view and access HCLS datasets, ensuring seamless integration and analysis within the BI application.

  • [CLOUD-5908] - Auto-Retry Mechanism Not Functioning in BulkDataLoad Dataflows – Fixed an issue in JDBC BulkDataLoad dataflows where retry intervals increased incorrectly due to changes introduced during the 3.0 application facelift. The auto-retry mechanism now works as expected.

  • [CLOUD-5812] - ETL Job Failures on Iceberg Table Deletes - A fix was applied to resolve ETL job failures when deleting data from Iceberg tables. The issue stemmed from domain-level access overriding dataset-specific S3 delete permissions, causing authorization errors for delete operations. The permission logic has been updated to ensure Iceberg ETL jobs can now perform delete operations successfully. This enhancement improves reliability and consistency in data management workflows.

  • [CLOUD-5806] - Fixes for Data profiling cost guardrails - Resolved issue where disabled data profiling jobs continued sending "execution ran for more than 2 hours" alerts daily. The system now properly checks job status and ignores old timeout executions from disabled jobs.

  • [CLOUD-5804] - Error while updating Iceberg table properties - Previously, setting unsupported table properties like write.metadata.delete-after-commit.enabled = true resulted in a misleading internal server error. Validation has now been added to correctly detect and block such properties with clear error messages.

  • [CLOUD-5801] – Fixed Dependency Checks for Normal Data Load Datasource Deletion – Resolved an issue where dependent operations were not being checked when deleting a JDBC Normal Data Load datasource, ensuring safe and consistent deletion.

  • [CLOUD-5749] - Catalog Indexing improvements - This update resolves an issue where OS field limitations caused AssetSchema errors during asset handling. The fix removes schema indexing to prevent such failures, ensuring smoother and more reliable processing of assets. With this improvement, users can expect enhanced stability and consistency when working with asset schemas in Amorphic.

  • [CLOUD-5742] - Dataset Creation Succeeded with Invalid Data Classification Inputs - Fixed the issue by adding validation of the data classification input in the request body when creating a dataset.

  • [CLOUD-5735] - Handling Resource Transfer for Views in CREATE_FAILED state - Valid resources can now have their permissions transferred without errors, even if Redshift views are in a 'Create Failed' state.

  • [CLOUD-5734] - Issue with Cost reporting from EDF: the tagged instance shows up as untagged and with a cost greater than it should in the platform - Tags that don’t follow the standard cost tag naming pattern will no longer be grouped under Untagged. This prevents duplicate or mixed-up costs and makes reports more accurate.

  • [CLOUD-5733] - Cleanup of stale data reference metadata from glossary term metadata - Fixed an issue where stale metadata in glossary term data references was not cleaned up during dataset deletion. This cleanup ensures term names and definitions can be updated smoothly without errors.

  • [CLOUD-5605] – Fixed HealthImaging Store Auto-Termination Failure – Resolved an issue where auto-termination of HealthImaging stores failed when linked to FlexView. The termination workflow now handles active FlexView connections gracefully, ensuring reliable and consistent resource cleanup.

  • [CLOUD-5581] - Improved error handling and email notification details - Provided proper classification of error messages under various valid categories in order to improve readability of error messages in emails.

  • [CLOUD-5547] - Fixed data profiling failures for Redshift dataset in a multi-tenant environment - This fix resolves data profiling failure in redshift target datasets for multi-tenant environments.

  • [CLOUD-5546] - Unable to resource sync lake formation Hudi dataset - This fix enables proper retrieval and successful querying of Hudi datasets created via resourceSync in lake formation.

  • [CLOUD-5539] – Fixed Missing TaskStats for JDBC Full-Load Dataflows – Resolved an issue where TaskStats were not available for S3 Athena full-load dataflows with ServerlessReplication set to true. TaskStats are now correctly reported as expected.

  • [CLOUD-5532] - Support custom notifications for email ingestion - Custom notifications were disabled for email ingestion because of a backend issue. The issue has been traced and fixed.

API Only Features (04)

  • [CLOUD-6018] - Displaying only default templates - Improved the user experience when creating a resource from a template by introducing an 'IsDefault' option, so that default templates appear at the top of the list, making it easier for users to find the most common options.

  • [CLOUD-5950] - Enabled re-processing and download options for failed files in datasets - Enhanced dataset file functionality to reprocess failed files for 'APPEND'-type datasets. This change also prevents deletion of files from the LZ after failures. Additionally, a download option is now enabled for failed files.

  • [CLOUD-5938] - Support of ‘Display Name’ for Amorphic datasets - Added a new editable DisplayName attribute for Amorphic datasets, allowing users to customize and rename dataset names for greater flexibility. Users can now easily rename resources avoiding the hassle of deleting and creating resources again. Display name is just an additional attribute from UX perspective and underlying backend resources still follow naming conventions as per AWS standards.

  • [CLOUD-5936] - Support creation of dataset/views from playground query results - With this enhancement, users can now take results from ad-hoc queries run in the Playground and:

    • Upload the results as a file into an existing dataset
    • Create a new view directly from the query
    • Create a new dataset based on the query output

Cross-Account-Role Updates (01)

  • [CLOUD-5245] - Cross Account Role Permission Changes v3.1 - Updated cross-account role with permissions for Security Hub, Kafka, and Bedrock AWS services, along with the ability to tag and untag AWS resources for improved management.

Known Issues (02)

  • [CLOUD-6177] - System Agents unavailable in unsupported Amorphic deployment regions - System Agents are predefined by the application and run on models selected by the system based on performance and other factors. Due to this, some agents may be unavailable or show validation errors if their underlying models are not accessible or supported for cross-region inference in your Amorphic deployment region.

  • [CLOUD-5964] - Datalabs studio deletion failure with CustomR Image - R Kernel support in Datalab Studio remains an open issue. The AWS-provided R Kernel image shows inconsistent behavior across environments; hence, Amorphic currently supports only the standard Datalab Studio.

Deprecated features (01)

  • [CLOUD-5693] - End of support/life for job versions - Due to the end of life for Python 3 (PythonShell jobs) and Glue versions 0.9, 1.0, and 2.0 (Spark jobs), we have implemented the necessary platform changes. Email communications with details have also been rolled out regarding the same. Users are advised to upgrade any jobs on these older versions via Amorphic.

User Actions/Notice (04)

  • [CLOUD-6097] - Support for Encoded Table Mapping Rules in Dataflows - Added support for encoded table mapping rules in BulkDataLoad datasource dataflows. This enables users to define and apply complex transformation expressions that were not supported earlier. When creating or updating dataflows via the Amorphic API, users can now encode complex rules and send them to the backend seamlessly.
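
A hypothetical sketch of preparing such an encoded rule document is shown below; it assumes the rules are a JSON document that is Base64-encoded before being sent. The actual encoding scheme, request field, and rule schema expected by the Amorphic API may differ, and the example rule follows a generic DMS-style table-mapping format rather than a confirmed Amorphic schema.

```python
# Hypothetical sketch: Base64-encoding a JSON table-mapping rule document so it
# can be passed in a dataflow create/update request. Encoding and rule schema
# are assumptions for illustration.
import base64
import json

table_mappings = {
    "rules": [
        {
            "rule-type": "transformation",
            "rule-id": "1",
            "rule-name": "prefix-table-names",
            "rule-target": "table",
            "object-locator": {"schema-name": "sales", "table-name": "%"},
            "rule-action": "add-prefix",
            "value": "stg_",
        }
    ]
}

encoded_rules = base64.b64encode(json.dumps(table_mappings).encode("utf-8")).decode("utf-8")
print(encoded_rules)  # value that would be sent to the backend
```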

  • [CLOUD-6006] - Change in Default Columns for Full-Load S3 Dataflows - In JDBC BulkDataLoad S3 target full-load dataflows, two additional columns (op and Record Modified Time) were previously included by default. For new targets, these columns are no longer added automatically, but users can still include them by specifying the extra connection properties when creating dataflows.

  • [CLOUD-5592] - Upgrades to the OpenSearch cluster for enhanced Catalog - Fixed extensive CPU bursting and memory utilization issues causing node drops through upgraded instance class and additional node for OpenSearch cluster for better resource distribution and high availability. Going forward, all deployments will use a minimum of a 3-node OpenSearch cluster on t3.medium.search for improved stability and fault tolerance.

  • [CLOUD-5879] - Added new Tags for AWS Resources - Introduced new tags for AWS resources provisioned by Amorphic for improved resource management:

    • cwk:projectid
    • cwk:application
    • cwk:provisionedby
    • cwk:createdby
    These new tags help to manage and segregate resources efficiently.

Version 3.0

Amorphic 3.0 delivers a complete overhaul of the user interface and user experience (UI/UX), offering a modern, intuitive, and streamlined platform for all users. This release introduces 9 major features—including unified services for Dashboards, Datasets, and Data Labs—along with 79 enhancements and 33 critical bug fixes. The redesigned UI/UX focuses on usability and efficiency, making workflows simpler and data management more accessible than ever before.