Install and Upgrade
ORION-281679: SSD firmware upgrade is not supported for Supermicro EBoxes.
ORION-284784: VAST Cluster Install does not support Subnet field values that end with a dot (for example, 10.10.).
ORION-280966: After importing a JSON configuration file in VAST Cluster Install, manually verify that all the populated fields have the expected values. Depending on the cluster configuration and environment, some fields may not be populated as expected during the import.
ORION-242658: BMC firmware upgrades are not supported for Supermicro Genoa CNodes.
ORION-222648: NDU that includes automatic adjustment of CNode CPU isolation settings (isolcpus) is not supported for EBoxes.
ORION-214559: A BMC upgrade cannot be performed while an inactive CNode is powered off.
Automatic drive firmware upgrade is not performed on drives that have been moved manually from an old DBox to a new DBox during the DBox replacement procedure. (Note that this does not apply to EBoxes.)
Networking
The following limitations apply when implementing L3 networking:
After enabling L3 access for a virtual IP pool, it cannot be disabled.
L3 access is not supported on virtual IP pools for which CNode Port Affinity is configured.
L3 networking is not supported on IB clusters.
L3 networking is not available for VAST on Cloud.
MD5 authentication is not supported.
ORION-266297: When using numbered BGP, each CNode can only be involved in one BGP configuration. To see which CNodes are currently used in which BGP configurations, use the Network Access -> Virtual IP Pools page, which displays the CNodes and BGP configurations for each virtual IP pool.
The following limitations apply when using OpenTelemetry-based Ethernet Network Monitoring:
Up to 70 switches per cluster.
The switch must have OTel configured to send telemetry data over gRPC to the VMS IP address using port 4317 (see the illustration after this list).
TLS on the connection used to obtain the metrics from the switch is not supported.
Supported on NVIDIA Cumulus switches 5.12.1 and later. For other switch vendors, contact VAST Support.
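Switch-side configuration is vendor-specific and is done on the switch itself, but the transport expectations above can be illustrated with a generic OpenTelemetry exporter. The sketch below is illustrative only; the VMS IP address is a placeholder, and the only assumptions reflected are the ones in the list above (gRPC to port 4317, no TLS).

```python
# Illustration only: an OTLP/gRPC metrics exporter pointed at the VMS,
# matching the expected transport (gRPC, port 4317, plaintext connection).
# Requires: pip install opentelemetry-sdk opentelemetry-exporter-otlp-proto-grpc
from opentelemetry.exporter.otlp.proto.grpc.metric_exporter import OTLPMetricExporter
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader

VMS_IP = "10.0.0.100"  # placeholder VMS IP address

exporter = OTLPMetricExporter(
    endpoint=f"{VMS_IP}:4317",  # gRPC collector endpoint on the VMS
    insecure=True,              # TLS is not supported on this connection
)
provider = MeterProvider(metric_readers=[PeriodicExportingMetricReader(exporter)])
meter = provider.get_meter("switch-telemetry-demo")
meter.create_counter("interface.packets").add(1, {"port": "swp1"})
provider.force_flush()
```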
Changing network configuration from VMS (in VAST Web UI: Settings -> Configure Network) is not supported for clusters that have any CNodes that are connected simultaneously to multiple separate client data networks.
Encryption of Data at Rest
ORION-208004: Enabling VAST OS boot drive encryption requires that the node is inactive. Enabling the encryption on an active node may cause a long reboot sequence.
The default tenant does not use keys created or stored within an EKM. To encrypt data within the default tenant with EKM-based keys, use encrypted paths.
The following rules and limitations apply when using Kerberos providers:
MIT Kerberos providers are used for NFSv4 only.
A tenant can be associated with a single Kerberos provider only.
Up to eight Kerberos providers can be defined on a VAST cluster.
Kerberos principals used to authenticate to the VAST cluster must exist as users in an LDAP provider or in a local provider associated with the same tenant as the Kerberos provider. Otherwise, they are squashed to the nfsnobody user.
Quotas
Quotas are not enforced on replication destination directories under a protected path. For example, if the protected path is /ppath, a quota on /ppath/yourdir is not enforced.
The following limitations apply when using the Remote Quota Protocol (rquota):
SETQUOTA requests are not supported.
Quota information can be provided for active users only.
Information about group quotas is not provided.
Quality of Service
The prioritization flag is supported for view QoS policies. It cannot be set for user QoS policies.
S3 (including Kafka and VAST Database) and block storage I/Os are not calculated as part of the cluster-wide maximum write bandwidth limit.
Some high-priority optimizations are applied to NFSv3 only.
When the cluster-wide maximum write bandwidth is set, actual performance may vary by up to 15% from the expected performance.
Use of QoS with RDMA is not supported.
NFS
ORION-115336: If you create an NFSv4.1-only view and mount it, and then create its parent view with NFSv3 only, IO operations on the NFSv4.1-only view succeed but mounts are not allowed.
The following limitation applies when using NFSv3:
In rare cases with large numbers of files and directories, the existence of a view with Global Synchronization enabled under a protected path can block the removal of the protected path.
TLS encryption with NFSv4.1 is not supported over RDMA.
NFSv4
The following requirements and limitations apply when using NFSv4 file delegations:
The NFSv4 protocol requires Network Time Protocol (NTP) to be configured on both the VAST cluster and the client. While VAST does not enforce this requirement, the absence of proper time synchronization may, in rare cases, result in unexpected behavior.
The client must have an NFSv4 backchannel connection.
NFSv4 delegations are not supported with RDMA.
Using NFSv4 file delegations on views for which other protocols are enabled may result in unexpected behavior.
Before making a path read-only by using the /folders/read_only endpoint of the VAST REST API, ensure that NFSv4 delegations are disabled or returned. Otherwise, unexpected behavior may be encountered.
Restoring a snapshot on a tenant that has NFSv4 delegations enabled may cause unexpected behavior with regard to delegated files. It is recommended to revoke existing delegations before restoring a snapshot.
VAST Cluster recalls granted NFSv4 delegations when a failover process is started.
In some cases when there is very high NFSv4 callback load per cluster or per client, VAST Cluster might start recalling or revoking NFSv4 delegations due to callback request timeouts.
NFSv4 delegations require mounting the client with at least NFS version 4.1. Mounting with NFS version 4.2 is recommended for additional performance improvements (see the mount sketch after this list). Linux kernels 6.11 and later provide further performance advantages on the client (RFC9754); these are backported to older kernels through VAST-NFS.
If you are using VAST-NFS, note that NFSv4 delegations are supported with VAST-NFS 4.0.32 and later. VAST-NFS 4.5 includes enhancements related to support of NFSv4 delegations.
SUSE Linux Enterprise Server 12.x and CentOS 7.x clients are not supported with NFSv4 delegations enabled.
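A minimal sketch of the recommended client mount follows. The server address, export path, and mount point are placeholders, and on a real client you would typically run the mount command directly rather than from Python.

```python
# Sketch: mount an export with NFS version 4.2 so that delegations (and the
# related performance improvements) can be used. Placeholder server address,
# export path, and mount point.
import subprocess

subprocess.run(
    ["mount", "-t", "nfs4", "-o", "vers=4.2", "10.0.0.10:/export", "/mnt/export"],
    check=True,
)
```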
SMB
Views exposed as SMB shares work only if the cluster is joined to Active Directory. This includes both SMB-only and multiprotocol views.
ORION-169707: When the Hyper-V management tool tries to list VAST Hyper-V SMB shares on an SMB server, the "The RPC server is not available" error can occur if the SMB server is specified using its FQDN. To avoid this error, specify the IP address of the SMB server instead of the FQDN.
ORION-160323: After updating permissions for an SMB share in Windows Explorer, a duplicate SMB share can be displayed. The duplicate share disappears upon a refresh (F5).
ORION-134730: An attempt to restore a file can fail if, after the restore has started, a quota is set on the path where the file resides.
The following limitations apply when using Alternate Data Streams (ADS):
ADS are not seen or accessible by protocols other than SMB.
ADS will not be retained when files and directories are replicated to S3.
The following limitations apply when using SMB hardlinks:
Creating a hardlink to another SMB-enabled view is not allowed.
Creating hardlinks to directories is not allowed.
Linking an Alternate Data Stream (ADS) is not supported.
Creating a hardlink on a Global Access satellite cluster is not supported.
Creating a hardlink on the Global Access origin cluster to a file that has already been opened from a satellite cluster is not allowed.
The following limitations apply when using SMB change notifications for subdirectories:
SMB recursive change notifications are not available for paths for which Global Namespace is configured.
Using SMB recursive change notifications for nested SMB shares may entail unexpected behavior.
S3
An object to be uploaded via an S3 presigned POST request must have only ASCII characters in its name.
A POST policy (used for S3 presigned POST requests) can be up to 4800 bytes.
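For example, a presigned POST generated with boto3 can be used as long as the object key is ASCII-only and the signed policy stays within the 4800-byte limit. This is a minimal sketch; the endpoint URL, bucket, key, and file name are placeholders.

```python
# Sketch: generate and use an S3 presigned POST against a VAST S3 endpoint.
# The object key must contain only ASCII characters, and the signed policy
# document may not exceed 4800 bytes.
import boto3
import requests

s3 = boto3.client("s3", endpoint_url="https://s3.example.com")  # placeholder endpoint

post = s3.generate_presigned_post(
    Bucket="my-bucket",          # placeholder bucket
    Key="reports/daily.csv",     # ASCII-only key
    ExpiresIn=3600,
)

with open("daily.csv", "rb") as f:
    resp = requests.post(post["url"], data=post["fields"], files={"file": f})
resp.raise_for_status()
```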
ORION-197281: If data is synchronously replicated from one bucket to another, and you set up bucket logging on the replication destination bucket with a different logging destination bucket, VAST Cluster disables the bucket logging that was set on the replication source bucket.
ORION-190674: Once created, an S3 bucket cannot be renamed or moved to a different path. Thus, for example, if you try to change the bucket’s path when modifying a view in VAST Web UI, the change does not take effect and the view will still be listed with the old path.
ORION-240888: S3 bucket monitoring does not take into account VMS-originated S3 requests, including those related to features such as:
Lifecycle rules
Bucket notifications
Bucket policies
Object ownership
Bucket versioning
Object locking
Bucket logging
The following limitations apply when using S3 Indestructible Object Mode:
An S3 Bucket view with Indestructible Object Mode cannot have other protocols enabled.
Indestructible Object Mode cannot be set for a view that points to / (root directory).
Views cannot be created under a view in Indestructible Object Mode, or at the same path as the Indestructible Object Mode view.
Directories that contain views with Indestructible Object Mode enabled cannot be moved to the Trash folder.
Indestructible Object Mode cannot be used together with S3 Object Locking or S3 Object Versioning.
Indestructible Object Mode cannot be used together with WORM.
Indestructible Object Mode cannot be set for a view that exposes the protocol audit log directory.
A snapshot cannot be restored on a view that is in Indestructible Object Mode.
Views in Indestructible Object Mode are not subject to replication or Global Access.
The following limitations apply when using S3 Versioning:
After versioning has been enabled on a bucket, it cannot be disabled (see the example after this list).
Multiprotocol access to versioned objects is not supported. NFS and SMB clients may attempt to access the S3 versioned objects in read-only mode.
MFA Delete is not supported.
ORION-143808: S3 versioning is not supported with global snapshot clones. An attempt to put a versioned object to a bucket at the global snapshot's destination path fails with an internal error.
S3 conditional writes are not supported for versioned buckets.
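A minimal sketch of enabling versioning with boto3 follows; the endpoint and bucket are placeholders. Per the first limitation above, this cannot be undone.

```python
# Sketch: enable S3 versioning on a bucket (placeholder endpoint and bucket).
# Per the limitation above, versioning cannot be disabled once enabled.
import boto3

s3 = boto3.client("s3", endpoint_url="https://s3.example.com")
s3.put_bucket_versioning(
    Bucket="my-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)
```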
The following rules and limitations apply when using S3 Server-Side Encryption with Customer-Provided Keys (SSE-C):
If an S3 request contains an x-amz-server-side-encryption-customer-* header, HTTPS is required for the request to be accepted by the cluster.
Only the AES256 encryption algorithm is supported (see the example after this list).
Objects encrypted with SSE-C cannot be accessed through other access protocols.
For replication environments, it is strongly recommended to avoid using SSE-C unless all of the clusters involved support SSE-C.
ORION-321170: S3 SSE-C is not supported when uploading objects using chunked encoding.
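A minimal SSE-C sketch with boto3, reflecting the rules above: the request is sent over HTTPS, uses the AES256 algorithm, and supplies a customer-provided key. The endpoint, bucket, object key, and key material are placeholders, and the same key must be supplied again to read the object back.

```python
# Sketch: upload and read back an object with SSE-C (customer-provided key).
# HTTPS is required, and AES256 is the only supported algorithm.
import os
import boto3

s3 = boto3.client("s3", endpoint_url="https://s3.example.com")  # HTTPS endpoint
sse_key = os.urandom(32)  # 256-bit customer-provided key (placeholder)

s3.put_object(
    Bucket="my-bucket",
    Key="secret.bin",
    Body=b"payload",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=sse_key,   # boto3 base64-encodes the key and adds its MD5
)

obj = s3.get_object(
    Bucket="my-bucket",
    Key="secret.bin",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=sse_key,   # the same key is required to read the object
)
data = obj["Body"].read()
```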
Block
For Rocky Linux-based clients, VAST recommends Rocky Linux 9.4 or later.
If a host defined on the VAST cluster does not have any volumes mapped to it, NVMe auto-discovery does not show this subsystem.
A view that is used to expose block storage cannot have other storage protocols enabled.
You cannot enable or disable block storage support on an existing view. Block storage support can only be enabled for a view during view creation and cannot be disabled afterwards.
Block devices can be created on empty directories only.
Nesting of a block view inside an existing block view is not allowed.
The host NQN cannot be modified. To change the NQN, you need to remove the host and then add and map it anew.
ORION-272773: Automatic validation that enforces uniqueness of host NQNs does not validate NQNs for the default tenant. Only non-default tenants are checked.
VAST Cluster does not support subsystem names that contain a space (in VAST Web UI: Element Store -> Views -> choose to create a view -> Block tab -> Subsystem name field).
When using the VAST Web UI or CLI options to bulk create volumes or hosts, the number of items to be created cannot exceed 256. When mapping hosts to volumes, up to 256 items can be mapped at a time (see the batching sketch after this list).
The following VAST capabilities are not available with block views:
Access control features (such as ABE, ABAC, WORM)
VAST Audit Log
Replication to a remote peer
Global Access
Remote global snapshot clones
Snapshots on local protected paths are allowed, but replication on non-local protected paths is not supported.
An attempt to remove a volume that has snapshots may cause errors for volume objects of snapshots of that volume, if they exist.
The maximum IO block size is limited to 1MB (4GB for unmap).
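As referenced above, a simple way to stay within the 256-item limit is to split bulk operations into batches. The sketch below shows only generic batching; create_hosts_batch is a hypothetical placeholder for whatever bulk-create mechanism (Web UI, CLI, or REST) you actually use.

```python
# Sketch: stay within the 256-item limit for bulk create/map operations
# by splitting a larger list into batches.
from typing import Iterable, List

MAX_BULK_ITEMS = 256

def batches(items: List[str], size: int = MAX_BULK_ITEMS) -> Iterable[List[str]]:
    for start in range(0, len(items), size):
        yield items[start:start + size]

def create_hosts_batch(names: List[str]) -> None:
    # Hypothetical placeholder: substitute your actual bulk-create call,
    # passing at most 256 items at a time.
    print(f"creating {len(names)} hosts")

host_names = [f"host-{i:04d}" for i in range(1000)]  # placeholder host names
for batch in batches(host_names):
    create_hosts_batch(batch)
```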
Attribute-Based Access Control (ABAC)
ABAC is supported on views controlled with the SMB, S3 Native, and Mixed Last Wins security flavors. ABAC is not supported with the NFS security flavor.
ABAC is not supported with NFSv3.
ABAC tags cannot be set on the cluster’s root directory (/).
Once assigned, you cannot edit or remove the ABAC tags of a view. Assigning new ABAC tags to an existing view or directory (storage path) is not allowed.
After a child view inherits ABAC tags from the parent view, you cannot update or remove the ABAC tags on the child view.
If you create a view for a directory that already exists, ABAC tags from the existing directory are assigned to the newly created view. In this case, there can be a delay between the view creation time and the time when the view's ABAC tags can be displayed.
If a user does not have any ABAC permissions, the user can still mount an NFSv4 export or map an SMB share to a local drive, but is not allowed to perform any operations on the files or directories.
The following restrictions apply when using ABAC-tagged views:
If a tenant has ABAC-tagged views, you cannot change or remove the Active Directory provider configured for the tenant.
When using NFSv4, it is not allowed to create hardlinks in views that have ABAC tags.
When using S3:
ABAC cannot be used with anonymous S3 access. You cannot set ABAC tags for views that have anonymous S3 access enabled.
It is not allowed to set ABAC tags on a view that is a target for S3 bucket logging.
Requests from S3 superusers are handled in the same way as for regular users. This means that an S3 superuser is not granted access if the ABAC access check denies access for this user.
A directory under which an ABAC-tagged view exists cannot be moved to the Trash folder.
Bulk permission updates are not available for ABAC-tagged views.
Lifecycle rules cannot be set for files or directories with ABAC tags.
WORM
You cannot disable WORM for a view once it is enabled.
You can add additional protocols to a WORM-enabled view, but cannot remove a protocol.
You cannot configure a WORM-enabled view for NFS/SMB and S3 together.
Bulk permission updates are not allowed on WORM-enabled views.
Nesting views under a WORM-enabled view is not allowed.
User Impersonation
ORION-216379: When VAST protocol auditing is enabled on a user-impersonated view, only the UID of the original user is included in the log. The user's login name and SID are not included.
Bulk Permission Updates
Only one bulk permission update task per tenant can run at a time.
If a client attempts to set permissions on directories or files being updated via a bulk permission update, the result is unpredictable.
A bulk permission update can run only when the target view (the view exposing the files and directories for which you want to update permissions) is on the same tenant as the template view.
Running a bulk permission update on a view whose security flavor does not match that of the template view may result in incompatible permissions being set or in data becoming inaccessible.
Read-only snapshots and VAST special directories (.vast in S3 buckets, .trash, .snap, .remote) are excluded from bulk permission updates.
Bulk permission updates cannot run on ABAC-tagged views.
Event Publishing
The following limitations apply when using VAST Event Broker:
Producer API:
Messages are limited to 1MB.
In the event record, the key is limited to 126KB and the value is limited to 126KB.
Access to topics by UUID is not supported.
Idempotent producing is not supported.
Automatic creation of topics is not supported.
Consumer API:
No more than 256 consumer groups per view (broker).
The following is not supported:
Consumer group stickiness parameters (such as group.instance.id)
READ UNCOMMITTED isolation level
Cooperative rebalancing
Client rack awareness
Fetch sessions (only full fetches are applied) and delayed fetch parameters
Seek by time
Admin API:
Supported APIs include those to create topics, delete topics, and delete groups, as well as the describeConfigs and alterConfigs APIs that let you get and update topic configurations.
The following Kafka capabilities are not supported:
Over-the-wire compression of messages (Tip: VAST compression of data is supported)
Transactions
Only one virtual IP pool can be associated with a Kafka-enabled view, providing at least one virtual IP per CNode. Once the view has been created, the virtual IP pool cannot be replaced by another one (but it can be modified if needed).
The number of VAST Event Broker views that you can create on a VAST cluster is limited by the maximum number of views supported by the cluster and by the maximum number of virtual IP pools (since each broker view requires a dedicated virtual IP pool). See VAST Cluster Scale Guidelines for details.
The number of event topics that you can create on a VAST cluster is limited by the maximum number of tables per VAST Database (see VAST Cluster Scale Guidelines) and by the overall number of event topic partitions.
A topic can have up to 20,000 partitions. The number of partitions in a topic cannot be changed after the topic has been created. Up to 200,000 partitions are supported per VAST Event Broker view.
Event queries based on the topic partition are not supported.
When listing consumer groups, the response is limited to 256 groups per Kafka-enabled view.
Event publishing and consuming operations, as well as topic management operations, are not subject to VAST Protocol Auditing or Quality of Service (QoS).
VAST Cluster supports Confluent Kafka Python client versions 2.4 - 2.8 (see the client sketch after this list).
Authentication and authorization:
Active Directory/LDAP is not supported. The user must be defined as a VAST local user.
Only one Kafka TLS certificate can be uploaded per VAST cluster.
Data protection:
The Kafka-enabled view needs to be manually created and associated with a virtual IP pool at the destination peer. The pool must have the same name as the one at the source peer.
Fast restore of a protected path containing a Kafka-enabled view is not allowed.
VAST replication of consumer groups is not supported. Consumer group offsets are not replicated.
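Given the constraints above (pre-created topics, no idempotent producing, no static group membership), a minimal confluent-kafka sketch might look as follows. The broker address, port, topic name, and group name are placeholders, and authentication settings (VAST local user credentials, TLS) are omitted as deployment-specific; use a virtual IP from the pool associated with the Kafka-enabled view and the port configured for the broker view.

```python
# Sketch: produce to and consume from a pre-created topic on a Kafka-enabled view.
# Assumptions: confluent-kafka 2.4-2.8, topic already exists (no auto-creation),
# idempotent producing and group.instance.id left unset (not supported).
from confluent_kafka import Consumer, Producer

BOOTSTRAP = "172.16.0.10:9092"  # placeholder virtual IP and broker port

producer = Producer({"bootstrap.servers": BOOTSTRAP})
producer.produce("events", key=b"sensor-1", value=b'{"temp": 21.5}')  # <=1MB message
producer.flush()

consumer = Consumer({
    "bootstrap.servers": BOOTSTRAP,
    "group.id": "demo-group",          # up to 256 consumer groups per view
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["events"])
msg = consumer.poll(timeout=5.0)
if msg is not None and msg.error() is None:
    print(msg.key(), msg.value())
consumer.close()
```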
VAST Catalog
The maximum path length supported by VAST Catalog is 1024 characters.
ORION-197741: VAST Catalog cannot be enabled on a cluster that uses encryption keys managed through EKM, including per-tenant and per-path encryption keys.
VAST Database
View properties are not supported.
Queries to a view must include full table names.
Redefining a view is supported for Spark clients only.
User-defined column names and comments are lost if the schema of the query changes when redefining a view.
The VAST connector for Trino does not support the SECURITY clause in CREATE VIEW statements.
The following requirements and limitations apply when using Row/Column-Level Security:
Row filtering and column masking are supported for Trino query engines only.
The Trino user's identity policy must have the EndUserImpersonation and GetRowColumnSecurity actions allowed.
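As an illustration only, an identity policy fragment allowing both actions might look like the sketch below, expressed here as a Python dict. The surrounding schema and any action namespace or prefix are assumptions; only the two action names come from the requirement above. Consult the VAST identity-policy documentation for the exact syntax.

```python
# Illustrative fragment only: an identity policy statement allowing the two
# actions required for Trino row filtering and column masking.
# The surrounding structure and any action prefix are assumptions.
trino_rls_policy = {
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "EndUserImpersonation",
                "GetRowColumnSecurity",
            ],
        }
    ]
}
```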
The following limitations apply to Sorted Tables:
Once enabled for a table, the sorting cannot be disabled.
Sorting can be performed for tables that contain 512k or more rows. Although you can enable sorting on a table regardless of its size, smaller tables do not get sorted.
Once sorted, tables cannot be unsorted or have their sorted columns changed.
Only one sorting order can be defined on a table. Up to four sorted columns are supported.
Sorted column values cannot be updated to different values.
Sorted tables cannot be replicated. Tables that are replicated cannot be sorted.
Sorting cannot be enabled on tables that have semi-sorted projections. Semi-sorted projections cannot be added to sorted tables.
Nested data types are not supported for sorted columns.
Tables that expose the internal row ID with the vastdb_rowid column cannot be sorted.
When calculating the sorting status, a constant value within the Sorted value range is returned for tables that contain fewer than 5 million rows.
The following limitation applies to vector search:
VAST Cluster 5.4 does not include performance optimization for vector operations.
VAST DataEngine
If VAST DataEngine is enabled on the cluster, replication of the tenant's root directory is not allowed.
Up to 32 engines are supported per VAST cluster.
ORION-288279: VAST DataEngine does not support timestamps with a time zone.
The following rules and limitations apply when running Trino clusters on VAST:
At least three CNodes are required per Trino cluster (one for the coordinator and two or more for the workers).
In case of CNode HA events, the Trino cluster remains accessible and continues to run queries as long as one coordinator CNode and at least one of the worker CNodes are up.
A VAST cluster upgrade (NDU) performed while running a Trino cluster may cause Trino query failures. The Trino cluster will be fully operational after the NDU is complete.
Replication
Data cannot be moved into or out of a path that is protected by either async replication or S3 replication.
Protected paths with async replication cannot be nested.
It is not possible to change which protection policy controls a replication stream.
ORION-208123: Local user accounts are not subject to replication.
The following limitations apply to VAST Database asynchronous replication:
ORION-179909: VAST Database asynchronous replication cannot be used together with Global Access or synchronous replication on the same path.
Only committed database transactions are replicated.
VAST Catalog and audit log are not replicated.
Multi-tenant replication of VAST Databases is not supported.
The following limitations apply to synchronous replication for S3:
Synchronous replication is supported for S3 buckets only.
It is not allowed to configure local snapshots, asynchronous replication or Global Access on the protected path for which synchronous replication is configured.
S3 lifecycle rules are not replicated.
S3 keys are replicated asynchronously.
Synchronously replicated directories are not subject to bulk permission updates.
The following limitation applies to synchronous replication:
A protected path using synchronous replication can only contain S3 buckets. You cannot use synchronous replication for paths that are exposed to any other protocols.
Global Access
NFSv3, SMB and S3 access protocols are supported. NFSv4 is not supported.
If a view is configured with both NFSv4 and SMB, it must be controlled with the NFS security flavor.
VAST Database is not supported.
Lease expiration time can only be set when creating a Global Access protected path. You cannot change the lease expiration time when you modify a Global Access path.
VAST Catalog does not provide information on the cached data on the remote cluster.
ORION-194805: Applications that use SMB2 Byte Range Locks are not supported when the SMB client is connected via a remote Global Access protected path. Examples of such applications are Microsoft Office suite on macOS, Microsoft Hyper-V, AutoDesk 3ds Max and some Adobe Premiere plugins.
ORION-194613: If some files have additional hardlinks, the number of bytes reported as prefetched can be higher than the actual amount prefetched.
The following limitations apply when using Global Access for S3 buckets:
Identity policies must be enabled on the cluster to which they are replicated.
The following VAST capabilities are not supported on destination buckets:
S3 event notifications
S3 Indestructible Object Mode
Lifecycle policies
Write Once Read Many (WORM)
Bucket logging is only supported if both the source and destination buckets are in the same protected path.
Bucket replication between two clusters is only supported when the bucket is associated with the default S3 view policy.
S3 endpoints are not replicated.
VAST on Cloud
ORION-145141: Creating a tenant with EKM encryption is not supported on VoC clusters.
ORION-113036: After you reregister the same VoC cluster in Uplink, information about the previously registered instance of this cluster is no longer available in Uplink.
VAST on Cloud clusters do not support expansion or OS upgrade.
Ongoing changes on a data path that you cloned using a global snapshot clone are not synced with the VAST on Cloud cluster. The data you work with is sourced from the specific snapshot that you clone.
VAST on Cloud clusters on AWS are supported only if the instance type (which is set during the creation procedure) is On-demand and the Resiliency setting is enabled.
In the event of downtime, data is rebuilt while the cluster comes back online. Recovery from any subsequent failure that may occur during the rebuild is not guaranteed.
VAST DataSpace
VAST DataSpace requires that each cluster participating in the inter-connection is running VAST Cluster 5.0 or later.
ORION-135966: The inter-connecting clusters must have connectivity to each other through the clusters' management networks.
Authentication and Authorization
ORION-202335: If the cluster has Active Directory domain auto-discovery enabled, the discovered domains are cached for an extended period. If you modify an existing provider's configuration while auto-discovery is on, VMS may still report the old cached entries. To avoid this, rerun auto-discovery or remove and re-add the provider.
ORION-195524: Following a cluster recovery and while the Active Directory provider is still inaccessible, VAST Cluster can resume IO of provider users if they use NFSv3 or NFSv4.1 with NTLM authentication. IO of provider users accessing through SMB or NFSv4.1 with Kerberos authentication is not resumed during this period.
ORION-187136: Identity policies are replicated as disabled to the destination peer, where they can be enabled manually if needed.
ORION-152475: An access denied error is returned for NFSv3 or NFSv4 requests if they are checked against an identity or bucket policy with an s3:ExistingObjectTag condition statement in it.
The following limitations apply when using Kerberos/NTLM authentication:
ORION-143944: When using Kerberos/NTLM Authentication to authorize SMB users from non-trusting domains, the DOMAIN\username format cannot be used to specify users of remote domains. The username@domain format must be used instead.
ORION-134299: When the tenant is set to use Kerberos/NTLM authentication to authorize SMB users from non-trusting domains, both NFS and SMB must use the native SMB authentication (Kerberos), and not Unix-style UID/GIDs.
ORION-141763: Before enabling or disabling NTLM authentication, you need to leave the cluster's joined Active Directory domain. After NTLM authentication is enabled or disabled, rejoin the domain.
The following limitations apply to Multi-Forest Authentication:
VAST Cluster does not allow adding two different Active Directory configuration records with the same domain name but different settings for multi-forest authentication and/or auto-discovery.
Names of users' domains are not displayed in data flow analytics.
If a trusted domain becomes unavailable and then recovers, clients cannot establish SMB sessions through it immediately upon recovery; they can connect to the VAST cluster via that domain only after a period of time.
If a group exists on an Active Directory domain in a trusted forest and its scope is defined as DomainLocal, VAST Cluster does not retrieve the group when querying Active Directory, so members of the group are denied access regardless of any share-level ACLs that would otherwise allow it.
If TLS is enabled, the SSL certificate has to be a CA-signed certificate that is valid for all of the domain controllers in all trusted forests. If the certificate is not valid for a domain controller, this domain controller is not recognized.
ORION-156168: In a multi-forest environment, after migrating a group account from the forest of the cluster’s joined domain to another forest, information about historical group membership is not kept, so users in the migrated group might not be able to access resources to which they used to have access prior to the migration.
The following limitations apply when using netgroups:
Hosts should have both forward and reverse DNS entries. When VAST Cluster gets the netgroup hostname response from a NIS or LDAP server, it resolves the hostname via DNS.
Netgroups are only used to allow or deny clients' access via NFS. VAST Cluster does not accept netgroup entries in host-based access rules for other access protocols.
The following rules and limitations apply when using STS, IAM roles and OIDC providers:
STS is supported with HTTPS only.
Use of STS requires that the S3 protocol is enabled for the cluster.
Only one OIDC provider is supported per VAST cluster tenant.
When replicating IAM roles:
ORION-278747: IAM role replication may take several minutes to complete if the cluster is under high load or there is a very large number of roles to replicate.
STS access keys are replicated only when the protection path is in SYNC state.
ORION-278217: When processing the assume role operation, the cluster may raise a "Failed to replicate STS temp keys" alert until the synchronous replication stream/protection path reaches the SYNC state.
ORION-285727: In some rare cases when IAM roles are subject to replication, if you delete an IAM role on a replication source, the deleted role gets recreated as a result of the replication. The recreation might take place, for example, if an HA event occurs during role deletion, or if you modify the role on the replication destination at the same time as the role is being deleted on the replication source. If you encounter this issue, delete the recreated role again on both the replication source and destination.
ORION-287563: Only five OIDC keys can be returned for a manual request to get or refresh OIDC keys.
VMS
ORION-212118: If a wrong VMS authentication token is passed, the cluster responds with 403 FORBIDDEN instead of 401 UNAUTHORIZED.
Tenant client metrics can only be collected for NFSv3 and NFSv4.
ORION-187584: An empty realm (which does not contain any objects) cannot be assigned to a role.
ORION-131386: When a parent directory has a very large number of child directories, the total of the children's capacity values displayed on the Capacity page can exceed the capacity value shown for the parent directory.
When setting a VMS login banner text via the VAST CLI, multiple lines are not supported.
The following limitations apply when installing an SSL certificate for VMS:
Only RSA-generated public keys are supported.
Password-protected private keys are not supported.
VAST REST API
In VAST Cluster 5.4, the response to a GET request sent to the /topics/ endpoint contains only the database name and topic name fields. It does not include other fields that were available in VAST Cluster 5.3.
The VAST REST API documentation at <VMS IP>/docs does not include updates for features introduced in VAST Cluster 5.4.
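For reference, a GET request to the /topics/ endpoint might be issued as in the sketch below. The base path (/api), authentication scheme, and certificate handling are assumptions; adjust them to your VMS configuration.

```python
# Sketch: query the /topics/ endpoint of the VAST REST API.
# The base path ("/api"), auth scheme, and certificate verification below are
# assumptions; adjust them to match your VMS configuration.
import requests

VMS_IP = "10.0.0.100"                     # placeholder VMS address
resp = requests.get(
    f"https://{VMS_IP}/api/topics/",
    auth=("admin", "password"),           # placeholder credentials
    verify=False,                         # assumes a self-signed VMS certificate
    timeout=30,
)
resp.raise_for_status()                   # note: bad credentials return 403
for topic in resp.json():
    print(topic)                          # in 5.4, only database and topic names
```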
Platform and Control
The following limitations apply to EBoxes:
ORION-193794: Power cycling of an EBox where the leader was running may result in significant IOPS degradation until the EBox is up again. Contact VAST Support for a workaround.
DBox migration is not available for EBoxes.
The following limitations apply to conversion to write buffer RAID:
Conversion from VAST releases prior to 3.4 is not supported.
This capability is not supported for clusters with TLC drives or for VAST on Cloud clusters.
The cluster must include the following minimum number of DBoxes:
DBox Type      DBox HA enabled      DBox HA disabled
Ceres          15                   4
Mavericks      22                   4
Use of flash write buffers with DBox HA capability enabled requires at least 10 boxes.
The following rules and limitations apply when using Rack-Level Resiliency:
Every DBox must be associated with a single failure domain.
At least seven failure domains must be defined.
The number of DBoxes in the rack cannot exceed the number in any other rack by more than one.
The total available DBox capacity in a rack cannot be more than twice the available capacity in any other rack.
If two of the racks in a rack-level-resilient cluster go down, bringing up only one of them does not return the cluster to normal operation. The cluster restores service only after the second rack comes back up.
Call Home and Support
An attempt to delete a support bundle before it is completely created may cause unexpected behavior.