Multi-Tenancy Considerations

This guide is intended for AI-focused Cloud Service Providers (CSPs) integrating VAST Data clusters into a secure, scalable, and multi-tenant architecture. It covers all core networking constructs, isolation mechanisms, and access policies necessary to onboard and manage multiple tenants using NFS, SMB, and S3 protocols.

1. Core Concepts

Tenants

Tenants represent isolated administrative and data domains within the VAST cluster.

Each tenant has:

  • A separate directory hierarchy in the Element Store.

  • An identity provider (local, AD, LDAP, or NIS) to manage users, groups, and credentials.

  • Unique encryption keys, quotas, and quality-of-service (QoS) settings.

  • Firewall network rules (source/destination IPs) to isolate network access.

VIPs and VIP Pools

  • Virtual IPs (VIPs) are the addresses used by VAST Compute Nodes (CNodes) to provide data services such as NFS, SMB, S3, and Kafka.

  • A VIP Pool is a group of VIPs associated with a specific tenant or shared among all tenants.

  • VIP Pools can be tagged with VLAN IDs to create Layer-2 network segmentation.

  • Horizontal scalability is achieved by distributing VIPs across multiple CNodes.

  • High availability is achieved by migrating virtual IPs during failover.

Client IP Ranges

  • Used for tenant resolution when VIP Pools are shared across multiple tenants.

  • Each tenant defines which source IPs are allowed to access its data.

  • Source IP matching routes connections to the correct tenant and denies access to clients outside the defined ranges.
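
Where client IP ranges must be managed at scale, they can be scripted against the VMS REST API. A minimal sketch using the vastpy SDK, assuming the tenant object exposes a client_ip_ranges field of [start, end] pairs (the field name, address, and credentials are placeholders to verify against your cluster's API schema):

from vastpy import VASTClient

# Connect to VMS (placeholder credentials and address).
client = VASTClient(user='admin', password='...', address='vms.example.com')

# Assumed field: client_ip_ranges takes [start, end] source-IP pairs.
client.tenants[25].patch(client_ip_ranges=[['10.11.0.10', '10.11.0.255']])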

VAST DNS & VMS

  • VAST DNS runs on a dedicated VIP and resolves internal hostnames for protocol traffic.

  • VAST Management System (VMS) is the control plane for administrators to manage tenants, views, access, quotas, and system configuration.

    • It supports GUI and REST API access.

    • Each tenant can be restricted to see only its own resources.

    • Access can be limited to specific source IPs.
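
Because VMS exposes a REST API, tenant and network administration can be automated. A minimal connection sketch with the vastpy SDK (address and credentials are placeholders):

from vastpy import VASTClient

# Endpoints map to attributes: client.tenants, client.vippools, etc.
client = VASTClient(user='admin', password='...', address='vms.example.com')

# List the tenants visible to this administrative account.
for tenant in client.tenants.get(fields='id,name'):
    print(tenant)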

2. Protocol-Specific Tenant Identification

Secure multi-tenancy in VAST requires identifying the correct tenant for each request without relying on the client’s user identity. Each protocol uses a different mechanism:

  • NFS / SMB: Identification is based on client IP address and VIP pool. Both must match the tenant’s configuration for access to be allowed. NFS/SMB views can have the same name across tenants (e.g., multiple tenants can have a /projects share).

  • S3: Access and secret keys are assigned per user within a tenant, even for local users. For example, user1 in TenantA has a different set of S3 keys than user1 in TenantB. When an S3 request is received, the cluster maps the provided access key to a specific tenant and user.
    For anonymous or public access, the globally unique bucket name is used for identification. This behavior aligns with AWS S3 semantics, ensuring both authenticated and unauthenticated requests are directed to the correct tenant.

Note: VIP Pool and Client IP filtering can also be used for S3 tenant identification, but are optional and not required for access. For more information: Configuring S3 Multi-Tenancy
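
Conceptually, because access keys are globally unique across the cluster, a single lookup identifies both the tenant and the user. An illustrative sketch of this mapping (a conceptual model, not VAST's actual implementation):

# Illustrative only: the same username in two tenants holds distinct keys.
KEY_TABLE = {
    'AKIA-EXAMPLE-A': ('TenantA', 'user1'),
    'AKIA-EXAMPLE-B': ('TenantB', 'user1'),
}

def resolve_s3_principal(access_key):
    if access_key not in KEY_TABLE:
        raise PermissionError('unknown access key: request denied')
    return KEY_TABLE[access_key]  # -> (tenant, user)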

3. Network Design for Multi-Tenancy - File (NFS/SMB)

VAST resolves tenant access using either VIP Pool assignment or Client IP-based resolution.

In the first option, a VIP Pool is explicitly assigned to a single tenant, so any client connecting through that VIP is automatically routed to the associated tenant, regardless of source IP. This model is ideal for dedicated network environments or VLAN-isolated deployments.

In the second option, the VIP Pool is shared among multiple tenants, and VAST determines the correct tenant by comparing the client’s source IP against each tenant’s configured IP ranges, enabling flexible access control even in overlapping or NAT-ed networks.

Option 1: VIP Pool Assigned to a Specific Tenant

This option provides deterministic and strict isolation by binding a VIP Pool to a specific tenant. Any connection through that VIP is automatically associated with the tenant — no IP-based resolution is performed.

  • The VIP Pool is explicitly assigned to a single tenant.

  • Client IP Ranges are ignored.

  • Any client connecting to the VIP is mapped directly to the associated tenant.

  • Simplifies routing and tenant isolation in environments with dedicated network segments.

Notes:

  1. Creating a dedicated VIP Pool per tenant increases configuration overhead and consumes VIP Pool resources, so this model is a good fit for environments supporting roughly 10-20 tenants. Any client that can reach a tenant’s VIP Pool may see all of that tenant’s views, potentially exposing data unless ACLs, network segmentation, or hypervisor-based isolation are enforced.

  2. Scale limit consideration: the cluster-wide VIP limit is 2,000 across all pools. Assuming 4 IPs per VIP Pool, this allows 500 pools in total. For larger-scale deployments, use Option 2 (Client IP-based resolution) so that many tenants can share larger pools without per-tenant pool bloat.

The diagram illustrates a VAST cluster with two tenants, AlphaIndustries and BetaCorp, each allocated its own VIP Pool for SMB and NFS connections. Clients connect to these VIP Pools via IP addresses within the specified ranges.

VAST Cluster VIP pool configurations

View Management:

  • Views associated with the tenant are visible to any client connecting via the tenant’s VIP.

  • View names must be unique within each tenant.

  • Multiple tenants can use the same view names (e.g., /projects).

  • Access control is still enforced via:

    • Share-level ACLs

    • Filesystem permissions

    • User identity mapping

Use Cases:

  • Dedicated per-tenant VIP Pools.

  • Environments using VLANs or VRFs for network isolation.

  • Tenants with non-overlapping network access.

  • Preferred when IP resolution is not desired or needed.
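
In practice, Option 1 amounts to creating a VIP Pool with tenant_id set, exactly as in the vastpy-cli example in section 5. The same call through the vastpy SDK (names, IPs, and credentials are placeholders):

from vastpy import VASTClient

client = VASTClient(user='admin', password='...', address='vms.example.com')

# Binding the pool to tenant_id=25 makes every connection through these
# VIPs resolve to that tenant; client IP ranges are then ignored.
client.vippools.post(
    name='company-z-pool',
    ip_ranges=[['10.11.0.10', '10.11.0.13']],
    subnet_cidr=24,
    tenant_id=25,
)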

Option 2: Client IP-Based Tenant Resolution

This model enables multi-tenant access through a shared VIP Pool. The system uses the client’s source IP to determine which tenant’s views to expose. It’s particularly useful in environments where multiple tenants share the same endpoint, and NAT is used to create IP uniqueness.

  • The VIP Pool is not assigned to any tenant (marked as global or left blank).

  • Each tenant defines Client IP Ranges.

  • When a client connects:

    • The system checks the client’s source IP against the defined ranges for all tenants.

    • The first matching tenant is selected.

    • If no match is found → access is denied.
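
The first-match behavior can be pictured with a short, illustrative sketch using [start, end] ranges like those configured per tenant (a conceptual model, not VAST's implementation):

import ipaddress

# Example per-tenant allowed source ranges as [start, end] pairs.
TENANT_RANGES = {
    'AlphaIndustries': [('10.1.0.10', '10.1.0.250')],
    'BetaCorp': [('10.2.0.10', '10.2.0.250')],
}

def resolve_tenant(source_ip):
    addr = ipaddress.ip_address(source_ip)
    for tenant, ranges in TENANT_RANGES.items():
        for start, end in ranges:
            if ipaddress.ip_address(start) <= addr <= ipaddress.ip_address(end):
                return tenant  # the first matching tenant is selected
    return None  # no match -> access is denied

assert resolve_tenant('10.1.0.42') == 'AlphaIndustries'
assert resolve_tenant('192.168.1.5') is None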

Notes:

  1. Client IPs can be difficult to manage in certain environments, such as a virtualized environment where multiple tenant VMs sit behind a single hypervisor IP, making accurate tenant mapping more complex (See section below on VPC support).

  2. Despite this, VAST recommends the Client IP approach for environments that require better scalability, isolation, and access control, especially in large-scale or dynamic multi-tenant deployments.

  3. For NFS multi-tenancy with a shared VIP Pool, it is recommended to allocate at least 4 client VIPs per CNode (for example, a 16-CNode cluster sharing one pool should have at least 64 VIPs). There is no downside to using a larger pool.

The VAST cluster configuration includes separate SMB and NFS shares for tenants AlphaIndustries and BetaCorp, each with designated client IP ranges and access via the shared VIP Pool. Client3 is denied access because its IP address does not match any range in either tenant's configuration.

VIP Pool Client IP config

View Management:

  • Views are accessible only to clients whose IP matches the tenant’s defined range.

  • View names must be unique within each tenant.

  • Tenants may define overlapping view paths (e.g., /projects).

  • Access control enforced via:

    • Share-level ACLs

    • Filesystem permissions

Use Cases:

  • Shared VIP Pools across many tenants.

  • Overlapping CIDRs with NAT-ed access.

  • Large-scale CSP environments with dynamic client allocation.

  • Multi-tenant VPC or peered cloud network topologies.

Client IP Filters in VPCs

  • Tenants frequently use overlapping CIDRs (e.g., multiple tenants with 10.0.0.0/24).

  • In these cases, use NAT at the VPC boundary to map tenant traffic to unique, routable source IPs.

  • These NAT-translated IPs are then used in VAST's Client IP Ranges to allow/deny access per tenant.

The diagram illustrates a VAST cluster architecture with clients accessing tenant-specific home directories via SMB and NFS, where each tenant (AlphaIndustries and BetaCorp) has its own range of client IP addresses. Clients connect through NAT to access shared resources within their respective tenant environments.

VIP Pool Client IP Filters

VIP & VLAN Strategy

  • Create separate VIP Pools per tenant where possible.

  • Apply VLAN tags to each VIP Pool to isolate traffic at Layer 2 and support strict separation across different CSP tenants.

  • Assign VIP Pools to specific tenants. Shared VIP Pools may be used with additional source IP filters.

  • Each VIP in the pool is advertised through DNS for high availability and automatic load balancing.

4. DNS Configuration and Load Balancing

The VAST cluster provides a dedicated DNS VIP to resolve internal service endpoints such as data access paths and management interfaces. This DNS service is critical for multi-node deployments, where clients need to resolve hostnames to one or more active VIPs associated with CNodes for protocols such as NFS, SMB, or S3.

When configuring DNS delegation, a subdomain (e.g., vast.example.com) is delegated to the DNS VIP of the VAST cluster. The VAST DNS service responds to name resolution requests (e.g., nfs.vast.example.com, s3.vast.example.com) with a list of IPs from the assigned VIP pool. This enables round-robin load balancing across CNodes, distributing client connections efficiently and enabling high availability.

The underlying IP-to-hostname mappings can be managed entirely within the VAST system, eliminating the need for external DNS updates when changes occur. The DNS TTL (time-to-live) can also be configured to balance resolution caching behavior with responsiveness to changes.
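
Once delegation is in place, the rotation is observable from any client. A small Python check (nfs.vast.example.com is the example name from the delegation above):

import socket

# Each lookup returns the VIP list behind the delegated name; the
# order rotates across queries, spreading clients over CNodes.
hostname, aliases, vips = socket.gethostbyname_ex('nfs.vast.example.com')
print(vips)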

To configure DNS in VMS:

  1. Navigate to Network Access → DNS in the left-hand menu.

  2. Click Create DNS Service.

  3. Provide:

    • DNS Service IP (e.g., 10.7.199.60)

    • DNS Service Gateway

    • DNS Service Suffix (e.g., filer-kfs2.vastdata.com)

    • DNS Service Subnet CIDR (e.g., 24)

    • Optional: Set VLAN, L3, and BGP Configuration if needed.

  4. Enable the service to activate DNS-based resolution within the cluster.

Note: DNS enables hostname-based access to NFS, S3, and SMB services and supports load balancing by resolving to the appropriate virtual IPs.
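
The same service can also be created programmatically. A hedged vastpy sketch, assuming a dns endpoint whose fields mirror the VMS form above (the endpoint and field names such as vip, domain_suffix, vip_gateway, and vip_subnet_cidr are assumptions to confirm against your cluster's API schema):

from vastpy import VASTClient

client = VASTClient(user='admin', password='...', address='vms.example.com')

# Assumed endpoint and fields mirroring the "Create DNS Service" form.
client.dns.post(
    name='vast-dns',
    vip='10.7.199.60',                       # DNS Service IP
    domain_suffix='filer-kfs2.vastdata.com',
    vip_gateway='10.7.199.1',                # placeholder gateway
    vip_subnet_cidr=24,
)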

The screenshot displays the "Update DNS" configuration page in VAST, where users can modify details such as the DNS Service Name, IP address, and various related configurations like VLANs and Port Types to ensure proper network access and service availability.

Screenshot of updating DNS

5. Configuration Example with vastpy-cli

Get the Tenant ID

vastpy-cli get tenants fields=id,name
id |name          
---+--------------+
24 |company-y     
23 |company-x     
11 |syncrep       
25 |company-z     
.. |...    

Create a VIP Pool with an IP range, assigned to Tenant company-z

vastpy-cli post vippools tenant_id=25 name=company-z-pool ip_ranges='[["10.11.0.10","10.11.0.13"]]' subnet_cidr=24

Show VIP Pools with fields: id, name, start IP, and end IP

vastpy-cli get vippools fields=id,name,start_ip,end_ip
id |name                |start_ip       |end_ip         
---+--------------------+---------------+---------------+
45 |company-z-pool      |10.11.0.10     |10.11.0.13     
.. |...                 |...            |...     

Delete VIP Pool

vastpy-cli delete vippools/44

Create a VLAN-tagged VIP Pool for company-z

vastpy-cli post vippools \
  name=company-z-vlan-pool \
  vlan=120 \
  gateway=10.120.0.1 \
  subnet_cidr=24 \
  interface_group_name=eth-group-a \
  ip_ranges='[["10.120.0.100","10.120.0.110"]]' \
  status=ACTIVE \
  tenant_id=25
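
For scripting beyond one-off commands, the same flow maps directly onto the vastpy SDK (endpoint names match the CLI; the item-access delete pattern follows vastpy's documented usage):

from vastpy import VASTClient

client = VASTClient(user='admin', password='...', address='vms.example.com')

# Get tenant IDs and names.
tenants = client.tenants.get(fields='id,name')

# Create a VIP Pool bound to tenant 25.
pool = client.vippools.post(
    name='company-z-pool',
    ip_ranges=[['10.11.0.10', '10.11.0.13']],
    subnet_cidr=24,
    tenant_id=25,
)

# List pools, then delete one by id.
client.vippools.get(fields='id,name,start_ip,end_ip')
client.vippools[pool['id']].delete()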