Run the VAST Web UI Cluster Install Utility


Follow this guide to run the Cluster Install utility after racking and cabling the cluster hardware and configuring the switches.

Important

Follow a prepared plan for the specific installation when completing all fields. The input values for your installation should be recorded in the VAST Install Wizard Settings tab of your site survey. The field descriptions below are intended as guidance to help you implement a planned configuration.

Connect to a CNode Tech Port, Copy Package, SSH to Management CNode

  1. Configure the Ethernet interface on your laptop to be on the following subnet: 192.168.2.0/24.
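    On a Linux laptop, for example, the interface can be configured from the command line. This is a sketch only: the interface name eth0 and the host address .10 are assumptions; use your laptop's actual interface name and any free address on the subnet other than 192.168.2.2 (the CNode technician port IP used in the following steps).

```shell
# Sketch: assign a static IP on the technician subnet (Linux; requires root).
# "eth0" and the .10 host address are assumptions -- substitute your own.
ip addr add 192.168.2.10/24 dev eth0
ip link set eth0 up

# Once cabled to the technician port, verify connectivity to the CNode:
ping -c 1 192.168.2.2
```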

  2. Connect your laptop to the technician port on any one of the CNodes. This CNode will become the Management CNode.


    Technician Port Location: Cascade Lake model CNode


    Technician Port Locations on Ice Lake model CBox Rear Panel - Port Position Varies with CNode Position

  3. Run the following command to copy the VAST Cluster package file (e.g. release-<x.x.x-xxxxxx>.vast.tar.gz) to the CNode.

    scp <package file path> vastdata@192.168.2.2:/vast/bundles/
    

    Where <package file path> is the local path to the package file.

    Note

    Make sure there is only one VAST Cluster package file located at /vast/bundles/.

    You'll be prompted for a password when you run the command. The default password is vastdata.
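    To satisfy the note above, you can count the bundles on the CNode before proceeding. A minimal sketch (the helper name is illustrative, not part of the product):

```shell
# Sketch: count VAST Cluster package files in a directory.
# Run against /vast/bundles on the CNode; the result should be exactly 1.
count_bundles() {
  ls "$1"/*.vast.tar.gz 2>/dev/null | wc -l
}

count_bundles /vast/bundles
```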

  4. Log in to the Management CNode via SSH and run the vast_bootstrap.sh script, which is included in VAST OS:

    username@host:~$ ssh vastdata@192.168.2.2
    [vastdata@localhost ~]$ cd /vast/bundles
    [vastdata@localhost bundles]$ vast_bootstrap.sh
    
  5. Confirm the action:

    Are you sure you want to reimage? this will wipe the current system [Y/n] Y
    unpacking release-<x.x.x-xxxxxx>.vast.tar.gz, this may take a while
    

    The script extracts the package files and runs the VAST Management Server (VMS) container.

  6. When the vast_bootstrap.sh script completes, the following message is displayed:

    bootstrap finished, please connect at https://192.168.2.2
    

    While still connected to the technician port, open a web browser on your laptop and browse to https://192.168.2.2.

    The VAST Web UI opens and displays the VAST DATA - End User License Agreement.

  7. Click I Agree.

    The login page appears.

  8. Log in using the default admin user and password:

    • Username: admin

    • Password: 123456

  9. The Cluster Install dialog appears, presenting the Included nodes screen.

    At this stage, the Easy Install utility attempts to discover the CNodes and DNodes that comprise the cluster.

    Tip

    Nodes are discoverable provided the switches were configured before you began running Easy Install.

Included Nodes

The Included nodes screen shows all discovered nodes in the DBox and CBox or EBox tabs. The nodes are grouped by the DBox and CBox or EBox in which they are housed. By default, they are all included in the installation.

Note

If nodes are not discovered, the switches in the cluster require configuration.

Do the following:

  1. In the DBox or EBox tab, in the Internal network topology field, select the network infrastructure mode to configure on all DNode NICs, depending on the type of the cluster's internal network:

    • ETH. Sets the DNode interfaces to Ethernet mode. Supports Ethernet infrastructure for the internal network.

    • IB. Sets the DNode interfaces to InfiniBand mode. Supports InfiniBand infrastructure for the internal network.

  2. In the DBox and CBox tabs, or in the EBox tab, expand each DBox and CBox or EBox and review the details of the nodes to verify that all nodes are discovered. For each node, its IP address, host name, and OS version are displayed.

  3. Review any errors detected during validation of the discovered nodes.

    To review any hardware errors that were detected, click Show Errors at the bottom of the screen. Errors may pertain to CPU, memory, disks, NVRAMs, port connectivity, or licensing issues.

    The error text refers to the affected node, enabling you to match each error to a node listed above. To identify the position of the affected node, hover over a node to see where it is located in its CBox or DBox.

  4. Resolve any issues before continuing with the installation. In the event that faulty hardware was received in the shipment, consult VAST Support on how to proceed.

    The following options are available:

    • Remove the faulty component and either fix and reinsert it, or replace it with a new one.

      After replacing, click Discover Now to repeat host discovery and validation. Then review the discovered hosts and errors again.

    • Exclude nodes.

      In case of critical errors that cannot be resolved on site, you can identify the affected node and exclude it from the installation. Report the errors to VAST Support and arrange return and replacement of hardware. Replacement nodes can be added to the cluster once it is active.

      To exclude a node:

      1. Uncheck the node you want to exclude (using the checkbox to the left of the CNode/DNode name).

      2. Verify that the Excluded: field shows the correct count of excluded nodes.

  5. Continue when no errors remain, or when any remaining errors are determined not to be critical to the installation.

  6. In the DBox tab or the EBox tab (whichever is applicable), review and/or configure the following for each DBox or EBox:

    Rack

    Use this field to define racks as needed and to assign the DBox or EBox to a rack.

    Use discovered identifying details such as the IP address to identify the physical location of the box.

    To assign the box to a rack, select the rack from the dropdown.

    To rename the rack, click the pen icon and edit the name.

    Subsystem

    Leave the default value (0) unless the installation requires multiple subsystems. If needed, hover to reveal the edit button and set the subsystem per DBox per the plan.

    The subsystem is used in the formation of the IP addresses that are allocated to the nodes for the cluster's internal network. Multiple subsystems expand the number of IP addresses that are available for allocation. The default setting is 0 for all DNodes and CNodes, which configures a single subsystem. A single subsystem enables the allocation of up to 254 IP addresses. There are three internal IP addresses allocated to each CNode and to each DNode.

    Follow the installation plan for the cluster and allocate a subsystem to each node as planned.

    Valid range: 0-63.

    Unit

    Enter a meaningful indicator for the position of the box in the rack. For example, U1 for the lowest unit position in the rack.  

    Important

    In addition to entering the rack unit indicator, drag and drop the boxes into the order that reflects the actual order of positioning of the units. This ensures that the boxes are installed in the correct order. To drag a box, hover to the left of the box name until you are able to grab and drag it.


    Network Type

    This is automatically set to match the global Internal network topology setting that appears at the top of the tab. This setting determines the mode configured on the DNode NICs in the DBox/EBox for connectivity to the cluster's internal network.

  7. In the CBox tab (if applicable), review and/or configure the following for each CBox:

    Rack

    Use this field to define racks as needed and to assign the CBox to a rack.

    Use discovered identifying details such as the IP address to identify the physical location of the CBox.

    To assign the CBox to a rack, select the rack from the dropdown.

    To rename the rack, click the pen icon and edit the name.

    Subsystem

    Leave the default value (0) unless the installation requires multiple subsystems. If needed, hover to reveal the edit button and set the subsystem per CBox per the plan.

    The subsystem is used in the formation of the IP addresses that are allocated to the nodes for the cluster's internal network. Multiple subsystems expand the number of IP addresses that are available for allocation. The default setting is 0 for all DNodes and CNodes, which configures a single subsystem. A single subsystem enables the allocation of up to 254 IP addresses. There are three internal IP addresses allocated to each CNode and to each DNode.

    Follow the installation plan for the cluster and allocate a subsystem to each CBox as planned.

    Valid range: 0-63.

    Unit

    Enter a meaningful indicator for the position of the CBox in the rack. For example, U1 for the lowest unit position in the rack.

    Important

    In addition to entering the rack unit indicator, drag and drop the boxes into the order that reflects the actual order of positioning of the units. This ensures that the boxes are installed in the correct order. To drag a box, hover to the left of the box name until you are able to grab and drag it.


    Network Type

    This setting determines the network modes for the CNode NICs. Verify that the setting is correct for each CBox and CNode and change if needed.

    In some installations, CNode configuration is not homogeneous and you need to set different network types for different CNodes.

    Note

    Only the Network Type options that are compatible with the DNode network topology set in the DBox tab are shown. If you are not able to set the CNode network types correctly, verify that the DNode network topology is configured correctly. The CNode NICs that are connected to the cluster's internal network must always be set to the same mode as the DNode network topology.

    If all CNodes on all CBoxes require the same external network mode, select the mode from the Define per all dropdown:

    • IB if all CNode external NIC ports are connected to InfiniBand networks and not to an Ethernet network.  

    • ETH if all CNode external NIC ports are connected to Ethernet networks and not to an InfiniBand network.

    • IB ETH if all CNodes' external NICs have a left* port connected to an external InfiniBand network and a right port connected to an external Ethernet network.

    • ETH IB if all CNodes' external NICs have a left port connected to an external Ethernet network and a right port connected to an external InfiniBand network.

    Otherwise, if there is variation between the CNodes or the CBoxes, choose one of the following for the CBox:

    • IB if all CNode external NIC ports on the CBox are connected to InfiniBand networks and not to an Ethernet network.  

    • ETH if all CNode external NIC ports on the CBox are connected to Ethernet networks and not to an InfiniBand network.

    • IB ETH if all CNodes on the CBox have an external NIC with its left port connected to an external InfiniBand network and its right port connected to an external Ethernet network.  

    • ETH IB if all CNodes on the CBox have an external NIC with its left port connected to an external Ethernet network and its right port connected to an external InfiniBand network.

    • MIX. Choose this if the CBox has CNodes that need to be configured with different external network types. Then set the network type as needed for each CNode:

      • IB if the CNode external NIC is connected to InfiniBand networks and not to an Ethernet network.  

      • ETH if the CNode external NIC is connected to Ethernet networks and not to an InfiniBand network.

      • IB ETH if the CNode has an external NIC with its left port connected to an external InfiniBand network and its right port connected to an external Ethernet network.

      • ETH IB if the CNode has an external NIC with its left port connected to an external Ethernet network and its right port connected to an external InfiniBand network.

    * Left and right are from the perspective of a technician facing the ports.

    Note

    With the ETH IB option, the IB port supports either HDR or EDR cable speed. With the IB ETH setting, the IB port is limited to EDR cable speed.

  8. In the CBox tab (if applicable), review and/or configure the following for each CNode:

    ETH MTU

    (field shown if applicable)

    For dual-NIC CNodes where a NIC is directly connected to an external Ethernet network, use this field to set the MTU for that Ethernet network.  

    IB MTU

    (field shown if applicable)

    For dual-NIC CNodes where a NIC is directly connected to an external InfiniBand data network, use this field to set the MTU for that InfiniBand network.

    Default: 2044

    Take care to set a supported MTU for the NIC mode:

    • If IB type is Connected, the maximum IB MTU is 65520.

    • If IB type is Datagram, the maximum IB MTU is 4092.
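    As a quick sanity check, the per-mode limits above can be encoded in a small helper. This is an illustrative sketch only; the function name is not part of the product.

```shell
# Sketch: check an IB MTU against the per-mode maximums listed above.
check_ib_mtu() {
  mode=$1; mtu=$2
  case $mode in
    Connected) max=65520 ;;
    Datagram)  max=4092 ;;
    *) echo "unknown mode: $mode"; return 1 ;;
  esac
  if [ "$mtu" -le "$max" ]; then
    echo "OK: $mtu within $mode limit ($max)"
  else
    echo "ERROR: $mtu exceeds $mode limit ($max)"
  fi
}

check_ib_mtu Datagram 2044    # the default MTU passes
check_ib_mtu Datagram 65520   # fails: Datagram max is 4092
```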

    IB type

    (field shown if applicable)

    Sets the type(s) of external InfiniBand network(s) that the CNode is connected to: Datagram or Connected (default).

    Set this field to Datagram.

    Note

    The IB Connected mode is deprecated.

    Skip NIC

    (field shown if applicable)

    If CNodes are dual-NIC CNodes and have NICs that are not in use (not connected to any network), use this field to specify which NIC is not connected on each CNode and hence should not be included in the network configuration.

    For each CBox, choose one of the following:

    • Internal. If the NIC to the right of the CNode panel (used for internal connectivity in the default scheme) is not connected on all CNodes in the CBox.  

      Note

      Not available for Ice Lake CBoxes.

    • External. If the NIC to the left of the CNode panel (used for external connectivity in the default scheme) is not connected on all CNodes in the CBox.

      Note

      This is the only available and valid option for an unconnected NIC on Ice Lake CNodes. In the case of Ice Lake models, when facing the rear panel, the NIC that can be unconnected is the left NIC on the two right CNodes; it's the right NIC on the two left CNodes.

    • No (default). Leave this option selected if both NICs are connected on all CNodes in the CBox.

    • MIX. Choose this if the configuration is not uniform across all CNodes in the CBox. Then set the Skip NIC setting as needed for each CNode:

      • Internal. If the NIC to the right of the CNode panel (used for internal connectivity in the default scheme) is not connected.  

        Note

        Not available for Ice Lake CBoxes.

      • External. If the NIC to the left of the CNode panel (used for external connectivity in the default scheme) is not connected.

        Note

        This is the only available and valid option for an unconnected NIC on Ice Lake CNodes. In the case of Ice Lake models, when facing the rear panel, the NIC that can be unconnected is the left NIC on the two right CNodes; it's the right NIC on the two left CNodes.

      • No (default). Leave this option selected if both NICs are connected.

    Reverse nics

    (field shown if applicable)

    Note

    This setting is not applicable for Ice Lake models of CBox.

    Use this setting if the CNode is a dual-NIC CNode and the network connectivity scheme for the NICs needs to be reversed from the default.

    In the default scheme, the left NIC is dedicated to the external network. The two QSFP28 ports on the left NIC are connected to the client data network switches. The right NIC is dedicated to the internal network and its ports are connected to the cluster switches. If your installation plan follows this default connectivity scheme for a given CNode, do not enable Reverse nics for that CNode.

    Enable Reverse nics on a CNode only if this scheme is reversed according to your installation plan. In the reverse scheme, the left NIC QSFP28 ports on each CNode connect to the cluster switches while the right NIC ports connect to the client network switches (external to the cluster).

    For each CBox, choose one of the following:

    • Yes. To enable Reverse nics on all CNodes on the CBox.

    • No. To disable Reverse nics on all CNodes on the CBox.

    • MIX. Choose this if the setting is not uniform across the CNodes in the CBox. Then select Yes or No as appropriate for each CNode.

  9. Click Continue to general settings.

    The General settings screen appears.

General Settings

In the General settings screen, do the following:

  1. Complete the fields in the Required settings pane:

    Important

    Easy Install may fill field values from a previous installation. Use the Clear all settings button to clear all filled values and make sure you don't carry wrong values into the current installation.

    Note

    To reset the pane's required fields to their defaults, click the Restore to defaults button in the top right corner of the pane.

    Cluster name

    A name for the cluster.

    PSNT

    The cluster's PSNT. A PSNT is an asset identifier that links the components of a cluster.

    Management VIP

    A virtual IPv4 or IPv6 address configured on the management interfaces on all CNodes. VAST Management System (VMS) listens on this IP. The IP should be on the management subnet.

    Click within the field or choose Expand to display an IP address entry dialog. Enter the IP address and click +Add. The entry is added to the IPV4 or IPV6 list respectively. Click Save Changes to close the dialog.

    MGMT IPv4

    The IPv4 mask for the management subnet in CIDR notation.

    Complete this field if an IPv4 address is specified in the Management VIP field.

    MGMT IPv6 CIDR

    The IPv6 prefix length for the management subnet.

    Complete this field if an IPv6 address is specified in the Management VIP field.

    External gateway

    The IPv4 or IPv6 address of the default gateway for the management network.

    Click within the field or choose Expand to display an IP address entry dialog. Enter the IP address and click +Add. The entry is added to the IPV4 or IPV6 list respectively. Click Save Changes to close the dialog.

    Management network

    This field specifies the interface to be used for the management network:

    • Outband. Allocates the external management IP to the onboard left or right port, depending on whether B2B is enabled.

    • Inband. Allocates the external management IP to the bond0 interface.

    • Bond. Creates a bond interface (bond1) on the two RJ45 ports, allowing for redundancy. This precludes having a technician interface.

    • Northband. For clusters with dual NIC CNodes where one NIC is directly connected to an external client network, this option allocates the external management IP to the first port on the NIC that was allocated for external usage.

      Note

      This option is not compatible with the standard IPMI configuration. It is compatible with the B2B IPMI configuration. Therefore, if you set Management network to Northband, you must also enable B2B IPMI in the General Settings and fill the B2B template field. Do not fill the CNodes IPMI pool and DNodes IPMI pool fields.

    DNS IPs

    The IPv4 or IPv6 address(es) of any DNS servers that will forward DNS queries to the cluster.

    Click within the field or choose Expand to display an IP address entry dialog. Enter the IP address(es) and click +Add. The entry or entries are added to the IPV4 or IPV6 list respectively. Click Save Changes to close the dialog.

  2. In the Racks & External IP Pools section, for each rack:

    1. Select the rack from the dropdown and assign IP address pools:

      CNode management external IP pool / EBox External IP Pool

      The IP pool from which to assign IPs for the management network to all CNodes or EBoxes in the rack. The pool should contain enough IPs for all CNodes or EBoxes in the rack.

      To add IPs:

      1. Click inside the field. A CNode management external IP pool dialog appears.

        At the top of the dialog, a message appears telling you how many IPs to add.

      2. Add an IPv4 or IPv6 address, a series of IPs separated by commas, or a range of IPs using a hyphen to indicate a range of values for the final octet. For example, 173.30.200.104-105

      3. Click +Add.

      4. Repeat the previous two steps as needed until all IPs in the pool are entered.

      5. Click Save Changes.

        The IPs are added to the field.

      For example, for an installation with one CBox, there are four CNodes, so you need to supply four IPs that were designated for the management external IP pool in the installation plan. The recommendation "You should add exactly 4 IPs" is displayed.
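      The hyphenated range shorthand accepted by the dialog can be expanded with a one-liner to double-check how many IPs a range contributes. This is an illustrative sketch (the function name is hypothetical; seq is assumed to be available):

```shell
# Sketch: expand a final-octet range such as 173.30.200.104-105
# into the individual IPs the pool will contain.
expand_range() {
  prefix=${1%.*}          # everything before the last dot: 173.30.200
  octets=${1##*.}         # the ranged final octet: 104-105
  seq -f "${prefix}.%g" "${octets%-*}" "${octets#*-}"
}

expand_range 173.30.200.104-105
# 173.30.200.104
# 173.30.200.105
```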

      CNodes IPMI pool / EBox External IPMI Pool

      An IP pool from which to assign an IP to the IPMI interface of each CNode or EBox.

      Set this IP pool if and only if the planned deployment uses the standard IPMI network configuration.

      If you are deploying B2B IPMI networking, do not configure this IP pool. Configure a B2B template instead (see step 4).

      Note

      If Management network is set to Northband, you must use the B2B IPMI network configuration. In that case, do not fill this field.

      The CNodes or EBoxes in the rack will be assigned IPMI IPs in the same order as they are assigned management external IPs. The CNode that receives the first IP in the management external IP pool receives the first IP in the CNodes IPMI pool and so on.

      To add IPs:

      1. Click inside the field. A CNodes IPMI Pool dialog appears in the IP pool area.

      2. Add an IPv4 or IPv6 address, a series of IPs separated by commas, or a range of IPs using a hyphen to indicate a range of values for the final octet. For example, 173.30.200.111-113

      3. Click +Add.

      4. Repeat the previous two steps as needed until all IPs in the pool are entered.

      5. Click Save Changes.

        The IPs are added to the field.

      DNode management external IP pool

      The IP pool from which to assign IPs for the management network to all DNodes in the rack. The pool should contain enough IPs for all DNodes in the rack.

      To add IPs:

      1. Click inside the field. A DNode management external IP pool dialog appears.

        At the top of the dialog, a message appears telling you how many IPs to add.

      2. Add an IPv4 or IPv6 address, a series of IPs separated by commas, or a range of IPs using a hyphen to indicate a range of values for the final octet. For example, 173.30.200.104-105

      3. Click + Add.

      4. Repeat the previous two steps as needed until all IPs in the pool are entered.

      5. Click Save Changes.

        The IPs are added to the field.

      For example, for an installation with one Mavericks DBox, there are two DNodes, so you need to supply two IPs that were designated for the management external IP pool in the installation plan. The recommendation "You should add exactly 2 IPs" is displayed.

      DNodes IPMI pool

      An IP pool from which to assign an IP to each IPMI interface.

      For Mavericks DBoxes, provide an IP per DNode.

      For CERES DBoxes, provide an IP per DTray. This is half of the number of DNodes.

      Set this IP pool if and only if the planned deployment uses the standard IPMI network configuration.

      If you are deploying B2B IPMI networking, do not configure this IP pool. Configure a B2B template instead (see step 4).

      Note

      If Management network is set to Northband, you must use the B2B IPMI network configuration. In that case, do not fill this field.

      The DNodes will be assigned IPMI IPs in the same order as they are assigned management external IPs. The DNode that receives the first DNode IP in the management external IP pool receives the first IP in the DNodes IPMI pool and so on. (For CERES DNodes, the IPMI IP is duplicated on both DNodes in each DTray. Otherwise, the order is the same in principle.)

      To add IPs:

      1. Click inside the field. A DNodes IPMI Pool dialog appears in the IP pool area.

      2. Add an IPv4 or IPv6 address, a series of IPs separated by commas, or a range of IPs using a hyphen to indicate a range of values for the final octet. For example, 173.30.200.111-113

      3. Click +Add.

      4. Repeat the previous two steps as needed until all IPs in the pool are entered.

      5. Click Save Changes.

        The IPs are added to the field.

      Examples:

      • 173.30.200.114,173.30.200.115

    2. Click Add to Table.

      A table is shown below the Racks & External IP Pools section displaying the IP pools per rack that you configured and added to the table.

  3. To enable rack resiliency, toggle Enable Rack Level Resiliency. Enable this only after racks and DBoxes have been configured in the preceding steps.

  4. In the lower part of the General Settings screen, click Start with General Settings to display the General Settings pane.

  5. In the General Settings pane, configure the settings as needed for your installation:

    Note

    To reset the pane's required fields to their defaults, click the Restore to defaults button in the top right corner of the pane.

    IPMI default gateway

    The IP of a default gateway for the IPMI interfaces on the CNodes and DNodes, if different from the management network default gateway.    

    Examples:

    • 173.30.200.1

    IPMI netmask

    The subnet mask for the IPMI default gateway.

    DNS search domains

    Enter the domains on your data network on which client hosts may reside. If you provide these, you will be able to specify hosts by name instead of IP when setting up export policies, call home settings, webhook definitions and so on. VAST Cluster will use these domains to look up host IPs on the DNS server.

    Internal Eth MTU

    If the cluster's internal network infrastructure is Ethernet, use this field to set the MTU size for the CNode and DNode internal network interfaces. The MTU should match the MTU configured on the switches.

    Default: 9216

    For installations with dual-NIC CNodes, see also ETH MTU.

    Internal IB MTU

    If the cluster's internal network infrastructure is InfiniBand, use this field to set the MTU size for CNode and DNode internal network interfaces.

    Default: 2044

    Take care to set a supported MTU for the NIC mode:

    • If the IB type is Connected, the maximum Internal IB MTU is 65520.

    • If the IB type is Datagram, the maximum Internal IB MTU is 4092.

    NTP server

    The IP(s) of any NTP server(s) that you want to use for timekeeping. Enter a comma-separated list of IPs.

    For example: 172.30.100.10

    Customer IP

    An IP on the client data network. This IP is used to test connectivity.

    Management inner VIP

    A virtual IP on the internal network that is used for mounting the VMS database.  

    Default: 172.16.4.254

    B2B template

    B2B is a networking configuration option that isolates the IPMI network from the management network. A B2B IP is generated per node as 192.168.3.x, where x is a node index. Optionally, you can set a different B2B template. For example, if you set the B2B template to 10.250.100, the B2B IPs will be 10.250.100.x.

    Default: 192.168.3
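    The derivation described above can be sketched as follows. The node indexes shown are placeholders; the installer assigns the actual x values.

```shell
# Sketch: a B2B IP is the template (first three octets) plus a per-node index.
b2b_ip() {
  template=$1; node_index=$2
  echo "${template}.${node_index}"
}

b2b_ip 192.168.3 1     # default template  -> 192.168.3.1
b2b_ip 10.250.100 1    # custom template   -> 10.250.100.1
```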

    IPoIB Mode

    The mode of the InfiniBand interfaces: Datagram or Connected (default).

    Note

    The IB Connected mode is deprecated.

    Set this to match the InfiniBand type of the internal VAST network, if applicable.

    License

    Enter the license key for the cluster.

    If no license key is entered, a temporary 30-day license is installed.

    Big Catalog Virtual IP Pool

    The virtual IP pool used for VAST Catalog queries. This IP pool is created during installation. By default, it is created with the following IPs: 172.16.254.[240-248].

    Enter an alternate pool of eight IP addresses if you want to override the default.

    Big Catalog CIDR

    The network CIDR for the Big Catalog Virtual IP Pool.

    Similarity

    Enabled by default. Enables similarity-based data reduction on the cluster. This can also be enabled or disabled after installation.

    DBox HA

    Enables NVRAM High Availability (HA) for DBoxes.

    Support for DBox NVRAM HA is limited. Before enabling this feature, review its usage and limitations. It is possible to enable the feature at a later time after installation, although it will cause a drive layout rewrite.

    B2B IPMI

    Enables auto configuring the IPMI ports on the nodes with IP addresses according to the B2B template.

  6. Select the Customized Network Settings tab and configure the settings as needed for your installation:

    Note

    To reset the pane's required fields to their defaults, click the Restore to defaults button in the top right corner of the pane.

    Data Vlan

    For Ethernet configurations, enter the VLAN to isolate the cluster's internal network from the data network. In case of a conflicting use of the default VLAN, enter a different VLAN that is not already used on the client network.

    Default: 69

    Example: 108

    CNode management external VLAN

    Sets a VLAN on the CNode management network external interfaces.

    Subnet

    Sets a custom subnet for the cluster's internal network.

    Default: 172.16

    The Data VLAN isolates the internal network from the external network. If you anticipate IP address collisions with the default subnet, such as in an IB configuration, you can set a custom subnet.

    Each CNode and DNode is allocated three IP addresses for three networks within the subnet. These are generated within the subnet from a combination of:

    • An index called the subsystem (0 by default for both CNodes and DNodes), which is the starting value of the third octet for the first internal network.

    • A subnet mask called the data netmask, which determines the size of the subnet for each of the internal networks for the CNodes and for the DNodes. The default and recommended data netmask is 255.255.192.0.

    • An index per CNode and DNode. These indexes can be configured. By default, they start at 1 for CNodes and 100 for DNodes (see Start index CNode and Start index DNode).

    The IPs for these interfaces are generated on the nodes as subnet.subsystem.x, where x is an index per node.

    For example, if the subnet is 10.200, with the default subsystem, data netmask 255.255.192.0 and start indexes, the following IPs are generated for the internal network interfaces on the first DNode: 10.200.0.100, 10.200.64.100, 10.200.128.100. The following IPs are generated for the internal network interfaces on the first CNode: 10.200.0.1, 10.200.64.1, 10.200.128.1. IPs for the equivalent interfaces for subsequent CNodes and DNodes are incremented from these.

    The subnet mask for the internal network is 255.255.192.0. Each DNode and CNode is configured with three interfaces on the network.
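    The example above can be reproduced with a short sketch. This is illustrative only; the installer performs the actual allocation, and the 64-octet offsets follow from the default data netmask 255.255.192.0.

```shell
# Sketch: generate the three internal-network IPs for one node.
# subnet = first two octets, subsystem = starting third octet,
# index = per-node index (default start: 1 for CNodes, 100 for DNodes).
internal_ips() {
  subnet=$1; subsystem=$2; index=$3
  for offset in 0 64 128; do
    echo "${subnet}.$((subsystem + offset)).${index}"
  done
}

internal_ips 10.200 0 100   # first DNode: 10.200.0.100 10.200.64.100 10.200.128.100
internal_ips 10.200 0 1     # first CNode: 10.200.0.1   10.200.64.1   10.200.128.1
```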

    Selected QoS type

    Sets the QoS flow control type to run on Mellanox interfaces:

    • Global Pause

      Note

      Do not select Global Pause if you are deploying Cumulus switches. Using this option with Cumulus switches can cause significant performance issues.

    • Priority Flow Control

    Docker IP

    Specifies a Docker bridge IP (used internally) in case it needs to be changed from the default due to IP conflicts. Default: 172.17.0.1

    Docker CIDR

    Specifies the Docker bridge IP subnet as a CIDR prefix length in case it needs to be changed from the default due to IP conflicts. Default: 16

    Hostname Prefix

    Specifies a non-default prefix for all hostnames, if preferred.  

    Technician interface CNode

    Changes the IP configured on the technician interface on the CNodes.

    Default: 192.168.2.2

    Start index CNode

    Sets the start value for the indexes appended to internal IPs for the CNodes (see also Subnet). Default: 1

    Technician interface DNode

    Changes the IP configured on the technician interface on the DNodes.

    Start index DNode

    Sets the start value for the indexes appended to internal IPs for the DNodes (see also Subnet).

    Default: 100

  7. If needed, go to the Advanced settings pane to configure advanced settings.

    Caution

    Do not change Advanced settings unless guided to do so by VAST Support.

    Note

    The MD Triplication slider enables or disables the metadata triplication feature, which is supported for EBox clusters only. This feature improves the resiliency of the system by allowing it to lose two metadata sections (on SCM or DNode RAM) at the same time without any loss of metadata or access. Enabling triplication improves resiliency at the cost of reducing the amount of metadata available to the system by 33%. In addition, enabling this feature will increase write latency and reduce the max write IOPS of the system.

    The installation is configured by default to continue when it encounters failed CNodes. You can configure the limit on the number of failed CNodes that can be skipped (as a percentage of the total number of CNodes being installed) before the installation process halts. You can also disable the feature altogether.

    In the Added by You section, add this field:

    failing_components_enabled

    Enable or disable the ability of the installation to continue if it encounters failed CNodes (enabled by default).

    failing_components_cnodes_failure_to_fail_percentage

    The percentage of CNodes that can fail installation and be skipped while the installation continues. If the number of failed CNodes exceeds this percentage, the installation process halts.

    Default is 25 (%), maximum is 50 (%).
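The threshold behavior described by these two fields can be modeled as follows. This is an illustrative sketch of the documented rule, not the installer's actual logic.

```python
def install_halts(failed_cnodes: int, total_cnodes: int,
                  failure_percentage: int = 25) -> bool:
    """Return True if the installation should halt because the share of
    failed CNodes exceeds the configured percentage (default 25, max 50)."""
    return (failed_cnodes / total_cnodes) * 100 > failure_percentage

print(install_halts(2, 10))  # 20% failed, within the default 25% -> False
print(install_halts(3, 10))  # 30% failed, exceeds 25% -> True
```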

  8. Select the Call Home tab and make settings as needed for your installation.

    The Call Home feature sends non-sensitive data from your VAST Cluster to the VAST support server to enable us to provide proactive analysis and fast response on critical issues. The collected data is sent by HTTPS to a VAST Data AWS S3 bucket that we maintain for this purpose.

    Note

    To reset the pane's required fields to their defaults, click the Restore to defaults button in the top right corner of the pane.

    1. Complete the  General Setup fields:

      Max upload concurrency

      The maximum number of parts of a file to upload simultaneously to the AWS S3 bucket.

      Valid values: A positive integer, 1 or higher

      Default: 1

      Max upload bandwidth

      The maximum upload bandwidth (in bytes) for uploading call home bundles to the AWS S3 bucket.

    2. Complete the  Intervals Setup fields:

      Log frequency (minutes)

      The frequency with which system logs and traces are sent to the support server. If disabled, the data is not sent.

      Luna max frequency (hours)

      The interval (in hours) to send Luna results to the support server. If disabled, no Luna data is sent.

      Enabled

      When enabled, VAST Cluster sends alerts to the VAST support server.

    3. Under Proxy Setup, enter proxy server details if you would like the data to be sent through your own proxy server.

    4. Complete the Misc fields:

      Verify SSL

      Enables SSL verification. Disable if, for example, you are sending the call home data through a proxy server that does not have an SSL certificate recognized by VAST Cluster. VAST Cluster recognizes SSL certificates from a large range of widely recognized certificate authorities (CAs). VAST Cluster may not recognize an SSL certificate signed by your own in-house CA.

      Support channel

      Enables VAST Data Support to run remote call home bundle collection commands on the cluster.

      Obfuscated

      Obfuscates data in call home bundles, metrics and heartbeats. The following types of information are replaced with a non-reversible hash: file and directory names, IP addresses, host names, user names, passwords, MAC addresses.

      Upload via VMS

      Uploads a non-aggregated call home bundle via VMS. Otherwise, the upload is done from each node.

      Note

      For aggregated call home bundles, the upload is always via VMS.

      Enabling this option requires a proxy to be set up.

      Compress Method

      Sets the compression method used to compress call home bundles:

      • zstd (default)

      • gzip

      S3 Access Key

      Sets the S3 access key to upload bundles to an S3 bucket.

      S3 Secret Key

      Sets the S3 secret key to upload bundles to an S3 bucket.

      S3 Bucket Name

      Specifies the name of the S3 bucket to which to upload bundles.

      S3 Bucket Subdirectory

      Specifies a subdirectory in the bucket.

  9. If you want to enable encryption of data at rest on the cluster, select the Encryption tab, enable the Encryption slider and then:

    Note

    VAST Cluster encryption of data at rest is FIPS 140-3 capable.

    1. Select which type of encryption to enable on the cluster:

      • INTERNAL. Encryption with keys managed internally. This is the only encryption type that can be enabled or disabled after installation.

      • CIPHER_TRUST_KMIP. Encryption with keys managed externally on Thales Group CipherTrust Data Security Platform.

      • FORTANIX_KMIP. Encryption with keys managed externally on Fortanix DSM.

      • HASHICORP_KMIP. Encryption with keys managed externally on HashiCorp Vault Enterprise.

      • ENTRUST_KMIP. Encryption with keys managed externally on Entrust KeyControl.

      • AKEYLESS_KMIP. Encryption with keys managed externally on the Akeyless platform. 

      If you selected INTERNAL, skip the remaining substeps and continue with the next step. If you selected an external key manager (EKM) option, continue here.

    2. Add up to five EKM servers: For each server, enter the server IP address in the Server Address field and the port in the Port field, and then click Add To Table.  

    3. Enter the SSL certificate for the connection to the EKM servers:

      • Click the +Add Certificate link under EKM Certificate and then paste the content of the certificate file into the text field provided. Include the "-----BEGIN CERTIFICATE-----" and "-----END CERTIFICATE-----" lines from the certificate file content.

      • Click the +Add Key link under EKM Private Key and paste the content of the private key file of the SSL certificate into the text field provided.

      • Optionally, enter a CA certificate: Click the +Add Certificate link under EKM CA certificate and paste the content of the CA certificate file.
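A common mistake when pasting certificate content is to omit the BEGIN/END lines. A minimal sanity check you could run on the text before pasting might look like the following (illustrative only; the helper name is hypothetical):

```python
def looks_like_pem_certificate(text: str) -> bool:
    """Check that the pasted text includes the PEM header and footer lines."""
    text = text.strip()
    return (text.startswith("-----BEGIN CERTIFICATE-----")
            and text.endswith("-----END CERTIFICATE-----"))

# Placeholder body for illustration; a real certificate has a full base64 body.
pasted = "-----BEGIN CERTIFICATE-----\nMIIB...base64 body...\n-----END CERTIFICATE-----"
print(looks_like_pem_certificate(pasted))  # True
```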

    4. For Thales Group CipherTrust Data Security Platform only:

      • In the Auth Domain field, you can specify a subdomain of the EKM root domain (optional).

        The subdomain needs to be created on the Thales CipherTrust manager.

        When the deployment is complete, encryption groups created on the cluster will have their encryption-at-rest keys generated within the specified subdomain.

      • In the Use proxy field, you can specify a proxy server through which to connect to the EKM server (optional). Select the check box and then provide the host and port of the proxy server in the fields provided. Specify Host in the format https://proxy-address.

  10. Review the settings you made and ensure that they match the installation plan.

  11. When you're ready to proceed, click Submit.

    The installation begins.

  12. Select Activities from the left navigation menu to navigate to the Activities page and monitor the task progress.

    The task name is cluster_deploy. As the task progresses, the activities page shows details of each step.

  13. If the installation fails at any stage, the cluster_deploy task state shows Failed and the details of the failure are displayed.

    If the process is configured to continue on CNode failures (see Step 7, Advanced settings; this is the default behavior), the installation continues until completion or until the limit of CNode failures is reached.

    The Cluster Install dialog also reopens to enable you to choose whether to resume or restart the installation.

    Note

    If the dialog is closed, you can reopen it by clicking the reopen_installer.png button at the top of the page.

    If this occurs:

    1. Check the logged details of the failure and work to resolve the cause of the failure.

    2. When you have resolved the cause of failure, choose one of the following:

      • To resume the installation from the last successfully completed step before the failure, click Resume Installation.

        The installation resumes and you can follow the task in the Activities page again. The Activities page shows you which steps are being skipped as the task progresses.

      • If any changes to any of the installation parameters are needed, or if you needed to replace any component devices, you must start over. To wipe the completed steps and start the installation from the beginning, click Start Over.

    When the installation is done, the cluster_deploy task state changes to COMPLETED and the cluster status displayed at the top left of the page changes to Online: cluster_online.png.

    Check the log details for failed CNodes that may have been skipped during the installation (the installation continues if the limit of skipped CNodes is not reached). These failed CNodes are not included in the cluster. When the issues causing these failures are resolved, you can add these units to the cluster while it is running, in the VAST Web UI, using Add CNode on the CNodes tab of the Infrastructure page.

    You can now disconnect from the technician port. The cluster's VAST Web UI is accessible by browsing to the configured management VIP from any network location with access to it.

    To begin managing the cluster, browse to the management VIP and log in using the default user name admin and password 123456.