Overview
Clusters that are already in production can be expanded by adding one or more CBoxes and/or DBoxes. CBoxes add compute capacity, while DBoxes add storage capacity.
Depending on your application workload, your data needs, and the performance you are experiencing, the addition of either or both CBoxes and DBoxes may be beneficial. Always consult with VAST Data to determine the best course of action for you.
After consulting your sales engineer and ordering and receiving the new CBoxes and/or DBoxes, follow the sections below to expand your cluster.
After expanding the cluster, you might find it beneficial to enable DBox HA or perform a rewrite for storage efficiency.
Prerequisites
Unused switch ports available for connecting the new CNodes and DNodes.
Consult your VAST Data sales engineer for help designating switch ports and ensuring that they are configured correctly for CNodes and/or DNodes as required.
Rack space and PSUs available for the new CBoxes and DBoxes.
Required Equipment
New CBox(es) and/or DBox(es) ordered for the expansion, with rail mount kits, power cables, and network cables for connecting each CNode and DNode to the cluster's switches.
Step 1: Prepare Network Information
Management Subnet and Gateway
Verify the subnet and gateway that were supplied during cluster installation for the cluster's CNode and DNode management IPs. You will need to supply the same subnet and gateway for the management IPs of the new CNodes and/or DNodes.
DNode and CNode Management External IPs
Establish unused external management IPs on the management subnet to allocate to each new CNode and/or DNode. To check the external management IPs in use on the other CNodes and DNodes in the cluster, browse to the CNodes and DNodes tabs of the Infrastructure page in the VAST Web UI and look at the IPs in the Management IP column.
CNode and DNode IPMI pool
Establish IP addresses for the CNode and DNode IPMI interfaces. This is one IP address per server, except for CERES DBoxes, which need one address per DTray (half the number of DNodes).
Step 2: Rack Mount and Cable the New CBoxes and DBoxes
Unpack and rack mount each new CBox and DBox in one of the cluster's current racks or an adjacent rack. For help with these tasks, see the VAST Cluster Hardware Install Guide. For specific guidance regarding rack mount positions, consult your Sales Engineer.
Consult your Sales Engineer on the correct cabling scheme and connect the new CNodes and DNodes to correctly configured designated switch ports, using appropriate NIC ports and cables.
Step 3: Add the CBoxes and DBoxes in VMS
In the Infrastructure page of the VAST Web UI, select either the CBoxes tab or the DBoxes tab, and click Create Boxes.
The DBox and CBox Expansion dialog appears.
Click Discover Now.
Hosts are discovered and validated. This process takes some time; you can monitor it by following the host_discovery activity on the Activities page. When the process is complete, the discovered CBoxes and DBoxes appear in the DBox and CBox tabs.
In the DBox and CBox tabs, review the details of the DBoxes and CBoxes that were discovered. Expand each DBox and CBox to see the nodes that were discovered.
Review any errors that were detected during validation of the discovered nodes.
To review any hardware errors that were detected, click Show Errors at the bottom of the screen. Errors may pertain to CPU, memory, disks, NVRAMs, port connectivity, or licensing issues.
The error text refers to the affected node, enabling you to match each error to a node listed above. To identify the position of the affected node, hover over the node to see where it is located in its CBox or DBox.
Resolve any issues before continuing with the expansion. In the event that faulty hardware was received in the shipment, consult VAST Data Support on how to proceed.
The following options are available:
Fix or replace hardware. You can remove a faulty component and either fix and reinsert it or replace it with a new one. After replacing a faulty component, click Discover Now to repeat host discovery and validation, and check the discovered hosts again for errors.
Exclude nodes. If critical validation errors cannot be resolved on site before continuing, you can identify the affected node and exclude it from the expansion. In this case, report the errors to Support and arrange return and replacement of the hardware.
To exclude a node:
Uncheck the node you want to exclude (using the checkbox to the left of the CNode/DNode name).
Verify that the Excluded: field shows the correct count of excluded nodes.
For each DBox that you are adding, review and/or configure the following for each new DNode, in the DBox tab:
Subsystem
Leave the default value (0) unless the expansion requires use of a non-default subsystem. This option is available only if the cluster was configured to support a large subnet at cluster installation.
If you need to change the subsystem, hover to reveal the edit button (
) and set the subsystem for each DNode according to the plan. The subsystem is used in the formation of the IP addresses that are allocated to the nodes for the cluster's internal network. Multiple subsystems expand the number of IP addresses that are available for allocation. The default setting is 0 for all DNodes and CNodes, which configures a single subsystem. A single subsystem enables the allocation of up to 254 IP addresses; three internal IP addresses are allocated to each CNode and to each DNode.
Follow the expansion plan for the cluster and allocate a subsystem to each node as planned.
Valid range: 0-63.
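The capacity arithmetic behind the subsystem setting can be sketched as follows. This is an illustrative calculation, not a VAST tool, and it assumes only the figures stated above: 254 usable internal IPs per subsystem, three internal IPs per CNode or DNode, and subsystem values 0-63.

```python
# Illustrative capacity math for the subsystem scheme described above.
# Assumptions (from the text): each subsystem provides up to 254 usable
# internal IP addresses, and every CNode/DNode consumes 3 of them.
import math

IPS_PER_SUBSYSTEM = 254
IPS_PER_NODE = 3
MAX_SUBSYSTEMS = 64  # valid subsystem values are 0-63

def max_nodes(num_subsystems: int) -> int:
    """Upper bound on nodes addressable with the given number of subsystems."""
    return (num_subsystems * IPS_PER_SUBSYSTEM) // IPS_PER_NODE

def subsystems_needed(num_nodes: int) -> int:
    """Smallest number of subsystems that covers num_nodes, if possible."""
    needed = math.ceil(num_nodes * IPS_PER_NODE / IPS_PER_SUBSYSTEM)
    if needed > MAX_SUBSYSTEMS:
        raise ValueError("node count exceeds the addressable range")
    return needed

print(max_nodes(1))            # 84 -- a single subsystem covers up to 84 nodes
print(subsystems_needed(100))  # 2
```

For most expansions the default single subsystem (value 0) is more than sufficient; multiple subsystems only matter on very large clusters planned for a large subnet at installation.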
Network Type
This is automatically set to match the global DNode network topology setting that appears at the top of the tab. This setting determines the mode configured on the DNode's NICs for connectivity to the internal cluster network.
For each CBox that you are adding, review and/or configure the following for each CNode, in the CBox tab:
Subsystem
Leave the default value (0) unless the expansion requires multiple subsystems. If needed, hover to reveal the edit button (
) and set the subsystem per CNode per the plan. For more information, see the explanation in the previous step.
Network Type
This determines the network modes for the CNode NICs. Verify the setting is correct for each CNode and change if needed.
Note
VMS only makes options available if they are compatible with the DNode network topology setting in the DBox tab, which needs to match the network mode of the internal cluster network. If you are not able to set the CNode network types correctly, verify that DNode network topology is set correctly. The CNode NICs that are connected to the internal cluster network always need to be set to the same mode as the DNode network topology.
Possible values:
IB. Choose this option if the CNode is connected to InfiniBand networks and not to an Ethernet network.
ETH. Choose this option if the CNode is connected to Ethernet networks and not to an InfiniBand network.
IB ext ETH. Choose this option if the CNode is connected to an internal cluster network running on InfiniBand infrastructure and to an external Ethernet network.
ETH ext IB. Choose this option if the CNode is connected to an internal cluster network running on Ethernet infrastructure and to an external InfiniBand network.
External Eth MTU
(field present if applicable).
For dual NIC CNodes where a NIC is directly connected to an external Ethernet network, use this field to set the MTU for that Ethernet network.
External IB MTU
(field present if applicable).
For dual NIC CNodes where a NIC is directly connected to an external InfiniBand data network, use this field to set the MTU for that InfiniBand network.
Default: 2044
Take care to set a supported MTU for the NIC mode:
If External IB type is Connected, the maximum External IB MTU is 65520.
If External IB type is Datagram, the maximum External IB MTU is 4092.
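As a minimal sketch of the limits above, the following pre-check validates a candidate MTU against the per-mode maximum before you enter it in the dialog. The function name and structure are illustrative assumptions, not a VAST API; the limits and the 2044 default come from this section.

```python
# Illustrative pre-check (not a VAST API): validate an External IB MTU
# against the per-mode limits stated above before entering it in the UI.

IB_MTU_LIMITS = {"Connected": 65520, "Datagram": 4092}
DEFAULT_IB_MTU = 2044

def check_ib_mtu(ib_type: str, mtu: int = DEFAULT_IB_MTU) -> int:
    limit = IB_MTU_LIMITS[ib_type]
    if not 0 < mtu <= limit:
        raise ValueError(f"MTU {mtu} is out of range for {ib_type} mode (max {limit})")
    return mtu

print(check_ib_mtu("Connected", 65520))  # 65520 -- accepted
print(check_ib_mtu("Datagram"))          # 2044 -- the default fits either mode
```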
External IB type
(field present if applicable).
Sets the type(s) of external InfiniBand network(s) that the CNode is connected to:
Connected (default)
Datagram
Skip NIC
(field present if applicable).
If the CNode is a dual NIC CNode and has a NIC that is not in use (not connected to any network), use this field to specify which NIC is not connected and should not be included in the network configuration:
Internal. If the NIC to the right of the CNode panel (used for internal connectivity in the default scheme) is not connected.
Note
Not available for Ice Lake CBoxes.
External. If the NIC to the left of the CNode panel (used for external connectivity in the default scheme) is not connected.
Note
This is the only available and valid option for an unconnected NIC on Ice Lake CNodes. In the case of Ice Lake models, when facing the rear panel, the NIC that can be unconnected is the left NIC on the two right CNodes; it's the right NIC on the two left CNodes.
None (default). Leave this option selected if both NICs are connected.
Reverse nics
(field present if applicable).
Note
This setting is not applicable for Ice Lake models of CBox.
Use this setting if the CNode is a dual NIC CNode and the network connectivity scheme for the NICs needs to be reversed from the default.
In the default scheme, the left NIC is dedicated to the external network. The two QSFP28 ports on the left NIC are connected to the client data network switches. The right NIC is dedicated to the internal network and its ports are connected to the cluster switches. If your installation plan follows this default connectivity scheme for a given CNode, do not enable Reverse nics for that CNode.
Enable Reverse nics on a CNode only if this scheme is reversed according to your installation plan. In the reverse scheme, the left NIC QSFP28 ports on each CNode connect to the cluster switches while the right NIC ports connect to the client network switches (external to the cluster).
Click Continue to general settings.
In the General Settings screen, complete the following fields:
MGMT IPV4 CIDR
Provide the IPv4 subnet mask that was provided at cluster installation for the management subnet, in CIDR notation.
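If your installation records list the management subnet with a dotted-decimal mask rather than CIDR notation, Python's standard library can convert it. The subnet below is a made-up example, not a value from your cluster.

```python
# Illustrative conversion: express a dotted-decimal management subnet mask
# in the CIDR notation this field expects. The address is a placeholder.
import ipaddress

net = ipaddress.ip_network("173.30.200.0/255.255.255.0", strict=False)
print(net.with_prefixlen)  # 173.30.200.0/24
print(net.prefixlen)       # 24
```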
MGMT IPV6 CIDR
Reserved for future use. Do not fill this field.
External gateway
The IP address of the default gateway for the management network. Supply the same IP address that was provided at cluster installation.
This is the gateway through which to route traffic to the management external IPs that you will supply in the relevant fields.
To add the address:
Click within the field or choose Expand to display an IP address entry dialog.
Enter the IP address and click +Add.
Click Save Changes to close the dialog.
Empty box
Enable this setting only if you are replacing a DBox by adding a new DBox that is empty of SSDs and NVRAMs, into which you will migrate the SSDs and NVRAMs from a DBox that you are removing. See Replacing a DBox for the full DBox replacement procedure.
CNode management external IP pool
Specify an IP pool from which to assign IPs for the management network to the new CNodes that you are adding to the cluster. The pool should contain enough unused IPs on the management network for all of the new CNodes; the number you need to provide is displayed as a recommendation.
To check which management external IPs are already assigned to existing nodes in the cluster, browse to the CNodes and DNodes tabs of the Infrastructure page in the Web UI and look at the IPs in the Management IP column.
Examples:
173.30.200.100,173.30.200.101,173.30.200.102,173.30.200.103
To add addresses:
Click within the field or choose Expand to display an IP address entry dialog.
Enter an IP address or a range of IP addresses (for example, 173.30.200.100-102) and click +Add.
Repeat as needed for additional IP addresses.
Click Save Changes to close the dialog.
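The entry steps above accept both individual addresses and shorthand ranges such as 173.30.200.100-102. As a hedged sketch (not a VAST utility, and assuming ranges only span the final octet), the following expands such a pool specification so you can double-check its size against the recommendation before submitting:

```python
# Illustrative sketch: expand the shorthand range notation accepted by the
# dialog above (e.g. "173.30.200.100-102") into individual addresses.
# Assumes ranges only vary the last octet. Not a VAST utility.
import ipaddress

def expand_entry(entry: str) -> list[str]:
    if "-" not in entry:
        return [str(ipaddress.ip_address(entry))]  # also validates the address
    base, end = entry.split("-")
    start = ipaddress.ip_address(base)
    first = int(base.rsplit(".", 1)[1])
    return [str(start + i) for i in range(int(end) - first + 1)]

def expand_pool(spec: str) -> list[str]:
    """Expand a comma-separated list of addresses and ranges."""
    return [ip for entry in spec.split(",") for ip in expand_entry(entry)]

print(expand_pool("173.30.200.100-102"))
# ['173.30.200.100', '173.30.200.101', '173.30.200.102']
print(len(expand_pool("173.30.200.104,173.30.200.105")))  # 2
```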
DNode management external IP pool
The IP pool from which to assign IPs for the management network to the new DNodes that you are adding. The pool should contain enough unused IPs on the management network for all of the new DNodes.
For example, if you are adding one Mavericks DBox, it has two DNodes, so you need to supply two IPs that were designated for the management external IP pool in the expansion plan. The recommendation "You should add exactly 2 IPs" is displayed.
Examples:
173.30.200.104,173.30.200.105
To add addresses:
Click within the field or choose Expand to display an IP address entry dialog.
Enter an IP address or a range of IP addresses (for example, 173.30.200.100-102) and click +Add.
Repeat as needed for additional IP addresses.
Click Save Changes to close the dialog.
CNodes IPMI pool
An IP pool from which to assign an IP to the IPMI interface of each new CNode.
Set this IP pool if and only if the cluster uses the standard IPMI network configuration.
If the cluster is deployed with the B2B IPMI networking option, do not configure this IP pool.
The CNodes will be assigned IPMI IPs in the same order as they are assigned management external IPs. The CNode that receives the first IP in the management external IP pool receives the first IP in the CNodes IPMI pool and so on.
Examples:
173.30.200.110,173.30.200.111,173.30.200.112,173.30.200.113
To add addresses:
Click within the field or choose Expand to display an IP address entry dialog.
Enter an IP address or a range of IP addresses (for example, 173.30.200.100-102) and click +Add.
Repeat as needed for additional IP addresses.
Click Save Changes to close the dialog.
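The ordering rule described above can be sketched as a simple positional pairing; the addresses are the examples from this section, not real cluster values.

```python
# Illustrative sketch of the ordering rule above: IPMI IPs are paired with
# management external IPs in pool order. Addresses are the examples from
# this section, not real cluster values.
mgmt_pool = ["173.30.200.100", "173.30.200.101",
             "173.30.200.102", "173.30.200.103"]
ipmi_pool = ["173.30.200.110", "173.30.200.111",
             "173.30.200.112", "173.30.200.113"]

# The CNode that gets the Nth management IP also gets the Nth IPMI IP.
pairing = dict(zip(mgmt_pool, ipmi_pool))
for mgmt, ipmi in pairing.items():
    print(f"CNode with mgmt {mgmt} -> IPMI {ipmi}")
```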
DNodes IPMI pool
An IP pool from which to assign an IP to each IPMI interface.
For Mavericks DBoxes, provide an IP per DNode.
For CERES DBoxes, provide an IP per DTray. This is half of the number of DNodes.
Set this IP pool if and only if the planned deployment uses the standard IPMI network configuration.
If the cluster is deployed with the B2B IPMI networking option, do not configure this IP pool.
Add IPs as described for CNodes IPMI pool.
The DNodes will be assigned IPMI IPs in the same order as they are assigned management external IPs. The DNode that receives the first DNode IP in the management external IP pool receives the first IP in the DNodes IPMI pool and so on. (For CERES DNodes, the IPMI IP is duplicated on both DNodes in each DTray. Otherwise, the order is the same in principle.)
Examples:
173.30.200.114,173.30.200.115
To add addresses:
Click within the field or choose Expand to display an IP address entry dialog.
Enter an IP address or a range of IP addresses (for example, 173.30.200.100-102) and click +Add.
Repeat as needed for additional IP addresses.
Click Save Changes to close the dialog.
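The pool-sizing rule above can be sketched as follows. The model names are those used in this section, and the sketch assumes (as stated earlier) that a CERES DTray holds two DNodes; it is an illustration, not a VAST tool.

```python
# Illustrative sketch: number of DNode IPMI IPs to supply per DBox model,
# per the rule above (one per DNode on Mavericks, one per DTray on CERES).

def dnode_ipmi_ips_needed(model: str, dnode_count: int) -> int:
    if model == "CERES":
        if dnode_count % 2:
            raise ValueError("CERES DNodes come in DTray pairs")
        return dnode_count // 2  # one IPMI IP per DTray (two DNodes)
    return dnode_count           # e.g. Mavericks: one IPMI IP per DNode

print(dnode_ipmi_ips_needed("Mavericks", 2))  # 2
print(dnode_ipmi_ips_needed("CERES", 8))      # 4
```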
Click Advanced settings and set the following optional advanced settings if you need to:
CNodes start index
This setting specifies a custom start value for the range of indexes that are allocated to the new CNodes. The index for each CNode appears in its name and host name and is used as the value of the fourth octet of each of the CNode's internal IP addresses.
By default, the added CNodes are allocated indexes in the 1-99 range that follow sequentially from the range that was already allocated to CNodes on the same subnet in the cluster. For example, if you are adding one CBox which has four CNodes and indexes 1-8 were already allocated to other CNodes on the same subnet in the cluster, the new CNodes will receive indexes 9, 10, 11 and 12.
If you choose to set this value, indexes will be allocated to all the new CNodes as a contiguous range starting from the index you specify. For example, if indexes 1-8 and 13-16 are already in use, while indexes 9-12 were in use on CNodes that were previously removed from the cluster, you can specify 9 in order to use indexes 9-12, provided you are not adding more than four CNodes.
Valid range: 1-99
DNodes start index
This setting specifies a custom start value for the range of indexes that are allocated to the new DNodes. The index for each DNode appears in its name and host name and is used as the value of the fourth octet of each of the DNode's internal IP addresses.
By default, the added DNodes are allocated indexes in the 100-253 range that follow sequentially from the range that was already allocated to DNodes on the same subnet in the cluster. For example, if you are adding one DBox which has two DNodes and indexes 100-104 were already allocated to other DNodes on the same subnet in the cluster, the new DNodes will receive indexes 105 and 106.
If you choose to set this value, indexes will be allocated to all the new DNodes as a contiguous range starting from the index you specify.
Valid range: 100-253
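The start-index rule described above for both CNodes (range 1-99) and DNodes (range 100-253) amounts to finding a contiguous free run of indexes. The following is a hedged sketch of that logic, using the in-use ranges from the examples above; it is not how VMS itself allocates indexes.

```python
# Illustrative sketch of the start-index rule above: find the lowest start
# index from which `count` contiguous indexes are free within [lo, hi].

def find_start_index(in_use: set[int], count: int, lo: int, hi: int) -> int:
    for start in range(lo, hi - count + 2):
        if all(i not in in_use for i in range(start, start + count)):
            return start
    raise ValueError("no contiguous free range available")

# CNode example from the text: 1-8 and 13-16 in use, four new CNodes.
print(find_start_index(set(range(1, 9)) | set(range(13, 17)), 4, 1, 99))  # 9
# DNode example from the text: 100-104 in use, two new DNodes.
print(find_start_index(set(range(100, 105)), 2, 100, 253))                # 105
```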
Hostname Prefix
This prefix, if specified, is used in the template for hostnames for the new CNodes and DNodes. If not specified, the host names follow the format of existing hosts in the cluster.
Click Submit.
If you added only CBoxes, this completes the expansion procedure; you can monitor the expansion's progress by following the add_boxes activity on the Activities page.
When expansion is complete, you can also check the CNodes tab, find the new CNodes and verify that each CNode's state is active.
If your expansion includes DBoxes, continue.
Check the state of the newly added DNodes and any newly added CNodes:
On the DNodes tab, find the new DNodes and verify that each DNode's state is init.
On the CNodes tab, find the new CNodes and verify that each CNode's state is active.
Click .
The DBox Expand dialog appears.
In the NVRAM Section Layout field, select Automatic to have the cluster automatically select the correct settings for the drive layout.
If advised by VAST Support, select a target NVRAM Section Layout from the dropdown list.
Click Expand.
The expansion begins and you can monitor its progress by following the add_boxes activity on the Activities page.