Overview
This procedure describes how to migrate a live VAST cluster from an L2-based external network (VIP Pool with ARP/broadcast-based VIP mobility) to an L3-based external network (VIP Pool with BGP-advertised /32 host routes). The migration is performed in a rolling fashion to minimize downtime: a stable L2 subset is maintained throughout, and clients are migrated to the new L3 VIP Pool before the L2 VIP Pool is decommissioned.
📝 Note: API, GUI, and CLI
VAST configuration examples in this document use the Python API client to support automation and scripting workflows. All operations can be performed equally well through the VAST Management UI (recommended for one-off migrations) or the `vastcli` (VCLI) command-line tool. The API examples are provided as a reference and to make the steps scriptable end to end.
⚠️ Important: IP Address Change Required
L3 VIP Pools use a different IP range from the L2 VIP Pool. It is not possible to retain the same VIP addresses. All clients must unmount from the L2 VIPs and remount using the new L3 VIP addresses. Plan client migration windows accordingly.
⚠️ Performance Note
During the migration, CNodes are temporarily removed from the active L2 VIP Pool and reassigned in batches to the L3 VIP Pool. If the cluster is operating near maximum compute capacity, expect a measurable performance degradation during the transition window. Schedule the migration during a low-utilization period whenever possible.
Prerequisites
- VAST cluster with dual-NIC CNodes (one NIC for internal fabric, one NIC dedicated to external client traffic).
- Administrative access to the VAST Management UI and/or CLI (`vastcli`).
- Administrative access to the leaf switches serving the external client network.
- A new dedicated IP subnet allocated for the L3 VIP Pool (e.g., a `/24` or larger block from which `/32` host routes will be advertised).
- BGP AS numbers agreed upon for both the VAST cluster and the leaf switches. The examples in this document use VAST AS `99`, Leaf-1 AS `65101`, and Leaf-2 AS `65102`. Note that leaf switches may use the same ASN or different ASNs, depending on your network design; both are valid. Using `any_external_asn: true` in the VAST BGP config (Step 4) means VAST will accept peers regardless of their ASN, so no VAST-side change is required either way.
- IPv6 must be enabled on CNode external interfaces and switch-facing ports. BGP unnumbered uses IPv6 link-local addresses (`fe80::`) for peer discovery and session establishment. No IPv4 address is required on the physical interfaces; IPv4 VIP routes are carried via RFC 5549.
- Maintenance window communicated to all active NFS/S3/SMB clients.
Reference Topology
Each CNode has two dedicated external ports (in this example: enp65s0f0 and enp65s0f1), each connected to a separate leaf switch. The two leaves peer with a spine layer for inter-leaf routing. Both leaves must be configured for every port change.
Clients
│
┌──────┴──────┐
▼ ▼
┌─────────────────────┐
│ Spine │ AS 65200
└────────┬────────────┘
┌──────┴──────┐
▼ ▼
┌────────┐ ┌────────┐
│ Leaf-1 │ │ Leaf-2 │ ← BGP peers (L3 external), ASN 65101 / 65102
│ 65101 │ │ 65102 │
└───┬────┘ └────┬───┘
│ enp65s0f0 │ enp65s0f1 (External NICs, one per leaf)
└──────┬──────┘
┌──────────┴──────────────┐
│ CN-1 CN-2 CN-3 CN-4 │ ← VAST CNodes, AS 99
└─────────────────────────┘
⚠️ All switch-side steps must be run on both Leaf-1 and Leaf-2. Changing only one leaf will result in asymmetric forwarding and BGP session failures on the unconfigured leaf.
💡 The spine sits between the two leaves. VIP routes from Leaf-1 are visible on Leaf-2 via the spine and vice versa; this is normal and expected in the BGP table.
Key Concepts
| Concept | L2 VIP Pool | L3 VIP Pool |
|---|---|---|
| VIP mobility mechanism | Gratuitous ARP / L2 broadcast | BGP /32 host route withdrawal + re-advertisement |
| IP range requirement | Shared subnet with clients | Dedicated range, routed via BGP |
| Switch configuration | Access/trunk VLAN | BGP peering on CNode-facing interfaces |
| VLAN required | Yes | No (N/A) |
| Gateway IP required | Yes | No (N/A) |
| Client impact on failover | Transparent (same IP) | Transparent (same IP, route reconverges) |
| Migration IP change | N/A | New IPs required |
Step 1 - Plan and Split CNodes into Batches
Before making any changes, document the current state and divide CNodes into logical batches.
1.1 - Document the current VIP Pool
From the VAST UI navigate to Cluster → Network → VIP Pools and record:
- VIP Pool name (e.g., `vippool-l2`)
- Current CNode membership (all CNodes, typically)
- Active VIP range (e.g., `10.100.0.10 – 10.100.0.50`)
- Associated VLAN / subnet
From the API:
client.vippools.get()
client.cnodes.get()
1.2 - Define the batching strategy
Split CNodes into two groups:
- Batch A (L2-stable): CNodes that will remain in the L2 VIP Pool throughout the migration and provide continuity for clients not yet remounted. Recommended: keep at least 50% of CNodes here, or a number sufficient to serve active client load.
- Batch B (L3-first): CNodes to be moved to the L3 VIP Pool first, in rolling sub-batches.
💡 Recommended split for clusters fully utilizing compute: keep at least 2/3 of CNodes in Batch A during the early phases, and move CNodes in sub-batches of 1–2 at a time to minimize performance impact.
Example: 6-node cluster
| CNode | Initial Assignment |
|---|---|
| CNode-1 | Batch A (retain in L2) |
| CNode-2 | Batch A (retain in L2) |
| CNode-3 | Batch A (retain in L2) |
| CNode-4 | Batch B (move to L3, wave 1) |
| CNode-5 | Batch B (move to L3, wave 2) |
| CNode-6 | Batch B (move to L3, wave 2) |
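The batching plan above can be sketched as a small helper. This is a hypothetical planning aid, not part of the VAST SDK; `stable_fraction` and `wave_size` are assumed parameters you would tune to your cluster's headroom:

```python
import math
from typing import Dict, List


def plan_batches(cnode_ids: List[int], stable_fraction: float = 0.5,
                 wave_size: int = 2) -> Dict[str, list]:
    """Split CNodes into an L2-stable Batch A and rolling L3 waves (Batch B).

    stable_fraction controls how many CNodes stay in the L2 pool
    (rounded up, so at least half remain for client continuity).
    """
    keep = max(1, math.ceil(len(cnode_ids) * stable_fraction))
    batch_a = cnode_ids[:keep]
    batch_b = cnode_ids[keep:]
    # Split Batch B into rolling waves of at most `wave_size` CNodes.
    waves = [batch_b[i:i + wave_size] for i in range(0, len(batch_b), wave_size)]
    return {"batch_a": batch_a, "waves": waves}
```

For the 6-node example, `plan_batches([1, 2, 3, 4, 5, 6])` keeps CNodes 1–3 in Batch A and yields the waves `[[4, 5], [6]]`; pass `wave_size=1` for the most conservative one-at-a-time rollout.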
Step 2 - Restrict the Existing L2 VIP Pool to Batch A CNodes
By default, VAST assigns all CNodes to the VIP Pool. You must explicitly limit it to only Batch A before the L3 pool is created.
2.1 - Edit the L2 VIP Pool
From the VAST UI:
- Navigate to Cluster → Network → VIP Pools.
- Click Edit on the current L2 VIP Pool.
- Under CNodes, deselect all Batch B CNodes. Confirm only Batch A CNodes remain.
- Save.
From the CLI (Python API):
# Restrict vippool to CNode IDs 1, 2, 3 (Batch A)
client.vippools[<l2_pool_id>].patch(cnode_ids=[1, 2, 3])
To find your VIP Pool ID and CNode IDs first:
# List all VIP pools
client.vippools.get()
# List all CNodes
client.cnodes.get()
2.2 - Validate VIPs have redistributed to Batch A
# Check which CNodes are active in the L2 pool
pool = client.vippools[<l2_pool_id>].get()
print(pool['active_cnode_ids'])
print(pool['cnodes'])
Confirm all VIPs are hosted on CNode-1, CNode-2, or CNode-3. Allow up to 60 seconds for VIP redistribution to complete. Clients whose VIPs were relocated should experience a brief failover and reconnect without losing their sessions.
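The 60-second redistribution wait can be scripted as a generic poll. This is a sketch: `get_pool` is any callable returning the pool dict (e.g., `lambda: client.vippools[<l2_pool_id>].get()`), and it assumes the `active_cnode_ids` field shown above:

```python
import time
from typing import Callable, Iterable


def wait_for_redistribution(get_pool: Callable[[], dict],
                            expected_cnodes: Iterable[int],
                            timeout_s: int = 60, interval_s: int = 5) -> bool:
    """Poll until every CNode actively hosting a VIP is in the expected set."""
    expected = set(expected_cnodes)
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        active = set(get_pool()["active_cnode_ids"])
        # Success: at least one active host, and none outside Batch A.
        if active and active <= expected:
            return True
        time.sleep(interval_s)
    return False
```

A `False` return after the timeout means some VIPs are still hosted outside Batch A and the pool membership change should be re-checked before proceeding.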
Step 3 - Configure BGP on the Arista Switches
Before creating the L3 VIP Pool in VAST, prepare the switch-side BGP configuration so that peers are ready when CNodes begin advertising routes.
BGP unnumbered is used here. Sessions are established using IPv6 link-local addresses (fe80::); no IPv4 address is required on the physical CNode-facing interfaces. IPv4 VIP routes (/32) are carried over these sessions via RFC 5549 (IPv4 routes with an IPv6 next-hop).
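When auditing link-local peering, it can help to predict which `fe80::` address a NIC will use. Physical interfaces commonly derive it from the MAC via modified EUI-64, though some hosts use stable-privacy addressing (RFC 7217) instead, so treat this sketch as a planning aid rather than a guarantee; `eui64_link_local` is a hypothetical helper:

```python
import ipaddress


def eui64_link_local(mac: str) -> str:
    """Derive the modified EUI-64 link-local address for a MAC address.

    Modified EUI-64: flip the universal/local bit of the first octet and
    insert ff:fe between the two MAC halves.
    """
    b = [int(x, 16) for x in mac.split(":")]
    b[0] ^= 0x02  # flip the U/L bit
    words = [(b[0] << 8) | b[1], (b[2] << 8) | 0xFF, 0xFE00 | b[3], (b[4] << 8) | b[5]]
    suffix = ":".join(f"{w:x}" for w in words)
    return str(ipaddress.IPv6Address("fe80::" + suffix))
```

For example, a NIC with MAC `00:11:22:33:44:55` would yield `fe80::211:22ff:fe33:4455`, which should match the neighbor shown in `show bgp summary` on the connected leaf if EUI-64 addressing is in effect.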
⚠️ Run all switch-side commands on both Leaf-1 and Leaf-2. Each CNode has one external port connected to each leaf. Configuring only one leaf will leave half the BGP sessions unestablished.
3.1 - Enable IPv6 link-local on each CNode-facing interface
Each switch port connecting to a CNode external NIC must be converted to a routed port with IPv6 enabled. The full interface configuration also includes MTU, speed, and QoS settings required for VAST RoCEv2 traffic.
Run the following on both Leaf-1 and Leaf-2, substituting the correct port for each CNode (see the per-CNode port reference table in the Rollback Plan section):
interface Ethernet<X>/<Y>
description <CNode-N> External
mtu 9214
speed 200g-4
no switchport
ipv6 enable
service-profile VAST-ROCEv2
qos trust dscp
!
uc-tx-queue 3
random-detect ecn count
Example - enabling L3 on CN-1 (Leaf-1: Et3/5, Leaf-2: Et3/5):
! On Leaf-1
interface Ethernet3/5
description CN-1 External
mtu 9214
speed 200g-4
no switchport
ipv6 enable
service-profile VAST-ROCEv2
qos trust dscp
!
uc-tx-queue 3
random-detect ecn count
! Repeat the identical config on Leaf-2 Ethernet3/5
`ipv6 enable` does not require a global IPv6 address; it only activates the link-local address, which is sufficient for BGP unnumbered.
3.2 - Configure the BGP peer group and interface neighbors
Instead of specifying peer IPs, each CNode-facing interface is registered directly as a BGP neighbor. The switch automatically discovers the peer’s link-local address via IPv6 ND.
Run on both Leaf-1 and Leaf-2 (substituting the correct ASN per leaf):
! Leaf-1 (ASN 65101)
router bgp 65101
router-id <leaf-1-loopback-ip>
no bgp default ipv4-unicast
maximum-paths 8
neighbor VAST-CNODES peer group
neighbor VAST-CNODES remote-as 99
neighbor VAST-CNODES description VAST CNodes L3 Unnumbered
neighbor VAST-CNODES bfd
neighbor VAST-CNODES maximum-routes 1000
! Add one entry per CNode-facing interface
neighbor interface Ethernet3/5 peer-group VAST-CNODES ! CN-1
neighbor interface Ethernet3/1 peer-group VAST-CNODES ! CN-2
neighbor interface Ethernet4/5 peer-group VAST-CNODES ! CN-3
neighbor interface Ethernet4/1 peer-group VAST-CNODES ! CN-4
address-family ipv4
neighbor VAST-CNODES activate
neighbor VAST-CNODES next-hop address-family ipv6 ! RFC 5549
! Leaf-2 (ASN 65102) - same structure, different ASN and port mapping
router bgp 65102
router-id <leaf-2-loopback-ip>
no bgp default ipv4-unicast
maximum-paths 8
neighbor VAST-CNODES peer group
neighbor VAST-CNODES remote-as 99
neighbor VAST-CNODES description VAST CNodes L3 Unnumbered
neighbor VAST-CNODES bfd
neighbor VAST-CNODES maximum-routes 1000
neighbor interface Ethernet3/5 peer-group VAST-CNODES ! CN-1
neighbor interface Ethernet3/1 peer-group VAST-CNODES ! CN-2
neighbor interface Ethernet4/1 peer-group VAST-CNODES ! CN-3
neighbor interface Ethernet4/5 peer-group VAST-CNODES ! CN-4
address-family ipv4
neighbor VAST-CNODES activate
neighbor VAST-CNODES next-hop address-family ipv6 ! RFC 5549
VAST’s cluster BGP ASN is `99`. All CNode-facing interfaces should be pre-configured here; BGP will only establish sessions on interfaces where a CNode is actively participating in the L3 VIP Pool.
3.3 - Save configuration
After any switch change, save on both leaves:
write memory
3.4 - Advertise L3 VIP routes toward clients
Once VAST CNodes advertise their /32 VIP routes via BGP, the switch installs them in the RIB. If clients are on a different segment, ensure these routes are propagated toward the client-facing uplinks:
router bgp 65101 ! (and 65102 on Leaf-2)
address-family ipv4
network 10.200.0.0/24 ! Optional: aggregate the L3 VIP range toward client uplinks
If clients are directly connected to the same switch, the /32 routes received from VAST will be forwarded automatically without additional configuration.
Step 4 - Create the BGP Configuration Object in VAST
4.1 - Create the BGP configuration
With BGP unnumbered, VAST does not peer with a specific switch IP; it discovers the peer via the IPv6 link-local address on the interface. The VAST BGP configuration object, therefore, does not require explicit Peer IPs; instead, it relies on interface-level link-local peering initiated from both sides.
From the VAST UI:
- Navigate to Cluster → Network → BGP.
- Click Add BGP Configuration.
- Fill in the following fields:
| Field | Example Value | Notes |
|---|---|---|
| Name | `bgp-l3-external` | Descriptive name |
| Self ASN | `99` | VAST cluster ASN |
| Any External ASN | `true` | Accept peers of any ASN; correct for dual-leaf where Leaf-1 and Leaf-2 have different ASNs |
| Peer IPs | (leave empty) | Not required for unnumbered BGP; peering is via link-local |
- Save.
💡 If your VAST version requires at least one Peer IP entry, enter the switch’s link-local address for one of the CNode-facing interfaces (e.g., `fe80::...%eth1`). In most deployments, the field can be left empty for unnumbered mode.
From the API:
client.bgpconfigs.post(
name="bgp-l3-external",
self_asn=99,
any_external_asn=True # Accepts peers of any ASN, no need to specify Leaf-1/Leaf-2 ASNs explicitly
)
`any_external_asn: true` means VAST will accept BGP sessions from any peer ASN, which is the correct setting for a dual-leaf deployment where each leaf has a different ASN. You do not need to enumerate peer ASNs separately.
Step 5 - Create the L3 VIP Pool
5.1 - Create the new VIP Pool
From the VAST UI:
- Navigate to Cluster → Network → VIP Pools.
- Click Add VIP Pool.
- Configure the following:
| Field | Value | Notes |
|---|---|---|
| Name | `vippool-l3` | |
| Subnet | `10.200.0.0/24` | Must be a different range from the L2 VIP Pool |
| CIDR per VIP | `/32` | Required for L3; each VIP is advertised as a host route |
| VIP count | (as required) | Two VIPs per CNode |
| BGP Configuration | `bgp-l3-external` | Select the BGP config created in Step 4 |
| VLAN | (leave empty / 0) | Not required for L3; routing is handled by BGP, no VLAN needed |
| Gateway IP | (leave empty) | Not required for L3; there is no default gateway for a routed VIP pool |
| CNodes | Batch B CNodes only | e.g., CNode-4 only for wave 1 |
| Enable | Yes | |
- Save.
⚠️ CIDR must be **/32** for L3 VIP Pools. Using any other prefix length will result in incorrect route advertisements.
💡 Unlike L2 VIP Pools, an L3 VIP Pool requires no VLAN ID and no gateway IP. These fields should be left empty. Setting a VLAN or gateway on an L3 pool is unnecessary and may cause unexpected behavior.
From the API:
# First, get the BGP config ID created in Step 4
bgp_configs = client.bgpconfigs.get()
# Identify the ID of 'bgp-l3-external' from the output
# Create the L3 VIP Pool, start with Batch B CNodes only (e.g., CNode ID 4)
# Note: no vlan or gw_ip needed for L3
client.vippools.post(
name="vippool-l3",
start_ip="10.200.0.1",
end_ip="10.200.0.4", # One IP per CNode is typical
subnet_cidr=32, # Must be /32 for L3
enable_l3=True,
bgp_config_id=<bgp_config_id>,
cnode_ids=[4], # Batch B first wave only
enabled=True
)
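Sizing the VIP range can be sanity-checked with the standard `ipaddress` module. This is a sketch, not part of the VAST SDK; the VIPs-per-CNode ratio is a planning choice (the field table above suggests two per CNode, while the API example uses one):

```python
import ipaddress
from typing import Tuple


def vip_range(start_ip: str, cnode_count: int,
              vips_per_cnode: int = 1) -> Tuple[str, str]:
    """Return (start_ip, end_ip) for a contiguous block of VIPs.

    The block holds cnode_count * vips_per_cnode consecutive addresses.
    """
    start = ipaddress.IPv4Address(start_ip)
    end = start + (cnode_count * vips_per_cnode - 1)
    return str(start), str(end)
```

`vip_range("10.200.0.1", 4)` reproduces the `start_ip`/`end_ip` pair used in the example above; switching to `vips_per_cnode=2` widens the block accordingly.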
Step 6 - Validate BGP Sessions
Allow 30–240 seconds for BGP to establish, then verify from both sides.
6.1 - Validate from the switches
Run on both Leaf-1 and Leaf-2. With BGP unnumbered, neighbors are identified by their link-local address and the interface on which they were discovered.
show bgp summary
show ip bgp 10.200.0.0/24
show ip route bgp
Expected: all CNode peers in the L3 pool should show Established. /32 host routes for each active VIP should appear in both the BGP table and the RIB.
Example output (show bgp summary on Leaf-1):
Neighbor V AS MsgRcvd MsgSent InQ OutQ Up/Down State/PfxRcd
fe80::a:b:c:1%Et3/5 4 99 42 38 0 0 00:02:11 2
fe80::a:b:c:2%Et3/1 4 99 39 36 0 0 00:01:58 2
To identify which interface a link-local neighbor maps to:
show ip bgp neighbors fe80::a:b:c:1%Et3/5 | grep "BGP neighbor is"
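For automated checks, the summary output can be scraped into structured data. The sketch below targets the example format shown above; real EOS column layout may vary across versions, and the eAPI/JSON output is more robust where available:

```python
import re
from typing import Dict


def parse_bgp_summary(text: str) -> Dict[str, dict]:
    """Parse 'show bgp summary' neighbor lines into {neighbor: details}.

    A numeric final column (PfxRcd) means the session is Established;
    otherwise the column carries a state string such as Idle or Active.
    """
    line_re = re.compile(
        r"^(?P<neighbor>\S+%\S+)\s+\d+\s+(?P<asn>\d+).*\s(?P<state_pfx>\S+)$"
    )
    peers: Dict[str, dict] = {}
    for line in text.splitlines():
        m = line_re.match(line.strip())
        if m:  # header and non-neighbor lines have no '%iface' token
            peers[m.group("neighbor")] = {
                "asn": int(m.group("asn")),
                "established": m.group("state_pfx").isdigit(),
            }
    return peers
```

Feeding the Leaf-1 example output through this parser flags any peer whose final column is a state string rather than a prefix count, which is the quickest way to spot an unestablished CNode session in a script.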
6.2 - Validate from the CNode (server-side)
VAST uses FRR (Free Range Routing) on each CNode to manage BGP. FRR runs inside the VAST core container, not in the host OS. To access it, first attach to the container from the CNode host, then run vtysh:
# Step 1, SSH to the CNode host
ssh vastdata@<cnode-hostname>
# Step 2, Attach to the VAST core container
/vast/data/attachdocker.sh
# Step 3, Once inside the container, run vtysh
sudo vtysh -c "show bgp summary"
💡 `attachdocker.sh` is auto-generated by the VAST core container and uses `docker exec` to attach an interactive shell. It is located at `/vast/data/attachdocker.sh` on every CNode.
Both external interfaces (enp65s0f0 toward Leaf-1, enp65s0f1 toward Leaf-2) should show Established with the peer ASNs of your respective leaf switches.
To verify the CNode is correctly advertising its VIPs, check the advertised routes per interface:
# Run these inside the container (after attachdocker.sh)
sudo vtysh -c "show bgp ipv4 neighbors enp65s0f0 advertised-routes"
sudo vtysh -c "show bgp ipv4 neighbors enp65s0f1 advertised-routes"
The locally originated /32 VIPs will appear with Next Hop `::` and origin `?` (incomplete). Both are normal and expected:
- The `::` next-hop confirms RFC 5549 is working correctly: IPv4 routes are being carried over the IPv6 link-local session.
- Origin `?` (incomplete) indicates the VIPs are redistributed routes rather than statically declared network statements, which is how VAST injects them into FRR.
Routes learned from the fabric (e.g., client subnets) will show an AS path through the leaves and spine; this is also expected.
6.3 - Validate from VAST
client.clusters.bgp_table.get()
This returns per-port BGP state for every CNode. Each CNode will appear twice: once for `p1` (enp65s0f0, toward Leaf-1) and once for `p2` (enp65s0f1, toward Leaf-2). Both ports should show ESTABLISHED for any CNode added to the L3 VIP Pool.
Example output for a healthy CNode:
{'cnode_id': 4, 'cnode_name': 'cnode-X-4', 'state': 'ESTABLISHED', 'port': 'p1', ...}
{'cnode_id': 4, 'cnode_name': 'cnode-X-4', 'state': 'ESTABLISHED', 'port': 'p2', ...}
CNodes still in the L2 pool (not yet migrated) will show IDLE; this is expected:
{'cnode_id': 1, 'cnode_name': 'cnode-X-1', 'state': 'IDLE', 'port': 'p1', ...}
{'cnode_id': 1, 'cnode_name': 'cnode-X-1', 'state': 'IDLE', 'port': 'p2', ...}
⚠️ If a CNode that has been added to the L3 pool shows IDLE on either port, BGP has not established on that NIC. Check the corresponding switch port on the relevant leaf (`p1` → Leaf-1, `p2` → Leaf-2) before proceeding.
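This per-port check is easy to automate. The sketch below is a pure helper (hypothetical, not part of the SDK) that scans the `bgp_table` output for L3-pool CNodes with any non-ESTABLISHED port, using the field names from the example records above:

```python
from typing import Dict, List


def bgp_problems(bgp_table: List[dict],
                 l3_cnode_ids: List[int]) -> Dict[int, List[str]]:
    """Map each L3-pool CNode ID to its non-ESTABLISHED ports.

    An empty dict means every port on every L3-pool CNode is healthy.
    CNodes outside l3_cnode_ids (still on L2, expected IDLE) are ignored.
    """
    problems: Dict[int, List[str]] = {}
    for entry in bgp_table:
        if entry["cnode_id"] in l3_cnode_ids and entry["state"] != "ESTABLISHED":
            problems.setdefault(entry["cnode_id"], []).append(entry["port"])
    return problems
```

Typical use would be `bgp_problems(client.clusters.bgp_table.get(), [4])` after wave 1; any returned `p1`/`p2` entry points at the leaf whose switch port needs investigation.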
6.4 - Validate VIP advertisement
# Check active CNodes and VIP state on the L3 pool
pool = client.vippools[<l3_pool_id>].get()
print(pool['active_cnode_ids'])
print(pool['sync'])
`sync` should be `SYNCED` and `active_cnode_ids` should reflect the CNodes added to the pool.
Step 7 - Validate Client Reachability to the L3 VIP Pool
Before migrating any clients, confirm that the new VIPs are reachable from the client network.
7.1 - Ping test from a client host
# From a Linux client
ping -c 4 10.200.0.1 # Replace with an active L3 VIP
7.2 - Routing verification from client
ip route get 10.200.0.1
traceroute 10.200.0.1
Confirm the path traverses the Arista switch and reaches the correct CNode.
7.3 - NFS/S3/SMB mount test (non-production)
Before migrating production clients, perform a test mount:
# NFS
mount -t nfs 10.200.0.1:/path/to/export /mnt/test-l3
# Confirm connectivity
df -h /mnt/test-l3
ls /mnt/test-l3
umount /mnt/test-l3
Step 8 - Migrate Clients to the L3 VIP Pool
⚠️ Client remount is required. Sessions mounted on L2 VIPs cannot be transparently migrated to L3 VIPs. Clients must unmount from the old VIPs and remount using the new L3 VIP addresses. Coordinate with application owners.
8.1 - Notify clients and application teams
Provide the new VIP range to all teams that will be remounting. Suggested communication:
- Old mount target: `10.100.0.x` (L2 VIP Pool)
- New mount target: `10.200.0.x` (L3 VIP Pool)
- Remount window: [agreed time]
8.2 - Remount clients (NFS example)
# Graceful unmount from L2 VIP
umount /mnt/vastdata
# Remount on L3 VIP
mount -t nfs -o vers=3,rsize=1048576,wsize=1048576 10.200.0.1:/export /mnt/vastdata
# Verify
df -h /mnt/vastdata
For persistent mounts (/etc/fstab), update the VIP address accordingly:
# Old
10.100.0.10:/export /mnt/vastdata nfs vers=3,rsize=1048576,wsize=1048576 0 0
# New
10.200.0.1:/export /mnt/vastdata nfs vers=3,rsize=1048576,wsize=1048576 0 0
Step 9 - Validate Stability Before Proceeding
Before expanding the L3 pool by moving additional CNodes, validate that the current state is stable.
9.1 - Checklist
- All migrated clients have been successfully mounted on L3 VIPs.
- No NFS/S3/SMB errors in client-side logs.
- BGP sessions remain Established (re-run `show bgp summary` on the Arista switches).
- VIP distribution is balanced across the L3-pool CNodes.
- Remaining L2 clients are still stable on the L2 VIP Pool.
9.2 - Monitor for a soak period
Allow at least 15–30 minutes of stable operation under real workload before proceeding to move additional CNodes. For high-sensitivity environments, extend this to several hours.
Step 10 - Iteratively Move Remaining CNodes from L2 to L3
Once stability is confirmed, move the remaining Batch A CNodes to the L3 VIP Pool in rolling waves.
10.1 - Remove a CNode from the L2 VIP Pool
From the VAST UI:
- Edit `vippool-l2` → remove the next CNode from the membership.
- Save and wait for VIP redistribution (up to 60 seconds).
- Confirm L2 VIPs have migrated to the remaining CNodes.
From the API:
# Remove CNode 3 from L2 pool (retain CNodes 1 and 2)
client.vippools[<l2_pool_id>].patch(cnode_ids=[1, 2])
10.2 - Add the CNode to the L3 VIP Pool
From the VAST UI:
- Edit `vippool-l3` → add the CNode just removed from the L2 pool.
- Save.
From the API:
# Add CNode 3 to the L3 pool (alongside CNode 4 already there)
client.vippools[<l3_pool_id>].patch(cnode_ids=[3, 4])
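Note that in the examples above, `patch(cnode_ids=...)` takes the full desired membership list, not a delta. A small pure helper (hypothetical, not part of the SDK) makes the per-wave bookkeeping explicit and guards against typos:

```python
from typing import List, Tuple


def move_cnode(l2_ids: List[int], l3_ids: List[int],
               cnode_id: int) -> Tuple[List[int], List[int]]:
    """Return the new full membership lists after moving one CNode L2 -> L3.

    Pass each returned list as `cnode_ids` in the corresponding patch() call.
    """
    if cnode_id not in l2_ids:
        raise ValueError(f"CNode {cnode_id} is not in the L2 pool")
    new_l2 = [c for c in l2_ids if c != cnode_id]
    new_l3 = sorted(l3_ids + [cnode_id])
    return new_l2, new_l3
```

`move_cnode([1, 2, 3], [4], 3)` returns `([1, 2], [3, 4])`, matching the two patch calls shown in 10.1 and 10.2.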
10.3 - Validate BGP and VIPs after each wave
# VAST side: check L3 pool state
pool = client.vippools[<l3_pool_id>].get()
print(pool['active_cnode_ids'])
print(pool['sync'])
# Check BGP state per CNode per port
client.clusters.bgp_table.get()
! Arista side: run on both Leaf-1 and Leaf-2
show bgp summary
show ip route bgp | grep 10.200.0
# CNode side: SSH to the newly added CNode, then attach to the container
/vast/data/attachdocker.sh
sudo vtysh -c "show bgp summary"
Also, run `write memory` on both leaves after each wave:
write memory
10.4 - Repeat until only 1–2 CNodes remain in the L2 VIP Pool
Continue moving CNodes one at a time (or in small groups if performance headroom allows), validating stability after each wave.
💡 Keep at least 1 CNode in the L2 VIP Pool until all clients have been confirmed as remounted on L3.
Step 11 - Final Client Migration and L2 VIP Pool Deletion
Once all clients have been remounted on the L3 VIP Pool and the L2 VIP Pool has no active client sessions:
11.1 - Confirm no active mounts on L2 VIPs
In the VAST UI, navigate to Monitoring → Active Clients, and filter by the L2 VIP range to confirm there are no active sessions.
From the API:
# Check for any clients still connected via L2 VIP range
client.clients.get()
Filter the output for any IP matching the L2 VIP range (e.g., <L2-VIP-SUBNET>).
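The filtering can be done with the standard `ipaddress` module. This sketch assumes each client record exposes its VIP in a field named `ip` (adjust `ip_field` to match your API output):

```python
import ipaddress
from typing import List


def clients_on_subnet(clients: List[dict], subnet: str,
                      ip_field: str = "ip") -> List[dict]:
    """Return client records whose VIP falls inside the given subnet."""
    net = ipaddress.ip_network(subnet)
    return [c for c in clients if ipaddress.ip_address(c[ip_field]) in net]
```

Before deleting the L2 pool, `clients_on_subnet(client.clients.get(), "10.100.0.0/24")` (substituting your actual L2 VIP subnet) should return an empty list.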
11.2 - Delete the L2 VIP Pool
From the VAST UI:
- Navigate to Cluster → Network → VIP Pools.
- Click Delete on `vippool-l2`.
- Confirm deletion.
From the API:
client.vippools[<l2_pool_id>].delete()
11.3 - Move the last CNode(s) to the L3 VIP Pool
# Add the final CNodes to the L3 pool
client.vippools[<l3_pool_id>].patch(cnode_ids=[1, 2, 3, 4])
11.4 - Remove L2 VLAN configuration from Arista (optional cleanup)
If the L2 VLAN used for the old VIP Pool is no longer needed, clean up the switch:
! Remove old external VLAN/SVI used for L2 VIP Pool
no interface Vlan100
no vlan 100
Final Validation Checklist
- All CNodes are members of `vippool-l3`.
- `vippool-l2` has been deleted.
- BGP sessions: all CNodes show Established on both VAST and Arista.
- All `/32` VIP routes are present in the Arista routing table.
- All clients are mounted on L3 VIPs only.
- No errors in client application logs.
- Old L2 subnet and VLAN removed from switch configuration (if applicable).
- `/etc/fstab` or automounter configurations updated on all client hosts.
Rollback Plan
If the L3 VIP Pool fails to establish correctly before clients are migrated:
- Remove CNodes from `vippool-l3`:
client.vippools[<l3_pool_id>].patch(cnode_ids=[])
- Return removed CNodes to `vippool-l2`:
client.vippools[<l2_pool_id>].patch(cnode_ids=[1, 2, 3, 4])
- Verify L2 VIPs redistribute back to all CNodes; check `active_cnode_ids` in the response.
- Clients on L2 should reconnect automatically (VIP mobility).
- Revert the switch ports on both Leaf-1 and Leaf-2 back to L2 trunk mode for each affected CNode:
! Example: revert CN-1 external port (Et3/5 on both leaves)
interface Ethernet3/5
switchport
no ipv6 enable
Then remove from BGP:
router bgp 65101 ! (use 65102 on Leaf-2)
no neighbor interface Ethernet3/5 peer-group VAST-CNODES
Save on both leaves:
write memory
Per-CNode port reference for revert:
| CNode | Leaf-1 Port | Leaf-2 Port |
|---|---|---|
| CN-1 | Et3/5 | Et3/5 |
| CN-2 | Et3/1 | Et3/1 |
| CN-3 | Et4/5 | Et4/1 |
| CN-4 | Et4/1 | Et4/5 |
⚠️ Rollback is not possible once clients have unmounted from L2 VIPs and the L2 VIP Pool has been deleted. Ensure client migration is fully confirmed before proceeding to Step 11.
Reference: Quick Command Summary
| Action | VAST Python API | Arista EOS | CNode (FRR) |
|---|---|---|---|
| List VIP Pools | `client.vippools.get()` | | |
| Get VIP Pool state | `client.vippools[<id>].get()` | | |
| Update VIP Pool CNodes | `client.vippools[<id>].patch(cnode_ids=[...])` | | |
| Delete VIP Pool | `client.vippools[<id>].delete()` | | |
| List CNodes | `client.cnodes.get()` | | |
| Create BGP config | `client.bgpconfigs.post(...)` | | |
| Check BGP sessions | `client.clusters.bgp_table.get()` | `show bgp summary` | `attachdocker.sh` → `sudo vtysh -c "show bgp summary"` |
| Check advertised routes | | `show ip bgp` | `attachdocker.sh` → `sudo vtysh -c "show bgp ipv4 neighbors <iface> advertised-routes"` |
| Check RIB | | `show ip route bgp` | |
| Restart FRR (if BGP Idle) | | | `attachdocker.sh` → `sudo systemctl restart frr` |
| Save switch config | | `write memory` | |
| List active clients | `client.clients.get()` | | |