There are many situations where having a writable copy of an existing large dataset is advantageous. Perhaps you need to test how a new version of your application behaves, or you want to test enabling a new protocol for access, without impacting production workloads or changing access to data for existing users and apps.
The good news is that with VAST, you can. Starting with our 4.6 release, we have a few nifty tricks up our sleeve. This guide walks through the most common scenarios and provides step-by-step instructions for using clones. It is based largely on the VAST 5.0 software release; however, the steps should remain mostly identical across releases 4.6 and above.
For a little more background on clones, and the difference between LOCAL and GLOBAL (remote) clones, check out this section of our Admin guide: Overview of Global Snapshot Clones
This guide is focused on Local Clones. Stay tuned for one on Global Clones!
Note: If the source folder you wish to clone contains >10 million files & folders, please contact VAST Customer Success to discuss. Additionally, if you want to create many clones simultaneously, also reach out to VAST CS.
Creating Clones
To create a clone on the same cluster where the existing data resides, the process is quite simple.
Create a snapshot of the path. If you have a policy that automatically creates snapshots for the path you care about, or if you already have a snapshot for the path you wish to clone, you can skip this step. (A scripted alternative using the VMS REST API is sketched after the screenshots below.)

VMS Menu

Create Snapshot

Define Snapshot parameters
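If you prefer to script this step rather than use the GUI, snapshots can also be created through the VMS REST API. The sketch below is an assumption-laden illustration rather than a verified recipe: the endpoint path, JSON field names, VMS address, and credentials are placeholders based on typical VMS REST conventions, so confirm them against the REST API reference for your release.

# Illustrative sketch only -- the endpoint, field names, host, and credentials
# are assumptions; confirm against the VMS REST API reference for your release.
curl -k -u admin:'<password>' \
     -H "Content-Type: application/json" \
     -X POST "https://vms.example.com/api/snapshots/" \
     -d '{"name": "pre_clone_snap", "path": "/mydata/source"}'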
In the list of Snapshots, find and right-click the one you want to clone, then click Clone.
Clone snapshot
Enter the details for the new clone.

Enter clone snapshot details
Some notes about the configuration:
Target Tenant: If you have multiple tenants configured on your system, you can elect to place the clone in a different tenant than the source. There are caveats to this, however, and they should be discussed with a member of the VAST Technical staff. For more information on tenants, refer to 5.0: Overview of Tenants.
Background Sync
When enabled (the default), all metadata and data blocks are effectively ‘copied’, or ‘hydrated’, into the new location. Most of that additional capacity is reclaimed by VAST Data Reduction (mainly through deduplication); however, the sync consumes CPU and I/O on the CNodes while it runs. While it is running, you can see the status in the “Global Snapshot Clones” tab:

Monitor progress
While it is running, the new path is fully accessible for read & write operations. Any metadata or data blocks that have not yet been hydrated remain fully accessible for I/O requests; however, their latency will be slightly higher, as the system hydrates them on demand.
Once the Background Sync completes, all metadata & data blocks are fully hydrated. At this point in time, the path will behave like any other path on the system, which means:
Snapshots can be taken of the new path
Clones can then be taken from those snapshots (e.g., a clone of a clone).
The VAST Catalog will include the path.
Quotas set on the path will be accurate.
If disabled, NO metadata or data blocks are hydrated into the new location unless a client issues an explicit metadata or I/O request. This is the same on-demand behavior seen in the previous scenario (Background Sync = enabled) while its sync is still running: I/O latency may be slightly higher as the system hydrates blocks on demand. Once a given block is hydrated, I/O latency is the same as when accessing the original dataset.
Note that in Background Sync=disabled mode:
You cannot take snapshots of the new path.
Consequently, you cannot make clones of the new path.
Quotas will not be accurate if set on the new path.
No matter how much I/O is done on the path, nor how much time elapses, the clone will NEVER be fully hydrated.
Note that you CAN convert an in-progress clone to enable Background sync. The method is to “resume” the clone:

Global snapshot clones - resume sync
Once you have “resumed” a clone, it now behaves just as if you had enabled Background Sync in the first place.
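One practical step after creating a clone is to spot-check it against the source from a client before handing it to a dev/test workload. Below is a minimal sketch using standard Linux tools; the mount points and file names are hypothetical, reads against a not-yet-hydrated clone will simply trigger on-demand hydration, and remember that the clone reflects the snapshot, so a source that has changed since the snapshot was taken will legitimately differ.

# Hypothetical client-side mount points; adjust to your own views/exports.
SRC=/mnt/vast/source
CLONE=/mnt/vast/clone

# Compare entry counts between source and clone.
echo "source: $(find "$SRC" | wc -l) entries"
echo "clone:  $(find "$CLONE" | wc -l) entries"

# Spot-check a couple of files by checksum (file names are placeholders).
for f in dir1/file1 dir2/file2; do
    md5sum "$SRC/$f" "$CLONE/$f"
done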
Stopping/Removing clones
It's quite common to want to remove or destroy clones, especially if they are used as part of a dev/test workflow. The process to do so depends on how they were created.
If you created a clone with Background Sync=enabled and the hydration process finished:
Right-click the clone and click ‘remove.’

Remove clone
On the subsequent screen:

Confirm removal of clone
The default is NOT to remove the directory. Effectively, this means the clone is removed from the list without deleting any data.
If you enable “Remove Cloned Directory,” all data within the new path will be destroyed. Note that the original source snapshot/data will not be removed.
If you created a clone with Background Sync=disabled OR the hydration process has not yet completed:
Right-click the clone and click Stop:

Stop the clone process
This will remove the clone from the list AND destroy all data in the clone. If Background Sync is enabled but still in progress, this also stops the Background Sync task.
Choosing Clone Options
Now that you know how to do it, let's go through a couple of scenarios for using clones to explain why you might choose one setting over another, mainly the Background Sync option.
When would you enable Background Sync?
You want the clone to be “long-lived.”
You want to take snapshots of the clone.
These can serve as sources for downstream clones.
You want the catalog to track what is inside.
You want to apply Capacity and/or file-based quotas to the clone.
When would you disable Background Sync?
Short-lived clones, which you expect to throw away.
Clones where only a small percentage of the data is likely to be accessed.
Hence, you don’t want to waste the Background Sync I/O on data that likely won’t be used.
When you don’t need to track via Catalog, Quotas, or create downstream snapshots.
Performance
As always with performance, “it depends.” That said, here are some things to keep in mind:
If enabled, Background Sync consumes CPU and I/O resources while it runs, as it effectively makes a full copy of the data, which is then run through VAST's normal Data Reduction process. While the impact is generally minimal, it effectively ‘subtracts’ from the write bandwidth and IOPS that the system can sustain for NFS/SMB/S3 protocol access. On systems where client workloads are generally beneath the ‘ceiling’ of what the cluster can normally sustain, there should be little to no noticeable impact.
In terms of the latency for operations on data that has not yet been hydrated, many factors are at play. Note that the system hydrates ‘blocks’, not files. Additionally, it does so in parallel when client requests are large enough, or when there are many in-flight requests. Let's take a look at a couple of examples:
Metadata enumeration of a small directory:
Original/Source:
time find . |wc -l
153
real 0m0.090s
user 0m0.002s
sys 0m0.014s
Clone (Background Sync=disabled):
# time find . |wc -l
153
real 0m0.089s
user 0m0.004s
sys 0m0.014s
Barely any difference for a tiny directory. But it can be variable, so let's try a larger directory, containing roughly 114,000 files:
Original/Source:
# time find smallfiles/ | wc -l
114234
real 0m2.516s
user 0m0.031s
sys 0m0.484s
Clone:
# time find clones/smallfiles/ | wc -l
114234
real 0m15.686s
user 0m0.035s
sys 0m0.502s
However, if another client comes along and performs the same traversal after hydration has occurred:
# time find clones/smallfiles/ | wc -l
114234
real 0m2.435s
user 0m0.027s
sys 0m0.471s
In terms of actual I/O, here's a simple dd, sequentially reading a 10 GB file:
Original dir:
# time dd if=bigfiles/10gb.file of=/dev/null bs=1M
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 9.91857 s, 1.1 GB/s
real 0m10.029s
user 0m0.006s
sys 0m3.940s
Clone:
# time dd if=clones/bigfiles/10gb.file of=/dev/null bs=1M
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 23.892 s, 449 MB/s
real 0m23.896s
user 0m0.017s
sys 0m4.472s
However, subsequent reads will, of course, be at full speed.
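Because blocks are hydrated on demand and stay hydrated afterwards, one option for latency-sensitive work on a clone is to pre-warm the paths you care about before users hit them. The sketch below is just one way to do that with standard Linux tools from a client mount; the mount point is hypothetical and the parallelism value is arbitrary. On a Background Sync=enabled clone this is usually unnecessary, since the sync hydrates everything anyway.

# Hypothetical client-side mount of the clone; adjust to your environment.
CLONE=/mnt/vast/clone

# Warm the metadata by walking the tree once.
find "$CLONE" > /dev/null

# Warm the data by reading every file once, several files in parallel
# (larger reads and more in-flight requests let the system hydrate in parallel).
find "$CLONE" -type f -print0 | \
    xargs -0 -P 8 -I{} dd if={} of=/dev/null bs=1M status=none

Note that on a clone created with Background Sync disabled, this warms only the blocks you actually read; per the notes above, such a clone is still never considered fully hydrated.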
Other Tidbits & Caveats
View configuration, quota configuration, and any other configuration details are NOT cloned. If you wish to create a new View to access this path, do so using the normal procedure for exposing any path.
If you have the VAST Catalog enabled on your cluster, you will need to contact VAST Customer Success to enable cloning.
The VAST Catalog will not track metadata inside clones that are still hydrating (in the “Syncing” state). Metadata will only be tracked once the clone has been fully hydrated.
The VAST Catalog will not track metadata inside clones that did NOT have “Background Sync” enabled.
Until data is ‘hydrated’, commands such as ‘du’ may not show the full size ‘on disk’.
For example:
Source Dir:
# du -hs dataflowdemo/scratch-demo/
77G dataflowdemo/scratch-demo/
Destination Dir:
# du -hs clones/scratch-nobg
0 clones/scratch-nobg
Using du's “--apparent-size” flag will show identical values, though:
# du --apparent-size -hs dataflowdemo/scratch-demo
80G dataflowdemo/scratch-demo
# du --apparent-size -hs clones/scratch-nobg
80G clones/scratch-nobg
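Since ‘du’ reflects only what has been hydrated, while ‘du --apparent-size’ reflects the full logical size, comparing the two gives a rough (and unofficial) sense of how much of a clone has been hydrated so far. Here is a minimal sketch, assuming a hypothetical clone path and GNU coreutils on the client; data reduction means the on-disk number will never exactly match the apparent size, so treat the percentage as an approximation only.

# Hypothetical clone path on a client mount; adjust to your environment.
CLONE=/mnt/vast/clones/scratch-nobg

# Bytes currently allocated (hydrated) vs. the full logical size.
hydrated=$(du -s --block-size=1 "$CLONE" | awk '{print $1}')
apparent=$(du -s --block-size=1 --apparent-size "$CLONE" | awk '{print $1}')

# Rough percentage; data reduction makes this an approximation only.
awk -v h="$hydrated" -v a="$apparent" 'BEGIN { printf "~%.0f%% hydrated\n", (a ? 100*h/a : 0) }'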
Resource Links
Local clones:
cli:Clone vcli