This article details how best to get data to the locations that need it when you make changes to your Panzura file system.
Consider for a moment that the business has outgrown the original file system design: the originally designated lock master is struggling to keep up with demand, or there are now too many eggs in one basket. Perhaps the original design was for three sites accessing a centrally stored file system, and therefore a single lock owner, and the business now has ten or more geographically dispersed sites.
Traditionally, data would be copied using the usual methods: Robocopy, Windows Explorer, or some other third-party tool. All of these require the data to be downloaded from the cloud to a client, copied to the new location, and uploaded as part of the new node's dataset.
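For comparison, a traditional full-data copy with Robocopy might look like the following sketch. The share names and paths are hypothetical; the point is that every byte travels cloud-to-client and client-to-cloud:

```shell
:: Hypothetical example: a full-data copy via a Windows client.
:: Every file is downloaded from the cloud and re-uploaded at the destination.
:: /E copies subdirectories (including empty ones), /COPY:DAT copies data,
:: attributes, and timestamps, /R and /W control retry count and wait time.
robocopy \\old-node\share\projects \\new-node\share\projects /E /COPY:DAT /R:3 /W:5
```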
Panzura offers a tool that can be leveraged from the WebUI and copies only the metadata. This removes the need for a physical download while placing the data in the new location and under the new lock owner's control. The process does not duplicate physical data in the cloud, since the metadata still points to the original object location, and the original node path can then be deleted.
Within the WebUI, navigate to: Maintenance > Diagnostic Tools > Diagnostic Tools
In the Command Type field, select "run-cmd", and in the Parameters field enter the following:
fs_ddcopy -t 10 -S /cloudfs/source-filesystem/path/to/move -D /cloudfs/destination-filesystem/path/to/place
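As a hypothetical worked example, suppose a Projects folder is being moved from filesystem FS1 (under the original lock owner) to filesystem FS2 (under the new one). The filesystem names and paths below are placeholders for illustration only:

```shell
# Hypothetical example: FS1, FS2, and the Projects path are placeholders.
# -S is the source path and -D the destination; -t 10 matches the value
# shown in the command above (consult Panzura documentation for its meaning).
fs_ddcopy -t 10 -S /cloudfs/FS1/Projects -D /cloudfs/FS2/Projects
```

After the copy completes and the new location is verified, the original path under FS1 can be deleted, since only metadata was duplicated.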
Before executing the copy, it's a good idea to change the WebUI timeout to 0. This avoids losing the connection to the executing task.
Managed capacity, however, will be affected. The copied files are considered separately managed, and therefore will add to managed capacity usage until the original location is removed.
For more detail, see Copy or Restore Data with fs_ddcopy.