Introduction
User or project directories can be restored by using fs_ddcopy, a recursive file system copy command that runs on the node. This method does not copy the restored files from the cloud into the local node's cache. Instead, fs_ddcopy restores the metadata for the files.
For Windows clients, Windows Previous Versions (WPV) provides the same result as fs_ddcopy and is managed by individual users, so the administrator does not need to be involved in most small user restorations. However, WPV is single-threaded, and performance issues can arise when copying many files or large files. In contrast, because fs_ddcopy deals only with the restored file metadata instead of copying the files themselves, it is a much faster operation: it can be 5-10 times faster on a single large file and another 10 times faster again with 10 threads running over a large number of files.
Pre-requisites
Panzura CloudFS nodes must be running CloudFS version 8.1.0.0.17445 or newer. The destination node's snapshots should be in sync with the source node's snapshots.
Best Practices
● fs_ddcopy should always be run from the destination node and, preferably, to an empty directory.
● Panzura recommends that large-scale copies using fs_ddcopy be performed during low-demand time periods. Contents should be copied to an empty or new subdirectory within the share created by an Administrator user; this ensures the right to access the new directory after the copy is complete.
● The source and destination paths MUST be expressed in POSIX format (for example: “/cloudfs/node/dir/”) instead of UNC format (for example: “\\node\cloudfs\node\dir\”).
When using the command line:
● If the source or destination path contains spaces or special characters, surround the parameter with quotes: fs_ddcopy -t 10 -S "/cloudfs/server_nyc/.snapshot/monthly.2/users/wayne mayson/" -D "/cloudfs/server_nyc/users/wayne mayson new/"
● Increasing the number of threads can improve the performance of the fs_ddcopy process. The following command uses 10 threads to copy from the source to the destination: fs_ddcopy -t 10 -S /cloudfs// -D /cloudfs//
Supported Arguments
The following options are supported for fs_ddcopy.
Usage: fs_ddcopy [-s sort_key] [-NL] [-g fileglob] [-S srcpath] [-D dstpath] [-t num_threads] [-T timeout_secs] [-R retries] [-fhnvV] [-d serialize_fmt] [-m log_mask] [-o option_file]
-g, Include all files that match glob.
-D, Write files to this path.
-S, Read files from this path.
-N, No work performed.
-L, List attrs/xattrs.
-r, Do not recursively walk the file system.
-s, Sort by:
n - name
N - name reversed
s - size
S - size reversed
t - time
T - time reversed
-t, For recursive walks, use multiple threads.
-T, Set individual timeout (Default=0 is no timeout).
-R, Set retries per attempt (Default=2 is two retries).
-d, Dump internal state of the system on completion. Takes a decimal value or a format string:
Format: J - JSON, X - XML, B - BEF, T - Text, C - CSV
Quantity: r - recurse, a - about (more detail), p - Pretty, d - Debug
Examples: Jra - JSON, recurse, about; X - XML, top level only
-f, Force overwrite of all existing files (starting in CloudFS v8.0, the default behavior does not overwrite existing files).
-h, Display this help summary.
-n, Print information on what would have been done, but do not execute.
-m, Set logging mask.
-o, Take options from file [option_file].
-v, Enable verbosity. Add more 'v's to increase verbosity.
-V, Print version information.
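As an illustration, the -n (report only) and -v (verbose) options can be combined with a normal invocation to preview an operation before running it. The command below is only a sketch that reuses the example paths from the next section; the dry-run output format may vary by release:
fs_ddcopy -n -v -t 10 -S /cloudfs/server_nyc/.snapshot/monthly.2/users/wayne_m/ -D /cloudfs/server_nyc/users/wayne_m_new/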
Examples
Example 1: Restore to a new location
This example shows how to restore data to a new location while temporarily maintaining the existing destination. All operations in this example are performed on the node named “server_nyc”.
Source:
/cloudfs/server_nyc/.snapshot/monthly.2/users/wayne_m/
Destination (newly created):
/cloudfs/server_nyc/users/wayne_m_new/
1. First, an administrator creates an empty directory in the users share called “wayne_m_new”.
2. The fs_ddcopy command is then launched from destination node “server_nyc”. In this case, the source and destination node are the same.
fs_ddcopy -t 10 -S /cloudfs/server_nyc/.snapshot/monthly.2/users/wayne_m/ -D /cloudfs/server_nyc/users/wayne_m_new/
3. After the copy is complete, you can rename as follows (a shell sketch of these steps appears after this list):
Source: /cloudfs/server_nyc/users/wayne_m/
Renamed Source: /cloudfs/server_nyc/users/wayne_m_old/
Destination: /cloudfs/server_nyc/users/wayne_m_new/
Renamed Destination: /cloudfs/server_nyc/users/wayne_m/
4. You can now review any content in wayne_m_old that is not in the new wayne_m to verify that the correct content is pulled forward, and then remove the older content.
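For reference, the whole sequence can be run from a shell on the node. The following is a minimal sketch assuming shell access to the POSIX paths; creating the directory and performing the renames could equally be done from a Windows client through the share:
# Step 1: create the empty destination directory
mkdir /cloudfs/server_nyc/users/wayne_m_new
# Step 2: metadata-only copy from the monthly.2 snapshot, using 10 threads
fs_ddcopy -t 10 -S /cloudfs/server_nyc/.snapshot/monthly.2/users/wayne_m/ -D /cloudfs/server_nyc/users/wayne_m_new/
# Step 3: swap the directories so the restored copy takes the original name
mv /cloudfs/server_nyc/users/wayne_m /cloudfs/server_nyc/users/wayne_m_old
mv /cloudfs/server_nyc/users/wayne_m_new /cloudfs/server_nyc/users/wayne_m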
Example 2: Overwrite
You can use fs_ddcopy to directly overwrite a user directory. In this example, the Source and Destination folders are:
Source:
/cloudfs/server_nyc/.snapshot/monthly.2/users/wayne_m/
Destination (existing):
/cloudfs/server_nyc/users/wayne_m/
1. The fs_ddcopy command is launched from the destination node “server_nyc”. In this case, the source and destination node are the same. Take note of the ‘-f’ switch, which forces the overwrite of existing files:
fs_ddcopy -f -t 10 -S /cloudfs/server_nyc/.snapshot/monthly.2/users/wayne_m/ -D /cloudfs/server_nyc/users/wayne_m/
2. The restore operation is additive and behaves the same as a typical Windows file server when overwriting: any files that were in the destination directory before the operation and are not overwritten remain in the destination directory.
If there is any chance that a virus has infected any of the files in the destination directory, do not use fs_ddcopy to overwrite.
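Because a forced overwrite cannot easily be undone, it may be worth previewing the operation first. The command below is only a sketch; it assumes the -n (report only, do not execute) option can be combined with the same arguments as the real run:
fs_ddcopy -n -f -t 10 -S /cloudfs/server_nyc/.snapshot/monthly.2/users/wayne_m/ -D /cloudfs/server_nyc/users/wayne_m/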
Error Codes
ERROR # | ERROR MESSAGE                                              | DETAILS
--------|------------------------------------------------------------|--------
001     | Src stat fails, skipping copy                              | REASON:
002     | Src not a regular file, directory, or link, skipping copy | REASON:
003     | Src copy failed @<line number>                             | REASON:
004     | Src 1:Failed                                               | REASON:
005     | Src 2:Failed                                               | REASON:
006     | Src 3:Failed to copy link xattrs                           | REASON:
007     | Src 4:Failed to copy xattrs                                | REASON:
008     | No such file or directory                                  | REASON:
009     | Copy interrupted                                           | REASON:
010     | Try again                                                  | REASON:
011     | Permission denied                                          | REASON:
012     | Out of memory                                              | REASON:
013     | Can’t write to <dest>                                      | REASON:
014     | Unkown (errno: <errno>)                                    | REASON:
015     | Try again                                                  | REASON:
016     | Permission denied                                          | REASON:
017     | Out of memory                                              | REASON:
018     | Can’t write to <dest>                                      | REASON:
019     | Unkown (errno: <errno>)                                    | REASON:
Summary
fs_ddcopy can be used to copy files and directories between different locations in the CloudFS very quickly. Because the data is already in the cloud and tracked in the CloudFS via metadata, fs_ddcopy can skip copying the data itself and copy only the metadata. As a result, content is moved much more quickly than with conventional methods (for example: rsync, cp, or robocopy).
With fs_ddcopy, you can recover from ransomware, corruption, and accidental deletions in a fast, efficient manner.