Copy or Restore Files and Directories With fs_ddcopy From the Command Line

Introduction

User or project directories can be restored by using fs_ddcopy, a recursive file system copy command that runs on the node. This method does not copy the restored files from the cloud into the local node's cache. Instead, fs_ddcopy restores the metadata for the files.

For Windows clients, Windows Previous Version (WPV) provides the same result as fs_ddcopy, and is managed by individual users, so that the administrator does not need to be involved in most small user restorations. However, WPV is single-threaded and performance issues can arise when copying many files or large files. In contrast, since fs_ddcopy deals only with the restored file metadata instead of copying the files themselves, it is a much faster operation. Using fs_ddcopy can be 5-10 times faster on a single large file and can be another 10 times faster with 10 threads running over a large number of files.

Prerequisites

Panzura CloudFS nodes must be running CloudFS version 8.1.0.0.17445 or newer. The destination node’s snapshots should be in sync with the source node’s.

Best Practices

● fs_ddcopy should always be run from the destination node and, preferably, to an empty directory.

● Panzura recommends that large-scale copies using fs_ddcopy be performed in low-demand time periods. Contents should be copied to an empty or new subdirectory within the share, created by an Administrator user; this ensures the right to access the new directory after the copy is complete.

● The source and destination path MUST be expressed in the POSIX format (for example: “/cloudfs/node/dir/”) instead of the UNC format (for example: “\\node\cloudfs\node\dir\”).

When using the command line:

● If the source or destination path contains spaces or special characters, surround the parameter with quotes:

fs_ddcopy -t 10 -S "/cloudfs/server_nyc/.snapshot/monthly.2/users/wayne mayson/" -D "/cloudfs/server_nyc/users/wayne mayson new/"

● Increasing the number of threads can improve the performance of the fs_ddcopy process. The following command uses 10 threads to copy from the source to the destination:

fs_ddcopy -t 10 -S /cloudfs// -D /cloudfs//
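Because the POSIX-path requirement above is a common stumbling point for Windows administrators, the conversion can be sketched with a small illustrative helper. This is not part of fs_ddcopy; the function name is invented, and the mapping shown (drop the leading \\node prefix, flip backslashes) is an assumption based on the example paths in this article.

```shell
# Illustrative helper (hypothetical, not part of fs_ddcopy): convert a UNC
# path such as "\\node\cloudfs\node\dir\" to the POSIX form "/cloudfs/node/dir/".
unc_to_posix() {
    # Strip the leading \\<node> prefix, then turn backslashes into slashes.
    printf '%s\n' "$1" | sed -e 's|^\\\\[^\\]*||' -e 's|\\|/|g'
}

unc_to_posix '\\server_nyc\cloudfs\server_nyc\users\wayne_m\'
# -> /cloudfs/server_nyc/users/wayne_m/
```

The single quotes around the UNC argument matter: they keep the shell from interpreting the backslashes before the function sees them.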

Supported Arguments

The following options are supported for fs_ddcopy.

Usage: fs_ddcopy [-s ] [-NL] [-g fileglob] [-S srcpath] [-D dstpath] [-t num_threads] [-T timeout_secs] [-R retries] [-fhnvV] [-d serialize_fmt] [-m log_mask] [-o option_file]

-g, Include all files that match glob.

-D, Write files to this path.

-S, Read files from this path.

-N, No work performed.

-L, List attrs/xattrs.

-r, Do not recursively walk the file system.

-s, Sort by:

n - name

N - name reversed

s - size

S - size reversed

t - time

T - time reversed

-t, For recursive walks, use multiple threads.

-T, Set individual timeout (Default=0 is no timeout).

-R, Set retries per attempt (Default=2 is two retries).

-d, Dump internal state of the system on completion. The serialize_fmt argument is a decimal value or a format letter plus optional quantity letters:

Format: J - JSON, X - XML, B - BEF, T - Text, C - CSV

Quantity: r - recurse, a - about (more detail), p - Pretty, d - Debug

Examples: Jra - JSON, recurse, about; X - XML, top level only

-f, force overwrite of all existing files (Starting in CloudFS v8.0, default behavior does not overwrite existing files.)

-h, Display this help summary.

-n, Print information on what would have been done, but do not execute.

-m, Set logging mask.

-o, Take options from file [option_file].

-v, Enable verbosity. Add more 'v's to increase verbosity.

-V, Print version information.

Examples

Example 1: Restore to a new location

This example shows how to restore data to a new location while temporarily maintaining the existing destination. All operations in this example are performed on the node named “server_nyc”. 

Source:

/cloudfs/server_nyc/.snapshot/monthly.2/users/wayne_m/

Destination (newly created): 

/cloudfs/server_nyc/users/wayne_m_new/

1. First, an administrator creates an empty directory in the users share called “wayne_m_new”.

2. The fs_ddcopy command is then launched from destination node “server_nyc”. In this case, the source and destination node are the same.

fs_ddcopy -t 10 -S /cloudfs/server_nyc/.snapshot/monthly.2/users/wayne_m/ -D /cloudfs/server_nyc/users/wayne_m_new/

3. After the copy is complete, you can rename as follows:

Source: /cloudfs/server_nyc/users/wayne_m/

Renamed Source: /cloudfs/server_nyc/users/wayne_m_old/

Destination: /cloudfs/server_nyc/users/wayne_m_new/

Renamed Destination: /cloudfs/server_nyc/users/wayne_m/

4. You can now review any content in wayne_m_old that is not in the new wayne_m to verify that the correct content was pulled forward, and then remove the older content.
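The swap in steps 3 and 4 is ordinary directory renaming. The sketch below reproduces the sequence with mv in a local temporary sandbox; the sandbox paths and file names are stand-ins for the real /cloudfs/server_nyc/users/ paths, so it is safe to try anywhere.

```shell
# Sandbox stand-in for the node's file system.
root=$(mktemp -d)
mkdir -p "$root/users/wayne_m" "$root/users/wayne_m_new"
echo current  > "$root/users/wayne_m/file.txt"       # live data
echo restored > "$root/users/wayne_m_new/file.txt"   # data restored by fs_ddcopy

# Step 3: move the live directory aside, then promote the restored copy.
mv "$root/users/wayne_m"     "$root/users/wayne_m_old"
mv "$root/users/wayne_m_new" "$root/users/wayne_m"

# Step 4: after reviewing wayne_m_old, remove it.
# rm -rf "$root/users/wayne_m_old"
```

After the two renames, wayne_m holds the restored content and wayne_m_old holds the previous live content for review.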

Example 2: Overwrite

You can use fs_ddcopy to directly overwrite a user directory. In this example, the Source and Destination folders are:

Source:

/cloudfs/server_nyc/.snapshot/monthly.2/users/wayne_m/

Destination (existing):

/cloudfs/server_nyc/users/wayne_m/

1. The fs_ddcopy command is then launched from destination node “server_nyc”. In this case, the source and destination node are the same. Take note of the ‘-f’ switch, which forces overwrite of existing files:

fs_ddcopy -f -t 10 -S /cloudfs/server_nyc/.snapshot/monthly.2/users/wayne_m/ -D /cloudfs/server_nyc/users/wayne_m/

2. The restore operation is additive and behaves the same as a typical Windows file server when overwriting. Any files that were in the destination directory before the operation and were not overwritten remain in the destination directory.
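The additive behavior can be illustrated in a local sandbox, with cp -r standing in for fs_ddcopy -f (the paths and file names here are invented for illustration): a file that exists in both the snapshot and the destination takes the snapshot's content, while a destination file with no counterpart in the snapshot is left in place.

```shell
root=$(mktemp -d)
mkdir -p "$root/snapshot/wayne_m" "$root/users/wayne_m"
echo restored > "$root/snapshot/wayne_m/report.txt"   # exists in the snapshot
echo current  > "$root/users/wayne_m/report.txt"      # will be overwritten
echo newer    > "$root/users/wayne_m/notes.txt"       # exists only in the destination

# cp -r stands in for "fs_ddcopy -f": overwrite matches, leave extras alone.
cp -r "$root/snapshot/wayne_m/." "$root/users/wayne_m/"
```

After the copy, report.txt holds the snapshot's content while notes.txt survives untouched; that is the additive semantics described in step 2.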

If there is any chance that a virus has infected any of the files in the destination directory, do not use fs_ddcopy to overwrite. 

Error Codes

ERROR # | ERROR MESSAGE | DETAILS

001

Src stat fails, skipping copy

REASON:
ERR 001 <src> stat fails, skipping copy
Cannot stat the source file

RECOMMENDATION:
1. Check the existence and the accessibility of the source file
2. Retry fs_ddcopy from source parent directory

002

Src not a regular file, directory, or link, skipping copy

REASON:
ERR 002 <src> not a regular file, directory, or link, skipping copy
Wrong file type: fs_ddcopy will not copy file types other than a regular file, directory, or link

RECOMMENDATION:
1. Check the type, existence, and accessibility of the source file
2. Retry fs_ddcopy from source parent directory

003

Src copy failed @<line number>

REASON:
ERR 003 <src> copy failed @<line number> due to <reason>
Internal error

RECOMMENDATION:
Retry fs_ddcopy from the source parent directory

004

Src 1:Failed

REASON:
ERR 004 <src> 1:Failed with <reason>
The destination file exists, but CloudFS cannot stat the source file or cannot copy its extended attributes

RECOMMENDATION:
1. Check the existence and the accessibility of the source file
2. Manually fix file attributes, if necessary
3. Retry fs_ddcopy from source parent directory

005

Src 2:Failed

REASON:
ERR 005 <src> 2:Failed with <reason>

RECOMMENDATION:
1. Check the existence and the accessibility of the source file and make sure no other processes are accessing the target file
2. Retry fs_ddcopy from source parent directory

006

Src 3:Failed to copy link xattrs

REASON:
ERR 006 <src> 3:Failed to copy link xattrs {<reason>}
Failure to copy extended attributes for a link.

RECOMMENDATION:
1. Check the existence and the accessibility of the source link
2. Manually fix file attributes, if necessary
3. Retry fs_ddcopy from source parent directory

007

Src 4:Failed to copy xattrs

REASON:
ERR 007 <src> 4:Failed to copy xattrs {<reason>}
Failure to copy the extended attributes for a file.

RECOMMENDATION:
1. Check the existence and the accessibility of the source file, and make sure no other processes are accessing the target file
2. Manually fix file attributes, if necessary
3. Retry fs_ddcopy from source parent directory

008

No such file or directory

REASON:
The target directory does not exist.

RECOMMENDATION:
Change the destination to an existing path. (errno: 2)

009

Copy interrupted

REASON:
The system call was interrupted.

RECOMMENDATION:
Retry fs_ddcopy process. (errno: 4)

010

Try again

REASON:
The target directory is temporarily unavailable.

RECOMMENDATION:
Wait a few minutes and retry the fs_ddcopy process. (errno: 11)

011

Permission denied

REASON:
Permission setting of destination directory does not allow the specified access.

RECOMMENDATION:
Change destination path to a valid or writable one, or change permissions for target directory. (errno: 13)

012

Out of memory

REASON:
No more space available on the device.

RECOMMENDATION:
Add disk to the device. (errno: 28)

013

Can’t write to <dest>

REASON:
The target directory is Read-only.

RECOMMENDATION:
Change the destination path to a valid or writable one, or change settings for target directory. (errno: 30)

014

Unknown (errno: <errno>)

REASON:
Unexpected error occurred.

RECOMMENDATION:
Please contact Panzura Support for assistance. (errno: <errno>)

015

Try again

REASON:
The target directory is temporarily unavailable.

RECOMMENDATION:
Wait a few minutes and retry the fs_ddcopy process. (errno: 11)

016

Permission denied

REASON:
Permission setting of destination directory does not allow the specified access.

RECOMMENDATION:
Change destination path to a valid or writable one, or change permissions for target directory. (errno: 13)

017

Out of memory

REASON:
No more space available on the device.

RECOMMENDATION:
Add disk to the device. (errno: 28)

018

Can’t write to <dest>

REASON:
The target directory is Read-only.

RECOMMENDATION:
Change the destination path to a valid or writable one, or change settings for target directory. (errno: 30)

019

Unknown (errno: <errno>)

REASON:
Unexpected error occurred.

RECOMMENDATION:
Please contact Panzura Support for assistance. (errno: <errno>)

Summary

fs_ddcopy can be used to copy files and directories between locations in CloudFS very quickly. Because the data is already in the cloud and tracked in CloudFS via metadata, fs_ddcopy can skip copying the data itself and copy only the metadata. Content is therefore moved much more quickly than with conventional tools (for example, rsync, cp, or robocopy).

Recover from ransomware, corruption, and accidental deletions with fs_ddcopy in a fast, efficient manner.