Deploying a Panzura Cloud File System Cluster

This chapter provides guidelines and suggestions for deploying a Panzura Cloud File System cluster, including deploying the filers, seeding the Cloud File System with files and directories, connecting users, and tuning for performance.

Before deploying your filers, see CloudFS Minimum Requirements.

Planning a Deployment

Planning a Panzura deployment involves determining the number and location of filers, the role that each filer will play, the product models and capacities, and the strategy for high availability (HA).

The following sample Panzura deployment for AEC Corporation (aec-example.com) has three working sites: Los Angeles, London, and Paris. A Panzura Filer is physically deployed at each site. Users connect to their local filer, see the shared file system, and experience LAN access speeds to the data in the global file system.

The three active Panzura filers are deployed as follows.

  • The filer in Los Angeles, la.aec‐example.com, is configured as a Master Filer.

Project directory: /cloudfs/la/aec‐project‐01

An SMB share for the directory: /aec‐project‐01

User connection to SMB share: \\la.aec-example.com\aec-project-01

  • The filers in the London and Paris offices, london.aec-example.com and paris.aec-example.com, are configured as subordinates to the Los Angeles Filer.

Users at these sites connect to the SMB share \\london.aec-example.com\aec-project-01 or \\paris.aec-example.com\aec-project-01 (a drive-mapping example follows this list).

  • The Phoenix filer is a dedicated standby for the Los Angeles filer and is set up using the HA‐Local option.

The Amsterdam filer is a standby for the London and Paris filers and is set up using the HA‐Global option.
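
As an illustration only, the following Windows command maps the Los Angeles project share to a drive letter. The drive letter P: and the persistent option are assumptions for this sketch; any free drive letter can be used.

    :: Map the aec-project-01 share from the Los Angeles filer to a hypothetical drive letter
    net use P: \\la.aec-example.com\aec-project-01 /persistent:yes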

Deploying Panzura CloudFS

The next sections describe the high-level steps to deploy a multiple-site Panzura CloudFS configuration, such as the one in the previous example. For configuration details, see the sections referenced in each step.

Step 1: Install and Configure the Panzura Filers

A CloudFS deployment runs on a cluster of globally distributed Panzura filers, which can be physical or virtual. The filers are configured on their local networks, attached to DNS, and connected to the cloud back‐end.

Install the designated master and each of the subordinates according to the instructions in Setting Up the Panzura Filer. Following installation, the filers should all be running and joined to the CloudFS.

Following installation and during normal operations, any updates to the master configuration are automatically replicated to the subordinates. See the product documentation for a list of the information that is replicated.

Step 2: Seed CloudFS with Files and Directories

It is important to seed CloudFS with project data before users access their local Panzura filer for the first time. In preparation for seeding data, create a directory structure that supports both current and future projects. Then upload project data and confirm that each filer in the cluster has the same view of the global file system and directory structure.

This process is done on the Master Filer.

Seed CloudFS with File Data

  1. Mount your local Panzura filer to your desktop via an SMB share.
  2. Create a directory structure that supports current and future project files.
  3. Upload project files to the appropriate directories on the filer.
  4. Wait for the data to synchronize.
  5. Mount each of the remote filers.
  6. Confirm that the entire file system can be viewed from each filer.
  7. Observe CloudFS performance using the ingress and egress rate counters in the Panzura WebUI.
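
As a rough sketch of step 6, the directory tree exposed by each filer can be spot-checked from a Windows client by listing the share on every site and reviewing the output. The hostnames below come from the example deployment and should be replaced with your own.

    :: List the project tree as seen from each filer and compare the listings
    dir /s \\la.aec-example.com\aec-project-01
    dir /s \\london.aec-example.com\aec-project-01
    dir /s \\paris.aec-example.com\aec-project-01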

Windows Tools for Seeding Data

Microsoft Windows offers GUI and command line tools for migrating data:

  • The Windows Explorer GUI can be used to drag and drop files to the CloudFS SMB share. However, this method of copying files does not preserve file or folder permissions (ACLs).
  • Robocopy (Robust File Copy) is a command‐line utility—included with Windows Server 2012 and 2008—used to copy files and preserve file and folder permissions. Robocopy is scriptable, logs the copy process, features retry capabilities, and works around locked files.
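
The following Robocopy invocation is a minimal sketch of a seeding copy; the source path, share name, and log location are placeholders, and the retry and logging options should be adjusted to suit your migration.

    :: Copy the project tree to the CloudFS share, preserving data, attributes,
    :: timestamps, ACLs, and owner; retry twice on locked files and log the run.
    robocopy D:\projects\aec-project-01 \\la.aec-example.com\aec-project-01 /E /COPY:DATSO /R:2 /W:5 /LOG:C:\logs\seed-aec-project-01.log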

Linux Tools for Seeding Data

Linux OS distributions include the rsync tool that can be used to copy files and directories from one server to another over an SSH connection. It is scriptable, preserves file permissions, and copies only new or changed files to the destination folder.
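
For example, the following rsync command is a minimal sketch, assuming the CloudFS share is already mounted at /mnt/cloudfs on the Linux host performing the migration and that the source file server is reachable over SSH; the hostname and paths are placeholders.

    # Pull the project tree from the source file server over SSH into the mounted
    # CloudFS share, preserving permissions and copying only new or changed files.
    rsync -av --progress fileserver01:/srv/projects/aec-project-01/ /mnt/cloudfs/aec-project-01/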

Time It Takes to Seed CloudFS with Data

When files are uploaded to an SMB share on the local Panzura filer, the files and metadata are immediately uploaded to the shared cloud storage back‐end. The files and metadata become available to all the other filers in the cluster, which immediately download the metadata and synchronize their file systems. When working with normal amounts of data, the cycle of uploads and downloads is nearly instantaneous and invisible to the end user.

However, when seeding large amounts of data to a filer, be aware that it takes time to upload files and file system metadata to the shared cloud storage back-end. The time to complete this upload and download cycle is governed by the speed of the network links connecting the clients to the filers and the filers to the shared cloud storage back-end.

When data is uploaded to a share on the local filer, the PFOS operating system creates a system snapshot to capture the state of the file system and to identify the files that have been created or changed. Before uploading files to the cloud, the data in the files is broken down into smaller chunks, called drive files, which are uploaded sequentially to the shared cloud storage back-end. The file data is uploaded first, followed by the file system metadata snapshot.

Remote filers constantly poll the shared cloud storage back‐end looking for new metadata snapshots.

When new metadata snapshots are found, they are downloaded one by one and applied to the local file system. After the last metadata snapshot is downloaded and applied, the global file system is fully synchronized among all the filers.

The time required to move data to CloudFS and share the updated file system metadata with all the filers in the cluster is a function of the following values:

  • T1 = time to transfer files on a LAN from the local file server to the local Panzura filer.
  • T2 = time to upload the files (drive files) and the metadata snapshot from the Panzura filer to the shared cloud storage back‐end.
  • T3 = time to download the metadata snapshot from shared cloud storage to the remote Panzura filers.

The remote filers poll the cloud every 30 seconds, so the time to find and begin to download the metadata snapshot is no more than 30 seconds.

The time required to move files between a file server and the Panzura filer is governed by the speed of the local area network; actual throughput depends on the bandwidth available on the network at the time of the transfer.

With that in mind, the following formula calculates the minimum amount of time to move data to the Panzura filer, and between filers and the cloud.

time (sec) = amount of data (GB) * 8 / network speed (Gb/s)

For example, with 100GB of data and a 1Gb/s LAN, the calculations are as follows:

  • T1 = 100*8/1 = 800 seconds
  • T2 = 100*8/1 = 800 seconds
  • T3 = 30 seconds

Thus, the minimum time to upload and share 100GB of data with a 1Gb/s LAN is:

T1+T2+T3 = 800+800+30 = 1630 seconds = 27.2 minutes.

Note that network connections to the cloud back-end are frequently much slower than 1 Gb/s, which increases T2 accordingly.
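
As a worked sketch, the shell snippet below applies the formula above with an assumed 100 GB data set, a 1 Gb/s LAN, and a slower 0.2 Gb/s uplink to the cloud back-end; substitute your own link speeds.

    # Estimate minimum seeding time: T1 (LAN copy), T2 (upload to cloud), T3 (poll delay)
    DATA_GB=100; LAN_GBPS=1; CLOUD_GBPS=0.2
    awk -v d="$DATA_GB" -v lan="$LAN_GBPS" -v cloud="$CLOUD_GBPS" 'BEGIN {
        t1 = d * 8 / lan      # seconds to copy data to the local filer over the LAN
        t2 = d * 8 / cloud    # seconds to upload drive files and metadata to the cloud
        t3 = 30               # worst-case remote polling delay in seconds
        printf "T1=%ds T2=%ds T3=%ds total=%.1f minutes\n", t1, t2, t3, (t1+t2+t3)/60
    }'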

Step 3: Connect Users to CloudFS

When CloudFS is running efficiently, it provides fast access to files distributed in the global file system. However, it takes time to distribute files to the appropriate filers, and until files are cached on the local filer the system can appear to run slowly. By design, caching is an automated process triggered when a user accesses data, so users may experience slower performance while the system balances and data is cached at the appropriate filers.

Also, when a large amount of data is uploaded to CloudFS, users who want to access that data will experience slow access times until the files are downloaded and cached to their local filer. Subsequent file access will be fast, and updates to cached files will be shared quickly among all the filers in the cluster.

Windows Explorer users will experience a delay when viewing a directory with a large number of new files for the first time. Windows Explorer must open every file in the directory before it can display the directory listing. If these files are not yet cached locally, Windows Explorer becomes unresponsive and appears to hang. Once the Panzura filer downloads all of the files from that directory to the CloudFS cache, Windows Explorer performs normally.

In all cases, cache policies can be used to prepopulate the cache and improve performance for a particular folder. Even crawling the file folders (reading the files) can be used to populate the cache with data before users connect to the filer. It is a common practice for a knowledgeable system administrator to “walk” particular project directories in specific locations ahead of time to improve the local user’s first experience by ensuring the files they’re likely to use are already cached in the local filer.
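
As one hypothetical way to walk a project directory from a Windows client before users arrive, the loop below reads every file under an assumed mapped drive P: so the local filer pulls the data into its cache. Run it from an interactive command prompt (in a batch file, double the % signs).

    :: Read every file under the project directory to NUL to warm the local cache
    for /R "P:\aec-project-01" %f in (*) do @type "%f" > NUL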

Connect Users to CloudFS

  1. Users mount their local filer via SMB.
  2. Users browse file directories, then access and update files.
  3. Files become cached locally and I/O performance increases.
  4. Observe CloudFS performance using the ingress and egress rate counters in the WebUI.

Step 4: Observe CloudFS Performance

This step pertains only to Panzura CloudFS 7.

PFOS provides tools for monitoring CloudFS. Three counters are used to observe the flow of data through CloudFS: rate of data ingress, rate of data egress, and the synchronization of system snapshots. The ingress and egress counters are viewed from the dashboard in the WebUI. The synchronization counters are viewed from the Diagnostic Tools menu in the WebUI.

The following discussion of data flow within CloudFS uses a two‐filer deployment, with filers named LOCAL and REMOTE.

Observe Uploads on a Local Panzura Filer

  1. Open the Dashboard page in the Web interface.
  2. Copy files to an SMB share on the local filer.
  • The ingress rate increases as data is copied to the filer.
  • The egress rate increases as drive files and snapshots are copied to the cloud back‐end.

Only after the drive file uploads are completed will the file system metadata snapshot be uploaded.

The egress rate returns to zero after all drive files and snapshots are successfully uploaded.

Observe Downloads on a Remote Panzura Filer

Open the Dashboard page in the Web interface.

  • The remote filer polls the shared cloud back‐end storage every 30 seconds for the latest snapshots.
  • The snapshot sequence number of the latest snapshot is compared with the snapshot sequence number of the local metadata snapshot.
  • If the sequence numbers don’t match, the filer will proceed to download metadata snapshots, one by one, until the sequence numbers match again.
  • The ingress rate increases as the metadata snapshots are downloaded.
  • The ingress rate returns to zero after all the file system metadata snapshots are successfully downloaded.

When completed, the local and remote file systems are synchronized.

To confirm that the file system is synchronized across all filers, check the Active Filer Sync Status on the dashboard.

Step 5: Tune CloudFS Performance

PFOS provides an automated, intelligent read cache, Smart Cache, that increases file I/O performance.

Over time and through general usage, the system dynamically populates the Smart Cache with hot data from files being read by users. The caching algorithm monitors the frequency of file access, and how recently files were accessed, to determine what data to cache and what data to eject from the cache.

CloudFS performance can be tuned to increase the performance of specific files and directories with the use of cache policies. Cache policies govern what files are cached locally on the filer. These policies are also used to pre‐populate the cache to guarantee LAN speed access to files before users access them for the first time.

Set Up High Availability Protection

You can protect your cloud file system in the event of a filer failure using these high availability (HA) options:

  • HA Global: One or more filers are protected by one or more shared standbys, which can be separated geographically from the filers they protect.
  • HA Local: An active filer is protected by a dedicated, passive standby. When the active filer fails, the standby assumes the active filer's identity, takes over ownership of the file system, and continues operations. This approach is similar to the methods used by legacy enterprise storage products. The following HA Local options are supported:
  • Local: The active and standby filers have different hostnames and IP addresses.
  • Local with shared address: The active filer and passive standby have an additional shared hostname and IP address, which simplifies the takeover process. (Maximum length of the shared hostname is 15 characters.)

Important HA‐Local Notes

When using HA‐Local with shared address, only the shared hostname should be joined to Active Directory. The individual filers in the pair should be configured with the domain (and NetBIOS group) information, but not joined.

The filers to be used as standby (HA‐Local or HA‐Global) must not previously have been members of the CloudFS, and must not re‐use any prior filer hostname. A repurposed filer must have a new hostname and a new Cloud Filer ID (CCID) number, which is embedded in the License to Operate (LTO).

See the following sections for more information about HA:

  • For a general overview of the high availability feature, see Overview - Panzura CloudFS.
  • For instructions on setting up HA, see High Availability Settings. When using the setup wizard, select one of the HA options as the configuration mode.
  • For instructions on initiating a high availability takeover, including best practices for takeover, see High Availability Operations.
  • Disabling VIP without recreating or reconfiguring the HA-Local setup is not supported. The UI provides a radio button to enable or disable VIP; however, disabling it does not delete the VIP. This will be fixed in an upcoming release.
  • If an HA-Local configuration with VIP is deleted, the filer must be rejoined using its original hostname. In this case, end users should reconnect using the original hostname rather than the VIP; otherwise, operations will be disrupted.