Configuring Network Settings

To view or assign network settings, navigate to the following page in the CloudFS WebUI:

Configuration > Network Settings


If you use the Command Line Interface (CLI) to perform initial configuration, some IP settings are already configured. You can use this page to change them if necessary.

Node Modes and Network Ports

Network ports on the nodes are defined as follows:

  • LAN: Connects to the client side. This port is used by SMB/CIFS or NFS clients to access data on the node. For nodes configured with a single network connection, also referred to as a One-Arm deployment, cloud traffic also uses this interface.
  • WAN: Connects to the cloud side. This port carries cloud network traffic on nodes with two network connections, also referred to as an In-Line deployment.

Depending on the node model, the LAN1 interface may have any of the following names: LAN1, bge0, ix0. Likewise, the WAN1 interface may have any of the following names: WAN1, bge1, ix1.
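If shell access to the node's underlying FreeBSD system is available (an assumption; not all deployments expose it), you can list the interface names present on a given model with `ifconfig`:

```shell
# List all interface names to see whether this model exposes bge* (1GbE)
# or ix* (10GbE) devices:
ifconfig -l
# Show details (link state, MTU, addresses) for one interface, e.g.:
ifconfig ix0
```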

The node can be deployed in the network in either of the following modes:

One‐Arm Mode

The node is connected to the network through a single interface, GB1. The node is not directly in the traffic path between the internal and external networks.


If you change the IP address of the interface through which you logged onto the WebUI, your management session is terminated as soon as you click Save.

Inline Mode

The node is connected to the network by separate LAN (GB1) and WAN (GB2) interfaces.

When using inline mode, make sure that the GB1 and GB2 interfaces are on different subnets and that the DNS server is not on the GB2 (WAN) network. Because GB2 is intended only for cloud traffic, the internal firewall blocks DNS traffic if the DNS server is on the WAN network. Inline mode is the preferred deployment method.
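To make these rules concrete, a hypothetical inline addressing plan (all addresses are illustrative, not defaults) might look like this:

```shell
# GB1 (LAN, client side): 192.168.10.20/24, default gateway 192.168.10.1
# DNS servers:            192.168.10.5 and 192.168.10.6 (reached via GB1)
# GB2 (WAN, cloud side):  10.50.0.20/24, default gateway 10.50.0.1
#
# GB1 and GB2 are on different subnets, and no DNS server sits on the
# GB2 (WAN) network, so the internal firewall does not block DNS traffic.
```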

NIC Teaming

Teaming is supported for PCI and on-board NICs. The interfaces available for NIC teaming are identified with icons illustrating the RJ45 ports.

On physical (not virtual) nodes, NIC teaming allows you to aggregate multiple network interfaces into one virtual interface for link aggregation and failover.

  • LACP-Switch Dependent mode: combines multiple interfaces for increased throughput. The team must be connected to a switch that supports, and is configured for, active Link Aggregation Control Protocol (LACP). LACP distributes traffic bi-directionally while also responding to the failure of individual links. It balances outgoing traffic across the active ports based on protocol header information and accepts incoming traffic from any active port. It does not load-balance on a per-packet basis.

    Note: PZOS 8.0 requires the switch configuration to be "Active LACP." The exact hashing algorithm used on the switch side can be selected by the user.

  • Failover‐Switch Independent mode: allows traffic to continue to flow in the event of a link failure, provided that one member of the aggregated network interface has an operational link. When the second member becomes available again, the link team is automatically reconstituted. Failover and recovery are transparent to the end user, but the failure and recovery information is written to the appropriate log.

Configuration Notes

The following rules apply to NIC teaming:

  • NIC teaming is supported only on physical nodes (not virtual nodes). Configured NIC teams persist and do not need to be reconfigured when the node reboots or is upgraded.
  • You can use NIC teaming with high availability active/standby configurations.
  • The interfaces for a NIC team must have the same maximum speed (10Gb or 1Gb). They can be on the same NIC or different NICs. For reliability, it is a best practice to team across different NICs if possible.
  • NIC teaming is supported in inline or one-arm mode:
    • In one-arm mode, you can team two 1GbE ports or two 10GbE ports.
    • In inline mode, you can team two 1GbE ports or two 10GbE ports on the client (LAN) side, the cloud (WAN) side, or both.
  • You can configure the specific Ethernet interfaces to team using the WebUI or CLI.
  • You can create a NIC team on the client side, cloud side, or both.
  • It is possible to change the assignment of an interface from cloud side to client side or vice versa. To do this, first remove the interface from its current assignment by changing its Teaming Mode to None ‐ Single Interface. This makes the interface available for selection on the other side.
  • If your configuration uses the Dell iDRAC Express for out-of-band management, note that it shares a network interface with the node and that iDRAC does not support NIC teaming. You can upgrade to iDRAC Enterprise, which uses a dedicated network port; this upgrade can be purchased directly from Dell.

Hover over a colored icon to display details about the current configuration. (See "NIC Teaming" in Network Setting Options.)

Starting in CloudFS 8, which is based on FreeBSD 12, LACP configuration enforcement is strict: a mismatched LACP configuration may require reconfiguring your switch to maintain uninterrupted operation. In CloudFS 7 and earlier, LACP enforcement was not strict, and a configuration mismatch with the switch did not interrupt traffic.
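One way to check LACP negotiation from the node side is to inspect the aggregation device on the underlying FreeBSD system; shell access and the device name `lagg0` are assumptions here:

```shell
# Show the link aggregation device and the state of its member ports:
ifconfig lagg0
# In a healthy active-LACP team, each "laggport" line carries flags such as
# ACTIVE,COLLECTING,DISTRIBUTING; a member that never reaches DISTRIBUTING
# usually points to a switch-side LACP mismatch.
```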

Configuration Steps

  1. Select the teaming mode for the client side, cloud side, or both.
  2. Select a port for Interface 1. When you make your selection, the available ports for Interface 2 are presented.
  3. Select the Interface 2 port. If possible, select team members on different cards to improve reliability.
  4. When you have finished configuring the client side, cloud side, or both, click Submit to save the configuration. NIC teaming is now operational, and icon colors on the page are updated to represent the team configuration. To view the current status of configured NIC teams, navigate to Maintenance > Diagnostics Tools and run the show interface command.

When you configure a team, the icon colors are updated after the configuration is saved.

Using the CLI

To use the CLI instead, you can configure NIC teaming using the following configuration commands.

Inline Mode (LAN and WAN on separate NICs)

To switch to 10GbE:

deploy-mode inline
configure-network lan ix0
configure-network wan ix1

To switch to 1GbE:

deploy-mode inline
configure-network lan bge0
configure-network wan bge1

One-Arm Mode (LAN and WAN on the same NIC)

First, make sure the deployment mode (deploy-mode) is set to one-arm mode (the default).

To switch to 10GbE one arm:

deploy-mode onearm
configure-network lan ix0

To switch to 10GbE one-arm failover (switch independent; does not need switch config):

deploy-mode onearm
configure-network lan ix0 ix1 failover

To switch to 1GbE one arm:

deploy-mode onearm
configure-network lan bge0

To switch to 1GbE one-arm failover (switch independent; does not need switch config):

deploy-mode onearm
configure-network lan bge0 bge1 failover

Note: The default TCP congestion control algorithm has been changed to Cubic. This has been shown to improve performance for WAN and LAN traffic in higher-latency environments, while maintaining performance for other traffic. Customers using virtual cloud instances, or clients with high-latency connections, may see some improvement.
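On the underlying FreeBSD system (shell access assumed), the congestion control algorithm in use can be verified with `sysctl`:

```shell
# Show the active TCP congestion control algorithm (expected: cubic):
sysctl net.inet.tcp.cc.algorithm
# List the algorithms available on this system:
sysctl net.inet.tcp.cc.available
```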

Jumbo Frame Support

Enabling jumbo frames can improve performance in high-speed (gigabit Ethernet or faster) networks; however, jumbo frames work only if they are fully supported on every device in the network path, and they increase CPU and memory load on the node. Make sure that you understand the use of jumbo frames before enabling this feature.
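A common way to verify a jumbo-frame path end to end is a maximum-size, don't-fragment ping; the 9000-byte MTU and the peer hostname below are assumptions for illustration:

```shell
# ICMP payload size = MTU - 20 bytes (IP header) - 8 bytes (ICMP header)
MTU=9000
PAYLOAD=$((MTU - 20 - 8))
echo "$PAYLOAD"   # prints 8972
# On FreeBSD, -D sets the don't-fragment bit (on Linux: ping -M do -s 8972):
# ping -D -s "$PAYLOAD" peer.example.com
```

If the don't-fragment ping fails while an ordinary ping succeeds, some device in the path does not support the larger MTU.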

Network Setting Options

The following table describes the network settings you can configure.

Interface Mode

Set the configuration for the GB1 interface:

  • DHCP: Select this checkbox to have addresses assigned automatically by an external Dynamic Host Configuration Protocol (DHCP) server. If you select this checkbox, other settings in this section are disabled.
  • Static: Assign static (unchanging) IP addresses to the following network resources:
    • IP Address / Subnet Mask / Default Gateway: IP address and subnet mask assigned to the interface, and the IP address of the default gateway.

    • Hostname: node hostname (read-only).

    • Domain: Name of the domain where the node is installed.

    • Primary/Secondary DNS Server IPs: IP addresses of the preferred DNS server, and a backup in case the preferred DNS server does not respond.

  • Enable Jumbo Frame: Disable or enable jumbo frames. (Before enabling, first see Jumbo Frame Support.)

Note: One-Arm deployments use GB1 for both client access and connecting to the object storage. GB1 is the only interface that needs to be configured for LAN traffic. Inline deployments use both the GB1 and GB2 interfaces. In this case, GB1 will be dedicated to client access to the node.

Deploy Mode

Choose One-Arm (default) or Inline. If you choose Inline, this section displays the following additional settings for the GB2 interface and static route table.

  • IP Address: IP address assigned to the interface.
  • Subnet Mask: Subnet mask assigned to the interface (format x.x.x.x).
  • Default Gateway: IP address of the network gateway.
  • Enable Jumbo Frame: Disable or enable jumbo frames.
Network Hosts

Click Add Network Host to add a host to the local hostname file for address resolution without the use of an external DNS server. A host can be another node, a Windows domain node, or another system on the network.

Specify the following for each host, and click Add:

  • Host name (fully qualified domain name [FQDN])
  • IP Address

Add or remove additional entries as needed. To remove an entry, select its checkbox and click Delete. If you need to modify an entry, delete the existing entry and then add a new one.

Note: The capitalization used when first configuring the node host name persists for all CloudFS configuration operations. You cannot change the capitalization at a later time; however, you can change to a different host name.

Static Routes
Add Static Route

If you select the Inline network configuration option, use this section to enter static routes for the GB2 interface. To add a route, click Add Static Route and enter the following information:

  • IP Address: IP address for the route table entry. To configure a default route, enter 0.0.0.0.
  • Netmask: Subnet mask for the route (format x.x.x.x). To configure a default route, enter 0.0.0.0.
  • Gateway IP: IP address of the network gateway.

Click Add to add the entry to the static routes table displayed on the page. To remove an entry, select its checkbox and click Delete. Add or remove entries as needed.
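As an illustration, a hypothetical static route table for the GB2 interface (all addresses are examples) might contain:

```shell
# Route a remote object-store subnet through the WAN gateway:
#   IP Address: 10.60.0.0   Netmask: 255.255.255.0   Gateway IP: 10.50.0.1
# Catch-all route for remaining cloud traffic:
#   IP Address: 0.0.0.0     Netmask: 0.0.0.0         Gateway IP: 10.50.0.1
```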
NIC Teaming
Client Side

The NIC Teaming section lists the NICs installed in the node. Hover over a colored icon to display details about the current configuration:

  • White: Available for teaming but currently not configured.
  • Blue: Configured as the client interface. This applies whether the NIC is standalone or is part of a NIC team.
  • Red: Configured as the cloud interface. This applies whether the NIC is standalone or is part of a NIC team.
Bandwidth Limit

Select the toggle to activate a policy. You can specify and activate up to five policies. Policies are evaluated in order: if the conditions of a policy are met, all following policies are ignored.

From Day To Day

From Hour to Hour

Set the day and time range for the bandwidth limit.


  • To set a bandwidth limit that applies at all times, set a rule with day range Monday to Sunday and time range 12 AM to 12 AM.
  • To increase bandwidth between 9 PM and 6 AM every day, set the day range Monday to Sunday and the time range 9 PM to 6 AM.
  • To decrease bandwidth between 6 AM and 9 PM every day, set the day range Monday to Sunday and the time range 6 AM to 9 PM.
  • To set a rule covering 24 hours every Tuesday, set the day range Tuesday to Tuesday and the time range 12 AM to 12 AM.
Upload (Mbps)

Maximum bandwidth for file upload.

Download (Mbps)

Maximum bandwidth for file download.
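Because policies are evaluated in order and the first match wins, a hypothetical two-policy setup (illustrative values) behaves like this:

```shell
# Policy 1: Monday-Friday, 6 AM - 9 PM   -> Upload 100 Mbps, Download 200 Mbps
# Policy 2: Monday-Sunday, 12 AM - 12 AM -> Upload 500 Mbps, Download 500 Mbps
#
# During weekday business hours Policy 1 matches, so Policy 2 is ignored;
# at all other times Policy 2 applies as the catch-all limit.
```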
iDRAC Settings
IP address

Enter the IP address of the iDRAC interface, then click the iDRAC link to connect to the iDRAC.

Proxy Server Settings
Proxy Server

Click the slider to enable proxy server communication (HTTP/HTTPS) to your object store. Proxy server communication is currently supported only for AWS S3.

  • Web Proxy server: IP address / Host name of proxy server
  • TCP Port Number: Specify port

Note: With PZOS 8 and above, Panzura also offers support for external communications with Panzura Support. At this time, the proxy server works with one-arm mode only. This setting is available under Configuration > Network Settings > Advanced and pertains only to Panzura nodes running version 8.