To view or assign network settings, navigate to the following page in the Filer's WebUI:
Configuration > Network Settings
If you use the Command Line Interface (CLI) to perform initial configuration, some IP settings are already configured. You can use this page to change them if necessary.
Filer Modes and Network Ports
Network ports on the filer are defined as follows:
- LAN: Connects to the client side. This port is used by SMB/CIFS or NFS clients to access data on the filer. For filers configured with a single network connection, also referred to as a One-Arm deployment, cloud traffic also uses this interface.
- WAN: Connects to the cloud side. This port is used for cloud network traffic on filers with two network connections, also referred to as an Inline deployment.
Depending on the filer model, the LAN1 interface may have any of the following names: LAN1, bge0, ix0. Likewise, the WAN1 interface may have any of the following names: WAN1, bge1, ix1.
The filer can be deployed in the network in either of the following modes:
- One-Arm mode: The filer is connected to the network through a single interface, GB1. The filer is not directly in the traffic path between the internal and external networks.
- Inline mode: The filer is connected to the network by separate LAN (GB1) and WAN (GB2) interfaces. This is the preferred method.
Note: If you change the IP address of the interface through which you logged on to the WebUI, your management session is terminated as soon as you click Save.
When using inline mode, make sure that the GB1 and GB2 networks are on different subnets and that the DNS server is not on the GB2 (WAN) network. Because GB2 is intended only for cloud traffic, the internal firewall blocks DNS traffic if the DNS server is on the WAN network.
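As an illustration of the different-subnet requirement, the check can be expressed with Python's standard ipaddress module. The addresses below are hypothetical examples, not filer defaults:

```python
import ipaddress

def on_same_subnet(ip_a: str, ip_b: str) -> bool:
    """Return True if two interface addresses (CIDR notation) share a subnet."""
    return ipaddress.ip_interface(ip_a).network == ipaddress.ip_interface(ip_b).network

# Hypothetical GB1 (LAN) and GB2 (WAN) addresses:
print(on_same_subnet("10.1.1.10/24", "10.1.2.10/24"))  # False - valid inline layout
print(on_same_subnet("10.1.1.10/24", "10.1.1.20/24"))  # True - misconfigured overlap
```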
On physical (not virtual) filers, NIC teaming allows you to aggregate multiple network interfaces into one virtual interface for link aggregation and failover. Teaming is supported for PCI and on-board NICs. The interfaces that are available for NIC teaming are identified with icons that illustrate the RJ45 ports.
- LACP-Switch Dependent mode: Combines multiple interfaces for increased throughput. The filer must be connected to a switch that supports, and is configured for, active Link Aggregation Control Protocol (LACP). LACP distributes traffic bidirectionally while also responding to the failure of individual links. It balances outgoing traffic across the active ports based on protocol header information and accepts incoming traffic from any active port. It does not load balance on a per-packet basis.
Note: PZOS 8.0 requires that the switch configuration is "Active LACP." The exact hashing algorithm used on the switch side may be selected by the user.
- Failover‐Switch Independent mode: allows traffic to continue to flow in the event of a link failure, provided that one member of the aggregated network interface has an operational link. When the second member becomes available again, the link team is automatically reconstituted. Failover and recovery are transparent to the end user, but the failure and recovery information is written to the appropriate log.
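The hash-based distribution that LACP performs can be sketched as follows. This is a simplified illustration of the concept, not the filer's or any switch's actual algorithm (real switches use vendor-specific hashes):

```python
# Sketch of LACP-style flow hashing: protocol header fields are hashed to pick
# one active link, so a given flow always uses the same port. This is why LACP
# balances per flow, not per packet.
import hashlib

def pick_link(src_ip, dst_ip, src_port, dst_port, active_links):
    key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    return active_links[int.from_bytes(digest[:4], "big") % len(active_links)]

links = ["ix0", "ix1"]
# The same flow maps to the same link every time:
a = pick_link("10.1.1.10", "10.1.1.20", 50000, 445, links)
b = pick_link("10.1.1.10", "10.1.1.20", 50000, 445, links)
print(a == b)  # True
```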
The following rules apply to NIC teaming:
- NIC teaming is supported only on physical filers (not virtual filers). Configured NIC teams persist and do not need to be reconfigured when the filer reboots or is upgraded.
- You can use NIC teaming with high availability active/standby configurations.
- The interfaces for a NIC team must have the same maximum speed (10Gb or 1Gb). They can be on the same NIC or different NICs. For reliability, it is a best practice to team across different NICs if possible.
- NIC teaming is supported in inline or one-arm mode. In one-arm mode, the following are supported:
- Two 1GbE ports, or
- Two 10GbE ports
In inline mode, the following are supported:
- Two 1GbE ports for the client (LAN) or cloud (WAN) side
- Two 10GbE ports for the client or cloud side
- You can configure the specific Ethernet interfaces to team using the WebUI or CLI.
- You can create a NIC team on the client side, cloud side, or both.
- It is possible to change the assignment of an interface from cloud side to client side or vice versa. To do this, first remove the interface from its current assignment by changing its Teaming Mode to None ‐ Single Interface. This makes the interface available for selection on the other side.
- If your configuration uses the Dell iDRAC Express for out-of-band management, note that it shares the network interface with the filer and that iDRAC does not support NIC teaming. It is possible to upgrade the iDRAC to iDRAC Enterprise, which uses a dedicated network port. This upgrade can be purchased directly from Dell.
Hover over a colored icon to display details about the current configuration. (See "NIC Teaming" in Network Setting Options.)
Starting in Freedom 8, due to the use of FreeBSD 12, enforcement of LACP configuration is strict. Any mismatched LACP configuration may require reconfiguring your switch to maintain uninterrupted operation. In Freedom 7 and earlier, LACP configuration enforcement was not strict, and a mismatched configuration with your switch did not cause interruption of traffic.
- Select the teaming mode for the client side, cloud side, or both.
- Select a port for Interface 1. When you make your selection, the available ports for Interface 2 are presented.
- Select the Interface 2 port. If possible, select team members on different cards to improve reliability.
- When you have finished configuring the client side, cloud side, or both, click Submit to save the configuration. NIC teaming is now operational, and icon colors on the page are updated to represent the team configuration. To view the current status of configured NIC teams, navigate to Maintenance > Diagnostics Tools and run the show interface command.
Using the CLI
To use the CLI instead, you can configure NIC teaming using the following configuration commands.
Inline Mode (LAN and WAN on separate NICs)
To switch to 10GbE:
configure-network lan ix0
configure-network wan ix1
To switch to 1GbE:
configure-network lan bge0
configure-network wan bge1
One-Arm Mode (LAN and WAN on the same NIC)
First, make sure the deployment mode (deploy-mode) is one-arm mode (default mode).
To switch to 10GbE one arm:
configure-network lan ix0
To switch to 10GbE one-arm failover (switch independent; does not need switch config):
configure-network lan ix0 ix1 failover
To switch to 1GbE one arm:
configure-network lan bge0
To switch to 1GbE one-arm failover (switch independent; does not need switch config):
configure-network lan bge0 bge1 failover
- The default TCP congestion algorithm has been changed to CUBIC.
- This has been shown to improve performance for WAN and LAN traffic in higher-latency environments, while maintaining performance for other traffic.
- Customers using virtual cloud instances, or with clients on high-latency connections, may see some improvement.
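The latency sensitivity mentioned above comes down to the bandwidth-delay product: the amount of data that must be in flight to keep a link full. A worked example, using illustrative link figures rather than filer measurements:

```python
# Bandwidth-delay product (BDP): bytes that must be "in flight" to fill a link.
# High-latency paths need large windows, which is where CUBIC's window growth
# outperforms older algorithms such as NewReno.
def bdp_bytes(bandwidth_bps: float, rtt_seconds: float) -> float:
    return bandwidth_bps * rtt_seconds / 8  # convert bits to bytes

# Example: a 1 Gbps cloud link with 100 ms round-trip time:
print(bdp_bytes(1e9, 0.100))  # 12500000.0 bytes (~12.5 MB of in-flight data)
```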
Jumbo Frame Support
Enabling jumbo frames can improve performance in high-speed (gigabit Ethernet or faster) networks; however, jumbo frames work only if they are fully supported by every device in the network path, and they increase CPU and memory load on the filer. Make sure that you understand the use of jumbo frames before enabling this feature.
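As a reference point for verifying a jumbo-frame path, the relationship between MTU and the largest non-fragmented ICMP test payload can be computed. The 9000-byte MTU below is a commonly used jumbo value, given here as an assumption rather than a filer default:

```python
# Largest ICMP echo payload that fits in a given MTU without fragmentation:
# subtract the IPv4 header (20 bytes) and the ICMP header (8 bytes).
def max_ping_payload(mtu: int) -> int:
    return mtu - 20 - 8

print(max_ping_payload(9000))  # 8972 - jumbo frame test payload
print(max_ping_payload(1500))  # 1472 - standard Ethernet test payload
```

A ping of this payload size with the don't-fragment flag set succeeds only if every device in the path supports the jumbo MTU, which is one way to confirm end-to-end support before enabling the feature.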
The following table describes the network settings you can configure.
Set the configuration for the GB1 interface:
Note: One-Arm deployments use GB1 for both client access and connecting to the object storage. GB1 is the only interface that needs to be configured for LAN traffic. Inline deployments use both the GB1 and GB2 interfaces. In this case, GB1 will be dedicated to client access to the filer.
Choose One-Arm (default) or Inline. If you choose Inline, this section displays the following additional settings for the GB2 interface and static route table.
Click Add Network Host to add a host to the local hostname file for address resolution without the use of an external DNS server. A host can be another filer, a Windows domain filer, or another system on the network.
Specify the following for each host, and click Add:
Add or remove additional entries as needed. To remove an entry, select its checkbox and click Delete. If you need to modify an entry, delete the existing entry and then add a new one.
Note: The capitalization used when first configuring the filer host name persists for all CloudFS configuration operations. You cannot change the capitalization at a later time; however, you can change to a different host name.
Add Static Route
If you select the Inline network configuration option, use this section to enter static routes for the GB2 interface. To add a route, click Add Static Route and enter the following information:
The NIC Teaming section lists the NICs installed in the filer. Hover over a colored icon to display details about the current configuration:
Select the toggle to activate that policy. You can specify and activate up to five policies. If the conditions for a specific policy are met, then all of the following policies are ignored.
|From Day / To Day, From Hour / To Hour||Set the day and time range for the bandwidth limit.|
|Upload (Mbps)||Maximum bandwidth for file upload.|
|Download (Mbps)||Maximum bandwidth for file download.|
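The first-match behavior described above (once a policy's conditions are met, all following policies are ignored) can be sketched as below. The field names and sample policies are illustrative, not the filer's internal representation:

```python
# First-match policy evaluation: policies are checked in order, and the first
# one whose day/hour window contains the current time wins.
def active_policy(policies, day, hour):
    for p in policies:
        if day in p["days"] and p["from_hour"] <= hour < p["to_hour"]:
            return p  # all following policies are ignored
    return None

policies = [
    {"name": "business-hours", "days": {"Mon", "Tue", "Wed", "Thu", "Fri"},
     "from_hour": 8, "to_hour": 18, "upload_mbps": 100, "download_mbps": 200},
    {"name": "default", "days": {"Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"},
     "from_hour": 0, "to_hour": 24, "upload_mbps": 500, "download_mbps": 500},
]
print(active_policy(policies, "Wed", 10)["name"])  # business-hours
print(active_policy(policies, "Sat", 10)["name"])  # default
```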
Enter the IP address of the iDRAC interface, then click the iDRAC link to connect to the iDRAC.
Proxy Server Settings
Click the slider to enable Proxy Server communication (http/https) to your object store. Currently, this is supported only for AWS S3.
Note: With PZOS 126.96.36.199 and above, Panzura also offers support for external communications with support. Currently, the proxy server works with one-arm mode only. *This setting is available under Configuration > Network Settings > Advanced.
*Only pertains to Panzura Filers running version 8