Panzura Application Programming Interface

CloudFS supports configuration and management through a RESTful Application Programming Interface (API). The API provides open access to Filer configuration and management features, as well as a full range of data collection and processing options.

The API provides programmatic access to configuration and management options available in the WebUI.

Accessing API Syntax Information

In addition to this overview, detailed syntax information for the Panzura API is accessible through the WebUI.

To access API information for configuration and management:

  1. Log into the WebUI on a filer running CloudFS 8.0.0.0 or later.
  2. In the URL field, add the following to the end of the URL: /apidocs
  3. Press Enter.
  4. Right-click on swagger.json and select Save. A JSON blob containing the API request syntax appears in a browser window.
  5. Copy and paste the JSON blob into your favorite JSON formatter, then save the formatted version in a file for future reference (a scripted alternative is sketched below).
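
If you prefer to script steps 4 and 5, the following Python sketch downloads and formats the JSON blob. The swagger.json path and the disabled certificate verification are assumptions; adjust them for your environment.

import json
import requests

# Example filer address used in this guide; substitute your own.
FILER = "https://10.41.1.110"

# The exact path to swagger.json is an assumption here; use the link shown on
# the /apidocs page if your release serves it from a different location.
resp = requests.get(f"{FILER}/apidocs/swagger.json", verify=False)
resp.raise_for_status()

# Pretty-print the JSON blob and save it for future reference.
with open("panzura-api-formatted.json", "w") as f:
    json.dump(resp.json(), f, indent=2)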

API Authentication and Authorization

Authentication for Filer API sessions is based on the administrative username and password for the filer (or Master Filer, if deploying a cluster). A username that has administrative (read-write) access to the filer must be used.

Authorization for an individual API session is based on a session token generated by the filer for that session.

To begin an API management session, use the following request:

POST /pzapi/auth/login

The body of the request contains the administrative username and password for logging into the filer. In the following example, the default username and password (admin, admin) are used to authenticate to a filer through the API:
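
A rough Python sketch of such a login request, assuming a JSON body with username and password fields (confirm the exact field names against the API syntax):

import requests

# Example filer address used in this guide; substitute your own.
FILER = "https://10.41.1.110"

# Log in with the administrative credentials (defaults shown here).
# The JSON field names are assumptions; confirm them against the login syntax
# in the API console or swagger.json.
resp = requests.post(
    f"{FILER}/pzapi/auth/login",
    json={"username": "admin", "password": "admin"},
    verify=False,  # appliances commonly use self-signed certificates
)
resp.raise_for_status()
print(resp.json())  # the response body contains the session token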

If the credentials are valid, the response contains a session token. Every subsequent request in the same API management session must include this token.

To end the API session, use the following request:

POST /pzapi/auth/logout
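
Continuing the sketch above, a hedged example of carrying the session token and ending the session; passing the token in an Authorization header is an assumption, so confirm the exact mechanism against the API syntax:

import requests

FILER = "https://10.41.1.110"  # example filer address

# Pass the session token from the login response on every subsequent request.
# Carrying it in an Authorization header is an assumption; confirm the exact
# mechanism against the API syntax.
headers = {"Authorization": "<session-token-from-login-response>"}

# End the API session.
requests.post(f"{FILER}/pzapi/auth/logout", headers=headers, verify=False)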

WebUI API Console

Within the filer’s WebUI, you can access syntax information for the API and you can construct and run individual API requests and view the responses.

Although a typical application that integrates with the filer probably will not access the API this way, the WebUI’s API console provides an easy way to view the syntax and experiment with requests.

The API console is not merely a tutorial: any valid API request you send through the WebUI API console actually takes effect on the filer.

The API console does not include the SCS API requests.

Accessing API Syntax Information

To access API syntax information or test individual requests:

  1. Log onto the Filer WebUI.
  2. In the URL field, add the following to the end of the URL: /apidocs
  3. Press Enter.

In the following example, the API console on filer 10.41.1.110 is accessed:
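
Following the steps above, the console on that filer is reached by appending /apidocs to the WebUI URL:

https://10.41.1.110/apidocs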

Logging Into the API

To begin a management session over the API, you must send a login request. Within the request, include the username and password required for read-write access to the filer. The response contains an authorization token. Use this token to authenticate subsequent requests.

To log into an API session through the WebUI API console:

  1. In the Auth section, click on POST /pzapi/auth/login. The row expands to show syntax information along with a sample request and response.
  2. Click Try It Out.
  3. If applicable, edit the username and password in the request body to match the credentials for read-write access on the filer.
  4. Click Execute. The curl request formed by the API and the response received from the filer are shown.
  5. In the Response Body field, highlight and copy the session key returned by the API.

Authorizing Subsequent Requests

  1. At the top of the page, click Authorize. The Available Authorizations dialog opens.
  2. Paste the session key into both Value fields, then click both Authorize buttons. Each button changes from Authorize to Logout.
  3. Click x in the upper right corner to close the Available Authorizations dialog.

Sending Requests

After logging in and entering the authorization token, you can use the API console to try out API requests. For example, click the following request name in the API console to create an API request that adds a CIFS fileshare to the CloudFS:

PATCH /pzapi/config/cifsShare/

The following request creates CIFS share mytest1 at share path /cloudfs/rma90/mytest1:
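
A Python sketch of such a request, using the parameters described below; the body fields and the share name appended to the URL are assumptions based on this section, so confirm the full parameter set in the API console:

import requests

FILER = "https://10.41.1.110"  # example filer address
headers = {"Authorization": "<session-token-from-login-response>"}  # header name assumed

# Create CIFS share mytest1 at /cloudfs/rma90/mytest1. The parameter names follow
# the descriptions below; confirm the full body in the API console. Appending the
# share name to the URL follows the {sharename} pattern shown later in this section.
resp = requests.patch(
    f"{FILER}/pzapi/config/cifsShare/mytest1",
    headers=headers,
    json={
        "sharename": "mytest1",
        "sharepath": "/cloudfs/rma90/mytest1",
        "defaultReferral": "",
    },
    verify=False,
)
resp.raise_for_status()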

The sharename must be the same as the share name used in the API request URL. The sharepath must be the full path of the directory. The defaultReferral field can be empty.

After editing the request parameters, click Execute. The request is sent to the filer and the new share is created. To verify the new share, use the following request:

GET /pzapi/config/cifsShare/{sharename}
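
Continuing the sketch above (the requests import, FILER, and headers are reused from it), the verification call might look like this:

# Confirm that the new share exists by requesting it by name.
resp = requests.get(f"{FILER}/pzapi/config/cifsShare/mytest1", headers=headers, verify=False)
print(resp.json())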

Statistics Collection System

The Statistics Collection System (SCS) API requests are included on the API console. To send an SCS request, use the following syntax:

https://filer-ip/scs.cgi/command-string
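
For example, using the /data/list command string described below, the list of available data series on the example filer would be requested with:

https://10.41.1.110/scs.cgi/data/list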

Commonly Used SCS Requests

Here are some commonly used requests:

  • /data/list – Responds with a list of all available data series.
  • /report/list – Responds with a list of all available reports.
  • /data/[xml | json | txt]/series-name – Responds with the data from a specific data series.
  • /report/[xml | json | txt]/report-name – Responds with the data for a specific report.
  • /data/json/events?host=filer&gtype=eE – Responds with a list of events for the specified filer. See Syntax for Specifying Filers.

 

Syntax for Specifying Filers

A filer can be specified using the following syntax:

  • Single FQDN host name
  • Comma-separated FQDN list
  • localhost
  • *

The first two options are FQDN-based.

The localhost option applies the query to the local host only. This is useful for scripts that query each filer independently, without the requirement to embed the machine FQDN in each request. The * option indicates all available FQDNs for which this specific filer has data: on a Subordinate, this is the local host; on a Master, it includes the Master itself and all of its Subordinates. The default is *.

If the host field is unassigned, the results include all filers that are reporting to the filer being queried. In general, API callers make requests using host=localhost.
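
For example, the event request shown earlier, restricted to the local filer:

https://10.41.1.110/scs.cgi/data/json/events?host=localhost&gtype=eE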

The [xml | json | txt] options specify serialization formats.

SCS Request Options

The SCS API request options are described below.

Data Format

Data sent to and received from the SCS API uses the following format:

List-of [ {Timestamp, Value}, {Timestamp, Value}, …]

Timestamps are Unix epoch timestamps with per-second granularity.

All values are 64-bit integers.

String values in a list are delimited by the "at" sign ( @ ). Single string values do not use the delimiter.
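
As an illustration (the timestamps and values here are invented), a short numeric series and a multi-value string field would look like:

[ {1700000000, 100}, {1700000060, 105} ]

value1@value2@value3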

Data Access and Output

  • scs.cgi – REST CGI binary executable that supports statistics access.
  • /data/[xml | json | txt]/series-name – Responds with data for a single series in XML, JSON, or text format.
  • /report/[xml | json | txt]/report-name – Responds with the requested report, in the specified format.
  • /data/list – Responds with a list of the names of all the data series.
  • /report/list – Responds with a list of all available reports, by name.

 

REST URL Options

The following options are supported in the request URLs of SCS API requests:

host=filer-fqdn

The filer-fqdn can be a comma-delimited set of one or more host names. The following characters can be used for wildcard matching:

? – Matches on any single character.

* – Matches on all characters up to any length.

Examples:

host=hostname1.com,hostname2.edu,hostname3.gov,*.com

This option matches on the specified hostnames (hostname1.com, hostname2.edu, and hostname3.gov), and on any other hostname that ends with ".com".

host=cc1-ca.pixel*

This option matches any hostname that begins with "cc1-ca.pixel".

range=start-seconds,end-seconds

Specifies the time range for the data being requested.

start-seconds – Specifies the start of the time range.

For a specific date or time, specify a positive integer: the number of seconds after the beginning of the Unix epoch at which the range begins. For recent activity up to the present, instead specify a negative integer for how many seconds back to begin collection. For example, to get data for the most recent 5 minutes, use -300 as the start-seconds value.

end-seconds – Specifies the end of the time range. To indicate the present, use 999999999.

Examples:

range=0,999999999

This range covers all available data, from the beginning of the Unix epoch (0) to the present (999999999).

range=-600,999999999

This range covers data for the most recent 10 minutes.

interval=seconds

Specifies the interval, in seconds, between returned data points. Normally, the interval is dynamic and returns about 2K data points; for example, a series of 1M points would be collated down to about 2K by default.

If you specify an interval, it becomes the goal interval. If interval=1, all of the data is returned. CloudFS does not sample anything at an interval shorter than 1 second.

points=num

Specifies the number of data points to include in the response.

filter=filter-string

Applies a string filter to the data. Filters are used to filter logs and events by contents. For example, to filter on events that contain the string Cache miss, use the following filter:

filter=Cache miss

gtype=[cge][EDAsSN]

Specifies how to package the data, based on the graph type.

See Graph Types and Data Packaging.

Graph Types and Data Packaging

The gtype option specifies how to package the data, based on the graph type:

  • cge – Specifies the types of values to retrieve from CloudFS:
    • c – Counters. These are monotonic values such as LAN byte counters.
    • g – Gauges. These are up/down values used to show changes over time, such as CPU load.
    • e – Events, logs, and other string types.
  • EDAsSN – Specifies how to process or prepare the data:
    • E – Makes events edge triggered. An edge triggered event is one that generates an event message only when the condition is first detected (the beginning edge of the condition). No additional event messages are generated for the condition while it continues to occur.
    • D – Makes data differentials. The deltas between each data point are returned, instead of the data points themselves. For example, if the D option is used, a series of LAN byte counts (100 105 125 137) is returned as the deltas between values (100, 5, 20, 12). The first value is the delta from 0. Each additional value is the delta from the previous value.
    • A – Aggregates multiple data series together into a single data series. The series values are summed for each aligned unit of time.
    • S – Normalizes the data into 1-second increments. This option is useful when asking questions such as "how many times per second is this condition occurring?". Since samples can be taken at varying intervals, this option always normalizes to a per-second integer based on the real sample time stamps.
    • N – For events, returns only the events that are active "now". This option returns only those events that are occurring at the time the data is collected. For edge and level events, this option indicates the current state of CloudFS, regardless of the time at which the sample was taken (which typically is hourly).
    • s – Simplifies the dataset into major pivots (inflection points) in a graph. In many cases using the s option provides the same graph results (same curves) as unsimplified data but uses fewer system resources to process.

For example, without simplification, a statistic counter that has a constant value of 100, sampled at a 5-minute interval (12 samples per hour), returns 12 samples/hour * 24 hours * 7 days = 2016 samples for a week of data. For the same data, if simplification is used, SCS returns 2 samples, one at the beginning and one at the end of the total requested period.

In either case (with or without simplification), the graphed data looks the same, but simplification saves significantly on compute and bandwidth. For data series with many high/low values, expect all of the peaks and valleys to be retained, while intermediary points that do not substantially change the shape of the graph are dropped.

The s option is always used with the points=num option set.
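
Putting these options together, a request for a simplified week of counter data from the local filer might take the following form (series-name is a placeholder for a name returned by /data/list):

https://10.41.1.110/scs.cgi/data/json/series-name?host=localhost&range=-604800,999999999&gtype=cs&points=2000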