Wednesday, September 3, 2008

GPFS Administration


GPFS provides an administration model that is consistent with standard
AIX and Linux file system administration while providing extensions for the
clustering aspects of GPFS. These functions support cluster management
and other standard file system administration functions such as quotas,
snapshots, and extended access control lists.

GPFS provides functions that simplify cluster-wide tasks. A single GPFS
command can perform a file system function across the entire cluster and
most can be issued from any node in the cluster. These commands are
typically extensions to the usual AIX and Linux file system commands.
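
For instance, the administration commands follow an mm* naming
convention, and most can be run from any node. A short session might
look like the following (the device name gpfs1 is hypothetical, and
exact options should be checked against the GPFS documentation for
your release):

```shell
# List the nodes and configuration of the cluster; this can be
# issued from any node in the cluster
mmlscluster

# Show the attributes of a file system (hypothetical device "gpfs1")
mmlsfs gpfs1

# Mount the file system on all nodes with a single command
mmmount gpfs1 -a
```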

Rolling upgrades allow you to upgrade individual nodes in the cluster
while the file system remains online. GPFS Version 3.1 allowed you to
mix nodes at different patch levels; continuing that trend, GPFS
Version 3.2 lets you run a cluster with a mix of GPFS Version 3.1 and
GPFS Version 3.2 nodes.

Quotas enable the administrator to control and monitor file system
usage by users and groups across the cluster. GPFS provides commands
to generate quota reports of inode and data block usage for users,
groups, and filesets. In addition to traditional quota management,
GPFS has an API that provides high-performance metadata access,
enabling custom reporting on very large numbers of files.
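
As a sketch of the quota commands (the device name gpfs1 and user name
alice are hypothetical):

```shell
# Report inode and data block usage per user for file system "gpfs1"
mmrepquota -u gpfs1

# The same reports per group and per fileset
mmrepquota -g gpfs1
mmrepquota -j gpfs1

# Edit the limits for one user (opens the quota entry in an editor)
mmedquota -u alice
```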

A snapshot of an entire GPFS file system may be created to preserve
the file system's contents at a single point in time. The mechanism is
very efficient because a snapshot contains a map of the file system at
the time it was taken plus a copy of only the data that has changed
since then, using a copy-on-write technique. The snapshot function
allows a backup program, for example, to run concurrently with user
updates and still obtain a consistent copy of the file system as of
the time the snapshot was created. Snapshots also provide an online
backup capability that allows files to be recovered easily from common
problems such as accidental deletion of a file.
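
A typical snapshot workflow might look like this (device, snapshot,
and file names are hypothetical):

```shell
# Create a snapshot of file system "gpfs1" named "snap1"
mmcrsnapshot gpfs1 snap1

# List the snapshots that currently exist
mmlssnapshot gpfs1

# Recover an accidentally deleted file from the snapshot, which is
# visible under the .snapshots directory at the file system root
cp /gpfs1/.snapshots/snap1/home/alice/report.txt /gpfs1/home/alice/

# Delete the snapshot once it is no longer needed
mmdelsnapshot gpfs1 snap1
```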

An SNMP interface is introduced in GPFS Version 3.2 to allow
monitoring by network management applications. The SNMP agent provides
information on the GPFS cluster and generates traps when, for example,
a file system is mounted or modified, or a node fails. In GPFS Version
3.2 the SNMP agent runs only on Linux; you can still monitor a mixed
cluster of AIX and Linux nodes as long as the agent runs on a Linux
node.

GPFS provides support for the Data Management API (DMAPI) interface
which is IBM’s implementation of the X/Open data storage management API.
This DMAPI interface allows vendors of storage management applications
such as IBM Tivoli® Storage Manager (TSM) to provide Hierarchical Storage
Management (HSM) support for GPFS.

GPFS enhanced access control protects directories and files by
providing a means of specifying who should be granted access. On AIX,
GPFS supports NFS V4 access control lists (ACLs) in addition to
traditional ACL support. Traditional GPFS ACLs are based on the POSIX
model: they extend the base permissions, or standard file access
modes, of read (r), write (w), and execute (x) beyond the three
categories of file owner, file group, and other users, allowing
additional users and user groups to be defined. In addition, GPFS
introduces a fourth access mode, control (c), which governs who can
manage the ACL itself.
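
The ACL commands follow the same mm* convention. A sketch, with
hypothetical file and user names (the NFS V4 format applies on AIX):

```shell
# Display the ACL that protects a file
mmgetacl /gpfs1/projects/plan.txt

# Apply an ACL read from a file of ACL entries; a traditional GPFS
# ACL entry file might contain lines such as:
#   user::rwxc
#   group::r-x-
#   other::----
#   user:alice:rw--
mmputacl -i acl.txt /gpfs1/projects/plan.txt

# Edit the ACL interactively; the -k option selects the ACL format
mmeditacl -k nfs4 /gpfs1/projects/plan.txt
```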

In addition to providing application file service, GPFS file systems
may be exported to clients outside the cluster through NFS or Samba.
GPFS has long been used as the base for scalable NFS file service
infrastructures; in GPFS Version 3.2 that capability is integrated
and is called clustered NFS. Clustered NFS provides the tools
necessary to run a GPFS Linux cluster as a scalable NFS file server,
giving clients simultaneous access to a common set of data from
multiple nodes. The clustered NFS tools include monitoring of file
services, load balancing, and IP address failover.
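
As a rough outline only (the parameter names should be verified
against the GPFS 3.2 documentation, and the device name, node names,
and IP addresses here are all made up), enabling clustered NFS
involves designating a shared directory for server state, assigning
each server node a failover IP address, and exporting the file system
in the usual way:

```shell
# Designate a small shared directory the NFS servers use for state
mmchconfig cnfsSharedRoot=/gpfs1/.cnfs

# Give each NFS server node an IP address that can fail over
mmchnode --cnfs-interface=10.0.0.21 -N server1
mmchnode --cnfs-interface=10.0.0.22 -N server2

# Export the file system to clients on each server node
echo "/gpfs1 *(rw,fsid=745,sync)" >> /etc/exports
exportfs -ra
```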