Wednesday, September 3, 2008

Data availability

GPFS is fault tolerant and can be configured for continued access to data
even if cluster nodes or storage systems fail. This is accomplished through
robust clustering features and support for data replication.
GPFS continuously monitors the health of the file system components.
When failures are detected, appropriate recovery action is taken automatically.
Extensive logging and recovery capabilities maintain metadata consistency
when application nodes holding locks or performing services fail. Data
replication is available for journal logs, metadata, and data, allowing
continuous operation even if a path to a disk, or a disk itself, fails.
GPFS Version 3.2 further enhances clustering robustness with connection
retries. If the LAN connection to a node fails, GPFS automatically tries to
reestablish the connection before marking the node unavailable. This provides
better uptime in environments experiencing intermittent network issues.
Using these features along with a high availability infrastructure ensures a
reliable enterprise storage solution.

GPFS Administration

Administration

GPFS provides an administration model that is consistent with standard
AIX and Linux file system administration while providing extensions for the
clustering aspects of GPFS. These functions support cluster management
and other standard file system administration functions such as quotas,
snapshots, and extended access control lists.
GPFS provides functions that simplify cluster-wide tasks. A single GPFS
command can perform a file system function across the entire cluster and
most can be issued from any node in the cluster. These commands are
typically extensions to the usual AIX and Linux file system commands.
Rolling upgrades allow you to upgrade individual nodes in the cluster while
the file system remains online. With GPFS Version 3.1 you could mix GPFS
3.1 nodes with different patch levels. Continuing that trend, in GPFS Version
3.2 you can run a cluster with a mix of GPFS Version 3.1 and GPFS Version
3.2 nodes.
Quotas enable the administrator to control and monitor file system usage
by users and groups across the cluster. GPFS provides commands to
generate quota reports including user, group and fileset inode and data block
usage. In addition to traditional quota management, GPFS has an API that
provides high performance metadata access enabling custom reporting
options on very large numbers of files.
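As a sketch of the kind of custom quota reporting described above, the following shell fragment filters a simplified usage report and flags users who have exceeded their soft limit. The three-column report format here is purely illustrative, not the actual output of any GPFS command:

```shell
# Hypothetical quota report: user name, blocks used (KB), soft limit (KB).
# Real GPFS quota reports have a different, richer layout.
cat > /tmp/quota_report.txt <<'EOF'
alice 120000 100000
bob    40000 100000
carol 250000 200000
EOF

# Print each user whose block usage exceeds the soft limit.
awk '$2 > $3 { print $1, "over soft limit by", $2 - $3, "KB" }' /tmp/quota_report.txt
```

The same pattern scales to very large reports, since awk streams the file line by line.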
A snapshot of an entire GPFS file system may be created to preserve the
file system's contents at a single point in time. This is a very efficient
mechanism because a snapshot contains a map of the file system at the time
it was taken and a copy of only the file system data that has been changed
since the snapshot was created. This is done using a copy-on-write technique.
The snapshot function allows a backup program, for example, to run
concurrently with user updates and still obtain a consistent copy of the file
system as of the time the snapshot was created. Snapshots provide an
online backup capability that allows files to be recovered easily from common
problems such as the accidental deletion of a file.
An SNMP interface is introduced in GPFS Version 3.2 to allow monitoring
by network management applications. The SNMP agent provides information
on the GPFS cluster and generates traps when a file system is mounted or
modified, or when a node fails. In GPFS Version 3.2 the SNMP agent runs only
on Linux. You can monitor a mixed cluster of AIX and Linux nodes as long as
the agent runs on a Linux node.
GPFS provides support for the Data Management API (DMAPI) interface
which is IBM’s implementation of the X/Open data storage management API.
This DMAPI interface allows vendors of storage management applications
such as IBM Tivoli® Storage Manager (TSM) to provide Hierarchical Storage
Management (HSM) support for GPFS.
GPFS enhanced access control protects directories and files by providing a
means of specifying who should be granted access. On AIX, GPFS supports
NFS V4 access control lists (ACLs) in addition to traditional ACL support.
Traditional GPFS ACLs are based on the POSIX model. Access control lists
(ACLs) extend the base permissions, or standard file access modes, of read
(r), write (w), and execute (x) beyond the three categories of file owner, file
group, and other users, to allow the definition of additional users and user
groups. In addition, GPFS introduces a fourth access mode, control (c), which
can be used to govern who can manage the ACL itself.
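As an illustration of the control mode, a traditional GPFS ACL file (the kind applied with mmputacl and displayed with mmgetacl) might look like the following. The principal names and entries are illustrative only; note the fourth position, c, which grants the right to manage the ACL itself:

```
#owner:jdoe
#group:staff
user::rwxc
group::r-x-
other::----
user:backupadm:r--c
```

Here the hypothetical user backupadm can read the file and modify its ACL, but cannot write to or execute it.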
In addition to providing application file service GPFS file systems may be
exported to clients outside the cluster through NFS or Samba. GPFS has
been used for a long time as the base for a scalable NFS file service
infrastructure. Now that feature is integrated in GPFS Version 3.2 and is called
clustered NFS. Clustered NFS provides all the tools necessary to run a GPFS
Linux cluster as a scalable NFS file server. This allows a GPFS cluster to
provide scalable file service by providing simultaneous access to a common
set of data from multiple nodes. The clustered NFS tools include monitoring of
file services, load balancing, and IP address failover.

GPFS file system

The file system

A GPFS file system is built from a collection of disks which contain the file
system data and metadata. A file system can be built from a single disk or
contain thousands of disks, storing petabytes of data. A GPFS cluster can
contain up to 256 mounted file systems. There is no limit placed upon the
number of simultaneously opened files within a single file system. As an
example, current GPFS customers are using single file systems up to 2 PB in
size, and others containing tens of millions of files.

Application interfaces

Applications can access files through standard UNIX® file system
interfaces or through enhanced interfaces available for parallel programs.
Parallel and distributed applications can be scheduled on GPFS clusters to
take advantage of the shared access architecture. This makes GPFS a key
component in many grid-based solutions. Parallel applications can
concurrently read or update a common file from multiple nodes in the cluster.
GPFS maintains the coherency and consistency of the file system using
sophisticated byte-range locking, token (lock) management, and logging.
In addition to standard interfaces GPFS provides a unique set of extended
interfaces which can be used to provide high performance for applications with
demanding data access patterns. These extended interfaces are more
efficient for traversing a file system, for example, and provide more features
than the standard POSIX interfaces.

Performance and scalability

GPFS provides unparalleled performance especially for larger data objects
and excellent performance for large aggregates of smaller objects. GPFS
achieves high performance I/O by:
• Striping data across multiple disks attached to multiple nodes.
• Efficient client side caching.
• Supporting a large block size, configurable by the administrator, to fit
I/O requirements.
• Utilizing advanced algorithms that improve read-ahead and write-behind
file functions.
• Using byte-range locking, based on a very sophisticated and scalable token
management system, to provide data consistency while allowing
multiple application nodes concurrent access to the files.
GPFS recognizes typical access patterns like sequential, reverse
sequential and random and optimizes I/O access for these patterns.
GPFS token (lock) management coordinates access to shared disks
ensuring the consistency of file system data and metadata when different
nodes access the same file. GPFS has the ability for multiple nodes to act as
token managers for a single file system. This allows greater scalability for high
transaction workloads.
Along with distributed token management, GPFS provides scalable
metadata management by allowing all nodes of the cluster accessing the file
system to perform file metadata operations. This key and unique feature
distinguishes GPFS from other cluster file systems which typically have a
centralized metadata server handling fixed regions of the file namespace. A
centralized metadata server can often become a performance bottleneck for
metadata intensive operations and can represent a single point of failure.
GPFS solves this problem by managing metadata at the node which is using
the file or in the case of parallel access to the file, at a dynamically selected
node which is using the file.
What is GPFS?
The IBM General Parallel File System (GPFS) provides unmatched
performance and reliability with scalable access to critical file data. GPFS
distinguishes itself from other cluster file systems by providing concurrent
high-speed file access to applications executing on multiple nodes of an AIX
cluster, a Linux cluster, or a heterogeneous cluster of AIX and Linux nodes. In
addition to providing file storage capabilities, GPFS provides storage
management, information life cycle tools, centralized administration and
allows for shared access to file systems from remote GPFS clusters.
GPFS provides scalable high-performance data access from a two node
cluster providing a high availability platform supporting a database application,
for example, to 2,000 nodes or more used for applications like modeling
weather patterns. As a general statement, up to 512 Linux nodes or 128 AIX
nodes with access to one or more file systems are supported; larger
configurations exist by special arrangement with IBM. The largest existing
configurations exceed 2,000 nodes. GPFS has been available on AIX since
1998 and on Linux since 2001. It has proven time and again on some of the
world's most powerful supercomputers to provide efficient use of disk
bandwidth.
GPFS was designed from the beginning to support high performance
computing (HPC) and has been proven very effective for a variety of
applications. It is installed in clusters supporting relational databases, digital
media, and scalable file serving. These applications are used across many
industries, including financial services, retail, and government. Being tested
in very demanding, large environments makes GPFS a solid solution for
applications of any size.
GPFS supports various system types including the IBM System p™ family
and machines based on Intel® or AMD processors such as an IBM System
x™ environment. Supported operating systems for GPFS Version 3.2 include
AIX V5.3 and selected versions of Red Hat and SUSE Linux distributions.
This paper introduces a number of GPFS features and describes core
concepts. This includes the file system, high availability features, information
lifecycle management (ILM) tools and various cluster architectures.

IBM General Parallel File System (GPFS)

GPFS (General Parallel File System) is a high-performance shared-disk clustered file system developed by IBM.

Like some other cluster filesystems, GPFS provides concurrent high-speed file access to applications executing on multiple nodes of clusters. It can be used with AIX 5L clusters, Linux clusters, or a heterogeneous cluster of AIX and Linux nodes. In addition to providing filesystem storage capabilities, GPFS provides tools for management and administration of the GPFS cluster and allows for shared access to file systems from remote GPFS clusters.

GPFS has been available on AIX since 1998 and on Linux since 2001, and is offered as part of the IBM System Cluster 1350.

Versions

GPFS 3.2, September 2007
GPFS 3.2.1-2, April 2008
GPFS 3.2.1-4, July 2008
GPFS 3.1
GPFS 2.3.0-29

Architecture

GPFS provides high performance by allowing data to be accessed from multiple computers at once. Most existing file systems are designed for a single-server environment, and adding more file servers does not improve performance. GPFS provides higher input/output performance by "striping" blocks of data from individual files across multiple disks and reading and writing these blocks in parallel. Other features provided by GPFS include high availability, support for heterogeneous clusters, disaster recovery, security, DMAPI, HSM, and ILM.
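The striping idea can be illustrated with simple round-robin arithmetic. This is only a toy model of block placement, not GPFS's actual allocation policy:

```shell
# Toy round-robin placement: block i of a file lands on disk (i mod N).
# GPFS's real allocator is far more sophisticated; this just shows the idea
# that consecutive blocks spread across disks and can be read in parallel.
num_disks=4
for block in 0 1 2 3 4 5 6 7; do
  echo "block $block -> disk $((block % num_disks))"
done
```

With four disks, blocks 0 to 3 land on disks 0 to 3 and the cycle then repeats, so a sequential read of eight blocks keeps all four disks busy.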

Saturday, August 16, 2008

Restoring the Virtual I/O Server

Since there are four different ways to back up the Virtual I/O Server, there are four corresponding ways to restore it.

Restoring from a tape or DVD

To restore the Virtual I/O Server from tape or DVD, follow these steps:

1. Specify the Virtual I/O Server partition to boot from the tape or DVD by
using the bootlist command or by altering the bootlist in the SMS menu.
2. Insert the tape or DVD into the drive.
3. From the SMS menu, select to install from the tape/DVD drive.
4. Follow the installation steps according to the system prompts.

Restoring the Virtual I/O Server from a remote file system using a nim_resources.tar file

To restore the Virtual I/O Server from a nim_resources.tar image in a file system, perform the following steps:

1. Run the installios command without any flags from the HMC command line.
a) Select the Managed System where you want to restore your Virtual I/O Server
from the objects of type "managed system" found by installios command.
b) Select the VIOS Partition where you want to restore your system from the
objects of type "virtual I/O server partition" found

c) Select the Profile from the objects of type "profile" found.
d) Enter the source of the installation images [/dev/cdrom]:
server:/exported_dir
e) Enter the client's intended IP address:
f) Enter the client's intended subnet mask:
g) Enter the client's gateway:
h) Enter the client's speed [100]:
i) Enter the client's duplex [full]:
j) Would you like to configure the client's network after the installation
[yes]/no?

2. When the restoration is finished, open a virtual terminal connection (for
example, using telnet) to the Virtual I/O Server that you restored. Some
additional user input might be required.



Note: The ability to run the installios command from the NIM server against the nim_resources.tar file is enabled with APAR IY85192.


Restoring the Virtual I/O Server from a remote file system using a mksysb image
To restore the Virtual I/O Server from a mksysb image in a file system using NIM, complete the following tasks:

1. Define the mksysb file as a NIM object by running the nim command.
#nim -o define -t mksysb -a server=master -a
location=/export/ios_backup/filename.mksysb objectname
objectname is the name by which NIM registers and recognizes the mksysb
file.
2. Define a SPOT resource for the mksysb file by running the nim command.
#nim -o define -t spot -a server=master -a location=/export/ios_backup/
SPOT -a source=objectname SPOTname
SPOTname is the name of the SPOT resource for the mksysb file.
3. Install the Virtual I/O Server from the mksysb file using the smit command.
#smit nim_bosinst
The following entry fields must be filled:
“Installation type” => mksysb
“Mksysb” => the objectname chosen in step 1
“Spot” => the SPOTname chosen in step 2
4. Start the Virtual I/O Server logical partition.
a) On the HMC, right-click the partition to open the menu.
b) Click Activate. The Activate Partition menu opens with a selection of
partition profiles. Be sure the correct profile is highlighted.
c) Select the Open a terminal window or console session check box to open a
virtual terminal (vterm) window.
d) Click (Advanced...) to open the advanced options menu.
e) For the Boot mode, select SMS.
f) Click OK to close the advanced options menu.
g) Click OK. A vterm window opens for the partition.
h) In the vterm window, select Setup Remote IPL (Initial Program Load).
i) Select the network adapter that will be used for the installation.
j) Select IP Parameters.
k) Enter the client IP address, server IP address, and gateway IP address.
Optionally, you can enter the subnet mask. After you have entered these
values, press Esc to return to the Network Parameters menu.
l) Select Ping Test to ensure that the network parameters are properly
configured. Press Esc twice to return to the Main Menu.
m) From the Main Menu, select Boot Options.
n) Select Install/Boot Device.
o) Select Network.
p) Select the network adapter whose remote IPL settings you previously
configured.
q) When prompted for Normal or Service mode, select Normal.
r) When asked if you want to exit, select Yes.



Integrated Virtualization Manager (IVM) Consideration

If your Virtual I/O Server is managed by the IVM, you need to back up the partition profile data for the management partition and its clients before backing up your system: although the IVM is integrated with the Virtual I/O Server, the LPAR profiles are not saved by the backupios command.

There are two ways to perform this backup:
From the IVM Web Interface
1) From the Service Management menu, click Backup/Restore
2) Select the Partition Configuration Backup/Restore tab
3) Click Generate a backup

From the Virtual I/O Server CLI
1) Run the following command
#bkprofdata -o backup

Both of these methods generate a file named profile.bak containing the LPAR configuration information. When using the Web interface, the default path for the file is /home/padmin; when performing the backup from the CLI, the default path is /var/adm/lpm. This path can be changed using the -l flag. Only ONE such file can be present on the system, so each time bkprofdata is issued or the Generate a backup button is pressed, the file is overwritten.

To restore the LPAR profiles, you can use either the GUI or the CLI.

From the IVM Web Interface
1) From the Service Management menu, click Backup/Restore
2) Select the Partition Configuration Backup/Restore tab
3) Click Restore Partition Configuration

From the Virtual I/O Server CLI
1) Run the following command
#rstprofdata -l 1 -f /home/padmin/profile.bak

It is not possible to restore a single partition profile. To restore the LPAR profiles, none of the profiles included in profile.bak may already be defined in the IVM.

Backup of Virtual I/O Server

Backing up the Virtual I/O Server

There are four different ways to back up and restore the Virtual I/O Server, as illustrated in the following table.

Backup method Restore method
To tape From bootable tape
To DVD From bootable DVD
To remote file system From HMC using the NIMoL facility and installios
To remote file system From an AIX NIM server


Backing up to a tape or DVD-RAM

To back up the Virtual I/O Server to a tape or a DVD-RAM, perform the following steps:

1. Check the status and the name of the tape/DVD drive
#lsdev | grep rmt (for tape)
#lsdev | grep cd (for DVD)

2. If it is Available, back up the Virtual I/O Server with the appropriate command
#backupios -tape rmt#
#backupios -cd cd#

If the Virtual I/O Server backup image does not fit on one DVD, the backupios command provides instructions for disk replacement and removal until all the volumes have been created. This command creates one or more bootable DVDs or tapes that you can use to restore the Virtual I/O Server.

Backing up the Virtual I/O Server to a remote file system by creating a nim_resources.tar file

The nim_resources.tar file contains all the necessary resources to restore the Virtual I/O Server, including the mksysb image, the bosinst.data file, the network boot image, and the SPOT resource.
The NFS export should allow root access to the Virtual I/O Server; otherwise the backup will fail with permission errors.

To back up the Virtual I/O Server to a file system, perform the following steps:

1. Create a mount directory where the backup file will be written
#mkdir /backup_dir

2. Mount the exported remote directory on the directory created in step 1.
#mount server:/exported_dir /backup_dir

3. Back up the Virtual I/O Server with the following command
#backupios -file /backup_dir

The above command creates a nim_resources.tar file that you can use to restore the Virtual I/O Server from the HMC.

Note: The ability to run the installios command from the NIM server against the nim_resources.tar file is enabled with APAR IY85192.


The backupios command empties the target_disk_data section of bosinst.data and sets RECOVER_DEVICES=Default. This allows the mksysb file generated by the command to be cloned to another logical partition. If you plan to use the nim_resources.tar image to install to a specific disk, then you need to repopulate the target_disk_data section of bosinst.data and replace this file in the nim_resources.tar. All other parts of the nim_resources.tar image must remain unchanged.

Procedure to modify the target_disk_data in the bosinst.data

1. Extract the bosinst.data from the nim_resources.tar
#tar -xvf nim_resources.tar ./bosinst.data

2. The following is an example of the target_disk_data stanza of the bosinst.data generated by backupios.
target_disk_data:
LOCATION =
SIZE_MB =
HDISKNAME =

3. Fill in the value of HDISKNAME with the name of the disk to which you want to restore

4. Put the modified bosinst.data back into the nim_resources.tar image
#tar -uvf nim_resources.tar ./bosinst.data

If you don't remember on which disk your Virtual I/O Server was previously installed, you can also view the original bosinst.data and look at the target_disk_data stanza.
Use the following steps

1. Extract the bosinst.data from the nim_resources.tar
#tar -xvf nim_resources.tar ./bosinst.data
2. Extract the mksysb from the nim_resources.tar
#tar -xvf nim_resources.tar ./5300-00_mksysb
3. Extract the original bosinst.data
#restore -xvf ./5300-00_mksysb ./var/adm/ras/bosinst.data
4. View the original target_disk_data
#grep -p target_disk_data ./var/adm/ras/bosinst.data

The above command displays something like the following:

target_disk_data:
PVID = 00c5951e63449cd9
PHYSICAL_LOCATION = U7879.001.DQDXYTF-P1-T14-L4-L0
CONNECTION = scsi1//5,0
LOCATION = 0A-08-00-5,0
SIZE_MB = 140000
HDISKNAME = hdisk0

5. Replace ONLY the target_disk_data stanza in the extracted ./bosinst.data with the original one
6. Add the modified file back to the nim_resources.tar
#tar -uvf nim_resources.tar ./bosinst.data
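The extract, edit, and update cycle described above can be exercised end to end with plain tar and sed. This sketch builds a scratch stand-in for nim_resources.tar in a temporary directory; the stanza contents and the disk name hdisk0 are illustrative:

```shell
# Work in a scratch directory; nim_resources.tar here is a stand-in archive,
# not a real Virtual I/O Server backup.
cd "$(mktemp -d)"
cat > ./bosinst.data <<'EOF'
target_disk_data:
LOCATION =
SIZE_MB =
HDISKNAME =
EOF
tar -cf nim_resources.tar ./bosinst.data

# Extract the member, fill in the target disk, and update the archive.
tar -xf nim_resources.tar ./bosinst.data
sleep 1   # tar -u compares mtimes at one-second granularity
sed -i 's/^HDISKNAME =.*/HDISKNAME = hdisk0/' ./bosinst.data
tar -uf nim_resources.tar ./bosinst.data

# The archive now holds both copies of the member; on extraction the
# later (modified) copy wins.
rm ./bosinst.data
tar -xf nim_resources.tar
grep HDISKNAME ./bosinst.data   # prints: HDISKNAME = hdisk0
```

This is exactly why `tar -uvf` works in the procedure above: the updated member is appended, and a subsequent restore reads the newer copy.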


Backing up the Virtual I/O Server to a remote file system by creating a mksysb image

You can also restore the Virtual I/O Server from a NIM server. One way to do so is from a mksysb image of the Virtual I/O Server. If you plan to restore the Virtual I/O Server from a mksysb image using a NIM server, verify that the NIM server is at the latest release of AIX.

To back up the Virtual I/O Server to a file system, perform the following steps:

1. Create a mount directory where the backup file will be written
#mkdir /backup_dir
2. Mount the exported remote directory on the directory created in step 1.
#mount NIM_server:/exported_dir /backup_dir
3. Back up the Virtual I/O Server with the following command
#backupios -file /backup_dir/filename.mksysb -mksysb


Updating the VIO Server patch level

Applying updates from a local hard disk

To apply the updates from a directory on your local hard disk, follow one of these two procedures, depending on your currently installed level of VIOS.

A. If the current level of the VIOS is earlier than V1.2.0.0 (V1.0 or V1.1):

NOTE:
If you are updating from VIOS level 1.1, you must update to the 10.1 level of the Fix Pack before updating to the 11.1 level of Fix Pack. In other words, if you are at level 1.1, updating to the 11.1 Fix Pack is a two-step process: First, update to version 10.1 Fix Pack, and then update to the 11.1 Fix Pack.

Contact your IBM Service Representative to obtain the VIOS 10.1 Fix Pack.

After you install the 10.1 Fix Pack, follow these steps to install the 11.1 Fix Pack.

Log in to the VIOS as the user padmin.
Create a directory on the Virtual I/O Server.
$ mkdir
Using ftp, transfer the update file(s) to the directory you created.
Apply the update by running the updateios command
$ updateios -accept -dev
Verify that the update was successful by checking the results of the updateios command and by running the ioslevel command. It should indicate that the ioslevel is now V1.5.2.1-FP-11.1.
$ ioslevel

B. If the current level of the VIOS is V1.2 through V1.5:

Log in to the VIOS as the user padmin.
Create a directory on the Virtual I/O Server.
$ mkdir
Using ftp, transfer the update file(s) to the directory you created.
Apply the update by running the updateios command
$ updateios -accept -install -dev
Verify that the update was successful by checking the results of the updateios command and by running the ioslevel command. It should indicate that the ioslevel is now V1.5.2.1-FP-11.1.
$ ioslevel


NOTE: If you are updating from an ioslevel prior to 1.3.0.1, the updateios command may indicate several failures (such as missing requisites) during fix pack installation. These messages are expected. Proceed with the update if you are prompted to "Continue with the installation [y/n]".

Applying updates from a remotely mounted file system

If the remote file system is to be mounted read-only, follow one of these two procedures, depending on your currently installed level of VIOS.

A. If the current level of the VIOS is earlier than V1.2.0.0 (V1.0 or V1.1):
NOTE:
If you are updating from VIOS level 1.1, you must update to the 10.1 level of the Fix Pack before updating to the 11.1 level of Fix Pack. In other words, if you are at level 1.1, updating to the 11.1 Fix Pack is a two-step process: First, update to version 10.1 Fix Pack, and then update to the 11.1 Fix Pack.

Contact your IBM Service Representative to obtain the VIOS 10.1 Fix Pack.

After you install the 10.1 Fix Pack, follow these steps to install the 11.1 Fix Pack.

Log in to the VIOS as the user padmin.
Mount the remote directory onto the Virtual I/O Server.
$ mount remote_machine_name:directory /mnt
Apply the update by running the updateios command.
$ updateios -accept -dev /mnt
Verify that the update was successful by checking the results of the updateios command and by running the ioslevel command. It should indicate that the ioslevel is now V1.5.2.1-FP-11.1.
$ ioslevel
B. If the current level of the VIOS is V1.2 through V1.5:

Log in to the VIOS as the user padmin.
Mount the remote directory onto the Virtual I/O Server.
$ mount remote_machine_name:directory /mnt
Apply the update by running the updateios command
$ updateios -accept -install -dev /mnt
Verify that the update was successful by checking the results of the updateios command and by running the ioslevel command. It should indicate that the ioslevel is now V1.5.2.1-FP-11.1.
$ ioslevel
Applying updates from the CD/DVD drive

This fix pack can be burned onto a CD by using the ISO image file(s). After the CD has been created, follow one of these two procedures, depending on your currently installed level of VIOS.

A. If the current level of the VIOS is earlier than V1.2.0.0 (V1.0 or V1.1):
NOTE:
If you are updating from VIOS level 1.1, you must update to the 10.1 level of the Fix Pack before updating to the 11.1 level of Fix Pack. In other words, if you are at level 1.1, updating to the 11.1 Fix Pack is a two-step process: First, update to version 10.1 Fix Pack, and then update to the 11.1 Fix Pack.

Contact your IBM Service Representative to obtain the VIOS 10.1 Fix Pack.

After you install the 10.1 Fix Pack, follow these steps to install the 11.1 Fix Pack.

Log in to the VIOS as the user padmin.
Place the CD-ROM into the drive assigned to the VIOS.
Apply the update by running the updateios command:
$ updateios -accept -dev /dev/cdX
where X is the device number 0-N assigned to the VIOS.
Verify that the update was successful by checking the results of the updateios command and by running the ioslevel command. It should indicate that the ioslevel is now V1.5.2.1-FP-11.1.
$ ioslevel
B. If the current level of the VIOS is V1.2 through V1.5:

Log in to the VIOS as the user padmin.
Place the CD-ROM into the drive assigned to the VIOS.
Apply the update by running the following updateios command:
$ updateios -accept -install -dev /dev/cdX
where X is the device number 0-N assigned to the VIOS.
Verify that the update was successful by checking the results of the updateios command and by running the ioslevel command. It should indicate that the ioslevel is now V1.5.2.1-FP-11.1.
$ ioslevel

Expanding a rootvg disk in a VIO environment where two VIO servers have been implemented for redundancy

This article describes the procedure for expanding a rootvg volume group for a POWER5 LPAR where two VIO Servers have been implemented for redundancy. It also assumes that the rootvg is mirrored across both VIO Servers. This procedure is not supported by IBM, but it does work.
POWER5 LPAR:
• Begin by unmirroring your rootvg and removing hdisk1 from the rootvg volume group. If there are any swap or dump devices on this disk, you may need to remove them before you can remove hdisk1 from the volume group.

• Once the disk has been removed from the rootvg, remove it from the LPAR by executing the following:
#rmdev -dl hdisk1

• Now execute the bosboot command and update your bootlist, since hdisk1 has been removed and is no longer part of the system:
#bosboot -a
#bootlist -o -m normal hdisk0

VIO Server (where hdisk1 was created):

• Remove the device from the VIO Server using the rmdev command:
#rmdev -dev <bckcnim_hdisk1>

• Next you will need to access the AIX OS part of the VIO Server by executing:
#oem_setup_env

• Now you have two options: extend the existing logical volume or, if there is enough disk space left, create a new one. In this example I will be using bckcnim_lv. Run smitty extendlv to add additional LPs, or smitty mklv to create a new logical volume.

• Exit out of oem_setup_env by just typing "exit" at the OS prompt.

• Now that you are back within the restricted shell of the VIO Server, execute the following command. You can use whatever device name you wish; I used bckcnim_hdisk1 just for example purposes:
#mkvdev -vdev bckcnim_lv -vadapter <vhost#> -dev bckcnim_hdisk1

POWER5 LPAR:
• Execute cfgmgr to add the new hdisk1 back to LPAR:
#cfgmgr

• Add hdisk1 back to the rootvg volume group using the extendvg command or smitty extendvg.

• Mirror rootvg using the mirrorvg command or smitty mirrorvg.

• Let the mirror synchronization run in the background and wait for it to complete. This is very important: the sync must finish before you do anything with the logical volume that backs hdisk0.

• Now you must execute bosboot again and update the bootlist again:
#bosboot -a
#bootlist -o -m normal hdisk0 hdisk1
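The LPAR-side teardown at the start of this procedure is order-sensitive, so it can help to keep it as a script. This is a dry-run sketch that only echoes each command (set run= to empty to execute for real); hdisk0/hdisk1 are the example disk names from this article.

```shell
# Dry-run sketch of the LPAR-side steps; echoes commands instead of running them.
run=echo

$run unmirrorvg rootvg hdisk1        # drop the mirror copy from the disk being replaced
$run reducevg rootvg hdisk1          # remove the disk from the rootvg volume group
$run rmdev -dl hdisk1                # delete the device definition from the LPAR
$run bosboot -a                      # rebuild the boot image on the remaining disk
$run bootlist -o -m normal hdisk0    # boot only from hdisk0 until the mirror is restored
```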

Friday, August 15, 2008

Recovering a Failed VIO Disk

Here is a recovery procedure for replacing a failed client disk on a Virtual I/O
server. It assumes the client partitions have mirrored (virtual) disks. The
recovery involves both the VIO server and its client partitions. However,
it is non-disruptive for the client partitions (no downtime), and may be
non-disruptive on the VIO server (depending on disk configuration). This
procedure does not apply to RAID5 or SAN disk failures.

The test system had two VIO servers and an AIX client. The AIX client had two
virtual disks (one disk from each VIO server). The two virtual disks
were mirrored in the client using AIX's mirrorvg. (The procedure would be
the same on a single VIO server with two disks.)

The software levels were:


p520: Firmware SF230_145 VIO Version 1.2.0 Client: AIX 5.3 ML3


We had simulated the disk failure by removing the client LV on one VIO server. The
padmin commands to simulate the failure were:


$ rmdev -dev vtscsi01     # the virtual SCSI device for the LV (lsmap -all)
$ rmlv -f aix_client_lv   # remove the client LV


This caused "hdisk1" on the AIX client to go "missing" ("lsvg -p rootvg"....The
"lspv" will not show disk failure...only the disk status at the last boot..)

The recovery steps included:

VIO Server


Fix the disk failure, and restore the VIOS operating system (if necessary).
$ mklv -lv aix_client_lv rootvg 10G             # recreate the client LV
$ mkvdev -vdev aix_client_lv -vadapter vhost1   # connect the client LV to the appropriate vhost


AIX Client


# cfgmgr                            # discover the new virtual hdisk2
# replacepv hdisk1 hdisk2           # rebuild the mirror copy on hdisk2
# bosboot -ad /dev/hdisk2           # add boot image to hdisk2
# bootlist -m normal hdisk0 hdisk2  # add the new disk to the bootlist
# rmdev -dl hdisk1                  # remove the failed hdisk1


The "replacepv" command assigns hdisk2 to the volume group, rebuilds the mirror, and
then removes hdisk1 from the volume group.

As always, be sure to test this procedure before using in production.

Configuring MPIO for the virtual AIX client

Virtual SCSI Server Adapter and Virtual Target Device:
the mkvdev command will error out if the same name is used for both.

$ mkvdev -vdev hdiskpower0 -vadapter vhost0 -dev hdiskpower0
Method error (/usr/lib/methods/define -g -d):
0514-013 Logical name is required.

The reserve attribute is named differently for an EMC device than the attribute
for an ESS or FAStT storage device. For EMC it is "reserve_lock".

Run the following command as padmin to check the value of the attribute:
$ lsdev -dev hdiskpower# -attr reserve_lock

Run the following command as padmin to change the value of the attribute:
$ chdev -dev hdiskpower# -attr reserve_lock=no

• Commands to change the Fibre Channel adapter attributes: change the following attributes of each fscsi# device, fc_err_recov to "fast_fail" and dyntrk to "yes":

$ chdev -dev fscsi# -attr fc_err_recov=fast_fail dyntrk=yes -perm

The reason for changing the fc_err_recov to “fast_fail” is that if the Fibre
Channel adapter driver detects a link event such as a lost link between a storage
device and a switch, then any new I/O or future retries of the failed I/Os will be
failed immediately by the adapter until the adapter driver detects that the device
has rejoined the fabric. The default setting for this attribute is 'delayed_fail’.
Setting the dyntrk attribute to “yes” makes AIX tolerate cabling changes in the
SAN.

The VIOS needs to be rebooted for fscsi# attributes to take effect.
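With several Fibre Channel adapters, the chdev shown above has to be repeated per fscsi device. A loop like the following sketch can generate the commands; list_fscsi stands in for the real lsdev output on a VIOS, so the device names here are examples.

```shell
# Generate a chdev command for every fscsi device (dry run: commands are echoed).
# list_fscsi simulates the device list a VIOS would report via lsdev.
list_fscsi() {
    printf 'fscsi0\nfscsi1\n'
}

for dev in $(list_fscsi); do
    echo chdev -dev "$dev" -attr fc_err_recov=fast_fail dyntrk=yes -perm
done
```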

VIO VLAN Setup

HA VIO server setup

VIO server Detail

VIO Server General setup

Wednesday, July 2, 2008

VIO Commands


VIO Server Commands


lsdev -virtual (list all virtual devices on VIO server partitions)
lsmap -all (lists mapping between physical and logical devices)
oem_setup_env (change to the OEM [AIX] environment on the VIO server)

Create Shared Ethernet Adapter (SEA) on VIO Server


mkvdev -sea {physical adapt} -vadapter {virtual eth adapt} -default {dflt virtual adapt} -defaultid {dflt vlan ID}
SEA Failover
ent0 - GigE adapter
ent1 - Virt Eth VLAN 1 (defined with a priority in the partition profile)
ent2 - Virt Eth VLAN 99 (control channel)
mkvdev -sea ent0 -vadapter ent1 -default ent1 -defaultid 1 -attr ha_mode=auto ctl_chan=ent2
(Creates ent3 as the Shared Ethernet Adapter)

Create Virtual Storage Device Mapping


mkvdev -vdev {LV or hdisk} -vadapter {vhost adapt} -dev {virt dev name}
Sharing a Single SAN LUN from Two VIO Servers to a Single VIO Client LPAR
hdisk3 = SAN LUN (on the vioa server)
hdisk4 = SAN LUN (on viob, same LUN as vioa)
chdev -dev hdisk3 -attr reserve_policy=no_reserve (from vioa, to prevent a reserve on the disk)
chdev -dev hdisk4 -attr reserve_policy=no_reserve (from viob, to prevent a reserve on the disk)
mkvdev -vdev hdisk3 -vadapter vhost0 -dev hdisk3_v (from vioa)
mkvdev -vdev hdisk4 -vadapter vhost0 -dev hdisk4_v (from viob)
The VIO client will see a single LUN with two paths.
lspath -l hdiskX (where hdiskX is the newly discovered disk)
This will show two paths, one down vscsi0 and the other down vscsi1.





VIO command from HMC
#viosvrcmd -m <managed_system> -p <vios_partition> -c "lsmap -all"

(this works only with IBM VIO Server)

see man viosvrcmd for more information

VIO Server Installation & Configuration

IBM Virtual I/O Server
The Virtual I/O Server is part of the IBM eServer p5 Advanced Power Virtualization hardware feature. Virtual I/O Server allows sharing of physical resources between LPARs including virtual SCSI and virtual networking. This allows more efficient utilization of physical resources through sharing between LPARs and facilitates server consolidation.

Installation
You have two options to install the AIX-based VIO Server:
1. Install from CD
2. Install from network via an AIX NIM-Server

Installation method
#1 is probably the more frequently used method in a pure Linux environment as installation method #2 requires the presence of an AIX NIM (Network Installation Management) server. Both methods differ only in the initial boot step and are then the same. They both lead to the following installation screen:

IBM IBM IBM IBM IBM IBM
IBM IBM STARTING SOFTWARE IBM IBM
IBM IBM PLEASE WAIT...    IBM IBM
IBM IBM IBM IBM IBM IBM
(screens of repeating "IBM" fill the console while the firmware loads)
Elapsed time since release of system processors: 51910 mins 20 secs

-------------------------------------------------------------------------------
Welcome to the Virtual I/O Server.
boot image timestamp: 10:22 03/23
The current time and date: 17:23:47 08/10/2005
number of processors: 1     size of memory: 2048MB
boot device: /pci@800000020000002/pci@2,3/ide@1/disk@0:\ppc\chrp\bootfile.exe
SPLPAR info: entitled_capacity: 50 platcpus_active: 2
This system is SMT enabled: smt_status: 00000007; smt_threads: 2
kernel size: 10481246; 32 bit kernel
-------------------------------------------------------------------------------




The next step then is to define the system console. After some time you should see the following screen:


******* Please define the System Console. *******
Type a 1 and press Enter to use this terminal as the system console.


Then choose the language of installation:


>>> 1 Type 1 and press Enter to have English during install.


This is the main installation menu of the AIX-based VIO-Server:



Welcome to Base Operating System
Installation and Maintenance
Type the number of your choice and press Enter. Choice is indicated by >>>.

1 Start Install Now with Default Settings
2 Change/Show Installation Settings and Install
3 Start Maintenance Mode for System Recovery

88 Help ? 99 Previous Menu

>>> Choice [1]:


Select the hard disk on which to install the VIO base operating system, just as for an AIX Base Operating System installation.


Once the installation is over, you will get a login prompt similar to an AIX server.

A VIO server is essentially AIX with virtualization software loaded on top of it. Generally we do not host any applications on a VIO server; it is basically used for sharing I/O resources (disk and network) with the client LPARs hosted in the same physical server.


Initial setup
After the reboot you are presented with the VIO-Server login prompt. You cannot log in as the user root; you have to use the special user ID padmin. No initial default password is set, and immediately after login you are forced to set a new one.


Before you can do anything you have to accept the I/O Server license.
This is done with the license command:

$ license -accept

Once you are logged in as user padmin you find yourself in a restricted Korn shell with only a limited set of commands. You can see all available commands with the command help. All these commands are shell aliases to a single SUID-binary called ioscli which is located in the directory /usr/ios/cli/bin. If you are familiar with AIX you will recognize most commands but most command line parameters differ from the AIX versions.
As there are no man pages available, you can see all options for each command by issuing help <command>. Here is an example for the command lsmap:

$ help lsmap
Usage: lsmap {-vadapter ServerVirtualAdapter | -plc PhysicalLocationCode | -all}
             [-net] [-fmt delimiter]
Displays the mapping between physical and virtual devices.
-all Displays mapping for all the server virtual adapter
devices.
-vadapter Specifies the server virtual adapter device
by device name.
-plc Specifies the server virtual adapter device
by physical location code.
-net Specifies supplied device is a virtual server
Ethernet adapter.
-fmt Divides output by a user-specified delimiter.



A very important command is oem_setup_env, which gives you access to the regular AIX command line interface. This is provided solely for the installation of OEM device drivers.


Virtual SCSI setup

To map a LV
# mkvg: creates the volume group, where a new LV will be created using the mklv command
# lsdev: shows the virtual SCSI server adapters that could be used for mapping with the LV
# mkvdev: maps the virtual SCSI server adapter to the LV
# lsmap -all: shows the mapping information

To map a physical disk
# lsdev: shows the virtual SCSI server adapters that could be used for mapping with a physical disk
# mkvdev: maps the virtual SCSI server adapter to a physical disk
# lsmap -all: shows the mapping information

Client partition commands

No commands are needed; the Linux kernel is notified immediately.

Create new volume group datavg with member disk hdisk1
# mkvg -vg datavg hdisk1

Create new logical volume vdisk0 in volume group
# mklv -lv vdisk0 datavg 10G

Maps the virtual SCSI server adapter to the logical volume
# mkvdev -vdev vdisk0 -vadapter vhost0

Display the mapping information
#lsmap -all
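The mkvg/mklv/mkvdev sequence above can be wrapped in a helper that prints the commands in order. This is only a sketch: map_lv is a hypothetical function, and the volume group, LV, size, and vhost names are the examples used above.

```shell
# Print the mkvg/mklv/mkvdev sequence for carving an LV out of a disk
# and mapping it to a virtual SCSI server adapter (nothing is executed).
map_lv() {
    vg="$1"; disk="$2"; lv="$3"; size="$4"; vhost="$5"
    echo "mkvg -vg $vg $disk"
    echo "mklv -lv $lv $vg $size"
    echo "mkvdev -vdev $lv -vadapter $vhost"
}

map_lv datavg hdisk1 vdisk0 10G vhost0
```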

Virtual Ethernet setup

To list all virtual and physical adapters use the lsdev -type adapter command.

$ lsdev -type adapter

name status description
ent0 Available 2-Port 10/100/1000 Base-TX PCI-X Adapter (14108902)
ent1 Available 2-Port 10/100/1000 Base-TX PCI-X Adapter (14108902)
ent2 Available Virtual I/O Ethernet Adapter (l-lan)
ide0 Available ATA/IDE Controller Device
sisscsia0 Available PCI-X Dual Channel Ultra320 SCSI Adapter
vhost0 Available Virtual SCSI Server Adapter
vhost1 Available Virtual SCSI Server Adapter
vhost2 Available Virtual SCSI Server Adapter
vhost3 Available Virtual SCSI Server Adapter
vsa0 Available LPAR Virtual Serial Adapter

Choose the virtual Ethernet adapter we want to map to the physical Ethernet adapter.

$ lsdev -virtual
name status description
ent2 Available Virtual I/O Ethernet Adapter (l-lan)
vhost0 Available Virtual SCSI Server Adapter
vhost1 Available Virtual SCSI Server Adapter
vhost2 Available Virtual SCSI Server Adapter
vhost3 Available Virtual SCSI Server Adapter
vsa0 Available LPAR Virtual Serial Adapter

The command mkvdev maps a physical adapter to a virtual adapter, creates a layer 2 network bridge and defines the default virtual adapter with its default VLAN ID. It creates a new Ethernet interface, e.g., ent3.
Make sure the physical and virtual interfaces are unconfigured (down or detached).

Scenario A (one VIO server)
Create a shared ethernet adapter ent3 with a physical one (ent0) and a virtual one (ent2) with PVID 1:

$ mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1
ent3 Available
en3
et3

This has created a new shared ethernet adapter ent3 (you can verify that with the lsdev command). Now configure the TCP/IP settings for this new shared ethernet adapter (ent3). Please note that you have to specify the interface (en3) and not the adapter (ent3).

$ mktcpip -hostname op710-1-vio -inetaddr 9.156.175.231 -interface en3 -netmask 255.255.255.0 -gateway 9.156.175.1 -nsrvaddr 9.64.163.21 -nsrvdomain ibm.com

Scenario B (two VIO servers)
Create a shared ethernet adapter ent3 with a physical one (ent0) and a virtual one (ent2) with PVID 1:

$ mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1


Configure the TCP/IP settings for the new shared ethernet adapter (ent3):

$ mktcpip -hostname op710-1-vio -inetaddr 9.156.175.231 -interface en3 -netmask 255.255.255.0 -gateway 9.156.175.1 -nsrvaddr 9.64.163.21 -nsrvdomain ibm.com

Client partition commands
No new commands are needed; just the typical TCP/IP configuration is done on the virtual Ethernet interface that is defined in the client partition profile on the HMC.



Thursday, June 19, 2008

Creating LPAR from command line from HMC

Create new LPAR using command line

mksyscfg -r lpar -m MACHINE -i name=LPARNAME,profile_name=normal,lpar_env=aixlinux,min_mem=512,desired_mem=2048,max_mem=4096,proc_mode=shared,min_proc_units=0.2,desired_proc_units=0.5,max_proc_units=2.0,min_procs=1,desired_procs=2,max_procs=2,sharing_mode=uncap,uncap_weight=128,boot_mode=norm,conn_monitoring=1,shared_proc_pool_util_auth=1


Note: use the man mksyscfg command for information on all flags.

Another method of creating LPARs is through a configuration file, which is useful when we need to create more than one LPAR at the same time.

Here is an example for 2 LPARs, each definition starting on a new line:

name=LPAR1,profile_name=normal,lpar_env=aixlinux,all_resources=0,min_mem=1024,desired_mem=9216,max_mem=9216,proc_mode=shared,min_proc_units=0.3,desired_proc_units=1.0,max_proc_units=3.0,min_procs=1,desired_procs=3,max_procs=3,sharing_mode=uncap,uncap_weight=128,lpar_io_pool_ids=none,max_virtual_slots=10,"virtual_scsi_adapters=6/client/4/vio1a/11/1,7/client/9/vio2a/11/1","virtual_eth_adapters=4/0/3//0/1,5/0/4//0/1",boot_mode=norm,conn_monitoring=1,auto_start=0,power_ctrl_lpar_ids=none,work_group_id=none,shared_proc_pool_util_auth=1
name=LPAR2,profile_name=normal,lpar_env=aixlinux,all_resources=0,min_mem=1024,desired_mem=9216,max_mem=9216,proc_mode=shared,min_proc_units=0.3,desired_proc_units=1.0,max_proc_units=3.0,min_procs=1,desired_procs=3,max_procs=3,sharing_mode=uncap,uncap_weight=128,lpar_io_pool_ids=none,max_virtual_slots=10,"virtual_scsi_adapters=6/client/4/vio1a/12/1,7/client/9/vio2a/12/1","virtual_eth_adapters=4/0/3//0/1,5/0/4//0/1",boot_mode=norm,conn_monitoring=1,auto_start=0,power_ctrl_lpar_ids=none,work_group_id=none,shared_proc_pool_util_auth=1

Copy this file to HMC and run:

mksyscfg -r lpar -m SERVERNAME -f /tmp/profiles.txt

where profiles.txt contains all the LPAR information as shown above.
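Hand-writing one long attribute line per LPAR is error-prone, so the profiles.txt can be generated. The sketch below emits one definition per LPAR with only the name varying; the attribute string is a trimmed version of the example above, so adjust it to your own memory, CPU, and virtual adapter settings.

```shell
# Generate a profiles.txt for `mksyscfg -r lpar -m SERVERNAME -f profiles.txt`,
# one LPAR definition per line (attribute values are illustrative).
gen_profiles() {
    n=1
    while [ "$n" -le "$1" ]; do
        printf 'name=LPAR%d,profile_name=normal,lpar_env=aixlinux,min_mem=1024,desired_mem=9216,max_mem=9216,proc_mode=shared,min_procs=1,desired_procs=3,max_procs=3,sharing_mode=uncap,uncap_weight=128,boot_mode=norm,conn_monitoring=1\n' "$n"
        n=$((n + 1))
    done
}

gen_profiles 2    # print definitions for LPAR1 and LPAR2; redirect to profiles.txt
```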

To change the settings of your LPAR, use the chsyscfg command as shown below.

Virtual SCSI creation & slot mapping
#chsyscfg -m Server-9117-MMA-SNXXXXX -r prof -i 'name=server_name,lpar_id=xx,"virtual_scsi_adapters=301/client/4/vio01_server/301/0,303/client/4/vio02/303/0,305/client/4/vio01_server/305/0,307/client/4/vio02_server/307/0"'

In the above command we are creating virtual SCSI adapters for the client LPAR and mapping their slots to the VIO servers. In this scenario there are two VIO servers for redundancy.


Slot Mapping

Vio01_server (VSCSI server slot)    Client (VSCSI client slot)
Slot 301                            Slot 301
Slot 305                            Slot 305

VIO02_server (VSCSI server slot)    Client (VSCSI client slot)
Slot 303                            Slot 303
Slot 307                            Slot 307


These slots are mapped so that any disk or logical volume attached to a virtual SCSI adapter through the VIO command "mkvdev" on a server slot is seen on the matching client slot.

Syntax for Virtual scsi adapter


virtual-slot-number/client-or-server/supports-HMC/remote-lpar-ID/remote-lpar-name/remote-slot-number/is-required


As in the above mksyscfg command, "virtual_scsi_adapters=301/client/4/vio01_server/301/0"

means

301 - virtual-slot-number
client - client-or-server (this adapter is the client side, on the AIX client)
4 - remote-lpar-ID (the partition ID of the vio01_server)
vio01_server - remote-lpar-name
301 - remote-slot-number (the virtual SCSI server slot on the VIO server)
0 - is-required; 1 means the slot is required in the LPAR (it cannot be removed by DLPAR operations), 0 means desired (it can be removed by DLPAR operations)
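Reading these slash-separated entries by eye is easy to get wrong, so here is a small sketch that labels each field. parse_vscsi is a hypothetical helper; it follows the six fields actually present in the example (the optional supports-HMC field is empty there).

```shell
# Split a virtual_scsi_adapters entry into labelled fields.
parse_vscsi() {
    IFS=/ read -r slot kind remote_id remote_name remote_slot required <<EOF
$1
EOF
    echo "slot=$slot type=$kind remote-lpar-id=$remote_id remote-lpar-name=$remote_name remote-slot=$remote_slot required=$required"
}

parse_vscsi "301/client/4/vio01_server/301/0"
```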


To add virtual Ethernet adapters & slot mapping to the profile created above:

#chsyscfg -m Server-9117-MMA-SNxxxxx -r prof -i 'name=server_name,lpar_id=xx,"virtual_eth_adapters=596/1/596//0/1,506/1/506//0/1"'

Syntax for Virtual ethernet adapter


slot_number/is_ieee/port_vlan_id/"additional_vlan_id,additional_vlan_id"/is_trunk(number=priority)/is_required

means

So the adapter setting 596/1/596//0/1 says: it is in slot_number 596, it is IEEE 802.1Q capable, its port_vlan_id is 596, it has no additional VLAN IDs assigned, it is not a trunk adapter, and it is required.

Thursday, June 12, 2008

Listing LPAR information from HMC command line interface

To list the managed systems (CECs) managed by the HMC

# lssyscfg -r sys -F name

To list the LPARs defined on the managed system (CEC)

# lssyscfg -m SYSTEM(CEC) -r lpar -F name,lpar_id,state

To list the profiles of the LPARs created on your system, use the lssyscfg command as shown below.

# lssyscfg -r prof -m SYSTEM(CEC) --filter "lpar_ids=X,profile_names=normal"

Flags

m -> managed system name
lpar_ids -> LPAR ID (the numeric ID of each LPAR created in the managed system (CEC))
profile_names -> to choose the profile of the LPAR
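The -F flag makes lssyscfg emit comma-separated values, which is convenient to post-process. The sketch below formats such output into columns; sample_output stands in for a real HMC, and the partition names are examples.

```shell
# Format `lssyscfg ... -F name,lpar_id,state` CSV output as aligned columns.
sample_output() {
    printf 'VIOS1.3-FP8.0,1,Running\nlinux_test,2,Not Activated\n'
}

sample_output | while IFS=, read -r name id state; do
    printf '%-15s %-4s %s\n' "$name" "$id" "$state"
done
```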


To start the console of an LPAR from the HMC

# mkvterm -m SYSTEM(CEC) --id X

m -> managed system (e.g., p5-570_xyz)
id - > LPAR ID

To finish a VTERM, simply press ~ followed by a period (.).

To disconnect the console of an LPAR from the HMC

# rmvterm -m SYSTEM(CEC) --id x

To access LPAR consoles for different managed systems from the HMC

#vtmenu


Activating Partition

hscroot@hmc-570:~> lssyscfg -m Server-9110-510-SN100xxx -r lpar -F name,lpar_id,state,default_profile
VIOS1.3-FP8.0,1,Running,default
linux_test,2,Not Activated,client_default
hscroot@hmc-570:~> chsysstate -m Server-9110-510-SN100xxxx -r lpar -o on -b norm --id 2 -f client_default

The above example would boot the partition in normal mode. To boot it into the SMS menu use -b sms, and to boot it to the Open Firmware prompt use -b of.
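The three boot targets map one-to-one onto the -b flag of chsysstate. A minimal sketch of that mapping (boot_flag is a hypothetical helper):

```shell
# Map a boot target name to the matching chsysstate -b option.
boot_flag() {
    case "$1" in
        normal) echo "-b norm" ;;
        sms)    echo "-b sms" ;;
        of)     echo "-b of" ;;     # Open Firmware prompt
        *)      echo "unknown boot mode: $1" >&2; return 1 ;;
    esac
}

boot_flag sms
```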

To restart a partition the chsysstate command would look like this:

hscroot@hmc-570:~> chsysstate -m Server-9110-510-SN100xxxx -r lpar --id 2 -o shutdown --immed --restart

And to turn it off - if anything else fails - use this:
hscroot@hmc-570:~> chsysstate -m Server-9110-510-SN100xxxx -r lpar --id 2 -o shutdown --immed
hscroot@hmc-570:~> lssyscfg -m Server-9110-510-SN100xxxx -r lpar -F name,lpar_id,state
VIOS1.3-FP8.0,1,Running
linux_test,2,Shutting Down


Deleting Partition

hscroot@hmc-570:~> lssyscfg -m Server-9110-510-SN100xxxx -r lpar -F name,lpar_id
VIOS1.3-FP8.0,1
linux_test,2
hscroot@hmc-570:~> rmsyscfg -m Server-9110-510-SN100xxxx -r lpar --id 2
hscroot@hmc-570:~> lssyscfg -m Server-9110-510-SN100xxxx -r lpar -F name,lpar_id
VIOS1.3-FP8.0,1


Saturday, May 3, 2008

Enabling the Advanced POWER Virtualization Feature







Before we could use the virtual I/O, we had to determine whether the machine was enabled to use the feature. To do this, we right-clicked on the name of the target server in the HMC’s ‘Server and Partition’ view and looked at that server’s properties. Figure 4 shows it did not have the feature enabled.



Users can enable this feature by obtaining a key code from their IBM sales representative using information that the HMC gathers about their machine when the user navigates to Show Code Information in the HMC. Figure 5 shows how to navigate there as well as how to get to the HMC dialog box used to enter the activation code which renders the system VIO-capable. We obtained an access code and entered it in the dialog box in Figure.





VIO server setup example

Virtual I/O Example
A user who currently runs applications on a POWER4 system may want to upgrade to a POWER5 system running AIX 5.3 in order to take advantage of virtual I/O. If so, do these three things:
• Create a Virtual I/O Server.
• Add virtual LANs.
• Define virtual SCSI devices.
In our example, we had an IBM eServer p5 550 Express with four CPUs that was running one AIX 5.3 database server LPAR, and we needed to create a second application server LPAR that uses a virtual SCSI disk as its boot disk. We wanted to share one Ethernet adapter between the database and application server LPARs and use this shared adapter to access an external network. Finally, we needed a private network between the two LPARs and we decided to implement it using virtual Ethernet devices (see Figure 3). We followed these steps to set up our system:
1. Enabled the Advanced POWER Virtualization feature.
2. Installed the Virtual I/O Server.
3. Created the Public Ethernet VLAN.
4. Installed the Virtual SCSI devices.
5. Installed the Private Ethernet VLAN.

Virtual I/O Server installation overview

The Virtual I/O Server is a dedicated partition that runs a special operating system called IOS. This special type of partition has physical resources assigned to it in its HMC profile. The administrator issues IOS commands on the server partition to create virtual resources which present virtual LANs, virtual SCSI adapters, and virtual disk drives to client partitions. The client partitions' operating systems recognize these resources as physical devices.

The Virtual I/O Server is responsible for managing the interaction between the client LPAR and the physical device supporting the virtualized service. Once the administrator logs in to the Virtual I/O Server as the user padmin, he or she has access to a restricted Korn shell session. The administrator uses IOS commands to create, change, and remove these physical and virtual devices as well as to configure and manage the VIO server. Executing the help command on the VIO server command line lists the commands that are available in padmin's restricted Korn shell session.

Virtual I/O Server installation


• VIO Server code is packaged and shipped as an AIX mksysb image on a VIO DVD
• Installation methods:
- DVD install
- HMC install: open an rshterm and type "installios", then follow the prompts
- Network Installation Manager (NIM)
• VIO Server can support multiple client types:
- AIX 5.3
- SUSE Linux Enterprise Server 9 or 10 for POWER
- Red Hat Enterprise Linux AS for POWER Version 3 and 4


Virtual I/O Server Administration
• The VIO server uses a command line interface running in a restricted shell - no smitty or GUI
• There is no root login on the VIO Server
• A special user - padmin - executes VIO server commands
• On first login after install, user padmin is prompted to change the password
• After that, padmin runs the command "license -accept"
• Slightly modified commands are used for managing devices, networks, code installation and maintenance, etc.
• The padmin user can start a root AIX shell for setting up third-party devices using the command "oem_setup_env"

We can list all available commands by executing help as the padmin user:

$ help
Install Commands: updateios, lssw, ioslevel, remote_management, oem_setup_env, oem_platform_level, license
Physical Volume Commands: lspv, migratepv
Security Commands: lsgcl, cleargcl, lsfailedlogin
Logical Volume Commands: lslv, mklv, extendlv, rmlv, mklvcopy, rmlvcopy
UserID Commands: mkuser, rmuser, lsuser, passwd, chuser
LAN Commands: mktcpip, hostname, cfglnagg, netstat, entstat, cfgnamesrv, traceroute, ping, optimizenet, lsnetsvc
Volume Group Commands: lsvg, mkvg, chvg, extendvg, reducevg, mirrorios, unmirrorios, activatevg, deactivatevg, importvg, exportvg, syncvg
Maintenance Commands: chlang, diagmenu, shutdown, fsck, backupios, savevgstruct, restorevgstruct, starttrace, stoptrace, cattracerpt, bootlist, snap, startsysdump, topas, mount, unmount, showmount, startnetsvc, stopnetsvc
Device Commands: mkvdev, lsdev, lsmap, chdev, rmdev, cfgdev, mkpath, chpath, lspath, rmpath, errlog

Virtual I/O Server Overview

What is Advanced POWER Virtualization (APV)
• APV - the hardware feature code for POWER5 servers that enables:
- Micro-partitioning - fractional CPU entitlements from a shared pool of processors, beginning at one-tenth of a CPU
- Partition Load Manager (PLM) - a policy-based, dynamic CPU and memory reallocation tool
- Virtual SCSI - physical disks can be shared as virtual disks to client partitions
- Shared Ethernet Adapter (SEA) - a physical adapter or EtherChannel in a VIO Server can be shared by client partitions; clients use virtual Ethernet adapters
• Virtual Ethernet - an LPAR-to-LPAR virtual LAN within a POWER5 server
- Does not require the APV feature code


Why Virtual I/O Server?
• POWER5 systems will support more partitions than physical I/O slots available
- Each partition still requires a boot disk and network connection, but now they can be virtual instead of physical
• VIO Server allows partitions to share disk and network adapter resources
- The Fibre Channel or SCSI controllers in the VIO Server can be accessed using virtual SCSI controllers in the clients
- A Shared Ethernet Adapter in the VIO Server can be a layer 2 bridge for virtual Ethernet adapters in the clients
• The VIO Server further enables on demand computing and server consolidation


• Virtualizing I/O saves:
- Gbit Ethernet adapters
- 2 Gbit Fibre Channel adapters
- PCI slots
- Eventually, I/O drawers
- Server frames?
- Floor space?
- Electric, HVAC?
- Ethernet switch ports
- Fibre Channel switch ports
- Logistics, scheduling, and delays of physical Ethernet and SAN attach
• Some servers run at 90% utilization all the time - everyone knows which ones.
• Average utilization in the UNIX server farm is closer to 25%; they don't all maximize their use of dedicated I/O devices.
• VIO is a departure from the "new project, new chassis" mindset.


Virtual I/O Server Characteristics

• Requires AIX 5.3 and POWER5 hardware with the APV feature
• Installed as a special-purpose, AIX-based logical partition
• Uses a subset of the AIX Logical Volume Manager and attaches to traditional storage subsystems
• Inter-partition communication (client-server model) provided via the POWER Hypervisor
• Clients "see" virtual disks as traditional AIX SCSI hdisks, although they may be a physical disk or a logical volume on the VIO Server
• One physical disk on a VIO server can provide logical volumes for several client partitions


Virtual Ethernet
 Virtual Ethernet
– Enables inter-LPAR communication without a physical adapter
– IEEE-compliant Ethernet programming model
– Implemented through inter-partition, in-memory communication
 VLAN splits up groups of network users on a physical network onto
segments of logical networks
 Virtual switch provides support for multiple (up to 4K) VLANs
– Each partition can connect to multiple networks, through one or more adapters
– VIO server can add VLAN ID tag to the Ethernet frame as appropriate.
Ethernet switch restricts frames to ports that are authorized to receive frames
with specific VLAN ID
 Virtual network can connect to physical network through “routing”
partitions – generally not recommended
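Rather than a routing partition, the recommended bridge to the physical network is a Shared Ethernet Adapter on the VIO Server. A hedged sketch follows; ent0 (physical adapter), ent2 (virtual trunk adapter), the interface en3, the PVID of 1, and the addresses are all assumptions:

```shell
# On the VIO Server: create the SEA bridging physical ent0 and virtual ent2
mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1
# configure TCP/IP on the interface of the new SEA (assumed here to be en3)
mktcpip -hostname vios1 -inetaddr 192.168.1.10 -interface en3 \
        -netmask 255.255.255.0 -gateway 192.168.1.1
```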


Why Multiple VIO Servers?
 Second VIO Server adds extra protection to client LPARs
 Allows two teams to learn VIO setup on a single system
 Having multiple VIO Servers will:
– Provide multiple paths to your OS/data virtual disks
– Provide multiple paths to your network
 Advantages:
– Superior availability compared to other virtual I/O solutions
– Allows VIO Server updates without shutting down client LPARs
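With virtual disks served by two VIO Servers, the client should see one MPIO path per vscsi adapter. A quick way to confirm this on the AIX client (hdisk0 is illustrative):

```shell
# On the AIX client LPAR
lspath -l hdisk0                        # expect one path per vscsi adapter (one per VIO Server)
lsattr -El hdisk0 -a hcheck_interval    # health-check interval used to detect a failed path
```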

Saturday, March 1, 2008

Virtualization VIO basics

The Virtual I/O Server is part of the IBM System p Advanced POWER Virtualization hardware feature. Virtual I/O Server allows sharing of physical resources between LPARs, including virtual SCSI and virtual networking. This allows more efficient utilization of physical resources through sharing between LPARs and facilitates server consolidation.

The Virtual I/O Server is software that is located in a logical partition. This software facilitates the sharing of physical I/O resources between AIX® and Linux® client logical partitions within the server. The Virtual I/O Server provides virtual SCSI target and Shared Ethernet Adapter capability to client logical partitions within the system, allowing the client logical partitions to share SCSI devices and Ethernet adapters. The Virtual I/O Server software requires that the logical partition be dedicated solely for its use.
The Virtual I/O Server is available as part of the Advanced POWER™ Virtualization hardware feature.
Using the Virtual I/O Server facilitates the following functions:
-->Sharing of physical resources between logical partitions on the system
-->Creating logical partitions without requiring additional physical I/O resources
-->Creating more logical partitions than there are I/O slots or physical devices available with the ability for partitions to have dedicated I/O, virtual I/O, or both
-->Maximizing use of physical resources on the system
-->Helping to reduce the Storage Area Network (SAN) infrastructure
The Virtual I/O Server supports client logical partitions running the following operating systems:
-->AIX 5.3 or later
-->SUSE Linux Enterprise Server 9 for POWER (or later)
-->Red Hat® Enterprise Linux AS for POWER Version 3 (update 2 or later)
-->Red Hat Enterprise Linux AS for POWER Version 4 (or later)
For the most recent information about devices that are supported on the Virtual I/O Server, to download Virtual I/O Server fixes and updates, and to find additional information about the Virtual I/O Server, see the Virtual I/O Server Web site.
The Virtual I/O Server comprises the following primary components:
-->Virtual SCSI
-->Virtual Networking
-->Integrated Virtualization Manager
The following sections provide a brief overview of each of these components.


Virtual SCSI
Physical adapters with attached disks or optical devices on the Virtual I/O Server logical partition can be shared by one or more client logical partitions. The Virtual I/O Server offers a local storage subsystem that provides standard SCSI-compliant logical unit numbers (LUNs). The Virtual I/O Server can export a pool of heterogeneous physical storage as a homogeneous pool of block storage in the form of SCSI disks.
Unlike typical storage subsystems that are physically located in the SAN, the SCSI devices exported by the Virtual I/O Server are limited to the domain within the server. Although the SCSI LUNs are SCSI compliant, they might not meet the needs of all applications, particularly those that exist in a distributed environment.
The following SCSI peripheral-device types are supported:
-->Disks backed by a logical volume
-->Disks backed by a physical volume
-->Optical devices (DVD-RAM and DVD-ROM)
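Each of the three backing types is exported with the same mkvdev command on the VIOS. A sketch with illustrative device and adapter names:

```shell
# VIOS restricted shell; hdisk3, lv_client2, cd0 and the vhost adapters are illustrative
mkvdev -vdev lv_client2 -vadapter vhost0   # disk backed by a logical volume
mkvdev -vdev hdisk3 -vadapter vhost1       # disk backed by a whole physical volume
mkvdev -vdev cd0 -vadapter vhost1          # optical device (DVD-RAM/DVD-ROM)
lsmap -all                                 # review all virtual SCSI mappings
```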


Virtual networking
Shared Ethernet Adapter allows logical partitions on the virtual local area network (VLAN) to share access to a physical Ethernet adapter and to communicate with systems and partitions outside the server. This function enables logical partitions on the internal VLAN to share the VLAN with stand-alone servers.


Integrated Virtualization Manager
The Integrated Virtualization Manager provides a browser-based interface and a command-line interface that you can use to manage IBM® System p5™ and IBM eServer™ pSeries® servers that use the IBM Virtual I/O Server. On the managed system, you can create logical partitions, manage the virtual storage and virtual Ethernet, and view service information related to the server. The Integrated Virtualization Manager is packaged with the Virtual I/O Server, but it is activated and usable only on certain platforms and where no Hardware Management Console (HMC) is present.

Introduction to VIO


Prior to the introduction of POWER5 systems, it was only possible to create as many separate logical partitions (LPARs) on an IBM system as there were physical processors. Given that the largest IBM eServer pSeries POWER4 server, the p690, had 32 processors, 32 partitions were the most anyone could create. A customer could order a system with enough physical disks and network adapter cards so that each LPAR would have enough disks to contain operating systems and enough network cards to allow users to communicate with each partition.
The Advanced POWER Virtualization™ feature of POWER5 platforms makes it possible to allocate fractions of a physical CPU to a POWER5 LPAR. Using virtual CPUs and virtual I/O, a user can create many more LPARs on a p5 system than there are CPUs or I/O slots. The Advanced POWER Virtualization feature accounts for this by allowing users to create shared network adapters and virtual SCSI disks. Customers can use these virtual resources to provide disk space and network adapters for each LPAR they create on their POWER5 system (see Figure).



There are three components of the Advanced POWER Virtualization feature: Micro-Partitioning™, shared Ethernet adapters, and virtual SCSI. In addition, AIX 5L Version
5.3 allows users to define virtual Ethernet adapters permitting inter-LPAR communication. This paper provides an overview of how each of these components works and then shows the details of how to set up a simple three-partition system where one partition is a Virtual I/O Server and the other two partitions use virtual Ethernet and virtual SCSI to differing degrees. What follows is a practical guide to help a new POWER5 customer set up simple systems where high availability is not a concern, but becoming familiar with this new technology in a development environment is the primary goal.


Micro-Partitioning
An element of the IBM Advanced POWER Virtualization feature called Micro-Partitioning allows a single physical processor to be shared by multiple partitions. In POWER4 systems, each physical processor is dedicated to an LPAR. This concept of dedicated processors is still present in POWER5 systems, but so is the concept of shared processors. A POWER5 system administrator can use the Hardware Management Console (HMC) to place processors in a shared processor pool. Using the HMC, the administrator can assign fractions of a CPU to individual partitions. If one LPAR is defined to use processors in the shared processor pool, when those CPUs are idle, the POWER Hypervisor™ makes them available to other partitions. This ensures that these processing resources are not wasted. Also, the ability to assign fractions of a CPU to a partition means it is possible to partition POWER5 servers into many different partitions. Allocation of physical processor and memory resources on POWER5 systems is managed by a system firmware component called the POWER Hypervisor.


Virtual Networking
Virtual networking on POWER5 hardware consists of two main capabilities. One capability is provided by a software IEEE 802.1Q (VLAN) switch that is implemented in the Hypervisor on POWER5 hardware. Users can use the HMC to add virtual Ethernet adapters to their partition definitions. Once these are added and the partitions booted, the new adapters can be configured just like real physical adapters, and the partitions can communicate with each other without having to connect cables between the LPARs. Users can separate traffic from different VLANs by assigning different VLAN IDs to each virtual Ethernet adapter. Each AIX 5.3 partition can support up to 256 virtual Ethernet adapters.


In addition, a part of the Advanced POWER virtualization virtual networking feature allows users to share physical adapters between logical partitions. These shared adapters, called Shared Ethernet Adapters (SEAs), are managed by a Virtual I/O Server partition which maps physical adapters under its control to virtual adapters. It is possible to map many physical Ethernet adapters to a single virtual Ethernet adapter thereby eliminating a single physical adapter as a point of failure in the architecture.
There are a few things users of virtual networking need to consider before implementing it. First, virtual networking ultimately uses more CPU cycles on the POWER5 machine than when physical adapters are assigned to a partition. Users should consider assigning a physical adapter directly to a partition when heavy network traffic is predicted over a certain adapter. Second, users may want to take advantage of the larger MTU sizes that virtual Ethernet allows if they know that their applications will benefit from the reduced fragmentation and better performance that larger MTU sizes offer. The MTU size limit for an SEA is smaller than that of virtual Ethernet adapters, so users will have to choose an MTU size carefully so that packets are sent to external networks with minimum fragmentation.
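For purely inter-partition traffic the MTU can be raised on the client's virtual Ethernet interface; when frames must leave through an SEA, the MTU has to stay within what the physical network supports. A sketch (en0 and the value 9000 are illustrative, and 9000 assumes the external switch supports jumbo frames):

```shell
# On an AIX client partition
chdev -l en0 -a mtu=9000     # raise the MTU on the virtual Ethernet interface
lsattr -El en0 -a mtu        # confirm the new MTU
```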


Virtual SCSI
The Advanced POWER Virtualization feature called virtual SCSI allows access to physical disk devices which are assigned to the Virtual I/O Server (VIOS). The system administrator uses VIOS logical volume manager commands to assign disks to volume groups. The administrator creates logical volumes in the Virtual I/O Server volume groups. Either these logical volumes or the physical disks themselves may ultimately appear as physical disks (hdisks) to the Virtual I/O Server’s client partitions once they are associated with virtual SCSI host adapters. While the Virtual I/O Server software is packaged as an additional software bundle that a user purchases separately from the AIX 5.3 distribution, the virtual I/O client software is part of the AIX 5.3 base installation media, so an administrator does not need to install any additional filesets on a virtual SCSI client partition. Srikrishnan provides more details on how the virtual SCSI feature works.
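Because the client support is in the AIX 5.3 base, verifying virtual SCSI on a client takes only standard device commands; hdisk0 is illustrative:

```shell
# On the AIX 5.3 client partition
lsdev -Cc disk                  # virtual disks list as "Virtual SCSI Disk Drive"
lsdev -Cc adapter | grep vscsi  # the virtual SCSI client adapters
lscfg -vl hdisk0                # shows the vscsi parent adapter of the disk
```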

Friday, February 29, 2008

Step 32 & 33 Check for cluster Stabilize & VG varied on


Wait for the cluster to stabilize. You can check whether the cluster is up with the
following commands:
a. netstat -i
b. ifconfig -a : look out for the service IP. It will show on each node if the cluster is up.


Check whether the VGs under the cluster’s RGs are varied on and the filesystems in the
VGs are mounted after the cluster starts.


Here test1vg and test2vg are the VGs which are varied on when the cluster is started, and
filesystems /test2 and /test3 are mounted when the cluster starts.
/test2 and /test3 are in test2vg, which is part of the RG owned by this node.
32. Perform all the tests such as resource take-over, node failure, n/w failure and verify
the cluster before releasing the system to the customer.
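Beyond netstat and ifconfig, the following commands are commonly used to confirm that the cluster has stabilized and the resource groups are online (HACMP 5.x paths; the grep pattern matches the example filesystems above):

```shell
lssrc -ls clstrmgrES                      # cluster manager state (stable when settled)
/usr/es/sbin/cluster/utilities/clRGinfo   # resource group location and state
lsvg -o                                   # varied-on volume groups (e.g. test1vg, test2vg)
mount | grep /test                        # filesystems /test2 and /test3 mounted
```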

step 30 & 31 Synchronize & start Cluster



Synchronize the cluster:
This will sync the configuration from one node to the second node.
smitty cl_sync


That’s it. Now you are ready to start the cluster.
smitty clstart

You can start the cluster together on both nodes or start individually on each node.


step 29 Adding IP label & RG owned by Node


Add the service IP label for the owner node and also the VGs owned by the owner node
of this resource group.




Continue similarly for all the resource groups.

step 28 Setting attributes of Resource group


Set attributes of the resource groups already defined:
Here you have to actually assign the resources to the resource groups.
smitty hacmp -> Extended Configuration -> Extended Resource Configuration ->
HACMP extended resource group configuration

step 27 Adding Resource Group


Add Resource Groups:
smitty hacmp -> Extended Configuration -> Extended Resource Configuration ->
HACMP extended resource group configuration


Continue similarly for all the resource groups.
The node selected first while defining the resource group will be the primary owner of
that resource group. The node listed after it is the secondary node.
Make sure you set the primary node correctly for each resource group. Also set the failover/fallback policies as per the requirements of the setup.

step 26 Defining IP labels


Define the service IP labels for both nodes.
smitty hacmp -> Extended Configuration -> Extended Resource Configuration ->
HACMP extended resource configuration -> Configure HACMP service IP label

step 25 Adding Persistent IP labels



Add a persistent IP label for both nodes.


step 24 Adding persistent IP


Add the persistent IPs:


smitty hacmp -> Extended Configuration -> Extended Topology Configuration ->
Configure HACMP persistent node IP labels/addresses



step23 Adding boot IP & Disk heart beat information




Include all four boot IPs (two for each node) in the ether network already defined. Then include the disk for heartbeat on both the nodes in the diskhb network already defined.



step 22 Adding device for Disk Heart Beat


Include the interfaces/devices in the ether n/w and diskhb already defined.
smitty hacmp -> Extended Configuration -> Extended Topology Configuration ->
Configure HACMP communication interfaces/devices -> Add communication
Interfaces/devices.


Step21 Adding Communication interface


Add the HACMP networks (ether and diskhb).
smitty hacmp -> Extended Configuration -> Extended Topology Configuration ->
Configure HACMP networks -> Add a network to the HACMP cluster.
Select ether and press Enter.
Then select diskhb and press Enter. Diskhb is your non-TCP/IP heartbeat network.

Step20 Discover HACMP config for Network settings


22. Discover HACMP config: this will import, for both nodes, all the node info, boot IPs, and
service IPs from /etc/hosts.
smitty hacmp -> Extended Configuration -> Discover HACMP-related information

Step 19 Define Cluster Nodes


19. Define the cluster nodes.
smitty hacmp -> Extended Configuration -> Extended Topology Configuration -> Configure an HACMP node -> Add a node to an HACMP cluster
Define both the nodes one after the other.

Thursday, February 28, 2008

Step 18 to configure HACMP





18. Define cluster name.











Steps 1 to 17 to configure HACMP

Steps to configure HACMP:

1. Install the nodes, make sure the redundancy is maintained for power supplies, n/w and
fiber n/ws. Then Install AIX on the nodes.
2. Install all the HACMP filesets except HAView and HATivoli.
Install all the RSCT filesets from the AIX base CD.
3. Make sure that the AIX and HACMP patches and the server code are at the latest
(ideally recommended) level.
4. Check for fileset bos.clvm to be present on both the nodes. This is required to make the
VGs enhanced concurrent capable.
5. V.IMP: Reboot both the nodes after installing the HACMP filesets.
6. Configure shared storage on both the nodes. Also in case of a disk heartbeat, assign a
1GB shared storage LUN on both nodes.
7. Create the required VGs only on the first node. The VGs can be either normal VGs or
Enhanced concurrent VGs. Assign particular major number to each VGs while creating
the VGs. Record the major no. information.
To check the major no., use the command:
ls -lrt /dev | grep <vgname>
Mount automatically at system restart should be set to NO.
8. Varyon the VGs that were just created.
9. V.IMP: Create a log LV on each VG first, before creating any other LV. Give a unique
name to each log LV.
Initialize the log LV with: logform /dev/loglvname
Repeat this step for all the VGs that were created.
10. Create all the necessary LVs on each VG.
11. Create all the necessary file systems on each LV created. You can create mount points
as per the requirements of the customer.
Mount automatically at system restart should be set to NO.
12. umount all the filesystems and varyoff all the VGs.

13. Run chvg -an for all the VGs --- the VGs will be set to not vary on automatically at
system restart.
14. Go to node 2 and run cfgmgr -v to import the shared volumes.
15. Import all the VGs on node 2:
use smitty importvg ----- import with the same major number as assigned on node 1.
16. Run chvg -an for all VGs on node 2.
17. V.IMP: Identify the boot1, boot2, service IP and persistent IP for both the nodes
and make the entries in /etc/hosts.
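Steps 7 through 16 for a single VG can be sketched as follows. This is a hedged outline, assuming jfs2 filesystems; the VG name test2vg, the major number 101, hdisk2, and the LV sizes are illustrative:

```shell
# --- on node 1 ---
mkvg -y test2vg -V 101 hdisk2             # step 7: VG with an explicit major number
varyonvg test2vg                          # step 8
mklv -y test2loglv -t jfs2log test2vg 1   # step 9: dedicated jfs2 log LV
logform /dev/test2loglv                   # initialize the log LV
mklv -y test2lv -t jfs2 test2vg 10        # step 10
crfs -v jfs2 -d test2lv -m /test2 -A no   # step 11: no auto-mount at restart
chvg -an test2vg                          # step 13: no auto vary-on (VG must be active)
varyoffvg test2vg                         # step 12 (umount filesystems first if mounted)
# --- on node 2 ---
cfgmgr -v                                 # step 14: discover the shared disks
importvg -V 101 -y test2vg hdisk2         # step 15: same major number as on node 1
chvg -an test2vg && varyoffvg test2vg     # step 16
```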