Saturday, August 16, 2008

Restoring the Virtual I/O Server

Just as there are four different ways to back up the Virtual I/O Server, there are four corresponding ways to restore it.

Restoring from a tape or DVD

To restore the Virtual I/O Server from tape or DVD, follow these steps:

1. Specify the Virtual I/O Server partition to boot from the tape or DVD by
using the bootlist command (see the sketch after this list) or by altering the
bootlist in the SMS menu.
2. Insert the tape/DVD into the drive.
3. From the SMS menu, select to install from the tape/DVD drive.
4. Follow the installation steps according to the system prompts.
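
As a minimal sketch of step 1 (rmt0 is an assumption; check the actual device name with lsdev), the padmin command to put the tape drive first in the normal-mode boot list might look like:
$ bootlist -mode normal rmt0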

Restoring the Virtual I/O Server from a remote file system using a nim_resources.tar file

To restore the Virtual I/O Server from a nim_resources.tar image in a file system, perform the following steps:

1. Run the installios command without any flags from the HMC command line.
a) Select the Managed System where you want to restore your Virtual I/O Server
from the objects of type "managed system" found by the installios command.
b) Select the VIOS partition where you want to restore your system from the
objects of type "virtual I/O server partition" found.

c) Select the Profile from the objects of type "profile" found.
d) Enter the source of the installation images [/dev/cdrom]:
server:/exported_dir
e) Enter the client's intended IP address:
f) Enter the client's intended subnet mask:
g) Enter the client's gateway:
h) Enter the client's speed [100]:
i) Enter the client's duplex [full]:
j) Would you like to configure the client's network after the installation
[yes]/no?

2. When the restoration is finished, open a virtual terminal connection (for
example, using telnet) to the Virtual I/O Server that you restored. Some
additional user input might be required.



Note: The ability to run the installios command from the NIM server against the nim_resources.tar file is enabled with APAR IY85192.
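
As a hedged sketch only (the exact flag set varies by HMC and AIX level, so verify it against the installios usage message first; every value below is a placeholder), a non-interactive invocation equivalent to the prompts above might look like:
#installios -s managed_system -p vios_partition -r profile \
-i 9.3.5.123 -S 255.255.255.0 -g 9.3.5.1 -d server:/exported_dir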


Restoring the Virtual I/O Server from a remote file system using a mksysb image
To restore the Virtual I/O Server from a mksysb image in a file system using NIM, complete the following tasks:

1. Define the mksysb file as a NIM object by running the nim command.
#nim -o define -t mksysb -a server=master -a \
location=/export/ios_backup/filename.mksysb objectname
objectname is the name by which NIM registers and recognizes the mksysb
file.
2. Define a SPOT resource for the mksysb file by running the nim command.
#nim -o define -t spot -a server=master -a \
location=/export/ios_backup/SPOT -a source=objectname SPOTname
SPOTname is the name of the SPOT resource for the mksysb file.
3. Install the Virtual I/O Server from the mksysb file using the smit command
(a command-line alternative is sketched after this procedure).
#smit nim_bosinst
The following entry fields must be filled in:
“Installation type” => mksysb
“Mksysb” => the objectname chosen in step 1
“Spot” => the SPOTname chosen in step 2
4. Start the Virtual I/O Server logical partition.
a) On the HMC, right-click the partition to open the menu.
b) Click Activate. The Activate Partition menu opens with a selection of
partition profiles. Be sure the correct profile is highlighted.
c) Select the Open a terminal window or console session check box to open a
virtual terminal (vterm) window.
d) Click Advanced... to open the advanced options menu.
e) For the Boot mode, select SMS.
f) Click OK to close the advanced options menu.
g) Click OK. A vterm window opens for the partition.
h) In the vterm window, select Setup Remote IPL (Initial Program Load).
i) Select the network adapter that will be used for the installation.
j) Select IP Parameters.
k) Enter the client IP address, server IP address, and gateway IP address.
Optionally, you can enter the subnet mask. After you have entered these
values, press Esc to return to the Network Parameters menu.
l) Select Ping Test to ensure that the network parameters are properly
configured. Press Esc twice to return to the Main Menu.
m) From the Main Menu, select Boot Options.
n) Select Install/Boot Device.
o) Select Network.
p) Select the network adapter whose remote IPL settings you previously
configured.
q) When prompted for Normal or Service mode, select Normal.
r) When asked if you want to exit, select Yes.
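
A minimal command-line sketch of step 3, for readers who prefer to skip smit (it assumes the client is already defined as a NIM machine object, here called vios_client, and uses the objectname and SPOTname defined above):
#nim -o bos_inst -a source=mksysb -a mksysb=objectname \
-a spot=SPOTname -a accept_licenses=yes vios_client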



Integrated Virtualization Manager (IVM) Consideration

If your Virtual I/O Server is managed by the IVM, you need to back up the partition profile data for the management partition and its clients before you back up your system. The IVM is integrated with the Virtual I/O Server, but the LPAR profiles are not saved by the backupios command.

There are two ways to perform this backup:
From the IVM Web Interface
1) From the Service Management menu, click Backup/Restore
2) Select the Partition Configuration Backup/Restore tab
3) Click Generate a backup

From the Virtual I/O Server CLI
1) Run the following command
#bkprofdata -o backup

Both methods generate a file named profile.bak containing the information about the LPAR configuration. When you use the Web interface, the default path for the file is /home/padmin, but if you perform the backup from the CLI, the default path is /var/adm/lpm. This path can be changed using the -l flag. Only ONE such file can be present on the system, so each time bkprofdata is issued or the Generate a backup button is pressed, the file is overwritten.

To restore the LPAR profiles, you can use either the GUI or the CLI.

From the IVM Web Interface
1) From the Service Management menu, click Backup/Restore
2) Select the Partition Configuration Backup/Restore tab
3) Click Restore Partition Configuration

From the Virtual I/O Server CLI
1) Run the following command
#rstprofdata –l 1 –f /home/padmin/profile.bak

It is not possible to restore a single partition profile. To restore the LPAR profiles, none of the LPARs whose profiles are included in profile.bak may already be defined in the IVM.

Backup of Virtual I/O Server

Backing up the Virtual I/O Server

There are four different ways to back up and restore the Virtual I/O Server, as illustrated in the following table.

Backup method             Restore method
To tape                   From bootable tape
To DVD                    From bootable DVD
To remote file system     From HMC using the NIMoL facility and installios
To remote file system     From an AIX NIM server


Backing up to a tape or DVD-RAM

To back up the Virtual I/O Server to a tape or a DVD-RAM, perform the following steps:

1. Check the status and the name of the tape/DVD drive.
#lsdev | grep rmt (for tape)
#lsdev | grep cd (for DVD)

2. If the drive is Available, back up the Virtual I/O Server with one of the following commands:
#backupios -tape rmt#
#backupios -cd cd#

If the Virtual I/O Server backup image does not fit on one DVD, the backupios command provides instructions for disk replacement and removal until all the volumes have been created. This command creates one or more bootable DVDs or tapes that you can use to restore the Virtual I/O Server.

Backing up the Virtual I/O Server to a remote file system by creating a nim_resources.tar file

The nim_resources.tar file contains all the resources necessary to restore the Virtual I/O Server, including the mksysb image, the bosinst.data file, the network boot image, and the SPOT resource.
The NFS export should allow root access to the Virtual I/O Server; otherwise, the backup will fail with permission errors.
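
A minimal sketch of an AIX /etc/exports entry on the NFS server that grants such root access (the hostname and path are placeholders):
/exported_dir -rw,root=vios_hostname,access=vios_hostname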

To back up the Virtual I/O Server to a file system, perform the following steps:

1. Create a mount directory where the backup file will be written.
#mkdir /backup_dir

2. Mount the exported remote directory on the directory created in step 1.
#mount server:/exported_dir /backup_dir

3. Back up the Virtual I/O Server with the following command:
#backupios -file /backup_dir

The above command creates a nim_resources.tar file that you can use to restore the Virtual I/O Server from the HMC.

Note: The ability to run the installios command from the NIM server against the nim_resources.tar file is enabled with APAR IY85192.


The backupios command empties the target_disk_data section of bosinst.data and sets RECOVER_DEVICES=Default. This allows the mksysb file generated by the command to be cloned to another logical partition. If you plan to use the nim_resources.tar image to install to a specific disk, then you need to repopulate the target_disk_data section of bosinst.data and replace this file in the nim_resources.tar. All other parts of the nim_resources.tar image must remain unchanged.

Procedure to modify the target_disk_data in the bosinst.data

1. Extract the bosinst.data file from the nim_resources.tar archive.
#tar -xvf nim_resources.tar ./bosinst.data

2. The following is an example of the target_disk_data stanza of the bosinst.data generated by backupios.
target_disk_data:
LOCATION =
SIZE_MB =
HDISKNAME =

3. Fill in the value of HDISKNAME with the name of the disk to which you want to restore.

4. Put the modified bosinst.data back into the nim_resources.tar image.
#tar -uvf nim_resources.tar ./bosinst.data
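
To double-check that the archive now carries the edited file, a standard tar listing works:
#tar -tvf nim_resources.tar | grep bosinst.data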

If you don't remember on which disk your Virtual I/O Server was previously installed, you can also view the original bosinst.data and look at the target_disk_data stanza.
Use the following steps:

1. Extract the bosinst.data file from the nim_resources.tar archive.
#tar -xvf nim_resources.tar ./bosinst.data
2. Extract the mksysb from the nim_resources.tar archive.
#tar -xvf nim_resources.tar ./5300-00_mksysb
3. Extract the original bosinst.data from the mksysb.
#restore -xvf ./5300-00_mksysb ./var/adm/ras/bosinst.data
4. View the original target_disk_data.
#grep -p target_disk_data ./var/adm/ras/bosinst.data

The above command displays something like the following:

target_disk_data:
PVID = 00c5951e63449cd9
PHYSICAL_LOCATION = U7879.001.DQDXYTF-P1-T14-L4-L0
CONNECTION = scsi1//5,0
LOCATION = 0A-08-00-5,0
SIZE_MB = 140000
HDISKNAME = hdisk0

5. Replace ONLY the target_disk_data stanza in ./bosinst.data with the original one.
6. Add the modified file back to the nim_resources.tar archive.
#tar -uvf nim_resources.tar ./bosinst.data


Backing up the Virtual I/O Server to a remote file system by creating a mksysb image

You can also restore the Virtual I/O Server from a NIM server; one way is to use a mksysb image of the Virtual I/O Server. If you plan to restore the Virtual I/O Server from a mksysb image on a NIM server, verify that the NIM server is at the latest release of AIX.

To back up the Virtual I/O Server to a file system, perform the following steps:

1. Create a mount directory where the backup file will be written.
#mkdir /backup_dir
2. Mount the exported remote directory on the directory you just created.
#mount NIM_server:/exported_dir /backup_dir
3. Back up the Virtual I/O Server with the following command:
#backupios -file /backup_dir/filename.mksysb -mksysb
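
As a hedged check (lsmksysb is a standard AIX command, but verify it is available at your AIX level), you can list the volume group information stored in the resulting backup from the NIM server:
#lsmksysb -lf /exported_dir/filename.mksysb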


Updating the VIO Server patch level

Applying updates from a local hard disk

To apply the updates from a directory on your local hard disk, follow one of these two procedures, depending on your currently installed level of VIOS.

A. If the current level of the VIOS is earlier than V1.2.0.0 (V1.0 or V1.1):

NOTE:
If you are updating from VIOS level 1.1, you must update to the 10.1 Fix Pack before updating to the 11.1 Fix Pack. In other words, updating from level 1.1 is a two-step process: first update to the 10.1 Fix Pack, and then update to the 11.1 Fix Pack.

Contact your IBM Service Representative to obtain the VIOS 10.1 Fix Pack.

After you install the 10.1 Fix Pack, follow these steps to install the 11.1 Fix Pack.

1. Log in to the VIOS as the user padmin.
2. Create a directory on the Virtual I/O Server.
$ mkdir <directory_name>
3. Using ftp, transfer the update file(s) to the directory you created.
4. Apply the update by running the updateios command.
$ updateios -accept -dev <directory_name>
5. Verify that the update was successful by checking the results of the updateios command and running the ioslevel command. It should indicate that the ioslevel is now V1.5.2.1-FP-11.1.
$ ioslevel

B. If the current level of the VIOS is V1.2 through V1.5:

1. Log in to the VIOS as the user padmin.
2. Create a directory on the Virtual I/O Server.
$ mkdir <directory_name>
3. Using ftp, transfer the update file(s) to the directory you created.
4. Apply the update by running the updateios command.
$ updateios -accept -install -dev <directory_name>
5. Verify that the update was successful by checking the results of the updateios command and running the ioslevel command. It should indicate that the ioslevel is now V1.5.2.1-FP-11.1.
$ ioslevel


NOTE: If you are updating from an ioslevel prior to 1.3.0.1, the updateios command may indicate several failures (such as missing requisites) during the fix pack installation. These messages are expected. Proceed with the update if you are prompted to "Continue with the installation [y/n]".

Applying updates from a remotely mounted file system

If the remote file system is to be mounted read-only, follow one of these two procedures, depending on your currently installed level of VIOS.

A. If the current level of the VIOS is earlier than V1.2.0.0 (V1.0 or V1.1):
NOTE:
If you are updating from VIOS level 1.1, you must update to the 10.1 Fix Pack before updating to the 11.1 Fix Pack. In other words, updating from level 1.1 is a two-step process: first update to the 10.1 Fix Pack, and then update to the 11.1 Fix Pack.

Contact your IBM Service Representative to obtain the VIOS 10.1 Fix Pack.

After you install the 10.1 Fix Pack, follow these steps to install the 11.1 Fix Pack.

1. Log in to the VIOS as the user padmin.
2. Mount the remote directory onto the Virtual I/O Server.
$ mount remote_machine_name:directory /mnt
3. Apply the update by running the updateios command.
$ updateios -accept -dev /mnt
4. Verify that the update was successful by checking the results of the updateios command and running the ioslevel command. It should indicate that the ioslevel is now V1.5.2.1-FP-11.1.
$ ioslevel
B. If the current level of the VIOS is V1.2 through V1.5:

1. Log in to the VIOS as the user padmin.
2. Mount the remote directory onto the Virtual I/O Server.
$ mount remote_machine_name:directory /mnt
3. Apply the update by running the updateios command.
$ updateios -accept -install -dev /mnt
4. Verify that the update was successful by checking the results of the updateios command and running the ioslevel command. It should indicate that the ioslevel is now V1.5.2.1-FP-11.1.
$ ioslevel
Applying updates from the CD/DVD drive

This fix pack can be burned onto a CD by using the ISO image file(s). After the CD has been created, follow one of these two procedures, depending on your currently installed level of VIOS.

A. If the current level of the VIOS is earlier than V1.2.0.0 (V1.0 or V1.1):
NOTE:
If you are updating from VIOS level 1.1, you must update to the 10.1 Fix Pack before updating to the 11.1 Fix Pack. In other words, updating from level 1.1 is a two-step process: first update to the 10.1 Fix Pack, and then update to the 11.1 Fix Pack.

Contact your IBM Service Representative to obtain the VIOS 10.1 Fix Pack.

After you install the 10.1 Fix Pack, follow these steps to install the 11.1 Fix Pack.

1. Log in to the VIOS as the user padmin.
2. Place the CD-ROM into the drive assigned to the VIOS.
3. Apply the update by running the updateios command:
$ updateios -accept -dev /dev/cdX
where X is the device number 0-N assigned to the VIOS.
4. Verify that the update was successful by checking the results of the updateios command and running the ioslevel command. It should indicate that the ioslevel is now V1.5.2.1-FP-11.1.
$ ioslevel
B. If the current level of the VIOS is V1.2 through V1.5:

1. Log in to the VIOS as the user padmin.
2. Place the CD-ROM into the drive assigned to the VIOS.
3. Apply the update by running the following updateios command:
$ updateios -accept -install -dev /dev/cdX
where X is the device number 0-N assigned to the VIOS.
4. Verify that the update was successful by checking the results of the updateios command and running the ioslevel command. It should indicate that the ioslevel is now V1.5.2.1-FP-11.1.
$ ioslevel

Expanding a rootvg disk in a VIO environment where two VIO Servers have been implemented for redundancy.

This article describes the procedure for expanding a rootvg volume group for a POWER5 LPAR where two VIO Servers have been implemented for redundancy. It also assumes that the rootvg is mirrored across both VIO Servers. This procedure is not supported by IBM, but it does work.
POWER5 LPAR:
• Begin by unmirroring your rootvg and removing hdisk1 from the rootvg volume group. If there are any swap or dump devices on this disk, you may need to remove them first before you can remove hdisk1 from the rootvg volume group (a sketch of these commands follows).
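
A minimal sketch, assuming rootvg is mirrored onto hdisk1 exactly as described:
#unmirrorvg rootvg hdisk1 # remove the mirror copies held on hdisk1
#reducevg rootvg hdisk1 # remove hdisk1 from the rootvg volume group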

• Once the disk has been removed from the rootvg, remove it from the LPAR by executing the following:
#rmdev -dl hdisk1

• Now execute the bosboot command and update your bootlist, since hdisk1 has been removed and is no longer part of the system:
#bosboot -a
#bootlist -o -m normal hdisk0

VIO Server (where hdisk1 was created):

• Remove the device from the VIO Server using the rmdev command:
#rmdev -dev bckcnim_hdisk1

• Next you will need to access the AIX OS part of the VIO Server by executing:
#oem_setup_env

• Now you have two options: you can extend the existing logical volume, or create a new one if there is enough disk space left. In this example I will be using bckcnim_lv. Use smitty extendlv to add additional LPs, or smitty mklv to create a new logical volume (a command-line sketch follows).
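
A minimal command-line equivalent of the extend option (the LP count of 16 is an assumption; size it to your needs):
#extendlv bckcnim_lv 16 # add 16 more logical partitions to the LV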

• Exit out of oem_setup_env by just typing "exit" at the OS prompt.

• Now that you are back within the restricted shell of the VIO Server, execute the following command. You can use whatever device name you wish. I used bckcnim_hdisk1 just for example purposes:
#mkvdev -vdev bckcnim_lv -vadapter <vhost#> -dev bckcnim_hdisk1

POWER5 LPAR:
• Execute cfgmgr to add the new hdisk1 back to the LPAR:
#cfgmgr

• Add hdisk1 back to the rootvg volume group using the extendvg command or smitty extendvg.

• Mirror rootvg using the mirrorvg command or smitty mirrorvg.

• Let the mirror synchronization run in the background and wait for it to complete. This is very important and must finish before you deal with the logical volume that backs hdisk0 (a hedged check is sketched below).
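
A minimal sketch of the completion check (a fully synchronized rootvg reports zero stale physical partitions):
#lsvg rootvg | grep -i stale # wait until STALE PPs shows 0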

• Now execute bosboot again and update the bootlist:
#bosboot -a
#bootlist -o -m normal hdisk0 hdisk1

Friday, August 15, 2008

Recovering a Failed VIO Disk

Here is a recovery procedure for replacing a failed client disk on a Virtual I/O
server. It assumes the client partitions have mirrored (virtual) disks. The
recovery involves both the VIO server and its client partitions. However,
it is non-disruptive for the client partitions (no downtime), and may be
non-disruptive on the VIO server (depending on disk configuration). This
procedure does not apply to RAID5 or SAN disk failures.

The test system had two VIO servers and an AIX client. The AIX client had two
virtual disks (one disk from each VIO server). The two virtual disks
were mirrored in the client using AIX's mirrorvg. (The procedure would be
the same on a single VIO server with two disks.)

The software levels were:


p520: Firmware SF230_145; VIO Version 1.2.0; Client: AIX 5.3 ML3


We had simulated the disk failure by removing the client LV on one VIO server. The
padmin commands to simulate the failure were:


#rmdev -dev vtscsi01 # The virtual scsi device for the LV (lsmap -all)
#rmlv -f aix_client_lv # Remove the client LV


This caused "hdisk1" on the AIX client to go "missing" (check with "lsvg -p rootvg";
note that "lspv" will not show the disk failure, only the disk status at the last boot).

The recovery steps included:

VIO Server


Fix the disk failure, and restore the VIOS operating system (if necessary).

#mklv -lv aix_client_lv rootvg 10G # recreate the client LV
#mkvdev -vdev aix_client_lv -vadapter vhost1 # connect the client LV to the appropriate vhost


AIX Client


#cfgmgr # discover the new virtual hdisk2
#replacepv hdisk1 hdisk2 # rebuild the mirror copy on hdisk2
#bosboot -ad /dev/hdisk2 # add the boot image to hdisk2
#bootlist -m normal hdisk0 hdisk2 # add the new disk to the bootlist
#rmdev -dl hdisk1 # remove the failed hdisk1


The "replacepv" command assigns hdisk2 to the volume group, rebuilds the mirror, and
then removes hdisk1 from the volume group.

As always, be sure to test this procedure before using in production.

Configuring MPIO for the virtual AIX client

Virtual SCSI server adapter and virtual target device: the mkvdev command will
error out if the same name is used for both, as the following attempt shows.

$ mkvdev -vdev hdiskpower0 -vadapter vhost0 -dev hdiskpower0
Method error (/usr/lib/methods/define -g -d):
0514-013 Logical name is required.
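
A working invocation simply uses a distinct virtual target device name (vhdiskpower0 here is an arbitrary example name, not a convention):
$ mkvdev -vdev hdiskpower0 -vadapter vhost0 -dev vhdiskpower0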

The reserve attribute is named differently for an EMC device than for an ESS or
FAStT storage device: it is “reserve_lock”.

Run the following command as padmin to check the value of the attribute.
$ lsdev -dev hdiskpower# -attr reserve_lock

Run the following command as padmin to change the value of the attribute.
$ chdev -dev hdiskpower# -attr reserve_lock=no

Commands to change the Fibre Channel adapter attributes: change the following attributes of the fscsi# device, fc_err_recov to “fast_fail” and dyntrk to “yes”:

$ chdev -dev fscsi# -attr fc_err_recov=fast_fail dyntrk=yes -perm

The reason for changing the fc_err_recov to “fast_fail” is that if the Fibre
Channel adapter driver detects a link event such as a lost link between a storage
device and a switch, then any new I/O or future retries of the failed I/Os will be
failed immediately by the adapter until the adapter driver detects that the device
has rejoined the fabric. The default setting for this attribute is “delayed_fail”.
Setting the dyntrk attribute to “yes” makes AIX tolerate cabling changes in the
SAN.
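
A minimal sketch for verifying both settings afterwards (fscsi0 is an assumption; lsdev takes one attribute per call, as shown earlier for reserve_lock):
$ lsdev -dev fscsi0 -attr fc_err_recov
$ lsdev -dev fscsi0 -attr dyntrk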

The VIOS needs to be rebooted for fscsi# attributes to take effect.
