AIX is short for Advanced Interactive eXecutive. AIX is IBM's UNIX operating system for the RS/6000, pSeries and the later p5 and p5+ systems, a product line now branded "System p". In the name AIX 5L, the "5L" stands for version 5 with Linux affinity. AIX and the RS/6000 were launched on the 14th of February, 1990 in London. At the time of writing, the latest release of AIX is version 6; the AIX 7 beta is due in August 2010, along with the new POWER7 hardware range. Today, IBM PureFlex System is IBM's converged-infrastructure offering, whose Power compute nodes also run AIX.
Saturday, April 6, 2013
Wednesday, October 26, 2011
The IBM AIX 5L Version 5.3 has been withdrawn from the market, effective April 29, 2011.
Highlights
• Well-proven, scalable, open, standards-based UNIX® operating system
• IBM POWER5™ technology and Virtualization Engine™ enablement help deliver power, increase utilization, ease administration and reduce total cost
• Rock-solid security and availability to help protect IT assets and keep businesses running
• Linux® affinity enables fast, cost-effective development of cross-platform applications
Accept no limits, make no compromises
In today’s on demand world, clients need a safe, secure, stable and flexible operating environment to run their organizations. That is why more and more businesses large and small are choosing AIX 5L™ for POWER™, IBM’s industrial-strength UNIX operating system (OS), for their mission-critical applications. With its proven scalability, reliability and manageability, the AIX 5L OS is an excellent choice for building a flexible IT infrastructure and is the only UNIX operating system that leverages IBM experience in building solutions that run businesses worldwide. And only one UNIX operating system leads the industry in vision and delivery of advanced support for 64-bit scalability, virtualization and affinity for Linux. That operating system is AIX 5L.
AIX 5L is an open, standards-based OS that conforms to The Open Group’s Single UNIX Specification Version 3. It provides fully integrated support for 32- and 64-bit applications. AIX 5L supports the IBM System p5™, IBM eServer™ p5, IBM eServer pSeries®, IBM eServer i5 and IBM RS/6000® server product lines, as well as IBM BladeCenter® JS2x blades and IntelliStation® POWER and RS/6000 workstations. In addition to compliance with UNIX standards, AIX 5L includes commands and application programming interfaces to ease the porting of applications from Linux to AIX 5L.
AIX 5L Version 5.3 offers new levels of innovative self-management technologies. It continues to exploit current 64-bit system and software architecture to support advanced virtualization options, as well as IBM POWER5 and POWER5+™ processors with simultaneous multithreading capability for improved performance and system utilization. AIX 5L V5.3 is enhanced to support the IBM Virtualization Engine systems technology innovations available on POWER5 and POWER5+ systems, including Micro-Partitioning™ and Virtual I/O Server support.
AIX 5L V5.3 also includes the advanced distributed file system NFSv4. NFSv4 is an open, standards-based distributed file system that offers superior security, interoperability and scalability. AIX 5L was the first commercial UNIX vendor to include NFSv4. IBM includes advanced NFSv4 file system federation and replication management capabilities.
AIX 5L V5.3 provides improved system security, enhanced performance analysis and tuning tools and added system management tools. This AIX 5L release underscores IBM’s firm commitment to long-term UNIX innovations that deliver business value.
Friday, October 21, 2011
AIX videos Links
Welcome to the POWER6/POWER7 and AIX6 Hands-On Technical Product Demos
The idea is to provide "cook book" information to get you started with these interesting new technologies and to answer some basic questions:
•What is it about?
•How do I get started?
•What are a few typical first good uses I could start with?
•How easy is it to use?
•How could this save me time or money?
•Where can I get more information?
We hope you find these movies interesting and that they help you make a flying start.
Currently, the movies add up to 20.6 hours of free education on the hottest topics.
Quick links to the main sections:
1. POWER7 Processor
2. AIX Workload Partitions
3. AIX6 and AIX7 Operating System Features
4. POWER6 Processor Features
5. Integrated Virtualization Manager (IVM)
6. Other Cool & Interesting Stuff
7. IBM Systems Director 6 on AIX
8. Thirteen More Director 6 Movies
9. Back to POWER Basics
10. New Virtualisation Features
11. PowerHA SystemMirror 7.1 for AIX
The latest movies added are:
•2nd Sept 2010 - How Systems Director Saves Me Time - movie 84
•12th Jan 2011 - Shared Storage Pools Hands-On - movie 85
•28th Jan 2011 - Shared Storage Pools Intro - movie 86
•March 2011 - HACMP = PowerHA SystemMirror
◦On the Techdocs website, the famous Shawn Bodily (Power/AIX Advanced Technical Skills, USA) presents four technical movies on AIX High Availability. These are in .mov format; I had to download Apple QuickTime to view them, as other players don't work (mostly audio problems).
•PowerHA SystemMirror 7.1 for AIX by HACMP Guru Alex Abderrazag - this includes a set of 6 movies:
1.PowerHA Introduction to a typical environment used in the movies
2.PowerHA Configuration via SMIT
3.PowerHA The "clmgr" command
4.PowerHA High Availability in Action
5.PowerHA SAN Communications
6.PowerHA Application Monitoring
Notes on getting the movies to work on your PC:
•These movies are in Windows Media Video format (.wmv) to make them small enough to watch over the internet or download, but this means some quality has been lost from the Audio Video Interleave (.avi) originals, which are 60 MB to 90 MB in size.
•When tested on some PCs it took 4 to 5 minutes to start a movie - please be patient and don't just assume it's broken - some browsers download the entire movie before they start playing it.
•Browsers handle the media file differently - some start Windows Media Player and some play it within the browser itself. I have also found that some auto-resize the movie to fit the window, so start the movie in a suitably sized browser window. The earlier movies were recorded at 1024x768; later ones at 800x600 but at higher quality. Sorry, but I would rather create new movies than regenerate them all to one size. If a movie does not fit your screen, the best fix is to upgrade your screen to at least 1280x1024.
•If all else fails, try to download the .wmv file and play it locally on your machine: right-click on the Download link below and select "Save Link as" or "Save Target as". This may reveal that your PC does not support this format (good luck sorting that out!).
•Linux workstation users - ideas please: can Linux handle the .wmv format? If so, how? A good alternative solution is also welcome.
◦I am told that Linux can indeed play this format - have a look at this website for hints Ubuntu - Installing Mplayer Codecs and installing OpenSUSE codecs is really simple too.
•Windows 7 users - some of the older movies do not work with the Windows 7 Media Player. This appears to be due to CODECs that were present in earlier Windows versions being missing from Windows 7 (send your comments to Microsoft). We fixed this by downloading the ACELP CODEC from http://www.voiceage.com/acelp_eval.php - strictly at your own risk. I installed the Vista-64 version as I run Windows 7, and then watched the movies via Windows Media Center (not the Player).
•For Windows 7 these movies have been remastered (August 2010) to fix Windows 7 problems of lack of certain CODECs found in earlier Windows versions, poor audio or hangs half way through: DFP, HMC7 Partition Mobility, Memory Keys, Partition Priority, CPU Pools and Monitoring Pools, Ganglia and PowerVM LX86.
•Feedback and further ideas for movies to Nigel Griffiths - nag at uk dot ibm dot com
Saturday, October 1, 2011
Migration of AIX LPAR from one hardware to other
Technote (FAQ)
Question
I would like to move, duplicate, or clone an AIX system onto another partition or hardware. How can I accomplish this?
Answer
This document describes the supported methods of duplicating, or cloning, an AIX instance to create new systems based on an existing one. It also describes methods known to us that are not supported and will not work.
Why Duplicate A System?
Duplicating an installed and configured AIX system has some advantages over installing AIX from scratch, and can be a faster way to get a new LPAR or system up and running.
Using this method, customized configuration files, additional AIX filesets, application configurations and tuning parameters can be set up once and then carried over to another system or partition.
Supported Methods
1. Cloning a system via mksysb backup from one system and restore to new system.
This can either be a mksysb backup of the rootvg from the source system to tape, DVD, or a file on a NIM server.
If the mksysb is going to be used to create a new machine, make sure to set 'recover devices' to NO when it is restored. This will ensure that devices existing on the source machine aren't added to the ODM of the target machine.
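As a rough illustration, the mksysb can be taken to a file and then defined as a NIM resource. This is a hedged sketch only; the file path and resource name below are invented for the example:
# On the source system (or to an NFS-mounted directory on the NIM master), create the mksysb
mksysb -i /export/mksysb/source_lpar.mksysb
# On the NIM master, define it as a mksysb resource for the restore
nim -o define -t mksysb -a server=master -a location=/export/mksysb/source_lpar.mksysb source_mksysb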
2. Using the alt_disk_copy command.
If you have spare disks in your system (or disks that you can attach to one system, load with a copy of rootvg, and then move to a new system), this is a good way to copy the rootvg onto them.
The basic command to do this would be:
# alt_disk_copy -BOd hdiskx
The -B option tells alt_disk_copy not to change the bootlist to this new copy of rootvg, and the -O option performs a device reset so that user-defined device configurations are not carried over in the customized ODM database.
From the alt_disk_copy man page:
-O
Performs a device reset on the target altinst_rootvg. This causes
the alternate disk install to not retain any user-defined device
configurations. This flag is useful if the target disk or disks
become the rootvg of a different system (such as in the case of
logical partitioning or system disk swap).
When the disks containing this altinst_rootvg are moved to another host and then booted from, AIX will run cfgmgr and probe for any hardware, adding ODM information at that time.
3. Using alt_disk_mksysb to install a mksysb image on another disk.
Using this technique a mksysb image is first created, either to a file, on CD or DVD or tape.
Then that mksysb image is restored to unused disks in the current system using alt_disk_mksysb, again using the -O option to perform a device reset.
After this, the disks can be removed and placed in a new system, or rezoned over the SAN (fibre) to a new system, and the rootvg booted up there.
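A minimal sketch of this method, assuming the mksysb image is already in a file and hdisk1 is the unused target disk (both names are examples only):
# Restore the mksysb onto hdisk1 with a device reset (-O), as described above
alt_disk_mksysb -m /export/mksysb/source_lpar.mksysb -d hdisk1 -O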
Advanced Techniques
1. Live Partition Mobility
Using the Live Partition Mobility feature, you can migrate a running AIX LPAR and its applications from one physical server to another while they remain up and running. Please see the AIX manual for further information:
http://publib.boulder.ibm.com/infocenter/aix/v6r1/topic/com.ibm.aix.baseadmn/doc/baseadmndita/lpm_overview.htm
2. Higher Availability Using SAN Services
There are methods not described here, which have been documented by DeveloperWorks.
Please refer to the document "AIX higher availability using SAN services" for details.
http://www.ibm.com/developerworks/aix/library/au-AIX_HA_SAN/index.html
Unsupported Methods
1. Using a bitwise copy of a rootvg disk to another disk.
This bitwise copy can be a one-time snapshot copy such as flashcopy, from one disk to another, or a continuously-updating copy method, such as Metro Mirror.
While these methods will give you an exact duplicate of the installed AIX operating system, the copy of the OS may not be bootable.
2. Removing the rootvg disks from one system and inserting into another.
This also applies to re-zoning SAN disks that contain the rootvg so another host can see them and attempt to boot from them.
Why don't these methods work?
The reason for this is there are many objects in an AIX system that are unique to it; Hardware location codes, World-Wide Port Names, partition identifiers, and Vital Product Data (VPD) to name a few. Most of these objects or identifiers are stored in the ODM and used by AIX commands.
If a disk containing the AIX rootvg in one system is copied bit-for-bit (or removed) and then inserted into another system, the firmware in the second system will describe an entirely different device tree than the AIX ODM expects to find, because it is now operating on different hardware. Devices that were previously seen will show as missing or removed, and the system will typically fail to boot, halting with LED 554 (unknown boot disk).
Thursday, October 21, 2010
Moving file systems from one volume group to another
ATTENTION: Make sure a full backup exists of any data you intend to migrate before using these procedures.
In AIX, storage allocation is performed at the volume group level. Storage cannot span volume groups. If space within a volume group becomes constrained, then space that is available in other volume groups cannot be used to resolve storage issues.
The solution to this problem is to add more physical volumes to the relevant volume group. This may not be an option in all environments. If other volume groups contain the required free space, the alternative is to move the required logical volumes to the desired volume group and expand them as needed.
The source logical volume can be moved to another volume group with the cplv command. The following steps achieve this.
ATTENTION: The logical volume should be inactive during these steps to prevent incomplete or inconsistent data. If the logical volume contains a mounted file system, then that file system should be unmounted first. If this logical volume is being used as a RAW storage device, then the application using this logical volume should close the device or be shut down.
1.Copy the source logical volume to the desired volume group with the cplv command.
For example, where myvg is the new volume group and mylv is the name of the user's logical volume, enter:
cplv -v myvg mylv
This will return the name of the new logical volume, such as lv00.
If this logical volume was being used for RAW storage, skip to step 6. If this is a JFS or JFS2 file system, proceed to step 2. Note that RAW storage devices should NOT use the first 512 bytes of the RAW device; this area is reserved for the LVCB, or logical volume control block. cplv will not copy the first 512 bytes of the RAW logical volume, but it will update fields in the new logical volume's LVCB.
2. All JFS and JFS2 file systems require a log device. This will be a logical volume of type jfslog (for JFS) or jfs2log (for JFS2). Run lsvg -l against the new volume group to check whether a suitable log device already exists in it; if one does, note its name and skip to step 3.
With a JFS2 filesystem, you also have the option of using an inline log. With inline logs, the jfs2log exists on the filesystem itself. After the cplv command has been run on a JFS2 filesystem with an inline log, run:
logform /dev/lvname
You should receive a message about formatting the inline log. If you do not receive a message about an inline log, then this filesystem does not use a JFS2 inline log, and you should treat it as a regular JFS2 filesystem. After answering y to formatting the inline log, continue to step 3.
To make a new JFS log, where myvg is the name of the new volume group, enter:
mklv -t jfslog myvg 1
To make a new JFS2 log, enter: mklv -t jfs2log myvg 1
This will return a new logical volume of either type jfslog or jfs2log, such as loglv00. This new logical volume will need to be formatted with the logform command in order to function properly as either a JFS or JFS2 log. For example:
logform /dev/loglv00
Answer yes to destroy.
3. Use the chfs command to change the filesystem to reference the new logical volume and a log device that exists in the new volume group.
For example, where myfilesystem is the name of the user's filesystem, enter:
chfs -a dev=/dev/lv00 -a log=/dev/loglv00 /myfilesystem
With inline logs on JFS2 filesystems this command is also different:
chfs -a dev=/dev/lv00 -a log=INLINE /myfilesystem
4.Run fsck to ensure filesystem integrity. Enter:
fsck -p /dev/lv00
NOTE: It is common to receive errors after running fsck -p /dev/lvname prior to mounting the filesystem. These errors are due to a known bug that development is currently aware of and which will be resolved in a future release of AIX. Once the filesystem is mounted, a future fsck with the filesystem unmounted should no longer produce an error.
5. Mount the file system.
For example, where myfilesystem is the name of the user's file system, enter:
mount /myfilesystem
At this point, the migration is complete, and any applications or users can now access the data in this filesystem. To change the logical volume name, proceed to the following step.
NOTE: If you receive errors from the preceding step, do not continue. Contact your AIX support center.
6.Remove the source logical volume with the rmlv command.
For example,
where mylv is the name of the user's logical volume, enter:
rmlv mylv
7. Rename the new logical volume and reset any needed attributes with the chlv or chmod commands. In order to rename the logical volume, the filesystem or raw logical volume must be in a closed state (unmounted).
For example, where mylv is the new name you wish to change lv00 to be, enter:
chlv -n mylv lv00
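Putting the steps together, here is a minimal end-to-end sketch for a JFS2 filesystem with an external log, using the example names from above (myvg, mylv, /myfilesystem; lv00 and loglv00 stand for whatever names cplv and mklv actually return):
umount /myfilesystem                 # the source filesystem must be inactive
cplv -v myvg mylv                    # step 1: copy; returns the new LV name, e.g. lv00
mklv -t jfs2log myvg 1               # step 2: new log LV, e.g. loglv00
logform /dev/loglv00                 #         answer y to format the log
chfs -a dev=/dev/lv00 -a log=/dev/loglv00 /myfilesystem     # step 3
fsck -p /dev/lv00                    # step 4
mount /myfilesystem                  # step 5
rmlv mylv                            # step 6: remove the source LV
umount /myfilesystem                 # the new LV must be closed before renaming
chlv -n mylv lv00                    # step 7: give the new LV the original name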
Logical volumes specific to rootvg
The following logical volumes and file systems are specific to the rootvg volume group and cannot be moved to other volume groups:
Logical Volume   File System or Description
------------------------------------------------------
hd2              /usr
hd3              /tmp
hd4              / (root)
hd5              boot logical volume
hd6              paging space
hd8              jfslog for rootvg
hd9var           /var
Saturday, September 11, 2010
Error when installing software: bosboot verification failure
When we tried to install software on an AIX box, the installation failed with a "bosboot verification failed" error.
We checked and found that /dev/ipldevice was not present. This file is normally a hard link to the boot disk device (in our case /dev/hdisk0).
So recreate /dev/ipldevice as a hard link to /dev/hdisk0:
ln /dev/hdisk0 /dev/ipldevice
and then rebuild the boot image:
bosboot -ad /dev/hdisk0
After that, the software installation worked.
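For reference, a consolidated sketch of the check and fix described above (assuming hdisk0 is the boot disk, as in our case):
lsvg -p rootvg                     # confirm which disk holds rootvg (here hdisk0)
ls -l /dev/ipldevice /dev/hdisk0   # /dev/ipldevice should exist as a hard link to the boot device
ln /dev/hdisk0 /dev/ipldevice      # recreate the hard link if it is missing
bosboot -ad /dev/hdisk0            # rebuild the boot image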
IBM VIOS Installation over NIM
Prerequisites
IBM VIOS Installation DVD
IBM AIX Installation CD Disk 1 (I used AIX 5.3)
AIX NIM Server (I used AIX 5.3)
Power Series 5
Step 1. Prepare Installation files:
AIX file size limit: You must ensure that your file size security limitation isn't going to stop you from copying your mksysb image from the CD-ROM to your hard drive. On your NIM server, go to the /etc/security directory and edit the limits file. Change fsize to -1, or to something large enough to ensure that the mksysb image will copy over. You will need to reboot your system for this to take effect, or you can log out and log in again.
cd /etc/security
vi limits          (set: fsize = -1)
reboot, or log out and log back in
Insert and Mount VIOS DVD
smitty mountfs
  FILE SYSTEM name: /dev/cd0
  DIRECTORY over which to mount: /cdrom
  TYPE of file system: cdrfs
  Mount as a READ-ONLY system? yes
(or: mkdir /cdrom ; mount -v cdrfs -o ro /dev/cd0 /cdrom)
Copy installation files from cdrom:
mkdir /export/VIOS
cd /cdrom/nimol/ioserver_res
-rw-r--r-- 1 root system 11969032 Jul 05 07:07 booti.chrp.mp.ent.Z
-rw-r--r-- 1 root system 951 Jul 05 07:07 bosinst.data
-rw-r--r-- 1 root system 40723208 Jul 05 07:07 ispot.tar.Z
lrwxrwxrwx 1 root system 38 Jul 05 07:07 mksysb -> ../../usr/sys/inst.images/mksysb_image
cp bosinst.data /export/VIOS
cd /cdrom/usr/sys/inst.images
-rw-r--r-- 1 root system 1101926400 Jul 05 06:52 mksysb_image
cp mksysb_image /export/VIOS
For newer versions of VIOS, such as 1.5.2 and 2.1, you also need to append the second mksysb image:
cp mksysb_image2 /export/VIOS
cd /export/VIOS
cat mksysb_image2 >> mksysb_image
Step 2. Define NIM Resources:
Define the mksysb_image resource object
nim -o define -t mksysb -a location=/export/VIOS/mksysb_image -a server=master vios_mksysb
Define the SPOT resource object
mkdir /export/VIOS/VIOSSPOT
nim -o define -t spot -a server=master -a location=/export/VIOS/VIOSSPOT -a source=vios_mksysb vios_spot
# nim -o define -t spot -a server=master -a location=/export/VIOS/VIOSSPOT -a source=vios_mksysb vios_spot
Creating SPOT in "/export/VIOS/VIOSSPOT" on machine "master" from "vios_mksysb"
...
Restoring files from BOS image. This may take several minutes ...
Checking filesets and network boot images for SPOT "vios_spot".
This may take several minutes ...
Define the bosinst resource object
nim -o define -t bosinst_data -a location=/export/VIOS/bosinst.data -a server=master vios_bosinst
Define the lpp_source resource object. (You can skip this step if you wish, as the lpp_source only provides extra filesets; you should be able to install and run VIOS without an lpp_source, just as with AIX. Also note that each VIOS version is based on a different AIX version, so you need to find out which AIX version is required to create the lpp_source. Run lsnim -l vios_mksysb and you will see the AIX version; you need that CD to create the lpp_source. For example, for VIOS 1.5 you need AIX 5.3 TL7 CD1, for 1.5.2 you need AIX 5.3 TL8 CD1, and for 2.1 you need AIX 6.1 TL2. Always run the lsnim -l command on the mksysb or the SPOT you just created to find out which AIX CD you need.)
Insert the first disk of the AIX installation media. NOTE: When trying to use the VIOS lpp_source to NIM an LPAR, you get a missing simages error. So instead we will use the AIX installation CDs, which work just fine.
umount /cdrom
mkdir /export/VIOS/lppsource
nim -o define -t lpp_source -a source=/dev/cd0 -a server=master -a location=/export/VIOS/lppsource vios_lppsource
Step 3. Create VIOS LPAR:
NOTE: I don't have any pictures of this part of the setup, but it should be obvious how this is done. NOTE: I give specifications for a typical VIOS server; your environment may vary.
On the Power 5 HMC, right click on Partitions and select Create -> Logical Partition
Enter a Partition ID and a Partition name. Under Partition environment, select Virtual I/O server.
Select Next.
Configure the workload group if required; otherwise select No. Select Next.
Enter a Profile Name. Select Next.
Select the amounts of Minimum memory, Desired memory, and Maximum memory. I usually use 2 GB for all three. Select Next.
Select a Processing mode. I use Dedicated. Select Next.
If using Dedicated, enter the Minimum processors, Desired processors, and Maximum processors. I usually use 4 processors for all three. Select Next.
Select your Hardware Configuration that you wish to use for your environment. Select Next.
Configure I/O pools - Leave these as the default. Select Next.
Configure Virtual I/O adapters - I typically configure this part later. Select Next.
Configure Power Controlling Partitions - Leave these as the default settings. Select Next.
Optional Settings - Leave these as the default settings. Select Next.
Verify settings and Select Finish.
Step 4. NIM VIOS LPAR:
On the NIM server, start NIM: smit nim
Network Installation Management
Move cursor to desired item and press Enter.
Configure the NIM Environment
Perform NIM Software Installation and Maintenance Tasks
Perform NIM Administration Tasks
Create IPL ROM Emulation Media
Esc+1=Help Esc+2=Refresh Esc+3=Cancel Esc+8=Image
Esc+9=Shell Esc+0=Exit Enter=Do
Select Perform NIM Software Installation and Maintenance Tasks
Perform NIM Software Installation and Maintenance Tasks
Move cursor to desired item and press Enter.
Install and Update Software
List Software and Related Information
Software Maintenance and Utilities
Alternate Disk Installation
Manage Diskless/Dataless Machines
Esc+1=Help Esc+2=Refresh Esc+3=Cancel Esc+8=Image
Esc+9=Shell Esc+0=Exit Enter=Do
Select Install and Update Software
Install and Update Software
Move cursor to desired item and press Enter.
Install the Base Operating System on Standalone Clients
Install Software
Update Installed Software to Latest Level (Update All)
Install Software Bundle
Update Software by Fix (APAR)
Install and Update from ALL Available Software
Esc+1=Help Esc+2=Refresh Esc+3=Cancel Esc+8=Image
Esc+9=Shell Esc+0=Exit Enter=Do
Select Install the Base Operating System on Standalone Clients
Install and Update Software
Move cursor to desired item and press Enter.
Install the Base Operating System on Standalone Clients
Install Software
Update Installed Software to Latest Level (Update All)
Install Software Bundle
Update Software by Fix (APAR)
Install and Update from ALL Available Software
+--------------------------------------------------------------------------+
Select a TARGET for the operation
Move cursor to desired item and press Enter.
reg-05 machines standalone
Esc+1=Help Esc+2=Refresh Esc+3=Cancel
Esc+8=Image Esc+0=Exit Enter=Do
Es /=Find n=Find Next
Es+--------------------------------------------------------------------------+
Select the machine to install VIOS on. If nothing appears, make sure you have created a standalone system.
Install and Update Software
Move cursor to desired item and press Enter.
Install the Base Operating System on Standalone Clients
Install Software
Update Installed Software to Latest Level (Update All)
Install Software Bundle
Update Software by Fix (APAR)
Install and Update from ALL Available Software
+--------------------------------------------------------------------------+
Select the installation TYPE
Move cursor to desired item and press Enter.
rte - Install from installation images
mksysb - Install from a mksysb
spot - Install a copy of a SPOT resource
Esc+1=Help Esc+2=Refresh Esc+3=Cancel
Esc+8=Image Esc+0=Exit Enter=Do
Es /=Find n=Find Next
Es+--------------------------------------------------------------------------+
Select mksysb - Install from a mksysb
Install and Update Software
Move cursor to desired item and press Enter.
Install the Base Operating System on Standalone Clients
Install Software
Update Installed Software to Latest Level (Update All)
Install Software Bundle
Update Software by Fix (APAR)
Install and Update from ALL Available Software
+--------------------------------------------------------------------------+
Select the MKSYSB to use for the installation
Move cursor to desired item and press Enter.
vios_mksysb resources mksysb
Esc+1=Help Esc+2=Refresh Esc+3=Cancel
Esc+8=Image Esc+0=Exit Enter=Do
Es /=Find n=Find Next
Es+--------------------------------------------------------------------------+
Select the vios_mksysb resource.
Install and Update Software
Move cursor to desired item and press Enter.
Install the Base Operating System on Standalone Clients
Install Software
Update Installed Software to Latest Level (Update All)
Install Software Bundle
Update Software by Fix (APAR)
Install and Update from ALL Available Software
+--------------------------------------------------------------------------+
Select the SPOT to use for the installation
Move cursor to desired item and press Enter.
vios_spot resources spot
Esc+1=Help Esc+2=Refresh Esc+3=Cancel
Esc+8=Image Esc+0=Exit Enter=Do
Es /=Find n=Find Next
Es+--------------------------------------------------------------------------+
Select vios_spot resource.
Select the vios_lppsource resource.
Select the vios_bosinst resource.
Install the Base Operating System on Standalone Clients
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[TOP] [Entry Fields]
* Installation Target reg-05
* Installation TYPE mksysb
* SPOT vios_spot
LPP_SOURCE [vios_lppsource] +
MKSYSB vios_mksysb
BOSINST_DATA to use during installation [vios_bosinst] +
IMAGE_DATA to use during installation [] +
RESOLV_CONF to use for network configuration [] +
Customization SCRIPT to run after installation [] +
Customization FB Script to run at first reboot [] +
ACCEPT new license agreements? [no] +
Remain NIM client after install? [yes] +
PRESERVE NIM definitions for resources on [yes] +
this target?
FORCE PUSH the installation? [no] +
[MORE...31]
Esc+1=Help Esc+2=Refresh Esc+3=Cancel Esc+4=List
Esc+5=Reset Esc+6=Command Esc+7=Edit Esc+8=Image
Esc+9=Shell Esc+0=Exit Enter=Do
NOTE: Setting the "Remain as NIM client after install" as YES can cause errors when configuring your shared ethernet adapters after install.
Press Enter to start the NIM process.
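For reference, the same installation can be started from the command line instead of SMIT. The following is a hedged sketch using the resource names defined earlier (reg-05 is the example client; adjust the attributes to your environment):
nim -o bos_inst -a source=mksysb -a mksysb=vios_mksysb -a spot=vios_spot \
    -a lpp_source=vios_lppsource -a bosinst_data=vios_bosinst \
    -a accept_licenses=yes reg-05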
Assigning the original PVID to an hdisk in AIX
--------
I am using AIX 5.3L with EMC Symmetrix storage, establishing BCV's and then
splitting them and mounting them to the same host. I can mount the BCV's to
the same host using the 'recreatevg' command, but the problem I'm having is
when I'm restoring a BCV back to the standard. When the BCV is restored and
I do an 'lsvg vg1' where vg1's original PV was hdiskpower33 (the standard) it
is now hdiskpower35 (the BCV). I do not want this to happen and suspect the
problem is that the BCV's PVID was changed during the recreatevg. I want to
assign the original PVID to the BCV so that it will not remove hdiskpower33
from vg1. If I do 'rmdev -dl hdiskpower35' and then do 'lsvg -p vg1' I get
an error stating that the PVID was not found, and hdiskpower33 is not listed
as being a member of the vg1 volume group. I've tried doing:
chdev -l hdiskpower35 -a pv={original pvid}
but am told it is an illegal parameter. Is there another way to do this?
Solution:
---------
Use at your own risk:
1) BACKUP old disk critical information
# dd if=/dev/hdisk9 of=/tmp/hdisk9.save bs=4k count=1
If something were to go wrong and the head information got damaged
use the following to RECOVER the original PVID and head information
RECOVERY
# dd if=/tmp/hdisk9.save of=/dev/hdisk9 bs=4k count=1
2) Find the original PVID. This might be seen with lspv, importvg, or
varyonvg. Our example original PVID is "0012a3e42bc908f3".
# lqueryvg -Atp /dev/hdisk9
...
Physical: 0012a3e42bc908f3 2 0
00ffffffc9cc5f99 1 0
...
3) Verify that the disk sees an invalid PVID. The first 2 data fields
of offset 80 contain the PVID.
# lquerypv -h /dev/hdisk9 80 10
00000080   00001155 583CD4B0 00000000 00000000  ...UX<..........
           ^^^^^^^^^^^^^^^^^ (PVID)
4) Translate the ORIGINAL PVID into its octal version. Take every 2 hex digits of the PVID and translate them to octal. This can be done by hand, with a calculator, a script, or a web page.
0012a3e42bc908f3 -> 00 12 a3 e4 2b c9 08 f3
Octal version -> 000 022 243 344 053 311 010 363
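The translation in step 4, and the echo string used in step 5, can also be generated with a small shell helper. This is a sketch only, assuming ksh on AIX and a 16-hex-digit PVID:
#!/usr/bin/ksh
pvid=0012a3e42bc908f3                      # the ORIGINAL PVID found in step 2
str=""
while [ -n "$pvid" ]; do
  byte=${pvid%${pvid#??}}                  # next two hex digits
  pvid=${pvid#??}
  oct=$(printf '%03o' $((16#$byte)))       # hex byte -> 3-digit octal
  str="${str}\\0${oct}"
done
print -r -- "$str"                         # e.g. \0000\0022\0243\0344\0053\0311\0010\0363
echo "$str\c" >/tmp/oldpvid                # same echo as step 5, now generated
od -x /tmp/oldpvid                         # verify, as in step 6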
5) Write the binary version of the PVID to a file by using the octal
values. Each octal char is lead with a backslash-Zero "\0". Do
not use spaces or any other characters except for the final \c to
keep from issuing a hard return.
# echo "\0000\0022\0243\0344\0053\0311\0010\0363\c" >/tmp/oldpvid
6) Verify that the binary PVID was written correctly. The original
hex PVID should be seen AND the final address should be "0000010"
If EITHER of these is incorrect, try again, make sure there are no
spaces in the echo and the echo ends with a "\c".
# od -x /tmp/oldpvid
0000000 0012 a3e4 2bc9 08f3
0000010
7) Restore the PVID to the disk. You should see 8 records in and out.
If there are more or fewer, restore the original 4K block by using
the recovery method in step 1.
# cat /tmp/oldpvid | dd of=/dev/hdisk9 bs=1 seek=128
8+0 records in.
8+0 records out.
8) Verify that the PVID was written correctly
#lquerypv -h /dev/hdisk9 80 10
00000080 0012A3E4 2BC908F3 00000000 00000000 ....+...........
9) Reconfigure the disk definitions on all systems attaching to that disk.
The ODM information for that drive will NOT be updated until the
disk is removed and reconfigured. Until that reconfiguration is done, commands
like `lspv` will still report incorrect information.
Script to delete failed MPIO paths in AIX
# Note: the outer loop header was missing from the original post; iterating over
# all disks with lsdev is an assumption - adjust the disk list to suit.
for disk in `lsdev -Cc disk -F name`
do
  for path in `lspath -l $disk -F "status connection" | grep Failed | awk '{ print $2 }'`
  do
    echo $disk
    rmpath -l $disk -w $path -d
  done
done
Sunday, August 22, 2010
IBM AIX 7 Open Beta Program
Welcome to the open beta for IBM’s premier UNIX operating system, AIX 7. AIX 7 is binary compatible with previous releases of AIX including AIX 6, 5.3, 5.2 and 5.1. AIX 7 extends the leadership features of AIX to include exciting new capabilities for vertical scalability, virtualization and manageability.
The open beta for AIX 7 is intended to give our clients, independent software vendors and business partners the opportunity to gain early experience with this new release of AIX prior to general availability. This open beta can be run on any Power Systems, IBM System p or eServer pSeries system that is based on POWER4, PPC970, POWER5, POWER6, or POWER7 processors.
Key features of AIX 7 included in this beta:
Virtualization
AIX 5.2 Workload Partitions for AIX 7 - This new enhancement to WPAR technology allows a client to back up an LPAR running AIX V5.2 and restore it into a WPAR running on AIX 7 on POWER7. This capability is designed to allow clients to easily consolidate smaller workloads running on older hardware onto larger, more efficient POWER7 systems. Although this capability is designed specifically for POWER7, it can be tested on older POWER processors during the open beta. Please note that this capability will only work with AIX 5.2.
Support for Fibre Channel adapters in a Workload Partition - AIX 7 includes support to allow a physical or virtual Fibre Channel adapter to be assigned to a WPAR. This allows a WPAR to directly own SAN devices, including tape devices using the "atape" device type. This capability is designed to expand the capabilities of a Workload Partition and simplify management of storage devices.
Security
Domain Support in Role Based Access Control - This enhancement to RBAC allows a security policy to restrict administrative access to a specific set of similar resources, such as a subset of the available network adapters. This allows IT organizations that host services for multiple tenants to restrict administrator access to only the resources associated with a particular tenant. Domains can be used to control access to volume groups, filesystems, files, and devices (in /dev).
Manageability
NIM thin server - Network Installation Management (NIM) support for thin servers has been enhanced to support NFSv4 and IPv6. Thin servers are diskless or dataless AIX instances that boot from a common AIX image via NFS.
Networking
Etherchannel enhancements - Support for 802.3ad EtherChannel has been enhanced to ensure that a link is Link Aggregation Control Protocol (LACP) ready before sending data packets.
Product plans referenced in this document may change at any time at IBM’s sole discretion, and are not intended to be a commitment to future product or feature availability. All statements regarding IBM future direction, plans, product names or intent are subject to change or withdrawal without notice and represent goals and objectives only. All information is provided on an as is basis, without any warranty of any kind.
Links for AIX 7.0
IBM AIX 7 Open Beta Program
The following links provide additional valuable resources related to this AIX 7 Open Beta.
AIX 7 On-line Information Center
The official IBM statement on AIX binary compatibility
IBM articles, tutorials, and technical resources for AIX and UNIX users
A full range of IBM POWER System solutions to match your business needs
A full range of IBM POWER System hardware to match your business needs
Discover the POWER of IBM POWER System servers and solutions
PartnerWorld for AIX has resources and support for IBM Business Partners looking to exploit and learn about AIX
A one stop shop to learn about the benefits, resources and support available to IBM Business Partners for IBM Systems, servers and storage
New Features in AIX Version 7
IBM announced AIX version 7. http://www-03.ibm.com/systems/power/software/aix/v71/preview.html
Several new features were mentioned in the launch, but there were two new features that I found particularly interesting:
- AIX 5.2 WPARs for AIX 7
- Cluster Aware AIX
AIX 5.2 WPARs for AIX 7
In AIX version 7, administrators will now have the capability to create Workload Partitions (WPARs) that can run AIX 5.2, inside an AIX 7 operating system instance. This will be supported on the POWER7 server platform. This is pretty cool. IBM have done this to allow some customers, that are unable to migrate to later generations of AIX and Power, to move to POWER7 whilst keeping their legacy AIX 5.2 systems operational. So for those clients that MUST stay on AIX 5.2 (for various reasons such as Application support) but would like to run their systems on POWER7, this feature may be very attractive. It will help to reduce the effort required when consolidating older AIX 5.2 systems onto newer hardware. It may also reduce some of the risk associated with migrating applications from one version of the AIX operating system to another.
To migrate an existing AIX 5.2 system to an AIX 7 WPAR, administrators will first need to take a mksysb of the existing system. Then they can simply restore the mksysb image inside the AIX 7 WPAR. IBM will also offer limited defect and how-to support for the AIX 5.2 operating system in an AIX 7 WPAR. These WPARs can, of course, be managed via IBM Systems Director with the Workload Partitions Manager plug-in.
The following figure provides a visualization of how these AIX 5.2 systems will fit into an AIX 7 WPAR. The WPARs in blue are native AIX 7 WPARs, while the WPARs in orange are AIX 5.2 WPARs running in the same AIX 7 instance. Pretty amazing really!
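If the released syntax follows the preview material, restoring such a backup into a versioned WPAR might look roughly like the sketch below. Treat it as an assumption: the -C (versioned WPAR) and -B (backup image) flags and the WPAR name are my reading of the preview, so check the AIX 7 documentation before relying on them.
# On the AIX 5.2 system: back up rootvg to a file
mksysb -i /backup/aix52_system.mksysb
# On the AIX 7 host: create an AIX 5.2 versioned WPAR from that backup (flags assumed)
mkwpar -n aix52wpar -C -B /backup/aix52_system.mksysb
startwpar aix52wpar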
Cluster Aware AIX
Another very interesting feature of AIX 7 is a new technology known as “Cluster Aware AIX”. Believe it or not, administrators will now be able to create a cluster of AIX systems using features of the new AIX 7 kernel. IBM have introduced this “in built” clustering to the AIX OS in order to simplify the configuration and management of highly available clusters. This new AIX clustering has been designed to allow for:
- The easy creation of clusters of AIX instances for scale-out computing or high availability.
- Significantly simplified cluster configuration, construction, and maintenance.
- Improved availability by reducing the time to discover failures.
- Capabilities such as common device naming to help simplify administration.
- Built in event management and monitoring.
- A foundation for future AIX capabilities and the next generation of PowerHA SystemMirror.
This does not replace PowerHA, but it does change the way in which AIX traditionally integrates with cluster software like HACMP and PowerHA. A lot of the HA cluster functionality is now available in the AIX 7 kernel itself. However, the mature RSCT technology is still a component of the AIX and PowerHA configuration. I'm looking forward to reading more about this new technology and its capabilities.
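As a taste of what this might look like, the Cluster Aware AIX material describes a mkcluster command along these lines. This is a hedged sketch only (node and disk names are invented, and the flags reflect my reading of the preview), not definitive syntax:
# Create a two-node cluster with a shared repository disk
mkcluster -n myclust -m nodeA,nodeB -r hdisk10
lscluster -m        # list the cluster nodes and their state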
These are just two of the many features introduced in AIX 7. I’m eagerly looking forward to what these features and others mean for the future of the AIX operating system. It’s exciting to watch this operating system grow and strengthen over time. I can’t wait to get my hands on an AIX 7 system so that I can trial these new features.
And speaking of trialing AIX 7, there is good news. IBM plan on running another AIX Open Beta program for AIX 7 mid 2010. Just as they did with AIX Version 6, customers will be given the opportunity to download a beta version of AIX 7 and trial it on their own systems in their own environment. This is very exciting and I’m really looking forward to it.
=================================================================
Clustering infrastructure - AIX 7 (which some are calling Cluster Aware AIX) will be the first AIX release to provide built-in clustering. This promises to simplify high-availability application management with PowerHA SystemMirror.
It should be noted: This innovation isn’t being targeted as a replacement of PowerHA, but it’s supposed to change the way in which AIX integrates with it. Much of the PowerHA cluster functionality will now be available in the actual kernel. It’s simply designed to more easily construct and manage clusters for scale-out and high-availability applications.
Furthermore, AIX 7 will have features that will help reduce the time to discover failures, along with common device naming, to help systems administrators simplify cluster administration. It will also provide for event management and monitoring.
I am excited about this tighter integration between PowerHA and AIX, because anything that provides greater transparency between high-availability software and the OS further eases the burden of system administrators who architect, install and configure high-availability software.
Vertical scalability - AIX 7 will allow you to scale up to 1,024 threads or 256 cores in a single partition. This is simply outstanding; no other UNIX OS comes close to this.
Profile-based configuration management - IBM Systems Director enhancements will simplify AIX systems-configuration management. IBM is calling this facility profile-based configuration management.
At a high level it’ll provide simplified discovery, application, update and AIX configuration-verification properties across multiple systems. It’ll be particularly helpful in terms of cloning out changes to ‘pools’ of systems. After populating a profile into a file (XML), it can then be deployed to the other servers in the pool (see Figure 1).
AIX 5.2 and WPARs - AIX 7 will now provide the capability to run AIX 5.2 inside of a Workload Partition (WPAR). This will allow further IT consolidation and flexible deployment opportunities (such as moving up to the POWER7 architecture) for folks who are still on older AIX releases. It also allows you, in an easy way, to back up an existing environment and restore it inside an AIX 7 WPAR. Furthermore, you will be able to do this through IBM Systems Director's Workload Partitions Manager.
I’m particularly impressed with this feature. Most companies look to discontinue support for their older operating systems as soon as they can. On the other hand, IBM continues to listen to their customers and provide additional features to folks on older versions of their systems. For example, AIX 7 will also support older hardware, including POWER4 processor-based servers. While this type of compatibility is critical to those who want to take advantage of the feature/functionality improvements of AIX but can’t afford to upgrade their hardware, it should also be noted AIX 7 will include exploitation features that take full advantage of POWER7 processor-based servers. Additionally, AIX 7 will have full binary compatibility for application programs developed on prior versions of AIX—as long as these programs comply with reasonable programming standards.
AIX7 WPAR support
Besides adding AIX 5.2 support to WPAR's (workload partitions) AIX7 is also adding more virtual device support and security to the WPAR virtualization engine.
AIX 7 WPAR support will add Fibre Channel support, allowing a virtual (NPIV) or physical Fibre Channel adapter to be exported into a WPAR. Fibre Channel tape systems using the "atape" driver are also supported inside the WPAR in this configuration.
With the next releases of AIX, VIO SCSI disks are also supported in a WPAR in the same manner as Fibre Channel disks. This feature is available on both AIX V7.1 and AIX V6.1 with the 6100-06 Technology Level.
Trusted Kernel Extension Loading and Config from WPAR (AIX 7.1 Only)
AIX V7.1 provides the capability for a global administrator to export specific kernel extensions that a WPAR administrator can then load and configure from inside the WPAR.
Workload Management in AIX: WLM, DLPAR and now WPAR
Over the years, several methods of workload management have been developed as means to enhance the resource utilization of systems. Some might say that workload management is a form of performance management, but that is only true in the sense that performance management is really resource management. In this sense, workload management is the collection of services and resource-management applications used to monitor and regulate the resources a workload is permitted to use at any particular time.
Legacy UNIX systems had a very simple model of workload resource management. This was also known as sizing the box. Generally, the workload was the collection of all applications or processes running on the box. Various tools could be used – either system or application tools – to tune the application(s) to best fit the box. In other words, the amount of resources available to the workload was constant – whatever the box had.
With the release of AIX 4.3.3 in 1999, AIX included a new system component - AIX Workload Manager (WLM). This component made it possible to group applications and processes into a workload, or workload class. A workload class was given resource entitlement (CPU, memory, and, starting with AIX 5.1, local I/O) in terms of shares. If four (4) classes were active and a resource was saturated (being used 100%), AIX would compute a resource entitlement percentage based on the total active shares. If the four classes had 15, 30, 45, and 60 shares respectively (150 in total), the classes would be entitled to 10, 20, 30 and 40% of the resource concerned. As long as a resource was not constrained (less than 100% usage), WLM by default would not restrict a class's resource entitlement.
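To make the share arithmetic concrete, the four classes above could be defined with WLM's mkclass command, roughly as follows (a sketch; the class names are invented and only CPU shares are set):
# Four WLM classes holding 15, 30, 45 and 60 CPU shares (150 in total)
mkclass -c shares=15 classA
mkclass -c shares=30 classB
mkclass -c shares=45 classC
mkclass -c shares=60 classD
wlmcntrl             # start (or refresh) WLM with the current configuration
# When the CPU is saturated, the entitlements work out to 10%, 20%, 30% and 40%.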
The primary advantages of WLM are that it is included in the AIX base at no extra charge and that it is a software solution requiring no special hardware. However, performance specialists seem to have found it difficult to think in terms of performance management on a system which is regularly going to need more resources than it has. In other words, the legacy UNIX workload management model still dominates most system administrators' view of resource management.
Firmware Partitioning as Resource Management
In parallel with the development of WLM, a software solution for workload resource monitoring and control, came the practice of dividing a single system into several separate system definitions, commonly referred to as partitions: virtualization had arrived in UNIX. Unlike WLM, partitioning required specific hardware features. For AIX, partitioning was introduced with the POWER4-based p690 system.
Partitioning is a technique used to define multiple systems from a single system. A partition is allocated a specific amount of resources that it can use as it sees fit. Individual partitions' resources are isolated via firmware (logical partitions, or LPARs) or by the hardware component assembly (physical partitions, or PPARs).
Initially, resource assignment was static; to change the resource allocation, a halt and (re)activation of the partition was required. Starting with AIX 5.2, the acronym DLPAR (dynamic LPAR) was introduced. This enhancement enables dynamic resource allocation to a partition; that is, a partition can have its allocation of resources increased or decreased without a halt and reactivation of the system (i.e. the partition). With POWER5 the resource virtualization continued with the introduction of the firmware hypervisor, micro-partitions, virtual Ethernet and virtual SCSI.
The advantages of partitioning are the flexibility in the allocation of resources and the isolation guaranteed by the hypervisor firmware. However, partitioning requires specific hardware. Also, an administrator needs extra training to create and manage partition resources.
AIX 6.1 introduces Workload Partitions
A workload partition is a virtual system environment created using software tools. A workload partition is hosted by an AIX software environment. Applications and users working within the workload partition see it as if it were a regular system. Although it offers less isolation than a firmware-created and managed partition, a workload partition's processes, signals and even file systems are isolated from the hosting environment as well as from other workload partitions. Additionally, workload partitions can have their own users, groups and dedicated network addresses. Interprocess communication is limited to processes running within the workload partition.
AIX supports two kinds of Workload Partitions (WPARs).
A system WPAR is an environment that can best be compared to a stand-alone system. This WPAR runs its own services and does not share writeable file systems with any other WPAR or with the AIX hosting (global) system.
An application WPAR has all the process isolation a system WPAR has. Its defining characteristic is that it shares the file system name space with the global system and with the applications defined within the application WPAR environment.
Both types of WPARs can be configured for mobility, allowing running instances of a WPAR to be moved between physical systems or LPARs using the AIX Workload Partition Manager LPP.
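A minimal sketch of creating one of each (the WPAR names and the command run inside the application WPAR are invented for the example):
mkwpar -n syswpar1                        # define and build a system WPAR
startwpar syswpar1                        # boot the system WPAR
wparexec -n appwpar1 /usr/bin/sleep 60    # run a single command in an application WPAR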
Summary
With the addition of WPARs (workload partitions), AIX workload management gains an intermediate level of flexibility and isolation of applications, users and data. Using WLM, all processes share the same environment, with only CPU, memory and I/O resource allocation being managed when a resource is saturated. Firmware-based virtualization of partitions, starting with POWER4 hardware, provides both hard resource allocation levels and complete isolation of services, network addresses, devices, etc. from all other partitions. Workload partitions, or WPARs, are a software-based virtualization of partitions supporting a high degree of isolation and enhanced mobility on top of the supporting global systems.
Saturday, August 7, 2010
Powerpath CLI Commands
Command            Description
powermt            Manages a PowerPath environment
powercf            Configures PowerPath devices
emcpreg -install   Manages PowerPath license registration
emcpminor          Checks for free minor numbers
emcpupgrade        Converts PowerPath configuration files
powermt command
Command                            Description
powermt check                      Checks for, and optionally removes, dead paths.
powermt check_registration         Checks the state of the PowerPath license.
powermt config                     Configures logical devices as PowerPath devices.
powermt display                    Displays the state of HBAs configured for PowerPath
                                   (powermt watch is deprecated).
powermt display options            Displays the periodic autorestore setting.
powermt load                       Loads a PowerPath configuration.
powermt remove                     Removes a path from the PowerPath configuration.
powermt restore                    Tests and restores paths.
powermt save                       Saves a custom PowerPath configuration.
powermt set mode                   Sets paths to active or standby mode.
powermt set periodic_autorestore   Enables or disables periodic autorestore.
powermt set policy                 Changes the load balancing and failover policy.
powermt set priority               Sets the I/O priority.
powermt version                    Returns the PowerPath version for which powermt was created.
powermt command examples
powermt display: # powermt display paths class=all
# powermt display ports dev=all
# powermt display dev=all
powermt set: To disable an HBA from passing I/O: # powermt set mode=standby adapter=
To enable an HBA to pass I/O: # powermt set mode=active adapter=
To set or validate the Load balancing policy
To see the current load-balancing policy and I/Os run the following command
# powermt display dev=
so = Symmetrix Optimization (default)
co = Clariion Optimization
li = Least I/Os (queued)
lb = Least Blocks (queued)
rr = Round Robin (one path after another)
re = Request (failover only)
nr = No Redirect (no load-balancing or failover)
To set to no load balancing # powermt set policy=nr dev=
To set the policy to default Symmetrix Optimization # powermt set policy=so dev=
To set the policy to default Clariion Optimization # powermt set policy=co dev=
pprootdev
To bring the rootvg devices under powerpath control # pprootdev on
To bring back the rootvg disks back to hdisk control # pprootdev off
To temporarily bring the rootvg disks to hdisk control for running "bosboot" # pprootdev fix
powermt command examples with output
To validate the installation # powermt check_registration
Key B3P3-HB43-CFMR-Q2A6-MX9V-O9P3
Product: PowerPath
Capabilities: Symmetrix CLARiiON
To display each device's path, state, policy and average I/O information
# powermt display dev=emcpower6a
Pseudo name=emcpower6a
Symmetrix ID=000184503070
Logical device ID=0021
state=alive; policy=SymmOpt; priority=0; queued-IOs=0
---------------- Host --------------- - Stor - -- I/O Path - -- Stats ---
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
0 sbus@2,0/fcaw@2,0 c4t25d225s0 FA 13bA active dead 0 1
1 sbus@6,0/fcaw@1,0 c5t26d225s0 FA 4bA active alive 0 0
To show the paths and dead paths to the storage port
# powermt display paths
Symmetrix logical device count=20
----- Host Bus Adapters --------- ------ Storage System ----- - I/O Paths -
### HW Path ID Interface Total Dead
0 sbus@2,0/fcaw@2,0 000184503070 FA 13bA 20 20
1 sbus@6,0/fcaw@1,0 000184503070 FA 4bA 20 0
CLARiiON logical device count=0
----- Host Bus Adapters --------- ------ Storage System ----- - I/O Paths -
### HW Path ID Interface Total Dead
To display the storage ports information
# powermt display ports
Storage class = Symmetrix
----------- Storage System --------------- -- I/O Paths -- --- Stats ---
ID Interface Wt_Q Total Dead Q-IOs Errors
000184503070 FA 13bA 256 20 20 0 20
000184503070 FA 4bA 256 20 0 0 0
Storage class = CLARiiON
----------- Storage System --------------- -- I/O Paths -- --- Stats ---
ID Interface Wt_Q Total Dead Q-IOs Errors
Moving an LPAR to another frame
Steps for migrating an LPAR from one IBM frame to another
1. Have Storage zone the LPAR's disks to the new HBA(s). Also have them add an additional 40GB drive for the new boot disk. By doing this we have a back-out path to the old boot disk on the old frame.
2. Collect data from the current LPAR:
a. Network information - write down the IP and ipv4 alias(es) for each interface
b. Run "oslevel -r" - you will need this when setting up NIM for the mksysb recovery
c. Is the LPAR running AIO? If so, it will need to be configured after the mksysb recovery
d. Run "lspv" and save the output; it contains volume group and PVID information
e. Any other customizations you deem necessary
3. Create a mksysb backup of this LPAR
4. Reconfigure the NIM machine definition for this LPAR with the new Ethernet MAC address. The foolproof method is to remove the machine and re-create it.
5. In NIM, configure the LPAR for a mksysb recovery. Select the appropriate SPOT and LPP source, based on the "oslevel -r" data collected in step 2.
6. Shut down the LPAR on the old frame (Halt the LPAR)
7. Move network cables, fibre cables, disk and zoning, if needed, to the LPAR on the new frame
9. On the HMC, bring up the LPAR on the new frame in SMS mode and select a network boot. Verify the SMS profile has only a single HBA (if CLARiiON attached, zoned to a single SP), otherwise the recovery will fail with a 554.
10. Follow the prompts for building a new OS. Select the new 40GB drive for the boot disk (use the lspv info collected in step 2 to identify the correct 40GB drive). Leave the remaining questions at their defaults of NO (shrink file systems, recover devices, and import volume groups).
11. After the LPAR has booted, from the console (the network interface may be down):
a. lspv    Note the hdisk# of the boot disk
b. bootlist -m normal -o    Verify the boot list is set; if not, set it:
bootlist -m normal -o hdisk#
c. ifconfig en0 down    If the interface got configured, down it
d. ifconfig en0 detach    and remove it
e. lsdev -Cc adapter    Note the Ethernet interfaces (e.g. ent0, ent1)
f. rmdev -dl    (remove each of the en/ent devices noted above)
g. rmdev -dl
h. cfgmgr    Will rediscover the en/ent devices
i. chdev -l    If running GIG, leave defaults
j. Configure the network interfaces and aliases using the info recorded in step 2: mktcpip -h
chdev -l en# -a alias4=
k. Verify that the network is working.
12. If the LPAR was running AIO (data collected in step 2), verify it is running (smitty aio)
13. Check for any other customizations which may have been made on this LPAR
14. Vary on the volume groups, using the "lspv" data collected in step 2 to identify by PVID one hdisk in each volume group. Run the following for each volume group (see the consolidated sketch after this list):
a. importvg -y
b. varyonvg
c. mount all    Verify the mounts are good
15. Verify paging space is configured appropriately
a. lsps -a    Look for Active and Auto set to yes
b. chps -ay pagingXX    Run for each paging space; sets Auto
c. swapon /dev/pagingXX    Run for each paging space; sets Active
16. Verify the LPAR is running 64-bit
a. bootinfo -K    If 64, you are good
b. ln -sf /usr/lib/boot/unix_64 /unix    If 32, change to run 64-bit
c. ln -sf /usr/lib/boot/unix_64 /usr/lib/boot/unix
d. bosboot -ak /usr/lib/boot/unix_64
17. If the LPAR has PowerPath:
a. Run "powermt config"    Creates the powerpath0 device
b. Run "pprootdev on"    Sets PowerPath control of the boot disk
c. If CLARiiON, make configuration changes to enable SP failover:
chdev -l powerpath0 -Pa QueueDepthAdj=1
chdev -l fcsX -Pa num_cmd_elems=2048    For each fibre adapter
chdev -l fscsiX -Pa fc_err_recov=fast_fail    For each fibre adapter
d. Halt the LPAR
e. Activate the Normal profile    If Sym/DMX, verify two HBAs are in the profile
f. If CLARiiON attached, have Storage add a zone to the 2nd SP
i. Run cfgmgr    Configures the 2nd set of disks
g. Run "pprootdev fix"    Puts the rootdisk PVIDs back on the hdisks
h. lspv | grep rootvg    Get the boot disk hdisk#
i. bootlist -m normal -o hdisk# hdisk#    Set the boot list with both hdisks
20. From the HMC, remove the LPAR profile from the old frame
21. Pull cables from the old LPAR (Ethernet and fiber), deactivate patch panel ports
22. Update documentation, Server Master, AIX Hardware spreadsheet, Patch Panel spreadsheet
23. Return the old boot disk to storage.
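For reference, here is a rough consolidated sketch of the volume group and paging space work in steps 14 and 15; the volume group name, hdisk number and paging space names are placeholders, not values from this procedure:
# importvg -y datavg hdisk5    Import each volume group by a PVID-matched hdisk
# varyonvg datavg    Vary on the imported volume group
# mount all    Mount the file systems and verify the mounts are good
# lsps -a    Check that Active and Auto are set to yes
# chps -a y paging00    Run for each paging space; sets Auto
# swapon /dev/paging00    Run for each paging space; sets Active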
Unique VLAN ID for SEA failover control channel setup
Always select a unique VLAN ID, one that does not exist anywhere on your organization's network, to avoid conflicts when setting up dual VIOS with a control channel for SEA failover. Failure to follow this may result in a network storm. (Very important, and I couldn't find any note on the IBM site about it.)
Requirements for Configuring SEA Failover
One SEA on one VIOS acts as the primary (active) adapter and the second SEA on the second VIOS acts as a backup (standby) adapter.
Each SEA must have at least one virtual Ethernet adapter with the “Access external network” flag (previously known as “trunk” flag) checked. This enables the SEA to provide bridging functionality between the two VIO servers.
This adapter on both the SEAs has the same PVID, but will have a different priority value.
An SEA in ha_mode (failover mode) might have more than one trunk adapter, in which case all of them should have the same priority value.
The priority value defines which of the two SEAs will be the primary and which will be the backup. The lower the priority value, the higher the priority, e.g. an adapter with priority 1 will have the highest priority.
An additional virtual Ethernet adapter, which belongs to a unique VLAN on the system, is used to create the control channel between the SEAs, and must be specified on each SEA when it is configured in ha_mode.
The purpose of this control channel is to communicate between the two SEA adapters to determine when a fail over should take place.
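As a hedged illustration, on each VIOS the SEA with failover support is typically created with a command along these lines; the adapter names (ent0 physical, ent4 trunk, ent5 control channel) and the default PVID of 1 are example assumptions:
$ mkvdev -sea ent0 -vadapter ent4 -default ent4 -defaultid 1 -attr ha_mode=auto ctl_chan=ent5
The trunk priority (for example 1 on the primary VIOS and 2 on the standby) is set on the trunk virtual Ethernet adapter in each partition profile, not in the mkvdev command itself.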
Upgrading PowerPath in a dual VIO server environment
When upgrading PowerPath in a dual Virtual I/O (VIO) server environment, the virtual target devices need to be unconfigured, rather than removed, so that the existing mapping information is maintained.
To upgrade PowerPath in a dual VIO server environment:
1. On one of the VIO servers, run lsmap -all.
This command displays the mapping between physical, logical,
and virtual devices.
$ lsmap -all
SVSA            Physloc                        Client Partition ID
--------------- ------------------------------ -------------------
vhost1          U8203.E4A.10B9141-V1-C30       0x00000000
VTD             vtscsi1
Status          Available
LUN             0x8100000000000000
Backing device  hdiskpower5
Physloc         U789C.001.DQD0564-P1-C2-T1-L67
2. Log in to the same VIO server as the padmin user.
3. Unconfigure the PowerPath pseudo devices listed in step 1 by running:
rmdev -dev <VTD name> -ucfg
where <VTD name> is the virtual target device shown by lsmap in step 1.
For example: rmdev -dev vtscsi1 -ucfg
The VTD status changes to Defined.
Note: Run rmdev -dev <VTD name> -ucfg for each VTD listed in step 1.
4. Upgrade PowerPath
=======================================================================
1. Close all applications that use PowerPath devices, and vary off all
volume groups except the root volume group (rootvg).
In a CLARiiON environment, if the Navisphere Host Agent is
running, type:
/etc/rc.agent stop
2. Optional. Run powermt save in PowerPath 4.x to save the
changes made in the configuration file.
Run powermt config.
5. Optional. Run powermt load to load the previously saved
configuration file.
When upgrading from PowerPath 4.x to PowerPath 5.3, an error
message is displayed after running powermt load, due to
differences in the PowerPath architecture. This is an expected
result and the error message can be ignored.
Even if the command succeeds in updating the saved
configuration, the following error message is displayed by
running powermt load:
host1a 5300-08-01-0819:/ #powermt load Error loading auto-restore value
Warning:Error occurred loading saved driver state from file /etc/powermt.custom
host1a 5300-08-01-0819:/ #powermt load Error loading auto-restore value
Warning:Error occurred loading saved driver state from file /etc/powermt.custom
…
Loading continues…
Error loading auto-restore value
When you upgrade from an unlicensed to a licensed version of
PowerPath, the load balancing and failover device policy is set to
bf/nr (BasicFailover/NoRedirect). You can change the policy by
using the powermt set policy command.
=======================================================================
5. Run powermt config.
6. Log in as the padmin user and then configure the VTDs unconfigured in step 3 by running:
cfgdev -dev <VTD name>
where <VTD name> is the virtual target device that was unconfigured in step 3.
For example: cfgdev -dev vtscsi1
The VTD status changes to Available.
Note: Run cfgdev -dev <VTD name> for each VTD unconfigured in step 3.
7. Run lspath -h on all clients to verify all paths are Available.
8. Perform steps 1 through 7 on the second VIO server.
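For quick reference, a condensed, hedged sketch of the cycle above on one VIOS, assuming a single VTD named vtscsi1 (your VTD names will differ):
$ lsmap -all    List the VTDs and their PowerPath backing devices
$ rmdev -dev vtscsi1 -ucfg    Unconfigure the VTD; its status changes to Defined
(upgrade PowerPath as described in the boxed procedure above)
$ cfgdev -dev vtscsi1    Reconfigure the VTD; its status returns to Available
Then verify the paths on the client partitions and repeat on the second VIO server.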
Recovering emc dead path
If # powermt display dev=all shows "dead" paths, these are the commands to run in order to set those paths back to "alive" again, of course AFTER ensuring that any SAN related issues are resolved. To have PowerPath scan all devices and mark any dead devices as alive, if it finds that a device is in fact capable of doing I/O commands, run:
# powermt restore
To delete any dead paths, and to reconfigure them again:
# powermt reset
# powermt config
Or you could run:
# powermt check
EMC - MPIO
You can run into an issue with EMC storage on AIX systems using MPIO (no PowerPath) for your boot disks: after installing the ODM_DEFINITIONS of EMC Symmetrix on your client system, the system won't boot any more and will hang with LED 554 (unable to find the boot disk). The boot hang (LED 554) is not caused by the EMC ODM package itself, but by the boot process not detecting a path to the boot disk if the first MPIO path does not correspond to the fscsiX driver instance where all hdisks are configured. Let me explain that in more detail. Let's say we have an AIX system with four HBAs configured in the following order:
# lscfg -v | grep fcs
fcs2 (wwn 71ca) -> no devices configured behind this fscsi2 driver instance (path only configured in the CuPath ODM table)
fcs3 (wwn 71cb) -> no devices configured behind this fscsi3 driver instance (path only configured in the CuPath ODM table)
fcs0 (wwn 71e4) -> no devices configured behind this fscsi0 driver instance (path only configured in the CuPath ODM table)
fcs1 (wwn 71e5) -> ALL devices configured behind this fscsi1 driver instance
Looking at the MPIO path configuration, here is what we have for the rootvg disk:
# lspath -l hdisk2 -H -F"name parent path_id connection status"
name   parent path_id connection                      status
hdisk2 fscsi0 0       5006048452a83987,33000000000000 Enabled
hdisk2 fscsi1 1       5006048c52a83998,33000000000000 Enabled
hdisk2 fscsi2 2       5006048452a83986,33000000000000 Enabled
hdisk2 fscsi3 3       5006048c52a83999,33000000000000 Enabled
The fscsi1 driver instance is the second path (path_id 1), so remove the other 3 paths, keeping only the path corresponding to fscsi1:
# rmpath -l hdisk2 -p fscsi0 -d
# rmpath -l hdisk2 -p fscsi2 -d
# rmpath -l hdisk2 -p fscsi3 -d
# lspath -l hdisk2 -H -F"name parent path_id connection status"
Afterwards, do a savebase to update the boot logical volume hd5, set the bootlist to hdisk2 and reboot the host. It will come up successfully, with no more LED 554 hang.
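A minimal sketch of that savebase / bootlist / reboot sequence, assuming hdisk2 is the boot disk:
# savebase -v    Update the boot logical volume (hd5) with the current ODM data
# bootlist -m normal hdisk2    Point the normal-mode boot list at hdisk2
# shutdown -Fr    Reboot the host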
When checking the status of the rootvg disk afterwards, a new hdisk10 has been configured with the correct ODM definitions, as shown below:
# lspv
hdisk10  0003027f7f7ca7e2  rootvg  active
# lsdev -Cc disk
hdisk2   Defined    00-09-01  MPIO Other FC SCSI Disk Drive
hdisk10  Available  00-08-01  EMC Symmetrix FCP MPIO Raid6
To summarize, it is recommended to set up ONLY ONE path when installing AIX to a SAN disk, then install the EMC ODM package, reboot the host and, only after that is complete, add the other paths. By doing that we ensure that the fscsiX driver instance used for the boot process has the hdisks configured behind it.