Santosh Gupta's passion for AIX<br />
AIX is short for Advanced Interactive eXecutive.
AIX is IBM's UNIX operating system for the RS/6000, pSeries and the latest p5 and p5+ systems, currently marketed as System p. In AIX 5L, the "5L" stands for version 5 with Linux affinity. AIX and the RS/6000 were launched on February 14, 1990, in London.
Currently, the latest release of AIX is version 6. The AIX 7 beta is due in August 2010, alongside the new POWER7 hardware range.
<h3>
Tools for Documentation</h3>
<br />
<center>
<form action="http://www.google.com/custom" method="get" target="google_window">
<table bgcolor="#ffffff">
<tbody>
<tr><td align="left" height="32" nowrap="nowrap" valign="top">
<h3>
PowerHA Tools</h3>
The original question was PowerHA specific, so let’s start with the <a href="http://www-01.ibm.com/support/knowledgecenter/SSPHQG_7.1.0/com.ibm.powerha.admngd/ha_admin_save_restore_configuration.htm" target="_blank">PowerHA snapshot tool</a>. The cluster snapshot tool lets you save and restore cluster configurations by saving to a file a record of all the data that defines a particular cluster configuration.<br />
Then, you can recreate a particular cluster configuration, provided the cluster is configured with the requisite hardware and software to support the configuration. This snapshot tool can also make remote problem determination easier because the snapshots are simple ASCII files that can be sent via e-mail.<br />
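As a rough sketch (the snapshot name and description here are illustrative, and clmgr syntax varies by PowerHA level), taking and inspecting a snapshot looks something like this:

```shell
# Take a named cluster snapshot (name/description are made up for illustration)
clmgr add snapshot pre_change_snap DESCRIPTION="config before SAN change"

# Snapshots are plain ASCII files, so they can be mailed to support as-is
ls /usr/es/sbin/cluster/snapshots
```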
You can also use the PowerHA-specific <a href="http://www.abderra.webspace.virginmedia.com/HA/qha2.html" target="_blank">qha and qcaa scripts</a>. These are real-time tools for use against running systems rather than documentation deliverables, but they’re still valuable. Alex Abderrazag has provided a nice script to help you understand cluster manager internal states.<br />
<h3>
HMC Scanner</h3>
When it comes to documenting the way my servers have been configured, I like to <a href="http://ibmsystemsmag.com/Blogs/AIXchange/Archive/using-the-hmc-scanner/" target="_blank">use HMC Scanner</a>. HMC Scanner gives you a nice summary spreadsheet with almost anything you want to know about your environment, including serial numbers, how much memory and CPU are free on your frame, how each LPAR is configured, information on VLANs and WWNs, and much more. I did a video on running <a href="http://www.youtube.com/watch?v=5YxOgS8uhOo" target="_blank">HMC Scanner</a>, and IBM’s Nigel Griffiths has also posted a video on <a href="http://www.youtube.com/watch?v=YMekBGhBh2E" target="_blank">HMC Scanner for Power Systems</a>. HMC Scanner works for AIX, IBM i, Power Linux and VIOS LPARs/VMs.<br />
<h3>
System Planning Tool</h3>
I also like to use the IBM System Planning Tool (SPT), which I blogged about in “<a href="http://ibmsystemsmag.com/Blogs/AIXchange/Archive/configuring-you/" target="_blank">Configuring Your Machine Before it Arrives</a>” and which you can find on the <a href="http://www-947.ibm.com/systems/support/tools/systemplanningtool/" target="_blank">IBM support tools website</a>. The SPT provides nice pictures of the machines showing which slots are populated and assigned to which LPARs.<br />
If you’re comfortable with the command line, you can manipulate sysplans with the following commands, which may be easier than going into the GUI to do the same functions:<br />
<pre>lssysplan    # list the sysplans stored on the HMC
rmsysplan    # remove a sysplan from the HMC
mksysplan    # create a sysplan from a managed system's current configuration
cpsysplan    # copy a sysplan between the HMC and remote media
</pre>
<h3>
viosbr</h3>
For VIO server-specific documentation, I like to use <a href="http://ibmsystemsmag.com/blogs/aixchange/archive/backing-up-vios/" target="_blank">viosbr</a>. After you’ve taken a backup, run:<br />
<pre>viosbr -view -file backup_file_name
</pre>
This provides a lot of information to document the setup of your VIO server. It will show your controllers, physical volumes, optical devices, tape devices, Ethernet interfaces, IP addresses, hostnames, storage pools, optical repository information, ether channel adapters, shared Ethernet adapters, and more.<br />
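A minimal viosbr workflow might look like the following (run as padmin; the file names are illustrative):

```shell
# Back up the VIOS virtual and logical configuration to a file
viosbr -backup -file /home/padmin/cfgbackups/vios1

# Later, list everything the backup captured: adapters, SEAs, storage
# pools, mappings and so on
viosbr -view -file /home/padmin/cfgbackups/vios1.tar.gz
```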
<h3>
snap -e</h3>
AIX-specific commands <a href="http://www-01.ibm.com/support/knowledgecenter/ssw_aix_61/com.ibm.aix.cmds5/snap.htm" target="_blank">include snap -e</a>, which lets you gather a great deal of system information and run custom scripts to include other information with your snap. This tool is often run in conjunction with IBM support to collect the information needed to help resolve issues with your machine.<br />
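For example, a typical collection for support might look like this (output lands under /tmp/ibmsupt by default):

```shell
snap -a    # gather data from all subsystems into /tmp/ibmsupt
snap -c    # package the collection into snap.pax.Z for upload to support
```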
<h3>
prtconf</h3>
Another worthwhile command is prtconf. This command gives you information like model number, serial number, processor mode, firmware levels, clock speed, network information, volume group information, installed hardware, and more.<br />
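A few quick invocations (flags per the AIX prtconf documentation):

```shell
prtconf | head -20   # model, serial number, firmware and CPU/memory summary
prtconf -s           # processor clock speed only
prtconf -m           # memory size only
```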
<h3>
IBM i Options</h3>
For IBM i, the <a href="http://wiki.midrange.com/index.php/HMC_Operations" target="_blank">midrange wiki</a> has good information about different methods you can use to gather data, including how to print a rack config from a non-LPAR system:<br />
<ol>
<li>Sign on to IBM i with an appropriate userid</li>
<li>On a command line, perform command STRSST</li>
<li>Select option 1, Start a service tool</li>
<li>Select option 7, Hardware service manager</li>
<li>F6 to Print Configuration</li>
<li>Take the defaults on Print Format Options (use 132 columns)</li>
</ol>
<h3>
HMC</h3>
In the new HMC GUI, you can select your managed server and then Manage PowerVM to see your virtual networks, virtual storage, virtualized I/O, and more. This information can also be helpful in documenting your environment.<br />
<h3>
Self-Documenting Tools</h3>
I find there’s value in having systems that can “self-document” via scripts and tools, compared with administrators creating spreadsheets that may or may not be updated as changes occur. Some might find that self-documenting tools don’t provide the correct information, which leaves us with the question of whether it’s better to have no documentation or wrong documentation when you’re working on a system.<br />
Self-documenting tools are a starting point. Whatever documentation you have on hand, take the time to double-check what the actual running system looks like compared to what you think it looks like. By not assuming anything about your running systems, you can avoid creating additional problems and outages because reality didn’t match what the documentation said.<br />
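As one minimal sketch of a self-documenting cron job (the output path and the command list are illustrative assumptions, not a recommendation):

```shell
#!/bin/sh
# Snapshot key configuration daily so drift from the "official"
# documentation is visible; dated copies make comparison easy.
DIR=/var/adm/sysdoc
OUT=$DIR/$(hostname).$(date +%Y%m%d).txt
mkdir -p "$DIR"
{
  prtconf        # hardware summary
  lsdev -C       # configured devices
  lsvg -o        # active volume groups
  ifconfig -a    # network interfaces
} > "$OUT" 2>&1
```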
<h3>
Many Different Documentation Tools</h3>
From the frame, to the OS, to the VIOS, to the HMC, there are many different pieces of your infrastructure to keep an eye on and many different tools you can use to document your environment. I’m sure readers use many other tools, and I’d be interested in hearing about those. Please weigh in with a comment.</td></tr>
</tbody></table>
</form>
</center>
<h3>
Improve Memory Utilization With the PowerVM AMS Feature</h3>
<br />
<center>
<form action="http://www.google.com/custom" method="get" target="google_window">
<table bgcolor="#ffffff">
<tbody>
<tr><td align="left" height="32" nowrap="nowrap" valign="top">
The PowerVM feature, Active Memory Sharing (AMS), helps Power Systems address peak memory demands and improve overall memory utilization at the server level. Based on a concept similar to the shared processor pool, AMS starts with a pool of physical memory and then shares it among a group of LPARs. Each LPAR has a desired memory setting that specifies the amount of memory it’s permitted to use. The aggregate of the desired memory settings is permitted to exceed the physical size of the pool, which overcommits the physical memory pool from a design perspective.<br />
The main idea is to move memory to and from LPARs to satisfy their requirements. Similar to the way AIX operates, if the demand for memory exceeds the physical memory available, paging devices on the virtual I/O server (VIOS) become active. Memory sharing has different operational dynamics than processor sharing. With processor sharing, the processor state for an idle LPAR can be quickly saved, freeing it for another LPAR to use. However, the give and take of memory between LPARs isn’t as fluid as processor sharing with the shared processor pool. AMS is included with PowerVM Enterprise Edition and can be implemented on POWER6 and POWER7 servers. It’s supported with AIX V6.1 or higher and IBM i 6.1.<br />
<h3>
Implementing AMS</h3>
Finding situations to implement AMS can be somewhat challenging, primarily because:<br />
<ol>
<li>For most applications, you want to avoid page-in activity. If systems begin moving memory pages back into physical memory, the application is likely to suffer a noticeable degradation in performance.</li>
<li>Many applications aren’t well behaved and don’t free up unused memory blocks. This results in a situation where memory is allocated to an LPAR but might not be in active use. AIX (or IBM i) and the hypervisor must work together to determine which memory blocks should be paged out to the VIOS paging device.</li>
</ol>
Here are two use cases that address these challenges. They’re good candidates for AMS because the shared memory pool LPARs are predominantly “quiet,” just waiting to ramp up to run production workloads. AMS provides a fast, automated method to move physical memory to the LPARs in need.<br />
<h3>
AMS for High Availability Clusters</h3>
Use of AMS can simplify the configuration and operation of PowerHA high-availability clusters, especially those with a large number of servers being clustered to a single recovery server. For example, you might have a PowerHA cluster with eight active servers and one hot standby server. Each active server has one 32GB production LPAR with a corresponding cluster partner LPAR on the standby server. The cluster partner LPARs only require 2GB of memory while in standby mode and 32GB of memory when a failover is active. It’s assumed that the standby server will be configured to support just one failover at a time.<br />
Without AMS, a typical design would assign 2GB of memory to each of the recovery LPARs with 30GB of unallocated memory available for an active failover. Total memory required on the standby server would be 46GB [(8 x 2GB) + 30GB = 46GB]. During a cluster failover for one production LPAR, an additional 30GB of memory would need to move to its cluster partner on the standby server. This could be accomplished with a scripted or manual dynamic LPAR (DLPAR) operation at the Hardware Management Console (HMC).<br />
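The standby-server sizing above can be checked with simple shell arithmetic (the numbers come from this example, not a general formula):

```shell
# Without AMS: 8 standby LPARs at 2 GB each, plus 30 GB held back
# for a single active failover.
active_lpars=8
standby_gb=2
failover_extra_gb=30
total=$(( active_lpars * standby_gb + failover_extra_gb ))
echo "${total} GB required on the standby server"   # 46 GB
```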
Implementing AMS on the standby server, however, can simplify and automate this process. With AMS, a 48GB shared memory pool would be configured. Each cluster partner LPAR would be set up to have access to 32GB of memory from the pool. During a failover event, the corresponding cluster partner LPAR would automatically make use of up to 30GB of additional physical memory from the pool. There should be no delay in acquiring this memory because it doesn’t need to be freed from the other LPARs. After the production workload has been shifted back to the production server, the additional 30GB of memory is no longer needed on the cluster partner. To prevent that 30GB from remaining with this LPAR, it could be shut down and reactivated. The LPAR would reboot and resume operation with just the 2GB of memory required to sustain standby operations. This would leave 30GB of unallocated memory in the shared memory pool, ready for use by any cluster partner LPAR.<br />
<h3>
AMS and Disaster Recovery</h3>
Using AMS as part of a disaster recovery (DR) architecture has a similar theme. Assume you have a collection of DR recovery LPARs with minimal workload under normal operation. This time, a DR event causes an AMS-based recovery LPAR to make use of additional memory. When the DR event occurs, the workload on the corresponding recovery LPAR increases, and memory is automatically shifted to that LPAR from the shared memory pool.<br />
Using AMS for DR would work best if the collection of recovery LPARs on a single server were composed of a mix from several physical production locations or, at least, different physical servers. If all recovery LPARs on a single server were to become active at the same time, an overcommit situation would likely result in undesired paging. Another option would be to configure development and test LPARs along with the recovery LPARs at the DR site. The recovery, development and test LPARs would share a single shared memory pool. During a large-scale DR event, the development and test LPARs could be shut down, automatically freeing up memory in the pool for use by the recovery LPARs.<br />
<h3>
Potential Savings</h3>
With some creative thinking, you may find a good use for AMS in your Power Systems environment. This will position you to take advantage of the flexibility and potential cost savings available with AMS.<br />
<br /></td></tr>
</tbody></table>
</form>
</center>
<h3>
Manage AIX Workloads More Effectively Using WLM</h3>
<br />
<center>
<form action="http://www.google.com/custom" method="get" target="google_window">
<table bgcolor="#ffffff">
<tbody>
<tr><td align="left" height="32" nowrap="nowrap" valign="top">
Since AIX V4.3.3, the free, built-in offering called Workload Manager (WLM) has allowed AIX administrators to consolidate workloads into one OS instance. WLM manages heterogeneous workloads, providing granular control of system CPU, real memory and disk I/O. It does this using percentages of resources and a combination of classes, shares and tiers to manage CPU time, memory and I/O bandwidth. WLM is integrated with the AIX kernel including the scheduler, Virtual Memory Manager and disk device drivers—running in either active or passive modes. Effectively a resource manager, WLM tracks:<br />
<ul>
<li>The sum of all CPU cycles consumed by every thread in the class</li>
<li>The physical memory utilization of the processes in each class by looking at the sum of all memory pages belonging to the processes, and</li>
<li>The disk I/O bandwidth in 512-byte blocks per second for all I/O started by threads in the class</li>
</ul>
A combination of targets and limits can be used to determine how resources are allocated. Targets are basically shares of available resources and range from 1 to 65,535, whereas limits are percentages of available resources with a minimum, soft maximum and hard maximum setting. Limits take priority over shares so, normally, only limits get defined. Hard limits have precedence, then tiers, soft limits and, finally, shares.<br />
<h3>
Tiers</h3>
WLM uses tiers to define class importance relative to other classes. You can define up to 10 tiers (0-9) to prioritize classes, with 0 being most important and 9 least important. WLM assigns resources to the highest tier process that’s ready to run. Processes default to tier 0, but it’s common to assign batch or less important workloads to tier 1 to prioritize online response.<br />
<h3>
Classes</h3>
A class is a set of processes with a single set of resource limits. An individual class is either a superclass or a subclass. Resource shares and limits are assigned to a superclass based on the total resources in the system. Subclasses can be defined to further divide a superclass’s assigned resources among its assigned jobs.<br />
Five predefined superclasses exist by default, and you can add up to 27 user-defined superclasses. Each superclass can have 12 subclasses: two predefined and 10 user-defined. Each class is assigned a name with a maximum of 16 characters. Superclass names must be unique, and subclass names must be unique within their assigned superclass.<br />
<h3>
Classification</h3>
The files necessary to classify and describe classes and tiers live in the /etc/wlm directory, beneath which a directory is created for each configuration to be used. For example, if /etc/wlm/prodsys95 holds the production system’s 9-to-5 definitions, an additional directory could be created for use outside those hours, and a simple cron job could switch between the two definition sets. Classification requires several files, including:<br />
<ul>
<li>Classes list each class, its description, the tier to which it belongs and other class attributes.</li>
<li>Rules are where control is exerted over the class resources.</li>
<li>Limits and shares define resource limits and shares, respectively.</li>
<li>The optional description file includes a definition of the classes.</li>
</ul>
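As an illustration (the class names, paths and the batch application path are made up), a classes file and a matching rules file under /etc/wlm/prodsys might look like:

```
/etc/wlm/prodsys/classes:

db:
        description = "Database workload"
        tier = 0

batch:
        description = "Overnight batch"
        tier = 1

/etc/wlm/prodsys/rules (first match wins, so specific rules go on top):

* class  resvd  user  group  application            type  tag
db       -      -     dba    -                      -     -
batch    -      -     -      /usr/local/bin/batch*  -     -
```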
Threads are assigned to a class based on class rules. This can be done automatically using a rules file or manually by a superuser. Each class assigns minimum and maximum amounts for CPU, memory and I/O throughput. To correctly assign a process to a class, WLM goes through process identification, analyzing the process’s attributes to see how it matches up with the class definitions. Processes can be identified and classified by owner or group ID, the full application path and name, the process type or a series of application tags. WLM assigns each class a set of resource shares and limits. Additionally classes can be assigned to tiers to further group and prioritize classes.<br />
WLM reads the rules file from top to bottom and assigns a process to the first matching rule, which makes it important to list rules from specific (top) to more general (bottom). Processes are generally assigned to a class based on the user ID, group or fully qualified path and application name. Type and tag fields can also be used. Type can be 32bit, 64bit, “plock” (the process is locked to pin memory) or fixed (a fixed-priority process).<br />
<h3>
What WLM Does</h3>
For CPU, WLM gathers utilization for all threads in each class 10 times per second in AIX V5.3 and beyond. It then produces a time-delayed average for CPU for each class. That average is compared against tier values, class targets and class limits, and a number is produced that results in either favoring or penalizing each thread being dispatched. WLM also monitors memory 10 times per second and enforces memory limits using least recently used (LRU) algorithms. As of AIX V5.3, TL05, it’s possible to set hard memory limits. However, if that limit is reached, the LRU daemon will start stealing pages from the class—even if memory pages are free. For I/O, WLM enforces limits by delaying I/O when the limit is reached.<br />
<h3>
Activation and Monitoring</h3>
WLM can be started in one of two modes: passive or active. Always start with passive mode so the effects can be monitored without WLM actively managing resources. Do this using the command <strong>/usr/sbin/wlmcntrl -d prodsys -p</strong>, where prodsys is the directory containing the configuration files. Once the configuration has been proofed and monitored, WLM can be started in active mode by leaving off the -p.<br />
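Sketched as a sequence (prodsys is the example configuration directory used here):

```shell
/usr/sbin/wlmcntrl -d prodsys -p   # start WLM in passive mode (monitor only)
wlmstat 2                          # watch per-class CPU, memory and I/O every 2 seconds
/usr/sbin/wlmcntrl -d prodsys      # restart in active mode once the rules look right
/usr/sbin/wlmcntrl -o              # stop WLM entirely
```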
<h3>
Steps to Implementation</h3>
The first, most critical step is to design the classification criteria. Do so by evaluating workloads and determining how tiers and classes will be broken down, along with what limits should be applied to them. The second step is defining the class, limits, shares and rules files, and starting WLM in passive mode. Then refine the definitions before activating them. Once everything is tested, the final step is to restart WLM in active mode so it not only monitors—but also manages—the system.<br />
WLM enables the consolidation of workloads into one AIX instance while ensuring that applications get the percentage of resources required to provide the performance necessary for success. This allows workloads to be combined into the same OS instance to take better advantage of system resources. It’s worth the effort to classify workloads running on your systems now, even if WLM only ever runs in passive mode. This provides an additional means of monitoring what’s happening as well as a potential management tool, should the need arise.<br />
<h3>
Commands for WLM</h3>
<strong>acctcom</strong><br /> Updated with a -w flag to list WLM information and a -c flag to list only specific classes. Many other commands, such as <strong>ps</strong>, were also updated.<br />
<strong>nmon</strong><br /> nmon has been updated to gather WLM statistics (use the -w flag).<br />
<strong>nmon analyzer</strong><br /> The analyzer reports on WLM statistics, adding data in the BBBP tab and three new tabs that contain WLM information: WLMBIO, WLMCPU and WLMMEM.<br />
<strong>ps -ae -o pid,user,class,pcpu,tag,thcount,vsz,wchan,args</strong><br /> This command provides a list of processes that includes WLM class information.</td></tr>
</tbody></table>
</form>
</center>
<h3>
Improve Your Operations with New CoD Features</h3>
<br />
<center>
<form action="http://www.google.com/custom" method="get" target="google_window">
<table bgcolor="#ffffff">
<tbody>
<tr><td align="left" height="32" nowrap="nowrap" valign="top">
IBM recently enhanced its Capacity on Demand (CoD) offerings for the new POWER7+ 770 and 780 servers and updated POWER7 795 server. These enhancements provide new operational features and financial benefits. A closer look might change your perspective on CoD from a usage model for special circumstances to one for mainstream, day-to-day use.<br />
Enterprise class Power Systems servers can be configured with CoD processor cores and memory that are readily available when needed. CoD resources can be enabled with:<br />
<ol>
<li>Capacity Upgrade on Demand (CUoD permanent activation)</li>
<li>Elastic CoD</li>
<li>Utility CoD (cores only)</li>
<li>Trial CoD</li>
</ol>
<h3>
Elastic CoD</h3>
For this article, I’ll focus on the recent enhancements to Elastic CoD, which is a new name for what was previously called On/Off CoD. This name change is meant to emphasize the new features being offered.<br />
Like On/Off CoD, Elastic CoD allows you to activate CoD resources in 1 processor core and 1 GB memory increments. Processor cores and memory can be activated independently of one another. To get started, Elastic CoD enablement keys must be entered at the Hardware Management Console (HMC). When these resources are needed, the desired amounts of processor cores or memory are activated via the HMC. Resource use must be reported to IBM monthly and is billed quarterly. The monthly reporting requirement can be automated through an HMC connected to IBM via Electronic Service Agent.<br />
With the previous On/Off CoD implementation, the enablement keys permitted the use of up to 360 processor days and 999 GB memory days. Upon reaching one of these limits, a new enablement key had to be ordered. Some customers found themselves in a situation where they had to frequently re-order enablement keys. For example, if you were to activate 16 cores for a temporary project, 360 processor days would be consumed in just 22 days. Likewise, if you activated 64 GB of memory, it would take just 15 days to reach the 999 GB memory days permitted.<br />
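The consumption rates above are just integer division:

```shell
# Full days until the old On/Off enablement limits are exhausted
cores_active=16
gb_active=64
echo $(( 360 / cores_active ))   # 22 days of the 360 processor-day limit
echo $(( 999 / gb_active ))      # 15 days of the 999 GB-day limit
```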
A new 90-day Elastic CoD offering provides the ability to activate all CoD processor cores and memory for a full 90 days per enablement key. This eliminates the need for heavy users of CoD resources to frequently reorder enablement keys.<br />
<br /><br />
<h3>
Credits</h3>
Elastic CoD credits are a new feature available with the purchase of new 780+ and 795 servers. As part of the server purchase, IBM will provide a quantity of no-charge, temporary processor core days and GB memory days to be used at your discretion. Potential uses include: addressing processing peaks, setting up temporary virtual servers, workload balancing, or disaster recovery (DR). The CoD credits are immediately available whenever you need them. There’s no need to submit a request for temporary activation keys. Here are some highlights of the Elastic CoD credits offering:<br />
<ul>
<li>Credits are included with the original purchase of a new Model 780 (9179-MHD) or Model 795 (9119-FHB) server.</li>
<li>The servers must be running system firmware level 7.6 or higher.</li>
<li>A credit of 15 Elastic On/Off processor days and 240 GB memory days will be provided for each installed processor core that’s purchased. Both active and CoD processor cores qualify as installed processors. The credit is held “on-account” by IBM.</li>
<li>To acquire and make use of the credits, a temporary CoD contract must be in place with IBM. This contract has two primary requirements: Elastic CoD usage must be reported to IBM monthly, and usage will be reconciled quarterly. Once all credits have been consumed, an invoice will be generated for additional use.</li>
</ul>
Here’s an example for a Model 780 system ordered with 32 processor cores installed and 16 processor core activations. The credits provided are:<br />
<ul>
<li>32 installed cores times 15 core days per core equals 480 available processor days</li>
<li>32 installed cores times 240 GB memory days per core equals 7,680 GB memory days</li>
</ul>
Note that the 16 processor-core activations are not relevant to the amount of credits provided.<br />
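The credit math depends only on the number of installed cores:

```shell
# Model 780 with 32 installed cores (activations don't affect credits)
installed_cores=32
echo $(( installed_cores * 15 ))    # 480 processor days
echo $(( installed_cores * 240 ))   # 7680 GB memory days
```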
<h3>
Power Systems Pools</h3>
The Power Systems Pools feature is also available for use with new Model 780+ and 795 servers. Designed to assist customers with planned maintenance events, the feature enables you to redistribute your processor and memory activations across a pool of 780 and 795 servers during an event. This is an enhanced version of the former PowerFlex offering for Model 795 servers. The following configuration and operational rules apply to Power Systems Pools:<br />
<ul>
<li>The pool can consist of any combination of up to 10 Model 780 and 795 servers running system firmware 7.6 or higher.</li>
<li>At least 50 percent of the total amount of installed processors and memory within the server pool must be permanently activated.</li>
<li>You’re permitted to have eight planned maintenance events per year.</li>
<li>All of the servers within a pool are permitted to participate in each event.</li>
<li>AIX and IBM i cannot be intermixed within the same pool.</li>
<li>Each maintenance event can last up to seven days, after which, all processor and memory resources must be returned to their previous state.</li>
<li>Requests for a maintenance event must be submitted to IBM at least two business days prior to planned usage.</li>
</ul>
Power Systems Pools can make it much easier to conduct planned maintenance events with no application downtime. Once the temporary activation keys are entered at the HMC, the Live Partition Mobility (LPM) feature of PowerVM can be used to migrate LPARs off of the servers requiring maintenance. Note that this feature is not intended to be part of a DR scenario due to the two-business-day requirement to acquire the temporary activation keys.<br />
<h3>
Reconsider CoD</h3>
IBM’s new 90-day Elastic CoD enablement feature, no-charge CoD credits and Power Systems Pools features offer ways to get creative with CoD implementations. Use of these new features can improve operational flexibility and provide financial savings.</td></tr>
</tbody></table>
</form>
</center>
<h3>
Working With Active Memory Expansion</h3>
<br />
<center>
<form action="http://www.google.com/custom" method="get" target="google_window">
<table bgcolor="#ffffff">
<tbody>
<tr><td align="left" height="32" nowrap="nowrap" valign="top"><a href="http://www.google.com/">
<img align="middle" alt="Google" border="0" src="http://www.google.com/logos/Logo_25wht.gif" /></a>
<br />
Active Memory Expansion (AME) is a Power Systems feature that can improve the utilization of physical memory assigned to an LPAR. Operating systems running on AME-enabled LPARs are unaware that AME is active. If you need to free up physical memory on a server, AME can be used to reduce the physical memory assigned to LPARs. If you have an LPAR that needs more memory, AME can make additional memory available without increasing the LPAR’s physical memory assignment.<br />
<h3>
Introduction to AME</h3>
AME expands the physical memory assigned to an LPAR and makes the installed operating system “see” more memory than is actually assigned to it. For example, an LPAR might have a desired memory setting of 32 GB, but with AME enabled the operating system might “see” 40 GB. The hypervisor achieves this expansion by compressing the least-used memory pages. The multiplier applied to the physical memory is called the expansion factor. Continuing our example, the expansion factor is 1.25, which increases the 32 GB physical memory allocation by 25 percent. This results in 40 GB of memory visible to the LPAR. On POWER7 servers, the compression/expansion is performed using general processor cycles. For POWER7+ and POWER8 servers, dedicated AME circuitry was added to the processor. This circuitry reduces general processor cycle consumption by 90 percent.<br />
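The expansion arithmetic can be sketched as two small helpers (illustrative only; awk is used for the floating-point math):

```shell
# Memory the LPAR "sees" = physical memory * expansion factor.
expanded_memory() { awk -v p="$1" -v f="$2" 'BEGIN { print p * f }'; }

# Factor needed to make a target amount visible from a given physical size.
required_factor() { awk -v p="$1" -v t="$2" 'BEGIN { print t / p }'; }

expanded_memory 32 1.25   # the example from the text: prints 40
required_factor 32 40     # prints 1.25
```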
<h3>
Preview the Benefits of AME</h3>
The AIX command amepat (AME Planning and Advisory Tool) can be used to estimate the AME benefit for a particular workload. It’s available on AIX 6.1 and higher and can be run on servers as old as POWER4. The amepat tool should be run during peak processing periods. If you conduct your AME modeling during non-peak periods, CPU contention might occur during peak workloads as AME might consume more CPU than originally planned. The output of the amepat command displays a table of data that lists various options for memory expansion with four main columns:<br />
<ul>
<li><strong>Expansion Factor</strong> provides a multiplier for how much additional memory the LPAR will see. For example, an expansion factor of 1.5 indicates that the LPAR will see 50 percent more memory than the modeled physical memory allocation.</li>
<li><strong>Modeled True Memory Size</strong> is the amount of physical memory that would be allocated to the LPAR.</li>
<li><strong>Modeled Memory Gain</strong> is the amount of additional memory above the physical memory that would be provided through the use of AME.</li>
<li><strong>CPU Usage Estimate</strong> lists the estimated amount of CPU that would be consumed for the corresponding expansion factor.</li>
</ul>
As the expansion factor increases, so will the amount of required CPU to achieve this level of expansion. One of the flags for amepat lets you specify the target environment where you plan to use AME. This takes into consideration the significant reduction in CPU usage provided by the dedicated AME circuitry on the POWER7+ and POWER8 processors and can be helpful when consolidating older server workloads onto newer servers. An average compression ratio is also displayed. Larger ratios indicate good compressibility. You might also see an amepat output that shows 0.00 for the CPU Usage Estimate. This indicates that there’s an opportunity to reduce the amount of physical memory assigned to the LPAR without consuming any CPU. At some point, further reductions in physical memory would then begin to invoke some level of AME work and begin to consume some CPU cycles.<br />
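One way to read such a table is to pick the largest modeled memory gain whose CPU cost you can afford. The sketch below uses invented rows shaped like amepat's four columns (expansion factor, modeled true memory in GB, modeled gain in GB, CPU usage estimate); the numbers are made up for illustration, not real amepat output:

```shell
# Pick the row with the largest modeled memory gain whose CPU usage
# estimate fits within a given budget. Table rows are hypothetical.
best_option() {
    awk -v budget="$1" '
        $4 + 0 <= budget + 0 && $3 + 0 > best { best = $3 + 0; line = $0 }
        END { print line }' <<'EOF'
1.00 40 0 0.00
1.25 32 8 0.00
1.50 27 13 0.15
1.75 23 17 0.45
2.00 20 20 1.10
EOF
}

best_option 0.5   # prints "1.75 23 17 0.45"
best_option 0     # prints "1.25 32 8 0.00" -- gain at no CPU cost
```

The second call illustrates the 0.00 CPU Usage Estimate case from the text: memory that can be reclaimed without consuming any CPU.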
<h3>
Enabling Your Server to Use AME</h3>
AME is available for POWER7/7+/8 servers running AIX 6.1 or higher. The server must be managed by an HMC. AME is ordered as a server hardware feature code, either with the initial server order or as an upgrade. There is a single one-time charge for AME per server. Once purchased, IBM provides an AME enablement license key. This license key is entered on the HMC. All of the LPARs running on an AME-enabled server are eligible to use AME. The break-even point for the purchase of AME on POWER8 servers can be expressed in terms of physical memory purchase avoidance:<br />
<ul>
<li>S822: 40 GB</li>
<li>S824: 40 GB</li>
<li>E850: 60 GB</li>
<li>E870: 41 GB</li>
<li>E880: 41 GB</li>
</ul>
For example, on an S824 server, if you can use AME to reduce physical memory consumption by at least 40 GB then AME might be a good fit.<br />
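That break-even test reduces to a table lookup, sketched here with the figures from the list above (the helper names are invented for illustration):

```shell
# Break-even memory savings (GB) per POWER8 model, from the list above.
break_even_gb() {
    case "$1" in
        S822|S824) echo 40 ;;
        E850)      echo 60 ;;
        E870|E880) echo 41 ;;
        *)         echo "unknown model: $1" >&2; return 1 ;;
    esac
}

# True (exit 0) if the reclaimable memory reaches the break-even point.
ame_worth_considering() {
    [ "$2" -ge "$(break_even_gb "$1")" ]
}

ame_worth_considering S824 48 && echo "AME may be a good fit"
```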
<h3>
Configuring an LPAR to Use AME</h3>
AME is enabled for an LPAR within the partition profile configuration menus. If your server has AME enabled, you’ll see AME configuration options at the bottom of the partition profile memory configuration tab. There’s a checkbox to enable AME. If checked, you’ll also need to provide an expansion factor in the range of 0 to 10. Note that choosing 0 would yield no expansion. For performance tuning, the expansion factor can be dynamically changed on a running LPAR using a dynamic LPAR (DLPAR) operation from the HMC. You might also choose to use DLPAR operations to change (add/remove) the amount of physical memory assigned to the LPAR.<br />
<h3>
Monitoring AME in Use</h3>
The amepat tool can also be used with an LPAR that has AME enabled. Data about the current amount of memory compression and CPU use for AME will be displayed. In addition, amepat will list several potential expansion factor values along with corresponding modeling data, similar to the information provided when amepat is run in a preview mode for LPARs not actively running AME. Last, amepat will provide recommended configuration changes to improve the overall performance of AME for the LPAR.<br />
<h3>
Take a Test Drive</h3>
If you’re planning to migrate LPARs running on older Power servers to new POWER7/POWER8 servers, it’s recommended that you run the amepat tool on your LPARs to see if AME would be effective for your workloads. For workloads already running on POWER7/POWER8 servers, running amepat can help you decide if implementing AME would be beneficial. Remember that AME can be used to expand the memory of an existing LPAR or to reclaim physical memory. If you have some LPARs that look like a good fit for AME, IBM offers a one-time, no-charge 60-day AME trial per server. This trial will allow you to conduct a real AME test before committing to a purchase of AME. Find additional information in the <a href="http://www-01.ibm.com/support/knowledgecenter/#!/P8DEA/p8hat/p8hat_ame.htm" target="_blank">IBM Knowledge Center</a>.<br />
<br /></td></tr>
</tbody></table>
</form>
</center>
<!-- Search Google --><br />Santosh Guptahttp://www.blogger.com/profile/15304316861414673823noreply@blogger.com0tag:blogger.com,1999:blog-4888604520727443281.post-29166840725163221002016-04-15T14:48:00.000+05:302016-04-15T14:48:11.422+05:30AIX Call Home Web Monitors System Health<!-- Search Google -->
<br />
<center>
<form action="http://www.google.com/custom" method="get" target="google_window">
<table bgcolor="#ffffff">
<tbody>
<tr><td align="left" height="32" nowrap="nowrap" valign="top"><a href="http://www.google.com/">
<img align="middle" alt="Google" border="0" src="http://www.google.com/logos/Logo_25wht.gif" /></a>
</td></tr>
</tbody></table>
</form>
</center>
IBM Call Home is an incredibly useful feature that monitors the health of your system and automatically notifies both you and IBM support when events that need action take place. It will automatically open a service request and transfer the initial diagnostic data to the support team so that they can start an initial remediation plan. The only data sent is diagnostic data, which is encrypted to ensure security. This feature helps ensure rapid remediation of system problems and should be enabled on all systems where possible.<br />
<h3>
Enabling Call Home</h3>
Call Home is also referred to as ESA (Electronic Service Agent), and this software is installed on the HMC (Hardware Management Console) if you have one. Otherwise, you need to install it on every server. Below are the instructions for configuring it on the HMC; on a server, you would go to smitty and select Electronic Service Agent. If that isn’t an option, then you may need to install it. The ESA home page has links with instructions on installation and configuration.<br />
<h3>
Configuring on the HMC</h3>
<ul>
<li>On the HMC you configure ESA and then enable it for use</li>
<li>The configuration will request your IBM customer number, contact information and details for sending notification emails</li>
<li>You will need to allow communication to IBM servers through your firewall</li>
<li>Verify connectivity between the HMC and IBM</li>
</ul>
<h3>
So What Is Call Home Web?</h3>
Call Home Web is a new feature that was added to Call Home. It provides a dashboard that gives a view into your most recent events along with a summary of the past seven days. The system summary can be exported for later use. Call Home Web supports most Power Systems, Power-based PureFlex Systems, and IBM Storage Systems that report Call Home information. In order to use Call Home Web, the system must either be under warranty or be included in a maintenance support contract. When the system is added to Call Home Web, an entitlement check occurs: if a warranty or support contract cannot be found, the system cannot be added. At that point, you can request manual entitlement assistance.<br />
Features provided by Call Home Web include:<br />
<strong>Dashboard view</strong><br />
Where you view your most recent events and a summary of the past seven days<br />
<strong>Manage my systems</strong><br />
Where you edit information about your systems and organize systems into groups<br />
<strong>Groups</strong><br />
Which allows you to group systems into logical groups. You may want to group all the HR systems together or all the database ones or you may prefer other groupings<br />
<strong>Contract information</strong><br />
Keeps track of information on your warranty and support contract, including when it expires. It can be set up to notify you when your contract is close to expiring<br />
<strong>Events by my inventory</strong><br />
Shows events for your system, on an individual or group basis if groups have been set up<br />
<strong>How to and help information</strong><br />
Also called Call Home Assistance. Provides information on how to set up and enable Call Home and Call Home Web as well as on how to register systems and general usage. You can also get additional site assistance such as descriptions of what the various icons mean<br />
<strong>Risk analysis indicators</strong><br />
From the dashboard you can go to the system summary section to get a breakdown of indicators of current risks to your systems, specifically backlevel software.<br />
<strong>Recommended software level</strong><br />
Under system details there is a risk analysis section that provides details on whether your system is at the minimum recommended level. If it’s not at the minimum recommended level, then links are provided to download that level.<br />
<strong>Notifications for back-level software</strong><br />
Under system details is an associated users section, where you can request notifications when Call Home determines that your software is backlevel<br />
<strong>System summary by group</strong><br />
If you have set up groups, then on the dashboard under system summary you can choose to display by group, sort by group and filter by group. This lets you quickly drill down into groups of systems that may be having issues.<br />
<h3>
Adding Systems</h3>
Another feature added to Call Home Web is the ability to request a boarding spreadsheet. If you have a large number of systems to add then you can send an email to <a href="mailto:spe@us.ibm.com" target="_blank">spe@us.ibm.com</a> and request a boarding spreadsheet. This comes with instructions and allows you to put all your systems in a spreadsheet that will then be uploaded by the Call Home helpdesk. Individual systems are added using the Managing My Systems tab. You start by clicking on the Register New Systems button and then follow the instructions. Using an S824 8246-42A serial 02-0012345 as an example, you will need to be able to provide the following information:<br />
<ul>
<li>4-digit machine type – e.g. 8246</li>
<li>3-digit machine model – e.g. 42A</li>
<li>7-digit machine serial – e.g. 0012345</li>
<li>Call Home Web System name – whatever name you want Call Home Web to refer to this system by</li>
<li>Country where the system gets its entitlement</li>
<li>Your IBM customer number – 7 characters</li>
</ul>
After you click on submit you’ll see either registration success, entitlement failure, test event required or machine type not supported by Call Home Web. If the registration is successful, then click on the “Requires confirmation” link and agree to the terms and conditions. If it requires a test event, then generate a test event for that server within 3 hours. Creating a test event or problem is a function within ESA that is easy to perform. If you receive an entitlement failure, then click on the link to get more information and complete the entitlement failure form to have the help desk determine the issue.<br />
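The field lengths above are easy to get wrong when typing, so a small pre-check can help. This is a hypothetical helper (not part of Call Home Web) that splits the article's 8246-42A / 02-0012345 example into the required fields:

```shell
# Hypothetical validator for the registration fields -- not IBM tooling.
# $1 = machine type-model (e.g. 8246-42A), $2 = serial, plant prefix optional.
parse_system_id() {
    mtype=${1%-*} model=${1#*-} serial=${2##*-}   # drop any plant prefix like 02-
    case "$mtype" in
        [0-9][0-9][0-9][0-9]) ;;
        *) echo "machine type must be 4 digits" >&2; return 1 ;;
    esac
    [ "${#model}" -eq 3 ]  || { echo "model must be 3 characters" >&2; return 1; }
    [ "${#serial}" -eq 7 ] || { echo "serial must be 7 characters" >&2; return 1; }
    echo "type=$mtype model=$model serial=$serial"
}

parse_system_id 8246-42A 02-0012345   # prints "type=8246 model=42A serial=0012345"
```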
<h3>
Summary</h3>
IBM Call Home constantly monitors the health and functionality of your system. This is a critical tool in your system health and maintenance strategy. IBM Call Home Web provides an additional interface that lets you group information together and rapidly get an overall view of your entire environment’s health. Call Home and Call Home Web provide a view into your systems that helps you proactively manage events and reduce potential downtime.<br />
<!-- Search Google --><br />Santosh Guptahttp://www.blogger.com/profile/15304316861414673823noreply@blogger.com0tag:blogger.com,1999:blog-4888604520727443281.post-50484218559526419902013-04-06T01:24:00.001+05:302013-12-15T13:29:23.860+05:30IBM PUREFLEX COMING SOON <!-- Search Google -->
<br />
<center>
<form action="http://www.google.com/custom" method="get" target="google_window">
<table bgcolor="#ffffff">
<tbody>
<tr><td align="left" height="32" nowrap="nowrap" valign="top"><br />
<a href="http://www.google.com/">
<img align="middle" alt="Google" border="0" src="http://www.google.com/logos/Logo_25wht.gif" /></a>
</td></tr>
</tbody></table>
<br />
IBM PUREFLEX COMING SOON<br />
<br />
<br />
<div style="text-align: left;">
Hello Friends,</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
I’m not getting enough time to update the blog. Sorry for any inconvenience caused. </div>
<div style="text-align: left;">
You can email me at forsantoshgupta@gmail.com with any queries. </div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
Thanks and Regards</div>
<div style="text-align: left;">
Santosh Gupta</div>
</form>
</center>
<!-- Search Google -->Santosh Guptahttp://www.blogger.com/profile/15304316861414673823noreply@blogger.com2tag:blogger.com,1999:blog-4888604520727443281.post-84589575219102318872011-10-26T23:37:00.000+05:302011-10-26T23:38:31.468+05:30AIX Version 5.3<br /><br />The IBM AIX 5L Version 5.3 has been withdrawn from the market, effective April 29, 2011.<br />Highlights<br />• Well-proven, scalable, open, standards-based UNIX® operating system<br />• IBM POWER5™ technology and Virtualization Engine™ enablement help deliver power, increase utilization, ease administration and reduce total cost<br />• Rock-solid security and availability to help protect IT assets and keep businesses running<br />• Linux® affinity enables fast, cost-effective development of cross-platform applications<br />Accept no limits, make no compromises<br />In today’s on demand world, clients need a safe, secure, stable and flexible operating environment to run their organizations. That is why more and more businesses large and small are choosing AIX 5L™ for POWER™, IBM’s industrial-strength UNIX operating system (OS), for their mission-critical applications. With its proven scalability, reliability and manageability, the AIX 5L OS is an excellent choice for building a flexible IT infrastructure and is the only UNIX operating system that leverages IBM experience in building solutions that run businesses worldwide. And only one UNIX operating system leads the industry in vision and delivery of advanced support for 64-bit scalability, virtualization and affinity for Linux. That operating system is AIX 5L.<br />AIX 5L is an open, standards-based OS that conforms to The Open Group’s Single UNIX Specification Version 3. It provides fully integrated support for 32- and 64-bit applications. AIX 5L supports the IBM System p5™, IBM eServer™ p5, IBM eServer pSeries®, IBM eServer i5 and IBM RS/6000® server product lines, as well as IBM BladeCenter® JS2x blades and IntelliStation® POWER and RS/6000 workstations. 
In addition to compliance with UNIX standards, AIX 5L includes commands and application programming interfaces to ease the porting of applications from Linux to AIX 5L.<br />AIX 5L Version 5.3 offers new levels of innovative self-management technologies. It continues to exploit current 64-bit system and software architecture to support advanced virtualization options, as well as IBM POWER5 and POWER5+™ processors with simultaneous multithreading capability for improved performance and system utilization. AIX 5L V5.3 is enhanced to support the IBM Virtualization Engine systems technology innovations available on POWER5 and POWER5+ systems, including Micro-Partitioning™ and Virtual I/O Server support.<br />AIX 5L V5.3 also includes the advanced distributed file system NFSv4. NFSv4 is an open, standards-based distributed file system that offers superior security, interoperability and scalability. AIX 5L was the first commercial UNIX to include NFSv4. IBM includes advanced NFSv4 file system federation and replication management capabilities.<br />AIX 5L V5.3 provides improved system security, enhanced performance analysis and tuning tools, and added system management tools. 
This AIX 5L release underscores IBM’s firm commitment to long-term UNIX innovations that deliver business value.Santosh Guptahttp://www.blogger.com/profile/15304316861414673823noreply@blogger.com2tag:blogger.com,1999:blog-4888604520727443281.post-56970031895253985662011-10-21T23:56:00.003+05:302011-10-22T00:03:38.540+05:30AIX videos Links<a href="http://www.ibm.com/developerworks/wikis/display/WikiPtype/Movies#Movies-power7">http://www.ibm.com/developerworks/wikis/display/WikiPtype/Movies#Movies-power7</a><br /><br /><br />Welcome to the POWER6/POWER7 and AIX6 Hands-On Technical Product Demos<br />The idea is to provide the "cook book" information to get you started with these new interesting technologies and to answer some basic questions:<br /><br />•What is it about?<br />•How do I get started?<br />•What are a few typical first good uses I could start with?<br />•How easy is it to use?<br />•How could this save me time or money?<br />•Where can I get more information?<br />We hope you find these movies interesting and that they help you make a flying start.<br /><br />Currently, the movies add up to 20.6 hours of free education on the hottest topics.<br /><br />Quick links to the main sections:<br /><br />1.POWER7 Processor<br />2.AIX Workload Partitions<br />3.AIX6 and AIX7 Operating System Features<br />4.POWER6 Processor Features<br />5.Integrated Virtualization Manager (IVM)<br />6.Other Cool & Interesting Stuff<br />7.IBM System Director 6 on AIX<br />8.Thirteen More Director 6 Movies<br />9.Back to POWER Basics<br />10.New Virtualisation Features<br />11.PowerHA SystemMirror 7.1 for AIX<br />The latest movies added are:<br /><br />•2nd Sept 2010 - How Systems Director Saves Me Time - movie 84<br />•12th Jan 2011 - Shared Storage Pools Hands-On - movie 85<br />•28th Jan 2011 - Shared Storage Pools Intro - movie 86<br />•March 2011 - HACMP = PowerHA System Mirror<br />◦On this Techdocs website the famous Shawn Bodily, Power/AIX Advanced Technical Skills, USA presents 
four technical movies on AIX High Availability. These are in .mov format. I had to download Apple QuickTime to view them as other players don't work (mostly audio problems).<br />•PowerHA SystemMirror 7.1 for AIX by HACMP Guru Alex Abderrazag - this includes a set of 6 movies:<br />1.PowerHA Introduction to a typical environment used in the movies<br />2.PowerHA Configuration via SMIT<br />3.PowerHA The "clmgr" command<br />4.PowerHA High Availability in Action<br />5.PowerHA SAN Communications<br />6.PowerHA Application Monitoring<br />Notes on getting the movies to work on your PC:<br /><br />•These movies are in Windows Movie Format (.wmv) to make them small enough to watch over the internet or download, but this means some quality has been lost from the Audio Video Interleave (.avi) originals, which are 60 MB to 90 MB in size.<br />•When tested on some PCs it took 4 to 5 minutes to start the movie - please be patient and don't just assume it's broken - some browsers download the entire movie before they start playing it.<br />•Other browsers handle the media file differently - some start Windows Media Player and some start it within the browser itself. Also, I have found that some auto-resize the movie to fit the window - so start the movie in a suitably sized browser window. The movies were first recorded at 1024x768, but later ones at 800x600 with higher resolution. Sorry, but I'd rather create new movies than try to regenerate them all at one size. If the movie does not fit your screen, the best fix is to upgrade your screen to at least 1280x1024.<br />•If all else fails, try to download the .wmv file and play it locally on your machine: right-click on the Download link below and select "Save Link as" or "Save Target as". This may highlight that your PC does not support this format (good luck sorting that out!).<br />•Linux workstation users - ideas please, can Linux handle the .wmv format? 
If so, how? A good alternative solution is welcome.<br />◦I am told that Linux can indeed play this format - have a look at this website for hints: Ubuntu - Installing Mplayer Codecs; installing OpenSUSE codecs is really simple too.<br />•Windows 7 users - some of the older movies do not work with Windows 7 Media Player. This appears to be due to CODECs missing from Windows 7 that were present in earlier Windows versions; send your comments to Microsoft. We fixed this by downloading the ACELP CODEC from http://www.voiceage.com/acelp_eval.php - strictly at your own risk. I installed the Vista-64 version as I run Windows 7. Then watch the movie via the Windows Media Center (not the Player).<br />•For Windows 7 these movies have been remastered (August 2010) to fix Windows 7 problems of lack of certain CODECs found in earlier Windows versions, poor audio or hangs halfway through: DFP, HMC7 Partition Mobility, Memory Keys, Partition Priority, CPU Pools and Monitoring Pools, Ganglia and PowerVM LX86.<br />•Feedback and further ideas for movies to Nigel Griffiths - nag at uk dot ibm dot comSantosh Guptahttp://www.blogger.com/profile/15304316861414673823noreply@blogger.com3tag:blogger.com,1999:blog-4888604520727443281.post-51477255616970699022011-10-01T02:01:00.000+05:302011-10-01T02:02:05.667+05:30Migration of AIX LPAR from one hardware to otherSupported Methods of Duplicating an AIX System<br /> <br /><br /> Technote (FAQ) <br /> <br />Question <br />I would like to move, duplicate, or clone an AIX system onto another partition or hardware. How can I accomplish this? <br /> <br /> <br /> <br />Answer <br />This document describes the supported methods of duplicating, or cloning, an AIX instance to create new systems based on an existing one. 
It also describes methods known to us that are not supported and will not work.<br />Why Duplicate A System?<br />Duplicating an installed and configured AIX system has some advantages over installing AIX from scratch, and can be a faster way to get a new LPAR or system up and running.<br /><br />Using this method, customized configuration files, installation of additional AIX filesets, application configurations and tuning parameters can be set up once and then installed on another system or partition.<br /><br /><br />Supported Methods<br /><br />1. Cloning a system via a mksysb backup from one system and restoring it to a new system.<br /><br />This can either be a mksysb backup of the rootvg from the source system to tape, DVD, or a file on a NIM server. <br /><br />If the mksysb is going to be used to create a new machine, make sure to set 'recover devices' to NO when it is restored. This will ensure that devices existing on the source machine aren't added to the ODM of the target machine.<br /><br /><br />2. Using the alt_disk_copy command.<br /><br />If you have extra disks on your system, or have disks you would like to associate with one system, load a rootvg, then remove them and associate with a new system, this is a good way to copy the rootvg to them.<br /><br />The basic command to do this would be:<br /><br /># alt_disk_copy -BOd hdiskx<br /><br />The -B option tells alt_disk_copy not to change the bootlist to this new copy of rootvg, and the -O option will remove devices from your customized ODM database.<br /><br />From the alt_disk_copy man page:<br /><br />-O<br />Performs a device reset on the target altinst_rootvg. This causes<br />the alternate disk install to not retain any user-defined device
This flag is useful if the target disk or disks<br />become the rootvg of a different system (such as in the case of<br />logical partitioning or system disk swap).<br /><br />When the disks containing this altinst_rootvg are moved to another host and then booted from, AIX will run cfgmgr and probe for any hardware, adding ODM information at that time.<br /><br /><br />3. Using alt_disk_mksysb to install a mksysb image on another disk.<br /><br />Using this technique, a mksysb image is first created, either to a file or to CD, DVD or tape.<br /><br />Then that mksysb image is restored to unused disks in the current system using alt_disk_mksysb, again using the -O option to perform a device reset.<br /><br />After this the disks could be removed and placed in a new system, or rezoned via fibre to a new system, and the rootvg booted up. <br /><br /><br />Advanced Techniques<br /><br />1. Live Partition Mobility<br /><br />Using the Live Partition Mobility feature of PowerVM you can migrate an AIX LPAR and its applications from one physical server to another while it is up and running. Please see the AIX manual for further information:<br /><br />http://publib.boulder.ibm.com/infocenter/aix/v6r1/topic/com.ibm.aix.baseadmn/doc/baseadmndita/lpm_overview.htm<br /><br /><br />2. Higher Availability Using SAN Services<br /><br />There are additional methods not described here, which have been documented on IBM developerWorks.<br />Please refer to the document "AIX higher availability using SAN services" for details.<br /><br />http://www.ibm.com/developerworks/aix/library/au-AIX_HA_SAN/index.html<br /><br /><br />Unsupported Methods<br /><br />1. 
Using a bitwise copy of a rootvg disk to another disk.<br /><br />This bitwise copy can be a one-time snapshot copy, such as FlashCopy, from one disk to another, or a continuously updating copy method, such as Metro Mirror.<br /><br />While these methods will give you an exact duplicate of the installed AIX operating system, the copy of the OS may not be bootable.<br /><br /><br />2. Removing the rootvg disks from one system and inserting them into another.<br /><br />This also applies to re-zoning SAN disks that contain the rootvg so another host can see them and attempt to boot from them.<br /><br /><br />Why don't these methods work?<br /><br />The reason is that there are many objects in an AIX system that are unique to it: hardware location codes, World-Wide Port Names, partition identifiers, and Vital Product Data (VPD), to name a few. Most of these objects or identifiers are stored in the ODM and used by AIX commands.<br /><br />If a disk containing the AIX rootvg in one system is copied bit-for-bit (or removed), then inserted in another system, the firmware in the second system will describe an entirely different device tree than the AIX ODM expects to find, because it is operating on different hardware. Devices that were previously seen will show as missing or removed, and the system will typically fail to boot with LED 554 (unknown boot disk).Santosh Guptahttp://www.blogger.com/profile/15304316861414673823noreply@blogger.com5tag:blogger.com,1999:blog-4888604520727443281.post-65474134594135125312010-10-21T16:20:00.002+05:302010-10-21T16:26:44.514+05:30Moving file systems from one volume group to another<u>Moving file systems from one volume group to another</u><br /><br />ATTENTION: Make sure a full backup exists of any data you intend to migrate before using these procedures.<br /><br />In AIX, storage allocation is performed at the volume group level. Storage cannot span volume groups. 
If space within a volume group becomes constrained, then space that is available in other volume groups cannot be used to resolve storage issues.<br /><br />The solution to this problem is to add more physical volumes to the relevant volume group. This may not be an option in all environments. If other volume groups contain the required free space, the alternative is to move the required logical volumes to the desired volume group and expand them as needed.<br /><br />The source logical volume can be moved to another volume group with the cplv command. The following steps achieve this.<br /><br />ATTENTION: The logical volume should be inactive during these steps to prevent incomplete or inconsistent data. If the logical volume contains a mounted file system, then that file system should be unmounted first. If this logical volume is being used as a RAW storage device, then the application using this logical volume should close the device or be shut down.<br /><br />1.Copy the source logical volume to the desired volume group with the cplv command.<br /><br />For example, where myvg is the new volume group and mylv is the name of the user's logical volume, enter:<br /><br />cplv -v myvg mylv<br /><br />This will return the name of the new logical volume, such as lv00.<br /><br />If this logical volume was being used for RAW storage, skip to <a href="http://www.aixmind.com/?p=1000#step6">step 6</a>. If this is a JFS or JFS2 file system, proceed to step 2. Note that RAW storage devices should NOT use the first 512 bytes of the RAW device. This is reserved for the LVCB, or logical volume control block. cplv will not copy the first 512 bytes of the RAW logical volume, but it will update fields in the new logical volume's LVCB.<br /><br />2.All JFS and JFS2 file systems require a log device. This will be a logical volume with a type of jfslog (or jfs2log for JFS2 file systems). Run the lsvg -l &lt;vgname&gt; command on your destination volume group. 
If a JFS or JFS2 log DOES NOT already exist on the new volume group, create one by using the mklv and logform commands as detailed below. If a JFS or JFS2 log DOES exist, proceed to <u>step 3</u>.<br /><br />With a JFS2 filesystem, you also have the option of using an inline log. With inline logs, the jfs2log exists on the filesystem itself. After the cplv command is run on a JFS2 inline-log filesystem, run:<br /><br />logform /dev/lvname<br /><br />You should receive a message about formatting the inline log. If you do not receive a message about an inline log, then this filesystem is not a JFS2 inline-log filesystem and you should treat it as a regular JFS2 filesystem. After answering y to format the inline log, continue to step 3.<br /><br />To make a new JFS log, where myvg is the name of the new volume group, enter:<br /><br />mklv -t jfslog myvg 1<br /><br />To make a new JFS2 log, enter: mklv -t jfs2log myvg 1<br /><br />This will return a new logical volume of type jfslog or jfs2log, such as loglv00. This new logical volume must be formatted with the logform command in order to function properly as a JFS or JFS2 log. For example:<br /><br />logform /dev/loglv00<br /><br />Answer y when asked whether to destroy the contents.<br /><br />3. Change the filesystem to reference the new logical volume and a log device that exists in the new volume group, using the chfs command.<br /><br />For example, where myfilesystem is the name of the user's filesystem, enter:<br /><br />chfs -a dev=/dev/lv00 -a log=/dev/loglv00 /myfilesystem<br /><br />With inline logs on JFS2 filesystems this command is slightly different:<br /><br /> chfs -a dev=/dev/lv00 -a log=INLINE /myfilesystem<br /><br />4. Run fsck to ensure filesystem integrity. Enter:<br /><br />fsck -p /dev/lv00<br /><br />NOTE: It is common to receive errors after running fsck -p /dev/lvname prior to mounting the filesystem.
These errors are due to a known defect, which will be resolved in a future release of AIX. Once the filesystem has been mounted, a later fsck run with the filesystem unmounted should no longer produce an error.<br /><br />5. Mount the file system.<br /><br />For example,<br />where myfilesystem is the name of the user's file system, enter:<br /><br />mount /myfilesystem<br /><br />At this point, the migration is complete, and any applications or users can now access the data in this filesystem. To change the logical volume name, proceed to the following step.<br /><br />NOTE: If you receive errors from the preceding step, do not continue. Contact your AIX support center.<br /><br /><br />6. Remove the source logical volume with the rmlv command.<br /><br />For example,<br /><br />where mylv is the name of the user's logical volume, enter:<br /><br />rmlv mylv<br /><br /><br />7. Rename and reset any needed attributes on the new logical volume with the chlv or chmod commands.
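Before the final rename, it can help to see the whole copy/log/chfs/fsck/mount sequence above in one place. The function below is a sketch only: it prints the commands rather than running them, so the sequence can be reviewed first. All names are the illustrative ones used above (source LV mylv in volume group myvg, filesystem /myfilesystem, new log loglv00); on a real system you would substitute the new LV name actually returned by cplv, and skip the mklv/logform pair if the destination VG already has a jfs2log.

```shell
# Sketch: print (do not run) the JFS2 migration sequence described above.
# Arguments: destination VG, source LV, filesystem, new LV name, new log LV.
migrate_fs_commands() {
    vg=$1; srclv=$2; fs=$3; newlv=$4; loglv=$5
    echo "umount $fs"                    # the LV must be closed first
    echo "cplv -v $vg $srclv"            # prints the new LV name, e.g. lv00
    echo "mklv -t jfs2log $vg 1"         # skip if $vg already has a jfs2log
    echo "logform /dev/$loglv"           # answer y to format the new log
    echo "chfs -a dev=/dev/$newlv -a log=/dev/$loglv $fs"
    echo "fsck -p /dev/$newlv"
    echo "mount $fs"
}

migrate_fs_commands myvg mylv /myfilesystem lv00 loglv00
```

Reviewing the printed sequence before executing each command by hand keeps the procedure auditable, which matters given the ATTENTION notes above about data integrity.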
In order to rename the logical volume, the filesystem or raw logical volume must be in a closed state.<br /><br />For example, where mylv is the new name you wish to give lv00, enter:<br /><br /> chlv -n mylv lv00<br /><br /><a name="4"><u>Logical volumes specific to rootvg</u></a><br /><br />The following logical volumes and file systems are specific to the rootvg volume group and cannot be moved to other volume groups:<br /><br /> Logical Volume File System or Description<br />------------------------------------------------------<br />hd2 /usr<br />hd3 /tmp<br />hd4 /<br />hd5 <boot logical volume><br />hd6 <primary paging space><br />hd8 <primary JFS/JFS2 log><br />hd9var /varSantosh Guptahttp://www.blogger.com/profile/15304316861414673823noreply@blogger.com3tag:blogger.com,1999:blog-4888604520727443281.post-91525809589301643652010-09-11T18:22:00.000+05:302010-09-11T18:23:13.723+05:30Error when install softwares giving bosboot verification failureNot able to install software: bosboot verification failure<br /> <br />When we try to install software on an AIX box, it fails with a "bosboot verification failed" error.<br /><br />We checked and found that /dev/ipldevice was not present.
This file is normally a hard link to the boot disk device, /dev/hdisk0 in this case.<br /><br />So recreate /dev/ipldevice as a hard link to /dev/hdisk0:<br /><br />ln /dev/hdisk0 /dev/ipldevice<br /><br />Then rebuild the boot image:<br /><br />bosboot -ad /dev/hdisk0<br /><br />After that, the software installation worked.Santosh Guptahttp://www.blogger.com/profile/15304316861414673823noreply@blogger.com2tag:blogger.com,1999:blog-4888604520727443281.post-49112851432829684742010-09-11T12:46:00.001+05:302010-09-11T12:48:18.011+05:30IBM VIOS Installation over NIM<p>Prerequisites<br />IBM VIOS Installation DVD<br />IBM AIX Installation CD Disk 1 (I used AIX 5.3)<br />AIX NIM Server (I used AIX 5.3)<br />IBM POWER5 system </p><p><a name="IBMVIOSInstallationoverNIM-Step1.PrepareInstallationfiles%3A"></a>Step 1. Prepare Installation files:<br />AIX file size limit<br />You must ensure that your file size security limitation isn't going to stop you from copying your mksysb image from your cdrom to your hard drive. On your NIM server, go to the /etc/security directory and edit the limits file. Change fsize to -1 or something large enough to ensure the mksysb image will copy over. You will need to reboot your system for this to take effect, or you can log out and log in again.<br />cd /etc/security<br />vi limits (set fsize = -1)<br />reboot, or log out and log in<br />Insert and Mount VIOS DVD<br />smitty mountfs<br />FILE SYSTEM name: /dev/cd0<br />DIRECTORY over which to mount: /cdrom<br />TYPE of file system: cdrfs<br />Mount as a READ-ONLY system?
yes<br />(or: mkdir /cdrom<br />mount -v cdrfs -o ro /dev/cd0 /cdrom)<br />Copy installation files from cdrom:<br />mkdir /export/VIOS<br />cd /cdrom/nimol/ioserver_res<br />-rw-r--r-- 1 root system 11969032 Jul 05 07:07 booti.chrp.mp.ent.Z<br />-rw-r--r-- 1 root system 951 Jul 05 07:07 bosinst.data<br />-rw-r--r-- 1 root system 40723208 Jul 05 07:07 ispot.tar.Z<br />lrwxrwxrwx 1 root system 38 Jul 05 07:07 mksysb -> ../../usr/sys/inst.images/mksysb_image<br />cp bosinst.data /export/VIOS<br />cd /cdrom/usr/sys/inst.images<br />-rw-r--r-- 1 root system 1101926400 Jul 05 06:52 mksysb_image<br />cp mksysb_image /export/VIOS<br />For newer versions of VIOS, like 1.5.2 & 2.1, you need to do the following:<br />cp mksysb_image2 /export/VIOS<br />cd /export/VIOS<br />cat mksysb_image2 >> mksysb_image<br /><a name="IBMVIOSInstallationoverNIM-Step2.DefineNIMResources%3A"></a>Step 2. Define NIM Resources:<br />Define the mksysb_image resource object<br />nim -o define -t mksysb -a location=/export/VIOS/mksysb_image -a server=master vios_mksysb<br />Define the SPOT resource object<br />mkdir /export/VIOSSPOT<br />nim -o define -t spot -a server=master -a location=/export/VIOS/VIOSSPOT -a source=vios_mksysb vios_spot<br />Creating SPOT in "/export/VIOS/VIOSSPOT" on machine "master" from "vios_mksysb"<br />...<br />Restoring files from BOS image. This may take several minutes ...<br />Checking filesets and network boot images for SPOT "vios_spot".<br />This may take several minutes ...<br />Define the bosinst resource object<br />nim -o define -t bosinst_data -a location=/export/VIOS/bosinst.data -a server=master vios_bosinst<br />Define the lpp_source resource object. (You might skip this step if you wish, as the lpp_source only provides extra filesets. But you should be able to install and run VIOS without an lpp_source, same as AIX.
Also note that each VIOS version is based on a different AIX version. You need to find which AIX version is required to create the lpp_source. Run lsnim -l vios_mksysb and you will see the AIX version. You need that CD to create the lpp_source. For example, for VIO 1.5 you need AIX 5.3 TL7 CD1, for 1.5.2 you need AIX 5.3 TL8 CD1, and for 2.1 you need AIX 6.1 TL2. But always run the lsnim -l command on the mksysb or the SPOT you just created to find which AIX CD you need.)<br />Insert the first disk of the AIX installation. NOTE: When using the VIOS lpp_source to NIM-install an LPAR, you get a missing simages error. So instead, we will use the AIX installation CDs, which work just fine.<br />umount /cdrom<br />mkdir /export/VIOS/lppsource<br />nim -o define -t lpp_source -a source=/dev/cd0 -a server=master -a location=/export/VIOS/lppsource vios_lppsource<br /><a name="IBMVIOSInstallationoverNIM-Step3.CreateVIOSLPAR%3A"></a>Step 3. Create VIOS LPAR:<br />NOTE: I don't have any pictures of this part of the setup, but it should be obvious how this is done.<br />NOTE: I give specifications for a typical VIOS server. Your environment may vary.<br />On the Power 5 HMC, right-click on Partitions and select Create -> Logical Partition.<br />Enter a Partition ID and a Partition name. Under Partition environment, select Virtual I/O server.<br />Select Next.<br />Configure the workload group if needed; otherwise select No. Select Next.<br />Enter a Profile Name. Select Next.<br />Select the amount of Minimum memory, Desired memory, and Maximum memory. I usually use 2 GB for all three. Select Next.<br />Select a Processing mode. I use Dedicated. Select Next.<br />If using Dedicated, enter the Minimum processors, Desired processors, and Maximum processors. I usually use 4 processors for all three. Select Next.<br />Select the Hardware Configuration that you wish to use for your environment. Select Next.<br />Configure I/O pools - Leave these as the default.
Select Next.<br />Configure Virtual I/O adapters - I typically configure this part later. Select Next.<br />Configure Power Controlling Partitions - Leave these as the default settings. Select Next.<br />Optional Settings - Leave these as the default settings. Select Next.<br />Verify settings and Select Finish.<br /><a name="IBMVIOSInstallationoverNIM-Step4.NIMVIOSLPAR%3A"></a>Step 4. NIM VIOS LPAR:<br />On the NIM server, start NIM: smit nim<br />Network Installation Management<br />Move cursor to desired item and press Enter.<br />Configure the NIM Environment<br />Perform NIM Software Installation and Maintenance Tasks<br />Perform NIM Administration Tasks<br />Create IPL ROM Emulation Media<br />Esc+1=Help Esc+2=Refresh Esc+3=Cancel Esc+8=Image<br />Esc+9=Shell Esc+0=Exit Enter=Do<br />Select Perform NIM Software Installation and Maintenance Tasks<br />Perform NIM Software Installation and Maintenance Tasks<br />Move cursor to desired item and press Enter.<br />Install and Update Software<br />List Software and Related Information<br />Software Maintenance and Utilities<br />Alternate Disk Installation<br />Manage Diskless/Dataless Machines<br />Esc+1=Help Esc+2=Refresh Esc+3=Cancel Esc+8=Image<br />Esc+9=Shell Esc+0=Exit Enter=Do<br />Select Install and Update Software<br />Install and Update Software<br />Move cursor to desired item and press Enter.<br />Install the Base Operating System on Standalone Clients<br />Install Software<br />Update Installed Software to Latest Level (Update All)<br />Install Software Bundle<br />Update Software by Fix (APAR)<br />Install and Update from ALL Available Software<br />Esc+1=Help Esc+2=Refresh Esc+3=Cancel Esc+8=Image<br />Esc+9=Shell Esc+0=Exit Enter=Do<br />Select Install the Base Operating System on Standalone Client<br />Install and Update Software<br />Move cursor to desired item and press Enter.<br />Install the Base Operating System on Standalone Clients<br />Install Software<br />Update Installed Software to 
Latest Level (Update All)<br />Install Software Bundle<br />Update Software by Fix (APAR)<br />Install and Update from ALL Available Software<br />+--------------------------------------------------------------------------+<br /> Select a TARGET for the operation<br /><br /> Move cursor to desired item and press Enter.<br /><br /> reg-05 machines standalone<br /><br /> Esc+1=Help Esc+2=Refresh Esc+3=Cancel<br /> Esc+8=Image Esc+0=Exit Enter=Do<br />Es /=Find n=Find Next<br />Es+--------------------------------------------------------------------------+<br />Select the machine to install VIOS on. If nothing appears, make sure you have created a standalone system.<br />Install and Update Software<br />Move cursor to desired item and press Enter.<br />Install the Base Operating System on Standalone Clients<br />Install Software<br />Update Installed Software to Latest Level (Update All)<br />Install Software Bundle<br />Update Software by Fix (APAR)<br />Install and Update from ALL Available Software<br />+--------------------------------------------------------------------------+<br /> Select the installation TYPE<br /><br /> Move cursor to desired item and press Enter.<br /><br /> rte - Install from installation images<br /> mksysb - Install from a mksysb<br /> spot - Install a copy of a SPOT resource<br /><br /> Esc+1=Help Esc+2=Refresh Esc+3=Cancel<br /> Esc+8=Image Esc+0=Exit Enter=Do<br />Es /=Find n=Find Next<br />Es+--------------------------------------------------------------------------+<br />Select mksysb - Install from a mksysb<br />Install and Update Software<br />Move cursor to desired item and press Enter.<br />Install the Base Operating System on Standalone Clients<br />Install Software<br />Update Installed Software to Latest Level (Update All)<br />Install Software Bundle<br />Update Software by Fix (APAR)<br />Install and Update from ALL Available Software<br />+--------------------------------------------------------------------------+<br /> 
Select the MKSYSB to use for the installation<br /><br /> Move cursor to desired item and press Enter.<br /><br /> vios_mksysb resources mksysb<br /><br /> Esc+1=Help Esc+2=Refresh Esc+3=Cancel<br /> Esc+8=Image Esc+0=Exit Enter=Do<br />Es /=Find n=Find Next<br />Es+--------------------------------------------------------------------------+<br />Select the vios_mksysb resource.<br />Install and Update Software<br />Move cursor to desired item and press Enter.<br />Install the Base Operating System on Standalone Clients<br />Install Software<br />Update Installed Software to Latest Level (Update All)<br />Install Software Bundle<br />Update Software by Fix (APAR)<br />Install and Update from ALL Available Software<br />+--------------------------------------------------------------------------+<br /> Select the SPOT to use for the installation<br /><br /> Move cursor to desired item and press Enter.<br /><br /> vios_spot resources spot<br /><br /> Esc+1=Help Esc+2=Refresh Esc+3=Cancel<br /> Esc+8=Image Esc+0=Exit Enter=Do<br />Es /=Find n=Find Next<br />Es+--------------------------------------------------------------------------+<br />Select vios_spot resource.<br />Select the vios_lppsource resource.<br />Select the vios_bosinst resource.<br />Install the Base Operating System on Standalone Clients<br />Type or select values in entry fields.<br />Press Enter AFTER making all desired changes.<br />[TOP] [Entry Fields]<br />* Installation Target reg-05<br />* Installation TYPE mksysb<br />* SPOT vios_spot<br />LPP_SOURCE [vios_lppsource] +<br />MKSYSB vios_mksysb<br />BOSINST_DATA to use during installation [vios_bosinst] +<br />IMAGE_DATA to use during installation [] +<br />RESOLV_CONF to use for network configuration [] +<br />Customization SCRIPT to run after installation [] +<br />Customization FB Script to run at first reboot [] +<br />ACCEPT new license agreements? [no] +<br />Remain NIM client after install? 
[yes] +<br />PRESERVE NIM definitions for resources on [yes] +<br />this target?<br />FORCE PUSH the installation? [no] +<br />[MORE...31]<br />Esc+1=Help Esc+2=Refresh Esc+3=Cancel Esc+4=List<br />Esc+5=Reset Esc+6=Command Esc+7=Edit Esc+8=Image<br />Esc+9=Shell Esc+0=Exit Enter=Do<br />NOTE: Setting the "Remain as NIM client after install" as YES can cause errors when configuring your shared ethernet adapters after install.<br />Press Enter to start the NIM process. </p>Santosh Guptahttp://www.blogger.com/profile/15304316861414673823noreply@blogger.com0tag:blogger.com,1999:blog-4888604520727443281.post-80088240816012530342010-09-11T12:32:00.000+05:302010-09-11T12:34:04.966+05:30Assiging original PVID to hdisk in AIXProblem:<br />--------<br /><br /><br />I am using AIX 5.3L with EMC Symmetrix storage, establishing BCV's and then<br />splitting them and mounting them to the same host. I can mount the BCV's to<br />the same host using the 'recreatevg' command, but the problem I'm having is<br />when I'm restoring a BCV back to the standard. When the BCV is restored and<br />I do an 'lsvg vg1' where vg1's original PV was hdiskpower33 (the standard) it<br />is now hdiskpower35 (the BCV). I do not want this to happen and suspect the<br />problem is that the BCV's PVID was changed during the recreatevg. I want to<br />assign the original PVID to the BCV so that it will not remove hdiskpower33<br />from vg1. If I do 'rmdev -dl hdiskpower35' and then do 'lsvg -p vg1' I get<br />an error stating that the PVID was not found, and hdiskpower33 is not listed<br />as being a member of the vg1 volume group. I've tried doing:<br /><br />chdev -l hdiskpower35 -a pv={original pvid}<br /><br />but am told it is an illegal parameter. 
Is there another way to do this?<br /><br /><br />Solution:<br />---------<br /><br />Use at your own risk:<br /><br />1) BACKUP the old disk's critical information<br /><br /># dd if=/dev/hdisk9 of=/tmp/hdisk9.save bs=4k count=1<br /><br />If something goes wrong and the header information gets damaged,<br />use the following to RECOVER the original PVID and header information.<br /><br />RECOVERY<br /># dd if=/tmp/hdisk9.save of=/dev/hdisk9 bs=4k count=1<br /><br />2) Find the original PVID. This might be seen with lspv, importvg, or<br />varyonvg. Our example original PVID is "0012a3e42bc908f3"<br /><br /># lqueryvg -Atp /dev/hdisk9<br />...<br />Physical: 0012a3e42bc908f3 2 0<br />00ffffffc9cc5f99 1 0<br />...<br /><br />3) Verify that the disk sees an invalid PVID. The first 2 data fields<br />of offset 80 contain the PVID.<br /><br /># lquerypv -h /dev/hdisk9 80 10<br />00000080 00001155 583CD4B0 00000000 00000000 ...UX<..........<br />(the first two words, 00001155 583CD4B0, are the invalid PVID)<br /><br />4) Translate the ORIGINAL PVID into its octal version. Take every 2 digits<br />of the hex PVID and translate them to octal. This can be done by hand,<br />calculator, script, or web page.<br /><br />0012a3e42bc908f3 -> 00 12 a3 e4 2b c9 08 f3<br />Octal version -> 000 022 243 344 053 311 010 363<br /><br />5) Write the binary version of the PVID to a file by using the octal<br />values. Each octal byte is led with a backslash-zero "\0". Do<br />not use spaces or any other characters except for the final \c to<br />keep from issuing a hard return.<br /><br /># echo "\0000\0022\0243\0344\0053\0311\0010\0363\c" >/tmp/oldpvid<br /><br />6) Verify that the binary PVID was written correctly. The original<br />hex PVID should be seen AND the final address should be "0000010".<br />If EITHER of these is incorrect, try again; make sure there are no<br />spaces in the echo and that the echo ends with "\c".<br /><br /># od -x /tmp/oldpvid<br />0000000 0012 a3e4 2bc9 08f3<br />0000010<br /><br />7) Restore the PVID to the disk.
You should see 8 records in and out.<br />If there are more or fewer, restore the original 4K block by using<br />the recovery method in step 1.<br /><br /># cat /tmp/oldpvid | dd of=/dev/hdisk9 bs=1 seek=128<br />8+0 records in.<br />8+0 records out.<br /><br />8) Verify that the PVID was written correctly<br /><br />#lquerypv -h /dev/hdisk9 80 10<br />00000080 0012A3E4 2BC908F3 00000000 00000000 ....+...........<br /><br />9) Reconfigure the disk definitions on all systems attaching to that disk.<br />The ODM information for that drive will NOT be updated until the<br />disk is removed and reconfigured. Until that reconfiguration, commands<br />like `lspv` will still report stale information.Santosh Guptahttp://www.blogger.com/profile/15304316861414673823noreply@blogger.com0tag:blogger.com,1999:blog-4888604520727443281.post-90897657824486464882010-09-11T12:25:00.001+05:302010-09-11T12:29:47.411+05:30Script to delete failed path MPIO in AIXfor disk in `lspv | awk '{ print $1 }'`<br />do<br />for path in `lspath -l $disk -F "status connection" | grep Failed | awk '{ print $2 }'`<br />do<br />echo $disk<br />rmpath -l $disk -w $path -d<br />done<br />doneSantosh Guptahttp://www.blogger.com/profile/15304316861414673823noreply@blogger.com0tag:blogger.com,1999:blog-4888604520727443281.post-45226599834135707832010-08-22T15:55:00.000+05:302010-08-22T15:56:24.883+05:30IBM AIX 7 Open Beta Program<p><strong><u>IBM AIX 7 Open Beta Program<br /></u></strong>Welcome to the open beta for IBM’s premier UNIX operating system, AIX 7. AIX 7 is binary compatible with previous releases of AIX, including AIX 6, 5.3, 5.2 and 5.1. AIX 7 extends the leadership features of AIX to include exciting new capabilities for vertical scalability, virtualization and manageability.<br />The open beta for AIX 7 is intended to give our clients, independent software vendors and business partners the opportunity to gain early experience with this new release of AIX prior to general availability.
This open beta can be run on any Power Systems, IBM System p or eServer pSeries system that is based on POWER4, PPC970, POWER5, POWER6, or POWER7 processors.<br />Key features of AIX 7 included in this beta:<br />Virtualization<br />AIX 5.2 Workload Partitions for AIX 7 - This new enhancement to WPAR technology allows a client to back up an LPAR running AIX V5.2 and restore it into a WPAR running on AIX 7 on POWER7. This capability is designed to allow clients to easily consolidate smaller workloads running on older hardware onto larger, more efficient POWER7 systems. Although this capability is designed specifically for POWER7, it can be tested on older POWER processors during the open beta. Please note that this capability will only work with AIX 5.2.<br />Support for Fibre Channel adapters in a Workload Partition - AIX 7 includes support to allow a physical or virtual Fibre Channel adapter to be assigned to a WPAR. This allows a WPAR to directly own SAN devices, including tape devices using the 'atape' device type. This capability is designed to expand the capabilities of a Workload Partition and simplify management of storage devices.<br />Security<br />Domain Support in Role Based Access Control - This enhancement to RBAC allows a security policy to restrict administrative access to a specific set of similar resources, such as a subset of the available network adapters. This allows IT organizations that host services for multiple tenants to restrict administrator access to only the resources associated with a particular tenant. Domains can be used to control access to volume groups, filesystems, files, and devices (in /dev).<br />Manageability<br />NIM thin server - Network Installation Management (NIM) support for thin servers has been enhanced to support NFSv4 and IPv6.
Thin Servers are diskless or dataless AIX instances that boot from a common AIX image via NFS.<br />Networking<br />Etherchannel enhancements - Support for 802.3ad Etherchannel has been enhanced to ensure that a link is Link Aggregation Control Protocol (LACP) ready before sending data packets.<br />Product plans referenced in this document may change at any time at IBM’s sole discretion, and are not intended to be a commitment to future product or feature availability. All statements regarding IBM future direction, plans, product names or intent are subject to change or withdrawal without notice and represent goals and objectives only. All information is provided on an as-is basis, without any warranty of any kind. </p><p> </p><p>Links for AIX 7.0</p><p>IBM AIX 7 Open Beta Program<br />The following links provide additional valuable resources related to this AIX 7 Open Beta.<br /><a href="http://publib.boulder.ibm.com/infocenter/aix/v7r1/">AIX 7 On-line Information Center</a><br />The official IBM statement on <a href="http://www-03.ibm.com/systems/power/software/aix/compatibility/">AIX binary compatibility</a><br /><a href="http://www.ibm.com/developerworks/aix">IBM articles, tutorials, and technical resources for AIX and UNIX users</a><br />A full range of <a href="http://www.ibm.com/systems/power/solutions/">IBM POWER System solutions</a> to match your business needs<br />A full range of <a href="http://www.ibm.com/systems/power/hardware/">IBM POWER System hardware</a> to match your business needs<br /><a href="http://www.ibm.com/systems/power/">Discover the POWER of IBM POWER System servers and solutions</a><br /><a href="http://www.ibm.com/partnerworld/aix">PartnerWorld</a> for AIX has resources and support for IBM Business Partners looking to exploit and learn about AIX<br />A one stop shop to learn about the benefits, resources and support available to <a href="http://ibm.com/partnerworld/systems">IBM Business Partners for IBM Systems</a>, servers and
storage </p>Santosh Guptahttp://www.blogger.com/profile/15304316861414673823noreply@blogger.com1tag:blogger.com,1999:blog-4888604520727443281.post-41673410137139591862010-08-22T15:50:00.002+05:302010-08-22T15:59:16.108+05:30New Features in AIX Version 7<p><strong><u>New Features in AIX Version 7</u></strong></p><br /><p><strong>IBM announced AIX version 7. <a href="http://www-03.ibm.com/systems/power/software/aix/v71/preview.html">http://www-03.ibm.com/systems/power/software/aix/v71/preview.html</a><br />Several new features were mentioned in the launch, but there were two new features that I found particularly interesting:<br /><u><br /></u>- AIX 5.2 WPARs for AIX 7<br />- Cluster Aware AIX<br /></strong></p><br /><p><strong><u>AIX 5.2 WPARs for AIX 7<br /><br />In AIX version 7, administrators will now have the capability to create Workload Partitions (WPARs) that can run AIX 5.2, inside an AIX 7 operating system instance. This will be supported on the POWER7 server platform. This is pretty cool. IBM have done this to allow some customers, that are unable to migrate to later generations of AIX and Power, to move to POWER7 whilst keeping their legacy AIX 5.2 systems operational. So for those clients that MUST stay on AIX 5.2 (for various reasons such as Application support) but would like to run their systems on POWER7, this feature may be very attractive. It will help to reduce the effort required when consolidating older AIX 5.2 systems onto newer hardware. It may also reduce some of the risk associated with migrating applications from one version of the AIX operating system to another.<br /><br />To migrate an existing AIX 5.2 system to an AIX 7 WPAR, administrators will first need to take a mksysb of the existing system. Then they can simply restore the mksysb image inside the AIX 7 WPAR. IBM will also offer limited defect and how-to support for the AIX 5.2 operating system in an AIX 7 WPAR. 
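As a sketch of the migration path just described (mksysb of the 5.2 system, then a restore into an AIX 7 WPAR), the helper below prints a plausible command sequence rather than running it. Note the caveats: the mkwpar options shown (-C for a versioned AIX 5.2 WPAR, -B for the backup image) and every name here are assumptions based on my reading of the announcement, so verify them against the final AIX 7 documentation before relying on them.

```shell
# Sketch: print (do not run) an assumed AIX 5.2 -> AIX 7 WPAR migration flow.
# The mkwpar flags and all names are illustrative assumptions, not IBM's
# documented procedure.
wpar52_migration_commands() {
    image=$1; wpar=$2
    echo "# on the AIX 5.2 system: create a mksysb image"
    echo "mksysb -i $image"
    echo "# on the AIX 7 / POWER7 system: restore it into a versioned WPAR"
    echo "mkwpar -C -B $image -n $wpar"
    echo "startwpar $wpar"
}

wpar52_migration_commands /backup/aix52.mksysb wpar52
```

The appeal of this flow is that the legacy 5.2 environment moves as a single backup image, so the application stack inside it does not have to be reinstalled or recertified on a newer AIX level.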
These WPARs can, of course, be managed via IBM Systems Director with the Workload Partitions Manager plug-in. <a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgRRASJpQbrvBL5Rg5Sgc4loC4Uzb_csKA2bctrF6d20_ORRnGJHwYVDfQqemDRtqnHEI80WaiSSnNFuzvptXcrW-wc9dI0Qv-8kTa8z_9b8ql6wJxRwY6tNRIg1WWaQTV6dsTFzMnMjsU/s1600/aix5.2wpar.gif"><img id="BLOGGER_PHOTO_ID_5508177751888969922" style="FLOAT: right; MARGIN: 0px 0px 10px 10px; WIDTH: 320px; CURSOR: hand; HEIGHT: 213px" alt="" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgRRASJpQbrvBL5Rg5Sgc4loC4Uzb_csKA2bctrF6d20_ORRnGJHwYVDfQqemDRtqnHEI80WaiSSnNFuzvptXcrW-wc9dI0Qv-8kTa8z_9b8ql6wJxRwY6tNRIg1WWaQTV6dsTFzMnMjsU/s320/aix5.2wpar.gif" border="0" /></a><br /><br />The following figure provides a visualization of how these AIX 5.2 systems will fit into an AIX 7 WPAR. The WPARs in blue are native AIX 7 WPARs, while the WPARs in orange are AIX 5.2 WPARs running in the same AIX 7 instance. Pretty amazing really!<br /></u></strong></p><br /><p><strong><u>Cluster Aware AIX<br /></u></strong><br />Another very interesting feature of AIX 7 is a new technology known as “Cluster Aware AIX”. Believe it or not, administrators will now be able to create a cluster of AIX systems using features of the new AIX 7 kernel. IBM have introduced this “in-built” clustering to the AIX OS in order to simplify the configuration and management of highly available clusters. 
This new AIX clustering has been designed to allow for:<br /><br />- The easy creation of clusters of AIX instances for scale-out computing or high availability.<br />- Significantly simplified cluster configuration, construction, and maintenance.<br />- Improved availability through reduced time to discover failures.<br />- Capabilities such as common device naming to help simplify administration.<br />- Built-in event management and monitoring.<br />- A foundation for future AIX capabilities and the next generation of PowerHA SystemMirror.<br /><br />This does not replace PowerHA but it does change the way in which AIX traditionally integrates with cluster software like HACMP and PowerHA. A lot of the HA cluster functionality is now available in the AIX 7 kernel itself. However, the mature RSCT technology is still a component of the AIX and PowerHA configuration. I’m looking forward to reading more about this new technology and its capabilities.<br /><br />These are just two of the many features introduced in AIX 7. I’m eagerly looking forward to what these features and others mean for the future of the AIX operating system. It’s exciting to watch this operating system grow and strengthen over time. I can’t wait to get my hands on an AIX 7 system so that I can trial these new features.<br /><br />And speaking of trialing AIX 7, there is good news. IBM plan on running another AIX Open Beta program for AIX 7 in mid-2010. Just as they did with <a href="http://gibsonnet.net/aix/wpars.html">AIX Version 6</a>, customers will be given the opportunity to download a beta version of AIX 7 and trial it on their own systems in their own environment. This is very exciting and I’m really looking forward to it.</p><p>=================================================================</p><p><br />Clustering infrastructure<br />AIX 7 (which some are calling Cluster Aware AIX) will be the first AIX release that will provide for built-in clustering. 
This promises to simplify high-availability application management with PowerHA SystemMirror.<br />It should be noted: this innovation isn’t being targeted as a replacement for PowerHA, but it is supposed to change the way in which AIX integrates with it. Much of the PowerHA cluster functionality will now be available in the actual kernel. It’s simply designed to more easily construct and manage clusters for scale-out and high-availability applications.<br />Furthermore, AIX 7 will have features that will help reduce the time to discover failures, along with common device naming, to help systems administrators simplify cluster administration. It will also provide for event management and monitoring.<br />I am excited about this tighter integration between PowerHA and AIX, because anything that provides greater transparency between high-availability software and the OS further eases the burden of system administrators who architect, install and configure high-availability software.<br />Vertical Scalability<br />AIX 7 will allow you to scale up to 1,024 threads or 256 cores in a single partition. This is simply outstanding; no other UNIX OS comes close to this.<br />Profile-Based Configuration Management<br />IBM Systems Director enhancements will simplify AIX systems-configuration management. IBM is calling this facility profile-based configuration management.<br />At a high level it’ll provide simplified discovery, application, update and AIX configuration-verification properties across multiple systems. It’ll be particularly helpful in terms of cloning out changes to ‘pools’ of systems. After populating a profile into a file (XML), it can then be deployed to the other servers in the pool (see <a href="http://www.ibmsystemsmag.com/aix/enewsletterexclusive/33174p1.aspx" target="_blank">Figure 1</a>).<br />AIX 5.2 and WPARs<br />AIX 7 will now provide the capability to run AIX 5.2 inside of a Workload Partition (WPAR). 
This will allow for further IT consolidation and flexible deployment opportunities (such as moving up to the POWER7 architecture) for folks who are still on older AIX OSs. It also makes it easy to back up an existing environment and restore it inside an AIX 7 WPAR. Furthermore, it will allow you to do this through IBM Systems Director’s Workload Partitions Manager.<br />I’m particularly impressed with this feature. Most companies look to discontinue support for their older operating systems as soon as they can. On the other hand, IBM continues to listen to their customers and provide additional features to folks on older versions of their systems. For example, AIX 7 will also support older hardware, including POWER4 processor-based servers. While this type of compatibility is critical to those who want to take advantage of the feature/functionality improvements of AIX but can’t afford to upgrade their hardware, it should also be noted that AIX 7 will include exploitation features that take full advantage of POWER7 processor-based servers. 
Additionally, AIX 7 will have full binary compatibility for application programs developed on prior versions of AIX, as long as these programs comply with reasonable programming standards.</p>Santosh Guptahttp://www.blogger.com/profile/15304316861414673823noreply@blogger.com0tag:blogger.com,1999:blog-4888604520727443281.post-86357667435217927122010-08-22T15:48:00.001+05:302010-08-22T15:48:50.163+05:30AIX7 WPAR support<strong><u>AIX7 WPAR support</u></strong><br />Besides adding AIX 5.2 support to WPARs (workload partitions), AIX 7 is also adding more virtual device support and security to the WPAR virtualization engine.<br />AIX WPAR support will add Fibre Channel support for exporting a virtual (NPIV) or physical Fibre Channel adapter. Fibre Channel tape systems using the "atape" driver are also<br />supported inside the WPAR in this configuration.<br />With the next releases of AIX, VIO SCSI disks are now supported in a WPAR in the same manner as Fibre Channel disks. This feature is available on both AIX V7.1 and AIX V6.1 with the 6100-06 Technology Level.<br />Trusted Kernel Extension Loading and Config from WPAR (AIX 7.1 Only)<br />AIX V7.1 provides the capability for a global administrator to export specific kernel extensions that a WPAR administrator can then load and configure from inside the WPAR.Santosh Guptahttp://www.blogger.com/profile/15304316861414673823noreply@blogger.com0tag:blogger.com,1999:blog-4888604520727443281.post-37979220710664914512010-08-22T15:44:00.000+05:302010-08-22T15:46:06.550+05:30Workload Management in AIX: WLM, DLPAR and now WPAR<strong><u>Workload Manager - WLM</u></strong><br />Over the years several methods of Workload Management have been developed as a means to enhance resource utilization of systems. Some might say that Workload Management is a form of Performance Management – but that is only true in that Performance Management is actually Resource Management. 
In this sense, Workload Management is the collection of services and resource management applications that are used to monitor and regulate the resources a workload is permitted at any particular time.<br /> Legacy UNIX systems had a very simple model of workload resource management. This was also known as sizing the box. Generally, the workload was the collection of all applications or processes running on the box. Various tools could be used – either system or application tools – to tune the application(s) to best fit the box. In other words, the amount of resources available to the workload was constant – whatever the box had.<br />With the release of AIX 4.3.3 in 1999, AIX included a new system component – AIX Workload Manager (WLM). This component made it possible to define collections of applications and processes into a workload, or workload class. A workload class was given resource entitlement (CPU, memory and, starting with AIX 5.1, local I/O) in terms of shares. If there were four (4) classes active, and a resource was saturated (being used 100%), then AIX would compute a resource entitlement percentage based on the total active shares. If the four (4) classes had, respectively, 15, 30, 45, and 60 shares, they would be entitled to 10%, 20%, 30% and 40% of the resource concerned, respectively. As long as a resource was not constrained (less than 100% usage) WLM, by default, would not restrict a class's resource entitlement.<br />The primary advantages of WLM are that it is included in the AIX base at no extra charge and is a software solution requiring no special hardware. However, performance specialists seem to have found it difficult to think in terms of performance management on a system which is regularly going to need more resources than it has. 
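The share-based entitlement arithmetic described above (shares divided by total active shares, applied only under saturation) can be checked with a quick POSIX shell sketch; this is plain arithmetic on the example numbers, no AIX required:

```shell
# Recompute the WLM entitlement example: four active classes holding
# 15, 30, 45 and 60 shares of a saturated resource.
total=0
for s in 15 30 45 60; do
    total=$((total + s))        # total active shares -> 150
done
for s in 15 30 45 60; do
    # entitlement = 100 * class shares / total active shares
    echo "class with $s shares -> $((100 * s / total))% entitlement"
done
```

The loop prints 10%, 20%, 30% and 40%, matching the figures quoted in the text.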
In other words, the legacy UNIX workload management model dominates most system administrators' view of resource management.<br />Firmware Partitioning as Resource Management<br />Parallel with the development of WLM, a software solution for workload resource monitoring and control, came the practice of dividing a single system into several separate system definitions, commonly referred to as partitions. Virtualization had arrived in UNIX. Unlike WLM, partitioning required specific hardware features. For AIX, partitioning was introduced with the p690 POWER4 system.<br />Partitioning is a technique used to define multiple systems from a single system. A partition is allocated a specific amount of resources that it can use as it desires. Individual partitions' resources are isolated via firmware (Logical Partitions, or LPAR) or by the hardware component assembly (Physical Partition, or PPAR).<br />Initially, resource assignment was static. To change the resource allocation, a halt and a (re)activation of the partition was required. Starting with AIX 5.2 the acronym DLPAR (dynamic LPAR) was introduced. This enhancement enables dynamic resource allocation to a partition, that is, a partition can have its allocation of resources increased or decreased without a system (i.e. partition) halt and reactivation. With POWER5 the resource virtualization continued with the introduction of the firmware hypervisor, micropartitions, virtual Ethernet and virtual SCSI.<br />The advantages of partitioning are the flexibility in allocation of resources and the isolation guaranteed by the hypervisor firmware. However, partitioning requires specific hardware. Also, an administrator needs extra training to create and manage partition resources.<br /><strong><u>AIX 6.1 introduces Workload Partitions</u></strong><br />A workload partition is a virtual system environment created using software tools. A workload partition is hosted by an AIX software environment. 
Applications and users working within the workload partition see the workload partition as if it were a regular system. Although to a lesser degree than in a firmware-created and -managed partition, workload partition processes, signals and even file systems are isolated from the hosting environment as well as from other workload partitions. Additionally, workload partitions can have their own users, groups and dedicated network addresses. Interprocess communication is limited to processes running within the workload partition.<br />AIX supports two kinds of Workload Partitions (WPARs).<br />A System WPAR is an environment that can be best compared to a stand-alone system. This WPAR runs its own services and does not share writeable file systems with another WPAR or the AIX hosting (global) system.<br />An Application WPAR has all the process isolation a System WPAR has. The defining characteristic is that it shares file system name space with the global system and applications defined within the application WPAR environment.<br />Both types of WPARs can be configured for mobility to allow running instances of the WPAR to be moved between physical systems or LPARs using the AIX Workload Partition Manager LPP.<br /><strong><u>Summary<br /></u></strong>With the addition of WPARs (workload partitions) to AIX, workload management has an intermediate level of flexibility and isolation of applications, users and data. Using WLM, all processes share the same environment, with only CPU, memory and I/O resource allocation being managed when a resource is saturated. Firmware-based virtualization of partitions, starting with POWER4 hardware, provides both hard resource allocation levels as well as complete isolation of services, network addresses, devices, etc. from all other partitions. 
Workload partitions, or WPAR, are a software-based virtualization of partitions supporting a high degree of isolation and enhanced mobility over supporting global systems.Santosh Guptahttp://www.blogger.com/profile/15304316861414673823noreply@blogger.com0tag:blogger.com,1999:blog-4888604520727443281.post-18891615218194290452010-08-07T14:07:00.001+05:302010-08-07T14:07:44.572+05:30Powerpath CLI CommandsPowerpath CLI Commands<br />powermt - Manages a PowerPath environment<br />powercf - Configures PowerPath devices<br />emcpreg -install - Manages PowerPath license registration<br />emcpminor - Checks for free minor numbers<br />emcpupgrade - Converts PowerPath configuration files<br /><a id="powermt" name="powermt"></a>powermt command<br />powermt check - Checks for, and optionally removes, dead paths.<br />powermt check_registration - Checks the state of the PowerPath license.<br />powermt config - Configures logical devices as PowerPath devices.<br />powermt display / powermt watch - Displays the state of HBAs configured for PowerPath (powermt watch is deprecated).<br />powermt display options - Displays the periodic autorestore setting.<br />powermt load - Loads a PowerPath configuration.<br />powermt remove - Removes a path from the PowerPath configuration.<br />powermt restore - Tests and restores paths.<br />powermt save - Saves a custom PowerPath configuration.<br />powermt set mode - Sets paths to active or standby mode.<br />powermt set periodic_autorestore - Enables or disables periodic autorestore.<br />powermt set policy - Changes the load balancing and failover policy.<br />powermt set priority - Sets the I/O priority.<br />powermt version - Returns the PowerPath version for which powermt was created.<br /><a id="powermt_ex" name="powermt_ex"></a>powermt command examples<br />powermt display: # powermt display 
paths class=all<br /># powermt display ports dev=all<br /># powermt display dev=all<br />powermt set: To disable an HBA from passing I/O # powermt set mode=standby adapter=<adapter#><br />To enable an HBA to pass I/O # powermt set mode=active adapter=<adapter#><br />To set or validate the load-balancing policy<br />To see the current load-balancing policy and I/Os, run the following command<br /># powermt display dev=<device><br />so = Symmetrix Optimization (default)<br />co = CLARiiON Optimization<br />li = Least I/Os (queued)<br />lb = Least Blocks (queued)<br />rr = Round Robin (one path after another)<br />re = Request (failover only)<br />nr = No Redirect (no load-balancing or failover)<br />To set no load balancing # powermt set policy=nr dev=<device><br />To set the policy to default Symmetrix Optimization # powermt set policy=so dev=<device><br />To set the policy to default CLARiiON Optimization # powermt set policy=co dev=<device><br />pprootdev<br />To bring the rootvg devices under PowerPath control # pprootdev on<br />To bring the rootvg disks back to hdisk control # pprootdev off<br />To temporarily bring the rootvg disks to hdisk control for running "bosboot" # pprootdev fix<br /><a id="powermt_output" name="powermt_output"></a>powermt command examples with output<br />To validate the installation # powermt check_registration<br />Key B3P3-HB43-CFMR-Q2A6-MX9V-O9P3<br />Product: PowerPath<br />Capabilities: Symmetrix CLARiiON<br />To display each device's path, state, policy and average I/O information<br /># powermt display dev=emcpower6a<br />Pseudo name=emcpower6a<br />Symmetrix ID=000184503070<br />Logical device ID=0021<br />state=alive; policy=SymmOpt; priority=0; queued-IOs=0<br /><br />---------------- Host --------------- - Stor - -- I/O Path - -- Stats ---<br />### HW Path I/O Paths Interf. 
Mode State Q-IOs Errors<br /><br />0 sbus@2,0/fcaw@2,0 c4t25d225s0 FA 13bA active dead 0 1<br />1 sbus@6,0/fcaw@1,0 c5t26d225s0 FA 4bA active alive 0 0<br />To show the paths and dead paths to the storage port<br /># powermt display paths<br />Symmetrix logical device count=20<br /><br />----- Host Bus Adapters --------- ------ Storage System ----- - I/O Paths -<br />### HW Path ID Interface Total Dead<br /><br />0 sbus@2,0/fcaw@2,0 000184503070 FA 13bA 20 20<br />1 sbus@6,0/fcaw@1,0 000184503070 FA 4bA 20 0<br />CLARiiON logical device count=0<br /><br />----- Host Bus Adapters --------- ------ Storage System ----- - I/O Paths -<br />### HW Path ID Interface Total Dead<br /><br />To display the storage ports information<br /># powermt display ports<br />Storage class = Symmetrix<br /><br />----------- Storage System --------------- -- I/O Paths -- --- Stats ---<br />ID Interface Wt_Q Total Dead Q-IOs Errors<br /><br />000184503070 FA 13bA 256 20 20 0 20<br />000184503070 FA 4bA 256 20 0 0 0<br />Storage class = CLARiiON<br /><br />----------- Storage System --------------- -- I/O Paths -- --- Stats ---<br />ID Interface Wt_Q Total Dead Q-IOs ErrorsSantosh Guptahttp://www.blogger.com/profile/15304316861414673823noreply@blogger.com2tag:blogger.com,1999:blog-4888604520727443281.post-67431852148715215532010-08-07T14:04:00.002+05:302010-08-07T14:07:05.976+05:30Moving an LPAR to another frame<strong>Moving an LPAR to another frame</strong><br /><br />Steps for migrating an LPAR from one IBM frame to another<br /><br />1. Have Storage zone the LPAR's disks to the new HBA(s). Also have them add an additional 40GB drive for the new boot disk. By doing this we have a back-out path to the old boot disk on the old frame.<br /><br />2. Collect data from the current LPAR:<br /><br />a. Network information – Write down the IP and ipv4 alias(es) for each interface<br /><br />b. Run “oslevel –r” - you will need this when setting up NIM for the mksysb recovery<br /><br />c. 
Is the LPAR running AIO? If so, it will need to be configured after the mksysb recovery<br /><br />d. Run “lspv” and save the output; it contains volume group and PVID information<br /><br />e. Any other customizations you deem necessary<br /><br /><br />3. Create a mksysb backup of this LPAR<br /><br />4. Reconfigure the NIM machine for this LPAR, with the new Ethernet MAC address. The foolproof method is to remove the machine and re-create it.<br /><br />5. In NIM, configure the LPAR for a mksysb recovery. Select the appropriate SPOT and LPP Source, based on the “oslevel –r” data collected in step 2.<br /><br />6. Shut down the LPAR on the old frame (Halt the LPAR)<br /><br />7. Move network cables, fibre cables, disk and zoning<br /><br />8. If needed, move them to the LPAR on the new frame<br /><br />9. On the HMC, bring up the LPAR on the new frame in SMS mode and select a network boot. Verify the SMS profile has only a single HBA (if CLARiiON attached, zoned to a single SP), otherwise the recovery will fail with a 554.<br /><br />10. Follow the prompts for building a new OS. Select the new 40GB drive for the boot disk (use the lspv info collected in Step 2 to identify the correct 40GB drive). Leave the remaining questions at their defaults of NO (shrink file systems, recover devices, and import volume groups).<br /><br />11. After the LPAR has booted, from the console (the network interface may be down):<br /><br />a. lspv Note the hdisk# of the boot disk<br /><br />b. bootlist –m normal –o Verify the boot list is set – if not, set it<br /><br />bootlist –m normal –o hdisk#<br /><br />c. ifconfig en0 down If the interface got configured, down it<br /><br />d. ifconfig en0 detach and remove it<br /><br /><br />e. lsdev –Cc adapter Note Ethernet interfaces (ex. ent0, ent1)<br /><br />f. rmdev –dl <en#> Remove all en devices<br /><br />g. rmdev –dl <ent#> Remove all ent devices<br /><br />h. cfgmgr Will rediscover the en/ent devices<br /><br />i. 
chdev –l <ent#> -a media_speed=100_Full_Duplex Set on each interface unless running GIG; in that case leave the defaults<br /><br /><br />j. Configure the network interfaces and aliases Use the info recorded in step 2 mktcpip –h <hostname> -a <ip> -m <netmask> -i <en#> -g <gateway> -A no –t N/A –s<br /><br />chdev –l en# -a alias4=<alias>,<netmask><br /><br /><br />k. Verify that the network is working.<br /><br /><br />12. If the LPAR was running AIO (data collected in Step 2), verify it is running (smitty aio)<br /><br />13. Check for any other customizations which may have been made on this LPAR<br /><br />14. Vary on the volume groups; use the “lspv” data collected in Step 2 to identify by PVID an hdisk in each volume group. Run for each volume group:<br /><br />a. importvg –y <vgname> hdisk# Will vary on all hdisks in the volume group<br /><br />b. varyonvg <vgname><br /><br />c. mount all Verify the mounts are good<br /><br />15. Verify paging space is configured appropriately<br /><br />a. lsps –a Look for Active and Auto set to yes<br /><br />b. chps –ay pagingXX Run for each paging space, sets Auto<br /><br />c. swapon /dev/pagingxx Run for each paging space, sets Active<br /><br /><br />16. Verify the LPAR is running 64-bit<br /><br />a. bootinfo –K If 64, you are good<br /><br /><br />b. ln –sf /usr/lib/boot/unix_64 /unix If 32, change to run 64-bit<br /><br />c. ln –sf /usr/lib/boot/unix_64 /usr/lib/boot/unix<br /><br />d. bosboot –ak /usr/lib/boot/unix_64<br /><br /><br />17. If the LPAR has PowerPath<br /><br />a. Run “powermt config” Creates the powerpath0 device<br /><br />b. Run “pprootdev on” Sets PowerPath control of the boot disk<br /><br />c. If CLARiiON, make configuration changes to enable SP failover<br /><br /><br />chdev -l powerpath0 -Pa QueueDepthAdj=1<br /><br />chdev –l fcsX –Pa num_cmd_elems=2048 For each fibre adapter<br /><br />chdev –l fscsiX –Pa fc_err_recov=fast_fail For each fibre adapter<br /><br />d. Halt the LPAR<br /><br />e. 
Activate the Normal profile If Sym/DMX – verify two HBAs in the profile<br /><br />f. If CLARiiON attached, have Storage add a zone to the 2nd SP<br /><br />i. Run cfgmgr Configure the 2nd set of disks<br /><br /><br />g. Run “pprootdev fix” Put the rootdisk PVIDs back on hdisk<br /><br />h. lspv | grep rootvg Get the boot disk hdisk#<br /><br />i. bootlist –m normal –o hdisk# hdisk# Set the boot list with both hdisks<br /><br /><br />20. From the HMC, remove the LPAR profile from the old frame<br /><br />21. Pull cables from the old LPAR (Ethernet and fibre), deactivate patch panel ports<br /><br />22. Update documentation, Server Master, AIX Hardware spreadsheet, Patch Panel spreadsheet<br /><br />23. Return the old boot disk to storage.Santosh Guptahttp://www.blogger.com/profile/15304316861414673823noreply@blogger.com0tag:blogger.com,1999:blog-4888604520727443281.post-45572923081523850272010-08-07T14:02:00.001+05:302010-08-07T14:02:30.269+05:30Unique VLAN ID for SEA failover control channel setup<strong><u>Unique VLAN ID for SEA failover control channel setup</u></strong><br /><br />Always select a unique VLAN ID – one that doesn't exist anywhere on your organization's network – to avoid conflicts when setting up dual VIOS with a control channel for SEA failover. Failure to follow this may result in a network storm. (Very important, and I couldn't find any note on the IBM site about it.)<br /><br /><br />Requirements for Configuring SEA Failover<br /><br />One SEA on one VIOS acts as the primary (active) adapter and the second SEA on the second VIOS acts as the backup (standby) adapter.<br />Each SEA must have at least one virtual Ethernet adapter with the “Access external network” flag (previously known as the “trunk” flag) checked. 
This enables the SEA to provide bridging functionality between the two VIO servers.<br />This adapter on both SEAs has the same PVID, but will have a different priority value.<br />A SEA in ha_mode (failover mode) might have more than one trunk adapter, in which case all should have the same priority value.<br />The priority value defines which of the two SEAs will be the primary and which will be the backup. The lower the priority value, the higher the priority, e.g. an adapter with priority 1 will have the highest priority.<br />An additional virtual Ethernet adapter, which belongs to a unique VLAN on the system, is used to create the control channel between the SEAs, and must be specified in each SEA when configured in ha_mode.<br />The purpose of this control channel is to communicate between the two SEA adapters to determine when a failover should take place.Santosh Guptahttp://www.blogger.com/profile/15304316861414673823noreply@blogger.com2tag:blogger.com,1999:blog-4888604520727443281.post-58384383762213484952010-08-07T14:01:00.001+05:302010-08-07T14:01:29.907+05:30Upgrading PowerPath in a dual VIO server environment<strong>Upgrading PowerPath in a dual VIO server environment<br /></strong><br />When upgrading PowerPath in a dual Virtual I/O (VIO) server environment, the devices need to be unmapped in order to maintain the existing mapping information.<br /><br />To upgrade PowerPath in a dual VIO server environment:<br />1. On one of the VIO servers, run lsmap -all.<br />This command displays the mapping between physical, logical,<br />and virtual devices.<br /><br />$ lsmap -all<br />SVSA Physloc Client Partition ID<br />————— ————————————– ——————–<br />vhost1 U8203.E4A.10B9141-V1-C30 0x00000000<br />VTD vtscsi1<br />Status Available<br />LUN 0x8100000000000000<br />Backing device hdiskpower5<br />Physloc U789C.001.DQD0564-P1-C2-T1-L67<br /><br />2. Log in on the same VIO server as the padmin user.<br /><br />3. 
Unconfigure the PowerPath pseudo devices listed in step 1 by<br />running:<br />rmdev -dev <vtd> -ucfg<br />where <vtd> is the virtual target device.<br />For example, rmdev -dev vtscsi1 -ucfg<br />The VTD status changes to Defined.<br />Note: Run rmdev -dev <vtd> -ucfg for all VTDs displayed in step 1.<br /><br />4. Upgrade PowerPath<br /><br />=======================================================================<br /><br />1. Close all applications that use PowerPath devices, and vary off all<br />volume groups except the root volume group (rootvg).<br /><br />In a CLARiiON environment, if the Navisphere Host Agent is<br />running, type:<br />/etc/rc.agent stop<br /><br />2. Optional. Run powermt save in PowerPath 4.x to save the<br />changes made in the configuration file.<br /><br />Run powermt config.<br />5. Optional. Run powermt load to load the previously saved<br />configuration file.<br />When upgrading from PowerPath 4.x to PowerPath 5.3, an error<br />message is displayed after running powermt load, due to<br />differences in the PowerPath architecture. This is an expected<br />result and the error message can be ignored.<br />Even if the command succeeds in updating the saved<br />configuration, the following error message is displayed by<br />running powermt load:<br />host1a 5300-08-01-0819:/ #powermt load Error loading auto-restore value<br />Warning: Error occurred loading saved driver state from file /etc/powermt.custom<br />…<br />Loading continues…<br />Error loading auto-restore value<br />When you upgrade from an unlicensed to a licensed version of<br />PowerPath, the load balancing and failover device policy is set to<br />bf/nr (BasicFailover/NoRedirect). You can change the policy by
You can change the policy by<br />using the powermt set policy command.<br /><br />=======================================================================<br /><br />5. Run powermt config.<br /><br />6. Log in as the padmin user and then configure the VTD<br />unconfigured from step 3 by running:<br />cfgdev -dev <vtd><br />Where <vtd>is the virtual target device.<br />For example, cfgdev -dev vtscsil<br />The VTD status changes to Available.<br />Note: Run cfgdev -dev <vtd>for all VTDs unconfigured in step 3<br /><br />7. Run lspath -h on all clients to verify all paths are Available.<br /><br />8. Perform steps 1 through 7 on the second VIO server.Santosh Guptahttp://www.blogger.com/profile/15304316861414673823noreply@blogger.com2tag:blogger.com,1999:blog-4888604520727443281.post-3093660605941196092010-08-07T13:52:00.000+05:302010-08-07T13:53:32.620+05:30Recovering emc dead path<p># powermt display dev=allAnd you notice that there are "dead" paths, then these are the commands to run in order to set these paths back to "alive" again, of course, AFTER ensuring that any SAN related issues are resolved. To have PowerPath scan all devices and mark any dead devices as alive, if it finds that a device is in fact capable of doing I/O commands, run:<br /># powermt restoreTo delete any dead paths, and to reconfigure them again:<br /># powermt reset# powermt configOr you could run:<br /># powermt check </p><p> </p>Santosh Guptahttp://www.blogger.com/profile/15304316861414673823noreply@blogger.com0