Saturday, May 3, 2008

Enabling the Advanced POWER Virtualization Feature

Before we could use virtual I/O, we had to determine whether the machine was enabled to use the feature. To do this, we right-clicked the name of the target server in the HMC's 'Server and Partition' view and examined that server's properties. Figure 4 shows that it did not have the feature enabled.



Users can enable this feature by obtaining a key code from their IBM sales representative, using information that the HMC gathers about the machine when the user navigates to Show Code Information in the HMC. Figure 5 shows how to navigate there, as well as how to reach the HMC dialog box used to enter the activation code that renders the system VIO-capable. We obtained an activation code and entered it in the dialog box shown in the figure.





VIO server setup example

Virtual I/O Example
A user who currently runs applications on a POWER4 system may want to upgrade to a POWER5 system running AIX 5.3 in order to take advantage of virtual I/O. If so, do these three things:
– Create a Virtual I/O Server.
– Add virtual LANs.
– Define virtual SCSI devices.
In our example, we had an IBM eServer p5 550 Express with four CPUs that was running one AIX 5.3 database server LPAR, and we needed to create a second application server LPAR that uses a virtual SCSI disk as its boot disk. We wanted to share one Ethernet adapter between the database and application server LPARs and use this shared adapter to access an external network. Finally, we needed a private network between the two LPARs and we decided to implement it using virtual Ethernet devices (see Figure 3). We followed these steps to set up our system:
1. Enabled the Advanced POWER Virtualization feature.
2. Installed the Virtual I/O Server.
3. Created the public Ethernet VLAN.
4. Defined the virtual SCSI devices.
5. Created the private Ethernet VLAN.

Virtual I/O Server installation overview

The Virtual I/O Server

The Virtual I/O Server is a dedicated partition that runs a special operating system called IOS. This type of partition has physical resources assigned to it in its HMC profile. The administrator issues IOS commands on the server partition to create virtual resources, which present virtual LANs, virtual SCSI adapters, and virtual disk drives to client partitions. The client partitions' operating systems recognize these resources as physical devices. The Virtual I/O Server manages the interaction between each client LPAR and the physical device backing the virtualized service.

Once the administrator logs in to the Virtual I/O Server as the user padmin, he or she has access to a restricted Korn shell session. The administrator uses IOS commands to create, change, and remove physical and virtual devices, as well as to configure and manage the VIO server itself. Executing the help command on the VIO server command line lists the commands available in padmin's restricted Korn shell session.

Virtual I/O Server installation


• VIO Server code is packaged and shipped as an AIX mksysb image on a VIO DVD
• Installation methods:
– DVD install
– HMC install – open an rshterm and type "installios", then follow the prompts
– Network Installation Manager (NIM)
• VIO Server can support multiple client types:
– AIX 5.3
– SUSE Linux Enterprise Server 9 or 10 for POWER
– Red Hat Enterprise Linux AS for POWER Versions 3 and 4
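
As a sketch of the HMC install path above, a typical installios session from an HMC terminal looks roughly like this. The managed system name, partition name, and network addresses below are illustrative assumptions, not values from this article:

```shell
# On the HMC: open an rshterm, then run installios and answer the prompts.
hscroot@hmc:~> installios
# The wizard asks questions along these lines:
#   Managed system:                      p550-Server    (assumed name)
#   VIOS partition:                      vios1          (assumed name)
#   Profile:                             default
#   Source of installation images:       /dev/cdrom     (the VIO DVD)
#   Client (VIOS) IP address:            192.168.1.10   (assumed)
#   Subnet mask, gateway, speed/duplex:  ...
# installios then network-boots the partition and installs the
# mksysb image from the VIO media.
```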


Virtual I/O Server Administration
• The VIO server uses a command-line interface running in a restricted shell – no smitty or GUI
• There is no root login on the VIO Server
• A special user – padmin – executes VIO server commands
• At first login after installation, the padmin user is prompted to change the password
• After that, padmin runs the command "license -accept"
• Slightly modified commands are used for managing devices, networks, code installation and maintenance, etc.
• The padmin user can start a root AIX shell for setting up third-party devices using the command "oem_setup_env"
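
A first-login session following the steps above might look like this (the prompts are paraphrased; the commands themselves are standard IOS commands):

```shell
# First login as padmin: the server forces a password change.
login: padmin
padmin's New password:        # prompted at first login after install

# Accept the license before other IOS commands will run:
$ license -accept

# If needed, drop to a root AIX shell for third-party device setup:
$ oem_setup_env
# ... exit returns to padmin's restricted Korn shell
```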

You can list all available commands by executing help as the padmin user:

$ help
Install Commands:
  updateios  lssw  ioslevel  remote_management
  oem_setup_env  oem_platform_level  license
LAN Commands:
  mktcpip  hostname  cfglnagg  netstat  entstat
  cfgnamesrv  traceroute  ping  optimizenet  lsnetsvc
Device Commands:
  mkvdev  lsdev  lsmap  chdev  rmdev
  cfgdev  mkpath  chpath  lspath  rmpath
Physical Volume Commands:
  lspv  migratepv
Logical Volume Commands:
  lslv  mklv  extendlv  rmlv  mklvcopy  rmlvcopy
Volume Group Commands:
  lsvg  mkvg  chvg  extendvg  reducevg  mirrorios
  unmirrorios  activatevg  deactivatevg  importvg
  exportvg  syncvg
Security Commands:
  lsgcl  cleargcl  lsfailedlogin
UserID Commands:
  mkuser  rmuser  lsuser  passwd  chuser
Maintenance Commands:
  chlang  diagmenu  shutdown  fsck  backupios
  savevgstruct  restorevgstruct  starttrace  stoptrace
  cattracerpt  bootlist  snap  startsysdump  topas
  mount  unmount  showmount  startnetsvc  stopnetsvc
  errlog
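
A few of the listed commands in action – for example, checking the software level and inspecting virtual devices and their mappings (the vhost0 adapter name is an assumption for illustration):

```shell
$ ioslevel                 # report the Virtual I/O Server software level
$ lsdev -virtual           # list virtual devices (vhost adapters, SEA, ...)
$ lsmap -vadapter vhost0   # show backing devices mapped to vhost0
$ lsmap -all               # show every virtual SCSI mapping on this server
```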

Virtual I/O Server Overview

What is Advanced POWER Virtualization (APV)
• APV – the hardware feature code for POWER5 servers that enables:
– Micro-Partitioning – fractional CPU entitlements from a shared pool of processors, beginning at one-tenth of a CPU
– Partition Load Manager (PLM) – a policy-based, dynamic CPU and memory reallocation tool
– Virtual SCSI – physical disks can be shared as virtual disks to client partitions
– Shared Ethernet Adapter (SEA) – a physical adapter or EtherChannel in a VIO Server can be shared by client partitions; clients use virtual Ethernet adapters
• Virtual Ethernet – an LPAR-to-LPAR virtual LAN within a POWER5 server
– Does not require the APV feature code


Why Virtual I/O Server?
• POWER5 systems will support more partitions than available physical I/O slots
– Each partition still requires a boot disk and network connection, but now these can be virtual instead of physical
• The VIO Server allows partitions to share disk and network adapter resources
– The Fibre Channel or SCSI controllers in the VIO Server can be accessed using virtual SCSI controllers in the clients
– A Shared Ethernet Adapter in the VIO Server can act as a layer-2 bridge for virtual Ethernet adapters in the clients
• The VIO Server further enables on-demand computing and server consolidation


• Virtualizing I/O saves:
– Gigabit Ethernet adapters
– 2 Gbit Fibre Channel adapters
– PCI slots
– Eventually, I/O drawers
– Server frames?
– Floor space?
– Electricity, HVAC?
– Ethernet switch ports
– Fibre Channel switch ports
– The logistics, scheduling, and delays of physical Ethernet and SAN attachment
• Some servers run at 90% utilization all the time – everyone knows which ones
• Average utilization in the UNIX server farm is closer to 25%; those servers don't all maximize their use of dedicated I/O devices
• VIO is a departure from the "new project, new chassis" mindset


Virtual I/O Server Characteristics

• Requires AIX 5.3 and POWER5 hardware with the APV feature
• Installed as a special-purpose, AIX-based logical partition
• Uses a subset of the AIX Logical Volume Manager and attaches to traditional storage subsystems
• Inter-partition communication (client-server model) is provided via the POWER Hypervisor
• Clients "see" virtual disks as traditional AIX SCSI hdisks, although each may be backed by a physical disk or a logical volume on the VIO Server
• One physical disk on a VIO Server can provide logical volumes for several client partitions
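
The last two points – carving one physical disk into logical volumes for several clients – can be sketched with standard IOS commands. The volume group, logical volume, disk, and vhost names below are illustrative assumptions:

```shell
# Put a physical disk into a volume group reserved for client disks:
$ mkvg -f -vg clients_vg hdisk2

# Carve out a logical volume to serve as one client's boot disk:
$ mklv -lv client1_lv clients_vg 20G

# Map the logical volume to the virtual SCSI server adapter (vhost0)
# that the HMC profile pairs with the client partition:
$ mkvdev -vdev client1_lv -vadapter vhost0 -dev vclient1

# The client's operating system then sees an ordinary SCSI hdisk.
```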


Virtual Ethernet
• Virtual Ethernet
– Enables inter-LPAR communication without a physical adapter
– IEEE-compliant Ethernet programming model
– Implemented through inter-partition, in-memory communication
• A VLAN splits groups of network users on a physical network into segments of logical networks
• The virtual switch supports multiple (up to 4K) VLANs
– Each partition can connect to multiple networks, through one or more adapters
– The VIO server can add a VLAN ID tag to the Ethernet frame as appropriate; the Ethernet switch restricts frames to ports that are authorized to receive frames with a specific VLAN ID
• A virtual network can connect to a physical network through "routing" partitions – generally not recommended
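
Rather than a routing partition, the usual way to connect the virtual network to a physical one is a Shared Ethernet Adapter on the VIO Server. A minimal sketch, assuming ent0 is the physical adapter, ent2 is the virtual trunk adapter with VLAN ID 1, and the addresses are placeholders:

```shell
# Bridge the physical adapter and the virtual trunk adapter:
$ mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1

# Assuming the command creates ent3, give the VIO Server its own
# IP address on the corresponding en3 interface:
$ mktcpip -hostname vios1 -inetaddr 192.168.1.10 -interface en3 \
    -netmask 255.255.255.0 -gateway 192.168.1.1
```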


Why Multiple VIO Servers?
• A second VIO Server adds extra protection for client LPARs
• Allows two teams to learn VIO setup on a single system
• Having multiple VIO Servers provides:
– Multiple paths to your OS/data virtual disks
– Multiple paths to your network
• Advantages:
– Superior availability compared to other virtual I/O solutions
– Allows VIO Server updates without shutting down client LPARs