What is Advanced POWER Virtualization (APV)?
APV – the hardware feature code for POWER5 servers that enables:
– Micro-Partitioning – fractional CPU entitlements from a shared pool of processors, beginning at one-tenth of a CPU (illustrated below)
– Partition Load Manager (PLM) – a policy-based, dynamic CPU and memory reallocation tool
– Virtual SCSI – physical disks on a VIO Server can be shared as virtual disks with client partitions
– Shared Ethernet Adapter (SEA) – a physical adapter or EtherChannel in a VIO Server can be shared by client partitions, which connect through virtual Ethernet adapters
Virtual Ethernet – an LPAR-to-LPAR virtual LAN within a POWER5 server
– Does not require the APV feature code
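As a rough illustration of those fractional entitlements, the HMC command line can list each partition's processing units. This is only a sketch: the managed system name "p570" is a placeholder, and the exact attribute names can vary by HMC release.

    # List per-partition CPU entitlements on a managed system
    # ("p570" is a placeholder for your managed system's name)
    lshwres -m p570 -r proc --level lpar \
        -F lpar_name,curr_proc_mode,curr_proc_units,curr_procs

A shared-processor partition reports fractional curr_proc_units such as 0.5, while a dedicated-processor partition reports whole processors.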
Why Virtual I/O Server?
POWER5 systems support more partitions than there are physical I/O slots available
– Each partition still requires a boot disk and a network connection, but these can now be virtual instead of physical
The VIO Server allows partitions to share disk and network adapter resources
– The Fibre Channel or SCSI controllers in the VIO Server can be accessed using Virtual SCSI controllers in the clients (sketched below)
– A Shared Ethernet Adapter in the VIO Server can be a layer 2 bridge for virtual Ethernet adapters in the clients
The VIO Server further enables on-demand computing and server consolidation
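As a minimal sketch of the Virtual SCSI mapping mentioned above, run from the VIO Server's padmin shell (the device names hdisk2, vhost0, and vtscsi0 are placeholders):

    # Map a VIO Server-owned physical disk to a client's virtual
    # SCSI adapter; all device names here are placeholders
    mkvdev -vdev hdisk2 -vadapter vhost0 -dev vtscsi0
    # Verify the mapping
    lsmap -vadapter vhost0

The client partition then discovers the disk as an ordinary hdisk after running cfgmgr.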
Virtualizing I/O saves:
– Gbit Ethernet Adapters
– 2 Gbit Fibre Channel Adapters
– PCI slots
– Eventually, I/O drawers
– Server frames?
– Floor space?
– Electricity, HVAC?
– Ethernet switch ports
– Fibre Channel switch ports
– The logistics, scheduling, and delays of physical Ethernet and SAN attachment
Some servers run at 90% utilization all the time – everyone knows which ones.
Average utilization in the UNIX server farm is closer to 25%; those servers don't all maximize their use of dedicated I/O devices.
VIO is a departure from the "new project, new chassis" mindset
Virtual I/O Server Characteristics
Requires AIX 5.3 and POWER5 hardware with the APV feature
Installed as a special-purpose, AIX-based logical partition
Uses a subset of the AIX Logical Volume Manager and attaches to traditional storage subsystems
Inter-partition communication (client-server model) is provided via the POWER Hypervisor
Clients "see" virtual disks as traditional AIX SCSI hdisks, although each may actually be a physical disk or a logical volume on the VIO Server
One physical disk on a VIO Server can provide logical volumes for several client partitions
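For example, carving one disk into boot volumes for two clients might look like this from the padmin shell. This is a sketch under assumed names: the volume group, logical volumes, disk, and vhost adapters are all placeholders.

    # Build a volume group on one physical disk (names are placeholders)
    mkvg -vg clientvg hdisk3
    # Carve out a logical volume per client
    mklv -lv client1_rootlv clientvg 20G
    mklv -lv client2_rootlv clientvg 20G
    # Export each logical volume over its client's virtual SCSI adapter
    mkvdev -vdev client1_rootlv -vadapter vhost0
    mkvdev -vdev client2_rootlv -vadapter vhost1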
Virtual Ethernet
– Enables inter-LPAR communication without a physical adapter
– IEEE-compliant Ethernet programming model
– Implemented through inter-partition, in-memory communication
VLANs split groups of network users on a physical network into segments of logical networks
The virtual switch supports multiple (up to 4,096) VLANs
– Each partition can connect to multiple networks through one or more adapters
– The VIO Server can add a VLAN ID tag to the Ethernet frame as appropriate; the Ethernet switch then restricts frames to ports authorized to receive that VLAN ID (see the SEA sketch below)
A virtual network can connect to a physical network through "routing" partitions – generally not recommended
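A Shared Ethernet Adapter is the usual bridge instead. As a sketch from the padmin shell (ent0 as the physical adapter and ent2 as the trunked virtual adapter are placeholder names, and the default PVID of 1 is an assumption):

    # Bridge physical adapter ent0 to trunk virtual adapter ent2;
    # untagged frames default to VLAN ID 1 (adapter names are placeholders)
    mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1
    # Confirm the bridge
    lsmap -all -net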
Why Multiple VIO Servers?
A second VIO Server adds extra protection for client LPARs
Allows two teams to learn VIO setup on a single system
Having multiple VIO Servers will:
– Provide multiple paths to your OS and data virtual disks (see the client-side sketch below)
– Provide multiple paths to your network
Advantages:
– Availability superior to other virtual I/O solutions
– Allows VIO Server updates without shutting down client LPARs
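On the AIX client, MPIO across the two VIO Servers can be checked with lspath. The sample output below is an assumed illustration for a single disk hdisk0 served through two virtual SCSI adapters.

    # Each virtual disk should show one path through each vscsi adapter
    lspath -l hdisk0
    # Assumed example output:
    #   Enabled hdisk0 vscsi0
    #   Enabled hdisk0 vscsi1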
5 comments:
Hi,
Is it possible to install the VIO Server itself on a SAN disk...
Yes, you can. The SAN has to support it.
How would you go about migrating a dedicated-IO LPAR to a server that employs VIOS?
Hi Santosh,
Nice work, keep it up. Your blog is very useful and has a lot of information in it.
Can you explain a few more concepts of VIO while creating VLPARs?
What is entitled capacity of CPUs?
How do we calculate virtual CPU values in comparison with physical CPUs?
Is there any equation to follow when assigning virtual CPUs?
Regards,
Naresh
Hi Santosh,
thanks in advance. I am glad this blog exists.
Can you explain why I would want to split the SCSI disk bay backplane to support separate boot disks for my dual VIO Servers? Do I want to do this? Why?
thanks!