nim:/# ifconfig en0 largesend

nim:/# ifconfig -a

en0: flags=1e080863,4c0<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),LARGESEND,CHAIN>

        inet 180.24.56.123 netmask 0xffffff00 broadcast 180.24.56.255

        inet 181.24.56.188 netmask 0xffffff00 broadcast 181.24.56.255

         tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1

lo0: flags=e08084b,c0<UP,BROADCAST,LOOPBACK,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,LARGESEND,CHAIN>

        inet 127.0.0.1 netmask 0xff000000 broadcast 127.255.255.255

        inet6 ::1%1/0

         tcp_sendspace 131072 tcp_recvspace 131072 rfc1323 1

nim:/#

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

nim:/# echo "ifconfig en0 largesend" >> /etc/rc.net
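(Note) On newer AIX levels the largesend setting can also be made persistent through the ODM instead of /etc/rc.net; a minimal sketch, assuming an AIX level where the en interface exposes the mtu_bypass attribute (roughly AIX 6.1 TL7 / 7.1 and later; verify for your level):

nim:/# lsattr -El en0 -a mtu_bypass      # confirm the attribute exists at this level
nim:/# chdev -l en0 -a mtu_bypass=on     # largesend stays enabled across reboots
nim:/# ifconfig en0 | grep -i largesend  # verify the LARGESEND flag is set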



PowerVM QuickStart

IBM Power 2012. 7. 3. 18:53

tablespace.net ... it's pretty much awesome... ^^

http://www.tablespace.net/quicksheet/powervm-quickstart.html





PowerVM QuickStart

Version 1.2.1
Date: 3/11/12

This document is minimally compatible with VIOS 1.5. VIOS 2.1 specific items are marked as such.

Overview

Virtualization Components

• (Whole) CPUs, (segments of) memory, and (physical) I/O devices can be assigned to an LPAR (Logical Partition). This is commonly referred to as a dedicated resource partition (or a Power 4 LPAR, as this capability was introduced with the Power 4 based systems).
• A DLPAR is the ability to dynamically add or remove resources from a running partition. All modern LPARs are DLPAR capable (even if the OS running on them is not). For this reason, DLPAR has more to do with the capabilities of the OS running in the LPAR than the LPAR itself. The acronym DLPAR is typically used as a verb (to add/remove a resource) as opposed to define a type of LPAR. Limits can be placed on DLPAR operations, but this is primarily a user imposed limit, not a limitation of the hypervisor.
• Slices of a CPU can be assigned to back a virtual processor in a partition. This is commonly referred to as a micro-partition (or a Power 5 LPAR, as this capability was introduced with the Power 5 based systems). Micro-partitions consume CPU from a shared pool of CPU resources.
• Power 5 and later systems introduced the concept of VIOS (Virtual I/O Server) that effectively allowed physical I/O resources to be shared amongst partitions.
• The IBM System P hypervisor is a Type 1 hypervisor with the option to provide shared I/O via a Type 2-ish software (VIOS) partition. The hypervisor does not own I/O resources (such as an Ethernet card), these can only be assigned to a LPAR. When owned by a VIOS LPAR they can be shared to other LPARs.
• Micro-Partitions in Power 6 systems can utilize multiple SPP (shared processor pools) to control CPU utilization to groups of micro-partitions.
• Some Power 6 systems introduced hardware based network virtualization with the IVE (Integrated Virtual Ethernet) device that allows multiple partitions to share a single Ethernet connection without the use of VIOS.
• Newer HBAs that offer NPIV (N Port ID Virtualization) can be used in conjunction with the appropriate VIOS version to present a virtualized HBA with a unique WWN directly to a client partition.
• VIOS 2.1 introduced memory sharing, only on Power 6 systems, that allows memory to be shared between two or more LPARs. Overcommitted memory can be paged to disk using a VIOS partition.
• VIOS and Micro-Partition technologies can be implemented independently. For example, a (CPU) Micro-Partition can be created with or without the use of VIOS.

PowerVM Acronyms & Definitions

CoD - Capacity on Demand. The ability to add compute capacity in the form of CPU or memory to a running system by simply activating it. The resources must be pre-staged in the system prior to use and are (typically) turned on with an activation key. There are several different pricing models for CoD.
DLPAR - Dynamic Logical Partition. This was used originally as a further clarification on the concept of an LPAR as one that can have resources dynamically added or removed. The most popular usage is as a verb; ie: to DLPAR (add) resources to a partition.
HEA - Host Ethernet Adapter. The physical port of the IVE interface on some of the Power 6 systems. A HEA port can be added to a port group and shared amongst LPARs or placed in promiscuous mode and used by a single LPAR (typically a VIOS LPAR).
HMC - Hardware Management Console. An "appliance" server that is used to manage Power 4, 5, and 6 hardware. The primary purpose is to enable / control the virtualization technologies as well as provide call-home functionality, remote console access, and gather operational data.
IVE - Integrated Virtual Ethernet. The capability to provide virtualized Ethernet services to LPARs without the need of VIOS. This functionality was introduced on several Power 6 systems.
IVM - Integrated Virtualization Manager. This is a management interface that installs on top of the VIOS software that provides much of the HMC functionality. It can be used instead of a HMC for some systems. It is the only option for virtualization management on the blades as they cannot have HMC connectivity.
LHEA - Logical Host Ethernet Adapter. The virtual interface of a IVE in a client LPAR. These communicate via a HEA to the outside / physical world.
LPAR - Logical Partition. This is a collection of system resources that can host an operating system. To the operating system this collection of resources appears to be a complete physical system. Some or all the resources on a LPAR may be shared with other LPARs in the physical system.
Lx86 - Additional software that allows x86 Linux binaries to run on Power Linux without recompilation.
MES - Miscellaneous Equipment Specification. This is a change order to a system, typically in the form of an upgrade. A RPO MES is for Record Purposes Only. Both specify to IBM changes that are made to a system.
MSPP - Multiple Shared Processor Pools. This is a Power 6 capability that allows for more than one SPP.
SEA - Shared Ethernet Adapter. This is a VIOS mapping of a physical to a virtual Ethernet adapter. A SEA is used to extend the physical network (from a physical Ethernet switch) into the virtual environment where multiple LPARs can access that network.
SPP - Shared Processor Pool. This is an organizational grouping of CPU resources that allows caps and guaranteed allocations to be set for an entire group of LPARs. Power 5 systems have a single SPP, Power 6 systems can have multiple.
VIOC - Virtual I/O Client. Any LPAR that utilizes VIOS for resources such as disk and network.
VIOS - Virtual I/O Server. The LPAR that owns physical resources and maps them to virtual adapters so VIOC can share those resources.

CPU Allocations

• Shared (virtual) processor partitions (Micro-Partitions) can utilize additional resources from the shared processor pool when available. Dedicated processor partitions can only use the "desired" amount of CPU, and only above that amount if another CPU is (dynamically) added to the LPAR.
• An uncapped partition can only consume up to the number of virtual processors that it has. (Ie: A LPAR with 5 virtual CPUs, that is backed by a minimum of .5 physical CPUs can only consume up to 5 whole / physical CPUs.) A capped partition can only consume up to its entitled CPU value. Allocations are in increments of 1/100th of a CPU, the minimal allocation is 1/10th of a CPU for each virtual CPU.
• All Micro-Partitions are guaranteed to have at least the entitled CPU value. Uncapped partitions can consume beyond that value, capped cannot. Both capped and uncapped relinquish unused CPU to a shared pool. Dedicated CPU partitions are guaranteed their capacity, cannot consume beyond their capacity, and on Power 6 systems, can relinquish CPU capacity to a shared pool.
• All uncapped micro-partitions using the shared processor pool compete for the remaining resources in the pool. When there is no contention for unused resources, a micro-partition can consume up to the number of virtual processors it has or the amount of CPU resources available to the pool.
• The physical CPU entitlement is set with the "processing units" values during the LPAR setup in the HMC. The values are defined as:
› Minimum: The minimum physical CPU resource required for this partition to start.
› Desired: The desired physical CPU resource for this CPU. In most situations this will be the CPU entitlement. The CPU entitlement can be higher if resources were DLPARed in or less if the LPAR started closer to the minimum value.
› Maximum: This is the maximum amount of physical CPU resources that can be DLPARed into the partition. This value does not have a direct bearing on capped or uncapped CPU utilization.
• The virtual CPU entitlement is set in the LPAR configuration much like the physical CPU allocation. Virtual CPUs are allocated in whole integer values. The difference with virtual CPUs (from physical entitlements) is that they are not a potentially constrained resource and the desired number is always received upon startup. The minimum and maximum numbers are effectively limits on DLPAR operations.
• Processor folding is an AIX CPU affinity method that insures that an AIX partition only uses as few CPUs as required. This is achieved by insuring that the LPAR uses a minimal set of physical CPUs and idles those it does not need. The benefit is that the system will see a reduced impact of configuring additional virtual CPUs. Processor folding was introduced in AIX 5.3 TL 3.
• When multiple uncapped micro-partitions compete for remaining CPU resources then the uncapped weight is used to calculate the CPU available to each partition. The uncapped weight is a value from 0 to 255. The uncapped weight of all partitions requesting additional resources is added together and then used to divide the available resources. The total amount of CPU received by a competing micro-partition is determined by the ratio of the partition's weight to the total of the requesting partitions. (The weight is not a nice value like in Unix.) The default priority for this value is 128. A partition with a priority of 0 is effectively a capped partition. (A worked example appears at the end of this section.)
Figure 0: Virtualized and dedicated CPUs in a four CPU system with a single SPP.
• Dedicated CPU partitions do not have a setting for virtual processors. LPAR 3 in Figure 0 has a single dedicated CPU.
• LPAR 1 and LPAR 2 in Figure 0 are Micro-Partitions with a total of five virtual CPUs backed by three physical CPUs. On a Power 6 system, LPAR 3 can be configured to relinquish unused CPU cycles to the shared pool where they will be available to LPAR 1 and 2 (provided they are uncapped).
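• Worked example (hypothetical numbers, not taken from the figure): two uncapped micro-partitions compete for 1.5 unused CPUs in the shared pool. LPAR A has an uncapped weight of 128 and LPAR B a weight of 64, so the combined weight is 192. LPAR A receives 128/192 x 1.5 = 1.0 CPU and LPAR B receives 64/192 x 1.5 = 0.5 CPU, in addition to their entitled capacity.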

Shared Processor Pools

• While SPP (Shared Processor Pool) is a Power 6 convention, Power 5 systems have a single SPP commonly referred to as a "Physical Shared Processor Pool". Any activated CPU that is not reserved for, or in use by a dedicated CPU LPAR, is assigned to this pool.
• All Micro-Partitions on Power 5 systems consume and relinquish resources from/to the single / physical SPP.
• The default configuration of a Power 6 system will behave like a Power 5 system in respect to SPPs. By default, all LPARs will be placed in the SPP0 / physical SPP.
• Power 6 systems can have up to 64 SPPs. (The limit is set to 64 because the largest Power 6 system has 64 processors, and a SPP must have at least 1 CPU assigned to it.)
• Power 6 SPPs have additional constraints to control CPU resource utilization in the system. These are:
• Desired Capacity - This is the total of all CPU entitlements for each member LPAR (at least .1 physical CPU for each LPAR). This value is changed by adding LPARs to a pool.
• Reserved Capacity - Additional (guaranteed) CPU resources that are assigned to the pool of partitions above the desired capacity. By default this is set to 0. This value is changed from the HMC as an attribute of the pool.
• Entitled Capacity - This is the total guaranteed processor capacity for the pool. It includes the guaranteed processor capacity of each partition (aka: Desired) as well as the Reserved pool capacity. This value is a derived value and is not directly changed, it is a sum of two other values.
• Maximum Pool Capacity - This is the maximum amount of CPU that all partitions assigned to this pool can consume. This is effectively a processor cap that is expanded to a group of partitions rather than a single partition. This value is changed from the HMC as an attribute of the pool.
• Uncapped Micro-Partitions can consume up to the total of the virtual CPUs defined, the maximum pool capacity, or the unused capacity in the system - whatever is smaller.

PowerVM Types

• PowerVM inherits nearly all the APV (Advanced Power Virtualization) concepts from Power 5 based systems. It uses the same VIOS software and options as the previous APV.
• PowerVM is Power 6 specific only in that it enables features available exclusively on Power 6 based systems (ie: Multiple shared processor pools and partition mobility are not available on Power 5 systems.)
• PowerVM (and its APV predecessor) are optional licenses / activation codes that are ordered for a server. Each is licensed by CPU.
• PowerVM, unlike its APV predecessor, comes in several different versions. Each is tiered for price and features. Each of these versions is documented in the table below.
• PowerVM activation codes can be checked from the HMC, IVM, or from IBM records at this URL: http://www-912.ibm.com/pod/pod. The activation codes on the IBM web site will show up as type "VET" codes.
Express
• Only available on limited lower-end P6 systems
• Maximum of 3 LPARs (1 VIOS and 2 VIOC)
• CPU Micro-Partitions with single processor pool
• VIOS/IVM only, no HMC support
• Dedicated I/O resources (in later versions)
Standard
• Up to the maximum partitions supported on each server
• Multiple shared processor pools
• IVM or HMC managed
Enterprise
• All options in Standard & Live Partition Mobility

• The VET codes from the IBM activation code web site can be decoded using the following "key":
XXXXXXXXXXXXXXXXXXXXXXXX0000XXXXXX    Express
XXXXXXXXXXXXXXXXXXXXXXXX2C00XXXXXX    Standard
XXXXXXXXXXXXXXXXXXXXXXXX2C20XXXXXX    Enterprise

VIOS (Virtual I/O Server)

• VIOS is a special purpose partition that can serve I/O resources to other partitions. The type of LPAR is set at creation. The VIOS LPAR type allows for the creation of virtual server adapters, where a regular AIX/Linux LPAR does not.
• VIOS works by owning a physical resource and mapping that physical resource to virtual resources. Client LPARs can connect to the physical resource via these mappings.
• VIOS is not a hypervisor, nor is it required for sub-CPU virtualization. VIOS can be used to manage other partitions in some situations when a HMC is not used. This is called IVM (Integrated Virtualization Manager).
• Depending on configurations, VIOS may or may not be a single point of failure. When client partitions access I/O via a single path that is delivered via a single VIOS, then that VIOS represents a potential single point of failure for that client partition.
• VIOS is typically configured in pairs along with various multipathing / failover methods in the client for virtual resources to prevent the VIOS from becoming a single point of failure.
• Active memory sharing and partition mobility require a VIOS partition. The VIOS partition acts as the controlling device for backing store for active memory sharing. All I/O to a partition capable of partition mobility must be handled by VIOS, as must the process of shipping memory between physical systems.

VIO Redundancy

• Fault tolerance in client LPARs in a VIOS configuration is provided by configuring pairs of VIOS to serve redundant networking and disk resources. Additional fault tolerance in the form of NIC link aggregation and / or disk multipathing can be provided at the physical layer to each VIOS server. Multiple paths / aggregations from a single VIOS to a VIOC do not provide additional fault tolerance. These multiple paths / aggregations / failover methods for the client are provided by multiple VIOS LPARs. In this configuration, an entire VIOS can be lost (ie: rebooted during an upgrade) without I/O interruption to the client LPARs.
• In most cases (when using AIX) no additional configuration is required in the VIOC for this capability. (See discussion below in regards to SEA failover vs. NIB for network, and MPIO vs. LVM for disk redundancy.)
• Both virtualized network and disk redundancy methods tend to be active / passive. For example, it is not possible to run EtherChannel within the system, from a VIOC to a single VIOS.
• It is important to understand that the performance considerations of an active / passive configuration are not relevant inside the system as all VIOS can utilize pooled processor resources and therefore gain no significant (internal) performance benefit by active / active configurations. Performance benefits of active / active configurations are realized when used to connect to outside / physical resources such as EtherChannel (port aggregation) from the VIOS to a physical switch.

VIOS Setup & Management

VIOS Management Examples

Accept all VIOS license agreements
license -accept
(Re)Start the (initial) configuration assistant
cfgassist
Shutdown the server
shutdown
›››    Optionally include -restart
List the version of the VIOS system software 
ioslevel
List the boot devices for this lpar 
bootlist -mode normal -ls
List LPAR name and ID
lslparinfo
Display firmware level of all devices on this VIOS LPAR
lsfware -all
Display the MOTD
motd
Change the MOTD to an appropriate message 
motd "*****    Unauthorized access is prohibited!    *****"
List all (AIX) packages installed on the system 
lssw
›››    Equivalent to lslpp -L in AIX
Display a timestamped list of all commands run on the system 
lsgcl
To display the current date and time of the VIOS 
chdate
Change the current time and date to 1:02 AM March 4, 2009 
chdate -hour 1 \
       -minute 2 \
       -month 3 \
       -day 4 \
       -year 2009
Change just the timezone to AST 
chdate -timezone AST (Visible on next login)
›››   The date command is available and works the same as in Unix.
Brief dump of the system error log
errlog
Detailed dump of the system error log
errlog -ls | more
Remove error log events older than 30 days
errlog -rm 30
›››   The errlog command allows you to view errors by sequence, but does not give the sequence in the default format.
 
• errbr works on VIOS provided that the errpt command is in padmin's PATH.

VIOS Networking Examples

Enable jumbo frames on the ent0 device 
chdev -dev ent0 -attr jumbo_frames=yes
View settings on ent0 device 
lsdev -dev ent0 -attr
List TCP and UDP sockets listening and in use 
lstcpip -sockets -family inet
List all (virtual and physical) ethernet adapters in the VIOS 
lstcpip -adapters
Equivalent of no -L command
optimizenet -list
Set up initial TCP/IP config (en10 is the interface for the SEA ent10)
mktcpip -hostname vios1 \
        -inetaddr 10.143.181.207 \
        -interface en10 \
        -start -netmask 255.255.252.0 \
        -gateway 10.143.180.1
Find the default gateway and routing info on the VIOS 
netstat -routinfo
List open (TCP) ports on the VIOS IP stack 
lstcpip -sockets | grep LISTEN
Show interface traffic statistics on 2 second intervals 
netstat -state 2
Show verbose statistics for all interfaces 
netstat -cdlistats
Show the default gateway and route table 
netstat -routtable
Change the default route on en0 (fix a typo from mktcpip) 
chtcpip -interface en0 \
        -gateway \
        -add 192.168.1.1 \
        -remove 168.192.1.1
Change the IP address on en0 to 192.168.1.2 
chtcpip -interface en0 \
        -inetaddr 192.168.1.2 \
        -netmask 255.255.255.0

VIOS Unix Subsystem

• The current VIOS runs on an AIX subsystem. (VIOS functionality is available for Linux. This document only deals with the AIX based versions.)
• The padmin account logs in with a restricted shell. A root shell can be obtained by the oem_setup_env command.
• The root shell is designed for installation of OEM applications and drivers only. It may be required for a small subset of commands. (The purpose of this document is to provide a listing of most frequent tasks and the proper VIOS commands so that access to a root shell is not required.)
• The restricted shell has access to common Unix utilities such as awk, grep, sed, and vi. The syntax and usage of these commands have not been changed in VIOS. (Use "ls /usr/ios/utils" to get a listing of available Unix commands.)
• Redirection to a file is not allowed using the standard ">" character, but can be accomplished with the "tee" command.
 
Redirect the output of ls to a file
ls | tee ls.out
Determine the underlying (AIX) OS version (for driver install)
oem_platform_level
Exit the restricted shell to a root shell
oem_setup_env
Mirror the rootvg in VIOS to hdisk1
extendvg rootvg hdisk1 
mirrorios hdisk1
›››    The VIOS will reboot when finished

User Management

• padmin is the only user for most configurations. It is possible to configure additional users, such as operational users for monitoring purposes.
 
List attributes of the padmin user 
lsuser padmin
List all users on the system 
lsuser
›››   The optional parameter "ALL" is implied with no parameter.
Change the password for the current user 
passwd
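Create a read-only user for monitoring purposes (a hedged sketch; verify that the view group exists at your VIOS level, and note that monuser is a hypothetical name used only for this example)
mkuser -attr pgrp=view monuser
List the new user to confirm its attributes
lsuser monuser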

Virtual Disk Setup & Management

Virtual Disk Setup

• Disks are presented to VIOC by creating a mapping between a physical disk or storage pool volume and the vhost adapter that is associated with the VIOC.
• Best practices configuration suggests that the connecting VIOS vhost adapter and the VIOC vscsi adapter should use the same slot number. This makes the typically complex array of virtual SCSI connections in the system much easier to comprehend.
• The mkvdev command is used to create a mapping between a physical disk and the vhost adapter.
Create a mapping of hdisk3 to the virtual host adapter vhost2.
mkvdev -vdev hdisk3 \
       -vadapter vhost2 \
       -dev wd_c3_hd3
›››   It is called wd_c3_hd3 for "WholeDisk_Client3_HDisk3". The intent of this naming convention is to relay the type of disk, where from, and who to.
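Verify the new mapping and its backing device on vhost2 (a follow-up to the example above; the lsmap command is covered in more detail later in this document)
lsmap -vadapter vhost2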
Delete the virtual target device wd_c3_hd3
rmvdev -vtd wd_c3_hd3
Delete the above mapping by specifying the backing device hdisk3
rmvdev -vdev hdisk3

Disk Redundancy in VIOS

• LVM mirroring can be used to provide disk redundancy in a VIOS configuration. One disk should be provided through each VIOS in a redundant VIOS config to eliminate a VIOS as a single point of failure.
• LVM mirroring is a client configuration that mirrors data to two different disks presented by two different VIOS.
• Disk path redundancy (assuming an external storage device is providing disk redundancy) can be provided by a dual VIOS configuration with MPIO at the client layer.
• Newer NPIV (N Port ID Virtualization) capable cards can be used to provide direct connectivity of the client to a virtualized FC adapter. Storage specific multipathing drivers such as PowerPath or HDLM can be used in the client LPAR. NPIV adapters are virtualized using VIOS, and should be used in a dual-VIOS configuration.
• MPIO is automatically enabled in AIX if the same disk is presented to a VIOC by two different VIOS.
• LVM mirroring (for client LUNs) is not recommended within VIOS (ie: mirroring your storage pool in the VIOS). This configuration would provide no additional protections over LVM mirroring in the VIOC.
• Storage specific multipathing drivers can be used in the VIOS connections to the storage. In this case these drivers should not then be used on the client. In figure 1, a storage vendor supplied multipathing driver (such as PowerPath) would be used on VIOS 1 and VIOS 2, and native AIX MPIO would be used in the client. This configuration gives access to all four paths to the disk and eliminates both VIOS and any path as a single point of failure.
Figure 1: A standard disk multipathing scenario.
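• A minimal client-side sketch for checking the MPIO scenario in Figure 1. These are AIX (VIOC) commands, not VIOS commands, and the hdisk number and health-check interval are assumptions for illustration:
List all paths to the virtual disk hdisk0 in the client
lspath -l hdisk0
Enable periodic path health checking so a failed path is reclaimed automatically (the -P flag only updates the ODM, so the change takes effect at the next boot)
chdev -l hdisk0 -a hcheck_interval=60 -P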

Virtual Optical Media

• Optical media can be assigned to an LPAR directly (by assigning the controller to the LPAR profile), through VIOS (by presenting the physical CD/DVD on a virtual SCSI controller), or through file backed virtual optical devices.
• The problem with assigning an optical device to an LPAR is that it may be difficult to manage in a multiple-LPAR system and requires explicit DLPAR operations to move it around.
• Assigning an optical device to a VIOS partition and re-sharing it is much easier as DLPAR operations are not required to move the device from one partition to another. cfgmgr is simply used to recognize the device and rmdev is used to "remove" it from the LPAR. When used in this manner, a physical optical device can only be accessed by one LPAR at a time.
• Virtual media is a file backed CD/DVD image that can be "loaded" into a virtual optical device without disruption to the LPAR configuration. CD/DVD images must be created in a repository before they can be loaded as virtual media.
• A virtual optical device will show up as a "File-backed Optical" device in lsdev output.
 
Create a 15 Gig virtual media repository on the clienthd storage pool
mkrep -sp clienthd -size 15G
Extend the virtual repository by an additional 5 Gig to a total of 20 Gig
chrep -size 5G
Find the size of the repository
lsrep
Create an ISO image in repository using .iso file
mkvopt -name fedora10 \
       -file /mnt/Fedora-10-ppc-DVD.iso -ro
Create a virtual media file directly from a DVD in the physical optical drive
mkvopt -name AIX61TL3 -dev cd0 -ro
Create a virtual DVD on vhost4 adapter
mkvdev -fbo -vadapter vhost4 \
       -dev shiva_dvd
›››   The LPAR connected to vhost4 is called shiva. shiva_dvd is simply a convenient naming convention.
Load the virtual optical media into the virtual DVD for LPAR shiva
loadopt -vtd shiva_dvd -disk fedora10
Unload the previously loaded virtual DVD (-release is a "force" option if the client OS has a SCSI reserve on the device.)
unloadopt -vtd shiva_dvd -release
List virtual media in repository with usage information
lsrep
Show the file backing the virtual media currently in murugan_dvd
lsdev -dev murugan_dvd -attr aix_tdev
Remove (delete) a virtual DVD image called AIX61TL3
rmvopt -name AIX61TL3

Storage Pools

• Storage pools work much like AIX VGs (Volume Groups) in that they reside on one or more PVs (Physical Volumes). One key difference is the concept of a default storage pool. The default storage pool is the target of storage pool commands where the storage pool is not explicitly specified.
• The default storage pool is rootvg. If storage pools are used in a configuration then the default storage pool should be changed to something other than rootvg.
 
List the default storage pool
lssp -default
List all storage pools 
lssp
List all disks in the rootvg storage pool 
lssp -detail -sp rootvg
Create a storage pool called client_boot on hdisk22
mksp client_boot hdisk22
Make the client_boot storage pool the default storage pool 
chsp -default client_boot
Add hdisk23 to the client_boot storage pool 
chsp -add -sp client_boot hdisk23
List all the physical disks in the client_boot storage pool 
lssp -detail -sp client_boot
List all the physical disks in the default storage pool 
lssp -detail
List all the backing devices (LVs) in the default storage pool 
lssp -bd
›››   Note: This command does NOT show virtual media repositories. Use the lssp command (with no options) to list free space in all storage pools.
Create a client disk on adapter vhost1 from client_boot storage pool 
mkbdsp -sp client_boot 20G \
       -bd lv_c1_boot \
       -vadapter vhost1
Remove the mapping for the device just created, but save the backing device 
rmbdsp -vtd vtscsi0 -savebd
Assign the lv_c1_boot backing device to another vhost adapter 
mkbdsp -bd lv_c1_boot -vadapter vhost2
Completely remove the virtual target device lv_c1_boot
rmbdsp -vtd lv_c1_boot
Remove last disk from the sp to delete the sp 
chsp -rm -sp client_boot hdisk22
Create a client disk on adapter vhost2 from rootvg storage pool
mkbdsp -sp rootvg 1g \
       -bd murugan_hd1 \
       -vadapter vhost2 \
       -tn lv_murugan_1
›››   The LV name and the backing device (mapping) name are specified in this command. This is different than the previous mkbdsp example. The -tn option does not seem to be compatible with all versions of the command and might be ignored in earlier versions of the command. (This command was run on VIOS 2.1) Also note the use of a consistent naming convention for LV and mapping - this makes understanding LV usage a bit easier. Finally note that rootvg was used in this example because of limitations of available disk in the rather small example system it was run on - putting client disk on rootvg does not represent an ideal configuration.

Virtual Network Setup & Management

SEA Setup - Overview

• The command used to set up a SEA (Shared Ethernet Adapter) is mkvdev.
• IP addresses cannot be configured on either the virtual or the physical adapter used in the mkvdev command. IP addresses are configured either on the SEA itself (created by the mkvdev -sea command) or another physical or virtual adapter that is not part of a SEA "bridge". (An example of the latter is seen in Figure 2.)
• Best practices suggest that IP addresses for the VIOS should not be created on the SEA but should be put on another virtual adapter in the VIOS attached to the same VLAN. This makes the IP configuration independent of any changes to the SEA. Figure 2, has an example of an IP address configured on a virtual adapter interface en1 and not any part of the SEA "path". (This is not the case when using SEA failover).
• The virtual device used in the SEA configuration should have "Access External Networks" (AKA: "Trunk adapter") checked in its configuration (in the profile on the HMC). This is the only interface on the VLAN that should have this checked. Virtual interfaces with this property set will receive all packets that have addresses outside the virtual environment. In figure 2, the interface with "Access External Networks" checked should be ent2.
• If multiple virtual interfaces are used on a single VLAN as trunk adapters then each must have a different trunk priority. This is typically done with multiple VIOS servers - with one virtual adapter from the same VLAN on each VIO server. This is required when setting up SEA Failover in the next section.
Figure 2: Configuration of IP address on virtual adapter.
• The examples here are of SEAs handling only one VLAN. A SEA may handle more than one VLAN. The -default and -defaultid options to mkvdev -sea make more sense in this (multiple VLAN) context.
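A hedged sketch of the best practice above: the VIOS IP address is configured on the interface of the separate (non-bridged) virtual adapter, en1 in Figure 2, rather than on the SEA. The addresses are reused from the mktcpip example earlier in this document and are illustrative only.
mktcpip -hostname vios1 \
        -inetaddr 10.143.181.207 \
        -interface en1 \
        -start -netmask 255.255.252.0 \
        -gateway 10.143.180.1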

Network Redundancy in VIOS

• The two primary methods of providing network redundancy for a VIOC in a dual VIOS configuration are NIB (Network Interface Backup) and SEA Failover. (These provide protection from the loss of a VIOS or VIOS connectivity to a physical LAN.)
• NIB creates a link aggregation in the client of a single virtual NIC with a backup NIC. (Each virtual NIC is on a separate VLAN and connected to a different VIOS.) This configuration is done in each client OS. (See Figure 3 for an example of the VIOC that uses NIB to provide redundant connections to the LAN.)
• SEA Failover is a VIOS configuration option that provides two physical network connections, one from each VIOS. No client configuration is required as only one virtual interface is presented to the client.
• Most Power 6 based systems offer IVE (Integrated Virtual Ethernet) hardware that provides virtual NICs to clients. These do not provide redundancy and must be used in pairs or with another NIC / backup path on each client (or VIOS) to provide that capability. (Note: This is in the context of client NIB configurations. When IVE is used directly to the client different configuration rules apply. See the IVE Redpaper for the particulars of configuring IVE for aggregation and interface failover.)
• NIB and SEA Failover are not mutually exclusive and can be used together or with link aggregation (EtherChannel / 802.3ad) to a physical device in the VIOS. Figure 3 shows a link aggregation device (ent3) in VIOS 1 as the physical trunk adapter for the SEA (ent4) in what is seen by the client as a NIB configuration.
• Link aggregation (EtherChannel / 802.3ad) of more than one virtual adapter is not supported or necessary from the client as all I/O moves at memory speed in the virtual environment. The more appropriate method is to direct different kinds of I/O or networks to particular VIOS servers where they do not compete for CPU time.
Figure 3: NIC failover implemented at the VIO Client layer along with additional aggregation/failover at the VIOS layer.
• The primary benefit of NIB (on the client) is that the administrator can choose the path of network traffic. In the case of figure 3, the administrator would configure the client to use ent0 as the primary interface and ent1 as the backup. Then more resources (in the form of aggregate links) can be used on VIOS1 to handle the traffic, with traffic only flowing through VIOS2 in the event of a failure of VIOS1. The problem with this configuration is that it requires additional configuration on the client (a client-side sketch follows after this list) and is not as conducive as SEA Failover to simple NIM installs.
• NIB configurations also allow the administrator to balance clients so that all traffic does not go to a single VIOS. In this case hardware resources would be configured more evenly than they are in figure 3 with both VIOS servers having similar physical connections as appropriate for a more "active / active" configuration.
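• A minimal client-side (AIX VIOC) sketch of the NIB configuration described above, matching Figure 3: ent0 is the primary virtual adapter and ent1 the backup; the netaddr value (the address pinged to detect a path failure) is an assumption reused from the earlier mktcpip example. The resulting EtherChannel adapter (for example ent2) then carries the client IP address in place of ent0.
mkdev -c adapter -s pseudo -t ibm_ech \
      -a adapter_names=ent0 \
      -a backup_adapter=ent1 \
      -a netaddr=10.143.180.1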

SEA Setup - Example

Create a SEA "bridge" between the physical ent0 and the virtual ent2 (from Figure 2)
mkvdev -sea ent0 -vadapter ent2 \
       -default ent2 -defaultid 1
›››   Explanation of the parameters:
-sea ent0 -- This is the physical interface
-vadapter ent2 -- This is the virtual interface
-default ent2 -- Default virtual interface to send untagged packets
-defaultid 1 -- This is the PVID for the SEA interface
• The PVID for the SEA is relevant when the physical adapter is connected to a VLAN configured switch and the virtual adapter is configured for VLAN (802.1Q) operation. All traffic passed through the SEA should be untagged in a non-VLAN configuration.
• This example assumes that separate (physical and virtual) adapters are used for each network. (VLAN configurations are not covered in this document).

SEA Management

Find virtual adapters associated with SEA ent4
lsdev -dev ent4 -attr virt_adapters
Find control channel (for SEA Failover) for SEA ent4
lsdev -dev ent4 -attr ctl_chan
Find physical adapter for SEA ent4
lsdev -dev ent4 -attr real_adapter
List all virtual NICs in the VIOS along with SEA and backing devices
lsmap -all -net
List details of Server Virtual Ethernet Adapter (SVEA) ent2
lsmap -vadapter ent2 -net

SEA Failover

• Unlike a regular SEA adapter, a SEA failover configuration has a few settings that are different from stated best practices.
• A SEA failover configuration is one case in which IP addresses should be configured on the SEA adapter itself.
• A control channel must be configured between the two VIOS using two virtual ethernet adapters that use that VLAN strictly for this purpose. The local virtual adapter created for this purpose should be specified in the ctl_chan attribute in each of the SEA setups.
• Both virtual adapters (on the VLAN with clients) should be configured to "Access External network", but one should have a higher priority (lower number) for the "Trunk priority" option. A SEA failover configuration is the only time that you should have two virtual adapters on the same VLAN that are configured in this manner.
Figure 4: SEA Failover implemented in the VIOS layer.
The following command needs to be run on each of the VIOS to create a simple SEA failover. (It is assumed that interfaces match on each VIOS.)
mkvdev -sea ent0 -vadapter ent1 -default ent1 \
        -defaultid 1 -attr ha_mode=auto \
        ctl_chan=ent3 netaddr=10.143.180.1
›››   Explanation of the parameters:
   -sea ent0 -- This is the physical interface
   -vadapter ent1 -- This is the virtual interface
   -default ent1 -- Default virtual interface to send untagged packets
   -defaultid 1 -- This is the PVID for the SEA interface
   -attr ha_mode=auto -- Turn on auto failover mode
   (-attr) ctl_chan=ent3 -- Define the control channel interface
   (-attr) netaddr=10.143.180.1 -- Address to ping for connect test
 
• auto is the default ha_mode, standby forces a failover situation
 
Change the device to standby mode (and back) to force failover
chdev -dev ent4 -attr ha_mode=standby 
chdev -dev ent4 -attr ha_mode=auto
See what the priority is on the trunk adapter 
netstat -cdlistats | grep "Priority"

VIOS Device Management

Disk

Determine if SCSI reserve is enabled for hdisk4
lsdev -dev hdisk4 -attr reserve_policy
Turn off SCSI reserve for hdisk4
chdev -dev hdisk4 -attr reserve_policy=no_reserve
Re-enable SCSI reserve for hdisk4
chdev -dev hdisk4 -attr reserve_policy=single_path
Enable extended disk statistics
chdev -dev sys0 -attr iostat=true
List the parent device of hdisk0
lsdev -dev hdisk0 -parent
List all the child devices of (DS4000 array) dar0
lsdev -dev dar0 -child
List the reserve policy for all disks on a DS4000 array 
for D in `lsdev -dev dar0 -child -field name | grep -v name` 
do 
  lsdev -dev $D -attr reserve_policy
done

Devices

Discover new devices
cfgdev
›››   This is the VIOS equivalent of the AIX cfgmgr command.
List all adapters (physical and virtual) on the system 
lsdev -type adapter
List only virtual adapters 
lsdev -virtual -type adapter
List all virtual disks (created with mkvdev command) 
lsdev -virtual -type disk
Find the WWN of the fcs0 HBA 
lsdev -dev fcs0 -vpd | grep Network
List the firmware levels of all devices on the system 
lsfware -all
›››   The invscout command is also available in VIOS.
Get a long listing of every device on the system
lsdev -vpd
List all devices (physical and virtual) by their slot address 
lsdev -slots
List all the attributes of the sys0 device
lsdev -dev sys0 -attr
List the port speed of the (physical) ethernet adapter ent0
lsdev -dev ent0 -attr media_speed
List all the possible settings for media_speed on ent0
lsdev -dev ent0 -range media_speed
Set the media_speed option to auto negotiate on ent0
chdev -dev ent0 -attr media_speed=Auto_Negotiation
Set the media_speed to auto negotiate on ent0 on next boot 
chdev -dev ent0 \
      -attr media_speed=Auto_Negotiation \
      -perm
Turn on disk performance counters
chdev -dev sys0 -attr iostat=true

Low Level Redundancy Configuration

• Management and setup of devices requiring drivers and tools not provided by VIOS (ie PowerPath devices) will require use of the root shell available from the oem_setup_env command.
• Tools installed from the root shell (using oem_setup_env) may not be installed in the PATH used by the restricted shell. The commands may need to be linked or copied to the correct path for the restricted padmin shell. Not all commands may work in this manner.
• The mkvdev -lnagg and cfglnagg commands can be used to set up and manage link aggregation (to external ethernet switches).
• The chpath, mkpath, and lspath commands can be used to manage MPIO capable devices.
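A hedged sketch of the aggregation commands mentioned above; ent0/ent1 as the physical adapters, 802.3ad mode, and ent5 as the resulting aggregation device are assumptions for illustration:
Create an 802.3ad link aggregation over the physical adapters ent0 and ent1
mkvdev -lnagg ent0,ent1 -attr mode=8023ad
Add another physical adapter (ent2) to the existing aggregation device ent5
cfglnagg -add ent5 ent2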

Best Practices / Additional Notes

• Virtual Ethernet devices should only have 802.1Q enabled if you intend to run additional VLANs on that interface. (In most instances this is not the case).
• Only one interface should be configured to "Access External Networks" on a VLAN, this should be the virtual interface used for the SEA on the VIOS and not the VIOC. This is the "gateway" adapter that will receive packets with MAC addresses that are unknown. (This is also known as a "Trunk adapter" on some versions of the HMC.)
• VIOS partitions are unique in that they can have virtual host adapters. Virtual SCSI adapters in VIOC partitions connect to LUNs shared through VIOS virtual host adapters.
• An organized naming convention to virtual devices is an important method to simplifying complexity in a VIOS environment. Several methods are used in this document, but each represents a self-documenting method that relates what the virtual device is and what it serves.
• VIOS commands can be run from the HMC. This may be a convenient alternative to logging into the VIOS LPAR and running the command.
• Power 6 based systems have an additional LPAR parameter called the partition "weight". The additional RAS features will use this value in a resource constrained system to kill a lower priority LPAR in the event of a CPU failure.
• The ratio of virtual CPUs in a partition to the actual amount of desired / entitled capacity is a "statement" on the partitions ability to be virtualized. A minimal backing of actual (physical) CPU entitlement to a virtual CPU suggests that the LPAR will most likely not be using large amounts of CPU and will relinquish unused cycles back to the shared pool the majority of the time. This is a measure of how over-committed the partition is.
• Multiple profiles created on the HMC can represent different configurations such as with and without the physical CD/DVD ROM. These profiles can be named (for example) as <partition_name>_prod and <partition_name>_cdrom.
• HMC partition configuration and profiles can be saved to a file and backed up to either other HMCs or remote file systems.
• sysplan files can be created on the HMC or the SPT (System Planning Tool) and exported to each other. These files are a good method of expressing explicit configuration intent and can serve as both documentation as well as a (partial) backup method of configuration data.
• vhost adapters should be explicitly assigned and restricted to client partitions. This helps with documentation (viewing the config in the HMC) as well as preventing trespass of disks by other client partitions (typically due to user error).

Key VIOS Commands

VIOS commands are documented by categories on this InfoCenter page.

The lsmap command

• Used to list mappings between virtual adapters and physical resources.
List all (virtual) disks attached to the vhost0 adapter 
lsmap -vadapter vhost0
List only the virtual target devices attached to the vhost0 adapter 
lsmap -vadapter vhost0 -field vtd
This line can be used as a list in a for loop 
lsmap -vadapter vhost0 -field vtd -fmt :|sed -e "s/:/ /g"
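›››   For example (a sketch; the echo is only a placeholder for whatever per-VTD action is needed):
for VTD in `lsmap -vadapter vhost0 -field vtd -fmt : | sed -e "s/:/ /g"`
do
  echo "vhost0 virtual target device: $VTD"
done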
List all shared ethernet adapters on the system 
lsmap -all -net -field sea
List all (virtual) disks and their backing devices 
lsmap -all -type disk -field vtd backing
List all SEAs and their backing devices 
lsmap -all -net -field sea backing
 

The mkvdev command

• Used to create a mapping between a virtual adapter and a physical resource. The result of this command will be a "virtual device".
 
Create a SEA that links physical ent0 to virtual ent1
mkvdev -sea ent0 -vadapter ent1 -default ent1 -defaultid 1
›››   The -defaultid 1 in the previous command refers to the default VLAN ID for the SEA. In this case it is set to the VLAN ID of the virtual interface (the virtual interface in this example does not have 802.1q enabled).
›››   The -default ent1 in the previous command refers to the default virtual interface for untagged packets. In this case we have only one virtual interface associated with this SEA.
Create a disk mapping from hdisk7 to vhost2 and call it wd_c1_hd7
mkvdev -vdev hdisk7 -vadapter vhost2 -dev wd_c1_hd7
Remove a virtual target device (disk mapping) named vtscsi0
rmvdev -vtd vtscsi0
 

Advanced VIOS Management

Performance Monitoring

Retrieve statistics for ent0 
entstat -all ent0
Reset the statistics for ent0 
entstat -reset ent0
View disk statistics (every 2 seconds)
viostat 2
Show summary for the system in stats
viostat -sys 2
Show disk stats by adapter (useful to see per-partition (VIOC) disk stats)
viostat -adapter 2
Turn on disk performance counters
chdev -dev sys0 -attr iostat=true
• The topas command is available in VIOS but uses different command line (start) options. When running, topas uses the standard AIX single key commands and may refer to AIX command line options.
 
View CEC/cross-partition information
topas -cecdisp

Backup

Create a mksysb file of the system on a NFS mount 
backupios -file /mnt/vios.mksysb -mksysb
Create a backup of all structures of (online) VGs and/or storage pools 
savevgstruct vdiskvg (Data will be saved to /home/ios/vgbackups)
List all (known) backups made with savevgstruct 
restorevgstruct -ls
Backup the system (mksysb) to a NFS mounted filesystem 
backupios -file /mnt
• sysplan files can be created from the HMC per-system menu in the GUI or from the command line using mksysplan.
• Partition data stored on the HMC can be backed up using (GUI method): per-system pop-up menu -> Configuration -> Manage Partition Data -> Backup
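• A hedged sketch of the mksysplan command mentioned above, run from the HMC command line; the sysplan file name and managed system name are placeholders:
mksysplan -f p570_plan.sysplan -m p570-system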

VIOS Security

List all open ports on the firewall configuration
viosecure -firewall view
To view the current security level settings
viosecure -view -nonint
Change system security settings to default
viosecure -level default
To enable basic firewall settings
viosecure -firewall on
List all failed logins on the system
lsfailedlogin
Dump the Global Command Log (all commands run on system)
lsgcl

About this QuickStart

Created by: William Favorite (wfavorite@tablespace.net) 
Updates at: http://www.tablespace.net 
Disclaimer: This document is a guide and it includes no express warranties to the suitability, relevance, or compatibility of its contents with any specific system. Research any and all commands that you inflict upon your command line. 
Distribution: Copies of this document are free to redistribute as long as credit to the author and tablespace.net is retained in the printed and electronic versions.


 

§ $ ifconfig en0 detach
§ $ mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1
§ # smitty mktcpip -> set the IP on en3
§ When creating the client partition, keep the PVID / IP range the same


-----------------------------------------------------------------------------
How to setup SEA failover with Load Sharing configuration
  - http://www-01.ibm.com/support/docview.wss?uid=isg3T7000527
-----------------------------------------------------------------------------

Abstract

Use this Techdoc to help you to configure primary and backup Shared Ethernet Adapters for load sharing in the Virtual I/O Server.

Content

Authors:
Bianca Rademacher ( Bianca.Rademacher@de.ibm.com )
Michael Guha Thakurta ( michael.guha@de.ibm.com )

Reviewer: Rajendra D Patel ( rajpat@us.ibm.com )

Last Updated: July 31, 2012
For Changes / Updates - ref to Changes Section Below:

Table of contents:

Introduction
Requirements
Functionality
Simple test scenario from scratch
- Setup
- How do I know that load sharing is working and how do I know which VLANs are bridged by each SEA?
- Some outputs in disaster cases
Enhanced test scenario (adding additional trunk adapter dynamically to the SEA)
- Setup
- Some outputs in disaster cases
Switching from SEA Load Sharing to SEA failover
Switching from SEA failover to SEA Load Sharing
FAQs
References


Introduction:

The first implementation to fulfil redundancy requirements for Power Systems with network virtualization was the introduction of NIB (Network Interface Backup, see Figure 1).

Figure 1: Network Interface Backup (NIB)


The client LPAR is connected to two separate VLANs. VLAN tagging cannot be supported in this case because you can't have one (virtual) switch with two connections.

To get around this problem, a different configuration needs to be used, namely, Shared Ethernet Adapter (SEA) Failover (see Figure 2).


Figure 2: SEA Failover


The Shared Ethernet Adapter failover configuration provides redundancy only by configuring a backup Shared Ethernet Adapter on a different Virtual I/O Server.
This backup Shared Ethernet Adapter is in standby mode and can be used only if the primary Shared Ethernet Adapter fails.
Hence, the bandwidth of the backup Shared Ethernet Adapter is not used. When using expensive 10 Gb/s Ethernet adapters, it is desirable to utilize both the primary and the backup adapter. This can be achieved by using SEA together with virtual switches (see Figure 3).

Figure 3: Multiple Virtual Ethernet Switches


A significant benefit to this design is that both Virtual I/O Servers can be active at the same time.
Half of the clients could be configured to use Virtual I/O Server 1 and the other half to use Virtual I/O Server 2 as their primary paths. Each client would fail over to its respective secondary path if its primary path were lost. So the customer's investment in hardware is more effectively utilized.
If you need more detailed information, refer to the doc "Using Virtual Switches in PowerVM to Drive Maximum Value of 10 Gb Ethernet" (http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101752)

The SEA with load sharing configuration differs from the SEA with vswitch mode in that the SEA algorithm itself decides which VLAN is bridged by the primary or the backup SEA (see Figure 4).

Figure 4: SEA with Load Sharing


Functionality

In the Shared Ethernet Adapter failover with load sharing configuration, the primary and the backup Shared Ethernet Adapters negotiate the set of virtual local area network (VLAN) IDs that they are responsible for bridging. After successful negotiation, each Shared Ethernet Adapter bridges the assigned trunk adapters and the associated VLANs. Thus, both the primary and the backup Shared Ethernet Adapter bridge the workload for their respective VLANs. If a failure occurs, the active Shared Ethernet Adapter bridges all trunk adapters and the associated VLANs. This action helps to avoid disruption in network services (see Figure 4).


Requirements

--> Both the primary and the backup Virtual I/O Servers are at Version 2.2.1.0 or later.
--> Two or more trunk adapters are configured for the primary and backup SEA pair.
--> Load Sharing mode must be enabled on both the primary and the backup SEA pair.
--> The virtual local area network (VLAN) definitions of the trunk adapters are identical between the primary and backup SEA pair.
--> You need to set the same priority for all trunk adapters under one SEA. The primary and backup priority definitions are set at the SEA level, not at the trunk adapter level.


Simple test scenario from scratch

If you already have traditional SEA failover configured refer to section “How to switch from ha_mode to sharing mode”.
For maintainability we only have tagged traffic in our scenario.


After performing the steps below, your setup will look like this:

Figure 5: SEA Load Sharing Scenario


p72vio1:
State: PRIMARY_SH
Bridge Mode: Partial
VID shared: 1 10 11
High Availability Mode: Sharing
Priority: 1

p72vio2:
State: BACKUP_SH
Bridge Mode: Partial
VID shared: 2 12 13
High Availability Mode: Sharing
Priority: 2

If the ha_mode is set to "sharing" on both the primary and the backup SEA but the state of the SEA is "PRIMARY" or "BACKUP" as opposed to "PRIMARY_SH" or "BACKUP_SH", something went wrong.

An error must have occurred which forced the SEA to go back to the non-sharing state even though Load sharing is enabled. Try to enable Load sharing again by running a chdev command on the backup SEA device. Check the SEA state again. If the SEA has recovered from its error, then it must be operating in Load sharing mode again. If the SEA is still not operating in Load sharing mode, then further detailed investigation is required.
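A hedged example of re-initiating load sharing from the backup VIOS (ent4 is the SEA device name used in the examples later in this Techdoc):

$ chdev -dev ent4 -attr ha_mode=sharing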


Setup

1.) Create virtual ethernet adapters on p72vio1
HMC --> select p72vio1 --> select Configuration --> select Manage Profiles
--> select the correct profile
--> Actions --> Edit --> Select Virtual Adapters
--> Actions --> Create Virtual Adapter --> Ethernet Adapter

Create first trunk adapter on p72vio1:

Create second trunk adapter on p72vio1:



Create Control Channel on p72vio1:



2.) Create virtual ethernet adapters on p72vio2

HMC --> select p72vio2 --> select Configuration --> select Manage Profiles
--> select the correct profile
--> Actions --> Edit --> Select Virtual Adapters
--> Actions --> Create Virtual Adapter --> Ethernet Adapter


Create first Trunk Adapter on p72vio2:




Create second Trunk Adapter on p72vio2:




Create Control Channel on p72vio2:



3.) Power on both VIO servers

p72vio1:
# lsdev -Cc adapter |grep Ethernet
ent0 Available Logical Host Ethernet Port (lp-hea)
ent1 Available Virtual I/O Ethernet Adapter (l-lan)
ent2 Available Virtual I/O Ethernet Adapter (l-lan)
ent3 Available Virtual I/O Ethernet Adapter (l-lan)
lhea0 Available Logical Host Ethernet Adapter (l-hea)

# entstat -d ent1 |grep Trunk
Trunk Adapter: True
# entstat -d ent2 |grep Trunk
Trunk Adapter: True
# entstat -d ent1 |grep VLAN
Port VLAN ID: 1
VLAN Tag IDs: 10 11
# entstat -d ent2 |grep VLAN
Port VLAN ID: 2
VLAN Tag IDs: 12 13

p72vio2:
# lsdev -Cc adapter |grep Ethernet
ent0 Available Logical Host Ethernet Port (lp-hea)
ent1 Available Virtual I/O Ethernet Adapter (l-lan)
ent2 Available Virtual I/O Ethernet Adapter (l-lan)
ent3 Available Virtual I/O Ethernet Adapter (l-lan)
lhea0 Available Logical Host Ethernet Adapter (l-hea)

# entstat -d ent1 |grep Trunk
Trunk Adapter: True
# entstat -d ent2 |grep Trunk
Trunk Adapter: True
# entstat -d ent1 |grep VLAN
Port VLAN ID: 1
VLAN Tag IDs: 10 11
# entstat -d ent2 |grep VLAN
Port VLAN ID: 2
VLAN Tag IDs: 12 13


On HMC:

hscroot@p7hmc:~>
lshwres -r virtualio --rsubtype eth -m p72 --level lpar |grep p72vio1
lpar_name=p72vio1,lpar_id=1,slot_num=2,state=1,is_required=0,is_trunk=1,trunk_priority=1,ieee_virtual_eth=1,port_vlan_id=1,vswitch=ETHERNET0,"addl_vlan_ids=10,11",mac_addr=761FA3934802,allowed_os_mac_addrs=all,qos_priority=none
lpar_name=p72vio1,lpar_id=1,slot_num=3,state=1,is_required=0,is_trunk=1,trunk_priority=1,ieee_virtual_eth=1,port_vlan_id=2,vswitch=ETHERNET0,"addl_vlan_ids=12,13",mac_addr=761FA3934803,allowed_os_mac_addrs=all,qos_priority=none
lpar_name=p72vio1,lpar_id=1,slot_num=4,state=1,is_required=0,is_trunk=0,ieee_virtual_eth=0,port_vlan_id=99,vswitch=ETHERNET0,addl_vlan_ids=,mac_addr=761FA3934804,allowed_os_mac_addrs=all,qos_priority=none

hscroot@p7hmc:~>
lshwres -r virtualio --rsubtype eth -m p72 --level lpar |grep p72vio2
lpar_name=p72vio2,lpar_id=2,slot_num=2,state=1,is_required=0,is_trunk=1,trunk_priority=2,ieee_virtual_eth=1,port_vlan_id=1,vswitch=ETHERNET0,"addl_vlan_ids=10,11",mac_addr=761FA4F6A602,allowed_os_mac_addrs=all,qos_priority=none
lpar_name=p72vio2,lpar_id=2,slot_num=3,state=1,is_required=0,is_trunk=1,trunk_priority=2,ieee_virtual_eth=1,port_vlan_id=2,vswitch=ETHERNET0,"addl_vlan_ids=12,13",mac_addr=761FA4F6A603,allowed_os_mac_addrs=all,qos_priority=none
lpar_name=p72vio2,lpar_id=2,slot_num=4,state=1,is_required=0,is_trunk=0,ieee_virtual_eth=0,port_vlan_id=99,vswitch=ETHERNET0,addl_vlan_ids=,mac_addr=761FA4F6A604,allowed_os_mac_addrs=all,qos_priority=none


4.) Creating SEA with Load Sharing mode

If all the Load sharing criteria are satisfied, then Load sharing can be enabled by setting the ha_mode attribute of the SEA device to "sharing". This value must be set on the primary SEA before it is set for the backup SEA, because the backup SEA initiates the request for load sharing. If this sequence is not followed (meaning it was not activated on the primary first), then a chdev must occur on the backup SEA for Load sharing to work.

- When the backup SEA is configured in Load sharing mode it initiates the request for Load sharing by sending a special packet over the control channel to the primary SEA.
- The packet contains the list of VLANs the backup SEA proposes to take over for bridging.
- When the primary SEA receives the request for sharing it verifies the request and grants the request if it meets the sharing criteria.
- After the primary SEA grants the request, it switches over to sharing mode and sends an ACK packet to backup SEA via the control channel.
- When the backup SEA receives the ACK packet it switches to sharing mode and starts bridging for VLANs it had proposed to bridge.
- From then on, heartbeats are exchanged between the primary and the backup SEA in load sharing mode.
- Unlike the traditional SEA failover mode, both the primary and the backup SEA send and receive heartbeats. Note that these heartbeats are in addition to the heartbeats sent by the primary SEA in traditional SEA failover mode.
- When either side (primary or backup) fails to receive load sharing heartbeats for a predetermined period of time, it is assumed that the other SEA has encountered a problem and it falls back to non-sharing mode and starts bridging traffic for all VLANs.
- To restart Load sharing, a chdev must occur on the backup SEA for the backup SEA to reinitiate Load sharing request.

On p72vio1:

$ mkvdev -sea ent0 -vadapter ent1,ent2 -default ent1 -defaultid 1 -attr ha_mode=sharing ctl_chan=ent3

ent4 Available
en4
et4

On p72vio2:

$ mkvdev -sea ent0 -vadapter ent1,ent2 -default ent1 -defaultid 1 -attr ha_mode=sharing ctl_chan=ent3

ent4 Available
en4
et4

How do I know that load sharing is working and how do I know which VLANs are bridged by each SEA?

--> Check the value of ha_mode attribute of the SEA device. It must be set to
"sharing" on both the primary SEA and the backup SEA
--> Run entstat -all (as padmin) or entstat -d (as root) for SEA device. If the SEA
is operating in Load sharing state, then its state must be either PRIMARY_SH or
BACKUP_SH
--> Run the entstat on the SEA device and look for values for VID Bridged. These are
the VLANs bridged by the SEA while SEA is operating in Load sharing mode.

On p72vio1:

# lsattr -El ent4
...
ctl_chan ent3 Control Channel adapter for SEA failover True
...
ha_mode sharing High Availability Mode True
...
pvid 1 PVID to use for the SEA device True
pvid_adapter ent1 Default virtual adapter to use for non-VLAN-tagged packets ...
real_adapter ent0 Physical adapter associated with the SEA True
...
virt_adapters ent1,ent2 List of virtual adapters associated with the SEA (comma separated)

# entstat -d ent4

State: PRIMARY_SH
Bridge Mode: Partial
VID shared: 1 10 11

High Availability Mode: Sharing
Priority: 1


On p72vio2:

# lsattr -El ent4
...
ctl_chan ent3 Control Channel adapter for SEA failover True
...
ha_mode sharing High Availability Mode True
...
pvid 1 PVID to use for the SEA device True
pvid_adapter ent1 Default virtual adapter to use for non-VLAN-tagged packets ...
real_adapter ent0 Physical adapter associated with the SEA True
...
virt_adapters ent1,ent2 List of virtual adapters associated with the SEA (comma separated)

# entstat -d ent4

State: BACKUP_SH
Bridge Mode: Partial
VID shared: 2 12 13

High Availability Mode: Sharing
Priority: 2


Some outputs in disaster cases

Shutdown the primary VIO server (p72vio1)

On p72vio2 (backup):
# errpt
E136EAFA 0112173112 I H ent4 BECOME PRIMARY

# entstat -d ent4

State: PRIMARY
Bridge Mode: All
High Availability Mode: Sharing
Priority: 2

Power on the primary VIO server (p72vio1)

On p72vio1 (primary):
# errpt
E136EAFA 0112105212 I H ent4 BECOME PRIMARY

# entstat -d ent4

State: PRIMARY_SH
Bridge Mode: Partial
VID shared: 1 10 11
High Availability Mode: Sharing
Priority: 1

On p72vio2 (backup):
# errpt
40D97644 0112174812 I H ent4 BECOME BACKUP

# entstat -d ent4

State: BACKUP_SH
Bridge Mode: Partial
VID shared: 2 12 13
High Availability Mode: Sharing
Priority: 2

Shutdown the backup VIO server (p72vio2)

On p72vio1 (primary):
# errpt
(no new entries)

# entstat -d ent4

State: PRIMARY
Bridge Mode: All
High Availability Mode: Sharing
Priority: 1

Power on the backup VIO server (p72vio2)

On p72vio1 (primary):
# errpt
(no new entries)

# entstat -d ent4

State: PRIMARY_SH
Bridge Mode: Partial
VID shared: 1 10 11
High Availability Mode: Sharing
Priority: 1

On p72vio2 (backup):
# errpt
40D97644 0112180112 I H ent4 BECOME BACKUP

# entstat -d ent4

State: BACKUP_SH
Bridge Mode: Partial
VID shared: 2 12 13
High Availability Mode: Sharing
Priority: 2


Enhanced test scenario (adding additional trunk adapter dynamically to the SEA)


Figure 6: Enhanced SEA Load Sharing Scenario



On p72vio1:

State: PRIMARY_SH
Bridge Mode: Partial
VID shared: 3 4 33 44 333

High Availability Mode: Sharing
Priority: 1

On p72vio2:

State: BACKUP_SH
Bridge Mode: Partial
VID shared: 1 2 10 11 12 13

High Availability Mode: Sharing
Priority: 2

Setup

1.) Create an additional trunk adapter on the first VIO server (p72vio1) with DLPAR:
HMC --> select p72vio1 --> Dynamic Logical Partitioning --> Virtual Adapters
--> Actions --> Create Virtual Adapter --> Ethernet Adapter

(Screenshots: Create additional trunk adapters on p72vio1)

2.) Create an additional trunk adapter on the second VIO server (p72vio2) with DLPAR:
HMC --> select p72vio2 --> Dynamic Logical Partitioning --> Virtual Adapters
--> Actions --> Create Virtual Adapter --> Ethernet Adapter

(Screenshots: Create additional trunk adapters on p72vio2)

3.) Add the adapters to the profile
(starting with HMC 7.7.3 the existing profile can be overwritten;
previously you needed to specify a new profile name)

On both VIO servers (p72vio1 and p72vio2):

HMC --> Configuration --> Save Current Configuration --> Overwrite Existing Profile

4.) Run cfgmgr (or cfgdev as padmin) on both VIO servers (p72vio1 and p72vio2):

# cfgmgr
# lsdev -Cc adapter |grep Ethernet
ent0 Available Logical Host Ethernet Port (lp-hea)
ent1 Available Virtual I/O Ethernet Adapter (l-lan)
ent2 Available Virtual I/O Ethernet Adapter (l-lan)
ent3 Available Virtual I/O Ethernet Adapter (l-lan)
ent4 Available Shared Ethernet Adapter
ent5 Available Virtual I/O Ethernet Adapter (l-lan)
ent6 Available Virtual I/O Ethernet Adapter (l-lan)
lhea0 Available Logical Host Ethernet Adapter (l-hea)

The new virtual adapters ent5 and ent6 are now available.
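
Before adding them to the SEA, the trunk flag and VLAN IDs carried by the new adapters can be checked the same way as earlier (a quick check as root; ent5 and ent6 are the adapter names reported by cfgmgr above):

# entstat -d ent5 | grep -E "Trunk|VLAN"
# entstat -d ent6 | grep -E "Trunk|VLAN"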


5.) Add the new trunk adapter to the SEA

On p72vio1:
$ chdev -dev ent4 -attr virt_adapters=ent1,ent2,ent5,ent6
ent4 changed
On p72vio2:
$ chdev -dev ent4 -attr virt_adapters=ent1,ent2,ent5,ent6
ent4 changed

How do I know that load sharing is working and how do I know which VLANs are bridged by each SEA?

On p72vio1:
# lsattr -El ent4
...
ctl_chan ent3 Control Channel adapter for SEA failover True
...
ha_mode sharing High Availability Mode True
...
pvid 1 PVID to use for the SEA device True
pvid_adapter ent1 Default virtual adapter to use for non-VLAN-tagged packets ...
real_adapter ent0 Physical adapter associated with the SEA True
...
virt_adapters ent1,ent2,ent5,ent6 List of virtual adapters associated with the SEA (comma separated)

# entstat -d ent4
...
State: PRIMARY_SH
Bridge Mode: Partial
VID shared: 3 4 33 44 333

High Availability Mode: Sharing
Priority: 1
...

On p72vio2:

# lsattr -El ent4
...
ctl_chan ent3 Control Channel adapter for SEA failover True
...
ha_mode sharing High Availability Mode True
...
pvid 1 PVID to use for the SEA device True
pvid_adapter ent1 Default virtual adapter to use for non-VLAN-tagged packets ...
real_adapter ent0 Physical adapter associated with the SEA True
...
virt_adapters ent1,ent2,ent5,ent6 List of virtual adapters associated with the SEA (comma separated)

# entstat -d ent4
...
State: BACKUP_SH
Bridge Mode: Partial
VID shared: 1 2 10 11 12 13

High Availability Mode: Sharing
Priority: 2
...



Some outputs in disaster cases

Shutdown the primary VIO server (p72vio1)

On p72vio2 (backup):
# errpt
E136EAFA 0113115612 I H ent4 BECOME PRIMARY

# entstat -d ent4

State: PRIMARY
Bridge Mode: All
High Availability Mode: Sharing
Priority: 2

Power on the primary VIO server (p72vio1)

On p72vio1 (primary):
# errpt
E136EAFA 0113050512 I H ent4 BECOME PRIMARY

# entstat -d ent4

State: PRIMARY_SH
Bridge Mode: Partial
VID shared: 3 4 33 44 333
High Availability Mode: Sharing
Priority: 1

On p72vio2 (backup):
# errpt
40D97644 0113120112 I H ent4 BECOME BACKUP

# entstat -d ent4

State: BACKUP_SH
Bridge Mode: Partial
VID shared: 1 2 10 11 12 13
High Availability Mode: Sharing
Priority: 2



Switching from SEA Load Sharing to SEA failover

Whether switching to auto (failover) or to load sharing mode, you have to change ha_mode on the primary SEA first, and only then on the backup SEA.


On p72vio1 (Primary):


$ chdev -dev ent4 -attr ha_mode=auto
ent4 changed

# lsattr -El ent4
...
ctl_chan ent3 Control Channel adapter for SEA failover True
...
ha_mode auto High Availability Mode True
...
pvid 1 PVID to use for the SEA device True
pvid_adapter ent1 Default virtual adapter to use for non-VLAN-tagged packets ...
real_adapter ent0 Physical adapter associated with the SEA True
...
virt_adapters ent1,ent2,ent5,ent6 List of virtual adapters associated with the SEA (comma separated)

# entstat -d ent4

State: PRIMARY
Bridge Mode: All

High Availability Mode: Auto
Priority: 1


On p72vio2 (Backup):


$ chdev -dev ent4 -attr ha_mode=auto
ent4 changed

# lsattr -El ent4
...
ctl_chan ent3 Control Channel adapter for SEA failover True
...
ha_mode auto High Availability Mode True
...
pvid 1 PVID to use for the SEA device True
pvid_adapter ent1 Default virtual adapter to use for non-VLAN-tagged packets ...
real_adapter ent0 Physical adapter associated with the SEA True
...
virt_adapters ent1,ent2,ent5,ent6 List of virtual adapters associated with the SEA (comma separated)

# entstat -d ent4

Limbo Packets: 0
State: BACKUP
Bridge Mode: None

High Availability Mode: Auto
Priority: 2


Switching from SEA failover to SEA Load Sharing


On p72vio1 (Primary):


$ chdev -dev ent4 -attr ha_mode=sharing
ent4 changed


# lsattr -El ent4
...
ctl_chan ent3 Control Channel adapter for SEA failover True
...
ha_mode sharing High Availability Mode True
...
pvid 1 PVID to use for the SEA device True
pvid_adapter ent1 Default virtual adapter to use for non-VLAN-tagged packets ...
real_adapter ent0 Physical adapter associated with the SEA True
...
virt_adapters ent1,ent2,ent5,ent6 List of virtual adapters associated with the SEA (comma separated)

# entstat -d ent4

State: PRIMARY
Bridge Mode: All

High Availability Mode: Sharing
Priority: 1



On p72vio2 (Backup):

$ chdev -dev ent4 -attr ha_mode=sharing
ent4 changed

# lsattr -El ent4
...
ctl_chan ent3 Control Channel adapter for SEA failover True
...
ha_mode sharing High Availability Mode True
...
pvid 1 PVID to use for the SEA device True
pvid_adapter ent1 Default virtual adapter to use for non-VLAN-tagged packets ...
real_adapter ent0 Physical adapter associated with the SEA True
...
virt_adapters ent1,ent2,ent5,ent6 List of virtual adapters associated with the SEA (comma separated)

# entstat -d ent4

State: BACKUP_SH
Bridge Mode: Partial
VID shared: 1 2 10 11 12 13

High Availability Mode: Sharing
Priority: 2



On p72vio1 (Primary):

# entstat -d ent4

State: PRIMARY_SH
Bridge Mode: Partial
VID shared: 3 4 33 44 333

High Availability Mode: Sharing
Priority: 1




FAQs

1) What is the algorithm that the VIOS pair uses to determine which VIOS should "own" the VIDs?
The VIOS pair divides the set of trunk adapters in two: one VIOS bridges all VLANs of one half of the trunk adapters, and the other VIOS bridges the VLANs of the other half. In the example above, with four trunk adapters, each SEA bridges the VLANs of two of them.

2) As time goes on will the list of VID's change for each VIOS? (related to my algorithm question).
Not if the SEA configuration remains the same.

3) In load sharing mode is there any potential for broadcast storm if either of the VIOS are rebooted? (one at a time).
No.

4) Before rebooting each VIOS (one at a time) do I need to switch it back to "auto" mode?
No.

5) If I want to rmdev the SEA can I do this when it's in "sharing" mode on either VIOS? Based on what I've read I'd have to do this on the Primary SEA VIOS (with the lowest priority) first. In "auto" mode I can run rmdev against the SEA on either VIOS.
Yes.


Changes Section:

July 31, 2012
a) Corrections to figures 4, 5 and 6
b) Added "For maintainability we only have tagged traffic in our scenario"
   in the section "Simple test scenario from scratch".
c) Figure 2: on the first VIOS the PVID was empty in the trunk adapter and has now been added.
d) Figure 3 (multiple virtual Ethernet switches) had an incorrect PVID in the SEA on the second VIOS
   (PVID 2 instead of PVID 1).


References
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101752
http://publib.boulder.ibm.com/infocenter/powersys/v3r1m5/index.jsp?topic=/p7hb1/iphb1_vios_scenario_sea_load_sharing.htm
http://www.redbooks.ibm.com/abstracts/sg247590.html
Redbook - IBM PowerVM Virtualization Managing and Monitoring

Related information

Using Virtual Switches in PowerVM to Drive Maximum Value
Configuring Shared Ethernet Adapter failover with load sharing
IBM PowerVM Virtualization Managing and Monitoring

Original publication date

2012/2/20 




Checklist when LPM (Live Partition Mobility) does not work:

1. In the virtual adapter settings of the source server's VIOS and the target server's VIOS:
      > every vSCSI client adapter must be mapped to 'Any Partition' && 'Any Partition Slot'
      > the "required" attribute must be set to 'No'



2. When configuring the VIO, the MPIO disk attribute must be no_reserve !!!
    >  lsdev -Cc disk | grep MPIO | awk '{print "chdev -l " $1 " -a reserve_policy=no_reserve "}' | sh -x
    >  lsdev -Cc disk | grep MPIO | awk '{print "chdev -l " $1 " -a algorithm=round_robin "}' | sh -x


3. Each VIOS must be able to communicate with the HMC over SSH
       && the /.ssh/known_hosts file must exist on the partition that is the LPM target
             (create it by running: ssh hscroot@hmc.ipaddr)

 


http://www.ibm.com/developerworks/aix/library/au-LPM_troubleshooting/index.html 

Basic understanding and troubleshooting of LPM

Introduction

Live Partition Mobility (LPM) was introduced with POWER6. It helps avoid downtime during VIOS and firmware updates by letting you migrate running partitions to other frames. LPM also reduces the work that would otherwise be required to create a new LPAR and perform the setup that the application needs.

A majority of customers perform LPM activities on a daily basis, and many may not know the exact procedure or what is taking place. This article shows steps to overcome or fix LPM issues.


Figure 1. The AIX I/O stack

LPM key points

Things to remember about LPM are that it migrates running partitions from one physical server to another while maintaining complete transactional integrity and transfers the entire environment: processor state, memory, virtual devices, and connected users. Partitions may also migrate while powered off (inactive migration), and the operating system and application must reside on shared storage.

LPM prerequisites

You must have a minimum of two machines, a source and a destination, on POWER6 or higher with the Advanced Power Virtualization Feature enabled. The operating system and application must reside on a shared external storage (Storage Area Network). In addition to these hardware requirements, you must have:

  • One hardware management console (optional) or IVM.
  • Target system must have sufficient resources, like CPU and memory.
  • LPAR should not have physical adapters.

Your virtual I/O servers (VIOS) must have a Shared Ethernet Adapter (SEA) configured to bridge to the same Ethernet network which the mobile partition uses. It must be capable of providing virtual access to all the disk resources which the mobile partition uses (NPIV or vSCSI). If you are using vSCSI, then the virtual target devices must be physical disks (not logical volumes).
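
A quick way to confirm the vSCSI side of this on the source VIOS (a sketch, run as padmin; the grep patterns simply pick out the mapping lines from the standard lsmap output):

$ lsmap -all | grep -E "VTD|Backing device|Status"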

You must be at AIX version 5.3J or later, VIOS version 1.4 or later, HMC V7R310 or later and the firmware at efw3.1 or later.

What happens at the time of LPM


Figure 2. General LPM depiction

The following describes the general LPM depiction in Figure 2:

  1. Partition profile (presently active) copied from source to target FSP.
  2. Storage is configured on the Target.
  3. Mover service partitions (MSP) is activated.
  4. Partition migration started.
    1. Majority of memory pages moved.
    2. All threads piped down.
  5. Activation resumed on target.
    1. Final memory pages moved.
    2. Cleanup storage and network traffic.
  6. Storage resources are deconfigured from the source.
  7. Partition profile removed from source FSP (Flexible Service Processor).
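
While an active migration is in progress, its state can also be watched from the HMC command line (a sketch using the managed system and partition names that appear in the examples below):

lslparmigr -r lpar -m trim --filter "lpar_names=upt0052"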

How to do LPM

Before doing LPM, we need to verify the availability of resources on both the source and destination side. If validation fails with some error, then we have to fix it to proceed further. Sometimes validation may end up with warning messages which can be ignored.

LPM using HMC GUI

Figure 3 shows you how to validate the LPAR with the HMC GUI.

From the Systems Management -> Servers -> trim screen, select the LPAR name, then Operations -> Mobility -> Validate.


Figure 3. Validating the LPAR

The validate screen, shown in Figure 4, shows that the upt0052 LPAR is being validated for migration from trim to dash; if needed, we also have to specify the destination HMC here.


Figure 4. Validation window

Figure 5 shows that the validation has ended with a warning message; ignore the message and select Close to continue with the migration.


Figure 5. Validation passed with general warning message

Figure 6, the Partition Migration Validation screen, shows the information that has been selected to set up a migration of the partition to a different managed system. Review the information and select Migrate to start the migration.


Figure 6. Ready for migration after validation passed

When the migration completes, as shown in Figure 7, select Close.


Figure 7. Migration progressing

Using the HMC command line

To validate the LPM on the local HMC, enter the following:

migrlpar -o v -m [source cec] -t [target cec] -p [lpar to migrate]

To validate the LPM through a remote HMC, type:

migrlpar -o v -m [source cec] -t [target cec] -p [lpar to migrate] \
> --ip [target hmc] -u [remote user]

Note: you will usually use hscroot as the remote user.

Use the following migration command for LPM in the local HMC:

migrlpar -o m -m [source cec] -t [target cec] -p [lpar to migrate] 

The following migration command for LPM is used with the remote HMC:

migrlpar -o m -m [source cec] -t [target cec] -p [lpar to migrate] \
> --ip [target hmc] -u [remote user]

In case of an MPIO (Multipath I/O) failure of an LPAR due to configuration issues between source and destination, type the following to proceed (if applicable):

migrlpar -o m -m wilma -t visa -p upt07 --redundantpgvios 0 -n upt07_noams_npiv \
-u hscroot --vlanbridge 2 --mpio 2 -w 60 -d 5 -v -i \
"source_msp_name=wilmav2,dest_msp_name=visav2" --ip destiny4


Troubleshooting

This section covers various error messages you might encounter and ways to correct them.

  • If LPM needs to be done across two different HMCs, the appropriate SSH authorization between the HMCs needs to be set up for the migration. If proper authorization is not set, the following mkauthkeys error displays:
    hscroot@destiny4:~> migrlpar -o v -m trim -p  UPT0052 --ip hmc-arizona -u
    hscroot -t arizona
    			        
    HSCL3653 The Secure Shell (SSH) communication configuration between the source
    and target Hardware Management Consoles has not been set up properly for user
    hscroot. Please run the mkauthkeys command to set up the SSH communication
    authentication keys.
    

    To overcome this error, type the following:

    hscroot@destiny4:~> mkauthkeys -g --ip hmc-arizona -u hscroot
    Enter the password for user hscroot on the remote host hmc-arizona
    			    

  • If migrating a POWER7 Active Memory Expansion (AME) partition to any of the POWER6 machines, the following error displays:
    hscroot@destiny4:~> migrlpar -o v -m trim -p  
        UPT0052 --ip hmc-liken -u hscroot -t wilma
       
    HSCLA318 The migration command issued to the destination HMC failed with the 
    following error: HSCLA335 The Hardware Management Console for the destination 
    managed system does not support one or more capabilities required to perform 
    this operation. The unsupported capability codes are as follows: AME_capability
    hscroot@destiny4:~>
       

    To correct this error either migrate to POWER7 or remove the AME and then migrate.
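
    Whether AME is actually enabled on the partition can be checked from the LPAR itself (a sketch; run as root on AIX, where an expansion factor only appears when AME is active):

    # lparstat -i | grep -iE "memory mode|expansion"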

  • If you are doing a migration of an Active Memory Sharing (AMS) partition with improper AMS setup or no free paging device on the destination side, the following error displays:
    hscroot@hmc-liken:~> migrlpar -o v -m wilma -t visa --ip destiny4 -u hscroot -p
    upt0060 --mpio 2
         
    Errors:
    HSCLA304 A suitable shared memory pool for the mobile partition was not found on the
    destination managed system. In order to support the mobile partitions, the
    destination managed system must have a shared memory pool that can accommodate the
    partition's entitled and maximum memory values, as well as its redundant paging
    requirements. If the destination managed system has a shared memory pool, inability
    to support the mobile shared memory partition can be due to lack of sufficient memory
    in the pool, or lack of a paging space device in the pool that meets the mobile
    partition's redundancy and size requirements. 
         
    Details:
    HSCLA297 The DLPAR Resource Manager (DRM) capability bits 0x0 for mover service
    partition (MSP) visav2 indicate that partition mobility functions are not supported
    on the partition.
    HSCLA2FF An internal Hardware Management Console error has occurred. If this error
    persists, contact your service representative. 
         

    To correct this error do either, or both, of the following:

    • Since this problem is related to the redundant AMS setup, the destination managed system should have a Shared Memory Pool defined with two paging VIOS partitions for high availability (HMC-managed systems only); users can then select a primary and an alternate paging VIOS for each shared memory partition. For details about AMS, refer to "Configuring Active Memory Sharing from a customer's experience" (developerWorks, Aug 2009).
    • Sufficient paging space for the mobile partition's paging device must be present in the target AMS pool.
  • If we try to migrate an LPAR from a POWER7 to a POWER6 CPU (a processor compatibility level the destination does not support), we get the following error:
    hscroot@destiny4:~> migrlpar -o v -m dash -t arizona --ip hmc-arizona -u hscroot
    -p upt0053
            
    Errors:
    HSCLA224 The partition cannot be migrated because it has been designated to use a 
    processor compatibility level that is not supported by the destination managed 
    system. Use the HMC to configure a level that is compatible with the destination 
    managed system. 
    

    The solution for the above error could be one of the following:

    • Migrate to POWER7.
    • Change the processor mode to appropriate mode (as in the destination managed system).

      The steps to change processor mode in HMC GUI are:

      • Select the LPAR and deactivate it.
      • Go to Configuration -> Manage Profiles.
      • Select the profile that needs to be activated.
      • Go to Processors, change the Processor compatibility mode: to the required setting and boot it using the same profile.
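
      The current and preferred compatibility modes can also be checked from the HMC command line (a sketch; 'dash' and 'upt0053' are the system and partition names from the error above):

      lssyscfg -r lpar -m dash -F name,curr_lpar_proc_compat_mode,desired_lpar_proc_compat_mode --filter "lpar_names=upt0053"
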
    • LPAR should have the same shared vSCSI disks on source and destination MSPs:
      hscroot@destiny4:~> migrlpar -o v -m dash -t arizona --ip hmc-arizona -u hscroot
      -p upt0058
              
      Errors:
      The migrating partition's virtual SCSI adapter cannot be hosted by the existing 
      virtual I/O server (VIOS) partitions on the destination managed system. To 
      migrate the partition, set up the necessary VIOS hosts on the destination 
      managed system, then try the operation again. 
              
      Details:
      HSCLA356 The RMC command issued to partition arizona failed. This means that 
      destination VIOS partition arizona2 cannot host the virtual adapter 6 on the 
      migrating partition.
              
      HSCLA29A The RMC command issued to partition failed. 
      The partition command is:
      migmgr -f find_devices -t vscsi -C 0x3 -d 1
      The RMC return code is:
      0
      The OS command return code is:
      85
      The OS standard out is:
      Running method '/usr/lib/methods/mig_vscsi'
      85
      The OS standard err is:
              
      The search was performed for the following device descriptions:
              <v-scsi-host>
                       <generalInfo>    
                          <version>2.0 </version>
                          <maxTransfer>262144</maxTransfer>
                          <minVIOSpatch>0</minVIOSpatch>
                          <minVIOScompatability>1</minVIOScompatability>
                          <effectiveVIOScompatability>1</effectiveVIOScompatability>
                        <generalInfo>
                        <ras>
                              <partitionID>2</partitionID>
                         </ras>
                         <virtDev>
                                  <vLUN>
                                              <LUA>0x81000000000000000</LUA>
                                              <LUNState>0</LUNState>
                                              <clientReserve>no</clientReserve>
                                              <AIX>
                                                      <type>vdasd</type>
                                                      <connWhere>1</connWhere>
                                              </AIX>
                                  </vLUN>
                                  <blockStirage>
                                              <reserveType>NI_RESERVE</reserveType>
                                              <AIX>
      
                                 <udid>261120017380003D30194072810XIV03IBMfcp</udid>
                                                      <type>UDID</type>
                                              </AIX>
                                  </blockStirage>
                          </virtDev>
           </v-scsi-host>
                                     
              

      And, the solution is as follows:

      • Make sure destination MSP has access to same vSCSI disks as source MSP.
      • Also make sure disks are not reserved.
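
      A disk reserve that blocks the validation can be spotted and cleared on the VIOS (a sketch as padmin; the hdisk number is illustrative):

      $ lsdev -dev hdisk4 -attr reserve_policy
      $ chdev -dev hdisk4 -attr reserve_policy=no_reserve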

    In cases where the mapping is correct and you are still getting the same error, it may be due to having different types of FC adapters between the source and destination MSP. For mapping methods, refer to the last note of the "Troubleshooting" section.

  • If the destination CEC does not have sufficient processing resources for the LPAR, then we get the following error:
       
    hscpe@destiny4:~> migrlpar -o v -m dash -t wilma -p upt0053 --ip defiant2 -u
    hscroot
    Errors:
    The partition cannot be migrated because the processing resources it requires 
    exceeds the available processing resources in the destination managed system's 
    shared processor pool. If possible, free up processing resources from that shared 
    processor pool and try the operation again.    
            

    And the solution is:

    • We need to reduce the processing units of the LPAR by DLPAR or change its profile.
    • We can free up processing resources on the destination machine by reducing the processor units of a few of its client LPARs using DLPAR operations (if applicable).
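
    The processing units currently available on the destination managed system can be checked from the HMC before retrying (a sketch; 'wilma' is the destination system in the error above):

    lshwres -r proc -m wilma --level sys -F curr_avail_sys_proc_units
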
  • If the destination CEC does not have sufficient memory, then:
       
    hscpe@destiny4:~> migrlpar -o v -m extra5 -t dash -p upt0027
    Errors:
    There is not enough memory: Obtained: 2816, Required: 4608.  Check that there is 
    enough memory available to activate the partition. If not, create a new profile or 
    modify the existing profile with the available resources, then activate the 
    partition. If the partition must be activated with these resources, deactivate any 
    running partition or partitions using the resource, then activate this partition. 
    

    And, the solution is either:

    • We need to reduce the amount of memory in LPAR by using DLPAR operation or by changing the profile; or,
    • We can increase the memory at the destination machine by reducing the memory of any other LPARs using DLPAR operation.
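
    Similarly, the memory currently available on the destination managed system can be checked before retrying (a sketch; 'dash' is the destination system in the error above):

    lshwres -r mem -m dash --level sys -F curr_avail_sys_mem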

    If the RMC (Resource Monitoring and Control) connection is not established among the source and target VIOSs and the LPAR, then we may get the following error:

       
    hscpe@destiny4:~> migrlpar -o v -m dash -t trim -p upt0053
    Errors: 
    The operation to check partition upt0053 for migration readiness has failed. 
    The partition command is:
    drmgr -m -c pmig -p check -d 1
    The partition standard error is:
              
    HSCLA257 The migrating partition has returned a failure response to the HMC's
    request to perform a check for migration readiness. The migrating partition is
    not ready for migration at this time. Try the operation again later.
              
    Details:
    HSCLA29A  The RMC command issued to partition upt0053 failed.
    The partition command is:
    drmgr -m -c pmig -p check -d 1
    The RMC return code is:
    1141
    The OS command return code is:
    0
    The OS standard out is:
    Network interruption occurs while RMC is waiting for the execution of the command
    on the partition to finish.
    Either the partition has crashed, the operation has caused CPU starvation, or
    IBM.DRM has crashed in the middle of the operation.
    The operation could have completed successfully. (40007) (null)
    The OS standard err is:
              

    To fix this problem, refer to "Dynamic LPAR tips and checklists for RMC authentication and authorization" (developerWorks, Feb 2005) for more information.
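
    The state of the RMC connection can be checked quickly from both sides (a sketch; the first command runs on the HMC, the other two as root on the LPAR):

    lspartition -dlpar
    # lssrc -s ctrmc
    # /usr/sbin/rsct/bin/rmcdomainstatus -s ctrmc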

    • If the partition you are trying to migrate has MPIO with a dual-VIOS setup, and the target has dual VIOS but is not set up properly for MPIO, then we may get the error listed below:
         
      hscroote@hmc-liken:~> migrlpar -o v -m wilma -t visa --ip destiny4 -u hscroot -p
      upt0060
      Errors:
      HSCLA340 The HMC may not be able to replicate the source multipath I/O
      configuration for the migrating partition's virtual I/O adapters on the 
      destination. This means one or both of the following: (1) Client adapters 
      that are assigned to different source VIOS hosts may be assigned to a single 
      VIOS host on the destination; (2) Client adapters that are assigned to a single 
      source VIOS host may be assigned to different VIOS hosts on the destination. 
      You can review the complete list of HMC-chosen mappings by issuing the command 
      to list the virtual I/O mappings for the migrating partition. 
      HSCLA304 A suitable shared memory pool for the mobile partition was not found 
      on the destination managed system. In order to support the mobile partition, 
      the destination managed system must have a shared memory pool that can 
      accommodate the partition's entitled and maximum memory values, as well as its 
      redundant paging requirements. If the destination managed system has a shared 
      memory pool, inability to support the mobile shared memory partition can be due 
      to lack of sufficient memory in the pool, or lack of a paging space device in 
      the pool that meets the mobile partition's redundancy and size requirements. 
      Details:
      HSCLA297 The DLPAR Resource Manager (DRM) capability bits 0x0 for mover service
      partition (MSP) visav2 indicate that partition mobility functions are not 
      supported on the partition.
      HSCLA2FF  An internal Hardware Management Console error has occurred. If this 
      error persists, contact your service representative. 
      Warning:
      HSCLA246  The HMC cannot communicate migration commands to the partition visav2.
      Either the network connection is not available or the partition does not have a 
      level of software that is capable of supporting partition migration. Verify the 
      correct network and migration setup of the partition, and try the operation 
      again. 
                

      The solution is:

      • Check the correctness of dual VIOS, availability of adapters, mappings in SAN and switch.

      If above solution is not feasible to implement then:

      • Use --mpio 2 with the migrlpar command. By using this, we may lose the dual-VIOS setup for the MPIO disks; generally this is not a solution recommended by PowerVM.
    • If the source VIOS has a non-recommended NPIV configuration, we will get the following error:
         
      hscroote@hmc-liken:~> migrlpar -o v -m wilma -t visa --ip destiny4 -u hscroot -p
      upt0060
      Errors:
      HSCLA340 The HMC may not be able to replicate the source multipath I/O
      configuration for the migrating partition's virtual I/O adapters on the 
      destination. This means one or both of the following: (1) Client adapters 
      that are assigned to different source VIOS hosts may be assigned to a single 
      VIOS host on the destination; (2) Client adapters that are assigned to a single 
      source VIOS host may be assigned to different VIOS hosts on the destination. 
      You can review the complete list of HMC-chosen mappings by issuing the command 
      to list the virtual I/O mappings for the migrating partition. 
      HSCLA304 A suitable shared memory pool for the mobile partition was not found 
      on the destination managed system. In order to support the mobile partition, 
      the destination managed system must have a shared memory pool that can 
      accommodate the partition's entitled and maximum memory values, as well as its 
      redundant paging requirements. If the destination managed system has a shared 
      memory pool, inability to support the mobile shared memory partition can be due 
      to lack of sufficient memory in the pool, or lack of a paging space device in 
      the pool that meets the mobile partition's redundancy and size requirements. 
      Details:
      HSCLA297 The DLPAR Resource Manager (DRM) capability bits 0x0 for mover service
      partition (MSP) visav2 indicate that partition mobility functions are not 
      supported on the partition.
      HSCLA2FF  An internal Hardware Management Console error has occurred. If this 
      error persists, contact your service representative. 
      Warning:
      HSCLA246  The HMC cannot communicate migration commands to the partition visav2.
      Either the network connection is not available or the partition does not have a 
      level of software that is capable of supporting partition migration. Verify the 
      correct network and migration setup of the partition, and try the operation 
      again. 
                

      When we verify in VIOS:

      lsmap -all -npiv

      Name        Physloc                           ClntID  ClntName     ClntOS
      ----------- --------------------------------- ------- ------------ ------
      vfchost3    U9117.MMB.100302P-V1-C14             5      upt0052      AIX
      
      Status:LOGGED_IN
      FC name:fcs0                   FC  loc code:U78C0.001.DBJ0563-P2-C1-T1
      Ports logged in:35
      Flags:a<LOGGED_IN,STRIP_MERGE>
      VFC client name:fcs1            VFC client DRC:U8233.E8B.100244P-V5-C4-T1
          
          

      Name        Physloc                           ClntID  ClntName     ClntOS
      ----------- --------------------------------- ------- ------------ ------
      vfchost3    U9117.MMB.100302P-V1-C13
      
      Status:LOGGED_IN
      FC name:fcs0                   FC  loc code:U78C0.001.DBJ0563-P2-C1-T1
      Ports logged in:0
      Flags:4<NOT_LOGGED>
      VFC client name:                VFC client DRC
      
      

      Here the problem is that vfchost3 and vfchost8 are both mapped to the same host (upt0058) and to the same physical FC port (fcs0). This is not the recommended setup. To fix this, use either of these methods:

      • We need to map one of the vfchost adapters to another FC port (fcs1) on the server that is connected to the switch.
      • We can remove one of the vfchost adapters through DLPAR.
    • The following error basically indicates an incompatibility between the source and target FC adapters. The incompatibility can be due to a number of differences in the characteristics of the FC adapters (for many kinds of FC incompatibility or mapping problems, we may get a return code of 69):
      hscroot@guandu5:~> migrlpar -o v -m flrx -t dash --ip destiny4  -u hscroot -p 
          upt0064
      HSCLA319 The migrating partition's virtual fibre channel client adapter 4 
      cannot be hosted by the existing Virtual I/O Server (VIOS) partitions on 
      the destination managed system. To migrate the partition, set up the 
      necessary VIOS host on the destination managed system, then try the 
      operation again. 
      HSCLA319 The migrating partition's virtual fibre channel client adapter 3 
      cannot be hosted by the existing Virtual I/O Server (VIOS) partitions on 
      the destination managed system. To migrate the partition, set up the 
      necessary VIOS host on the destination managed system, then try the 
      operation again. 
           
      Details:
      HSCLA356 The RMC command issued to partition dashv1 failed. This means that
      destination VIOS partition dashv1 cannot host the virtual adapter 4 on the 
      migrating partition. 
      HSCLA29A The RMC command issued to partition dashv1 failed. 
      The partition command is:
      migmgr -f find_devices -t vscsi -C 0x3 -d 1
      The RMC return code is:
      0
      The OS command return code is:
      69
      The OS standard out is:
      Running method '/usr/lib/methods/mig_vscsi'
      69
           
      The OS standard err is:
           
           
      The search was performed for the following device description:
            <vfc-server>
                     <generalInfo>    
                         <version>2.0 </version>
                         <maxTransfer>1048576</maxTransfer>
                         <minVIOSpatch>0</minVIOSpatch>
                         <minVIOScompatability>1</minVIOScompatability>
                         <effectiveVIOScompatability>-1</effectiveVIOScompatability>
                         <numPaths>1</numPaths>
                         <numPhysAdapPaths>1</numPhysAdapPaths>
                         <numWWPN>34</numWWPN>
                         <adpInterF>2</adpInterF>
                         <adpCap>5</adpCap>
                         <linkSpeed>400</linkSpeed>
                         <numIniat>6</numIniat>
                         <activeWWPN>0xc0507601a6730036</activeWWPN>
                         <inActiveWWPN>0xc0507601a6730037</inActiveWWPN>
                         <nodeName>0xc0507601a6730036</nodeName>
                         <streamID>0x0</streamID>
                      <generalInfo>
                        <ras>
                              <partitionID>1</partitionID>
                         </ras>
                        <wwpn_list>
                                      <wwpn>0x201600a0b84771ca</wwpn>
                                      <wwpn>0x201700a0b84771ca</wwpn>
                                      <wwpn>0x202400a0b824588d</wwpn>
                                      <wwpn>0x203400a0b824588d</wwpn>
                                      <wwpn>0x202500a0b824588d</wwpn>
                                      <wwpn>0x203500a0b824588d</wwpn>
                                      <wwpn>0x5005076303048053</wwpn>
                                      <wwpn>0x5005076303098053</wwpn>
                                      <wwpn>0x5005076303198053</wwpn>
                                      <wwpn>0x500507630319c053</wwpn>
                                      <wwpn>0x500507630600872d</wwpn>
                                      <wwpn>0x50050763060b872d</wwpn>
                                      <wwpn>0x500507630610872d</wwpn>
                                      <wwpn>0x5005076306ib872d</wwpn>
                                      <wwpn>0x500a098587e934b3</wwpn>
                                      <wwpn>0x500a098887e934b3</wwpn>
                                      <wwpn>0x20460080e517b812</wwpn>
                                      <wwpn>0x20470080e517b812</wwpn>
                                      <wwpn>0x201400a0b8476a74</wwpn>
                                      <wwpn>0x202400a0b8476a74</wwpn>
                                      <wwpn>0x201500a0b8476a74</wwpn>
                                      <wwpn>0x202500a0b8476a74</wwpn>
                                      <wwpn>0x5005076304108e9f</wwpn>
                                      <wwpn>0x500507630410ce9f</wwpn>
                                      <wwpn>0x50050763043b8e9f</wwpn>
                                      <wwpn>0x50050763043bce9f</wwpn>
                                      <wwpn>0x201e00a0b8119c78</wwpn>
                                      <wwpn>0x201f00a0b8119c78</wwpn>
                                      <wwpn>0x5001738003d30151</wwpn>
                                      <wwpn>0x5001738003d30181</wwpn>
                                      <wwpn>0x5005076801102be5</wwpn>
                                      <wwpn>0x5005076801102dab</wwpn>
                                      <wwpn>0x5005076801402be5</wwpn>
                                      <wwpn>0x5005076801402dab</wwpn>
                          </wwpn_list>
                          
         <vfc-server>
          

      The solution can be any one of the following (or the operation may still fail due to other mismatched characteristics of the target FC adapters):

      • Make sure the characteristic of FC adapter is the same between source and target.
      • Make sure the source and target adapters reach the same set of targets (check the zoning).
      • Make sure that the FC adapter is connected properly.

      Sometimes the configuration log at the time of validation or migration is required to debug the errors. To get the log, run the following command from source MSP:

      alog -t cfg -o > cfglog
      

      NPIV mapping steps for LPM:

      1. Zone both NPIV WWN (World Wide Name) and SAN WWN together.
      2. Mask the LUNs and the NPIV client WWNs together.
      3. Make sure the source and target VIOS have a path to the SAN subsystem.

      vSCSI mapping steps for LPM:

      1. Zone both source and target VIOS WWN and SAN WWN together.
      2. Make sure LUN is masked with source and target VIOS together from SAN subsystem.

LPM enhancement in POWER7

As noted in the LPM prerequisites, the LPAR should not have any physical adapters; on POWER7, however, it can have a Host Ethernet Adapter (Integrated Virtual Ethernet) attached. A POWER7 LPAR that you want to migrate to another POWER7 system can keep its HEA, but you must create an EtherChannel over the HEA and a newly created virtual adapter in aggregation mode. After the migration, only the virtual adapter and the EtherChannel with the IP configured on it are present on the target; the HEA itself is not migrated. Also make sure the VLANs used by the virtual adapters in the EtherChannel are added to both the source and target VIOS.
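
The VLAN carried by the virtual adapter that backs the EtherChannel can be confirmed on the client LPAR, so that it can be matched against the SEA configuration on both the source and target VIOS (a quick check as root; ent4 is one of the virtual adapters shown in the listing below):

# entstat -d ent4 | grep -E "Port VLAN ID|VLAN Tag IDs"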

Before LPM:

# lsdev -Cc adapter
ent0  Available       Logical Host Ethernet Port (lp-hea)
ent1  Available       Logical Host Ethernet Port (lp-hea)
ent2  Available       Logical Host Ethernet Port (lp-hea)
ent3  Available       Logical Host Ethernet Port (lp-hea)
ent4  Available       Virtual I/O Ethernet Port (l-lan)
ent5  Available       Virtual I/O Ethernet Port (l-lan)
ent6  Available       Virtual I/O Ethernet Port (l-lan)
ent7  Available       Virtual I/O Ethernet Port (l-lan)
ent8  Available       EtherChannel / 802.3ad Link Aggregation
ent9  Available       EtherChannel / 802.3ad Link Aggregation
ent10 Available       EtherChannel / 802.3ad Link Aggregation
ent11 Available       EtherChannel / 802.3ad Link Aggregation
fcs0  Available C3-T1 Virtual Fibre Channel Adapter
fcs1  Available C3-T1 Virtual Fibre Channel Adapter
lhea0 Available       Logical Host Ethernet Adapter (l-hea)
lhea1 Available       Logical Host Ethernet Adapter (l-hea)    
vsa0  Available       LPAR Virtual Serial Adapter
[root@upt0017] /

In this case, doing LPM is a bit different from the earlier method: it has to be done from the LPAR itself using smitty (also called client-side LPM), not from the HMC. The LPAR must have the SSH filesets installed to do LPM through smitty:

openssh.base.client
openssh.base.server
openssh.license
openssh.man.en_US
openssl.base
openssl.license
openssl.man.en_US
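
Whether these filesets are already installed can be verified with lslpp (run as root on the LPAR; the fileset names are the ones listed above):

# lslpp -L openssh.base.client openssh.base.server openssl.base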

Use smitty to migrate a POWER7 LPAR with HEA. smit --> Applications is the first step to do LPM from smitty.

# smit

System Management
Move cursor to desired item and press Enter
  
  Software Installation and Maintenance
  Software License Management
  Manage Edition
  Devices
  System Storage Management (Physical & Logical Storage)
  Security & Users
  Communication Applications and Services
  Workload Partition Administration
  Print Spooling
  Advanced Accounting
  Problem Determination
  Performance & Resource Scheduling
  System Environments
  Processes & Subsystems
  Applications
  Installation Assistant
  Electronic Service Agent
  Cluster Systems Management
  Using SMIT (information only)
  
 

After selecting "Applications", then select "Live Partition Mobility with Host Ethernet Adapter (HEA)" to proceed.

Move cursor to desired item and press Enter

Live Partition Mobility with Host Ethernet Adapter (HEA)
    

Next, enter the required fields such as the source and destination HMC hostnames and HMC users, the source and destination managed system names, and the LPAR name.

                   Live Partition Mobility with Host Ethernet Adapter (HEA)    
 
Type or select values in the entry fields.
Press Enter AFTER making all desired changes

                                                     [Entry Fields]
* Source HMC Hostname or IP address                [destinty2]
* Source HMC Username             [hscroot]
* Migration between two HMCs                         no
        Remote HMC hostname or IP address          [ ]
        Remote HMC Username                        [ ]
*Source System                                     [link]
* Destination System                               [king]
* Migrating Partition Name                         [upt0017]
* Migration validation only                          yes

Once the migration is successful, the smitty command output says OK.

Command Status
                                           
Command: OK            stdout: yes           Stderr: no
Before command completion, additional instruction may appear below.

Setting up SSH credentials with destinty2
If prompted for a password, please enter password for user hscroot on HMC destinty2
Verifying EtherChannel configuration ...
Modifying EtherChannel configuration for mobility ...
Starting partition mobility process. This process is complete.
DO NOT halt or kill the migration process. Unexpected results may occur if the migration
process is halted or killed.
Partition mobility process is complete. The partition has migrated.

After a successful LPM, all the HEA ports will be in the Defined state, but the EtherChannel between the HEA and the virtual adapter still exists and the IP address is still configured on the EtherChannel.

      
[root@upt0017] /
# lsdev -Cc adapter

ent0   Defined             Logical Host Ethernet Port  (lp-hea)
ent1   Defined             Logical Host Ethernet Port  (lp-hea)
ent2   Defined             Logical Host Ethernet Port  (lp-hea)
ent3   Defined             Logical Host Ethernet Port  (lp-hea)
ent4   Available           Virtual I/O Ethernet Adapter  (l-lan)
ent5   Available           Virtual I/O Ethernet Adapter  (l-lan)
ent6   Available           Virtual I/O Ethernet Adapter  (l-lan)
ent7   Available           Virtual I/O Ethernet Adapter  (l-lan)
ent8   Available           EtherChannel  /  IEEE 802.3ad Link Aggregation
ent9   Available           EtherChannel  /  IEEE 802.3ad Link Aggregation
ent10  Available           EtherChannel  /  IEEE 802.3ad Link Aggregation
ent11  Available           EtherChannel  /  IEEE 802.3ad Link Aggregation
fcs0   Available  C3-T1    Virtual Fibre Channel Client Adapter
fcs1   Available  C4-T1    Virtual Fibre Channel Client Adapter
lhea0  Defined             Logical Host Ethernet Adapter  (l-hea)
lhea1  Defined             Logical Host Ethernet Adapter  (l-hea)
vsa0   Available           LPAR Virtual Serial Adapter
[root@upt0017] /
# netstat -i
Name  Mtu    Network      Address            Ipkts   Ierrs        Opkts  Oerrs  Coll
en8   1500   link#2      0.21.5E.72.AE.40    9302210    0       819878     0       0
en8   1500   10.33       upt0017.upt.aust    9302210    0       819978     0       0
en9   1500   link#3      0.21.5e.72.ae.52      19667    0          314     2       0
en9   1500   192.168.17  upt0017e0.upt.au      19667    0          314     2       0
en10  1500   link#4      0.21.5e.72.ae.61      76881    0         1496     0       0
en10  1500   192.168.18  upt0017g0.upt.au      76881    0         1496     0       0
en11  1500   link#5      0.21.5e.72.ae.73       1665    0         2200     2       0
en11  1500   192.168.19  upt0017d0.upt.au       1665    0         2200     2       0
lo0   16896  link#1                          1660060    0       160060     0       0
lo0   16896  loopback    localhost ''        1660060    0       160060     0       0
lo0   16896  ::1%1                           1660060    0       160060     0       0
[root@upt0017] /
# 

Other enhancements for POWER7

Additional enhancements are:

  • User defined Virtual Target Device names are preserved across LPM (vSCSI).
  • Support for shared persistent (SCSI-3) reserves on LUNs of a migrating partition (vSCSI).
  • Support for migration of a client across non-symmetric VIOS configurations. Such migrations involve a loss of redundancy; they require HMC level V7R3.5.0 and the GUI "override errors" option or the command line --force flag. This also allows moving a client partition to a CEC whose VIOS configuration does not provide the same level of redundancy found on the source.
  • The CLI interface to configure IPSEC tunneling for the data connection between MSPs.
  • Support to allow the user to select the MSP IP addresses to use during a migration.

Limitations

  • LPM cannot be performed on a stand-alone LPAR; it should be a VIOS client.
  • It must have virtual adapters for both network and storage.
  • It requires PowerVM Enterprise Edition.
  • The VIOS cannot be migrated.
  • When migrating between systems, only the active profile is updated for the partition and VIOS.
  • A partition that is in a crashed or failed state is not capable of being migrated.
  • A server that is running on battery power is not allowed to be the destination of a migration. A server that is running on battery power may be the source of a migrating partition.
  • For a migration to be performed, the destination server must have resources (for example, processors and memory) available that are equivalent to the current configuration of the migrating partition. If a reduction or increase of resources is required then a DLPAR operation needs to be performed separate from migration.
  • This is not a replacement for PowerHA solution or a Disaster Recovery Solution.
  • The partition data is not encrypted when transferred between MSPs.

Conclusion

This article gives administrators, testers, and developers information so that they can configure and troubleshoot LPM. A step-by-step command line and GUI configuration procedure is explained for LPM activity. This article also explains prerequisites and limitations while performing LPM activity.


About the authors

Raghavendra Prasannakumar

Raghavendra Prasannakumar works as a systems software engineer on the IBM AIX UPT release team in Bangalore, India and has worked there for 3 years. He has worked on AIX testing on Power series and on AIX virtualization features like VIOS, VIOSNextGen, AMS, NPIV, LPM, and AIX-TCP features testing. He has also worked on customer configuration setup using AIX and Power series. You can reach him at kpraghavendra@in.ibm.com.

Shashidhar Soppin

Shashidhar Soppin works as a system software test specialist on the IBM AIX UPT release team in Bangalore, India. Shashidhar has over nine years of experience working on development tasks in RTOS, Windows and UNIX platforms and has been involved in AIX testing for 5 years. He works on testing various software vendors' applications and databases for pSeries servers running AIX. He specializes in Veritas 5.0 VxVM and VxFS configuration and installation, ITM 6.x installation and configuration and Workload Development tasks on AIX. He is an IBM Certified Advanced Technical Expert (CATE)-IBM System p5 2006. He holds patents and has been previously published. You can reach him at shsoppin@in.ibm.com.

Shivendra Ashish

Shivendra Ashish works as software engineer on the IBM AIX UPT release team in Bangalore, India. He has worked on AIX, PowerHA, PowerVM components on pSeries for the last 2 years at IBM India Software Labs. He also worked on various customer configurations and engagements using PowerHA, PowerVM, and AIX on pSeries. You can reach him at shiv.ashish@in.ibm.co





- http://www.ibm.com/developerworks/wikis/display/virtualization/Live+Partition+Mobility


Live Partition Mobility (LPM) Nutshell

Neil Koropoff, 06/11/2012

Access pdf version of the Live Partition Mobility (LPM) Nutshell


The information below is from multiple sources including:

This document is not intended to replace the LPM Redbook or other IBM documentation. It is only to summarize the requirements and point out fixes that have been released. Changes are in bold.

Major requirements for active Live Partition Mobility are:

Software Version Requirements:

Base LPM:

Hardware Management Console (HMC) minimum requirements

  • Version 7 Release 3.2.0 or later with required fixes MH01062 for both active and inactive partition migration. If you do not have this level, upgrade the HMC to the correct level.
  • Model 7310-CR2 or later, or the 7310-C03
  • Version 7 Release 7.1.0 when managing at least one POWER7 server.

Systems Director Management Console requirements

  • Systems Director Management Console requirements would be the base SDMC and the minimum requirements for the machine types involved when attached to an SDMC.

Integrated Virtualization Manager (IVM) minimum requirements

  • IVM is provided by the Virtual I/O Server at release level 1.5.1.1 or higher.

PowerVM minimum requirements:

  • Both source and destination systems must have the PowerVM Enterprise Edition license code installed.
  • Both source and destination systems must be at firmware level 01Ex320 or later, where x is an S for BladeCenter®, an L for Entry servers (such as the Power 520, Power 550, and Power 560), an M for Midrange servers (such as the Power 570) or an H for Enterprise servers (such as the Power 595).
    Although there is a minimum required firmware level, each system may have a different level of firmware. The level of source system firmware must be compatible with the destination firmware. The latest firmware migration matrix can be found here:

http://publib.boulder.ibm.com/infocenter/powersys/v3r1m5/index.jsp?topic=/p7hc3/p7hc3firmwaresupportmatrix.htm

Note:  When a partition mobility operation is performed from a Power6 system running at xx340_122 (or earlier) firmware level to a Power7 system running at a firmware level xx720_yyy, a BA000100 error log is generated on the target system, indicating that the partition is running with the partition firmware of the source system.  The partition will continue to run normally on the target system, however future firmware updates cannot be concurrently activated on that partition.  IBM recommends that the partition be rebooted in a maintenance window so the firmware updates can be applied to the partition on the target system.

If the partition has been booted on a server running firmware level xx340_132 (or later), it is not subject to the generation of an error log after a mobility operation.

Source and destination Virtual I/O Server minimum requirements

  • At least one Virtual I/O Server at release level 1.5.1.1 or higher has to be installed on both the source and destination systems.
  • Virtual I/O Server at release level 2.1.2.11 with Fix Pack 22.1 and Service Pack 1, or later, for POWER7 servers.

Client LPARs using Shared Storage Pool virtual devices are now supported for Live LPAR Mobility (LPM) with the PowerVM V2.2 refresh that shipped on 12/16/2011.

Operating system minimum requirements
The operating system running in the mobile partition has to be AIX or Linux. A Virtual I/O Server logical partition or a logical partition running the IBM i operating system cannot be migrated. The operating system must be at one of the following levels:

Note: The operating system level of the lpar on the source hardware must already meet the minimum operating system requirements of the destination hardware prior to a migration.

  • AIX 5L™ Version 5.3 Technology Level 7 or later (the required level is 5300-07-01) or AIX Version 5.3 Technology Level 09 and Service Pack 7 or later for Power7 servers.
  • AIX Version 6.1 or later (the required level is 6100-00-01) or AIX Version 6.1 Technology Level 02 and Service Pack 8 or later.
  • Red Hat Enterprise Linux Version 5 (RHEL5) Update 1 or later (with the required kernel security update)
  • SUSE Linux Enterprise Server 10 (SLES 10) Service Pack 1 or later (with the required kernel security update)
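
A minimal check of the installed AIX level in the mobile partition (the Technology Level / Service Pack string is what should be compared with the levels above):

# Show the AIX level in the form <release>-<TL>-<SP>, for example 6100-02-08
oslevel -s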

Active Memory Sharing (AMS) software minimum requirements:

Note: AMS is not required in order to perform LPM; it is listed here because both are features of PowerVM Enterprise Edition, and AMS has more restrictive software requirements if you plan to use both.

  • PowerVM Enterprise activation
  • Firmware level 01Ex340_075
  • HMC version 7.3.4 service pack 2 (V7R3.4.0M2) for HMC managed systems
  • Virtual I/O Server Version 2.1.0.1-FP21 for both HMC and IVM managed systems
  • AIX 6.1 TL 3
  • Novell SuSE SLES 11
    For POWER7 servers, see the requirements listed above.

Virtual Fibre Channel (NPIV) software minimum requirements:

  • HMC Version 7 Release 3.4, or later
  • Virtual I/O Server Version 2.1 with Fix Pack 20.1, or later (V2.1.3 required for NPIV on FCoCEE)
  • AIX 5.3 TL9, or later
  • AIX 6.1 TL2 SP2, or later

Configuration Requirements:

Base LPM:

Source and destination system requirements for HMC managed systems:

The source and destination system must be an IBM Power Systems POWER6 or POWER7 processor-based model such as:

  • 8203-E4A (IBM Power System 520 Express)
  • 8231-E2B, E1C (IBM Power System 710 Express)
  • 8202-E4B, E4C (IBM Power System 720 Express)
  • 8231-E2B, E2C (IBM Power System 730 Express)
  • 8205-E6B, E6C (IBM Power System 740 Express)
  • 8204-E8A (IBM Power System 550 Express)
  • 8234-EMA (IBM Power System 560 Express)
  • 9117-MMA (IBM Power System 570)
  • 9119-FHA (IBM Power System 595)
  • 9125-F2A (IBM Power System 575)
  • 8233-E8B (IBM Power System 750 Express)
  • 9117-MMB, MMC (IBM Power System 770)
  • 9179-MHB, MHC (IBM Power System 780)
  • 9119-FHB (IBM Power System 795)

Source and destination system requirements for IVM managed systems:

  • 8203-E4A (IBM Power System 520 Express)
  • 8204-E8A (IBM Power System 550 Express)
  • 8234-EMA (IBM Power System 560 Express)
  • BladeCenter JS12
  • BladeCenter JS22
  • BladeCenter JS23
  • BladeCenter JS43
  • BladeCenter PS700
  • BladeCenter PS701
  • BladeCenter PS702
  • BladeCenter PS703
  • BladeCenter PS704
  • 8233-E8B (IBM Power System 750 Express)
  • 8231-E2B, E1C (IBM Power System 710 Express)
  • 8202-E4B, E4C (IBM Power System 720 Express)
  • 8231-E2B, E2C (IBM Power System 730 Express)
  • 8205-E6B, E6C (IBM Power System 740 Express)

Source and destination system requirements for Flexible Systems Manager (FSM) managed systems:

  • 7895-22X (IBM Flex System p260 Compute Node)
  • 7895-42X (IBM Flex System p460 Compute Node)

A system is capable of being either the source or destination of a migration if it contains the necessary processor hardware to support it.
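
Whether a managed system advertises the mobility capability can also be checked from the HMC command line. This is a hedged sketch; MySystem is a placeholder, and the capability names vary slightly between HMC releases:

# List the capabilities reported for the managed system and look for entries
# such as active_lpar_mobility_capable / inactive_lpar_mobility_capable.
lssyscfg -r sys -m MySystem -F name,capabilities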

Source and destination Virtual I/O Server minimum requirements

  • A partition attribute called the mover service partition (MSP) indicates whether a mover-capable Virtual I/O Server partition should be considered when the MSP for a migration is selected. By default, all Virtual I/O Server partitions have this attribute set to FALSE (it can be checked and enabled as sketched below).
  • In addition to having the MSP attribute set to TRUE, the source and destination mover service partitions must be able to communicate with each other over the network. On both the source and destination servers, the Virtual Asynchronous Services Interface (VASI) device provides communication between the mover service partition and the POWER Hypervisor.
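
A hedged sketch for checking and enabling the MSP attribute from the HMC command line; MySystem and vios1 are placeholders, and the attribute name should be verified against your HMC release:

# Show which partitions currently have the mover service partition flag set (1 = enabled).
lssyscfg -r lpar -m MySystem -F name,msp

# Enable the MSP attribute on a Virtual I/O Server partition.
chsyscfg -r lpar -m MySystem -i "name=vios1,msp=1"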

Storage requirements

For a list of supported disks and optical devices, see the Virtual I/O Server data sheet:

http://www14.software.ibm.com/webapp/set2/sas/f/vios/documentation/datasheet.html

Make sure that SCSI reservations are disabled on the SAN disks backing the mobile partition: for MPIO devices set reserve_policy to no_reserve (on device drivers that use the reserve_lock attribute, set it to no).
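
For example, on the Virtual I/O Server the backing SAN disk can be checked and changed as follows; hdisk4 is a placeholder, and the attribute name depends on the multipath driver in use:

# Check the current reservation attribute of the backing disk (padmin shell).
lsdev -dev hdisk4 -attr | grep reserve

# Disable SCSI reservations so that both the source and destination VIOS can access the LUN.
chdev -dev hdisk4 -attr reserve_policy=no_reserve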

Network requirements

The migrating partition uses a virtual LAN (VLAN) for network access. The VLAN (each VLAN, if more than one is used) must be bridged to a physical network using a Shared Ethernet Adapter in the Virtual I/O Server partition. Your LAN must be configured so that the migrating partition can continue to communicate with other necessary clients and servers after the migration is completed.
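
A hedged sketch of how the bridging is typically verified, or created if missing, on the Virtual I/O Server; the adapter names (ent0, ent2) and the PVID are placeholders for your environment:

# List the existing Shared Ethernet Adapter mappings on the VIOS.
lsmap -all -net

# If no SEA exists yet, bridge the physical adapter to the virtual trunk adapter.
mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1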

Note: VIOS V2.1.3 adds support for 10 Gb FCoE adapters (5708, 8275) on the Linux and IBM AIX operating systems by adding N_Port ID Virtualization (NPIV).

Requirements for remote migration

The Remote Live Partition Mobility feature is available starting with HMC Version 7 Release 3.4.
This feature allows a user to migrate a client partition to a destination server that is managed by a different HMC. The function relies on Secure Shell (SSH) to communicate with the remote HMC.

The following list indicates the requirements for remote HMC migrations:

  • A local HMC managing the source server
  • A remote HMC managing the destination server
  • HMC Version 7 Release 3.4 or later
  • Network access to a remote HMC
  • SSH key authentication to the remote HMC

The source and destination servers, mover service partitions, and Virtual I/O Servers are required to be configured exactly as though they were going to be performing migrations managed by a single HMC.

To initiate the remote migration operation, you may use only the HMC that manages the mobile partition.
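
As a hedged sketch, a remote migration is normally validated and then started from the local HMC command line with migrlpar; the system names, partition name, user, and IP address below are placeholders:

# Validate moving partition lpar01 from SrcSystem (local HMC) to DstSystem,
# which is managed by the remote HMC at 10.10.10.50 (SSH keys already exchanged).
migrlpar -o v -m SrcSystem -t DstSystem -p lpar01 --ip 10.10.10.50 -u hscroot

# Perform the migration once validation completes cleanly.
migrlpar -o m -m SrcSystem -t DstSystem -p lpar01 --ip 10.10.10.50 -u hscroot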

Active Memory Sharing configuration

Active Memory Sharing requires an IBM Power Systems server based on the POWER6 or POWER7 processor.

Virtual Fibre Channel (NPIV) configuration

Virtual Fibre Channel is a virtualization feature. Virtual Fibre Channel uses N_Port ID Virtualization (NPIV), and enables PowerVM logical partitions to access SAN resources using virtual Fibre Channel adapters mapped to a physical NPIV-capable adapter.

The mobile partition must meet the requirements described above. In addition, the following components must be configured in the environment (a VIOS mapping sketch follows the list):

  • An NPIV-capable SAN switch
  • An NPIV-capable physical Fibre Channel adapter on the source and destination Virtual I/O Servers
  • Each virtual Fibre Channel adapter on the Virtual I/O Server mapped to an NPIV-capable physical Fibre Channel adapter
  • Each virtual Fibre Channel adapter on the mobile partition mapped to a virtual Fibre Channel adapter in the Virtual I/O Server
  • At least one LUN mapped to the mobile partition's virtual Fibre Channel adapter
  • Mobile partitions may have virtual SCSI and virtual Fibre Channel LUNs. Migration of LUNs between virtual SCSI and virtual Fibre Channel is not supported at the time of publication.
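
The sketch below shows how such mappings are usually listed and created on the Virtual I/O Server; vfchost0 and fcs0 are placeholders:

# List the physical Fibre Channel ports that are NPIV capable and logged in to the fabric.
lsnports

# Map a virtual Fibre Channel server adapter to an NPIV-capable physical port.
vfcmap -vadapter vfchost0 -fcp fcs0

# Verify the resulting virtual Fibre Channel mappings.
lsmap -all -npiv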

Fixes that affect LPM:

Firmware Release Ex340_095:
On systems running system firmware Ex340_075 and Active Memory Sharing, a problem was fixed that might have caused a partition to lose I/O entitlement after the partition was moved from one system to another using PowerVM Mobility.

Firmware Release Ex350_063
A problem was fixed which caused software licensing issues after a live partition mobility operation in which a partition was moved to an 8203-E4A or 8204-E8A system.

Firmware Release Ex350_103
HIPER: IBM testing has uncovered a potential undetected data corruption issue when a mobility operation is performed on an AMS (Active Memory Sharing) partition.  The data corruption can occur in rare instances due to a problem in IBM firmware.  This issue was discovered during internal IBM testing, and has not been reported on any customer system. IBM recommends that systems running on EL340_075 or later move to EL350_103 to pick up the fix for this potential problem.  (Firmware levels older than EL340_075 are not exposed to the problem.)

Firmware Release AM710_083
A problem was fixed that caused SRC BA210000 to be erroneously logged on the target system when a partition was moved (using Live Partition Mobility) from a Power7 system to a Power6 system.
A problem was fixed that caused SRC BA280000 to be erroneously logged on the target system when a partition was moved (using Live Partition Mobility) from a Power7 system to a Power6 system.

Firmware Release AM720_082
A problem was fixed that caused the system ID to change, which caused software licensing problems, when a live partition mobility operation was done where the target system was an 8203-E4A or an 8204-E8A.
A problem was fixed that caused SRC BA210000 to be erroneously logged on the target system when a partition was moved (using Live Partition Mobility) from a Power7 system to a Power6 system.
A problem was fixed that caused SRC BA280000 to be erroneously logged on the target system when a partition was moved (using Live Partition Mobility) from a Power7 system to a Power6 system.
A problem was fixed that caused a partition to hang following a partition migration operation (using Live Partition Mobility) from a system running Ax720 system firmware to a system running Ex340, or older, system firmware. 

Firmware Release Ax720_090

HIPER: IBM testing has uncovered a potential undetected data corruption issue when a mobility operation is performed on an AMS (Active Memory Sharing) partition. The data corruption can occur in rare instances due to a problem in IBM firmware. This issue was discovered during internal IBM testing, and has not been reported on any customer system.

 
VIOS V2.1.0
Fixed problem with disk reservation changes for partition mobility
Fixed problem where Mobility fails between VIOS levels due to version mismatch

VIOS V2.1.2.10FP22
Fixed problems with NPIV LP Mobility
Fixed problem with disk reservation changes for partition mobility
Fixed problem where Mobility fails between VIOS levels due to version mismatch

VIOS V2.1.2.10FP22.1
Fixed a problem with an NPIV client reconnect failure

VIOS V2.1.2.10FP22.1 NPIV Interim Fix
Fixed a problem with the NPIV vfc adapters

VIOS V2.1.3.10FP23 IZ77189 Interim Fix
Corrects a problem where an operation to change the preferred path of a LUN can cause an I/O operation to become stuck, which in turn causes the operation to hang.

VIOS V2.2.0.10 FP-24
Fixed problems with Live Partition Mobility
Documented a limitation with shared storage pools

VIOS V2.2.0.12 FP-24 SP 02, V2.2.0.13 FP-24 SP03
Fixed a potential VIOS crash issue while running AMS or during live partition mobility
Fixed an issue with wrong client DRC name after mobility operation

Change history:
10/20/2009 - added SoD for converged network adapters.
04/15/2010 - added NPIV support for converged network adapters.
09/13/2010 - added new Power7 models
01/25/2011 - added OS source to destination requirement, updated fix information
03/28/2011 - added BA000100 error log entry on certain firmware levels and firmware fixes.

10/17/2011 - added new Power7 models and SDMC information.
01/27/2012 - corrected new location for supported firmware matrix, LPM now supported with SSPs.
06/11/2012 - corrected availability of NPIV with FCoE adapters, added Flex Power nodes.



 




cat /opt/ibm/director/version.srv


env_chk.sh

IBM Power 2012. 6. 28. 15:30

#!/usr/bin/ksh
# Capture the current environment and kernel tunables (env, ulimit, schedo, vmo, no)
# and compare them against a saved baseline file; refresh the baseline when they differ.
if [ $# -ne 1 ]; then
 echo " Usage: env_chk.sh <log_file> "
 echo "    ex. env_chk.sh env_variables.log "
 exit 1
fi

base_param=${1}

# Create the baseline file on the first run.
if [ ! -f ${base_param} ]
   then
        env >> ${base_param}
        ulimit -a >> ${base_param}
        schedo -aF >> ${base_param}
        vmo -aF >> ${base_param}
        no -aF >> ${base_param}
fi

# Capture the current settings in a timestamped file.
current_param="${1}_`date +\"%Y%m%d%H%M%S\"`"
env >> ${current_param}
ulimit -a >> ${current_param}
schedo -aF >> ${current_param}
vmo -aF >> ${current_param}
no -aF >> ${current_param}

# Count the differing lines between the baseline and the current capture.
compare_result=`diff ${base_param} ${current_param} | wc -l`
if [[ ${compare_result} -ne 0 ]]
   then
        echo 'parameters are changed... backing up the current variables...'
        echo '#ftp.sh ....           '
        echo 'updating base param....'
        cp ${current_param} ${base_param}
   else
        echo 'parameters are not changed... moving to next check'
fi

# Remove the temporary capture file.
rm ${current_param}




ftp.sh

IBM Power 2012. 6. 28. 15:01

#!/usr/bin/ksh
# Upload a file to an FTP server, appending a timestamp to the remote file name
# so that repeated uploads do not overwrite earlier copies.
if [ $# -ne 5 ]; then
    echo " Usage: ftp.sh <ftp server ip> <ftp user> <ftp password> <target directory> <source file> "
    echo "    ex. ftp.sh 10.10.20.101 root passwd /home/ftp_back /etc/hosts "
    exit 1
fi

# Split the absolute path into the directory part and the file name.
source_full_path=${5}
source_file=`echo ${source_full_path} | awk -F"/" '{print $NF}'`
source_path=`echo ${source_full_path} | awk -F'/[^/]*$' '{print $1}'`

new_file_name="${source_file}_`date +\"%Y%m%d%H%M%S\"`"

# Non-interactive FTP session; all output is appended to ftp.log.
ftp -n ${1} <<! >> ftp.log
        user ${2} ${3}
        lcd ${source_path}
        cd ${4}
        ascii
        put ${source_file} ${new_file_name}
        bye
!




awk usage examples

IBM Power 2012. 6. 28. 14:41

+. Specifying the field separator for awk (-F) and using the last field ($NF)
  ex. a script that splits an absolute file path into the directory path and the file name (awk -F and $NF)

   # cat path.sh
   ---------------------------------------------------------------------
   #!/usr/bin/ksh
   source_full_path="/home/event_action/ftp.log"

   source_file=`echo ${source_full_path} | awk -F"/" '{print $NF}'`
   source_path=`echo ${source_full_path} | awk -F'/[^/]*$' '{print $1}'`

   echo "source_full_path: " ${source_full_path}
   echo "source_file: " ${source_file}
   echo "source_path: " ${source_path}
   ---------------------------------------------------------------------

+. Using conditions when extracting fields with awk
    ex. print selected columns, or the whole line, when the 5th field is IPv4 or IPv6
      # lsof | awk '$5=="IPv4"||$5=="IPv6"{print}'
        >> to test only the field condition and print the entire record as-is, specify just 'print'
      # lsof | awk '$5=="IPv4"||$5=="IPv6"{print $1,$2,$3,$5,$7}'
      # lsof | awk '$5!="IPv4"&&$5!="IPv6"{print}'


+. Executing commands built from awk output through the shell
    (mainly used to change device parameters after listing the devices)
      # lsdev -Cc disk | grep -v hdisk0 | awk '{print "chdev -l " $1 " -a fc_err_recov=fast_fail"}' | sh -x




------------------------------
+. Printing everything from field 9 (or field N) to the end of the line:
awk '{print substr($0, index($0,$9))}'
awk -v N=9 '{sep=""; for (i=N; i<=NF; i++) {printf("%s%s",sep,$i); sep=OFS}; printf("\n")}'
awk -v N=9 '{for(i=1;i<N;i++){$i=""}}1'
awk '{ s = ""; for (i = 9; i <= NF; i++) s = s $i " "; print s }'



diff and sdiff (string comparison)

a1:/home# diff 1.txt 2.txt
1a2
>
4d4
< aaa

a1dir2:/home# sdiff 1.txt 2.txt
111                                                                111
                                                                >
222                                                                222

aaa                                                             <

a1:/home#


Deleting ssh keys registered on AIX & the HMC

+. Deleting an ssh key registered on AIX

   > remove the entry for that IP address from /.ssh/known_hosts (a hedged one-liner follows)
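
A hedged one-liner that does the same thing with ssh-keygen, assuming the installed OpenSSH level supports the -R option (the IP address is a placeholder):

   # Remove the cached host key for 10.10.10.121 from root's known_hosts file.
   ssh-keygen -R 10.10.10.121 -f /.ssh/known_hosts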


+. Deleting an ssh key registered on the HMC (this command edits the ~/.ssh/authorized_keys2 file)

   >  mkauthkeys --remove '$user@$hostname'

     ex. mkauthkeys --remove 'root@10.10.10.121'
