+. ssh configuration

  >> when creating the cluster with the ssh and scp remote-command types, register the ssh keys first, as below

(lpar11g) # ssh lpar21g   >> then exit (this just seeds the connection -> the /.ssh directory gets created)

(lpar11g) # ssh-keygen -t rsa >> just press Enter at every prompt

(lpar11g) # cat /.ssh/id_rsa.pub >> /.ssh/authorized_keys  (your own public key must be added as well)

(lpar11g) # append the output of ' cat /.ssh/id_rsa.pub ' to the /.ssh/authorized_keys file on the (lpar21g) server

 >>> repeat the same steps on lpar21g; a scripted version is sketched below
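
The exchange can also be scripted; a minimal ksh sketch, assuming root's home is / (as in the prompts above) and that password logins still work for the first copy:

#!/usr/bin/ksh
# generate a key once (empty passphrase), then append our public key
# to authorized_keys on every node, including this one
[ -f /.ssh/id_rsa ] || ssh-keygen -t rsa -N "" -f /.ssh/id_rsa
for node in lpar11g lpar21g
do
    cat /.ssh/id_rsa.pub | ssh ${node} 'mkdir -p /.ssh ; cat >> /.ssh/authorized_keys'
done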


+. rsh configuration


+. basic configuration scripts 

-------------------------------------------------------------

### fs re-size

chfs -a size=+512M /

chfs -a size=+1G /usr

chfs -a size=+1G /var

chfs -a size=+512M /tmp

chfs -a size=+1G /home

chfs -a size=+512M /opt
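
A quick check that the new sizes took effect (df -g reports in GB on AIX):

df -g / /usr /var /tmp /home /opt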



### IO state

chdev -l sys0 -a iostat=true


### Disk Attr

# assign a PVID and disable SCSI reservation on hdisk1..hdisk42
ins=1

while [ ${ins} -le 42 ]

do

        chdev -l hdisk${ins} -a pv=yes

        chdev -l hdisk${ins} -a reserve_policy=no_reserve

        ((ins=ins+1))

done
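
To verify the attributes afterwards (same 42-disk range assumed):

ins=1
while [ ${ins} -le 42 ]
do
        lsattr -El hdisk${ins} -a reserve_policy    # expect no_reserve
        ((ins=ins+1))
done
lspv                                                # every disk should now show a PVID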


### Time sync

setclock lpar11 ; date ; rsh lpar11 date


### hushlogin (turn off the login msg)

touch /.hushlogin


-------------------------------------------------------------


+. .profile

-------------------------------------------------------------

lpar11:/# cat .profile

export GPFS_HOME=/usr/lpp/mmfs

export PS1=`hostname -s`':$PWD# '

export PATH=/usr/local/bin:${GPFS_HOME}/bin:${PATH}


set -o vi


banner `hostname`

-------------------------------------------------------------


+. /etc/hosts

-------------------------------------------------------------

#-- team 1 (multi-cluster #1)

10.10.10.151           lpar11

10.10.10.161           lpar21

10.10.11.151           lpar11g

10.10.11.161           lpar21g


#-- team 2 (multi-cluster #2)

10.10.10.152           lpar12

10.10.10.162           lpar22

10.10.11.152           lpar12g

10.10.11.162           lpar22g

-------------------------------------------------------------



+. lslpp -l gpfs*

  >> besides the base filesets, the matching patches (from Fix Central) must be installed or GPFS will not start

     ex. GPFS 3.5.0.0 > will not start -> update to GPFS 3.5.0.6, then it starts


+. cluster configuration file

# cat /home/gpfs/gpfs.allnodes 

lpar11g:quorum-manager

lpar21g:quorum-manager


# >> syntax >> NodeName:NodeDesignations:AdminNodeName  (NodeDesignations and AdminNodeName are optional)

  >> NodeDesignations is given as 'manager|client'-'quorum|nonquorum'

  >> manager|client - whether the node joins the pool of 'file system manager' candidates (the default is client)

  >> quorum|nonquorum - the default is nonquorum

# >> at most 8 quorum nodes are allowed, and every quorum node must have access to the tiebreaker disks

# >> lpar11_gpfs, lpar21_gpfs > must be registered in /etc/hosts

  >> either a private or a public network works, but a private network is of course recommended (ideally on its own subnet)

# >> in a typical RAC configuration every node is designated quorum-manager; a full three-field descriptor file is sketched below
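
For reference, a descriptor file using all three fields might look like this (the AdminNodeName values are hypothetical, reusing the public hostnames from /etc/hosts above; a bare NodeName falls back to the defaults, client and nonquorum):

lpar11g:quorum-manager:lpar11
lpar21g:quorum-manager:lpar21
lpar12g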



+. gpfs cluster creation - using a predefined nodelist file #1

# mmcrcluster -n /home/gpfs/gpfs.allnodes -p lpar11g -s lpar21g -C gpfs_cluster -r /usr/bin/ssh -R /usr/bin/scp

#  >> when installing with the ssh and scp types, the private-network hostnames must be used to avoid password problems


# >> -n : node description file

  >> -p : primary node

  >> -s : secondary node

  >> -C : cluster name

  >> -r : remote shell (default is rsh)

  >> -R : remote copy (default is rcp)

  

+. mmlscluster  

  

+. license agreement (gpfs3.3+)

# mmchlicense server --accept -N lpar11g,lpar21g


+. gpfs startup and status check

# mmstartup -a

# mmgetstate -a

 Node number  Node name        GPFS state

------------------------------------------

       1      lpar11g          active

       2      lpar21g          active


# mmlscluster


GPFS cluster information

========================

  GPFS cluster name:         gpfs_cluster.lpar11g

  GPFS cluster id:           1399984813853589142

  GPFS UID domain:           gpfs_cluster.lpar11g

  Remote shell command:      /usr/bin/rsh

  Remote file copy command:  /usr/bin/rcp


GPFS cluster configuration servers:

-----------------------------------

  Primary server:    lpar11g

  Secondary server:  lpar21g


 Node  Daemon node name  IP address        Admin node name              Designation

------------------------------------------------------------------------------------

   1   lpar11g           170.24.46.151     lpar11g                      quorum-manager

   2   lpar21g           170.24.46.161     lpar21g                      quorum-manager


+. gpfs cluster creation - adding nodes one at a time (man mmaddnode) #2

# mmcrcluster -N lpar11g:manager-quorum -p lpar11g -r /usr/bin/ssh -R /usr/bin/scp

# mmaddnode -N lpar21g

# mmchcluster -s lpar21g

# mmchnode -N lpar21g --client --nonquorum

# mmchnode -N lpar21g --manager --quorum

# mmlscluster

  

  >> deletion is done with mmdelnode -N lpar21g

  >> the Primary and Secondary nodes cannot be deleted

  

+. starting and stopping nodes in the cluster

# mmstartup -a / mmshutdown -a

# mmstartup -N lpar21g / mmshutdown -N lpar21g

# mmgetstate -a / mmgetstate -N lpar21g

# mmgetstate -a


 Node number  Node name        GPFS state

------------------------------------------

       1      lpar11g          active

       2      lpar21g          active

  

+. gpfs cluster logs

# tail -f /var/adm/ras/mmfs.log.latest



+. NSD Configuration

# cat /home/gpfs/gpfs.clusterDisk 

hdisk1:::dataAndMetadata::nsd1:

hdisk2:::dataAndMetadata::nsd2:

hdisk3:::dataAndMetadata::nsd3:

hdisk4:::dataAndMetadata::nsd4:

hdisk5:::dataAndMetadata::nsd5:

hdisk6:::dataAndMetadata::nsd6:

hdisk7:::dataAndMetadata::nsd7:


  >> [Disk to use as NSD]:[Primary Server]:[Backup Server]:[Disk Usage]:[Failure Group]:[Desired NSD Name]:[Storage Pool]

  >> [Disk to use as NSD] - may also be given as '/dev/hdisk3'

     [Primary Server] && [Backup Server]

        - the Primary && Backup servers that perform I/O on the disk within the cluster

        - if the cluster nodes are SAN-attached and all share the same disks >> leave these two fields blank

        - case 1) (lpar11g and lpar21g are SAN-attached and act as GPFS servers) && (lpar12g has no SAN connection and acts as a client)

          -> since lpar12g has no way to reach the NSDs directly...

             hdisk1:lpar11g:lpar21g:dataAndMetadata::nsd1:

             define it as above, and register lpar12g as a client when adding the node

        - case 2) lpar11g and lpar21g are SAN-attached and act as GPFS servers && clients

          -> every server and client can reach the NSDs directly, so...

             hdisk1:::dataAndMetadata::nsd1:

             works as well

     [Disk Usage]

        - 'dataOnly|metadataOnly|dataAndMetadata|descOnly'

        - dataAndMetadata is the default in the system pool && dataOnly is the default in storage pools

     [Desired NSD Name] - must be unique within the cluster; if omitted, a name like 'gpfs1nsd' is generated.

  The descriptor lines are regular enough to generate in a loop, as sketched below.
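
A throwaway loop that regenerates the seven-disk descriptor file above (direct-attach layout, so the server fields stay blank):

> /home/gpfs/gpfs.clusterDisk
ins=1
while [ ${ins} -le 7 ]
do
        echo "hdisk${ins}:::dataAndMetadata::nsd${ins}:" >> /home/gpfs/gpfs.clusterDisk
        ((ins=ins+1))
done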

 

 

# mmcrnsd -F /home/gpfs/gpfs.clusterDisk

# mmlsnsd

 File system   Disk name    NSD servers

---------------------------------------------------------------------------

 (free disk)   nsd1         (directly attached)

 (free disk)   nsd2         (directly attached)

 (free disk)   nsd3         (directly attached)

 (free disk)   nsd4         (directly attached)

 (free disk)   nsd5         (directly attached)

 (free disk)   nsd6         (directly attached)

 (free disk)   nsd7         (directly attached)

 

# mmdelnsd nsd7 

  >> after deleting, to add individual NSDs again, create an extra gpfs.clusterDisk2 file and run mmcrnsd -F gpfs.clusterDisk2

*. a disk that was once used by gpfs shows up as 'gpfs' in lspv and cannot be grabbed for gpfs again as-is > the on-disk data must be wiped and the disk reconfigured
lpar11 && lpar12 >>
  dd if=/dev/zero of=/dev/rhdiskXX bs=1024 count=100   # wipe the old NSD header
  rmdev -dl hdiskXX                                    # remove the device definition
  cfgmgr -v                                            # rediscover the disk
  chdev -l hdiskXX -a reserve_policy=no_reserve
  chdev -l hdiskXX -a pv=yes

  


+. tiebreaker disk configuration

# mmshutdown -a

# mmchconfig tiebreakerDisks=nsd7

  >> when configuring only a single tiebreaker disk...

# mmchconfig tiebreakerDisks=no

# mmchconfig tiebreakerDisks='nsd5;nsd6;nsd7'

  >> three or more tiebreaker disks are recommended; with three or more nodes, tiebreaker disks are unnecessary

# mmlsconfig | grep tiebreakerDisks

tiebreakerDisks nsd5;nsd6;nsd7



+. gpfs file system creation

# cp /home/gpfs/gpfs.clusterDisk /home/gpfs/gpfs.clusterDisk.fs

  >> in /home/gpfs/gpfs.clusterDisk.fs, delete every entry except nsd1, nsd2, nsd3, nsd4

# mmcrfs /gpfs fs1 -F /home/gpfs/gpfs.clusterDisk.fs -A yes -B 512k -n 16

  >> '/gpfs' : mount point

  >> 'fs1' : device name (file system name) > sometimes written as '/dev/fs1'

  >> '-F /home/gpfs/gpfs.clusterDisk.fs' : defines the NSDs to put into the file system (the file is rewritten automatically by mmcrnsd)

  >> '-A yes' : whether to automount at mmstartup

  >> '-B 512k' : block size, settable from 16k up to 1MB. Oracle generally recommends 256k (512k), but

                 systems with many small files, such as groupware or e-mail, should use a smaller block size

  >> '-n 16' : the number of nodes that will mount the file system; it cannot be changed once set, so leave generous headroom

# mmmount all -a

# mmlsfs fs1

flag                value                    description

------------------- ------------------------ -----------------------------------

 -f                 8192                     Minimum fragment size in bytes

 -i                 512                      Inode size in bytes

 -I                 16384                    Indirect block size in bytes

 -m                 1                        Default number of metadata replicas

 -M                 2                        Maximum number of metadata replicas

 -r                 1                        Default number of data replicas

 -R                 2                        Maximum number of data replicas

 -j                 cluster                  Block allocation type

 -D                 nfs4                     File locking semantics in effect

 -k                 all                      ACL semantics in effect

 -n                 32                       Estimated number of nodes that will mount file system

 -B                 262144                   Block size

 -Q                 none                     Quotas enforced

                    none                     Default quotas enabled

 --filesetdf        no                       Fileset df enabled?

 -V                 13.01 (3.5.0.0)          File system version

 --create-time      Mon Nov 26 14:08:51 2012 File system creation time

 -u                 yes                      Support for large LUNs?

 -z                 no                       Is DMAPI enabled?

 -L                 4194304                  Logfile size

 -E                 yes                      Exact mtime mount option

 -S                 no                       Suppress atime mount option

 -K                 whenpossible             Strict replica allocation option

 --fastea           yes                      Fast external attributes enabled?

 --inode-limit      67584                    Maximum number of inodes

 -P                 system                   Disk storage pools in file system

 -d                 nsd1;nsd2;nsd3;nsd4      Disks in file system

 --perfileset-quota no                       Per-fileset quota enforcement

 -A                 yes                      Automatic mount option

 -o                 none                     Additional mount options

 -T                 /gpfs                    Default mount point

 --mount-priority   0                        Mount priority

# mmdf fs1

disk                disk size  failure holds    holds              free KB             free KB

name                    in KB    group metadata data        in full blocks        in fragments

--------------- ------------- -------- -------- ----- -------------------- -------------------

Disks in storage pool: system (Maximum disk size allowed is 98 GB)

nsd1                 10485760       -1 yes      yes        10440704 (100%)           488 ( 0%)

nsd2                 10485760       -1 yes      yes        10440448 (100%)           248 ( 00%)

nsd3                 10485760       -1 yes      yes        10440960 (100%)           248 ( 00%)

nsd4                 10485760       -1 yes      yes        10440192 (100%)           472 ( 0%)

                -------------                         -------------------- -------------------

(pool total)         41943040                              41762304 (100%)          1456 ( 00%)


                =============                         ==================== ===================

(total)              41943040                              41762304 (100%)          1456 ( 00%)


Inode Information

-----------------

Number of used inodes:            4038

Number of free inodes:           63546

Number of allocated inodes:      67584

Maximum number of inodes:        67584



+. gpfs filesystem nsd disk management

# mmlsfs all

File system attributes for /dev/fs1:

====================================

flag                value                    description

------------------- ------------------------ -----------------------------------

 -f                 8192                     Minimum fragment size in bytes

 -i                 512                      Inode size in bytes

 -I                 16384                    Indirect block size in bytes

 -m                 1                        Default number of metadata replicas

 -M                 2                        Maximum number of metadata replicas

 -r                 1                        Default number of data replicas

 -R                 2                        Maximum number of data replicas

 -j                 cluster                  Block allocation type

 -D                 nfs4                     File locking semantics in effect

 -k                 all                      ACL semantics in effect

 -n                 16                       Estimated number of nodes that will mount file system

 -B                 262144                   Block size

 -Q                 none                     Quotas enforced

                    none                     Default quotas enabled

 --filesetdf        no                       Fileset df enabled?

 -V                 13.01 (3.5.0.0)          File system version

 --create-time      Mon Nov 26 14:08:51 2012 File system creation time

 -u                 yes                      Support for large LUNs?

 -z                 no                       Is DMAPI enabled?

 -L                 4194304                  Logfile size

 -E                 yes                      Exact mtime mount option

 -S                 no                       Suppress atime mount option

 -K                 whenpossible             Strict replica allocation option

 --fastea           yes                      Fast external attributes enabled?

 --inode-limit      67584                    Maximum number of inodes

 -P                 system                   Disk storage pools in file system

 -d                 nsd1;nsd2;nsd3;nsd4      Disks in file system

 --perfileset-quota no                       Per-fileset quota enforcement

 -A                 yes                      Automatic mount option

 -o                 none                     Additional mount options

 -T                 /gpfs                    Default mount point

 --mount-priority   0                        Mount priority


#  mmlsdisk fs1

disk         driver   sector failure holds    holds                            storage

name         type       size   group metadata data  status        availability pool

------------ -------- ------ ------- -------- ----- ------------- ------------ ------------

nsd1         nsd         512      -1 yes      yes   ready         up           system

nsd2         nsd         512      -1 yes      yes   ready         up           system

nsd3         nsd         512      -1 yes      yes   ready         up           system

nsd4         nsd         512      -1 yes      yes   ready         up           system


# mmdeldisk fs1 nsd4

  >> remove the 'nsd4' disk from the 'fs1' filesystem

Deleting disks ...

GPFS: 6027-589 Scanning file system metadata, phase 1 ...

GPFS: 6027-552 Scan completed successfully.

GPFS: 6027-589 Scanning file system metadata, phase 2 ...

GPFS: 6027-552 Scan completed successfully.

GPFS: 6027-589 Scanning file system metadata, phase 3 ...

GPFS: 6027-552 Scan completed successfully.

GPFS: 6027-589 Scanning file system metadata, phase 4 ...

GPFS: 6027-552 Scan completed successfully.

GPFS: 6027-565 Scanning user file metadata ...

 100.00 % complete on Mon Nov 26 17:05:54 2012

GPFS: 6027-552 Scan completed successfully.

Checking Allocation Map for storage pool 'system'

GPFS: 6027-370 tsdeldisk64 completed.

mmdeldisk: 6027-1371 Propagating the cluster configuration data to all

  affected nodes.  This is an asynchronous process.


# mmunmount /gpfs -N lpar21g

# mmunmount /gpfs -a

# mmdelfs fs1

  >> delete the 'fs1' filesystem itself

GPFS: 6027-573 All data on following disks of fs1 will be destroyed:

    nsd1

    nsd2

    nsd3

GPFS: 6027-574 Completed deletion of file system /dev/fs1.

mmdelfs: 6027-1371 Propagating the cluster configuration data to all

  affected nodes.  This is an asynchronous process.


  

+. GPFS operation and administration

# mmlsmgr fs1

# mmchmgr fs1 lpar21g

GPFS: 6027-628 Sending migrate request to current manager node 170.24.46.151 (lpar11g).

GPFS: 6027-629 Node 170.24.46.151 (lpar11g) resigned as manager for fs1.

GPFS: 6027-630 Node 170.24.46.161 (lpar21g) appointed as manager for fs1.

# mmlsmgr fs1

file system      manager node       [from 170.24.46.161 (lpar21g)]

---------------- ------------------

fs1              170.24.46.161 (lpar21g)

# mmchconfig autoload=yes

  >> start the gpfs daemon automatically at system boot

mmchconfig: Command successfully completed

mmchconfig: 6027-1371 Propagating the cluster configuration data to all

  affected nodes.  This is an asynchronous process.

# mmlsconfig | grep autoload

autoload yes


# mmfsadm dump config

  >> dump every GPFS parameter

# mmfsadm dump config | grep pagepool

   pagepool 536870912

   pagepoolMaxPhysMemPct 75

   pagepoolPageSize 65536

   pagepoolPretranslate 0
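
pagepool is the parameter most commonly tuned; an example change (the 1G value is purely illustrative; -i applies the change immediately and makes it permanent):

# mmchconfig pagepool=1G -i
# mmfsadm dump config | grep "pagepool "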


----------------------------------------------------

+. Storage Pools, Filesets and Policies

----------------------------------------------------

+. clean-up test env

# mmumount all -a ; mmdelfs fs1 ; mmdelnsd "nsd1;nsd2;nsd3;nsd4"

  >> the tiebreaker disks 'nsd5;nsd6;nsd7' are left in place


+. create nsd

#  cat /home/gpfs/gpfs.clusterDisk.storagePool

hdisk1:::dataAndMetadata::nsd1:system

hdisk2:::dataAndMetadata::nsd2:system

hdisk3:::dataOnly::nsd3:pool1

hdisk4:::dataOnly::nsd4:pool1


# mmcrnsd -F /home/gpfs/gpfs.clusterDisk.storagePool

mmcrnsd: Processing disk hdisk1

mmcrnsd: Processing disk hdisk2

mmcrnsd: Processing disk hdisk3

mmcrnsd: Processing disk hdisk4

mmcrnsd: 6027-1371 Propagating the cluster configuration data to all

  affected nodes.  This is an asynchronous process.


# mmcrfs /gpfs fs1 -F /home/gpfs/gpfs.clusterDisk.storagePool -A yes -B 512k -n 16

GPFS: 6027-531 The following disks of fs1 will be formatted on node lpar11:

    nsd1: size 10485760 KB

    nsd2: size 10485760 KB

    nsd3: size 10485760 KB

    nsd4: size 10485760 KB

GPFS: 6027-540 Formatting file system ...

GPFS: 6027-535 Disks up to size 103 GB can be added to storage pool 'system'.

GPFS: 6027-535 Disks up to size 103 GB can be added to storage pool 'pool1'.

Creating Inode File

Creating Allocation Maps

Creating Log Files

Clearing Inode Allocation Map

Clearing Block Allocation Map

Formatting Allocation Map for storage pool 'system'

Formatting Allocation Map for storage pool 'pool1'

GPFS: 6027-572 Completed creation of file system /dev/fs1.

mmcrfs: 6027-1371 Propagating the cluster configuration data to all

  affected nodes.  This is an asynchronous process.


# mmlsfs fs1

# mmmount /gpfs -a


# mmdf fs1

disk                disk size  failure holds    holds              free KB             free KB

name                    in KB    group metadata data        in full blocks        in fragments

--------------- ------------- -------- -------- ----- -------------------- -------------------

Disks in storage pool: system (Maximum disk size allowed is 96 GB)

nsd1                 10485760       -1 yes      yes        10427904 ( 99%)           976 ( 0%)

nsd2                 10485760       -1 yes      yes        10428416 ( 99%)           992 ( 0%)

                -------------                         -------------------- -------------------

(pool total)         20971520                              20856320 ( 99%)          1968 ( 0%)


Disks in storage pool: pool1 (Maximum disk size allowed is 96 GB)

nsd3                 10485760       -1 no       yes        10483200 (100%)           496 ( 0%)

nsd4                 10485760       -1 no       yes        10483200 (100%)           496 ( 0%)

                -------------                         -------------------- -------------------

(pool total)         20971520                              20966400 (100%)           992 ( 0%)


                =============                         ==================== ===================

(data)               41943040                              41822720 (100%)          2960 ( 0%)

(metadata)           20971520                              20856320 ( 99%)          1968 ( 0%)

                =============                         ==================== ===================

(total)              41943040                              41822720 (100%)          2960 ( 0%)


Inode Information

-----------------

Number of used inodes:            4022

Number of free inodes:           63562

Number of allocated inodes:      67584

Maximum number of inodes:        67584


+. create fileset

#  mmcrfileset fs1 fileset1

Snapshot 'fileset1' created with id 1.

#  mmcrfileset fs1 fileset2

Snapshot 'fileset2' created with id 2.

#  mmcrfileset fs1 fileset3

Snapshot 'fileset3' created with id 3.

#  mmcrfileset fs1 fileset4

Snapshot 'fileset4' created with id 4.

#  mmcrfileset fs1 fileset5

Snapshot 'fileset5' created with id 5.

# mmlsfileset fs1

Filesets in file system 'fs1':

Name                     Status    Path

root                     Linked    /gpfs

fileset1                 Unlinked  --

fileset2                 Unlinked  --

fileset3                 Unlinked  --

fileset4                 Unlinked  --

fileset5                 Unlinked  --


#  mmlinkfileset fs1 fileset1 -J /gpfs/fileset1

Fileset 'fileset1' linked at '/gpfs/fileset1'.

#  mmlinkfileset fs1 fileset2 -J /gpfs/fileset2

Fileset 'fileset2' linked at '/gpfs/fileset2'.

#  mmlinkfileset fs1 fileset3 -J /gpfs/fileset3

Fileset 'fileset3' linked at '/gpfs/fileset3'.

#  mmlinkfileset fs1 fileset4 -J /gpfs/fileset4

Fileset 'fileset4' linked at '/gpfs/fileset4'.

#  mmlinkfileset fs1 fileset5 -J /gpfs/fileset5

Fileset 'fileset5' linked at '/gpfs/fileset5'.

# mmlsfileset fs1

Filesets in file system 'fs1':

Name                     Status    Path

root                     Linked    /gpfs

fileset1                 Linked    /gpfs/fileset1

fileset2                 Linked    /gpfs/fileset2

fileset3                 Linked    /gpfs/fileset3

fileset4                 Linked    /gpfs/fileset4

fileset5                 Linked    /gpfs/fileset5



+. file placement policy

# cat /home/gpfs/placementpolicy.txt

/* The fileset does not matter, we want all .dat and .DAT files to go to pool1 */

RULE 'datfiles' SET POOL 'pool1' WHERE UPPER(name) like '%.DAT'

/* All non *.dat files placed in fileset5 will go to pool1 */

RULE 'fs5' SET POOL 'pool1' FOR FILESET ('fileset5')

/* Set a default rule that sends all files not meeting the other criteria to the system pool */

RULE 'default' set POOL 'system'


# mmchpolicy fs1 /home/gpfs/placementpolicy.txt

Validated policy `placementpolicy.txt': parsed 3 Placement Rules, 0 Restore Rules, 0 Migrate/Delete/Exclude Rules,

        0 List Rules, 0 External Pool/List Rules

GPFS: 6027-799 Policy `placementpolicy.txt' installed and broadcast to all nodes.


# mmlspolicy fs1 -L

/* The fileset does not matter, we want all .dat and .DAT files to go to pool1 */

RULE 'datfiles' SET POOL 'pool1' WHERE UPPER(name) like '%.DAT'

/* All non *.dat files placed in fileset5 will go to pool1 */

RULE 'fs5' SET POOL 'pool1' FOR FILESET ('fileset5')

/* Set a default rule that sends all files not meeting the other criteria to the system pool */

RULE 'default' set POOL 'system'



+. placement policy test

   >> run 'dd if=/dev/zero of=/gpfs/fileset1/bigfile1 bs=64k count=1000' and compare mmdf fs1 before and after

   >> the file goes to the system pool (default rule)

   

   >> run 'dd if=/dev/zero of=/gpfs/fileset1/bigfile1.dat bs=64k count=1000' and compare mmdf fs1 before and after

   >> the file goes to pool1 (datfiles rule)

   

   >> run 'dd if=/dev/zero of=/gpfs/fileset5/bigfile2 bs=64k count=1000' and compare mmdf fs1 before and after

   >> the file goes to pool1 (fs5 rule)

   

   >> the pool can also be checked per file with mmlsattr, e.g. 'mmlsattr -L /gpfs/fileset5/bigfile2'; a loop version is sketched below
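
The three dd tests can be wrapped in one loop; a sketch that writes each file and prints the pool it landed in (paths as above):

for f in /gpfs/fileset1/bigfile1 /gpfs/fileset1/bigfile1.dat /gpfs/fileset5/bigfile2
do
        dd if=/dev/zero of=${f} bs=64k count=1000 2>/dev/null
        mmlsattr -L ${f} | grep "storage pool name"
done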

   

# mmdf fs1

disk                disk size  failure holds    holds              free KB             free KB

name                    in KB    group metadata data        in full blocks        in fragments

--------------- ------------- -------- -------- ----- -------------------- -------------------

Disks in storage pool: system (Maximum disk size allowed is 96 GB)

nsd1                 10485760       -1 yes      yes        10427904 ( 99%)           976 ( 0%)

nsd2                 10485760       -1 yes      yes        10427392 ( 99%)           992 ( 0%)

                -------------                         -------------------- -------------------

(pool total)         20971520                              20855296 ( 99%)          1968 ( 0%)


Disks in storage pool: pool1 (Maximum disk size allowed is 96 GB)

nsd3                 10485760       -1 no       yes        10483200 (100%)           496 ( 0%)

nsd4                 10485760       -1 no       yes        10483200 (100%)           496 ( 0%)

                -------------                         -------------------- -------------------

(pool total)         20971520                              20966400 (100%)           992 ( 0%)


                =============                         ==================== ===================

(data)               41943040                              41821696 (100%)          2960 ( 0%)

(metadata)           20971520                              20855296 ( 99%)          1968 ( 0%)

                =============                         ==================== ===================

(total)              41943040                              41821696 (100%)          2960 ( 0%)


Inode Information

-----------------

Number of used inodes:            4027

Number of free inodes:           63557

Number of allocated inodes:      67584

Maximum number of inodes:        67584

# dd if=/dev/zero of=/gpfs/fileset1/bigfile1 bs=64k count=1000

1000+0 records in.

1000+0 records out.

# mmdf fs1

disk                disk size  failure holds    holds              free KB             free KB

name                    in KB    group metadata data        in full blocks        in fragments

--------------- ------------- -------- -------- ----- -------------------- -------------------

Disks in storage pool: system (Maximum disk size allowed is 96 GB)

nsd1                 10485760       -1 yes      yes        10395648 ( 99%)          1472 ( 0%)

nsd2                 10485760       -1 yes      yes        10395136 ( 99%)           992 ( 0%)

                -------------                         -------------------- -------------------

(pool total)         20971520                              20790784 ( 99%)          2464 ( 0%)


Disks in storage pool: pool1 (Maximum disk size allowed is 96 GB)

nsd3                 10485760       -1 no       yes        10483200 (100%)           496 ( 0%)

nsd4                 10485760       -1 no       yes        10483200 (100%)           496 ( 0%)

                -------------                         -------------------- -------------------

(pool total)         20971520                              20966400 (100%)           992 ( 0%)


                =============                         ==================== ===================

(data)               41943040                              41757184 (100%)          3456 ( 0%)

(metadata)           20971520                              20790784 ( 99%)          2464 ( 0%)

                =============                         ==================== ===================

(total)              41943040                              41757184 (100%)          3456 ( 0%)


Inode Information

-----------------

Number of used inodes:            4028

Number of free inodes:           63556

Number of allocated inodes:      67584

Maximum number of inodes:        67584



# dd if=/dev/zero of=/gpfs/fileset1/bigfile1.dat bs=64k count=1000

1000+0 records in.

1000+0 records out.

# mmdf fs1

disk                disk size  failure holds    holds              free KB             free KB

name                    in KB    group metadata data        in full blocks        in fragments

--------------- ------------- -------- -------- ----- -------------------- -------------------

Disks in storage pool: system (Maximum disk size allowed is 96 GB)

nsd1                 10485760       -1 yes      yes        10395648 ( 99%)          1472 ( 0%)

nsd2                 10485760       -1 yes      yes        10395136 ( 99%)           976 ( 0%)

                -------------                         -------------------- -------------------

(pool total)         20971520                              20790784 ( 99%)          2448 ( 0%)


Disks in storage pool: pool1 (Maximum disk size allowed is 96 GB)

nsd3                 10485760       -1 no       yes        10451456 (100%)           496 ( 0%)

nsd4                 10485760       -1 no       yes        10450944 (100%)           496 ( 0%)

                -------------                         -------------------- -------------------

(pool total)         20971520                              20902400 (100%)           992 ( 0%)


                =============                         ==================== ===================

(data)               41943040                              41693184 ( 99%)          3440 ( 0%)

(metadata)           20971520                              20790784 ( 99%)          2448 ( 0%)

                =============                         ==================== ===================

(total)              41943040                              41693184 ( 99%)          3440 ( 0%)


Inode Information

-----------------

Number of used inodes:            4029

Number of free inodes:           63555

Number of allocated inodes:      67584

Maximum number of inodes:        67584



# dd if=/dev/zero of=/gpfs/fileset5/bigfile2 bs=64k count=1000

1000+0 records in.

1000+0 records out.

lpar11:/home/gpfs# mmdf fs1

disk                disk size  failure holds    holds              free KB             free KB

name                    in KB    group metadata data        in full blocks        in fragments

--------------- ------------- -------- -------- ----- -------------------- -------------------

Disks in storage pool: system (Maximum disk size allowed is 96 GB)

nsd1                 10485760       -1 yes      yes        10395648 ( 99%)          1456 ( 0%)

nsd2                 10485760       -1 yes      yes        10395136 ( 99%)           976 ( 0%)

                -------------                         -------------------- -------------------

(pool total)         20971520                              20790784 ( 99%)          2432 ( 0%)


Disks in storage pool: pool1 (Maximum disk size allowed is 96 GB)

nsd3                 10485760       -1 no       yes        10419200 ( 99%)           496 ( 0%)

nsd4                 10485760       -1 no       yes        10419200 ( 99%)           496 ( 0%)

                -------------                         -------------------- -------------------

(pool total)         20971520                              20838400 ( 99%)           992 ( 0%)


                =============                         ==================== ===================

(data)               41943040                              41629184 ( 99%)          3424 ( 0%)

(metadata)           20971520                              20790784 ( 99%)          2432 ( 0%)

                =============                         ==================== ===================

(total)              41943040                              41629184 ( 99%)          3424 ( 0%)


Inode Information

-----------------

Number of used inodes:            4030

Number of free inodes:           63554

Number of allocated inodes:      67584

Maximum number of inodes:        67584



# mmlsattr -L /gpfs/fileset1/bigfile1

file name:            /gpfs/fileset1/bigfile1

metadata replication: 1 max 2

data replication:     1 max 2

immutable:            no

appendOnly:           no

flags:

storage pool name:    system

fileset name:         fileset1

snapshot name:

creation Time:        Tue Nov 27 11:28:29 2012

Windows attributes:   ARCHIVE


# mmlsattr -L /gpfs/fileset1/bigfile1.dat

file name:            /gpfs/fileset1/bigfile1.dat

metadata replication: 1 max 2

data replication:     1 max 2

immutable:            no

appendOnly:           no

flags:

storage pool name:    pool1

fileset name:         fileset1

snapshot name:

creation Time:        Tue Nov 27 11:33:36 2012

Windows attributes:   ARCHIVE


# mmlsattr -L /gpfs/fileset5/bigfile2

file name:            /gpfs/fileset5/bigfile2

metadata replication: 1 max 2

data replication:     1 max 2

immutable:            no

appendOnly:           no

flags:

storage pool name:    pool1

fileset name:         fileset5

snapshot name:

creation Time:        Tue Nov 27 11:35:55 2012

Windows attributes:   ARCHIVE


# dd if=/dev/zero of=/gpfs/fileset3/bigfile3 bs=64k count=1000

1000+0 records in.

1000+0 records out.

# dd if=/dev/zero of=/gpfs/fileset4/bigfile4 bs=64k count=1000

1000+0 records in.

1000+0 records out.



+. file management with policy

# cat /home/gpfs/managementpolicy.txt

RULE 'datfiles' DELETE WHERE UPPER(name) like '%.DAT'

RULE 'bigfiles' MIGRATE TO POOL 'pool1' WHERE UPPER(name) like 'BIG%'


# mmapplypolicy fs1 -P /home/gpfs/managementpolicy.txt -I test

   >> test-run the given policy ('-I test' makes no changes)

[I] GPFS Current Data Pool Utilization in KB and %

pool1   133120  20971520        0.634766%

system  308736  20971520        1.472168%

[I] 4032 of 67584 inodes used: 5.965909%.

[I] Loaded policy rules from /home/gpfs/managementpolicy.txt.

Evaluating MIGRATE/DELETE/EXCLUDE rules with CURRENT_TIMESTAMP = 2012-11-27@02:53:48 UTC

parsed 0 Placement Rules, 0 Restore Rules, 2 Migrate/Delete/Exclude Rules,

        0 List Rules, 0 External Pool/List Rules

RULE 'datfiles' DELETE WHERE UPPER(name) like '%.DAT'

RULE 'bigfiles' MIGRATE TO POOL 'pool1' WHERE UPPER(name) like 'BIG%'

[I]2012-11-27@02:53:49.218 Directory entries scanned: 11.

[I] Directories scan: 5 files, 6 directories, 0 other objects, 0 'skipped' files and/or errors.

[I]2012-11-27@02:53:49.231 Sorting 11 file list records.

[I] Inodes scan: 5 files, 6 directories, 0 other objects, 0 'skipped' files and/or errors.

[I]2012-11-27@02:53:49.303 Policy evaluation. 11 files scanned.

[I]2012-11-27@02:53:49.315 Sorting 5 candidate file list records.

[I]2012-11-27@02:53:49.323 Choosing candidate files. 5 records scanned.

[I] Summary of Rule Applicability and File Choices:

 Rule#  Hit_Cnt KB_Hit  Chosen  KB_Chosen       KB_Ill  Rule

  0     1       64000   1       64000   0       RULE 'datfiles' DELETE WHERE(.)

  1     4       256000  3       192000  0       RULE 'bigfiles' MIGRATE TO POOL 'pool1' WHERE(.)


[I] Filesystem objects with no applicable rules: 6.


[I] GPFS Policy Decisions and File Choice Totals:

 Chose to migrate 192000KB: 3 of 4 candidates;

 Chose to premigrate 0KB: 0 candidates;

 Already co-managed 0KB: 0 candidates;

 Chose to delete 64000KB: 1 of 1 candidates;

 Chose to list 0KB: 0 of 0 candidates;

 0KB of chosen data is illplaced or illreplicated;

Predicted Data Pool Utilization in KB and %:

pool1   261120  20971520        1.245117%

system  116736  20971520        0.556641%


# mmapplypolicy fs1 -P /home/gpfs/managementpolicy.txt 

[I] GPFS Current Data Pool Utilization in KB and %

pool1   133120  20971520        0.634766%

system  308736  20971520        1.472168%

[I] 4032 of 67584 inodes used: 5.965909%.

[I] Loaded policy rules from /home/gpfs/managementpolicy.txt.

Evaluating MIGRATE/DELETE/EXCLUDE rules with CURRENT_TIMESTAMP = 2012-11-27@02:54:46 UTC

parsed 0 Placement Rules, 0 Restore Rules, 2 Migrate/Delete/Exclude Rules,

        0 List Rules, 0 External Pool/List Rules

RULE 'datfiles' DELETE WHERE UPPER(name) like '%.DAT'

RULE 'bigfiles' MIGRATE TO POOL 'pool1' WHERE UPPER(name) like 'BIG%'

[I]2012-11-27@02:54:47.697 Directory entries scanned: 11.

[I] Directories scan: 5 files, 6 directories, 0 other objects, 0 'skipped' files and/or errors.

[I]2012-11-27@02:54:47.708 Sorting 11 file list records.

[I] Inodes scan: 5 files, 6 directories, 0 other objects, 0 'skipped' files and/or errors.

[I]2012-11-27@02:54:47.727 Policy evaluation. 11 files scanned.

[I]2012-11-27@02:54:47.759 Sorting 5 candidate file list records.

[I]2012-11-27@02:54:47.761 Choosing candidate files. 5 records scanned.

[I] Summary of Rule Applicability and File Choices:

 Rule#  Hit_Cnt KB_Hit  Chosen  KB_Chosen       KB_Ill  Rule

  0     1       64000   1       64000   0       RULE 'datfiles' DELETE WHERE(.)

  1     4       256000  3       192000  0       RULE 'bigfiles' MIGRATE TO POOL 'pool1' WHERE(.)


[I] Filesystem objects with no applicable rules: 6.


[I] GPFS Policy Decisions and File Choice Totals:

 Chose to migrate 192000KB: 3 of 4 candidates;

 Chose to premigrate 0KB: 0 candidates;

 Already co-managed 0KB: 0 candidates;

 Chose to delete 64000KB: 1 of 1 candidates;

 Chose to list 0KB: 0 of 0 candidates;

 0KB of chosen data is illplaced or illreplicated;

Predicted Data Pool Utilization in KB and %:

pool1   261120  20971520        1.245117%

system  116736  20971520        0.556641%

[I]2012-11-27@02:54:50.399 Policy execution. 4 files dispatched.

[I] A total of 4 files have been migrated, deleted or processed by an EXTERNAL EXEC/script;

        0 'skipped' files and/or errors.


+. External Pool Management

# cat /home/gpfs/expool1.ksh

#!/usr/bin/ksh

# invoked by mmapplypolicy: $1 is the operation (TEST, MIGRATE, ...), $2 is the candidate file list

dt=`date +%h%d%y-%H_%M_%S`

results=/tmp/FileReport_${dt}


echo one $1

if [[ $1 == 'MIGRATE' ]];then

echo Filelist

echo There are `cat $2 | wc -l ` files that match >> ${results}

cat $2 >> ${results}

echo ----

echo - The file list report has been placed in ${results}

echo ----

fi


# cat /home/gpfs/listrule1.txt

RULE EXTERNAL POOL 'externalpoolA' EXEC '/home/gpfs/expool1.ksh'

RULE 'MigToExt' MIGRATE TO POOL 'externalpoolA' WHERE FILE_SIZE > 2


# mmapplypolicy fs1 -P /home/gpfs/listrule1.txt

[I] GPFS Current Data Pool Utilization in KB and %

pool1   261120  20971520        1.245117%

system  116736  20971520        0.556641%

[I] 4031 of 67584 inodes used: 5.964429%.

[I] Loaded policy rules from /home/gpfs/listrule1.txt.

Evaluating MIGRATE/DELETE/EXCLUDE rules with CURRENT_TIMESTAMP = 2012-11-27@04:09:22 UTC

parsed 0 Placement Rules, 0 Restore Rules, 1 Migrate/Delete/Exclude Rules,

        0 List Rules, 1 External Pool/List Rules

RULE EXTERNAL POOL 'externalpoolA' EXEC '/home/gpfs/expool1.ksh'

RULE 'MigToExt' MIGRATE TO POOL 'externalpoolA' WHERE FILE_SIZE > 2

one TEST

[I]2012-11-27@04:09:23.436 Directory entries scanned: 10.

[I] Directories scan: 4 files, 6 directories, 0 other objects, 0 'skipped' files and/or errors.

[I]2012-11-27@04:09:23.447 Sorting 10 file list records.

[I] Inodes scan: 4 files, 6 directories, 0 other objects, 0 'skipped' files and/or errors.

[I]2012-11-27@04:09:23.474 Policy evaluation. 10 files scanned.

[I]2012-11-27@04:09:23.501 Sorting 4 candidate file list records.

[I]2012-11-27@04:09:23.503 Choosing candidate files. 4 records scanned.

[I] Summary of Rule Applicability and File Choices:

 Rule#  Hit_Cnt KB_Hit  Chosen  KB_Chosen       KB_Ill  Rule

  0     4       256000  4       256000  0       RULE 'MigToExt' MIGRATE TO POOL 'externalpoolA' WHERE(.)


[I] Filesystem objects with no applicable rules: 6.


[I] GPFS Policy Decisions and File Choice Totals:

 Chose to migrate 256000KB: 4 of 4 candidates;

 Chose to premigrate 0KB: 0 candidates;

 Already co-managed 0KB: 0 candidates;

 Chose to delete 0KB: 0 of 0 candidates;

 Chose to list 0KB: 0 of 0 candidates;

 0KB of chosen data is illplaced or illreplicated;

Predicted Data Pool Utilization in KB and %:

pool1   5120    20971520        0.024414%

system  116736  20971520        0.556641%

one MIGRATE

[I]2012-11-27@04:09:23.505 Policy execution. 0 files dispatched.

Filelist

There are 4 files that match

----

- The file list report has been placed in /tmp/FileReport_Nov2712-04_09_23

----

[I]2012-11-27@04:09:23.531 Policy execution. 4 files dispatched.

[I] A total of 4 files have been migrated, deleted or processed by an EXTERNAL EXEC/script;

        0 'skipped' files and/or errors.


# more /tmp/FileReport_Nov2712-04_09_23

47621 65538 0   -- /gpfs/fileset1/bigfile1

47623 65538 0   -- /gpfs/fileset5/bigfile2

47624 65538 0   -- /gpfs/fileset3/bigfile3

47625 65538 0   -- /gpfs/fileset4/bigfile4




----------------------------------------------------

+. Replication (per file / per filesystem)

----------------------------------------------------

# mmlsfs fs1 -mrMR

  >> check the replication settings. If the file system was created without replication headroom, recreate it, e.g....

  >> mmcrfs /gpfs fs1 -F pooldesc.txt -B 64k

flag                value                    description

------------------- ------------------------ -----------------------------------

 -m                 1                        Default number of metadata replicas

 -r                 1                        Default number of data replicas

 -M                 2                        Maximum number of metadata replicas

 -R                 2                        Maximum number of data replicas


# mmlsdisk fs1

disk         driver   sector failure holds    holds                            storage

name         type       size   group metadata data  status        availability pool

------------ -------- ------ ------- -------- ----- ------------- ------------ ------------

nsd1         nsd         512      -1 yes      yes   ready         up           system

nsd2         nsd         512      -1 yes      yes   ready         up           system

nsd3         nsd         512      -1 no       yes   ready         up           pool1

nsd4         nsd         512      -1 no       yes   ready         up           pool1

# mmchdisk fs1 change -d "nsd1::::1::"

Verifying file system configuration information ...

mmchdisk: 6027-1371 Propagating the cluster configuration data to all

  affected nodes.  This is an asynchronous process.

# mmchdisk fs1 change -d "nsd2::::2::"

Verifying file system configuration information ...

mmchdisk: 6027-1371 Propagating the cluster configuration data to all

  affected nodes.  This is an asynchronous process.

# mmchdisk fs1 change -d "nsd3::::3::"

Verifying file system configuration information ...

mmchdisk: 6027-1371 Propagating the cluster configuration data to all

  affected nodes.  This is an asynchronous process.

# mmchdisk fs1 change -d "nsd4::::4::"

Verifying file system configuration information ...

mmchdisk: 6027-1371 Propagating the cluster configuration data to all

  affected nodes.  This is an asynchronous process.
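
The four calls differ only in the disk number and failure group, so they can be looped (sketch):

ins=1
while [ ${ins} -le 4 ]
do
        mmchdisk fs1 change -d "nsd${ins}::::${ins}::"
        ((ins=ins+1))
done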


# mmlsdisk fs1

disk         driver   sector failure holds    holds                            storage

name         type       size   group metadata data  status        availability pool

------------ -------- ------ ------- -------- ----- ------------- ------------ ------------

nsd1         nsd         512       1 yes      yes   ready         up           system

nsd2         nsd         512       2 yes      yes   ready         up           system

nsd3         nsd         512       3 no       yes   ready         up           pool1

nsd4         nsd         512       4 no       yes   ready         up           pool1

GPFS: 6027-740 Attention: Due to an earlier configuration change the file system

is no longer properly replicated.


# mmdf fs1

disk                disk size  failure holds    holds              free KB             free KB

name                    in KB    group metadata data        in full blocks        in fragments

--------------- ------------- -------- -------- ----- -------------------- -------------------

Disks in storage pool: system (Maximum disk size allowed is 96 GB)

nsd1                 10485760        1 yes      yes        10427392 ( 99%)          1440 ( 0%)

nsd2                 10485760        2 yes      yes        10427392 ( 99%)           976 ( 0%)

                -------------                         -------------------- -------------------

(pool total)         20971520                              20854784 ( 99%)          2416 ( 0%)


Disks in storage pool: pool1 (Maximum disk size allowed is 96 GB)

nsd3                 10485760        3 no       yes        10356224 ( 99%)           496 ( 0%)

nsd4                 10485760        4 no       yes        10354176 ( 99%)           496 ( 0%)

                -------------                         -------------------- -------------------

(pool total)         20971520                              20710400 ( 99%)           992 ( 0%)


                =============                         ==================== ===================

(data)               41943040                              41565184 ( 99%)          3408 ( 0%)

(metadata)           20971520                              20854784 ( 99%)          2416 ( 0%)

                =============                         ==================== ===================

(total)              41943040                              41565184 ( 99%)          3408 ( 0%)


Inode Information

-----------------

Number of used inodes:            4031

Number of free inodes:           63553

Number of allocated inodes:      67584

Maximum number of inodes:        67584


+. replicating at the file level

# dd if=/dev/zero of=/gpfs/fileset1/bigfile0 bs=64k count=1000

# mmlsattr -L /gpfs/fileset1/bigfile0

file name:            /gpfs/fileset1/bigfile0

metadata replication: 1 max 2

data replication:     1 max 2

immutable:            no

appendOnly:           no

flags:

storage pool name:    system

fileset name:         fileset1

snapshot name:

creation Time:        Tue Nov 27 13:29:47 2012

Windows attributes:   ARCHIVE


# mmchattr -m 2 -r 2 /gpfs/fileset1/bigfile0


# mmlsattr -L /gpfs/fileset1/bigfile0

file name:            /gpfs/fileset1/bigfile0

metadata replication: 2 max 2

data replication:     2 max 2

immutable:            no

appendOnly:           no

flags:                unbalanced

storage pool name:    system

fileset name:         fileset1

snapshot name:

creation Time:        Tue Nov 27 13:29:47 2012

Windows attributes:   ARCHIVE


+. replicating at the file-system level

# dd if=/dev/zero of=/gpfs/fileset1/bigfile1 bs=64k count=1000


# mmlsattr -L /gpfs/fileset1/bigfile1

file name:            /gpfs/fileset1/bigfile1

metadata replication: 1 max 2

data replication:     1 max 2

immutable:            no

appendOnly:           no

flags:

storage pool name:    pool1

fileset name:         fileset1

snapshot name:

creation Time:        Tue Nov 27 11:28:29 2012

Windows attributes:   ARCHIVE


# mmchfs fs1 -m 2 -r 2

   >> change the filesystem's replication attributes to 2

   >> files created after the change are born with 2 replicas,

   >> but files created before the change are only replicated after running mmrestripefs


# mmlsattr -L /gpfs/fileset1/bigfile1

file name:            /gpfs/fileset1/bigfile1

metadata replication: 1 max 2

data replication:     1 max 2

immutable:            no

appendOnly:           no

flags:

storage pool name:    pool1

fileset name:         fileset1

snapshot name:

creation Time:        Tue Nov 27 11:28:29 2012

Windows attributes:   ARCHIVE


# dd if=/dev/zero of=/gpfs/fileset1/bigfile2 bs=64k count=1000

# mmlsattr -L /gpfs/fileset1/bigfile2

file name:            /gpfs/fileset1/bigfile2

metadata replication: 2 max 2

data replication:     2 max 2

immutable:            no

appendOnly:           no

flags:

storage pool name:    system

fileset name:         fileset1

snapshot name:

creation Time:        Tue Nov 27 13:38:29 2012

Windows attributes:   ARCHIVE


# mmrestripefs fs1 -R

   >> files created before the change are only replicated after running mmrestripefs

GPFS: 6027-589 Scanning file system metadata, phase 1 ...

GPFS: 6027-552 Scan completed successfully.

GPFS: 6027-589 Scanning file system metadata, phase 2 ...

Scanning file system metadata for pool1 storage pool

GPFS: 6027-552 Scan completed successfully.

GPFS: 6027-589 Scanning file system metadata, phase 3 ...

GPFS: 6027-552 Scan completed successfully.

GPFS: 6027-589 Scanning file system metadata, phase 4 ...

GPFS: 6027-552 Scan completed successfully.

GPFS: 6027-565 Scanning user file metadata ...

 100.00 % complete on Tue Nov 27 13:39:04 2012

GPFS: 6027-552 Scan completed successfully.



# mmlsattr -L /gpfs/fileset1/bigfile1

file name:            /gpfs/fileset1/bigfile1

metadata replication: 2 max 2

data replication:     2 max 2

immutable:            no

appendOnly:           no

flags:                unbalanced

storage pool name:    pool1

fileset name:         fileset1

snapshot name:

creation Time:        Tue Nov 27 11:28:29 2012

Windows attributes:   ARCHIVE



----------------------------------------------------

+. Snapshot

----------------------------------------------------

# echo "hello world:snap1" > /gpfs/fileset1/snapfile1

# mmcrsnapshot fs1 snap1

Writing dirty data to disk

Quiescing all file system operations

Writing dirty data to disk again

Resuming operations.

Checking fileset ...


# echo "hello world:snap2" >> /gpfs/fileset1/snapfile1

# mmcrsnapshot fs1 snap2

Writing dirty data to disk

Quiescing all file system operations

Writing dirty data to disk again

Resuming operations.

Checking fileset ...


# mmlssnapshot fs1

   >> the snapshots created in the fs1 file system

Snapshots in file system fs1:

Directory                SnapId    Status  Created

snap1                    1         Valid   Tue Nov 27 13:43:56 2012

snap2                    2         Valid   Tue Nov 27 13:45:19 2012

# cat /gpfs/.snapshots/snap1/fileset1/snapfile1

# cat /gpfs/.snapshots/snap2/fileset1/snapfile1

   >> snapshot data is kept under .snapshots in the file system root
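
The two snapshot copies can be compared directly; snap2 should differ from snap1 by the appended line:

# diff /gpfs/.snapshots/snap1/fileset1/snapfile1 /gpfs/.snapshots/snap2/fileset1/snapfile1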


# rm /gpfs/fileset1/snapfile1

# cp /gpfs/.snapshots/snap2/fileset1/snapfile1 /gpfs/fileset1/snapfile1

   >> restore from the snapshot


# mmdelsnapshot fs1 snap1

# mmdelsnapshot fs1 snap2

   >> delete the stored snapshots

# mmlssnapshot fs1





----------------------------------------------------

+. GPFS Multi-Cluster 

  > http://www.ibm.com/developerworks/systems/library/es-multiclustergpfs/

  > 'All intercluster communication is handled by the GPFS daemon, which internally uses Secure Socket Layer (SSL).'

----------------------------------------------------

(cluster1-lpar11g) # mmauth genkey new

Generating RSA private key, 512 bit long modulus

.......++++++++++++

.......++++++++++++

e is 65537 (0x10001)

writing RSA key

mmauth: Command successfully completed


(cluster1-lpar11g) # mmshutdown -a

(cluster1-lpar11g) # mmauth update . -l AUTHONLY

Verifying GPFS is stopped on all nodes ...

mmauth: Command successfully completed


(cluster1-lpar11g) # mmstartup -a 

(cluster1-lpar11g) # rcp lpar11g:/var/mmfs/ssl/id_rsa.pub lpar12g:/tmp/lpar11g_id_rsa.pub


(cluster2-lpar12g) # mmauth genkey new

(cluster2-lpar12g) # mmshutdown -a

(cluster2-lpar12g) # mmauth update . -l AUTHONLY

(cluster2-lpar12g) # mmstartup -a 

(cluster2-lpar12g) # rcp lpar12g:/var/mmfs/ssl/id_rsa.pub lpar11g:/tmp/lpar12g_id_rsa.pub


(cluster1-lpar11g) # mmauth add gpfs_cluster2.lpar12g -k /tmp/lpar12g_id_rsa.pub

   >> the remote cluster must be named with its node name attached, as in gpfs_cluster2.lpar12g

   >> point -k at the id_rsa.pub file generated by mmauth

mmauth: Command successfully completed 


(cluster1-lpar11g) # mmauth grant gpfs_cluster2.lpar12g -f /dev/fs1

mmauth: Granting cluster gpfs_cluster2.lpar12g access to file system fs1:

        access type rw; root credentials will not be remapped.

mmauth: Command successfully completed


(cluster2-lpar12g) # mmremotecluster add gpfs_cluster.lpar11g -n lpar11g,lpar21g -k /tmp/lpar11g_id_rsa.pub

   >> "-n lpar11g,lpar21g" : the contact node list of gpfs_cluster.lpar11g

mmremotecluster: Command successfully completed

mmremotecluster: 6027-1371 Propagating the cluster configuration data to all

  affected nodes.  This is an asynchronous process.


(cluster2-lpar12g) # mmremotefs add remotefs -f fs1 -C gpfs_cluster.lpar11g -T /remotefs

   >> on cluster2, add fs1 of the gpfs_cluster.lpar11g cluster as the local file system 'remotefs'

mmremotefs: 6027-1371 Propagating the cluster configuration data to all

  affected nodes.  This is an asynchronous process.


# mmchconfig opensslibname="/usr/lib/libssl.a(libssl64.so.0.9.8)" -N r07s6vlp1


(cluster2-lpar12g) # mmremotecluster show all

Cluster name:    gpfs_cluster.lpar11g

Contact nodes:   lpar11g,lpar21g

SHA digest:      7dcff72af5b5d2190ebe471e20bcfe8897d0e1cb

File systems:    remotefs (fs1)


(cluster2-lpar12g) # mmremotefs show all

Local Name  Remote Name  Cluster name       Mount Point        Mount Options    Automount  Drive  Priority

remotefs    fs1          gpfs_cluster.lpar11g /remotefs          rw               no           -        0


(cluster2-lpar12g) # mmmount remotefs

(cluster2-lpar12g) # mmdf remotefs


*. if the multicluster setup gets tangled and the gpfs cluster will not start, failing with error '6027-2114'...

     >>> reset the cipherList

# mmchconfig cipherList=""

# mmauth show all

Cluster name:        gCluster5.lpar15 (this cluster)

Cipher list:         (none specified)

SHA digest:          (undefined)

File system access:  (all rw)



----------------------------------------------------

+. GPFS Call-back method

----------------------------------------------------

# cat /home/gpfs/nodedown.sh 

#!/bin/sh

# $1 = %eventNode, $2 = %quorumNodes (wired up via --parms at registration below)

echo "Logging a node leave event at: `date` " >> /home/gpfs/log/nodedown.log

echo "The event occurred on node:" $1  >> /home/gpfs/log/nodedown.log

echo "The quorum nodes are:" $2 >> /home/gpfs/log/nodedown.log


# rcp lpar11g:/home/gpfs/nodedown.sh lpar21g:/home/gpfs/

# rsh lpar21g chmod u+x /home/gpfs/nodedown.sh 


# mmaddcallback NodeDownCallback --command /home/gpfs/nodedown.sh --event nodeLeave --parms %eventNode --parms %quorumNodes

mmaddcallback: 6027-1371 Propagating the cluster configuration data to all

  affected nodes.  This is an asynchronous process.


# mmlscallback

NodeDownCallback

        command       = /home/gpfs/nodedown.sh

        event         = nodeLeave

        parms         = %eventNode %quorumNodes


# mmshutdown -N lpar21g ; cat /home/gpfs/log/nodedown.log
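
To deregister the callback afterwards:

# mmdelcallback NodeDownCallback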




