Typically, creating a clone of a LUN and mounting the file system on the original server is a trivial process. The process becomes more complex when volume management is involved. Server-based volume management software provides many benefits, but it complicates matters where LUN clones are used. In the case of IBM's Logical Volume Manager (LVM), mounting clones on the same server results in duplicate volume group information. Luckily, AIX allows LVM to have duplicate physical volume IDs (PVIDs) for a "short period" of time without crashing the system. I'm not sure exactly what a "short period" of time equates to, but in my testing I didn't experience a crash.
The process to "import" a cloned volume group for the first time is disruptive in that the original volume group must be exported. It is necessary to have the original volume group exported so that the physical volume IDs (PVIDs) on the cloned LUNs can be regenerated. The recreatevg command is used to generate new PVIDs and to rename the volume names in the cloned volume group. Note that the /etc/filesystem entries need to be manually updated because the recreatevg command prepends /fs to the original mount point names for the clones. Once the /etc/filesystem file is updated, the original volume group can be re-imported with importvg.
Subsequent refreshes of previously imported clones can be accomplished without exporting the original because ODM remembers the previous PVID to hdisk# association. It does not reread the actual PVID from the disk until an operation is performed against the volume group. The recreatevg command will change the PVIDs and volume names on the cloned volume group without affecting the source volume group.
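Before walking through the details, here is a minimal sketch of the command flow for the initial import. The names datavg, dataclvg, hdisk2, and hdisk3 are hypothetical placeholders; a full session with real names appears later in this post.
umount /data                     # unmount the file systems in the original volume group
varyoffvg datavg                 # vary off the original volume group
exportvg datavg                  # export it so the clone's PVIDs can be regenerated
cfgmgr                           # discover the cloned LUN (it shows up with a duplicate PVID)
recreatevg -y dataclvg hdisk3    # assign new PVIDs and rename the logical volumes on the clone
importvg -y datavg hdisk2        # re-import the original volume group under its original name
# edit /etc/filesystems to fix the /fs-prefixed mount points, then mount the clone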
Process for initial import of cloned volume group:
- Clone the LUNs comprising the volume group
  - Make sure to clone in a consistent state
- Unmount and export the original volume groups
  - Use df to associate file systems with volumes
  - Unmount the file systems
  - Use lsvg to list the volume groups
  - Use lspv to view the PVIDs for each disk associated with the volume groups
  - Remember the volume group names and which disks belong to each VG that will be exported
  - Use varyoffvg to offline each VG
  - Use exportvg to export the VGs
- Bring in the new VGs
  - Execute cfgmgr to discover the new disks
  - Use lspv to identify the duplicate PVIDs
  - Execute recreatevg on each new VG, listing all disks associated with the volume group and using the -y option to name the VG
  - Use lspv to verify there are no duplicate PVIDs
- Import the original volume groups
  - Execute importvg with the name of one member hdisk and the -y option with the original name
  - Mount the original file systems
- Mount the cloned file systems
  - Make mount point directories for the cloned file systems
  - Edit /etc/filesystems to update the mount points for the cloned VG file systems
  - Use the mount command to mount the cloned file systems
Process to refresh cloned volume group:
- Unmount and vary off the cloned volume groups to be refreshed
  - Execute umount on the associated file systems
  - Use varyoffvg to offline each target VG
- Refresh the clones on the storage system
- Bring in the refreshed clone VGs
  - Execute cfgmgr
  - Use lspv and notice that the ODM remembers the hdisk/PVID and volume group associations
  - Use exportvg to export the clone VGs, noting the hdisk numbers for each VG
  - Execute recreatevg on each refreshed VG, naming all disks associated with the volume group and using the -y option to give the VG its original name
  - Now lspv displays new unique PVIDs for each hdisk
- Mount the refreshed clone file systems (a condensed command sketch follows this list)
  - Edit /etc/filesystems to correct the mount points for each volume
  - Issue the mount command to mount the refreshed clones
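Condensed into commands, a refresh cycle might look like the sketch below. The clone VG and hdisk names (dataclvg2 on hdisk13 through hdisk16) are taken from the initial-import session that follows; whether the refreshed clone comes back on the same hdisk numbers depends on the environment, so treat this as a sketch rather than a verbatim session.
umount /dataclone1test2                                    # unmount the cloned file systems
varyoffvg dataclvg2                                        # vary off the clone VG before refreshing
# ... refresh the clone LUNs on the storage system ...
cfgmgr                                                     # rediscover the refreshed LUNs
lspv                                                       # ODM still shows the old hdisk/PVID/VG associations
exportvg dataclvg2                                         # export the clone VG, noting its hdisk numbers
recreatevg -y dataclvg2 hdisk13 hdisk14 hdisk15 hdisk16    # regenerate PVIDs under the original clone VG name
# edit /etc/filesystems to correct the mount points, then remount
mount /dataclone1test2
The session below walks through the initial import on a test system.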
bash-3.00# df
Filesystem 512-blocks Free %Used Iused %Iused Mounted on
/dev/hd4 1048576 594456 44% 13034 17% /
/dev/hd2 20971520 5376744 75% 49070 8% /usr
/dev/hd9var 2097152 689152 68% 11373 13% /var
/dev/hd3 2097152 1919664 9% 455 1% /tmp
/dev/hd1 1048576 42032 96% 631 12% /home
/dev/hd11admin 524288 523488 1% 5 1% /admin
/proc - - - - - /proc
/dev/hd10opt 4194304 3453936 18% 9152 3% /opt
/dev/livedump 524288 523552 1% 4 1% /var/adm/ras/livedump
/dev/pocdbbacklv 626524160 578596720 8% 8 1% /proddbback
/dev/fspoclv 1254359040 1033501496 18% 2064 1% /cl3data
/dev/fspocdbloglv 206438400 193491536 7% 110 1% /cl3logs
/dev/poclv 1254359040 1033501480 18% 2064 1% /proddb
/dev/pocdbloglv 206438400 193158824 7% 115 1% /proddblog
/dev/datalv2 836239360 615477152 27% 2064 1% /datatest2
/dev/loglv2 208404480 195088848 7% 118 1% /logtest2
bash-3.00#
bash-3.00$ umount /datatest2/
bash-3.00# umount /logtest2/
bash-3.00# lsvg
rootvg
pocdbbackvg
dataclvg
logsclvg
pocvg
pocdblogvg
datavg2
logvg2
bash-3.00# varyoffvg datavg2
NOTE: remember the hdisk and VG names for the volume groups that will be exported.
bash-3.00# lspv
hdisk0 00f62aa942cec382 rootvg active
hdisk1 none None
hdisk2 00f62aa997091888 pocvg active
hdisk3 00f62aa9a608de30 dataclvg active
hdisk4 00f62aa9a60970fc logsclvg active
hdisk10 00f62aa9972063c0 pocdblogvg active
hdisk11 00f62aa997435bfa pocdbbackvg active
hdisk5 00f62aa9a6798a0c datavg2
hdisk6 00f62aa9a6798acf datavg2
hdisk7 00f62aa9a6798b86 datavg2
hdisk8 00f62aa9a6798c36 datavg2
hdisk9 00f62aa9a67d6c9c logvg2 active
hdisk12 00f62aa9a67d6d51 logvg2 active
bash-3.00# varyoffvg logvg2
bash-3.00# lsvg
rootvg
pocdbbackvg
dataclvg
logsclvg
pocvg
pocdblogvg
datavg2
logvg2
bash-3.00# exportvg datavg2
bash-3.00# exportvg logvg2
bash-3.00#
bash-3.00# cfgmgr
bash-3.00# lspv
hdisk0 00f62aa942cec382 rootvg active
hdisk1 none None
hdisk2 00f62aa997091888 pocvg active
hdisk3 00f62aa9a608de30 dataclvg active
hdisk4 00f62aa9a60970fc logsclvg active
hdisk10 00f62aa9972063c0 pocdblogvg active
hdisk11 00f62aa997435bfa pocdbbackvg active
hdisk5 00f62aa9a6798a0c None
hdisk6 00f62aa9a6798acf None
hdisk7 00f62aa9a6798b86 None
hdisk8 00f62aa9a6798c36 None
hdisk13 00f62aa9a6798a0c None
hdisk14 00f62aa9a6798acf None
hdisk15 00f62aa9a6798b86 None
hdisk9 00f62aa9a67d6c9c None
hdisk12 00f62aa9a67d6d51 None
hdisk16 00f62aa9a6798c36 None
hdisk17 00f62aa9a67d6c9c None
hdisk18 00f62aa9a67d6d51 None
bash-3.00#
Notice the duplicate PVIDs. Use the recreatevg command naming all of the new disks in each volume group of the newly mapped clones.
bash-3.00# recreatevg -y dataclvg2 hdisk13 hdisk14 hdisk15 hdisk16
dataclvg2
bash-3.00# recreatevg -y logclvg2 hdisk17 hdisk18
logclvg2
bash-3.00# importvg -y datavg2 hdisk5
datavg2
bash-3.00# importvg -y logvg2 hdisk9
logvg2
bash-3.00# lspv
hdisk0 00f62aa942cec382 rootvg active
hdisk1 none None
hdisk2 00f62aa997091888 pocvg active
hdisk3 00f62aa9a608de30 dataclvg active
hdisk4 00f62aa9a60970fc logsclvg active
hdisk10 00f62aa9972063c0 pocdblogvg active
hdisk11 00f62aa997435bfa pocdbbackvg active
hdisk5 00f62aa9a6798a0c datavg2 active
hdisk6 00f62aa9a6798acf datavg2 active
hdisk7 00f62aa9a6798b86 datavg2 active
hdisk8 00f62aa9a6798c36 datavg2 active
hdisk13 00f62aa9c63a5ec2 dataclvg2 active
hdisk14 00f62aa9c63a5f9b dataclvg2 active
hdisk15 00f62aa9c63a6070 dataclvg2 active
hdisk9 00f62aa9a67d6c9c logvg2 active
hdisk12 00f62aa9a67d6d51 logvg2 active
hdisk16 00f62aa9c63a6150 dataclvg2 active
hdisk17 00f62aa9c63bf6b2 logclvg2 active
hdisk18 00f62aa9c63bf784 logclvg2 active
bash-3.00#
Notice the PVID numbers are all unique now.
Remount the original file systems:
bash-3.00# mount /datatest2
bash-3.00# mount /logtest2
bash-3.00#
Create new mount points and edit /etc/filesystems:
bash-3.00# mkdir /dataclone1test2
bash-3.00# mkdir /logclone1test2
bash-3.00# cat /etc/filesystems
…
/fs/datatest2:
        dev             = /dev/fsdatalv2
        vfs             = jfs2
        log             = /dev/fsloglv03
        mount           = true
        check           = false
        options         = rw
        account         = false

/fs/logtest2:
        dev             = /dev/fsloglv2
        vfs             = jfs2
        log             = /dev/fsloglv04
        mount           = true
        check           = false
        options         = rw
        account         = false

/datatest2:
        dev             = /dev/datalv2
        vfs             = jfs2
        log             = /dev/loglv03
        mount           = true
        check           = false
        options         = rw
        account         = false

/logtest2:
        dev             = /dev/loglv2
        vfs             = jfs2
        log             = /dev/loglv04
        mount           = true
        check           = false
        options         = rw
        account         = false
bash-3.00#
Notice that the cloned duplicates have /fs prepended to their mount points by the recreatevg command. The logical volume names were also changed to prevent duplicate entries in /dev. Update /etc/filesystems with the mount points created previously.
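For example, after the edit the stanza for the cloned data file system would look roughly like this (only the stanza's mount point changes; the device and log entries stay as recreatevg created them):
/dataclone1test2:
        dev             = /dev/fsdatalv2
        vfs             = jfs2
        log             = /dev/fsloglv03
        mount           = true
        check           = false
        options         = rw
        account         = false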
bash-3.00# mount /dataclone1test2
Replaying log for /dev/fsdatalv2.
bash-3.00# mount /logclone1test2
Replaying log for /dev/fsloglv2.
bash-3.00# df
Filesystem 512-blocks Free %Used Iused %Iused Mounted on
/dev/hd4 1048576 594248 44% 13064 17% /
/dev/hd2 20971520 5376744 75% 49070 8% /usr
/dev/hd9var 2097152 688232 68% 11373 13% /var
/dev/hd3 2097152 1919664 9% 455 1% /tmp
/dev/hd1 1048576 42032 96% 631 12% /home
/dev/hd11admin 524288 523488 1% 5 1% /admin
/proc - - - - - /proc
/dev/hd10opt 4194304 3453936 18% 9152 3% /opt
/dev/livedump 524288 523552 1% 4 1% /var/adm/ras/livedump
/dev/pocdbbacklv 626524160 578596720 8% 8 1% /proddbback
/dev/fspoclv 1254359040 1033501496 18% 2064 1% /cl3data
/dev/fspocdbloglv 206438400 193491536 7% 110 1% /cl3logs
/dev/poclv 1254359040 1033501480 18% 2064 1% /proddb
/dev/pocdbloglv 206438400 193158824 7% 115 1% /proddblog
/dev/datalv2 836239360 615477152 27% 2064 1% /datatest2
/dev/loglv2 208404480 195088848 7% 118 1% /logtest2
/dev/fsdatalv2 836239360 615477160 27% 2064 1% /dataclone1test2
/dev/fsloglv2 208404480 195744288 7% 114 1% /logclone1test2
bash-3.00#