Creating ASM on HP-UX SLVM (Shared LVM)

What is a Shared Logical Volume Manager?

Shared LVM (SLVM) is the HP-UX Logical Volume Manager feature that supports volume sharing across multiple systems: a single volume group is activated and accessible on multiple cluster nodes at the same time.

Assumption: HP Serviceguard Extensions for RAC is installed and configured on all cluster nodes.

Creating ASM on SLVM is similar to creating ASM on LVM; the crux of the operation is creating the shared volume groups.

The following steps can be used to create them:

STEP 1
======

Create Physical Volume (PV)

[root@ /]pvcreate -f /dev/rdsk/c5t1d2
Physical volume “/dev/rdsk/c5t1d2” has been successfully created.

[root@ /]pvcreate -f /dev/rdsk/c5t1d3
Physical volume “/dev/rdsk/c5t1d3” has been successfully created.

[root@ /]pvcreate -f /dev/rdsk/c5t1d4
Physical volume “/dev/rdsk/c5t1d4” has been successfully created.
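The three pvcreate calls above can also be scripted in a loop. A minimal sketch (dry run; the disk names are the ones used in this example, so adjust them for your hardware, and drop the leading "echo" to actually initialize the disks, which requires root on HP-UX):

```shell
# Dry-run sketch of Step 1: print the pvcreate command for each shared disk.
# Disk names are from this example; remove "echo" to really run pvcreate.
for disk in c5t1d2 c5t1d3 c5t1d4; do
  echo pvcreate -f /dev/rdsk/$disk
done
```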

STEP 2
======

Create Volume Group (VG)

[root@ /] vgcreate /dev/vgasmdata /dev/dsk/c5t1d2
Increased the number of physical extents per physical volume to 12800.
Volume group “/dev/vgasmdata” has been successfully created.
Volume Group configuration for /dev/vgasmdata has been saved in /etc/lvmconf/vgasmdata.conf

[root@ /] vgextend /dev/vgasmdata /dev/dsk/c5t1d3
Volume group “/dev/vgasmdata” has been successfully extended.
Volume Group configuration for /dev/vgasmdata has been saved in /etc/lvmconf/vgasmdata.conf

[root@ /] vgcreate /dev/vgasmfra /dev/dsk/c5t1d4
Increased the number of physical extents per physical volume to 12800.
Volume group “/dev/vgasmfra” has been successfully created.
Volume Group configuration for /dev/vgasmfra has been saved in /etc/lvmconf/vgasmfra.conf

[root@ /] vgdisplay
--- Volume groups ---

VG Name                     /dev/vgasmdata
VG Write Access             read/write
VG Status                   available
Max LV                      255
Cur LV                      2
Open LV                     2
Max PV                      16
Cur PV                      2
Act PV                      2
Max PE per PV               12800
VGDA                        4
PE Size (Mbytes)            4
Total PE                    25598
Alloc PE                    0
Free PE                     25598
Total PVG                   0
Total Spare PVs             0
Total Spare PVs in use      0
VG Version                  1.0
VG Max Size                 800g
VG Max Extents              204800

VG Name                     /dev/vgasmfra
VG Write Access             read/write
VG Status                   available
Max LV                      255
Cur LV                      1
Open LV                     1
Max PV                      16
Cur PV                      1
Act PV                      1
Max PE per PV               12800
VGDA                        2
PE Size (Mbytes)            4
Total PE                    12799
Alloc PE                    0
Free PE                     12799
Total PVG                   0
Total Spare PVs             0
Total Spare PVs in use      0
VG Version                  1.0
VG Max Size                 800g
VG Max Extents              204800

STEP 3
======

Create Logical Volumes (LV) for each of the ASM physical volumes:

[root@ /] lvcreate -n oradata1 vgasmdata
Logical volume “/dev/vgasmdata/oradata1” has been successfully created with
character device “/dev/vgasmdata/roradata1”.
Volume Group configuration for /dev/vgasmdata has been saved in /etc/lvmconf/vgasmdata.conf

[root@ /] lvcreate -n oradata2 vgasmdata
Logical volume “/dev/vgasmdata/oradata2” has been successfully created with
character device “/dev/vgasmdata/roradata2”.
Volume Group configuration for /dev/vgasmdata has been saved in /etc/lvmconf/vgasmdata.conf

[root@ /] lvcreate -n orafra1 vgasmfra
Logical volume “/dev/vgasmfra/orafra1” has been successfully created with
character device “/dev/vgasmfra/rorafra1”.
Volume Group configuration for /dev/vgasmfra has been saved in /etc/lvmconf/vgasmfra.conf

[root@ /] vgdisplay -v  vgasmdata
--- Volume groups ---
VG Name                     /dev/vgasmdata
VG Write Access             read/write
VG Status                   available
Max LV                      255
Cur LV                      2
Open LV                     2
Max PV                      16
Cur PV                      2
Act PV                      2
Max PE per PV               12800
VGDA                        4
PE Size (Mbytes)            4
Total PE                    25598
Alloc PE                    0
Free PE                     25598
Total PVG                   0
Total Spare PVs             0
Total Spare PVs in use      0
VG Version                  1.0
VG Max Size                 800g
VG Max Extents              204800

   --- Logical volumes ---
   LV Name                     /dev/vgasmdata/oradata1
   LV Status                   available/syncd
   LV Size (Mbytes)            0
   Current LE                  0
   Allocated PE                0
   Used PV                     0

   LV Name                     /dev/vgasmdata/oradata2
   LV Status                   available/syncd
   LV Size (Mbytes)            0
   Current LE                  0
   Allocated PE                0
   Used PV                     0

   --- Physical volumes ---
   PV Name                     /dev/dsk/c5t1d2
   PV Status                   available
   Total PE                    12799
   Free PE                     12799
   Autoswitch                  On
   Proactive Polling           On

   PV Name                     /dev/dsk/c5t1d3
   PV Status                   available
   Total PE                    12799
   Free PE                     12799
   Autoswitch                  On
   Proactive Polling           On

[root@ /] vgdisplay -v  vgasmfra
--- Volume groups ---
VG Name                     /dev/vgasmfra
VG Write Access             read/write
VG Status                   available
Max LV                      255
Cur LV                      1
Open LV                     1
Max PV                      16
Cur PV                      1
Act PV                      1
Max PE per PV               12800
VGDA                        2
PE Size (Mbytes)            4
Total PE                    12799
Alloc PE                    0
Free PE                     12799
Total PVG                   0
Total Spare PVs             0
Total Spare PVs in use      0
VG Version                  1.0
VG Max Size                 800g
VG Max Extents              204800

   --- Logical volumes ---
   LV Name                     /dev/vgasmfra/orafra1
   LV Status                   available/syncd
   LV Size (Mbytes)            0
   Current LE                  0
   Allocated PE                0
   Used PV                     0

   --- Physical volumes ---
   PV Name                     /dev/dsk/c5t1d4
   PV Status                   available
   Total PE                    12799
   Free PE                     12799
   Autoswitch                  On
   Proactive Polling           On

STEP 4
======

Mark each LV as contiguous

[root@ /] lvchange -C y /dev/vgasmdata/oradata1
Logical volume “/dev/vgasmdata/oradata1” has been successfully changed.
Volume Group configuration for /dev/vgasmdata has been saved in /etc/lvmconf/vgasmdata.conf

[root@ /] lvchange -C y /dev/vgasmdata/oradata2
Logical volume “/dev/vgasmdata/oradata2” has been successfully changed.
Volume Group configuration for /dev/vgasmdata has been saved in /etc/lvmconf/vgasmdata.conf

[root@ /] lvchange -C y /dev/vgasmfra/orafra1
Logical volume “/dev/vgasmfra/orafra1” has been successfully changed.
Volume Group configuration for /dev/vgasmfra has been saved in /etc/lvmconf/vgasmfra.conf

-C contiguous:

Specify the contiguous allocation policy. Physical extents are allocated in ascending order without any gap between adjacent extents and all extents are contained in a single physical volume.  Contiguous can have one of the following values:

y    Set a contiguous allocation policy.
n    Do not set a contiguous allocation policy.

STEP 5
======

Extend each logical volume to use the space on its PV (here 12700 of the 12799 free extents, leaving a small margin)

NOTE: A few things to consider before extending the LVs:

a) Logical volumes must not be striped or mirrored.
b) A logical volume must not span multiple PVs (each LV must be contained within a single PV).

[root@ /] lvextend -l 12700  /dev/vgasmdata/oradata1 /dev/dsk/c5t1d2
Logical volume “/dev/vgasmdata/oradata1” has been successfully extended.
Volume Group configuration for /dev/vgasmdata has been saved in /etc/lvmconf/vgasmdata.conf

[root@ /] lvextend -l 12700 /dev/vgasmdata/oradata2 /dev/dsk/c5t1d3
Logical volume “/dev/vgasmdata/oradata2” has been successfully extended.
Volume Group configuration for /dev/vgasmdata has been saved in /etc/lvmconf/vgasmdata.conf

[root@ /] lvextend -l 12700 /dev/vgasmfra/orafra1 /dev/dsk/c5t1d4
Logical volume “/dev/vgasmfra/orafra1” has been successfully extended.
Volume Group configuration for /dev/vgasmfra has been saved in /etc/lvmconf/vgasmfra.conf

-l le_number:

Increase the space, specified in logical extents, allocated to the logical volume or snapshot volume.
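The extent count ties directly to the diskgroup sizes used later: with the 4 MB PE size reported by vgdisplay, 12700 extents come to 50800 MB, which is exactly the size passed to "create diskgroup" in Step 12. A quick sanity check:

```shell
# Sanity check: 12700 logical extents at a 4 MB extent size (the PE Size
# shown by vgdisplay above) give each LV 50800 MB -- the same value used
# as the ASM disk size in Step 12.
pe_size_mb=4
extents=12700
lv_size_mb=$(( extents * pe_size_mb ))
echo "LV size: ${lv_size_mb} MB"   # prints: LV size: 50800 MB
```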

STEP 6
======

Configure the IO timeout parameter (-t, specified in seconds) for each logical volume

[root@ /] lvchange -t 60 /dev/vgasmdata/oradata1
Logical volume “/dev/vgasmdata/oradata1” has been successfully changed.
Volume Group configuration for /dev/vgasmdata has been saved in /etc/lvmconf/vgasmdata.conf

[root@ /] lvchange -t 60 /dev/vgasmdata/oradata2
Logical volume “/dev/vgasmdata/oradata2” has been successfully changed.
Volume Group configuration for /dev/vgasmdata has been saved in /etc/lvmconf/vgasmdata.conf

[root@ /] lvchange -t 60 /dev/vgasmfra/orafra1
Logical volume “/dev/vgasmfra/orafra1” has been successfully changed.
Volume Group configuration for /dev/vgasmfra has been saved in /etc/lvmconf/vgasmfra.conf

STEP 7
======

Check to see if your volume groups are properly created and available:

# strings /etc/lvmtab

STEP 8
======

Export the volume group configuration

a) De-activate the volume groups

[root@ /] vgchange -a n /dev/vgasmdata
Volume group “/dev/vgasmdata” has been successfully changed.

[root@ /] vgchange -a n /dev/vgasmfra
Volume group “/dev/vgasmfra” has been successfully changed.

b) Create the volume group map file

[root@ /] vgexport -v -p -s -m vgasmdata.map /dev/vgasmdata
Beginning the export process on Volume Group “/dev/vgasmdata”.
/dev/dsk/c5t1d2
/dev/dsk/c5t1d3
vgexport: Preview of vgexport on volume group “/dev/vgasmdata” succeeded.

[root@ /] vgexport -v -p -s -m vgasmfra.map /dev/vgasmfra
Beginning the export process on Volume Group “/dev/vgasmfra”.
/dev/dsk/c5t1d4
vgexport: Preview of vgexport on volume group “/dev/vgasmfra” succeeded.

[root@/] cat *.map

VGID 409dd9844cc8a1f7
1 oradata1
2 oradata2

VGID 409dd9844cc8a206
1 orafra1

c) Copy the map files to all the other nodes in the cluster:

[root@ /] scp  *.map node2:/tmp
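With more than two nodes, the copy can be looped over the remaining hosts. A sketch (dry run; "node2" and "node3" are placeholder host names, and the leading "echo" must be dropped to actually copy):

```shell
# Dry-run sketch: copy both map files to every other cluster node.
# Host names are placeholders; remove "echo" to really run scp.
for node in node2 node3; do
  echo scp vgasmdata.map vgasmfra.map $node:/tmp
done
```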

STEP 9
======

Import the volume groups on the other nodes in the cluster

[root@ /] vgimport -v -s -N -m /tmp/vgasmdata.map /dev/vgasmdata
Beginning the import process on Volume Group “/dev/vgasmdata”.
Logical volume “/dev/vgasmdata/oradata1” has been successfully created
with lv number 1.
Logical volume “/dev/vgasmdata/oradata2” has been successfully created
with lv number 2.
vgimport: Volume group “/dev/vgasmdata” has been successfully created.
Warning: A backup of this volume group may not exist on this machine.
Please remember to take a backup using the vgcfgbackup command after activating the volume group.

[root@ /] vgimport -v -s -N -m /tmp/vgasmfra.map /dev/vgasmfra
Beginning the import process on Volume Group “/dev/vgasmfra”.
Logical volume “/dev/vgasmfra/orafra1” has been successfully created
with lv number 1.
vgimport: Volume group “/dev/vgasmfra” has been successfully created.
Warning: A backup of this volume group may not exist on this machine.
Please remember to take a backup using the vgcfgbackup command after activating the volume group.
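Note the warning printed by each vgimport: once the volume groups have been activated on this node (the shared activation in Step 11), take a configuration backup with vgcfgbackup. A sketch (dry run; drop "echo" to execute, which requires root on HP-UX):

```shell
# Dry-run sketch: back up the imported VG configurations, as the vgimport
# warning advises. Run after the shared activation in Step 11.
for vg in /dev/vgasmdata /dev/vgasmfra; do
  echo vgcfgbackup $vg
done
```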

b) Check whether the devices were imported:

[root@ /] strings /etc/lvmtab

/dev/vgasmdata
/dev/disk/disk61
/dev/disk/disk62
/dev/vgasmfra
/dev/disk/disk63

NOTE: This output may differ from the output on node1 (shown below) because device file names can vary between nodes, but the underlying disks are the same (e.g. /dev/disk/disk61 -> /dev/dsk/c5t1d2).

[root@ /] strings /etc/lvmtab

/dev/vgasmdata
/dev/dsk/c5t1d2
/dev/dsk/c5t1d3
/dev/vgasmfra
/dev/dsk/c5t1d4

STEP 10
=======

Change the permissions of the ASM volume groups vgasmdata and vgasmfra and of all their raw logical volumes, and change the owner to the Oracle software owner (here ora10gr2:dba). Do this ON ALL NODES.

[root@ /] chmod 777 /dev/vgasmdata
[root@ /] chmod 660 /dev/vgasmdata/r*
[root@ /] chown ora10gr2:dba /dev/vgasmdata/r*

[root@ /] chmod 777 /dev/vgasmfra
[root@ /] chmod 660 /dev/vgasmfra/r*
[root@ /] chown ora10gr2:dba /dev/vgasmfra/r*

STEP 11
=======

Make all RAC volume groups shareable (one-time activity)

[root@ /] vgchange -S y -c y /dev/vgasmdata
Configuration change completed.
Volume group “/dev/vgasmdata” has been successfully changed.

[root@ /] vgchange -S y -c y /dev/vgasmfra
Configuration change completed.
Volume group “/dev/vgasmfra” has been successfully changed.

The following steps must be repeated each time the cluster is started (ON ALL NODES):

[root@ /] vgchange -a s /dev/vgasmdata
Activated volume group in Shared Mode.
This node is the Server.
Volume group “/dev/vgasmdata” has been successfully changed.

[root@ /] vgchange -a s /dev/vgasmfra
Activated volume group in Shared Mode.
This node is the Server.
Volume group “/dev/vgasmfra” has been successfully changed.

STEP 12
=======

Change the asm_diskstring setting

SQL> alter system set asm_diskstring='/dev/vg*/*'
/

Create the required diskgroups

SQL> create diskgroup fra external redundancy disk '/dev/vgasmfra/rorafra1' size 50800m
/
SQL> create diskgroup data external redundancy disk '/dev/vgasmdata/roradata1' size 50800m, '/dev/vgasmdata/roradata2' size 50800m
/
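As a quick check, the mounted diskgroups and their sizes can be queried from the ASM instance (v$asm_diskgroup is the standard dynamic view; the reported sizes will depend on your storage):

```sql
-- Sketch: confirm both diskgroups are mounted, from the ASM instance.
select name, state, total_mb, free_mb from v$asm_diskgroup;
```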
