HPUX Mirrordisk/UX

Published on July 30, 2012 in UNIX/Linux


These days, using some form of RAID protection for disks is common, whether via hardware or software. I recently needed to add space to a customer server for Oracle archiving and found myself using HPUX Mirrordisk/UX to perform software mirroring. The server was rather old (an rp5470 running HPUX 11.11), but the steps I used also apply to newer servers and later versions of HPUX.

The rp5470 has 4 internal disk slots, and in our current configuration only two were being used, so our goal was to insert two new 146GB disks into the unused slots and use Mirrordisk/UX to protect them with RAID1 (mirroring).

Prior to asking the on-site administrator to insert the new disks, I captured a “before” listing of the disk devices using the command below so I could easily tell afterwards which disks were the new ones.

 

# ioscan -funC disk
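One way to make that comparison trivial (the temporary file names here are my own illustration, not from the original session) is to save each listing to a file and diff them once the disks are in:

# ioscan -funC disk > /tmp/disks.before

(new disks inserted here)

# ioscan -funC disk > /tmp/disks.after

# diff /tmp/disks.before /tmp/disks.after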

 

The rp5470 is capable of hot-plugging disks, so there was no need to shut down the server just to add them. I started a terminal window to monitor the system log file with the tail command, and as soon as the on-site administrator inserted the disks, entries appeared in the log file.

 

# tail -f /var/adm/syslog/syslog.log

 

I ran another ioscan afterwards to verify HPUX had detected the new 146GB disks and, as expected, they were listed.

 

# ioscan -funC disk

Class I H/W Path Driver S/W State H/W Type Description

===================================================

disk 20 0/0/1/1.0.0 sdisk CLAIMED DEVICE HP 146 GST3146707LC

disk 19 0/0/2/0.0.0 sdisk CLAIMED DEVICE HP 146 GST3146707LC

 

Although HPUX detected the disks, no device files were created automatically, so the disks were not usable yet. (Had we rebooted the server, the device files would have been created automatically.) I used the insf command to create the required special device files on the fly. Since I knew which hardware paths were needed to build the new device files, I used the -H switch to restrict insf to those paths rather than rebuilding all device files.

 

# insf -v -H 0/0/1/1.0.0

insf: Installing special files for sdisk instance 20 address 0/0/1/1.0.0

making dsk/c1t0d0 b 31 0x010000

making rdsk/c1t0d0 c 188 0x010000

 

# insf -v -H 0/0/2/0.0.0

insf: Installing special files for sdisk instance 19 address 0/0/2/0.0.0

making dsk/c2t0d0 b 31 0x020000

making rdsk/c2t0d0 c 188 0x020000

 

To verify the device files were created, I ran another ioscan; as you can see, the newly created device files for the 146GB disks are listed under disks 19 and 20:

 

# ioscan -funC disk

Class I H/W Path Driver S/W State H/W Type Description

==================================================

disk 20 0/0/1/1.0.0 sdisk CLAIMED DEVICE HP 146 GST3146707LC

/dev/dsk/c1t0d0 /dev/rdsk/c1t0d0

disk 19 0/0/2/0.0.0 sdisk CLAIMED DEVICE HP 146 GST3146707LC

/dev/dsk/c2t0d0 /dev/rdsk/c2t0d0

 

Now the disks are usable.

 

We can now put them under LVM (Logical Volume Manager) control. The first step is to determine which volume group name is available by listing the current LVM group files. As the listing shows, the next available volume group name is /dev/vg14.

# ll /dev/*/group

crw-r----- 1 root sys 64 0x000000 Jan 30 2004 /dev/vg00/group

cr--r--r-- 1 root sys 64 0x010000 Mar 9 2004 /dev/vg06/group

cr--r--r-- 1 root sys 64 0x020000 Mar 9 2004 /dev/vg07/group

cr--r--r-- 1 root sys 64 0x030000 Mar 9 2004 /dev/vg08/group

cr--r--r-- 1 root sys 64 0x040000 Mar 9 2004 /dev/vg09/group

cr--r--r-- 1 root sys 64 0x050000 Dec 22 2008 /dev/vg10/group

cr--r--r-- 1 root sys 64 0x060000 Dec 22 2008 /dev/vg11/group

cr--r--r-- 1 root sys 64 0x070000 Dec 22 2008 /dev/vg12/group

cr--r--r-- 1 root sys 64 0x080000 Dec 22 2008 /dev/vg13/group

 

To put the first disk under LVM control, I ran the pvcreate command:

 

# pvcreate /dev/rdsk/c2t0d0

Physical volume "/dev/rdsk/c2t0d0" has been successfully created.

 

I made the volume group directory (vg14) and set the correct permissions:

# mkdir /dev/vg14

# chmod 755 /dev/vg14

 

Using the mknod command, I created the required character device file. Take note: the "64" is the major number used by LVM 1.0; for LVM 2.0, the major number would be 128. Also note that the first two digits (09) of the minor number (0x090000) normally match the volume group number, but I elected to use the next available minor number instead, following the method of the server's previous administrator.

 

# mknod /dev/vg14/group c 64 0x090000
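If you are ever unsure which major number the LVM driver uses on a particular system, the lsdev command can confirm it. This is a quick check of my own, not part of the original session, and it assumes the driver is named lv, as it is on the HPUX systems I have worked with:

# lsdev -d lv

The output lists the character and block major numbers for the lv driver alongside its class.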

 

Then I set the required ownership and permissions on the group file:

# chown -R root:sys /dev/vg14

# chmod 640 /dev/vg14/group

 

The next step was to create the new volume group (vg14) using disk /dev/dsk/c2t0d0.

# vgcreate -s 16 /dev/vg14 /dev/dsk/c2t0d0

Increased the number of physical extents per physical volume to 8750.

Volume group "/dev/vg14" has been successfully created.

Volume Group configuration for /dev/vg14 has been saved in /etc/lvmconf/vg14.conf

(The -s switch sets the size in megabytes of each physical extent.)

 

Since I was going to create one large logical volume from vg14, I needed to know how much total space was available in the volume group. The vgdisplay command showed 8749 physical extents available, and since the "-s 16" switch to vgcreate set 16MB extents, a little math tells me there was a total of 139984MB available.

 

# vgdisplay -v vg14
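For the record, the arithmetic is simply the free extent count multiplied by the extent size:

8749 extents x 16MB per extent = 139984MB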

 

Using the lvcreate command, I created one logical volume with a size of 139984MB:

# lvcreate -L 139984 /dev/vg14

Logical volume "/dev/vg14/lvol1" has been successfully created with

character device "/dev/vg14/rlvol1".

Logical volume "/dev/vg14/lvol1" has been successfully extended.

Volume Group configuration for /dev/vg14 has been saved in /etc/lvmconf/vg14.conf
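Although I did not capture the output at the time, lvdisplay can be used to confirm the new logical volume's size and allocation:

# lvdisplay /dev/vg14/lvol1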


Next up was to create a filesystem allowing for large files:

# newfs -v -o largefiles /dev/vg14/rlvol1

newfs: /etc/default/fs is used for determining the file system type

/usr/sbin/mkfs -F vxfs -o largefiles /dev/vg14/rlvol1 143343616

version 4 layout

143343616 sectors, 17917952 blocks of size 8192, log size 256 blocks

unlimited inodes, largefiles supported

17917952 data blocks, 17916984 free data blocks

547 allocation units of 32768 blocks, 32768 data blocks

last allocation unit has 26624 data blocks


I created a mount point and set permissions:

# mkdir /dbase8

# chown oracle:oinstall /dbase8

# chmod 755 /dbase8


To make the mount persistent across a reboot, I edited the /etc/fstab file as follows:

/dev/vg14/lvol1 /dbase8 vxfs rw,suid,largefiles,delaylog,datainlog 0 2


A simple mount command mounted the new logical volume, which I verified using the bdf command:

 

# mount /dbase8

# bdf | grep dbase8

/dev/vg14/lvol1 143343616 6664 142217144 0% /dbase8
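As an extra check I did not record in the original session, fsadm can report whether the largefiles option is active on the mounted file system:

# fsadm -F vxfs /dbase8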

 

With the first disk completely configured, I was ready to add the second disk into volume group 14 and configure the mirroring.

As with the first disk, I put the second disk under LVM control using the pvcreate command:

 

# pvcreate /dev/rdsk/c1t0d0

Physical volume "/dev/rdsk/c1t0d0" has been successfully created.

 

Using the vgextend command, I added the second disk into the vg14 volume group:

# vgextend /dev/vg14 /dev/dsk/c1t0d0

Volume group "/dev/vg14" has been successfully extended.

Volume Group configuration for /dev/vg14 has been saved in /etc/lvmconf/vg14.conf
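As a quick sanity check (again, not captured in my original notes), vgdisplay should now report two current and two active physical volumes in the group:

# vgdisplay vg14 | grep -i pv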


The last step was to start the mirroring process with the lvextend command. The command below mirrors the single logical volume we created in volume group 14 onto the second disk:

 

# lvextend -m 1 /dev/vg14/lvol1 /dev/dsk/c1t0d0
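Note that lvextend -m 1 synchronizes the new mirror copy as it runs, so it can take a while on a 146GB logical volume. Once it completes, lvdisplay can confirm the mirror; its output should show "Mirror copies" set to 1, and the verbose form shows the extent layout across both disks:

# lvdisplay -v /dev/vg14/lvol1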

