
Re: LVM



----- "Rich Graves" <rgraves@carleton.edu> wrote:
> 
> I've got my original /opt/zimbra on a 1TB LV, because I assumed, as
> many of you did, that I would find the online expansion features
> useful. That has never proven to be true over the last 18 months.
> Linux LVM cannot take advantage of online expansion of SAN LUNs
> because unlike most commercial UNIXes and Windows, currently supported
> versions of the Linux kernel+device-mapper are unable to rescan and
> use expanded LUNs on the fly (/sys/block/sdX/device/rescan fails if
> anything is holding the device open). You either need to pvmove the
> data to a larger LUN, which can be done live but kills I/O, or append
> a second LUN to the volume group, which increases the risk of user
> error and means you can no longer get a consistent snapshot across the
> whole volume (at least in my SAN, where snapshots are serialized and
> non-atomic).
> 
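
For reference, the on-the-fly rescan Rich mentions is the sysfs attribute on the SCSI device. The attempt, plus a check of the size the kernel currently reports (in 512-byte sectors), looks roughly like this, with sdX standing in for whatever device your LUN shows up as:

      # echo 1 > /sys/block/sdX/device/rescan
      # cat /sys/block/sdX/size

In our case the kernel only picks up the larger size after a reboot, which is why a reboot shows up in the procedure below.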


I'm not sure whether we have a different setup than you, Rich, but we've found that we are able to grow storage on the fly with LVM. It basically works like this for us (a consolidated command sketch follows the list)...

    * Grow the LUN on the SAN.
          o Wait for the LUN to finish growing by checking the 'Jobs / Current Jobs' display until the "Volume Initialization" job is finished.
    * Reboot the host so it sees the new LUN size.
          o (partprobe -s is supposed to do this without a reboot, but it doesn't.)
    * Find the device name:
          o pvs | grep VolumeName
    * Grow the physical volume, which adds the new space to the volume group:
          o pvresize -v /dev/XXX
    * Verify the new size:
          o pvs | grep VolumeName
    * Grow the logical volume:
          o Grow by a specific size: lvextend --size +NNg /dev/VgName/LvName
          o Grow to use all free space: lvextend -l +100%FREE /dev/VgName/LvName
    * Grow the file system:
          o Online method (dangerous?):
                + ext2online /dev/VgName/LvName
          o Offline method (safer?):
                + umount /mountpoint
                + e2fsck -f /dev/VgName/LvName
                + resize2fs /dev/VgName/LvName
                + mount /dev/VgName/LvName /mountpoint
    * Verify the new filesystem size:
          o df -h /mountpoint
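
Put together, with hypothetical names (sdb for the device backing the LUN, VgZimbra/LvZimbra for the volume group and logical volume, /opt/zimbra for the mount point), the whole online sequence is roughly:

      # pvs | grep VgZimbra
      # pvresize -v /dev/sdb
      # lvextend -l +100%FREE /dev/VgZimbra/LvZimbra
      # ext2online /dev/VgZimbra/LvZimbra
      # df -h /opt/zimbra

Substitute your own device, VG/LV, and mount point names, and use the offline resize2fs steps instead of ext2online if you'd rather not resize a mounted filesystem.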

I've always used the online method (marked "dangerous?" by one of my cohorts) and have never had a problem.  One other thing we've been able to do with LVM that has been a real benefit is migrating data to a new LUN...

   1. Find the new physical volume that is associated with the correct LUN number. On the Zimbra servers you can use the MPP (linuxrdac) tool:

      # /opt/mpp/lsvdev

   2. Prepare the physical volume with PVCREATE.

      # pvcreate /dev/sdX

   3. Extend the volume group onto the new physical volume with VGEXTEND.

      # vgextend /dev/VolGroupName /dev/sdX

   4. Use LVDISPLAY to make sure you are moving from the right physical volume.

      # lvdisplay /dev/VolGroupName/LogVolName -m

      Example Results
      ===========
        --- Logical volume ---
        LV Name                /dev/VgMbs03Backup/LvMbs03Backup
        VG Name                VgMbs03Backup
        LV UUID                0vZQx3-5A22-a4ZO-4VmV-2naM-jwoi-yc6r6k
        LV Write Access        read/write
        LV Status              available
        # open                 0
        LV Size                580.00 GB
        Current LE             148479
        Segments               1
        Allocation             inherit
        Read ahead sectors     0
        Block device           253:6
         
        --- Segments ---
        Logical extent 0 to 148478:
          Type                linear
          Physical volume     /dev/sdab
          Physical extents    0 to 148478

   5. Move the data from the old physical volume to the new one using PVMOVE.

      # pvmove -i 60 -v /dev/sdZ /dev/sdX

      -i 60    : Show progress every 60 seconds
      -v       : Verbose
      /dev/sdZ : Physical volume we are moving from
      /dev/sdX : Physical volume we are moving to

   6. When the move is complete, use VGREDUCE to remove the old physical volume from the volume group (a quick verification sketch follows).

      # vgreduce /dev/VolGroupName /dev/sdZ
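
To double-check the result (the names here follow the examples above, so substitute your own), you can confirm that the logical volume's segments now live on the new physical volume, that the old one is out of the volume group, and then optionally clear the old LUN's LVM label:

      # lvdisplay /dev/VolGroupName/LogVolName -m
      # vgdisplay -v VolGroupName
      # pvremove /dev/sdZ

The pvremove step is optional, but it keeps the retired LUN from showing up later as a stray physical volume in pvs.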


> 
> LVM doesn't hurt performance, but can add substantial overhead on the
> sysadmin's brain, especially in time of crisis. It isn't supported by
> Zimbra's cluster scripts, though that's easy to fix. I ended up
> deciding that clustering does more harm than good anyway.

We're pretty close here to deciding that clustering is not worth it either... at least Red Hat clustering isn't.  It has a tendency to arbitrarily decide that the application is down (when it's not) and rip the file system out from under the running application, which seems to have caused us some file corruption.  Maybe it's unrelated, but applications generally don't like it when their file systems disappear.  Manually recovering from these "cluster hiccups" (which can take down the whole cluster, not just the one node) can be a nightmare, and we think it would be easier and faster to bring the volumes up manually on a standby server and start Zimbra by hand if there was ever a problem.  We haven't completely decided on that yet, though.
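
Bringing things up by hand would look roughly like this (a sketch only; the VG/LV and mount point names are placeholders, and it assumes the standby host can already see the same LUNs):

      # vgchange -ay VgZimbra
      # mount /dev/VgZimbra/LvZimbra /opt/zimbra
      # su - zimbra -c "zmcontrol start"

That's a lot less machinery than the cluster stack, which is part of the appeal.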

Matt