
Re: LVM



Tom,

We've got Sun StorageTek arrays, so we use the Linux RDAC drivers for multi-pathing.  I don't know if that's the answer to this problem for you or not.  It's hard to find, but this Sun document refers to it.

http://docs.sun.com/source/820-4738-10/chapsing.html

I found references to it in IBM's storage stuff too.

Matt


----- Original Message -----
From: "Tom Golson" <tgolson@tamu.edu>
To: "John Fulton" <fultonj@lafayette.edu>
Cc: "zimbra-hied-admins" <zimbra-hied-admins@sfu.ca>
Sent: Monday, October 27, 2008 11:02:07 AM GMT -06:00 US/Canada Central
Subject: Re: LVM

Hi John,
	In my testing, it looks like the native multi-pathing in Linux will
increment the device file whenever there's a path failure and a device
becomes visible on a new path.  Similarly, if the path fails back, that
will also result in the device file being incremented.  With vanilla
one-LUN-to-one-filesystem mappings, multipathd works just fine.  It's
only when I layer LVM on top that I get problems.
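
	For reference, the vanilla setup I mean looks roughly like this (a
sketch only; the mpath0 alias, ext3, and the mount point are made-up
examples, not taken from a real box):

    # /etc/multipath.conf: have multipathd create stable /dev/mapper names
    defaults {
        user_friendly_names yes
    }

    # format and mount the multipath device itself, never a /dev/sd* path
    mkfs.ext3 /dev/mapper/mpath0
    mount /dev/mapper/mpath0 /opt/zimbra/store

    # show the /dev/sd* paths currently sitting behind the alias
    multipath -ll mpath0

Because the mount references /dev/mapper/mpath0, the /dev/sdX shuffling
on failover never shows up at the filesystem layer.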
	When I tried adding LUNs as physical volumes to LVM using
device-mapper names (e.g. /dev/mapper/<blah>) or device-id names
(/dev/disk/by-id/<blah>), a vgdisplay -v afterwards still reported the
physical volumes in the form /dev/sdb1 or /dev/sdc1.  I only run Sun/STK
gear right now, but am looking at buying some HP gear.  Neither of them,
though, provides a device driver for Linux; both use Linux's native
multipathd.  So I really don't know what one should expect from
something that's mapped as what sounds like an EMC-specific device.
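
	Something like the following lvm.conf filter is supposed to keep LVM
from scanning the raw /dev/sd* paths at all, so it has no choice but to
record the multipath names (a sketch only; I haven't verified it on this
gear, and the mpath name pattern is just an example):

    # /etc/lvm/lvm.conf, devices section
    devices {
        # accept only device-mapper multipath nodes, reject everything else
        filter = [ "a|^/dev/mapper/mpath.*|", "r|.*|" ]
        # when a PV is reachable by several names, prefer the mapper one
        preferred_names = [ "^/dev/mapper/" ]
    }

After changing the filter, a pvscan/vgscan and another vgdisplay -v
would show whether LVM is now reporting the /dev/mapper names.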
	I agree with you, though, on what _should_ happen.  So far, for me,
LVM seems to cling to the underlying devices and does not behave
gracefully when path failovers occur.  Out of curiosity, if you add
physical volumes accessed via an EMC device to LVM and then run a
verbose vgdisplay on the volume group, does LVM report the
/dev/emcpowera device, or does it report the "mapped-at-the-moment"
/dev/sd* device?  Also, if EMC is providing a device driver, they may
not be relying on the native multipathd but using their own multipath
software, and the device-incrementing behavior might be specific to
multipathd.
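
	Concretely, the check I have in mind is something like this (the
volume group name is just a placeholder):

    # list each physical volume and the device name LVM recorded for it
    pvs -o pv_name,vg_name

    # or the long form, per volume group
    vgdisplay -v zimbravg

If the PV names come back as /dev/emcpowera rather than /dev/sdX, then
PowerPath is handing LVM a stable device and you probably never see the
failover problem I'm hitting.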

--Tom

John Fulton wrote:
> Tom Golson wrote:
>> LVM fundamentally seems to manage physical slices by their /dev/[sh]d
>> device name, and not the disk IDs.  Disk IDs are persistent across
>> path changes, but /dev/sdX names aren't.  It seems that when a path
>> change happens, multipathd increments the /dev/sdX value, so what was
>> perhaps /dev/sdc is now /dev/sdd, and this just causes LVM a world of
>> pain.  Mount points go read-only and you've got to unmount, vgscan,
>> vgchange -a y, and then remount.
> 
> Under what conditions does a path change and how often?
> 
> With EMC's PowerPath, the /dev/sd{X,Y,Z} names differ, but it creates a
> virtual device /dev/emcpowera on top of them.  LUN trespassing (changing
> switches or service processors within the SAN) would then change the
> active device within /dev/sd{X,Y,Z}, but /dev/emcpowera would remain the
> mounted device.  I can see /dev/emcpowera changing if I added more LUNs,
> but if I'm sticking with two EMC Power devices (one for fiber, the other
> for SATA) and could grow them, then I don't think I'd add extra LUNs
> often.  That is, assuming I use LVM on a single LUN to make separate
> logical volumes for /opt/zimbra/{db,index,...}, grow the LUN itself on
> the SAN, and then use lvextend to grow the desired volume.
> 
> If the device name changed on each LUN trespass that would require lots
> of "vgchange -a y" interaction and be a pain. I'm wondering if it's
> because of multipathd vs PowerPath or something else.
> 
> Thanks,
>   John
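
	For what it's worth, the grow-in-place workflow you describe would
presumably look something like the following (a sketch only; the
emcpowera device, the zimbravg/db names, the size, and the online ext3
resize are all assumptions on my part):

    # 1. grow the LUN on the array, then get PowerPath (or multipathd)
    #    to pick up the new size
    # 2. tell LVM the physical volume is bigger
    pvresize /dev/emcpowera
    # 3. grow the logical volume that needs the space
    lvextend -L +50G /dev/zimbravg/db
    # 4. grow the filesystem (online, if the kernel supports it)
    resize2fs /dev/zimbravg/db

No "vgchange -a y" dance should be needed for a pure grow, since the
device name never changes.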