
Re: LVM



We're using LVM in a Zimbra cluster.  It does add quite a bit of complexity, but once you understand how it all fits together, it gives you a lot of power and flexibility.  It's similar to Veritas Volume Manager, for anyone who is familiar with that on Solaris.

I've grown volumes.  I've also created an identically sized LUN on another SAN, added it to the volume group to mirror the data, and then removed the old physical volume from the old SAN... and LVM migrates the data perfectly to the new SAN.  All of this is done on a live system, so no downtime for the users.
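Roughly, the sequence looks like this (the device names /dev/sdx and /dev/sdy and the volume group name zimbra_vg are placeholders; check them against your own setup before running anything):

  pvcreate /dev/sdy                 # label the new LUN as an LVM physical volume
  vgextend zimbra_vg /dev/sdy       # add it to the existing volume group
  pvmove /dev/sdx /dev/sdy          # move all allocated extents off the old LUN, live
  vgreduce zimbra_vg /dev/sdx       # drop the old LUN from the volume group
  pvremove /dev/sdx                 # clear the LVM label from the old LUN

pvmove does the copy with a temporary mirror under the hood and can be resumed if it's interrupted, which is a big part of why it's safe to do on a live system.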

I'd say more but my laptop battery is dying... :)

Let me know if you have other questions.

Matt


----- Original Message -----
From: "Steve Hillman" <hillman@sfu.ca>
To: "John Fulton" <fultonj@lafayette.edu>
Cc: "zimbra-hied-admins" <zimbra-hied-admins@sfu.ca>
Sent: Thursday, October 16, 2008 10:16:53 PM GMT -06:00 US/Canada Central
Subject: Re: LVM


----- "John Fulton" <fultonj@lafayette.edu> wrote:

> 
> Has anyone expanded a Zimbra volume with LVM or does anyone have
> multiple primary and secondary mail stores on the same server?
> 

zmvolume does allow you to add new primary message stores, but here's the catch: only one can be "current", i.e. all new mail will be delivered to whichever volume is the 'active' one. Now that I think about it, though, I don't know whether it's possible to move the 'active' primary back to a previous volume. I had been operating under the assumption that you couldn't - i.e. you fill up vol1, so you add vol2 and it becomes current, but as users delete old mail, or it migrates to HSM, vol1 would gradually regain free space. If you can switch back to making it the current volume, then you can reuse that space.
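From memory, the zmvolume side of that looks something like the following (double-check the flags against zmvolume --help on your version; the volume name, id, and path are just examples):

  zmvolume -a -n message2 -t primaryMessage -p /opt/zimbra/store2   # add a second primary store
  zmvolume -l                                                       # list volumes, note the new id
  zmvolume -sc -id 3                                                # make that volume the current primary

Whether -sc will let you point "current" back at an older, partially emptied volume is exactly the open question above.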

However, this doesn't help you at all if you run out of space on the index or db volumes. Although you can add new index volumes, only *new* users will be placed on the new index volume; existing users will continue to use the old one. If it's full, you've got a problem. It's even worse for the db volume, since if it fills up, you're toast (and both the index and db volumes hold data for both the primary store and any HSM storage, so they'll just keep growing indefinitely). We've resigned ourselves to periodically (maybe once every year or two) having to move our db volumes onto larger disk arrays. Luckily, with NetApp backends this is relatively painless.
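If the db or index filesystem sits on LVM, the more direct fix for a full volume is just to grow it in place, which is really what the original question was about. Something like this, assuming a volume group called zimbra_vg with free extents (or a freshly added physical volume) and an ext3 filesystem, which resize2fs can grow while mounted on a 2.6 kernel:

  lvextend -L +100G /dev/zimbra_vg/db      # grow the logical volume by 100 GB
  resize2fs /dev/zimbra_vg/db              # grow the ext3 filesystem to match, online

The equivalent for XFS would be xfs_growfs on the mount point instead of resize2fs.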

We aren't using any LVM yet, but we're considering using it to create remote mirrors for our redolog volume. By creating two iSCSI LUNs, one on a local NetApp and one that's remote, and using LVM to mirror across them, even a NetApp volume corruption (which we've had before) wouldn't take out our redologs. My only real concern is how much overhead (if any) LVM mirroring adds -- the responsiveness of the redolog volume seems to be the most critical factor for the user experience (which makes sense when you consider that Zimbra does the Right Thing and fsyncs every transaction to the redolog before returning an "ok" to the user).
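For the curious, the mirrored setup would look something like this (names and sizes are placeholders; /dev/sdb and /dev/sdc stand in for the local and remote iSCSI LUNs):

  pvcreate /dev/sdb /dev/sdc                      # local and remote iSCSI LUNs
  vgcreate redolog_vg /dev/sdb /dev/sdc
  lvcreate -m 1 --mirrorlog core -L 20G -n redolog redolog_vg   # 2-way mirror, one leg per LUN

The mirror log is probably where most of the overhead question lives: the default on-disk log wants a small third device and adds extra writes as regions are marked dirty and clean, while --mirrorlog core keeps the log in memory at the cost of a full resync after a reboot.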
 
-- 
Steve Hillman                                IT Architect
hillman@sfu.ca                               IT Infrastructure
778-782-3960                                 Simon Fraser University