[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: What are you doing for Zimbra storage?



Greetings,

We're a mostly Solaris shop here too.  I myself learned most of what I know about UNIX/Linux from running the BSDs and Linux....so I'm not a huge fan of the Solaris OS myself...too much stuff doesn't work the way I expect it to, or at least the way I'm used to on Linux/BSD.  I'm happy running Zimbra on RHEL....in fact, I'm not sure I'd jump to Solaris even if they ever make it a supported platform.  As far as I know, all or most of Zimbra's internal development happens primarily on RHEL/SUSE/Ubuntu....and I want to be on the same platform they are using internally.  We've got enough other problems to deal with without having to figure out whether something is a problem specific to the Solaris version of the product.  Run what the product developers run and you eliminate potential problems.

That being said...I absolutely love Sun hardware.

We're running all Sun x4200 servers: 1 LDAP, 1 MTA, 1 secondary LDAP/MTA, and 5 mailstores, serving between 15-20k active accounts (3,000 employees and 13,000+ students).

When we were planning our storage, the Zimbra guys ran our numbers through their formulas and spit back out the disk setup.  We ran into a problem, though, in that we did not take into account importing the local Outlook PST files on users' desktops.  Many individuals had local folders and calendars and did not store much (or any) of their mail on the old IMAP/POP system we were running previously.  I think we planned for some....but we grossly underestimated.

For storage we have two Sun StorageTek 6140 arrays (4Gb FC), and each server has two 4Gb FC cards in it for dual paths to each array (which we have yet to finish setting up).

The first is for fast storage and is filled with 2+TB of 136GB FC drives.  The original plan was for 6 mailstores, but we had to cut back to 5 when we realized we needed a little more space.  On that array we put the DB, LOGGER, STORE, INDEX, and main ZIMBRA volumes for each of the mailstores.  It's RAID10, so it's super fast.

The second 6140 is for slower storage and has two trays.  The first tray is filled with 500GB SATA drives.  When we realized we needed a lot more space for migrating old email, we added a second tray and filled it with 750GB SATA drives.  In total, a little over 18TB of storage space.  On this array we put the BACKUP and HSM volumes.  These are RAID5.
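For anyone double-checking the capacity math, here's roughly how the raw vs. usable numbers work out.  This is a sketch under my own assumptions (16-drive trays, each tray as a single RAID5 group with one parity drive); your tray count and group layout may differ:

```python
# Rough usable-capacity estimate for the slow 6140: two 16-drive SATA trays,
# each assumed to be one RAID5 group (hypothetical layout, adjust to taste).
def raid5_usable_tb(num_drives, drive_gb, parity_drives=1):
    """Usable TB for one RAID5 group: total capacity minus parity capacity."""
    return (num_drives - parity_drives) * drive_gb / 1000.0

tray1 = raid5_usable_tb(16, 500)   # 15 x 500GB = 7.5 TB usable
tray2 = raid5_usable_tb(16, 750)   # 15 x 750GB = 11.25 TB usable
print(f"usable: {tray1 + tray2:.2f} TB")  # "a little over 18TB", as above
```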

We're very happy with this hardware.  Performance has not been an issue.  The FC array can be busy, but it's handling the traffic no problem.  The usage on the SATA arrays is very low except when backups are running.  Even then though...the arrays are not the bottleneck...I don't think the servers can feed data fast enough to overpower the SATA arrays.

We've not had any problems with file corruption and the ext3 filesystem.  I'd love to have tried ZFS, but it wasn't really ready at the time....and it still may not be.

Personally....I would highly recommend the fastest possible disk you can get for your STORE, LOGGER, and DB volumes.  Ideally Fibre Channel arrays and disks.  Zimbra is doing database operations and tons of reading/writing to your primary storage, and it needs to be as fast as you can afford to make it.  Save the slower SATA and iSCSI stuff for BACKUP and HSM.  You definitely do not need the faster storage for those areas.

HSM works great...we rotate everything older than 30 days off to slow storage.  We have a 50GB STORE for primary storage on the fast disks, and it hovers around 25GB average usage on each mailstore.  HSM volumes are 400GB on the slow storage, and the mailstores each use around 200GB there...so 50% there as well.  Plan to make your BACKUP volumes at least as big as HSM....maybe bigger depending on how many days you want to be able to go back.
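If it helps with planning, the rule of thumb above boils down to simple arithmetic.  This is just a back-of-envelope sketch (the helper function is hypothetical, not a Zimbra tool, and the 2x headroom factor is my own assumption based on our ~50% utilization):

```python
# Back-of-envelope volume sizing per mailstore, based on our experience:
# both primary STORE and HSM hover around 50% full, and BACKUP should be
# at least as large as HSM (bigger if you want more days of restore history).
def plan_volumes_gb(expected_hot_gb, expected_hsm_gb, backup_factor=1.0):
    """Return (store, hsm, backup) volume sizes with ~2x headroom on usage."""
    store = expected_hot_gb * 2             # e.g. ~25GB used -> 50GB volume
    hsm = expected_hsm_gb * 2               # e.g. ~200GB used -> 400GB volume
    backup = hsm * max(backup_factor, 1.0)  # never smaller than HSM
    return store, hsm, backup

print(plan_volumes_gb(25, 200))  # matches our 50GB STORE / 400GB HSM setup
```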

Personal recommendations:  You absolutely must plan for RAID10 on your STORE, DB, and LOGGER volumes.  If you can afford it, go with FC disks and arrays for this primary storage.  If you're a smallish shop you might get by with SATA.  But I'm not sure I would recommend iSCSI for any of this.  Mind you, I have no experience with iSCSI, but I would want my mail system directly FC-attached to whatever storage I could come up with.

Let me know if anyone has other questions.

Matt

----- Original Message -----
From: "Steve Hillman" <hillman@sfu.ca>
To: "zimbra-hied-admins" <zimbra-hied-admins@sfu.ca>
Sent: Thursday, May 29, 2008 6:58:19 PM GMT -06:00 US/Canada Central
Subject: What are you doing for Zimbra storage?

Hi folks,
  This list is now up to 60 members representing nearly 40 sites. Some are merely interested in Zimbra, while others have it running in full production. We're somewhere in the middle - we have a pilot deployment now with a few hundred users, but have just completed purchase of a site license and are looking to ramp up our system.

It looks like our biggest challenge is going to be balancing storage requirements vs cost and performance. Big fast Intel servers are now cheap, but big fast storage isn't. 

We're a Solaris shop, but have been forced to run Red Hat Linux to host Zimbra. We also make heavy use of NFS for our other systems, but Zimbra doesn't support NFS. All of this leaves us having to run a lot of technology that we're not overly comfortable with. 

We've chosen to make use of our NetApp storage as much as possible, and will be deploying onto a pair of FAS3040 heads (shared with other things as well) using iSCSI. Our NetApps are all loaded with SATA shelves, which has us concerned about performance bottlenecks (they're already showing up). We went with the ext3 filesystem on the Linux side because that's the default and is what Zimbra recommends. But in just 6 weeks, we've already had our first case of filesystem corruption, which took 2 hours to fix with fsck (and that's with 30GB of data. What happens when we're at 5TB?). So now we're investigating Veritas File System, which we already use heavily on our Solaris ERP database servers and are very happy with. 

As you can see, we're barely into our pilot and are already having storage-related issues. We know this'll be a learning exercise as we balance the performance that Zimbra demands with what we can actually afford, but it would be great to hear what other sites are doing - especially the larger ones. Some of you have already summarized your config and it's in the list archives, but for those people newer to the list, perhaps you'd like to share how you've architected your environment and how well it's performing.

-- 
Steve Hillman                                IT Architect
hillman@sfu.ca                               IT Infrastructure
778-782-3960                                 Simon Fraser University
Sent from Zimbra