
Re: Thoughts about storage



Stanford has been running Zimbra Network Edition in production since July 2008, and we have been fully migrated off Cyrus IMAP since the end of November 2008; we're on version 5.10 right now and looking at 5.11.

We also use NetApp as the storage back end, but over FC SAN rather than iSCSI.

Your summary is very interesting because we're looking at some of the same issues.

On Jan 7, 2009, at 11:50 AM, Tom Golson wrote:

So, Texas A&M is coming up on its one-year anniversary of running
Zimbra in production, and my thoughts are turning to raising quotas.

We are running iSCSI against a pair of IBM-branded NetApp controllers
(N5500s, i.e. whatever the previous generation of the 3070 was). All of
our drives are 10k RPM, 300 GB. It is with extreme infrequency that I
see either controller turn more than 1,500 IOPS, or more than 30 MB/s
of combined I/O.
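(For reference, the same numbers can be watched on the filer itself; the command below is per 7-mode Data ONTAP, and the one-second interval is just an example.)

    # On the NetApp console: per-second samples including total ops,
    # network and disk throughput, and CPU/disk utilization
    sysstat -x 1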

We use FC SAN against a FAS6040 cluster that is not dedicated to Zimbra. FC disks handle the index, db, and redo log while SATA disks handle the store and zmbackup.
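In rough terms, the split looks like this (the paths are Zimbra's default layout; the mapping to disk tiers is ours, and the aggregate labels are illustrative):

    # FC LUNs: latency-sensitive, small random I/O
    /opt/zimbra/db       -> FC aggregate   (MySQL metadata)
    /opt/zimbra/index    -> FC aggregate   (Lucene search indexes)
    /opt/zimbra/redolog  -> FC aggregate   (redo logs)

    # SATA LUNs: mostly large sequential I/O
    /opt/zimbra/store    -> SATA aggregate (message blob store)
    /opt/zimbra/backup   -> SATA aggregate (zmbackup target)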

For backups, we acquired (a while ago) a Sun x4500 "Thumper" with about
27 TB of storage. The threshold of pain associated with running Linux
on that was unimaginable, so we ran Solaris 10 for x86 with ZFS. That
combination never worked for backups; NFS performance was incredibly
slow and resisted all efforts at tuning. Last month, though, we
converted the machine to OpenSolaris with ZFS and, with a bare minimum
of tuning, we are now seeing very nice NFS performance and are running
backups to the x4500.
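For anyone trying to reproduce that, the server side might look roughly like this (pool name, vdev layout, and host names are made up, not anyone's actual configuration):

    # On the x4500: build a pool and a dataset for backup data
    zpool create backup raidz2 c0t0d0 c1t0d0 c2t0d0 c3t0d0 c4t0d0 c5t0d0
    zfs create backup/zimbra

    # Share it over NFS, restricted to the Zimbra host
    zfs set sharenfs='rw=zimbra-host' backup/zimbra

    # Optional: compression is usually a win for backup data
    zfs set compression=on backup/zimbra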

So you're writing the zmbackup output to NFS? It was our understanding that this was not supported in the Network Edition. I hope I'm wrong because this would drastically change our cost structure for the better. Anybody else doing this?
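If it does work, I'd expect the client side to be no more than something like this (host name and mount options are guesses, not a tested recipe; -t just makes the target explicit, since /opt/zimbra/backup should already be zmbackup's default):

    # On the Zimbra host: mount the share at the default backup target
    mount -t nfs -o rw,hard,intr,tcp,rsize=32768,wsize=32768 \
        thumper:/backup/zimbra /opt/zimbra/backup

    # Full backup of all mailboxes to that target
    zmbackup -f -a all -t /opt/zimbra/backup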

What are folks monitoring on the OS side to signal storage performance issues? iostat svctm, await, avgqu-sz?

I ask because during our zmbackup runs the store LUNs reach very high utilization, and we needed to add spindles to the RAID groups to remediate it.
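What we look at today, for what it's worth (Linux iostat from the sysstat package; the five-second interval is arbitrary):

    # Extended per-device statistics every 5 seconds
    iostat -x 5

    # Columns we watch:
    #   avgqu-sz  average request queue depth
    #   await     ms a request spent waiting, queue time included
    #   svctm     ms of pure device service time
    #   %util     fraction of time the device was busy

Sustained %util near 100 with await well above svctm is what queuing at the spindles looks like from the host side.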

-=--=-
gerald villabroza <geraldv at stanford.edu>
technical lead, its storage, stanford university