On 09/08/2017 01:31 PM, hw wrote:
> Mark Haney wrote:
>
> I/O is not heavy in that sense; that's why I said that's not the application.
> There is I/O which, as tests have shown, benefits greatly from low latency,
> which is where the idea to use SSDs for the relevant data has arisen from.
> This I/O only involves a small amount of data and is not sustained over long
> periods of time. What exactly the problem is with the application being slow
> with spinning disks is unknown because I don't have the sources, and the
> maker of the application refuses to deal with the problem entirely.
>
> Since the data requiring low latency will occupy about 5% of the available
> space on the SSDs, and since they are large enough to hold the mail spool
> for about 10 years at its current rate of growth besides that data, these
> SSDs could be well used to hold that mail spool.

See, this is the kind of information that would have made this thread far
shorter. (Maybe.) The one thing that you didn't explain is whether this
application is the one /using/ the mail spool or if you're adding Cyrus to
that system to be a mail server.

>>>> BTRFS isn't going to impact I/O any more significantly than, say, XFS.
>>>
>>> But mdadm does, the impact is severe. I know there are people saying
>>> otherwise, but I've seen the impact myself, and I definitely don't want it
>>> on that particular server because it would likely interfere with other
>>> services. I don't know if the software RAID of btrfs is better in that or
>>> not, though, but I'm seeing btrfs on SSDs being fast, and testing with the
>>> particular application has shown a speedup by a factor of 20-30.
>>
>> I never said anything about MD RAID. I trust that about as far as I could
>> throw it. And having had 5 surgeries on my throwing shoulder, that wouldn't
>> be far.
>
> How else would I create a RAID with these SSDs?
>
> I've been using md-RAID for years, and it always worked fine.
>
>>> That is the crucial improvement. If the hardware RAID delivers that, I'll
>>> use that and probably remove the SSDs from the machine, as it wouldn't
>>> even make sense to put temporary data onto them because that would involve
>>> software RAID.
>>
>> Again, if the idea is to have fast primary storage, there are pretty large
>> SSDs available now, and I've hardware-RAIDed SSDs before without trouble,
>> though not for any heavy lifting; that's on my test servers at home.
>> Without an idea of the expected mail traffic, this is all speculation.
>
> The SSDs don't need to be large, and they aren't. They are already greatly
> oversized at 512GB nominal capacity.
>
> There's only a few hundred emails per day. There is no special requirement
> for their storage, but there is a lot of free space on these SSDs, and since
> the email traffic is mostly read-only, it won't wear out the SSDs. It simply
> would make sense to put the mail spool onto these SSDs.
>
>>>> It does have serious stability/data integrity issues that XFS doesn't
>>>> have. There's no reason not to use SSDs for storage of immediate data and
>>>> mechanical drives for archival data storage.
>>>>
>>>> As for VMs, we run a huge Zimbra cluster in VMs on VPC with large primary
>>>> SSD volumes and even larger (and slower) secondary volumes for archived
>>>> mail. It's all CentOS 6 and works very well. We process 600 million
>>>> emails a month on that virtual cluster. All EXT4 inside LVM.
>>>
>>> Do you use hardware RAID with SSDs?
>> We do not here where I work, but that was set up LONG before I arrived.
>
> Probably with the very expensive SSDs suited for this ...

Possibly, but that's somewhat irrelevant. I've taken off-the-shelf SSDs and
hardware-RAID'd them. If they work for the hell I put them through (processing
weather data), they'll work for the type of service you're saying you have.

>> If the SSDs you have aren't suitable for hardware RAID, then they aren't
>> good for production-level mail spools, IMHO. I mean, you're talking like
>> you're expecting a metric buttload of mail traffic, so it stands to reason
>> you'll need really beefy hardware. I don't think you can do what you seem
>> to need on budget hardware. Personally, and solely based on this thread
>> alone, if I was building this in-house, I'd get a decent server cluster
>> together and build a FC or iSCSI SAN to a Nimble storage array with
>> Flash/SSD front ends and large HDDs in the back end. This solves virtually
>> all your problems. The servers will have tiny SSD boot drives (which I
>> prefer over booting from the SAN) and then everything else gets handled by
>> the storage back-end.
>
> If SSDs not suitable for RAID usage aren't suitable for production use, then
> basically all SSDs not suitable for RAID usage are SSDs that can't be used
> for anything that requires something less volatile than a ramdisk.
> Experience with such SSDs contradicts this so far.

Not true at all. Maybe 5 years ago SSDs were hit or miss with hardware RAID.
Not anymore. It's just another drive to the system; the controllers don't know
the difference between a SATA HDD and a SATA SSD. Couple that with the low
volume of mail, and you should be fine for HW RAID.

> There is no "storage backend" but a file server, which, instead of 99.95%
> idling, is being assigned additional tasks, and since it is difficult to put
> a cyrus mail spool on remote storage, the email server is one of these
> tasks.

Again, you never mentioned the volume of mail expected, and your previous
threads seemed to indicate you were expecting enough to cause issues with SSDs
and BTRFS. In IT, when we get a "my printer is broken", we ask for more info,
since that's not descriptive enough. If this server is as asleep as you (now)
make it sound, BTRFS might be fine. Though, personally, I'd avoid it
regardless.

>> In effect this is how our mail servers are set up here. And they are
>> virtual.
>
> You have entirely different requirements.

I know that now. Previously, you made it sound like your mail flow would be a
lot closer to 'heavy' than what you've finally described. I can only offer
thoughts based on what information I'm given.

>>> I stay away from LVM because that just sucks. It wouldn't even have any
>>> advantage in this case.
>>
>> LVM is a joke. It's always been something I've avoided like the plague.
>
> I've also avoided it until I had an application where it would have been
> advantageous if it actually provided the benefits it is supposed to provide.
> It turned out that it didn't and only made things much worse, and I continue
> to stay away from it.
>
> After all, you're saying it's a bad idea to use these SSDs, especially with
> btrfs. I don't feel good about it, either, and I'll try to avoid using them.

No, I'm not saying not to use your SSDs. I'm saying that BTRFS is not worth
using in any server. The SSD question, prompted by you, was whether the SSDs
could:
1) be hardware RAID'd
2) handle the load of mail you were expecting.
512GB SSDs are new enough to probably be HW RAID'd fine, assuming they aren't
weird ones from a third party no one has really heard of. I know because my
last company bought some inexpensive (I call them knockoffs) third-party SSDs
that were utter crap from the moment an OS was installed on them. If yours are
from Seagate, WD, or another big-name drive maker, I would be surprised if
they choked being on a hardware RAID card. A setup like yours doesn't appear
to need 'Enterprise'-level hardware; SMB hardware would appear to work for you
just as well.

Just not with BTRFS. On any drive. Ever.

--
Mark Haney
Network Engineer at NeoNova
919-460-3330 option 1
mark.haney at neonova.net
www.neonova.net
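For anyone weighing the same decision: before trusting unknown SSDs behind a
RAID controller (or in any software RAID), it is worth checking exactly what
they are and how worn they already are. A minimal sketch using smartmontools;
the device names are placeholders, and the wear attribute names vary by
vendor:

  # identify vendor, model and firmware of each SSD
  smartctl -i /dev/sdX
  smartctl -i /dev/sdY

  # overall health verdict plus vendor-specific attributes
  # (look for wear indicators such as Wear_Leveling_Count or
  #  Media_Wearout_Indicator on SATA SSDs)
  smartctl -H -A /dev/sdX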
Mark Haney wrote:
> On 09/08/2017 01:31 PM, hw wrote:
>> Mark Haney wrote:
>>
<snip>
>> Probably with the very expensive SSDs suited for this ...
> Possibly, but that's somewhat irrelevant. I've taken off-the-shelf SSDs and
> hardware-RAID'd them. If they work for the hell I put them through
> (processing weather data), they'll work for the type of service you're
> saying you have.
<snip>
> Not true at all. Maybe 5 years ago SSDs were hit or miss with hardware RAID.
> Not anymore. It's just another drive to the system; the controllers don't
> know the difference between a SATA HDD and a SATA SSD.
> Couple that with the low volume of mail, and you should be fine for HW RAID.
<snip>

Actually, with the usage you're talking about, I'm surprised you're using SATA
and not SAS.

     mark
Mark Haney wrote:
> On 09/08/2017 01:31 PM, hw wrote:
>> Mark Haney wrote:
>>
>> I/O is not heavy in that sense; that's why I said that's not the
>> application. There is I/O which, as tests have shown, benefits greatly from
>> low latency, which is where the idea to use SSDs for the relevant data has
>> arisen from. This I/O only involves a small amount of data and is not
>> sustained over long periods of time. What exactly the problem is with the
>> application being slow with spinning disks is unknown because I don't have
>> the sources, and the maker of the application refuses to deal with the
>> problem entirely.
>>
>> Since the data requiring low latency will occupy about 5% of the available
>> space on the SSDs, and since they are large enough to hold the mail spool
>> for about 10 years at its current rate of growth besides that data, these
>> SSDs could be well used to hold that mail spool.
> See, this is the kind of information that would have made this thread far
> shorter. (Maybe.) The one thing that you didn't explain is whether this
> application is the one /using/ the mail spool or if you're adding Cyrus to
> that system to be a mail server.

It was a simple question to begin with; I only wanted to know if something
speaks against using btrfs for a cyrus mail spool. There are things that speak
against doing that with NFS, so there might be things with btrfs.

The application doesn't use the mail spool at all; it has its own dataset.

>>>> Do you use hardware RAID with SSDs?
>>> We do not here where I work, but that was set up LONG before I arrived.
>>
>> Probably with the very expensive SSDs suited for this ...
> Possibly, but that's somewhat irrelevant. I've taken off-the-shelf SSDs and
> hardware-RAID'd them. If they work for the hell I put them through
> (processing weather data), they'll work for the type of service you're
> saying you have.

Well, I can't very well test them with the mail spool, so I've been going with
what I've been reading about SSDs with hardware RAID.

>>> If the SSDs you have aren't suitable for hardware RAID, then they aren't
>>> good for production-level mail spools, IMHO. I mean, you're talking like
>>> you're expecting a metric buttload of mail traffic, so it stands to reason
>>> you'll need really beefy hardware. I don't think you can do what you seem
>>> to need on budget hardware. Personally, and solely based on this thread
>>> alone, if I was building this in-house, I'd get a decent server cluster
>>> together and build a FC or iSCSI SAN to a Nimble storage array with
>>> Flash/SSD front ends and large HDDs in the back end. This solves virtually
>>> all your problems. The servers will have tiny SSD boot drives (which I
>>> prefer over booting from the SAN) and then everything else gets handled by
>>> the storage back-end.
>>
>> If SSDs not suitable for RAID usage aren't suitable for production use,
>> then basically all SSDs not suitable for RAID usage are SSDs that can't be
>> used for anything that requires something less volatile than a ramdisk.
>> Experience with such SSDs contradicts this so far.
> Not true at all. Maybe 5 years ago SSDs were hit or miss with hardware RAID.
> Not anymore. It's just another drive to the system; the controllers don't
> know the difference between a SATA HDD and a SATA SSD. Couple that with the
> low volume of mail, and you should be fine for HW RAID.

I'd need another controller to do hardware RAID, which would require another
slot on board, and IIRC, there isn't a suitable one free anymore.
Or I'd have to replace two of the other disks with the SSDs, and that won't be
a good thing to do.

>> There is no "storage backend" but a file server, which, instead of 99.95%
>> idling, is being assigned additional tasks, and since it is difficult to
>> put a cyrus mail spool on remote storage, the email server is one of these
>> tasks.
> Again, you never mentioned the volume of mail expected, and your previous
> threads seemed to indicate you were expecting enough to cause issues with
> SSDs and BTRFS.
> In IT, when we get a "my printer is broken", we ask for more info, since
> that's not descriptive enough. If this server is as asleep as you (now) make
> it sound, BTRFS might be fine. Though, personally, I'd avoid it regardless.

Of course --- the issue, or question, is btrfs, not the SSDs.

>> After all, you're saying it's a bad idea to use these SSDs, especially with
>> btrfs. I don't feel good about it, either, and I'll try to avoid using
>> them.
>>
> No, I'm not saying not to use your SSDs. I'm saying that BTRFS is not worth
> using in any server. The SSD question, prompted by you, was whether the SSDs
> could:
> 1) be hardware RAID'd
> 2) handle the load of mail you were expecting.

Yes, I'm the one saying not to use them.

My question was whether there's anything that speaks against using btrfs for a
cyrus mail spool. It wasn't about SSDs.

Hardware RAID for the SSDs is not really an option because the ports of the
controllers are used otherwise, and it is unknown how well these SSDs would
work with them. Otherwise I wouldn't consider using btrfs.

> 512GB SSDs are new enough to probably be HW RAID'd fine, assuming they
> aren't weird ones from a third party no one has really heard of. I know
> because my last company bought some inexpensive (I call them knockoffs)
> third-party SSDs that were utter crap from the moment an OS was installed on
> them.
> If yours are from Seagate, WD, or another big-name drive maker, I would be
> surprised if they choked being on a hardware RAID card. A setup like yours
> doesn't appear to need 'Enterprise'-level hardware; SMB hardware would
> appear to work for you just as well.
>
> Just not with BTRFS. On any drive. Ever.

Well, that's a problem, because when you don't want md-RAID and can't do
hardware RAID, the only other option is ZFS, which I don't want either. That
leaves me with not using the SSDs at all.
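To make the btrfs option concrete (none of the posters ran this; it is only a
sketch of what the btrfs-native mirror would look like, with /dev/sdX and
/dev/sdY as placeholder devices and /var/spool/imap as an assumed Cyrus spool
path):

  # mirror both data and metadata across the two SSDs,
  # in place of md-RAID or a hardware controller
  mkfs.btrfs -L mailspool -m raid1 -d raid1 /dev/sdX /dev/sdY

  # either device name mounts the whole filesystem; noatime is a
  # common choice for mail spools, but confirm nothing on the
  # system relies on access times
  mount -o noatime /dev/sdX /var/spool/imap

  # periodic integrity check of both copies
  btrfs scrub start /var/spool/imap
  btrfs scrub status /var/spool/imap

Whether doing this is wise is exactly what the thread is arguing about; the
commands only show what the btrfs-native route involves.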
> On 09.09.2017, at 19:22, hw <hw at gc-24.de> wrote:
>
> Mark Haney wrote:
>> On 09/08/2017 01:31 PM, hw wrote:
>>> Mark Haney wrote:
>>>
>>> I/O is not heavy in that sense; that's why I said that's not the
>>> application. There is I/O which, as tests have shown, benefits greatly
>>> from low latency, which is where the idea to use SSDs for the relevant
>>> data has arisen from. This I/O only involves a small amount of data and is
>>> not sustained over long periods of time. What exactly the problem is with
>>> the application being slow with spinning disks is unknown because I don't
>>> have the sources, and the maker of the application refuses to deal with
>>> the problem entirely.
>>>
>>> Since the data requiring low latency will occupy about 5% of the available
>>> space on the SSDs and since they are large enough to hold the mail spool
>>> for about 10 years at its current rate of growth besides that data, these
>>> SSDs could be well used to hold that mail spool.
>> See, this is the kind of information that would have made this thread far
>> shorter. (Maybe.) The one thing that you didn't explain is whether this
>> application is the one /using/ the mail spool or if you're adding Cyrus to
>> that system to be a mail server.
>
> It was a simple question to begin with; I only wanted to know if something
> speaks against using btrfs for a cyrus mail spool. There are things that
> speak against doing that with NFS, so there might be things with btrfs.
>
> The application doesn't use the mail spool at all; it has its own dataset.
>
>>>>> Do you use hardware RAID with SSDs?
>>>> We do not here where I work, but that was set up LONG before I arrived.
>>>
>>> Probably with the very expensive SSDs suited for this ...
>> Possibly, but that's somewhat irrelevant. I've taken off-the-shelf SSDs and
>> hardware-RAID'd them. If they work for the hell I put them through
>> (processing weather data), they'll work for the type of service you're
>> saying you have.
>
> Well, I can't very well test them with the mail spool, so I've been going
> with what I've been reading about SSDs with hardware RAID.

It really depends on the RAID controller and the SSDs. Every RAID controller
has a maximum number of IOPS it can process.

Also, as pointed out, consumer SSDs have various deficiencies that make them
unsuitable for enterprise use:

https://blogs.technet.microsoft.com/filecab/2016/11/18/dont-do-it-consumer-ssd/

Enterprise SSDs also fail much more predictably. You basically get an SLA with
them about the DWPD/TBW data.

For small amounts of highly volatile data, I recommend looking into Optane
SSDs.

> Well, that's a problem, because when you don't want md-RAID and can't do
> hardware RAID, the only other option is ZFS, which I don't want either. That
> leaves me with not using the SSDs at all.

As for BTRFS: Red Hat dumped it. So it's a SuSE/Ubuntu thing right now. Make
of that what you want ;-)

Personally, I'd prefer to use ZFS for SSDs. No hardware RAID, for sure. Not
sure if I'd use it on anything else but FreeBSD (even though a Linux port is
available and code-wise it's more or less the same).

From personal experience, it's better to even ditch the non-RAID HBA and just
go with NVMe SSDs in the 2.5" drive slots (a.k.a. SFF-8639, a.k.a. U.2 form
factor). If you have spare PCIe slots, you can also go for HHHL PCIe NVMe
cards - but of course, you'd have to RAID them.
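To make the ZFS suggestion similarly concrete: a mirror of two NVMe (or SATA)
SSDs without any RAID controller looks roughly like the sketch below. The pool
and dataset names and device paths are invented for the example, and on CentOS
this needs the third-party ZFS on Linux packages:

  # mirrored pool across two NVMe drives
  zpool create -o ashift=12 mailpool mirror /dev/nvme0n1 /dev/nvme1n1

  # a dataset for the mail spool, with compression on and atime off
  zfs create -o compression=lz4 -o atime=off mailpool/spool

  # periodic integrity check, analogous to a btrfs scrub
  zpool scrub mailpool
  zpool status mailpool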