On Wed, Nov 11, 2020 at 3:38 PM Warren Young <warren at etr-usa.com> wrote:

> On Nov 11, 2020, at 2:01 PM, hw <hw at gc-24.de> wrote:
>>
>> I have yet to see software RAID that doesn't kill the performance.
>
> When was the last time you tried it?
>
> Why would you expect that a modern 8-core Intel CPU would impede I/O in
> any measurable way compared to the outdated single-core 32-bit RISC CPU
> typically found on hardware RAID cards? These are the same CPUs, mind,
> that regularly crunch through TLS 1.3 on line-rate fiber Ethernet links,
> a much tougher task than mediating spinning-disk I/O.

The only 'advantage' hardware RAID has is write-back caching. With ZFS you
can get much the same performance boost out of a small, fast SSD used as a
ZIL / SLOG.

--
-john r pierce
  recycling used bits in santa cruz
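For illustration, roughly what that SLOG setup looks like; the pool name
"tank" and the device path below are placeholders, not anything taken from
this thread:

  # Add a small, fast SSD as a separate ZFS intent log (SLOG) so synchronous
  # writes are acknowledged from flash instead of the spinning vdevs.
  zpool add tank log /dev/disk/by-id/nvme-EXAMPLE_SSD_SERIAL

  # Optionally mirror the log device so sync writes survive an SSD failure:
  #   zpool add tank log mirror /dev/disk/by-id/nvme-A /dev/disk/by-id/nvme-B

  # Verify the log vdev is attached:
  zpool status tank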
> On Nov 11, 2020, at 6:00 PM, John Pierce <jhn.pierce at gmail.com> wrote:
>
> On Wed, Nov 11, 2020 at 3:38 PM Warren Young <warren at etr-usa.com> wrote:
>
>> On Nov 11, 2020, at 2:01 PM, hw <hw at gc-24.de> wrote:
>>>
>>> I have yet to see software RAID that doesn't kill the performance.
>>
>> When was the last time you tried it?
>>
>> Why would you expect that a modern 8-core Intel CPU would impede I/O in
>> any measurable way compared to the outdated single-core 32-bit RISC CPU
>> typically found on hardware RAID cards? These are the same CPUs, mind,
>> that regularly crunch through TLS 1.3 on line-rate fiber Ethernet links,
>> a much tougher task than mediating spinning-disk I/O.
>
> The only 'advantage' hardware RAID has is write-back caching.

Just for my information: how do you map a failed software RAID drive to the
physical port of, say, a SAS-attached enclosure? I'd love to hot-replace
failed drives in software RAIDs; I have over a hundred physical drives
attached to one machine. Do not criticize, this is a box installed by someone
else that I have "inherited". To replace a drive I have to query its serial
number, power off the machine, and pull drives one at a time to read the
labels...

With hardware RAID that is not an issue: I always know which physical port
the failed drive is in, and I can tell the controller to "indicate" a
specific drive (it blinks the respective port LED). I am always hot-replacing
drives in hardware RAIDs, and no one ever knows it has been done. I'd love to
deal the same way with drives in software RAIDs.

Thanks in advance for any advice, and my apologies for "stealing" the thread.

Valeri

> With ZFS you can get much the same performance boost out of a small, fast
> SSD used as a ZIL / SLOG.
>
> --
> -john r pierce
>   recycling used bits in santa cruz
On Nov 11, 2020, at 6:37 PM, Valeri Galtsev <galtsev at kicp.uchicago.edu> wrote:
>
> how do you map a failed software RAID drive to the physical port of, say,
> a SAS-attached enclosure?

With ZFS, you set a partition label on the whole-drive partition that is the
pool member, then import the pool with something like "zpool import -d
/dev/disk/by-partlabel", which then shows the logical disk names in commands
like "zpool status" rather than opaque "/dev/sdb3" type things.

It is then up to you to assign sensible drive names like "cage-3-left-4" for
the 4th drive down on the left side of the third drive cage. Or maybe your
organization uses asset tags, so you could label the disk the same way,
"sn123456", which you find by looking at the front of each slot.
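As a rough sketch of that workflow (the pool name, label, and device names
below are made-up examples; any GPT partitioning tool that can set partition
names will do):

  # Give the pool-member partition a meaningful GPT partition name.
  sgdisk --change-name=1:cage-3-left-4 /dev/sdb

  # Re-import the pool so members are resolved via /dev/disk/by-partlabel
  # instead of unstable sdX names.
  zpool export tank
  zpool import -d /dev/disk/by-partlabel tank

  # "zpool status tank" now lists members by label, so a faulted
  # cage-3-left-4 points straight at a physical bay.
  zpool status tank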
In large RAIDs, I label my disks with the last 4 or 6 digits of the drive
serial number (or, for SAS disks, the WWN). This is visible via smartctl, and
I record it in the zpool documentation I keep for each server (typically a
text file on a cloud drive).

zpools don't actually care WHAT slot a given pool member is in: you can shut
the box down, shuffle all the disks, and boot back up, and ZFS will find them
all and put them back in the pool.

The physical error reports that precede a drive failure should list the drive
identification beyond just the /dev/sdX kind of thing, which is subject to
change if you add more SAS devices.

I once researched what it would take to implement the drive failure lights on
a typical brand-name server/storage chassis. There's a command for
manipulating SES devices such as those lights; the catch is figuring out the
mapping between the drives and the lights, which is not always evident, so it
would require trial and error.

On Wed, Nov 11, 2020 at 5:37 PM Valeri Galtsev <galtsev at kicp.uchicago.edu>
wrote:

> Just for my information: how do you map a failed software RAID drive to
> the physical port of, say, a SAS-attached enclosure? I'd love to
> hot-replace failed drives in software RAIDs; I have over a hundred
> physical drives attached to one machine. To replace a drive I have to
> query its serial number, power off the machine, and pull drives one at a
> time to read the labels...
>
> With hardware RAID that is not an issue: I always know which physical port
> the failed drive is in, and I can tell the controller to "indicate" a
> specific drive (it blinks the respective port LED). I'd love to deal the
> same way with drives in software RAIDs.
>
> Valeri

--
-john r pierce
  recycling used bits in santa cruz
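A hedged sketch of what that identification exercise can look like, assuming
the sg3_utils and ledmon packages are installed; the device names and slot
index here are only examples:

  # Serial number / WWN of the suspect disk, to match against your notes.
  smartctl -i /dev/sdh | grep -iE 'serial|wwn'

  # Dump the enclosure's Additional Element Status page, which ties each
  # slot to the SAS address of the disk sitting in it.
  sg_ses --page=aes /dev/sg12

  # Once the slot is known, turn on its ident/locate LED...
  sg_ses --index=7 --set=ident /dev/sg12
  # ...and turn it off again after the swap.
  sg_ses --index=7 --clear=ident /dev/sg12

  # Alternatively, ledmon's ledctl can often work from the block device.
  ledctl locate=/dev/sdh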
>> On Nov 11, 2020, at 6:00 PM, John Pierce <jhn.pierce at gmail.com> wrote:
>>
>> The only 'advantage' hardware RAID has is write-back caching.
>
> Just for my information: how do you map a failed software RAID drive to
> the physical port of, say, a SAS-attached enclosure? I'd love to
> hot-replace failed drives in software RAIDs; I have over a hundred
> physical drives attached to one machine. To replace a drive I have to
> query its serial number, power off the machine, and pull drives one at a
> time to read the labels...

There are different methods, depending on how the disks are attached. In
some cases you can use a tool that shows the corresponding disk or slot.
Otherwise, once you have hot-removed the drive from the RAID, you can either
dd to the broken drive or generate some traffic on the still-working RAID;
either way you will spot the disk immediately by watching the drives' busy
LEDs.

I've used Linux software RAID over the last two decades and it has always
worked nicely, while I started to hate hardware RAID more and more. With U.2
NVMe SSD drives, at least when we started using them, there were no RAID
controllers available at all. And performance with Linux software RAID1 on
AMD EPYC boxes is amazing :-)

Regards,
Simon
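A minimal sketch of the LED trick Simon describes, for an mdadm array; the
array and device names are examples only:

  # Fail and remove the suspect member from the md array first.
  mdadm /dev/md0 --fail /dev/sdh1 --remove /dev/sdh1

  # Option 1: hammer the removed disk so only its activity LED blinks
  # (reading is enough; the disk is already out of the array).
  dd if=/dev/sdh of=/dev/null bs=1M status=progress

  # Option 2: generate traffic on the surviving array instead; the one bay
  # whose LED stays dark is the drive to pull.
  dd if=/dev/md0 of=/dev/null bs=1M status=progress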