On Mon, Nov 16, 2015 at 12:57 PM, Slawa Olhovchenkov <slw at zxy.spb.ru> wrote:

> On Mon, Nov 16, 2015 at 11:40:12AM -0800, Freddie Cash wrote:
>
> > On Mon, Nov 16, 2015 at 11:36 AM, Kevin Oberman <rkoberman at gmail.com> wrote:
> > > As already mentioned, unless you are using ZFS, use gpart to label your
> > > file systems/disks. Then use the /dev/gpt/LABEL as the mount device in
> > > fstab.
> >
> > Even if you are using ZFS, labelling the drives with the location of the
> > disk in the system (enclosure, column, row, whatever) makes things so much
> > easier to work with when there are disk-related issues.
> >
> > Just create a single partition that covers the whole disk, label it, and
> > use the label to create the vdevs in the pool.
>
> Bad idea.
> A disk re-placed in a different bay doesn't get relabelled automatically.

Did the original disk get labelled automatically?  No, you had to do that
when you first started using it.  So, why would you expect a replaced disk
to get labelled automatically?

Offline the dead/dying disk.
Physically remove the disk.
Insert the new disk.
Partition / label the new disk.
"zfs replace" using the new label to get it into the pool.

> Another issue: when a disk is placed into a bay by remote hands in a data
> center, I really don't know how the disks are distributed among the bays.

You label the disks as they are added to the system the first time.  That
way, you always know where each disk is located, and you only deal with
the labels.

Then, when you need to replace a disk (or ask someone in a remote location
to replace it) it's a simple matter: the label on the disk itself tells you
where the disk is physically located.  And it doesn't change if the
controller decides to change the direction it enumerates devices.

Which is easier to tell someone in a remote location:

  Replace disk enc0a6 (meaning enclosure 0, column A, row 6)?
or
  Replace the disk called da36?
or
  Find the disk with serial number XXXXXXXX?
or
  Replace the disk where the light is (hopefully) flashing (but I can't
  tell you which enclosure, front or back, or anything else like that)?

The first one lets you know exactly where the disk is located physically.

The second one just tells you the name of the device as determined by the
OS, but doesn't tell you anything about where it is located.  And it can
change with a kernel update, driver update, or firmware update!

The third requires you to pull every disk in turn to read the serial
number off the drive itself.

In order for the second or third option to work, you'd have to write down
the device names and/or serial numbers and stick that onto the drive bay
itself.

> The best way to identify a disk is to use enclosure services.

Only if your enclosure services are actually working (or even enabled).
I've yet to work on a box where that actually works (we custom-build our
storage boxes using OTS hardware).

Best way, IMO, is to use the physical location of the device as the actual
device name itself.  That way, there's never any ambiguity at the physical
layer, the driver layer, the OS layer, or the ZFS pool layer.

> I have many sites with ZFS on whole disks and some sites with ZFS on GPT
> partitions. ZFS on GPT is heavier to administer.

It's 1 extra step: partition the drive, supplying the location of the
drive as the label for the partition.

Everything else works exactly the same.

I used to do everything with whole drives and no labels.  Did that for
about a month, until 2 separate drives on separate controllers died (in a
24-bay setup) and I couldn't figure out where they were located, as a BIOS
upgrade had changed which controller loaded first.  And then I had to work
on a server that someone else had configured with direct-attach bays
(24 cables) that were connected almost at random.

Then I used glabel(8) to label the entire disk, and things were much
better.  But that didn't always play well with 4K drives, and replacing
drives of the same nominal size didn't always work, as the number of
sectors in each disk was different (ZFS handles this better now).

Then I started to GPT partition things, and life has been so much simpler.
All the partitions are aligned to 1 MB, and I can manually set the size of
the partition to work around different physical sector counts.  All the
partitions are labelled using the physical location of the disk (originally
just row/column naming like a spreadsheet, but now I'm adding the enclosure
name as well, as we expand to multiple enclosures per system).  It's so
much simpler now, ESPECIALLY when I have to get someone to do something
remotely.  :)

Everyone has their own way to manage things.  I just haven't seen any
better setup than labelling the drives themselves using their physical
location.

--
Freddie Cash
fjwcash at gmail.com
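[A minimal sketch of the offline / relabel / replace workflow described
above, on FreeBSD with gpart(8) and zpool(8).  The pool name "tank", the
bay label "enc0a5", and the device name "da36" are placeholders for
illustration, not names from this thread:]

    # Offline the failing member; its GPT label was assigned when the
    # disk was first put into service.
    zpool offline tank gpt/enc0a5

    # (Physically swap the drive; assume the new disk attaches as da36.)

    # Partition and label the new disk with its physical location,
    # aligned to 1 MB.  An explicit size on "gpart add" (-s) can be used
    # to work around differing sector counts between drives.
    gpart create -s gpt da36
    gpart add -t freebsd-zfs -a 1m -l enc0a5 da36

    # Resilver onto the new partition.  Because the replacement carries
    # the same label (and therefore the same /dev/gpt path) as the old
    # member, the single-argument form of "zpool replace" is sufficient.
    zpool replace tank gpt/enc0a5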
Patrick M. Hausen
2015-Nov-17 08:08 UTC
ZFS on labelled partitions (was: Re: LSI SAS2008 mps driver preferred firmware version)
Hi, all,

> On 16.11.2015 at 22:19, Freddie Cash <fjwcash at gmail.com> wrote:
>
> You label the disks as they are added to the system the first time.  That
> way, you always know where each disk is located, and you only deal with
> the labels.

we do the same for obvious reasons. But I always wonder about the possible
downsides, because the ZFS documentation explicitly states:

  ZFS operates on raw devices, so it is possible to create a storage pool
  comprised of logical volumes, either software or hardware. This
  configuration is not recommended, as ZFS works best when it uses raw
  physical devices. Using logical volumes might sacrifice performance,
  reliability, or both, and should be avoided.

  (from http://docs.oracle.com/cd/E19253-01/819-5461/gbcik/index.html)

Can anyone shed some light on why not using raw devices might sacrifice
performance or reliability? Or is this just outdated folklore?

Thanks,
Patrick

--
punkt.de GmbH * Kaiserallee 13a * 76133 Karlsruhe
Tel. 0721 9109 0 * Fax 0721 9109 100
info at punkt.de   http://www.punkt.de
Gf: Jürgen Egeling   AG Mannheim 108285
I disagree: get the remote hands to copy the serial number to an easily
visible location on the drive while it's in the enclosure, then label the
drives with the serial number (or a compatible version of it). That way
the label is tied to the drive, and you don't have to rely on the remote
hands 100%. Better still, do the physical labelling yourself.

On 16 November 2015 at 21:19, Freddie Cash <fjwcash at gmail.com> wrote:

> Which is easier to tell someone in a remote location:
>
>   Replace disk enc0a6 (meaning enclosure 0, column A, row 6)?
> or
>   Replace the disk called da36?
> or
>   Find the disk with serial number XXXXXXXX?
> or
>   Replace the disk where the light is (hopefully) flashing (but I can't
>   tell you which enclosure, front or back, or anything else like that)?
>
> [...]
>
> The third requires you to pull every disk in turn to read the serial
> number off the drive itself.
>
> In order for the second or third option to work, you'd have to write down
> the device names and/or serial numbers and stick that onto the drive bay
> itself.
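[One way to put that into practice on FreeBSD, as a sketch only; the
device name "da36" and the serial number shown are made-up examples:]

    # The serial number is reported in place in the "ident" field, so
    # there is no need to pull the drive to read it:
    geom disk list da36 | grep -i ident
    diskinfo -v /dev/da36

    # Use the serial (or a shortened form of it) as the GPT label:
    gpart create -s gpt da36
    gpart add -t freebsd-zfs -a 1m -l WD-WCC4E0123456 da36

    # If the pool is built from the labels, the vdev then appears in
    # "zpool status" as gpt/WD-WCC4E0123456.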
Slawa Olhovchenkov
2015-Nov-18 10:25 UTC
LSI SAS2008 mps driver preferred firmware version
On Mon, Nov 16, 2015 at 01:19:55PM -0800, Freddie Cash wrote:

> Did the original disk get labelled automatically?  No, you had to do that
> when you first started using it.  So, why would you expect a replaced disk

Initial labeling is a problem too. For a new chassis with 36 identical
disks (already installed) -- what is the simple way to label them?

> to get labelled automatically?

Keeping the labels consistent is another problem.

> Offline the dead/dying disk.
> Physically remove the disk.
> Insert the new disk.
> Partition / label the new disk.
> "zfs replace" using the new label to get it into the pool.

The new disk can be inserted into any free bay. This may be done by remote
hands, and I may not have the information about which bay the disk was
placed in.

> [...]
>
> Which is easier to tell someone in a remote location:

"Replace the disk in the bay with the blinking LED."

  Author: bapt
  Date: Sat Sep 5 00:06:01 2015
  New Revision: 287473
  URL: https://svnweb.freebsd.org/changeset/base/287473

  Log:
    Add a new sesutil(8) utility

    This is an utility for managing SCSI Enclosure Services (SES) device.

    For now only one command is supported "locate" which will change the
    test of the external LED associated to a given disk.

    Usage if the following:
    sesutil locate disk [on|off]

    Disk can be a device name: "da12" or a special keyword: "all".

>   Replace disk enc0a6 (meaning enclosure 0, column A, row 6)?
> or
>   Replace the disk called da36?
> or
>   Find the disk with serial number XXXXXXXX?
> or
>   Replace the disk where the light is (hopefully) flashing (but I can't
>   tell you which enclosure, front or back, or anything else like that)?
>
> The first one lets you know exactly where the disk is located physically.
>
> The second one just tells you the name of the device as determined by the
> OS, but doesn't tell you anything about where it is located.  And it can
> change with a kernel update, driver update, or firmware update!
> The third requires you to pull every disk in turn to read the serial
> number off the drive itself.

Usually the serial number can be read without pulling the disk (for
SuperMicro cases this is true; remote hands have replaced disks by S/N for
me without pulling every disk).

> [...]
>
> I used to do everything with whole drives and no labels.  Did that for
> about a month, until 2 separate drives on separate controllers died (in a
> 24-bay setup) and I couldn't figure out where they were located, as a BIOS
> upgrade had changed which controller loaded first.  And then I had to work
> on a server that someone else had configured with direct-attach bays
> (24 cables) that were connected almost at random.

All the servers I currently use have some randomness in how controllers
and HDDs are detected and reported. That is no problem for ZFS and/or for
replacing disks by remote hands (by S/N).
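[For reference, the workflow enabled by the commit quoted above looks
roughly like this on a system new enough to include r287473, and only if
the enclosure actually exposes SES; the device name "da36" is just an
example:]

    # Ask the enclosure to blink the locate LED for the bay holding da36,
    # so the remote hands can find the right drive:
    sesutil locate da36 on

    # ...after the swap, turn it off again:
    sesutil locate da36 off

    # "all" is also accepted, e.g. to clear every locate LED:
    sesutil locate all off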