John D Groenveld
2011-Oct-12 00:17 UTC
[zfs-discuss] weird bug with Seagate 3TB USB3 drive
Banging my head against a Seagate 3TB USB3 drive. Its marketing name is:
Seagate Expansion 3 TB USB 3.0 Desktop External Hard Drive STAY3000102
format(1M) shows it identifying itself as:
Seagate-External-SG11-2.73TB

Under both Solaris 10 and Solaris 11x, I receive the evil message:
| I/O request is not aligned with 4096 disk sector size.
| It is handled through Read Modify Write but the performance is very low.

However, that's not my big issue as I will use the zpool-12 hack.

My big issue is that once I zpool(1M) export the pool from my W2100z
running S10 or my Ultra 40 running S11x, I can't import it.

I thought weird USB connectivity issue, but I can run
"format -> analyze -> read" merrily.

Anyone seen this bug?

John
groenveld at acm.org
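The 4096 in that warning is the sector size the driver reports for the drive. A quick way to confirm it (a sketch, using the device name that appears throughout this thread; prtvtoc output later in the thread shows the same figure):

# prtvtoc /dev/rdsk/c1t0d0s2 | grep 'bytes/sector'
*     4096 bytes/sector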
Cindy Swearingen
2011-Oct-12 17:15 UTC
[zfs-discuss] weird bug with Seagate 3TB USB3 drive
Hi John,

What is the error when you attempt to import this pool?

Thanks,

Cindy

On 10/11/11 18:17, John D Groenveld wrote:
> My big issue is that once I zpool(1M) export the pool from
> my W2100z running S10 or my Ultra 40 running S11x, I can't
> import it.
John D Groenveld
2011-Oct-12 17:29 UTC
[zfs-discuss] weird bug with Seagate 3TB USB3 drive
In message <4E95CB2A.30105 at oracle.com>, Cindy Swearingen writes:
> What is the error when you attempt to import this pool?

"cannot import 'foo': no such pool available"

John
groenveld at acm.org

# format -e
Searching for disks...done


AVAILABLE DISK SELECTIONS:
       0. c1t0d0 <Seagate-External-SG11 cyl 45597 alt 2 hd 255 sec 63>
          /pci@0,0/pci108e,6676@2,1/hub@7/storage@2/disk@0,0
       1. c8t0d0 <ATA -HITACHI HDS7225-A9CA cyl 30397 alt 2 hd 255 sec 63>
          /pci@0,0/pci108e,6676@5/disk@0,0
       2. c8t1d0 <ATA -HITACHI HDS7225-A7BA cyl 30397 alt 2 hd 255 sec 63>
          /pci@0,0/pci108e,6676@5/disk@1,0
Specify disk (enter its number): ^C
# zpool create foo c1t0d0
# zfs create foo/bar
# zfs list -r foo
NAME      USED  AVAIL  REFER  MOUNTPOINT
foo       126K  2.68T    32K  /foo
foo/bar    31K  2.68T    31K  /foo/bar
# zpool export foo
# zfs list -r foo
cannot open 'foo': dataset does not exist
# truss -t open zpool import foo
open("/var/ld/ld.config", O_RDONLY) Err#2 ENOENT
open("/lib/libumem.so.1", O_RDONLY) = 3
open("/lib/libc.so.1", O_RDONLY) = 3
open("/lib/libzfs.so.1", O_RDONLY) = 3
open("/usr/lib/fm//libtopo.so", O_RDONLY) = 3
open("/lib/libxml2.so.2", O_RDONLY) = 3
open("/lib/libpthread.so.1", O_RDONLY) = 3
open("/lib/libz.so.1", O_RDONLY) = 3
open("/lib/libm.so.2", O_RDONLY) = 3
open("/lib/libsocket.so.1", O_RDONLY) = 3
open("/lib/libnsl.so.1", O_RDONLY) = 3
open("/usr/lib//libshare.so.1", O_RDONLY) = 3
open("/usr/lib/locale/en_US.UTF-8/LC_MESSAGES/SUNW_OST_SGS.mo", O_RDONLY) Err#2 ENOENT
open("/usr/lib/locale/en_US.UTF-8/LC_MESSAGES/SUNW_OST_OSLIB.mo", O_RDONLY) Err#2 ENOENT
open("/usr/lib/locale/en_US.UTF-8/en_US.UTF-8.so.3", O_RDONLY) = 3
open("/usr/lib/locale/en_US.UTF-8/methods_unicode.so.3", O_RDONLY) = 3
open("/dev/zfs", O_RDWR) = 3
open("/etc/mnttab", O_RDONLY) = 4
open("/etc/dfs/sharetab", O_RDONLY) = 5
open("/lib/libavl.so.1", O_RDONLY) = 6
open("/lib/libnvpair.so.1", O_RDONLY) = 6
open("/lib/libuutil.so.1", O_RDONLY) = 6
open64("/dev/rdsk/", O_RDONLY) = 6
/3: openat64(6, "c8t0d0s0", O_RDONLY) = 9
/3: open("/lib/libadm.so.1", O_RDONLY) = 15
/9: openat64(6, "c8t0d0s2", O_RDONLY) = 13
/5: openat64(6, "c8t1d0s0", O_RDONLY) = 10
/7: openat64(6, "c8t1d0s2", O_RDONLY) = 14
/8: openat64(6, "c1t0d0s0", O_RDONLY) = 7
/4: openat64(6, "c1t0d0s2", O_RDONLY) Err#5 EIO
/8: open("/lib/libefi.so.1", O_RDONLY) = 15
/3: openat64(6, "c1t0d0", O_RDONLY) = 9
/5: openat64(6, "c1t0d0p0", O_RDONLY) = 10
/9: openat64(6, "c1t0d0p1", O_RDONLY) = 13
/7: openat64(6, "c1t0d0p2", O_RDONLY) Err#5 EIO
/4: openat64(6, "c1t0d0p3", O_RDONLY) Err#5 EIO
/7: openat64(6, "c1t0d0s8", O_RDONLY) = 14
/2: openat64(6, "c7t0d0s0", O_RDONLY) = 8
/6: openat64(6, "c7t0d0s2", O_RDONLY) = 12
/1: Received signal #20, SIGWINCH, in lwp_park() [default]
/3: openat64(6, "c7t0d0p0", O_RDONLY) = 9
/4: openat64(6, "c7t0d0p1", O_RDONLY) = 11
/5: openat64(6, "c7t0d0p2", O_RDONLY) = 10
/6: openat64(6, "c8t0d0p0", O_RDONLY) = 12
/6: openat64(6, "c8t0d0p1", O_RDONLY) = 12
/6: openat64(6, "c8t0d0p2", O_RDONLY) Err#5 EIO
/6: openat64(6, "c8t0d0p3", O_RDONLY) Err#5 EIO
/6: openat64(6, "c8t0d0p4", O_RDONLY) Err#5 EIO
/6: openat64(6, "c8t1d0p0", O_RDONLY) = 12
/8: openat64(6, "c7t0d0p3", O_RDONLY) = 7
/6: openat64(6, "c8t1d0p1", O_RDONLY) = 12
/6: openat64(6, "c8t1d0p2", O_RDONLY) Err#5 EIO
/6: openat64(6, "c8t1d0p3", O_RDONLY) Err#5 EIO
/6: openat64(6, "c8t1d0p4", O_RDONLY) Err#5 EIO
/9: openat64(6, "c7t0d0p4", O_RDONLY) = 13
/7: openat64(6, "c7t0d0s1", O_RDONLY) = 14
/1: open("/usr/share/locale/en_US.UTF-8/LC_MESSAGES/SUNW_OST_OSCMD.cat", O_RDONLY) Err#2 ENOENT
open("/usr/lib/locale/en_US.UTF-8/LC_MESSAGES/SUNW_OST_OSCMD.mo", O_RDONLY) Err#2 ENOENT
cannot import 'foo': no such pool available
Cindy Swearingen
2011-Oct-12 18:48 UTC
[zfs-discuss] weird bug with Seagate 3TB USB3 drive
In the steps below, you're missing a zpool import step. I would like to see the error message when the zpool import step fails.

Thanks,

Cindy

On 10/12/11 11:29, John D Groenveld wrote:
> "cannot import 'foo': no such pool available"
> [...]
> # truss -t open zpool import foo
> [...]
> cannot import 'foo': no such pool available
John D Groenveld
2011-Oct-12 19:02 UTC
[zfs-discuss] weird bug with Seagate 3TB USB3 drive
In message <4E95E10F.9070108 at oracle.com>, Cindy Swearingen writes:
> In the steps below, you're missing a zpool import step.
> I would like to see the error message when the zpool import
> step fails.

"zpool import" returns nothing. The truss shows it poking around c1t0d0's fdisk partitions and Solaris slices, presumably hunting for pools.

John
groenveld at acm.org
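Two standard ways to narrow a failing import down (a sketch using the device names from the truss output above; neither step appears in the thread at this point): with no arguments, zpool import scans /dev/dsk and lists every pool it can discover, and -d restricts the scan to one directory before importing by name:

# zpool import                  # list all importable pools found under /dev/dsk
# zpool import -d /dev/dsk foo  # limit the device scan, then import by name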
Edward Ned Harvey
2011-Oct-13 11:40 UTC
[zfs-discuss] weird bug with Seagate 3TB USB3 drive
> From: zfs-discuss-bounces at opensolaris.org [mailto:zfs-discuss-
> bounces at opensolaris.org] On Behalf Of Cindy Swearingen
>
> In the steps below, you're missing a zpool import step.
> I would like to see the error message when the zpool import
> step fails.

I see him doing this...

> > # truss -t open zpool import foo

The following lines are informative, sort of.

> > /8: openat64(6, "c1t0d0s0", O_RDONLY) = 7
> > /4: openat64(6, "c1t0d0s2", O_RDONLY) Err#5 EIO

And the output result is:

> > cannot import 'foo': no such pool available
Casper.Dik at oracle.com
2011-Oct-13 11:50 UTC
[zfs-discuss] weird bug with Seagate 3TB USB3 drive
> I see him doing this...
>
> > > # truss -t open zpool import foo
>
> The following lines are informative, sort of.
>
> > > /8: openat64(6, "c1t0d0s0", O_RDONLY) = 7
> > > /4: openat64(6, "c1t0d0s2", O_RDONLY) Err#5 EIO
>
> And the output result is:
>
> > > cannot import 'foo': no such pool available

What is the partition table?

Casper
Edward Ned Harvey
2011-Oct-13 11:53 UTC
[zfs-discuss] weird bug with Seagate 3TB USB3 drive
> From: Casper.Dik at oracle.com [mailto:Casper.Dik at oracle.com]
>
> What is the partition table?

He also said this...

> From: zfs-discuss-bounces at opensolaris.org [mailto:zfs-discuss-
> bounces at opensolaris.org] On Behalf Of John D Groenveld
>
> # zpool create foo c1t0d0

Which, to me, suggests no partition table.
Casper.Dik at oracle.com
2011-Oct-13 12:01 UTC
[zfs-discuss] weird bug with Seagate 3TB USB3 drive
>> What is the partition table?
>
> He also said this...
>
>> # zpool create foo c1t0d0
>
> Which, to me, suggests no partition table.

An EFI partition table (there needs to be some form of label, so there is always a partition table).

Casper
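A sketch of how to see which label a whole-disk vdev ended up with (device name from the thread): prtvtoc prints the partition map, and an EFI label shows up as a data slice 0 plus a reserved slice 8 with tag 11, which is exactly what John's whole-disk output later in the thread contains:

# prtvtoc /dev/rdsk/c1t0d0s0   # EFI label: slice 0 for data plus a reserved slice 8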
Cindy Swearingen
2011-Oct-13 15:28 UTC
[zfs-discuss] weird bug with Seagate 3TB USB3 drive
John,

Any USB-related messages in /var/adm/messages for this device?

Thanks,

Cindy

On 10/12/11 11:29, John D Groenveld wrote:
> "cannot import 'foo': no such pool available"
> [...]
> # truss -t open zpool import foo
> [...]
> cannot import 'foo': no such pool available
John D Groenveld
2011-Oct-13 15:40 UTC
[zfs-discuss] weird bug with Seagate 3TB USB3 drive
In message <201110131150.p9DBo8Yk011167 at acsinet22.oracle.com>, Casper.Dik at oracle.com writes:
> What is the partition table?

I thought about that so I reproduced with the legacy SMI label and a Solaris fdisk partition with ZFS on slice 0. Same result as EFI; once I export the pool I cannot import it.

John
groenveld at acm.org
John D Groenveld
2011-Oct-13 15:43 UTC
[zfs-discuss] weird bug with Seagate 3TB USB3 drive
In message <4E970387.3040800 at oracle.com>, Cindy Swearingen writes:
> Any USB-related messages in /var/adm/messages for this device?

Negative. cfgadm(1M) shows the drive, and format -> fdisk -> analyze -> read runs merrily.

John
groenveld at acm.org
John D Groenveld
2011-Oct-15 02:02 UTC
[zfs-discuss] weird bug with Seagate 3TB USB3 drive
As a sanity check, I connected the drive to a Windows 7 installation. I was able to partition it, create an NTFS volume on it, and eject and remount it.

I also tried creating the zpool on my Solaris 10 system, exporting, and trying to import the pool on my Solaris 11x system; again, no love.

I'm baffled why zpool import is unable to find the pool on the drive, but the drive is definitely functional.

John
groenveld at acm.org
On Oct 14, 2011, at 7:02 PM, John D Groenveld wrote:
> I'm baffled why zpool import is unable to find the pool on the
> drive, but the drive is definitely functional.

One of the best troubleshooting steps for a pool that won't import is to look at the labels on the disk.

zdb -l /dev/dsk/c1t0d0s0

The output should be the nvlist for each of 4 labels on the device. zpool import looks at those labels to determine if there is a pool available for import. If the labels cannot be seen, then you need to solve that problem before you can import the pool.

-- richard
ZFS and performance consulting
http://www.RichardElling.com
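A sketch extending that check across the device nodes zpool import actually probes (node names taken from the truss output earlier in the thread); on a healthy pool, each of the four labels unpacks into an nvlist that names the pool:

# for dev in c1t0d0s0 c1t0d0p0 c1t0d0; do echo "== $dev =="; zdb -l /dev/dsk/$dev | egrep 'LABEL|name:|failed'; done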
On Fri, 14 Oct 2011, John D Groenveld wrote:
> I'm baffled why zpool import is unable to find the pool on the
> drive, but the drive is definitely functional.

What type of controller is this drive attached to? I have heard that some popular LSI controllers just don't work for large drives.

Bob
--
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
John D Groenveld
2011-Oct-18 14:18 UTC
[zfs-discuss] weird bug with Seagate 3TB USB3 drive
In message <201110150202.p9F22W2n000693 at elvis.arl.psu.edu>, John D Groenveld writes:
> I'm baffled why zpool import is unable to find the pool on the
> drive, but the drive is definitely functional.

Per Richard Elling, it looks like ZFS is unable to find the requisite labels for importing.

John
groenveld at acm.org

# prtvtoc /dev/rdsk/c1t0d0s2
* /dev/rdsk/c1t0d0s2 partition map
*
* Dimensions:
*    4096 bytes/sector
*      63 sectors/track
*     255 tracks/cylinder
*   16065 sectors/cylinder
*   45599 cylinders
*   45597 accessible cylinders
*
* Flags:
*   1: unmountable
*  10: read-only
*
* Unallocated space:
*        First     Sector    Last
*        Sector    Count     Sector
*            0      16065     16064
*
*                          First      Sector     Last
* Partition  Tag  Flags    Sector     Count      Sector   Mount Directory
       0      2    00       16065  732483675  732499739
       2      5    01           0  732515805  732515804
       8      1    01           0      16065      16064
# zpool create -f foobar c1t0d0s0
# zpool status foobar
  pool: foobar
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        foobar      ONLINE       0     0     0
          c1t0d0s0  ONLINE       0     0     0

errors: No known data errors
# zdb -l /dev/dsk/c1t0d0s0
--------------------------------------------
LABEL 0
--------------------------------------------
failed to unpack label 0
--------------------------------------------
LABEL 1
--------------------------------------------
failed to unpack label 1
--------------------------------------------
LABEL 2
--------------------------------------------
failed to unpack label 2
--------------------------------------------
LABEL 3
--------------------------------------------
failed to unpack label 3
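Given that all four labels fail to unpack, a crude read-only check (a sketch, assuming the pool name foobar from above) is to dump the front of the slice, where ZFS keeps labels L0 and L1 in the first 512 KB, and look for the pool name in the label's nvlist text:

# dd if=/dev/rdsk/c1t0d0s0 bs=1024k count=1 2>/dev/null | strings | grep foobar

If nothing comes back, the label was never written where zdb is reading, which would fit a sector-size accounting bug.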
Cindy Swearingen
2011-Oct-18 15:18 UTC
[zfs-discuss] weird bug with Seagate 3TB USB3 drive
Hi John,

I'm going to file a CR to get this issue reviewed by the USB team first, but if you could humor me with another test:

Can you run newfs to create a UFS file system on this device and mount it?

Thanks,

Cindy

On 10/18/11 08:18, John D Groenveld wrote:
> Per Richard Elling, it looks like ZFS is unable to find
> the requisite labels for importing.
> [...]
> # zdb -l /dev/dsk/c1t0d0s0
> [...]
> failed to unpack label 3
John D Groenveld
2011-Oct-18 15:29 UTC
[zfs-discuss] weird bug with Seagate 3TB USB3 drive
In message <4E9D98B1.8040108 at oracle.com>, Cindy Swearingen writes:
> I'm going to file a CR to get this issue reviewed by the USB team
> first, but if you could humor me with another test:
>
> Can you run newfs to create a UFS file system on this device
> and mount it?

# uname -srvp
SunOS 5.11 151.0.1.12 i386
# zpool destroy foobar
# newfs /dev/rdsk/c1t0d0s0
newfs: construct a new file system /dev/rdsk/c1t0d0s0: (y/n)? y
The device sector size 4096 is not supported by ufs!

John
groenveld at acm.org
Cindy Swearingen
2011-Oct-18 16:26 UTC
[zfs-discuss] weird bug with Seagate 3TB USB3 drive
Yeah, okay, duh. I should have known that large sector size support is only available for a non-root ZFS file system.

A couple more things if you're still interested:

1. If you re-create the pool on the whole disk, like this:

# zpool create foo c1t0d0

Then, resend the prtvtoc output for c1t0d0s0. We should be able to tell if format is creating a dummy label, which means the ZFS data is never getting written to this disk. This would be a bug.

2. You are running this early S11 release:

SunOS 5.11 151.0.1.12 i386

You might retry this on more recent bits, like the EA release, which I think is b 171.

I'll still file the CR.

Thanks,

Cindy

On 10/13/11 09:40, John D Groenveld wrote:
> I thought about that so I reproduced with the legacy SMI label
> and a Solaris fdisk partition with ZFS on slice 0.
> Same result as EFI; once I export the pool I cannot import it.
John D Groenveld
2011-Oct-18 16:50 UTC
[zfs-discuss] weird bug with Seagate 3TB USB3 drive
In message <4E9DA8B1.7020302 at oracle.com>, Cindy Swearingen writes:
> 1. If you re-create the pool on the whole disk, like this:
>
> # zpool create foo c1t0d0
>
> Then, resend the prtvtoc output for c1t0d0s0.

# zpool create snafu c1t0d0
# zpool status snafu
  pool: snafu
 state: ONLINE
  scan: none requested
config:

        NAME      STATE     READ WRITE CKSUM
        snafu     ONLINE       0     0     0
          c1t0d0  ONLINE       0     0     0

errors: No known data errors
# prtvtoc /dev/rdsk/c1t0d0s0
* /dev/rdsk/c1t0d0s0 partition map
*
* Dimensions:
*      4096 bytes/sector
* 732566642 sectors
* 732566631 accessible sectors
*
* Flags:
*   1: unmountable
*  10: read-only
*
* Unallocated space:
*        First     Sector    Last
*        Sector    Count     Sector
*            6        250       255
*
*                          First      Sector     Last
* Partition  Tag  Flags    Sector     Count      Sector   Mount Directory
       0      4    00         256  732549997  732550252
       8     11    00   732550253      16384  732566636

> We should be able to tell if format is creating a dummy label,
> which means the ZFS data is never getting written to this disk.
> This would be a bug.

# zdb -l /dev/dsk/c1t0d0s0
--------------------------------------------
LABEL 0
--------------------------------------------
failed to unpack label 0
--------------------------------------------
LABEL 1
--------------------------------------------
failed to unpack label 1
--------------------------------------------
LABEL 2
--------------------------------------------
failed to unpack label 2
--------------------------------------------
LABEL 3
--------------------------------------------
failed to unpack label 3

> 2. You are running this early S11 release:
>
> SunOS 5.11 151.0.1.12 i386
>
> You might retry this on more recent bits, like the EA release,
> which I think is b 171.

Doubtful I'll find time to install EA before S11 FCS's November launch.

> I'll still file the CR.

Thank you.

John
groenveld at acm.org
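As a sanity check on that geometry (just shell arithmetic, not from the thread), the sector count times the 4096-byte sector size should come out near the advertised 3 TB:

# echo $(( 732566642 * 4096 / 1024 / 1024 / 1024 ))
2794

2794 GiB is the 3 TB drive, so the whole-disk label really is being addressed in 4 KiB units.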
Cindy Swearingen
2011-Oct-18 16:58 UTC
[zfs-discuss] weird bug with Seagate 3TB USB3 drive
This is CR 7102272.

cs

On 10/18/11 10:50, John D Groenveld wrote:
> # zpool create snafu c1t0d0
> [...]
> # zdb -l /dev/dsk/c1t0d0s0
> [...]
> failed to unpack label 3
John D Groenveld
2011-Oct-18 17:21 UTC
[zfs-discuss] weird bug with Seagate 3TB USB3 drive
In message <4E9DB04B.80506 at oracle.com>, Cindy Swearingen writes:
> This is CR 7102272.

Anyone out there have Western Digital's competing 3TB Passport drive handy to duplicate this bug?

John
groenveld at acm.org
John D Groenveld
2011-Nov-10 15:55 UTC
[zfs-discuss] weird bug with Seagate 3TB USB3 drive
In message <4E9DB04B.80506 at oracle.com>, Cindy Swearingen writes:
> This is CR 7102272.

What is the title of this BugId? I'm trying to attach my Oracle CSI to it, but Chuck Rozwat and company's support engineer can't seem to find it.

Once I get upgraded from S11x SRU12 to S11, I'll reproduce on a more recent kernel build.

Thanks,
John
groenveld at acm.org
Cindy Swearingen
2011-Nov-10 16:26 UTC
[zfs-discuss] weird bug with Seagate 3TB USB3 drive
Hi John,

CR 7102272: ZFS storage pool created on a 3 TB USB 3.0 device has device label problems

Let us know if this is still a problem in the S11 FCS release.

Thanks,

Cindy

On 11/10/11 08:55, John D Groenveld wrote:
> What is the title of this BugId?
> I'm trying to attach my Oracle CSI to it but Chuck Rozwat
> and company's support engineer can't seem to find it.
On Tue, Oct 11, 2011 at 08:17:55PM -0400, John D Groenveld wrote:
> Under both Solaris 10 and Solaris 11x, I receive the evil message:
> | I/O request is not aligned with 4096 disk sector size.
> | It is handled through Read Modify Write but the performance is very low.

I got similar with 4k sector 'disks' (as a comstar target with blk=4096) when trying to use them to force a pool to ashift=12. The labels are found at the wrong offset when the block numbers change, and maybe the GPT label has issues too.

--
Dan.
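A sketch of the offset arithmetic behind that (my numbers, using the sector count reported earlier in this thread): ZFS writes two 256 KiB labels at the front of the device and two at the back, and the back pair's location is derived from the device size, i.e. sector count times sector size. If the same sector count is later interpreted with a different sector size, the back labels are sought nowhere near where they were written:

# echo $(( 732566642 * 4096 ))   # device size if sectors are 4096 B: ~3.0 TB
# echo $(( 732566642 * 512 ))    # same sector count read as 512 B: ~375 GB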
On Nov 10, 2011, at 18:41, Daniel Carosone wrote:
> I got similar with 4k sector 'disks' (as a comstar target with
> blk=4096) when trying to use them to force a pool to ashift=12. The
> labels are found at the wrong offset when the block numbers change,
> and maybe the GPT label has issues too.

Anyone know if Solaris 11 has better support for detecting the native block size of the underlying storage?

PSARC 2008/769 ("Multiple disk sector size support") was committed in OpenSolaris in commit revision 9889:68d0fe4c716e. It appears ZFS makes use of the check when opening a vdev:

http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/vdev_disk.c#287

Has anyone had a chance to play with S11 to confirm? We're only going to get more and more Advanced Format drives, never mind all the SAN storage units out there as well (and VMFS often on top of that too).
On Nov 10, 2011, at 7:47 PM, David Magda wrote:
> Anyone know if Solaris 11 has better support for detecting the native
> block size of the underlying storage?

Better than ?

If the disks advertise 512 bytes, the only way around it is with a whitelist. I would be rather surprised if Oracle sells 4KB sector disks for Solaris systems?

-- richard
On Fri, Nov 11, 2011 at 09:55:29PM -0800, Richard Elling wrote:
> If the disks advertise 512 bytes, the only way around it is with a whitelist. I would
> be rather surprised if Oracle sells 4KB sector disks for Solaris systems?

Afaik the disks advertise both the physical and logical sector size.. at least on Linux you can see that the disk emulates 512 bytes/sector, but natively it uses 4kB/sector:

/sys/block/<disk>/queue/logical_block_size=512
/sys/block/<disk>/queue/physical_block_size=4096

The info should be available through IDENTIFY DEVICE (ATA) or READ CAPACITY 16 (SCSI) commands.

-- Pasi
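For example (a sketch; sdb is a placeholder for whatever the disk enumerates as, and the values are the ones quoted above):

$ cat /sys/block/sdb/queue/logical_block_size
512
$ cat /sys/block/sdb/queue/physical_block_size
4096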
On Nov 12, 2011, at 00:55, Richard Elling wrote:
> Better than ?
> If the disks advertise 512 bytes, the only way around it is with a whitelist. I would
> be rather surprised if Oracle sells 4KB sector disks for Solaris systems?

Solaris 10. OpenSolaris.

But would it be surprising to use SANs with Solaris? Or perhaps run Solaris under some kind of virtualized environment where the virtual disk has a particular block size? Or maybe SSDs, which tend to read/write/delete in certain block sizes?

In these situations simply assuming 512 may slow things down.

And if Solaris 11 is going to be around for a decade or so, I'd hazard to guess that 512B sector disks will become less and less prevalent as time goes on. Might as well enable the functionality now, when 4K is rarer, so you have more time to test and tune things out, rather than later when you can potentially be left scrambling.

As Pasi Kärkkäinen mentions, there's not much you can do if the disks lie (just as has been seen with disks that lie about flushing the cache). This is mostly a temporary kludge for legacy's sake. More and more disks will be truthful as time goes on.
On Sat, Nov 12, 2011 at 08:15:31AM -0500, David Magda wrote:
> As Pasi Kärkkäinen mentions, there's not much you can do if the disks lie
> (just as has been seen with disks that lie about flushing the cache). [...]

Most "4kB"/sector disks already today properly report both the physical (4kB) and logical (512b) sector sizes. It sounds like *solaris is only checking the logical (512b) sector size, not the physical (4kB) sector size..

-- Pasi
On Nov 12, 2011, at 8:31 AM, Pasi Kärkkäinen wrote:
> Most "4kB"/sector disks already today properly report both the physical (4kB) and logical (512b) sector sizes.
> It sounds like *solaris is only checking the logical (512b) sector size, not the physical (4kB) sector size..

ZFS uses the physical block size.

http://src.illumos.org/source/xref/illumos-gate/usr/src/uts/common/fs/zfs/vdev_disk.c#294

-- richard
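For reference, ashift is just log2 of the block size the vdev reports at open time (2^9 = 512, 2^12 = 4096), and zdb will show what a pool actually got (a sketch, using the pool name from earlier in the thread):

# zdb snafu | grep ashift   # 9 means 512 B sectors were assumed; 12 means 4 KiB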
On Sat, Nov 12, 2011 at 10:08:04AM -0800, Richard Elling wrote:
> ZFS uses the physical block size.
> http://src.illumos.org/source/xref/illumos-gate/usr/src/uts/common/fs/zfs/vdev_disk.c#294

Hmm.. so everything should just work? Does some other part of the code use the logical block size then, for example to calculate the ashift?

Maybe I should read the code :)

-- Pasi
tip below...

On Nov 13, 2011, at 3:24 AM, Pasi Kärkkäinen wrote:
> Hmm.. so everything should just work?
> Does some other part of the code use the logical block size then, for example to calculate the ashift?
>
> Maybe I should read the code :)

Or look at what your system reports :-)

Though not directly intended for this use,

echo ::sd_state | mdb -k

and look for the un_phy_blocksize.

-- richard
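For instance (a sketch; the exact field formatting is an assumption, and 0x1000 is what a 4 KiB-native disk would be expected to show):

# echo ::sd_state | mdb -k | grep un_phy_blocksize
    un_phy_blocksize = 0x1000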
John D Groenveld
2011-Dec-06 03:24 UTC
[zfs-discuss] weird bug with Seagate 3TB USB3 drive
In message <4EBBFB52.6 at oracle.com>, Cindy Swearingen writes:
> CR 7102272:
>
> ZFS storage pool created on a 3 TB USB 3.0 device has device label
> problems
>
> Let us know if this is still a problem in the S11 FCS release.

I finally got upgraded from Solaris 11 Express SRU 12 to S11 FCS.

Solaris 11 11/11 still spews the "I/O request is not aligned with 4096 disk sector size" warnings, but zpool(1M) create's label persists and I can export and import between systems.

John
groenveld at acm.org