Displaying 20 results from an estimated 400 matches similar to: "4k sector support in Solaris 11?"
2011 Oct 12
33
weird bug with Seagate 3TB USB3 drive
Banging my head against a Seagate 3TB USB3 drive.
Its marketing name is:
Seagate Expansion 3 TB USB 3.0 Desktop External Hard Drive STAY3000102
format(1M) shows it identifying itself as:
Seagate-External-SG11-2.73TB
Under both Solaris 10 and Solaris 11x, I receive the evil message:
| I/O request is not aligned with 4096 disk sector size.
| It is handled through Read Modify Write but the performance
2010 Nov 06
10
Apparent SAS HBA failure-- now what?
My setup: A SuperMicro 24-drive chassis with Intel dual-processor
motherboard, three LSI SAS3081E controllers, and 24 SATA 2TB hard drives,
divided into three pools with each pool a single eight-disk RAID-Z2. (Boot
is an SSD connected to motherboard SATA.)
This morning I got a cheerful email from my monitoring script: "Zchecker has
discovered a problem on bigdawg." The full output is
2010 Feb 16
2
Speed question: 8-disk RAIDZ2 vs 10-disk RAIDZ3
I currently am getting good speeds out of my existing system (8x 2TB in a
RAIDZ2 exported over fibre channel) but there's no such thing as too much
speed, and these other two drive bays are just begging for drives in
them.... If I go to 10x 2TB in a RAIDZ3, will the extra spindles increase
speed, or will the extra parity writes reduce speed, or will the two factors
offset and leave things
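A back-of-the-envelope sketch of the trade-off in the question above (the helper name and 2 TB disk size are assumptions for illustration; real throughput depends on workload, record size, and the fact that a single raidz vdev delivers roughly one disk's worth of random IOPS regardless of width):

```python
# Compare the two layouts discussed above: 8-disk RAIDZ2 vs 10-disk RAIDZ3.
# Streaming bandwidth scales roughly with the number of data disks;
# parity_fraction shows how much raw capacity goes to redundancy.

def raidz_summary(disks, parity, disk_tb=2):
    data_disks = disks - parity
    return {
        "data_disks": data_disks,
        "usable_tb": data_disks * disk_tb,
        "parity_fraction": parity / disks,
    }

print(raidz_summary(8, 2))   # current:  6 data disks, 12 TB usable
print(raidz_summary(10, 3))  # proposed: 7 data disks, 14 TB usable
```

By this crude measure the 10-disk RAIDZ3 has one more data spindle, so sequential throughput should not drop; whether the extra parity computation matters in practice is exactly what the thread is asking.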
2010 Apr 26
23
SAS vs SATA: Same size, same speed, why SAS?
I'm building another 24-bay rackmount storage server, and I'm considering
what drives to put in the bays. My chassis is a Supermicro SC846A, so the
backplane supports SAS or SATA; my controllers are LSI3081E, again
supporting SAS or SATA.
Looking at drives, Seagate offers an enterprise (Constellation) 2TB 7200RPM
drive in both SAS and SATA configurations; the SAS model offers
2012 Dec 03
6
Sonnet Tempo SSD supported?
Anyone here using http://www.sonnettech.com/product/tempossd.html
with a zfs-capable OS? Is e.g. OpenIndiana supported?
Thanks.
2012 Jan 11
1
How many "rollback" TXGs in a ring for 4k drives?
Hello all, I found this dialog on the zfs-devel at zfsonlinux.org list,
and I'd like someone to confirm-or-reject the discussed statement.
Paraphrasing in my words and understanding:
"Labels, including Uberblock rings, are fixed 256KB in size each,
of which 128KB is the UB ring. Normally there is 1KB of data in
one UB, which gives 128 TXGs to rollback to. When ashift=12 is
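The arithmetic being discussed can be sketched directly (constants from the on-disk format as described in the excerpt: 128 KiB of uberblock ring per label, and an uberblock slot of max(1 KiB, one aligned sector)):

```python
# TXG rollback depth of the uberblock ring as a function of ashift.
# Each label reserves 128 KiB for the ring; a slot is max(1 KiB, 2**ashift).

RING_BYTES = 128 * 1024

def txgs_in_ring(ashift):
    slot = max(1024, 1 << ashift)
    return RING_BYTES // slot

print(txgs_in_ring(9))   # 512B sectors -> 1 KiB slots -> 128 TXGs
print(txgs_in_ring(12))  # 4K sectors   -> 4 KiB slots -> 32 TXGs
```

So on an ashift=12 pool the ring holds a quarter as many uberblocks, which is presumably the point the truncated message goes on to make.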
2012 Jul 18
7
Question on 4k sectors
Hi. Is the problem with ZFS supporting 4k sectors or is the problem mixing
512 byte and 4k sector disks in one pool, or something else? I have seen
a lot of discussion on the 4k issue but I haven't understood what the actual
problem ZFS has with 4k sectors is. It's getting harder and harder to find
large disks with 512 byte sectors so what should we do? TIA...
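For the record, the usual answer on implementations that expose it (ZFS on Linux and later OpenZFS; illumos at the time needed sd.conf overrides or a patched zpool) is to force the alignment at pool creation. A sketch with placeholder pool and device names:

```shell
# Force 4 KiB alignment at creation time so drives that lie about
# their sector size still get aligned writes ('tank' and the
# device names are placeholders).
zpool create -o ashift=12 tank raidz2 da0 da1 da2 da3 da4 da5
```

The catch raised elsewhere in these threads is that ashift is fixed once a vdev is created, which is what makes mixing 512B and 4K drives painful.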
2012 Jun 17
26
Recommendation for home NAS external JBOD
Hi,
my oi151 based home NAS is approaching a frightening "drive space" level. Right now the data volume is a 4*1TB Raid-Z1, 3 1/2" local disks individually connected to an 8 port LSI 6Gbit controller.
So I can either exchange the disks one by one with autoexpand, use 2-4 TB disks and be happy. This was my original approach. However I am totally unclear about the 512b vs 4Kb issue.
2011 Jan 07
5
Migrating zpool to new drives with 4K Sectors
Hi ZFS Discuss,
I have a 8x 1TB RAIDZ running on Samsung 1TB 5400rpm drives with 512b sectors.
I will be replacing all of these with 8x Western Digital 2TB drives
with support for 4K sectors. The replacement plan will be to swap out
each of the 8 drives until all are replaced and the new size (~16TB)
is available with a `zfs scrub`.
My question is, how do I do this and also factor in the new
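One common approach (a sketch, not the list's verdict; the pool and device names are placeholders): replace the drives one at a time, waiting for each resilver, and let the pool grow once all members are larger. Note the size increase comes from the autoexpand property (or `zpool online -e`), not from a scrub:

```shell
# Grow an 8-drive raidz by swapping drives one at a time.
zpool set autoexpand=on tank

zpool replace tank c0t0d0 c0t8d0   # old drive, new drive
zpool status tank                  # wait until resilver completes
# ...repeat for the remaining seven drives...
```

Whether the 4K-sector drives can join the existing 512B-aligned vdev at all is exactly the "different sector alignment" problem raised in other threads here.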
2011 Oct 04
6
zvol space consumption vs ashift, metadata packing
I sent a zvol from host a, to host b, twice. Host b has two pools,
one ashift=9, one ashift=12. I sent the zvol to each of the pools on
b. The original source pool is ashift=9, and an old revision (2009_06
because it's still running xen).
I sent it twice, because something strange happened on the first send,
to the ashift=12 pool. "zfs list -o space" showed figures at
2010 Nov 23
14
ashift and vdevs
zdb -C shows an ashift value on each vdev in my pool, I was just wondering if
it is vdev specific, or pool wide. Google didn't seem to know.
I'm considering a mixed pool with some "advanced format" (4KB sector)
drives, and some normal 512B sector drives, and was wondering if the ashift
can be set per vdev, or only per pool. Theoretically, this would save me
some size on
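To the question above: ashift is recorded per top-level vdev, not per pool, and zdb will print one value under each vdev entry. A quick check (placeholder pool name):

```shell
# One ashift line appears per top-level vdev in the cached config;
# a mixed pool will show different values side by side.
zdb -C tank | grep ashift
```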
2011 Jul 29
12
booting from ashift=12 pool..
.. evidently doesn''t work. GRUB reboots the machine moments after
loading stage2, and doesn''t recognise the fstype when examining the
disk loaded from an alternate source.
This is with SX-151. Here''s hoping a future version (with grub2?)
resolves this, as well as lets us boot from raidz.
Just a note for the archives in case it helps someone else get back
the afternoon
2012 Jun 14
5
(fwd) Re: ZFS NFS service hanging on Sunday morning
>
> Offlist/OT - Sheer guess, straight out of my parts - maybe a cronjob to
> rebuild the locate db or something similar is hammering it once a week?
In the problem condition, there appears to be very little going on on the system. eg.,
root at server5:/tmp# /usr/local/bin/top
last pid: 3828; load avg: 4.29, 3.95, 3.84; up 6+23:11:44  07:12:47
79 processes: 78 sleeping, 1 on
2012 Sep 24
20
cannot replace X with Y: devices have different sector alignment
Well this is a new one....
Illumos/Openindiana let me add a device as a hot spare that evidently has a
different sector alignment than all of the other drives in the array.
So now I'm at the point that I /need/ a hot spare and it doesn't look like
I have it.
And, worse, the other spares I have are all the same model as said hot
spare.
Is there anything I can do with this or
2011 Oct 05
1
Fwd: Re: zvol space consumption vs ashift, metadata packing
Hello, Daniel,
Apparently your data is represented by rather small files (thus
many small data blocks), so proportion of metadata is relatively
high, and your <4k blocks are now using at least 4k disk space.
For data with small blocks (a 4k volume on an ashift=12 pool)
I saw metadata use up most of my drive - becoming equal to
data size.
Just for the sake of completeness, I brought up a
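The inflation described above is easy to estimate: any block smaller than one aligned sector is rounded up to a full 2^ashift bytes. A toy calculation (assumptions: no compression, no RAIDZ parity overhead):

```python
def allocated(block_bytes, ashift):
    """Bytes actually consumed on disk: blocks round up to 2**ashift."""
    sector = 1 << ashift
    return -(-block_bytes // sector) * sector  # ceiling division

# A 1 KiB metadata block on the two pools discussed in this thread:
print(allocated(1024, 9))   # 1024  (no waste with 512B sectors)
print(allocated(1024, 12))  # 4096  (4x inflation on ashift=12)
```

This is why a dataset full of small blocks can see its metadata footprint approach the data size on an ashift=12 pool, as the message reports.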
2007 Sep 18
5
ZFS panic in space_map.c line 125
One of our Solaris 10 update 3 servers paniced today with the following error:
Sep 18 00:34:53 m2000ef savecore: [ID 570001 auth.error] reboot after
panic: assertion failed: ss != NULL, file:
../../common/fs/zfs/space_map.c, line: 125
The server saved a core file, and the resulting backtrace is listed below:
$ mdb unix.0 vmcore.0
> $c
vpanic()
0xfffffffffb9b49f3()
space_map_remove+0x239()
2012 Nov 13
9
Intel DC S3700
[This email is either empty or too large to be displayed at this time]
2011 Nov 08
1
Single-disk rpool with inconsistent checksums, import fails
Hello all,
I have an oi_148a PC with a single root disk, and since
recently it fails to boot - hangs after the copyright
message whenever I use any of my GRUB menu options.
Booting with an oi_148a LiveUSB I had around since
installation, I ran some zdb traversals over the rpool
and zpool import attempts. The imports fail by running
the kernel out of RAM (as recently discussed in the
list with
2011 Jul 13
4
How about 4KB disk sectors?
So, what is the story about 4KB disk sectors? Should such disks be avoided with ZFS? Or, no problem? Or, need to modify some config file before usage?
--
This message posted from opensolaris.org
2011 Jan 29
19
multiple disk failure
Hi,
I am using FreeBSD 8.2 and went to add 4 new disks today to expand my
offsite storage. All was working fine for about 20min and then the new
drive cage started to fail. Silly me for assuming new hardware would be
fine :(
The new drive cage started to fail, it hung the server and the box
rebooted. After it rebooted, the entire pool is gone and in the state
below. I had only written a few