Displaying 20 results from an estimated 61 matches for "ashift".
2011 Oct 04
6
zvol space consumption vs ashift, metadata packing
I sent a zvol from host a, to host b, twice. Host b has two pools,
one ashift=9, one ashift=12. I sent the zvol to each of the pools on
b. The original source pool is ashift=9, and an old revision (2009_06
because it's still running xen).
I sent it twice, because something strange happened on the first send,
to the ashift=12 pool. "zfs list -o space"...
2011 Oct 05
1
Fwd: Re: zvol space consumption vs ashift, metadata packing
Hello, Daniel,
Apparently your data is represented by rather small files (thus
many small data blocks), so proportion of metadata is relatively
high, and your <4k blocks are now using at least 4k of disk space.
For data with small blocks (a 4k volume on an ashift=12 pool)
I saw metadata use up most of my drive - becoming equal to
data size.
Just for the sake of completeness, I brought up a similar problem
and a non-intrusive (compatibility-wise) solution in this bug:
https://www.illumos.org/issues/1044
Main idea was to let ZFS users specify a minimum data...
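The inflation described in this thread follows from sector rounding; as an illustrative sketch (my own arithmetic, not code from the thread), every on-disk allocation is padded up to the pool's 2^ashift sector size:

```python
# Illustrative sketch (not from the thread): on-disk allocations are
# rounded up to the pool's sector size, 2^ashift bytes, so sub-4k
# blocks inflate heavily on an ashift=12 pool.
def allocated_size(psize, ashift):
    sector = 1 << ashift                 # sector size in bytes
    return -(-psize // sector) * sector  # round psize up to a sector multiple

print(allocated_size(512, 9))    # 512-byte block on ashift=9  -> 512 bytes
print(allocated_size(512, 12))   # same block on ashift=12     -> 4096 bytes (8x)
```

This is why metadata-heavy datasets can approach parity between metadata and data size on 4k-sector pools, as the poster observed.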
2010 Nov 23
14
ashift and vdevs
...hows an ashift value on each vdev in my pool, I was just wondering if
it is vdev specific, or pool wide. Google didn't seem to know.
I'm considering a mixed pool with some "advanced format" (4KB sector)
drives, and some normal 512B sector drives, and was wondering if the ashift
can be set per vdev, or only per pool. Theoretically, this would save me
some size on metadata on the 512B sector drives.
Cheers,
2011 Jul 29
12
booting from ashift=12 pool..
.. evidently doesn't work. GRUB reboots the machine moments after
loading stage2, and doesn't recognise the fstype when examining the
disk loaded from an alternate source.
This is with SX-151. Here's hoping a future version (with grub2?)
resolves this, as well as lets us boot from raidz.
Just a note for the archives in case it helps someone else get back
the afternoon
2012 Jan 11
1
How many "rollback" TXGs in a ring for 4k drives?
...like someone to confirm-or-reject the discussed statement.
Paraphrasing in my words and understanding:
"Labels, including Uberblock rings, are fixed 256KB in size each,
of which 128KB is the UB ring. Normally there is 1KB of data in
one UB, which gives 128 TXGs to rollback to. When ashift=12 is
used for 4k-sector disks, each UB is allocated a 4KB block, of
which 3KB is padding. And now we only have 32 TXGs of rollback."
Is this understanding correct? That's something I did not think of
previously, indeed...
Thanks,
//Jim
http://groups.google.com/a/zfsonlinu...
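The arithmetic in the quoted paragraph can be sketched as follows; this is my reading of the statement being discussed (a 128 KiB uberblock ring per label, each slot padded to the sector size), not authoritative ZFS source:

```python
# Sketch of the uberblock-ring arithmetic from the quoted paragraph:
# each vdev label reserves a 128 KiB uberblock ring, and each uberblock
# slot is padded out to the pool's 2^ashift sector size (1 KiB minimum).
RING_SIZE = 128 * 1024   # uberblock ring per label, in bytes
UB_MIN_SLOT = 1024       # one uberblock occupies ~1 KiB at minimum

def rollback_txgs(ashift):
    slot = max(UB_MIN_SLOT, 1 << ashift)  # slot padded to sector size
    return RING_SIZE // slot              # number of retained TXGs

print(rollback_txgs(9))   # 512 B sectors -> 128 TXGs
print(rollback_txgs(12))  # 4 KiB sectors -> 32 TXGs
```

Under this reading, ashift=12 indeed cuts the rollback window from 128 to 32 TXGs, matching the numbers in the thread.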
2008 Sep 05
0
raidz pool metadata corrupted nexanta-core->freenas 0.7->nexanta-core
...root'
id=0
guid=7417064082496892875
children[0]
type='disk'
id=0
guid=16996723219710622372
path='/dev/dsk/c1d0s0'
devid='id1,cmdk@AST3160812AS=____________9LS6M819/a'
phys_path='/pci@0,0/pci-ide@e/ide@0/cmdk@0,0:a'
whole_disk=0
metaslab_array=14
metaslab_shift=30
ashift=9
asize=158882856960
is_log=0
tank
version=10
name='tank'
state=0
txg=9305484
pool_guid=6165551123815947851
hostname='cempedak'
vdev_tree
type='root'
id=0
guid=6165551123815947851
children[0]
type='raidz'
id=0
guid=18029757455913565148
nparity=1
metaslab_array=14
met...
2007 Sep 18
5
ZFS panic in space_map.c line 125
...tree
type='disk'
id=0
guid=3365726235666077346
path='/dev/dsk/c3t50002AC00039040Bd0p0'
devid='id1,sd@n50002ac00039040b/q'
whole_disk=0
metaslab_array=13
metaslab_shift=31
ashift=9
asize=322117566464
--------------------------------------------
LABEL 1
--------------------------------------------
version=3
name='fpool0'
state=0
txg=4
pool_guid=10406529929620343615
top_guid=3365726235666077346
guid=3365726235666077346...
2012 Feb 16
3
4k sector support in Solaris 11?
If I want to use a batch of new Seagate 3TB Barracudas with Solaris 11,
will zpool let me create a new pool with ashift=12 out of the box or will
I need to play around with a patched zpool binary (or the iSCSI loopback)?
--
Dave Pooser
Manager of Information Services
Alford Media http://www.alfordmedia.com
2012 Jan 11
0
Clarifications wanted for ZFS spec
...l block
byte offset from the beginning of a slice, the value
inside offset must be shifted over (<<) by 9 (2^9=512)
and this value must be added to 0x400000 (size of two
vdev_labels and boot block).
Does this calculation really use hard-coded 2^9
values, or VDEV-dependent ashift values (i.e. 2^12
for 4k disks, 2^10 for default raidz, etc.)?
2) Likewise, in Section 2.6 (block size entries) the
values of lsize/psize/asize are said to be represented
by the number of 512-byte sectors. Does this statement
hold true for ashift!=9 VDEVs/pools as well?
3) In Section 1.3 they dis...
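The decoding being asked about in question 1 can be sketched like this; it assumes the spec's answer (a hard-coded shift by 9, independent of ashift), which is exactly what the poster wants confirmed:

```python
# Sketch of the DVA offset decoding described in the question, assuming
# the on-disk spec's hard-coded 512 B units: the stored offset is
# shifted left by 9, then 4 MiB (two vdev labels + boot block) is added.
LABELS_AND_BOOT = 0x400000  # two front vdev labels plus boot block

def dva_offset_to_bytes(dva_offset):
    return (dva_offset << 9) + LABELS_AND_BOOT  # physical byte offset

print(hex(dva_offset_to_bytes(0)))  # first allocatable byte -> 0x400000
```

Whether the shift is really always 9, or 2^ashift on 4k vdevs, is precisely the clarification the poster is requesting.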
2012 Sep 24
20
cannot replace X with Y: devices have different sector alignment
Well this is a new one....
Illumos/Openindiana let me add a device as a hot spare that evidently has a
different sector alignment than all of the other drives in the array.
So now I'm at the point that I /need/ a hot spare and it doesn't look like
I have it.
And, worse, the other spares I have are all the same model as said hot
spare.
Is there anything I can do with this or
2012 Jul 18
7
Question on 4k sectors
Hi. Is the problem with ZFS supporting 4k sectors or is the problem mixing
512 byte and 4k sector disks in one pool, or something else? I have seen
a lot of discussion on the 4k issue but I haven't understood what the actual
problem ZFS has with 4k sectors is. It's getting harder and harder to find
large disks with 512 byte sectors so what should we do? TIA...
2012 Jun 17
26
Recommendation for home NAS external JBOD
...y original approach. However I am totally unclear about the 512b vs 4Kb issue. What sata disk could I use that is big enough and still uses 512b? I know about the discussion about the upgrade from a 512b based pool to a 4 KB pool but I fail to see a conclusion. Will the autoexpand mechanism upgrade ashift? And what disks do not lie? Is the performance impact significant?
So I started to think about option 2. That would be using an external JBOD chassis (4-8 disks) and eSATA. But I would either need a JBOD with 4-8 eSATA connectors (which I am yet to find) or use a JBOD with a "good" expan...
2010 Jun 29
0
Processes hang in /dev/zvol/dsk/poolname
...'/dev/dsk/c7t1d0s0'
devid: 'id1,sd@SATA_____Hitachi_HDT72101______STF607MH3A3KSK/a'
phys_path: '/pci@0,0/pci1043,8231@12/disk@1,0:a'
whole_disk: 1
metaslab_array: 23
metaslab_shift: 33
ashift: 9
asize: 1000191557632
is_log: 0
DTL: 605
--------------------------------------------
LABEL 1
--------------------------------------------
version: 22
name: 'puddle'
state: 0
txg: 55553139
pool_guid: 13462109782214169516
hostid: 44...
2010 May 07
0
confused about zpool import -f and export
...hostname: 'nexenta_safemode'
top_guid: 7124011680357776878
guid: 15556832564812580834
vdev_children: 1
vdev_tree:
type: 'mirror'
id: 0
guid: 7124011680357776878
metaslab_array: 23
metaslab_shift: 32
ashift: 9
asize: 750041956352
is_log: 0
create_txg: 4
children[0]:
type: 'disk'
id: 0
guid: 15556832564812580834
path: '/dev/dsk/c0d0s0'
devid: 'id1,cmdk@AQEMU_HARDDI...
2011 Jan 07
5
Migrating zpool to new drives with 4K Sectors
Hi ZFS Discuss,
I have a 8x 1TB RAIDZ running on Samsung 1TB 5400rpm drives with 512b sectors.
I will be replacing all of these with 8x Western Digital 2TB drives
with support for 4K sectors. The replacement plan will be to swap out
each of the 8 drives until all are replaced and the new size (~16TB)
is available with a `zpool scrub`.
My question is, how do I do this and also factor in the new
2012 Nov 13
9
Intel DC S3700
[This email is either empty or too large to be displayed at this time]
2011 Nov 08
1
Single-disk rpool with inconsistent checksums, import fails
Hello all,
I have an oi_148a PC with a single root disk, and since
recently it fails to boot - hangs after the copyright
message whenever I use any of my GRUB menu options.
Booting with an oi_148a LiveUSB I had around since
installation, I ran some zdb traversals over the rpool
and zpool import attempts. The imports fail by running
the kernel out of RAM (as recently discussed in the
list with
2011 Jul 13
4
How about 4KB disk sectors?
So, what is the story about 4KB disk sectors? Should such disks be avoided with ZFS? Or, no problem? Or, need to modify some config file before usage?
2012 Jul 31
1
FreeBSD 9.1-BETA1 amd64 fails to mount ZFS rootfs with error 2 when system has more than 3584MB of RAM
Dear Everyone,
I am running FreeBSD 9.1-BETA1 amd64 on ZFS in KVM on Gentoo Linux on
ZFS. The root pool uses ashift=13 and is on a single disk. The kernel
fails to mount the root filesystem if the system has more than 3584MB of
RAM. I did a manual binary search to try to find the exact upper limit,
but stopped when I tried 3648MB.
FreeBSD 9.0-RELEASE works perfectly.
Yours truly,
Richard Yao
-------------- ne...
2011 Jan 29
19
multiple disk failure
Hi,
I am using FreeBSD 8.2 and went to add 4 new disks today to expand my
offsite storage. All was working fine for about 20min and then the new
drive cage started to fail. Silly me for assuming new hardware would be
fine :(
The new drive cage started to fail, it hung the server and the box
rebooted. After it rebooted, the entire pool is gone and in the state
below. I had only written a few