Displaying 20 results from an estimated 1000 matches similar to: "booting from ashift=12 pool.."
2012 Jul 18
7
Question on 4k sectors
Hi. Is the problem with ZFS supporting 4k sectors, or is the problem mixing
512-byte and 4k-sector disks in one pool, or something else? I have seen
a lot of discussion of the 4k issue but I haven't understood what the actual
problem is that ZFS has with 4k sectors. It's getting harder and harder to find
large disks with 512-byte sectors, so what should we do? TIA...
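For reference, a quick way to see which allocation size ZFS actually picked for a pool is to read the ashift back out of the cached configuration; a minimal sketch with placeholder pool and device names:
  zpool create tank mirror c2t0d0 c2t1d0
  zdb -C tank | grep ashift     # ashift: 9 means 512-byte allocations, ashift: 12 means 4 KiB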
2011 Oct 04
6
zvol space consumption vs ashift, metadata packing
I sent a zvol from host a to host b, twice. Host b has two pools,
one ashift=9, one ashift=12. I sent the zvol to each of the pools on
b. The original source pool is ashift=9, and an old revision (2009_06,
because it's still running Xen).
I sent it twice because something strange happened on the first send
to the ashift=12 pool. "zfs list -o space" showed figures at
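The comparison described above boils down to something like the following sketch (pool, zvol and snapshot names are placeholders):
  zfs snapshot sourcepool/vol0@migrate
  zfs send sourcepool/vol0@migrate | zfs receive pool9/vol0    # pool9 is the ashift=9 pool
  zfs send sourcepool/vol0@migrate | zfs receive pool12/vol0   # pool12 is the ashift=12 pool
  zfs list -o space -r pool9 pool12                            # compare the on-disk consumption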
2011 Oct 05
1
Fwd: Re: zvol space consumption vs ashift, metadata packing
Hello, Daniel,
Apparently your data is represented by rather small files (thus
many small data blocks), so the proportion of metadata is relatively
high, and your <4k blocks are now using at least 4k of disk space each.
For data with small blocks (a 4k volume on an ashift=12 pool)
I saw metadata use up most of my drive, becoming equal to the
data size.
Just for the sake of completeness, I brought up a
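A back-of-the-envelope illustration of that effect, with assumed numbers rather than figures from the thread: with volblocksize=4k on an ashift=12 pool, every data block occupies one 4 KiB allocation, and a small indirect/metadata block that fit into a single 512-byte sector on ashift=9 now also occupies a full 4 KiB allocation, so in the worst case the metadata approaches the size of the data itself. One way to check the per-type breakdown:
  zdb -bb pool12        # block statistics, including how much space metadata consumes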
2012 Oct 17
24
[zfs] portable zfs send streams (preview webrev)
We have finished a beta version of the feature. A webrev for it
can be found here:
http://cr.illumos.org/~webrev/sensille/fits-send/
It adds a command 'zfs fits-send'. The resulting streams can
currently only be received on btrfs, but more receivers will
follow.
It would be great if anyone interested could give it some testing
and/or review. If there are no objections,
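Based on the description above, usage would look roughly like a normal send pipeline; a hedged sketch with placeholder dataset and host names, assuming the stream is fed to btrfs receive on the target:
  zfs snapshot tank/data@tofits
  zfs fits-send tank/data@tofits | ssh backuphost 'btrfs receive /backup'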
2012 Sep 24
20
cannot replace X with Y: devices have different sector alignment
Well this is a new one....
Illumos/OpenIndiana let me add a device as a hot spare that evidently has a
different sector alignment than all of the other drives in the array.
So now I'm at the point that I /need/ a hot spare and it doesn't look like
I have it.
And, worse, the other spares I have are all the same model as said hot
spare.
Is there anything I can do with this or
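A sketch of the sequence being described, with placeholder device names: the disk is accepted as a spare, but actually swapping it in is refused.
  zpool add tank spare c5t9d0        # accepted without complaint
  zpool replace tank c5t3d0 c5t9d0   # fails: devices have different sector alignment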
2011 Oct 12
33
weird bug with Seagate 3TB USB3 drive
Banging my head against a Seagate 3TB USB3 drive.
Its marketing name is:
Seagate Expansion 3 TB USB 3.0 Desktop External Hard Drive STAY3000102
format(1M) shows it identifying itself as:
Seagate-External-SG11-2.73TB
Under both Solaris 10 and Solaris 11x, I receive the evil message:
| I/O request is not aligned with 4096 disk sector size.
| It is handled through Read Modify Write but the performance
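One quick check for that warning (device name is a placeholder) is whether the slices start on 4 KiB boundaries, i.e. on sector numbers divisible by 8:
  prtvtoc /dev/rdsk/c6t0d0s2     # the "First Sector" column should be a multiple of 8 for each slice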
2012 Jun 17
26
Recommendation for home NAS external JBOD
Hi,
my oi151-based home NAS is approaching a frightening "drive space" level. Right now the data volume is a 4*1TB RAID-Z1 of 3.5" local disks, individually connected to an 8-port LSI 6Gbit controller.
So I can either exchange the disks one by one with autoexpand, use 2-4 TB disks, and be happy. This was my original approach. However, I am totally unclear about the 512B vs 4KB issue.
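The replace-in-place route mentioned above looks roughly like this (pool and device names are placeholders), repeated once per disk:
  zpool set autoexpand=on tank
  zpool replace tank c3t0d0 c3t4d0   # swap one 1 TB disk for its larger replacement
  zpool status tank                  # wait for the resilver to finish before the next swap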
2012 Jan 11
1
How many "rollback" TXGs in a ring for 4k drives?
Hello all, I found this dialog on the zfs-devel at zfsonlinux.org list,
and I'd like someone to confirm or reject the discussed statement.
Paraphrasing in my own words and understanding:
"Labels, including Uberblock rings, are fixed 256KB in size each,
of which 128KB is the UB ring. Normally there is 1KB of data in
one UB, which gives 128 TXGs to rollback to. When ashift=12 is
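The arithmetic behind that statement, using the sizes quoted above: the uberblock ring is 128 KiB per label; at ashift=9 each uberblock slot is 1 KiB, giving 128 entries, while at ashift=12 each slot grows to 4 KiB, leaving only 32 entries to roll back to. The labels and the active uberblock can be inspected with zdb (device and pool names are placeholders):
  zdb -l /dev/rdsk/c0t0d0s0   # dump the on-disk labels of one pool member
  zdb -u tank                 # show the currently active uberblock / TXG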
2010 Nov 23
14
ashift and vdevs
zdb -C shows an ashift value on each vdev in my pool; I was just wondering if
it is vdev-specific, or pool-wide. Google didn't seem to know.
I'm considering a mixed pool with some "advanced format" (4KB sector)
drives, and some normal 512B sector drives, and was wondering if the ashift
can be set per vdev, or only per pool. Theoretically, this would save me
some size on
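ashift is recorded per top-level vdev, so on a mixed pool the cached configuration shows one value for each vdev; roughly (pool name is a placeholder):
  zdb -C tank | grep -w ashift       # expect one "ashift:" line per top-level vdev,
                                     # e.g. 9 for the 512B vdev and 12 for the 4K vdev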
2012 Nov 21
5
mixing WD20EFRX and WD2002FYPS in one pool
Hi,
after a flaky 8-drive Linux RAID10 just shredded about 2 TByte worth
of my data at home (conveniently just before I could make
a backup) I've decided to go both full redundancy and all ZFS at home.
A couple of questions: is there a way to make WD20EFRX (2 TByte, 4k sectors)
and WD2002FYPS (4k internally, reported as 512 bytes?) work well together on a current OpenIndiana? Which
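One approach that has been used on illumos-based systems is to override the physical block size the 512-emulated drive reports, so that every vdev ends up with ashift=12; a hedged sketch (the sd-config-list vendor/product string shown is an assumption and must match what the drive actually reports):
  # added to /kernel/drv/sd.conf:
  sd-config-list = "WDC     WD2002FYPS", "physical-block-size:4096";
  # then reload the sd driver configuration (or reboot) before creating the pool:
  update_drv -vf sd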
2013 Jan 08
3
pool metadata has duplicate children
I seem to have managed to end up with a pool that is confused about its child disks. The pool is faulted with corrupt metadata:
pool: d
state: FAULTED
status: The pool metadata is corrupted and the pool cannot be opened.
action: Destroy and re-create the pool from
a backup source.
see: http://illumos.org/msg/ZFS-8000-72
scan: none requested
config:
NAME STATE
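For a pool faulted with corrupt metadata, the usual first attempts are a rewind import, ideally read-only, before following the destroy-and-restore advice; a sketch using the pool name shown above:
  zpool import -nF d                  # dry run: would discarding the last few TXGs make it importable?
  zpool import -o readonly=on -F d    # if so, import read-only and copy the data off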
2008 Oct 14
6
code review for 6734731 and 6734123
I'd like reviewers for:
6734731 vif-vnic instances can race against each other
6734123 xpvd-event logging is dismal
The webrev is at http://dme.org/solaris/webrev/xvm-script-cleanup.
2008 Apr 01
10
Request for code review: the brendan() action
This came up as an RFE during the conference (I believe it's been logged
as "4012008: brendan() action needed for DTrace Toolkit").
As everyone here is aware, DTrace is not quite as user friendly as it
could be. For the uninitiated, it can be confusing to run a DTrace
script and not see the expected output. Brendan Gregg has addressed
this in the DTrace Toolkit[1] by
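The Toolkit convention alluded to is, roughly, printing an initial status line from a BEGIN clause; a minimal illustration as a one-liner (the probe body is just an example, not the proposed brendan() implementation):
  dtrace -n 'BEGIN { printf("Tracing... Hit Ctrl-C to end.\n"); } syscall:::entry { @[execname] = count(); }'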
2008 Jun 18
6
Please advise: sending out bogus gratuitous ARP packet from vna
I've implemented code to send out a bogus gratuitous ARP packet from vna
in order to fix CR 6701114. The webrev is at:
http://jurassic.eng/net/consulte.prc/export/build/xvm-6701114/webrev
Some background information (see CR 6701114 for more detail):
During live migration of an xVM domain from one dom0 to another, the
VNIC will be moved from one switch port to another. But, the
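For illustration only (not the vna change itself), the same cache-update effect can be produced by hand with the iputils arping tool on a Linux peer; the interface and address below are placeholders:
  arping -U -c 1 -I eth0 192.0.2.10   # send one unsolicited (gratuitous) ARP announcing 192.0.2.10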
2008 Aug 04
6
[Fwd: [networking-discuss] code-review: fine-grained privileges for datalink administration]
Crossbow team,
The following is of interest to the Crossbow project. Since a large
chunk of these changes also exists in the Crossbow gate, the delivery of
this wad will result in fewer lines of changes for Crossbow's delivery.
If someone on Crossbow could participate in this review, that would be a
bonus (Eric Cheng made original changes in the Crossbow gate at some
point last year).
2013 Oct 26
2
[PATCH] 1. changes for vdiskadm on illumos based platform
2. update ZFS in libfsimage from illumos for pygrub
diff -r 7c12aaa128e3 -r c2e11847cac0 tools/libfsimage/Rules.mk
--- a/tools/libfsimage/Rules.mk Thu Oct 24 22:46:20 2013 +0100
+++ b/tools/libfsimage/Rules.mk Sat Oct 26 20:03:06 2013 +0400
@@ -2,11 +2,19 @@ include $(XEN_ROOT)/tools/Rules.mk
CFLAGS += -Wno-unknown-pragmas -I$(XEN_ROOT)/tools/libfsimage/common/
2006 May 14
3
hi all
Not much I can say except that I am very excited to enter this new
world (solely for me of course, kind of late) of rubyonrails. If anyone
has the patience or just plain kindness to help me on my first steps
(leaps?), I would greatly appreciate any links, references, tips,
hellos, or whatever else is offered.
Thanks ahead of time,
Shai Rosenfeld
Octava
I
2011 May 13
27
Extremely slow zpool scrub performance
Running a zpool scrub on our production pool shows a scrub rate
of about 400K/s. (When this pool was first set up we saw rates in the
MB/s range during a scrub.)
Both zpool iostat and an iostat -Xn show lots of idle disk times, no
above average service times, no abnormally high busy percentages.
Load on the box is 0.59.
8 x 3GHz, 32GB RAM, 96 spindles arranged into raidz vdevs on OI 147.
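For reference, the checks being described correspond to roughly the following (pool name is a placeholder):
  zpool status -v tank     # current scrub rate and completion estimate
  zpool iostat -v tank 5   # per-vdev throughput while the scrub runs
  iostat -xn 5             # per-disk service times and %busy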
2017 Feb 18
2
[lld] Has anybody ever run into the Solaris linker before?
Recently LLD made it to the front page of HN (yay!):
https://news.ycombinator.com/item?id=13670458
This comment about the Solaris linker surprised me:
https://news.ycombinator.com/item?id=13672364
"""
> To me, the biggest advantage is cross compiling
Not all system linkers have this problem. For example, Solaris ld(1) is
perfectly capable of cross-linking any valid ELF file.
2012 Feb 16
3
4k sector support in Solaris 11?
If I want to use a batch of new Seagate 3TB Barracudas with Solaris 11,
will zpool let me create a new pool with ashift=12 out of the box or will
I need to play around with a patched zpool binary (or the iSCSI loopback)?
--
Dave Pooser
Manager of Information Services
Alford Media http://www.alfordmedia.com
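One way to answer that empirically once the drives are attached (pool and device names are placeholders): create a test pool and read the chosen ashift back out.
  zpool create testpool c7t0d0
  zdb -C testpool | grep ashift   # ashift: 12 means the 4 KiB sector size was honoured out of the box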