Displaying 20 results from an estimated 6000 matches similar to: "compression=on and zpool attach"
2006 Mar 30
39
Proposal: ZFS Hot Spare support
As mentioned last night, we've been reviewing a proposal for hot spare
support in ZFS. Below you can find a current draft of the proposed
interfaces. This has not yet been submitted for ARC review, but
comments are welcome. Note that this does not include any enhanced FMA
diagnosis to determine when a device is "faulted". This will come in a
follow-on project, of which some
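For reference, a rough sketch of how hot-spare handling ended up looking in the zpool CLI that later shipped; the pool and device names below are made up:

# add two disks to an existing pool as hot spares
zpool add tank spare c3t0d0 c3t1d0

# if a device faults, a spare can also be swapped in manually
zpool replace tank c1t2d0 c3t0d0

# spares are listed in their own section of the status output
zpool status tank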
2010 Jul 20
16
zfs raidz1 and traditional raid 5 performance comparison
Hi,
for zfs raidz1, I know that for random I/O the IOPS of a raidz1 vdev equal those of one physical disk. Since raidz1 is like RAID 5, does RAID 5 have the same performance as raidz1, i.e. random IOPS equal to one physical disk's IOPS?
Regards
Victor
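As a back-of-the-envelope illustration of the usual rule of thumb (random IOPS scale with the number of top-level vdevs, not the number of disks); the per-disk figure here is purely assumed:

DISK_IOPS=150   # assumed random IOPS of a single 7200rpm disk
VDEVS=1         # one raidz1 vdev, regardless of how many disks it contains
echo "expected pool random IOPS: $((DISK_IOPS * VDEVS))"   # same as one disk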
2010 Mar 26
23
RAID10
Hi All,
I am looking at ZFS and I get that they call it RAIDZ which is similar to RAID 5, but what about RAID 10? Isn't a RAID 10 setup better for data protection?
So if I have 8 x 1.5TB drives, wouldn't I:
- mirror drive 1 and 5
- mirror drive 2 and 6
- mirror drive 3 and 7
- mirror drive 4 and 8
Then stripe 1,2,3,4
Then stripe 5,6,7,8
How does one do this with ZFS?
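For reference, the RAID 10 equivalent in ZFS is a pool of mirror vdevs, which ZFS stripes across automatically; a minimal sketch with hypothetical device names:

# four two-way mirrors; ZFS stripes writes across the mirror vdevs on its own
zpool create tank \
    mirror c0t1d0 c0t5d0 \
    mirror c0t2d0 c0t6d0 \
    mirror c0t3d0 c0t7d0 \
    mirror c0t4d0 c0t8d0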
2010 Apr 24
6
Extremely slow raidz resilvering
Hello everyone,
As one of the steps of improving my ZFS home fileserver (snv_134) I wanted
to replace a 1TB disk with a newer one of the same vendor/model/size because
this new one has 64MB cache vs. 16MB in the previous one.
The removed disk will be used for backups, so I thought it's better to
have the 64MB-cache disk in the on-line pool than in the backup set sitting
off-line all
2008 Jul 02
14
is it possible to add a mirror device later?
Ciao,
the root filesystem of my thumper is a ZFS with a single disk:
bash-3.2# zpool status rpool
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          c5t0d0s0    ONLINE       0     0     0
        spares
          c0t7d0      AVAIL
          c1t6d0      AVAIL
          c1t7d0
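A second device can indeed be attached to a single-disk vdev later, turning it into a mirror; a minimal sketch, with the new device name purely hypothetical:

# attach a second disk to the existing root device, forming a two-way mirror
zpool attach rpool c5t0d0s0 c4t0d0s0

# resilver progress can then be watched with
zpool status rpool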
2010 Aug 06
3
Reconfigure zpool
I have a zpool like this:
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        tank          ONLINE       0     0     0
          raidz3-0    ONLINE       0     0     0
            c6t0d0    ONLINE       0     0     0
            c6t1d0    ONLINE       0     0     0
            c6t2d0    ONLINE       0     0     0
            c6t3d0    ONLINE       0     0     0
            c6t4d0    ONLINE
2006 Sep 28
13
jbod questions
Folks,
We are in the process of purchasing new SANs that our mail server (JES3)
runs on. We have moved our mailstores to ZFS and continue to
have checksum errors -- they are corrected, which is already an improvement
over the UFS inode errors that required a system shutdown and fsck.
So, I am recommending that we buy small JBODs, do raidz2, and let ZFS
handle the RAID layout of these boxes. As we need more
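A minimal sketch of the layout being suggested, one raidz2 vdev per small JBOD; all device names here are made up:

# first JBOD becomes one raidz2 vdev
zpool create mailpool raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0

# a second JBOD can later be added to the same pool as another raidz2 vdev
zpool add mailpool raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0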
2010 Nov 01
6
Excruciatingly slow resilvering on X4540 (build 134)
Hello,
I'm working with someone who replaced a failed 1TB drive (50% utilized)
on an X4540 running OS build 134, and I think something must be wrong.
Last Tuesday afternoon, zpool status reported:
scrub: resilver in progress for 306h0m, 63.87% done, 173h7m to go
and a week being 168 hours, that put completion at sometime tomorrow night.
However, he just reported zpool status shows:
2007 Jun 20
14
Z-Raid performance with Random reads/writes
Given a 1.6TB ZFS Z-Raid consisting of 6 disks,
and a system that does an extreme amount of small (<20K) random reads
(more than twice as many reads as writes):
1) What performance gains, if any, does Z-Raid offer over other RAID or
large filesystem configurations?
2) What hindrance, if any, is Z-Raid to this configuration, given the
complete randomness and size of these accesses?
Would
2010 Mar 03
6
Question about multiple RAIDZ vdevs using slices on the same disk
Hi all :)
I've been wanting to make the switch from XFS over RAID5 to ZFS/RAIDZ2 for some time now, ever since I read about ZFS the first time. Absolutely amazing beast!
I've built my own little hobby server at home and have a boatload of disks in different sizes that I've been using together to build a RAID5 array on Linux using mdadm in two layers; first layer is
2010 Oct 17
10
RaidzN blocksize ... or blocksize in general ... and resilver
The default blocksize is 128K. If you are using mirrors, then each block on
disk will be 128K whenever possible. But if you're using raidzN with a
capacity of M disks (M disks of useful capacity + N disks of redundancy), then the
block size on each individual disk will be 128K / M. Right? This is one of
the reasons the raidzN resilver code is inefficient. Since you end up
waiting for the
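A quick illustration of that arithmetic, assuming the default 128K recordsize and a raidzN vdev with 8 data disks:

RECORDSIZE=$((128 * 1024))   # default recordsize, in bytes
M=8                          # assumed number of data (non-parity) disks
echo "bytes per data disk per block: $((RECORDSIZE / M))"   # 16384, i.e. 16K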
2009 Sep 08
4
Can ZFS simply concatenate LUNs (eg no RAID0)?
Hi,
I do have a disk array that is providing striped LUNs to my Solaris box. Hence I'd like to simply concatenate those LUNs without adding another layer of striping.
Is this possible with ZFS?
As far as I understood, if I use
zpool create myPool lun-1 lun-2 ... lun-n
I will get a RAID0 striping where each data block is split across all "n" LUNs.
If that's
2006 Jan 10
8
first ajax demo in Rails book - does it work for anyone?
Just tried the first AJAX example in the rails book (p.391-392, the
'word guessing' thing), and the AJAX partial used seems to render as a
full page.
I'm not sure whether it's
a) a partial bug
b) some interaction between ajax and partials
c) a change since the book came out
or
d) pilot error
I've checked the errata pages and it's flagged up
2007 Jul 07
17
Raid-Z expansion
Apologies for the blank message (if it came through).
I have heard here and there that there might be in development a plan
to make it such that a raid-z can grow its "raid-z'ness" to
accommodate a new disk added to it.
Example:
I have 4 disks in a raid-z[12] configuration. I am uncomfortably low on
space, and would like to add a 5th disk. The idea is to pop in disk 5
and have
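The workarounds available at the time were adding a whole new vdev to the pool, or replacing every disk in the existing vdev with a larger one; a sketch with hypothetical device names:

# grow the pool by adding a second raidz vdev (the existing one cannot be widened)
zpool add tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0

# or replace each disk of the existing vdev with a larger one, one at a time;
# once all disks have been swapped, the vdev can grow to the new size
zpool replace tank c1t0d0 c3t0d0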
2007 Jan 15
4
iSCSI on a single interface?
Hi, are there currently any plans to make an iSCSI target created by
setting shareiscsi=on on a zvol
bindable to a single interface (setting tpgt or acls)?
I can cobble something together with ipfilter,
but that doesn't give me enough granularity to say something like:
'host a can see target 1, host c can see targets 2-9', etc.
Also, am I right in thinking without
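For context, a zvol-backed target of the kind described here is created roughly as follows; the pool/volume names are made up, and shareiscsi refers to the old OpenSolaris iSCSI target daemon:

# create a 100GB zvol and export it via the legacy shareiscsi property
zfs create -V 100g tank/iscsivol
zfs set shareiscsi=on tank/iscsivol

# list the resulting target
iscsitadm list target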
2006 Apr 06
15
A few Newbie questions about RAIDZ
1. I have a 4x18GB drive setup as RAIDZ. Now when thinking about it
in terms of RAID5 I would expect to get (4-1)x18 worth of drive
space, but df -h shows 4x18. Is this a bug or do I not understand?
2. Once again thinking in RAID5 terms if I have 4X18GB and 12X9GB
drives and I want to make a RAIDZ of all of them I would expect the
18GB drives to be treated as 9GB, so the RAIDZ would be 16x9GB. Is
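The explanation usually given is that pool-level tools report raw capacity including parity, while the filesystem view reports usable space; a quick way to compare the two, with a hypothetical pool name:

# raw pool size, parity included
zpool list tank

# usable space as the filesystems (and df) see it
zfs list tank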
2008 Jan 02
1
Adding to zpool: would failure of one device destroy all data?
I didn't find any clear answer in the documentation, so here it goes:
I've got a 4-device RAIDZ array in a pool. I then add another RAIDZ array to the pool. If one of the arrays fails, would all the data in the pool be lost, or would it be like disk spanning, and only the data on the failed array be lost?
Thanks in advance.
2007 Dec 23
11
RAIDZ(2) expansion?
I skimmed the archives and found a thread from July earlier this year
about RAIDZ expansion. Not adding more RAIDZ stripes to a pool, but
adding more drives to the stripe itself. I'm wondering if an RFE has
been submitted for this and if any progress has been made, or is
expected? I find myself out of space on my current RAID5 setup and
would love to flip over to a ZFS raidz2 solution
2012 Jan 15
22
Does raidzN actually protect against bitrot? If yes - how?
"Does raidzN actually protect against bitrot?"
That's a somewhat radical, possibly offensive, question
that I have been asking lately.
Reading up on theory of RAID5, I grasped the idea of the write
hole (where one of the sectors of the stripe, such as the parity
data, doesn't get written - leading to invalid data upon read).
In general, I think the same applies to bitrot of
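The mechanism usually pointed to in this discussion is checksumming plus scrub: every block is verified against its checksum, and on redundant vdevs a bad copy is rewritten from parity or a mirror. A minimal check, with a hypothetical pool name:

# walk every block in the pool, verify checksums, repair from redundancy
zpool scrub tank

# report any checksum errors and which files, if any, were affected
zpool status -v tank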
2009 Apr 27
23
Raidz vdev size... again.
Hi,
I'm new to the list so please bear with me. This isn't an OpenSolaris-
related problem, but I hope it's still the right list to post to.
I'm in the process of moving a backup server to ZFS-based storage, but I
don't want to spend too many drives on parity (the 16 drives are attached
to a 3ware RAID controller, so I could also just use RAID 6 there).
I
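One common split for 16 drives, shown here only as an illustration with made-up device names, is two 8-disk raidz2 vdevs in a single pool (4 of the 16 drives go to parity in total):

zpool create backup \
    raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 \
    raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0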