Displaying 20 results from an estimated 20000 matches similar to: "snv_110 -> snv_121 produces checksum errors on Raid-Z pool"
2010 Jan 10
5
Repeating scrub does random fixes
I've been using a 5-disk raidZ for years on an SXCE machine, which I converted to OSOL. The only time I ever had ZFS problems in SXCE was with snv_120, which was fixed.
So, now I'm at OSOL snv_111b and I'm finding that scrub repairs errors on random disks. If I repeat the scrub, it will fix errors on other disks. Occasionally it runs cleanly. That it doesn't
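A minimal sketch of the diagnostic loop being described, assuming a hypothetical pool named 'tank':
  zpool scrub tank        # start a scrub pass
  zpool status -v tank    # see which devices reported or repaired errors
  zpool clear tank        # reset error counters before repeating the scrub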
2012 Jan 15
22
Does raidzN actually protect against bitrot? If yes - how?
"Does raidzN actually protect against bitrot?"
That's a rather radical, possibly provocative, way of phrasing a question
that has been on my mind lately.
Reading up on the theory of RAID5, I grasped the idea of the write
hole (where one of the sectors of the stripe, such as the parity
data, doesn't get written, leading to invalid data upon read).
In general, I think the same applies to bitrot of
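As a toy illustration of the RAID5 parity mechanism sketched above (bash, with made-up byte values; not ZFS code):
  D1=0xA5; D2=0x3C; D3=0x0F                           # three data "sectors"
  P=$(( D1 ^ D2 ^ D3 ))                               # parity written with the stripe
  printf 'D2 rebuilt: 0x%02X\n' $(( P ^ D1 ^ D3 ))    # recover a lost sector from parity
  # if the parity sector is stale (the write hole), the rebuild silently
  # produces garbage; nothing in plain RAID5 detects that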
2011 Jan 05
6
ZFS on top of ZFS iSCSI share
I have a filer running OpenSolaris (snv_111b) and I am presenting an
iSCSI share from a RAIDZ pool. I want to run ZFS on the share at the
client. Is it necessary to create a mirror or use ditto blocks at the
client to ensure ZFS can recover if it detects a failure at the client?
Thanks,
Bruin
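One hedged option for the client side, assuming a hypothetical pool named 'client' built on the single iSCSI LUN (device name made up):
  zpool create client c2t1d0    # one iSCSI-backed vdev, no pool-level redundancy
  zfs set copies=2 client       # ditto blocks: two copies of every data block
copies=2 lets the client self-heal checksum errors, but unlike a mirror it cannot survive the whole LUN disappearing.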
2007 Aug 07
5
Extending RAIDZ.
Yeah:)
I'd like to work on this. Here are my first observations:
- We need to call the vdev_op_asize method with an additional 'offset' argument,
- We need to move data to the new disk starting from the very beginning, so
we can't reuse the scrub/resilver code, which does a tree-walk through the
data.
Below you can see how I imagine to extend RAIDZ. Here is the legend:
2009 Feb 19
8
RFE for two-level ZFS
Should I file an RFE for this addition to ZFS? The concept would be
to run ZFS on a file server, exporting storage to an application
server where ZFS also runs on top of that storage. All storage
management would take place on the file server, where the physical
disks reside. The application server would still perform end-to-end
error checking but would notify the file server when it detected
2007 Jun 20
14
Z-Raid performance with Random reads/writes
Given a 1.6TB ZFS Z-Raid consisting of 6 disks,
and a system that does an extreme number of small (<20K) random reads
(more than twice as many reads as writes):
1) What performance gains, if any, does Z-Raid offer over other RAID or
large filesystem configurations?
2) What hindrance, if any, is Z-Raid to this configuration, given the
complete randomness and small size of these accesses?
Would
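As a rough worked example, assuming ~100 random IOPS per 7200rpm disk: raidz stripes every block across all data disks, so each small random read touches 5 of the 6 disks and the vdev delivers roughly one disk's worth, ~100 IOPS; a traditional 6-disk RAID5 can serve independent small reads from different disks concurrently, up to roughly 6 x 100 = 600 read IOPS.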
2008 Jul 23
72
The best motherboard for a home ZFS fileserver
I've been a fan of ZFS since I read about it last year.
Now I'm about to build a home fileserver and I'm thinking of going with OpenSolaris and eventually ZFS!!
Apart from the other components, the main problem is choosing the motherboard. The range on offer is incredibly wide and I'm lost.
Minimum requirements should be:
- working well with OpenSolaris ;-)
-
2009 Nov 22
9
Resilver/scrub times?
Hi all!
I've decided to take the "big jump" and build a ZFS home filer (although it
might also do "other work" like caching DNS, mail, usenet, bittorent and so
forth). YAY! I wonder if anyone can shed some light on how long a pool scrub
would take on a fairly decent rig. These are the specs as-ordered:
Asus P5Q-EM mainboard
Core2 Quad 2.83 GHZ
8GB DDR2/80
OS:
2 x
2009 Mar 28
53
Can this be done?
I currently have a 7x1.5TB raidz1.
I want to add "phase 2", which is another 7x1.5TB raidz1.
Can I add the second phase to the first phase and basically have two
RAID5s striped (in RAID terms)?
Yes, I probably should upgrade the zpool format too. Currently running
snv_104. Also should upgrade to 110.
If that is possible, would anyone happen to have the simple command
lines to
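A hedged sketch of the commands being asked about (pool and device names hypothetical):
  zpool add tank raidz1 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0
  zpool upgrade tank      # bring the on-disk format up to the running build
zpool add stripes the new raidz1 vdev alongside the existing one; note that the addition cannot be undone.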
2010 Jul 20
16
zfs raidz1 and traditional RAID5 performance comparison
Hi,
For ZFS raidz1, I know that for random I/O the IOPS of a raidz1 vdev equal the IOPS of one physical disk. Since raidz1 is like RAID5, does RAID5 have the same performance as raidz1, i.e. random IOPS equal to one physical disk's IOPS?
Regards
Victor
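One hedged way to check the claim empirically rather than by analogy (pool name hypothetical):
  zpool iostat -v tank 5    # per-vdev and per-disk ops/sec, sampled every 5 seconds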
2010 Oct 17
10
RaidzN blocksize ... or blocksize in general ... and resilver
The default blocksize is 128K. If you are using mirrors, then each block on
disk will be 128K whenever possible. But if you're using raidzN with a
capacity of M disks (M disks useful capacity + N disks redundancy) then the
block size on each individual disk will be 128K / M. Right? This is one of
the reasons the raidzN resilver code is inefficient. Since you end up
waiting for the
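A quick worked example under those assumptions: with the 128K default on a 5-disk raidz1 (M = 4 data disks), each block puts 128K / 4 = 32K on every data disk; a 9-disk raidz1 (M = 8) drops that to 16K per disk, so a resilver turns into many small scattered reads.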
2009 Nov 13
11
scrub differs in execute time?
I have a raidz2 and ran a scrub; it took 8h. Then I reconnected some drives to other SATA ports, and now a scrub takes 15h?
Why is that?
2006 Mar 23
17
Poor performance on NFS-exported ZFS volumes
I'm seeing some pretty pitiful performance using ZFS on an NFS server, with a ZFS volume exported (only with rw=host.foo.com,root=host.foo.com opts) and mounted on a Linux host running kernel 2.4.31. The Linux kernel I'm working with is limited in that I can only do NFSv2 mounts... regardless of that aspect, I'm sure something's amiss.
I mounted the zfs-based
2010 Aug 21
8
ZFS with Equallogic storage
I'm planning to set up an NFS server for our ESXi hosts, using a virtualized Solaris or Nexenta host to serve ZFS over NFS.
The storage I have available is provided by Equallogic boxes over 10GbE iSCSI.
I am trying to figure out the best way to provide both performance and resiliency, given that the Equallogic provides the redundancy.
Since I am hoping to provide a 2TB
2010 Jun 04
5
Depth of Scrub
Hi,
I have a small question about the depth of scrub in a raidz/2/3 configuration.
I'm quite sure scrub does not check spares or unused areas of the disks (though
it could check whether the disks detect any errors there).
But what about the parity? Obviously it has to be checked, but I can't find
any indication of it in the literature. The man page only states that the
data is being
2008 Aug 05
5
OpenSolaris+ZFS+RAIDZ+VirtualBox - ready for production systems?
Hi all,
I have been looking at various alternatives for a system that runs several Linux & Windows guests. So far my favorite choice would be OpenSolaris+ZFS+RAIDZ+VirtualBox. Is this combo ready to be a host for Linux & Windows guests? Or is it not 100% stable (yet)?
Greetings,
Evert
2008 Jul 02
14
is it possible to add a mirror device later?
Ciao,
the root filesystem of my Thumper is a ZFS pool with a single disk:
bash-3.2# zpool status rpool
pool: rpool
state: ONLINE
scrub: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c5t0d0s0  ONLINE       0     0     0
        spares
          c0t7d0    AVAIL
          c1t6d0    AVAIL
          c1t7d0
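A hedged sketch of the usual answer, with a hypothetical second disk:
  zpool attach rpool c5t0d0s0 c4t0d0s0    # attaching to an existing device makes it a mirror
  zpool status rpool                      # watch the resilver complete
On an x86 box like a Thumper you would also want to install the boot blocks on the new disk (installgrub) so the mirror half remains bootable.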
2007 Apr 02
4
Convert raidz
Hi
Is it possible to convert a live 3-disk raidz zpool to raidz2?
And is it possible to add one new disk to a raidz configuration without backing up and recreating the zpool from scratch?
Thanks
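As far as I know, neither operation is supported in place; the usual workaround, sketched here with hypothetical pool and device names, is to build the raidz2 pool and migrate:
  zpool create newpool raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0
  zfs snapshot oldpool/fs@migrate
  zfs send oldpool/fs@migrate | zfs receive newpool/fs    # repeat per filesystem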
2011 Aug 14
4
Space usage
I'm just uploading all my data to my server and the space used is much more than what I'm uploading:
Documents = 147MB
Videos = 11G
Software = 1.4G
By my calculations, that equals 12.547G, yet zpool list is showing 21G as being allocated:
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
dpool  27.2T  21.2G  27.2T   0%  1.00x  ONLINE  -
It doesn't look like
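One hedged way to narrow this down: zpool list reports raw pool allocation (parity included, if dpool is a raidzN pool), while zfs list reports usable space after redundancy:
  zfs list dpool      # usable space consumed by the datasets
  zpool list dpool    # raw allocation, including any raidz parity overhead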
2006 Oct 16
11
Configuring a 3510 for ZFS
Hi folks,
A colleague and I are currently involved in a prototyping exercise
to evaluate ZFS against our current filesystem. We are looking at the
best way to arrange the disks in a 3510 storage array.
We have been testing with the 12 disks on the 3510 exported as "nraid"
logical devices. We then configured a single ZFS pool on top of this,
using two raid-z arrays. We are getting
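A hedged sketch of the layout described, with hypothetical device names for the 12 exported LUNs:
  zpool create tank \
      raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
      raidz c2t6d0 c2t7d0 c2t8d0 c2t9d0 c2t10d0 c2t11d0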