Displaying 20 results from an estimated 60000 matches similar to: "zfs remove vdev"
2010 Mar 27
4
Mixed ZFS vdevs in the same pool.
I have a question about using mixed vdevs in the same zpool and what the community's opinion is on the matter. Here is my setup:
I have four 1TB drives and two 500GB drives. When I first set up ZFS I was under the impression that it does not really care how you add devices to the pool and assumes you are thinking things through. But when I tried to create a pool (called group) with four
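A plausible reconstruction of what the poster ran into, with hypothetical device names: zpool refuses to combine unlike top-level vdevs in one pool unless forced with -f.
   # zpool create group raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 mirror c0t4d0 c0t5d0
   invalid vdev specification
   use '-f' to override the following errors:
   mismatched replication level: both raidz and mirror vdevs are present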
2011 Apr 24
2
zfs problem vdev I/O failure
Good morning, I have a problem with ZFS:
ZFS filesystem version 4
ZFS storage pool version 15
Yesterday my machine running FreeBSD 8.2-RELENG shut down with an "ad4
detached" error while I was copying a big file...
After the reboot, two WD Green 1TB drives said goodbye: one of them died,
and the other came back with ZFS errors:
Apr 24 04:53:41 Flash root: ZFS: vdev I/O failure, zpool=zroot path= offset=187921768448 size=512 error=6
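For reference, errno 6 on FreeBSD is ENXIO, "Device not configured", which is consistent with the drive having detached; easy to confirm on any FreeBSD box:
   $ grep -w ENXIO /usr/include/sys/errno.h
   #define ENXIO           6               /* Device not configured */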
2012 Dec 30
4
Expanding a raidz vdev in zpool
Hello All,
I have a zpool that consists of 2 raidz vdevs (raidz1-0 and raidz1-1). The
first vdev is 4 1.5TB drives. The second was 4 500GB drives. I replaced the
4 500GB drives with 4 3TB drives.
I replaced one at a time, and resilvered each. Now that the process is complete, I
expected to have an extra 10TB (4*2.5TB) of raw space, but it's still the
same amount of space.
I did an export and
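The usual resolution, sketched with a hypothetical pool and device name: the pool only grows once expansion is requested, either via the autoexpand property or per-device with online -e.
   # zpool set autoexpand=on tank
   # zpool online -e tank c0t0d0    # repeat for each replaced disk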
2007 Apr 19
14
Permanently removing vdevs from a pool
Is it possible to gracefully and permanently remove a vdev from a pool without data loss? The type of pool in question here is a simple pool without redundancies (i.e. JBOD). The documentation mentions, for instance, offlining, but without going into the end results of doing that. The thing I'm looking for is an option to evacuate, for lack of a better word, the data from a specific
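At the time, zpool remove only handled hot spares (and later cache and log devices); evacuating a top-level vdev arrived much later with OpenZFS device removal. A sketch, assuming a pool named tank:
   # zpool remove tank c0t3d0    # migrates the vdev's data to the remaining vdevs
   # zpool status tank           # reports evacuation progress while it runs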
2007 Apr 18
2
zfs block allocation strategy
Hi,
quoting from zfs docs
"The SPA allocates blocks in a round-robin fashion from the top-level vdevs. A storage pool with multiple top-level vdevs allows the SPA to use dynamic striping to increase disk bandwidth. Since a new block may be allocated from any of the top-level vdevs, the SPA implements dynamic striping by spreading out writes across all available top-level vdevs"
Now,
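Dynamic striping is easy to observe with a pool built from two top-level vdevs (device names hypothetical); zpool iostat's per-vdev rows show writes landing on both.
   # zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0
   # zpool iostat -v tank 5      # per-vdev rows show writes spread across both mirrors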
2012 Jan 11
0
Clarifications wanted for ZFS spec
I'm reading the "ZFS On-disk Format" PDF (dated 2006 -
are there newer releases?), and have some questions
regarding whether it is outdated:
1) On page 16 it has the following phrase (which I think
is in general invalid):
The value stored in offset is the offset in terms of
sectors (512 byte blocks). To find the physical block
byte offset from the beginning of a slice,
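The rule the spec goes on to state: the physical byte offset is the DVA offset shifted left by 9 (sectors to bytes) plus 0x400000, the 4 MB of label/boot space at the front of each vdev. A quick worked example in shell, using a made-up offset:
   $ printf '0x%x\n' $(( (0x1234 << 9) + 0x400000 ))
   0x646800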
2008 Jun 04
3
Util to remove (s)log from remaining vdevs?
After having to reset my i-RAM card, I can no longer import my raidz pool on 2008.05.
Also, trying to import the pool using the zpool.cache causes a kernel panic on 2008.05 and B89 (I'm waiting to try B90 when released).
So I have 2 options:
* Wait for a release that can import after log failure... (no time frame ATM)
* Use a util that removes the log vdev info from the remaining vdevs.
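Both options eventually materialized: the community "logfix" tool rewrote the labels, and pool version 19 added log device removal proper. A sketch, assuming a pool named tank and a hypothetical slog device path:
   # zpool remove tank /dev/dsk/c3d0s0    # log device removal, pool version >= 19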
2010 Mar 17
0
checksum errors increasing on "spare" vdev?
Hi,
One of my colleagues was confused by the output of 'zpool status' on a pool
where a hot spare is being resilvered in after a drive failure:
$ zpool status data
  pool: data
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub:
2011 Nov 09
3
Data distribution not even between vdevs
Hi list,
My ZFS write performance is poor and I need your help.
I created a zpool with 2 raidz1 vdevs. When the space was about to be used up, I added 2
more raidz1 vdevs to extend the zpool.
After some days the zpool was almost full, so I removed some old data.
But now, as shown below, the first 2 raidz1 vdevs are about 78% used and the
last 2 raidz1 vdevs are about 93% used.
I have this line in /etc/system:
set
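Per-vdev fill can be checked directly; ZFS biases new allocations toward the emptier vdevs but never rebalances data already written, so an imbalance like this persists until old blocks are freed and rewritten. Assuming a pool named tank:
   # zpool iostat -v tank       # 'alloc' and 'free' are reported per top-level vdev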
2007 Sep 19
2
import zpool error if use loop device as vdev
Hey, guys
I just did a test using loop devices as vdevs for a zpool.
Procedure as follows:
1) mkfile -v 100m disk1
   mkfile -v 100m disk2
2) lofiadm -a disk1 /dev/lofi
   lofiadm -a disk2 /dev/lofi
3) zpool create pool_1and2 /dev/lofi/1 /dev/lofi/2
4) zpool export pool_1and2
5) zpool import pool_1and2
error info here:
bash-3.00# zpool import pool1_1and2
cannot import
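Two things stand out in the transcript: the import command spells the pool name pool1_1and2 rather than pool_1and2, and lofi-backed pools are not searched by default on import; zpool import has to be pointed at the device directory with -d.
   # zpool import -d /dev/lofi pool_1and2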
2007 Nov 13
0
In a zpool consisting of regular files, why can zpool status not detect that a file vdev was removed?
I made a file-backed zpool like this:
bash-3.00# zpool status
  pool: filepool
 state: ONLINE
 scrub: none requested
config:

        NAME              STATE     READ WRITE CKSUM
        filepool          ONLINE       0     0     0
          /export/f1.dat  ONLINE       0     0     0
          /export/f2.dat  ONLINE       0     0     0
          /export/f3.dat  ONLINE       0     0     0
        spares
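The likely explanation: ZFS only notices a missing vdev when it actually tries to touch it, so a freshly removed backing file goes undetected until I/O or a scrub forces the issue.
   bash-3.00# zpool scrub filepool
   bash-3.00# zpool status filepool    # the removed file vdev should now show as UNAVAIL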
2010 Jul 19
6
Performance advantages of a pool with 2x raidz2 vdevs vs. a single vdev
Hi guys, I am about to reshape my data pool and am wondering what performance difference I can expect from the new config vs. the old.
The old config is a pool with a single vdev of 8 disks in raidz2.
The new config is 2 vdevs of 7-disk raidz2 in a single pool.
I understand it should be better, with higher IO throughput and better read/write rates, but I'm interested to hear the science
2010 Mar 03
6
Question about multiple RAIDZ vdevs using slices on the same disk
Hi all :)
I've been wanting to make the switch from XFS over RAID5 to ZFS/RAIDZ2 for some time now, ever since I read about ZFS the first time. Absolutely amazing beast!
I've built my own little hobby server at home and have a boatload of disks in different sizes that I've been using together to build a RAID5 array on Linux using mdadm in two layers; first layer is
2010 Oct 20
5
Myth? 21 disk raidz3: "Don't put more than ___ disks in a vdev"
In a discussion a few weeks back, it was mentioned that the Best Practices
Guide says something like "Don't put more than ___ disks into a single
vdev." At first, I challenged this idea, because I see no reason why a
21-disk raidz3 would be bad. It seems like a good thing.
I was operating on the assumption that resilver time is limited by the sustainable
throughput of the disks, which
2007 Nov 13
3
zpool status cannot detect the removed vdev?
I made a file-backed zpool like this:
bash-3.00# zpool status
  pool: filepool
 state: ONLINE
 scrub: none requested
config:

        NAME              STATE     READ WRITE CKSUM
        filepool          ONLINE       0     0     0
          /export/f1.dat  ONLINE       0     0     0
          /export/f2.dat  ONLINE       0     0     0
          /export/f3.dat  ONLINE       0     0     0
        spares
2006 Oct 12
3
Best way to carve up 8 disks
Ok, previous threads have led me to believe that I want to make raidz
vdevs [0] either 3, 5 or 9 disks in size [1]. Let's say I have 8 disks.
Do I want to create a zfs pool with a 5-disk vdev and a 3-disk vdev?
Are there performance issues with mixing differently sized raidz vdevs
in a pool? If there *is* a performance hit to mix like that, would it
be greater or lesser than building
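The 5+3 split from the question, sketched with hypothetical device names; both raidz vdevs go into one pool and ZFS stripes across them:
   # zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 \
                       raidz c0t5d0 c0t6d0 c0t7d0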
2009 Oct 23
7
cryptic vdev name from fmdump
This morning we got a fault management message from one of our production servers stating that a fault in one of our pools had been detected and fixed. Looking into the error using fmdump gives:
fmdump -v -u 90ea244e-1ea9-4bd6-d2be-e4e7a021f006
TIME                 UUID                                 SUNW-MSG-ID
Oct 22 09:29:05.3448 90ea244e-1ea9-4bd6-d2be-e4e7a021f006 FMD-8000-4M Repaired
2009 Mar 01
8
invalid vdev configuration after power failure
What does it mean for a vdev to have an invalid configuration, and how
can it be fixed or reset? As you can see, the following pool can no
longer be imported. (Note that the "last accessed by another system"
warning is because I moved these drives to my test workstation.)
~$ zpool import -f pool0
cannot import 'pool0': invalid vdev configuration
~$ zpool import
pool:
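A first diagnostic step, assuming a hypothetical device path: zdb -l dumps the four vdev labels on a disk, and missing or disagreeing labels are the usual cause of "invalid vdev configuration".
   # zdb -l /dev/dsk/c0t0d0s0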
2010 Jan 28
13
ZFS configuration suggestion with 24 drives
I'm replacing my current media server with another, larger-capacity media server, and also switching over to Solaris/ZFS.
Anyhow, we have a 24-drive capacity. These are for large sequential access (large media files) used by no more than 3 or 5 users at a time. I'm inquiring as to the best configuration for the vdevs. I'm considering the following configurations:
4 x 6
2008 May 14
2
vdev cache - comments in the source
Hello zfs-code,
http://cvs.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/vdev_cache.c
72 * All i/os smaller than zfs_vdev_cache_max will be turned into
73 * 1<<zfs_vdev_cache_bshift byte reads by the vdev_cache (aka software
74 * track buffer). At most zfs_vdev_cache_size bytes will be kept in each
75 * vdev's vdev_cache.
While it
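Those three knobs are the tunables the comment names; on Solaris they can be set at boot in /etc/system. A sketch using what were, as far as I recall, the historical defaults: 16 KB cutoff, bshift 16 (64 KB reads), 10 MB of cache per vdev.
   set zfs:zfs_vdev_cache_max=16384
   set zfs:zfs_vdev_cache_bshift=16
   set zfs:zfs_vdev_cache_size=10485760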