Similar to: Permanently removing vdevs from a pool

Displaying 20 results from an estimated 7000 matches similar to: "Permanently removing vdevs from a pool"

2007 Jun 20
14
Z-Raid performance with Random reads/writes
Given a 1.6TB ZFS Z-Raid consisting of 6 disks, and a system that does an extreme amount of small (<20K) random reads (more than twice as many reads as writes): 1) What performance gains, if any, does Z-Raid offer over other RAID or large filesystem configurations? 2) What hindrance, if any, is Z-Raid to this configuration, given the complete randomness and size of these accesses? Would
2008 Jun 04
3
Util to remove (s)log from remaining vdevs?
After having to reset my i-ram card, I can no longer import my raidz pool on 2008.05. Also trying to import the pool using the zpool.cache causes a kernel panic on 2008.05 and B89 (I'm waiting to try B90 when released). So I have 2 options: * Wait for a release that can import after log failure... (no time frame ATM) * Use a util that removes the log vdev info from the remaining vdevs.
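For anyone hitting the same failure on newer bits: later OpenZFS releases grew an import option for exactly this situation. A minimal sketch, assuming the pool is named tank and its separate log device is gone (pool and device names are placeholders):
# zpool import -m tank
# zpool status tank
# zpool remove tank <guid-of-missing-log>
zpool import -m imports despite the missing log, zpool status shows the guid of the faulted log device, and zpool remove (pool version 19 and later) drops it from the config. None of this existed on 2008.05.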
2009 Nov 20
13
Data balance across vdevs
I'm migrating to ZFS and Solaris for cluster computing storage, and have some completely static data sets that need to be as fast as possible. One of the scenarios I'm testing is the addition of vdevs to a pool. Starting out, I populated a pool that had 4 vdevs. Then, I added 3 more vdevs and would like to balance this data across the pool for performance. The data may be
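ZFS only spreads data at write time, so blocks written before the new vdevs were added stay where they are. A hedged sketch of the usual workaround, rewriting a dataset so its blocks are reallocated across all 7 vdevs (pool and dataset names are placeholders):
# zfs snapshot tank/data@rebalance
# zfs send tank/data@rebalance | zfs receive tank/data.new
# zfs destroy -r tank/data
# zfs rename tank/data.new tank/data
For completely static data sets this only has to be done once, after the last vdev has been added.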
2007 Jun 17
18
6 disk raidz2 or 3 stripe 2 way mirror
I'm playing around with ZFS and want to figure out the best use of my 6x 300GB SATA drives. The purpose of the drives is to store all of my data at home (video, photos, music, etc). I'm debating between: 6x 300GB disks in a single raidz2 pool --or-- 2x (3x 300GB disks in a pool) mirrored I've read up a lot on ZFS, but I can't really figure out which is
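For comparison, the two layouts under discussion would be created roughly like this (device names are hypothetical):
# zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
# zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0 mirror c0t4d0 c0t5d0
The raidz2 layout yields roughly 1.2TB usable and survives any two disk failures; the three striped 2-way mirrors yield roughly 900GB, survive one failure per mirror, and generally give better random-read IOPS.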
2010 Oct 04
8
Can I "upgrade" a striped pool of vdevs to mirrored vdevs?
Hi, once I created a zpool of single vdevs not using mirroring of any kind. Now I wonder if it's possible to add vdevs and mirror the currently existing ones. Thanks, budy -- This message posted from opensolaris.org
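Short answer: yes, as long as each existing top-level vdev gets a partner. Each single-disk vdev is converted to a mirror in place with zpool attach; a minimal sketch, assuming a pool tank whose current disks are c1t0d0 and c1t1d0 and two new disks c2t0d0 and c2t1d0 (all names are placeholders):
# zpool attach tank c1t0d0 c2t0d0
# zpool attach tank c1t1d0 c2t1d0
# zpool status tank
Wait for the resilver shown by zpool status to finish before treating the pool as redundant.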
2010 Mar 03
6
Question about multiple RAIDZ vdevs using slices on the same disk
Hi all :) I've been wanting to make the switch from XFS over RAID5 to ZFS/RAIDZ2 for some time now, ever since I read about ZFS the first time. Absolutely amazing beast! I've built my own little hobby server at home and have a boatload of disks in different sizes that I've been using together to build a RAID5 array on Linux using mdadm in two layers; first layer is
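For reference, carving several raidz vdevs out of slices on the same physical disks is syntactically possible, e.g. (slice names hypothetical):
# zpool create tank raidz c0d0s0 c1d0s0 c2d0s0 raidz c0d0s1 c1d0s1 c2d0s1
but it is generally discouraged: one failed disk then degrades two vdevs at once, and the heads seek between the slices under load.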
2007 Dec 17
3
ZFS Roadmap - thoughts on expanding raidz / restriping / defrag
Hey folks, Does anybody know if any of these are on the roadmap for ZFS, or have any idea how long it's likely to be before we see them (we're in no rush - late 2008 would be fine with us, but it would be nice to know they're being worked on)? I've seen many people ask for the ability to expand a raid-z pool by adding devices. I'm wondering if it
2007 Apr 22
1
Metaslab allocation control?
I was wondering if it's planned to give some control over the metaslab allocation into the hands of the user. What I have in mind is an attribute on a ZFS filesystem that acts as a modifier to the allocator. Scenarios for this would be directly controlling performance characteristics, e.g. having system and application files being allocated on the inner side of the platter while pushing
2008 Jun 07
4
Mixing RAID levels in a pool
Hi, I had a plan to set up a zfs pool with different raid levels but I ran into an issue based on some testing I've done in a VM. I have 3x 750 GB hard drives and 2x 320 GB hard drives available, and I want to set up a RAIDZ for the 750 GB and a mirror for the 320 GB and add it all to the same pool. I tested detaching a drive and it seems to seriously mess up the entire pool and I
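A sketch of the intended layout, with hypothetical device names (c1* for the 750 GB drives, c2* for the 320 GB drives):
# zpool create tank raidz c1t0d0 c1t1d0 c1t2d0
# zpool add tank mirror c2t0d0 c2t1d0
zpool add will likely warn that the replication levels of the two vdevs differ and require -f to proceed. Note also that zpool detach only applies to mirrors; pulling a disk out of the raidz vdev leaves it degraded rather than shrinking it.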
2010 Nov 23
14
ashift and vdevs
zdb -C shows an ashift value on each vdev in my pool; I was just wondering if it is vdev-specific or pool-wide. Google didn't seem to know. I'm considering a mixed pool with some "advanced format" (4KB sector) drives, and some normal 512B sector drives, and was wondering if the ashift can be set per vdev, or only per pool. Theoretically, this would save me some size on
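ashift is recorded per top-level vdev, which is why zdb prints one value for each vdev; a quick way to check an existing pool (pool name is a placeholder):
# zdb -C tank | grep ashift
On releases that honour it (ZFS on Linux and later OpenZFS, not the older Solaris builds), the value can be forced when a vdev is created, e.g.:
# zpool create -o ashift=12 tank mirror c0t0d0 c0t1d0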
2011 Nov 09
3
Data distribution not even between vdevs
Hi list, My zfs write performance is poor and I need your help. I created a zpool with 2 raidz1 vdevs. When the space was about to be used up, I added another 2 raidz1 vdevs to extend the zpool. After some days the zpool was almost full, so I removed some old data. But now, as shown below, the first 2 raidz1 vdevs' usage is about 78% and the last 2 raidz1 vdevs' usage is about 93%. I have a line in /etc/system set
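To see the imbalance per vdev rather than for the pool as a whole, zpool iostat -v lists allocated and free space for every top-level vdev (pool name is a placeholder):
# zpool iostat -v tank
New writes are biased toward the emptier vdevs, but blocks that already exist are never moved; rewriting the data (send/receive or a plain copy, as in the rebalancing thread above) is the usual way to even things out.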
2010 Oct 20
5
Myth? 21 disk raidz3: "Don't put more than ___ disks in a vdev"
In a discussion a few weeks back, it was mentioned that the Best Practices Guide says something like "Don't put more than ___ disks into a single vdev." At first, I challenged this idea, because I see no reason why a 21-disk raidz3 would be bad. It seems like a good thing. I was operating on the assumption that resilver time was limited by sustainable throughput of disks, which
2009 Oct 27
2
root pool can not have multiple vdevs ?
This seems like a bit of a restriction ... is this intended ? # cat /etc/release Solaris Express Community Edition snv_125 SPARC Copyright 2009 Sun Microsystems, Inc. All Rights Reserved. Use is subject to license terms. Assembled 05 October 2009 # uname -a SunOS neptune 5.11 snv_125 sun4u sparc SUNW,Sun-Fire-880 #
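Yes, the restriction is intended: a root pool must be a single top-level vdev (optionally mirrored), so zpool add is rejected while zpool attach is allowed. A minimal sketch, with hypothetical slice names:
# zpool attach rpool c0t0d0s0 c0t1d0s0
# zpool add rpool c0t2d0s0
The first command mirrors the existing vdev and works; the second tries to add a second top-level vdev and fails with the error in the subject line. On SPARC the newly attached disk also needs boot blocks installed (installboot) before it is bootable.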
2007 May 01
2
Multiple filesystem costs? Directory sizes?
While setting up my new system, I'm wondering whether I should go with plain directories or use ZFS filesystems for specific stuff. About the cost of ZFS filesystems, I read on some Sun blog in the past about something like 64k kernel memory (or whatever) per active filesystem. What are the additional costs, however? The reason I'm considering multiple filesystems is for instance
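One concrete reason to prefer separate filesystems over plain directories is that properties, quotas and snapshots apply per dataset; a small sketch with hypothetical names:
# zfs create -o compression=on tank/photos
# zfs create -o quota=50g tank/mail
# zfs snapshot tank/mail@daily
None of that is possible at directory granularity, which is usually what tips the balance despite the small per-filesystem kernel overhead.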
2007 Apr 12
10
How to bind the oracle 9i data file to zfs volumes
Experts, I'm installing Oracle 9i on Solaris 10 11/06 (update 3). I created some zfs volumes which will be used by Oracle data files, as: # zfs create -V 200m ora_pool/controlfile01_200m # zfs create -V 800m ora_pool/system_800m ... # ls -l /dev/zvol/rdsk/ora_pool lrwxrwxrwx 1 root root 39 Apr 11 12:23 controlfile01_200m -> ../../../../devices/pseudo/zfs@0:1c,raw
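One commonly suggested tweak (an assumption here, not something from the thread): match the zvol block size to the Oracle database block size when the volume is created, since volblocksize cannot be changed afterwards, and point the datafile at the /dev/zvol/rdsk path:
# zfs create -V 800m -o volblocksize=8k ora_pool/system_800m
The datafile is then referenced inside Oracle as '/dev/zvol/rdsk/ora_pool/system_800m'.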
2007 May 05
3
Issue with adding existing EFI disks to a zpool
I spent all day yesterday evacuating my data from one of the Windows disks, so that I could add it to the pool. Using mount-ntfs, it's a pain due to its slowness. But once I finished, I thought "Cool, let's do it". So I added the disk using the zero slice notation (c0d0s0), as suggested for performance reasons. I checked the pool status and noticed however that the pool size
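For what it's worth, the usual alternative to the s0 notation is to hand ZFS the whole disk (no slice suffix), e.g.:
# zpool add tank c0d0
in which case ZFS writes its own EFI label and can enable the disk's write cache; with c0d0s0 the pool only gets whatever that slice covers, which may explain a smaller-than-expected size. (Pool and device names here are placeholders.)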
2007 May 05
13
Optimal strategy (add or replace disks) to build a cheap and raidz?
Hello, I have an 8-port sata-controller and I don't want to spend the money for 8 x 750 GB SATA disks right now. I'm thinking about an optimal way of building a growing raidz pool without losing any data. As far as I know there are two ways to achieve this: - Adding 750 GB disks from time to time. But this would lead to multiple groups with multiple redundancy/parity disks. I
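The other route is to start the raidz with whatever disks are on hand and later swap each member for a 750 GB drive; the vdev only grows once every member has been replaced and resilvered. A hedged sketch (device names are placeholders; the autoexpand property exists only on later releases, older ones need an export/import instead):
# zpool replace tank c1t0d0 c2t0d0
# zpool status tank
# zpool set autoexpand=on tank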
2010 Mar 26
23
RAID10
Hi All, I am looking at ZFS and I get that they call it RAIDZ which is similar to RAID 5, but what about RAID 10? Isn't a RAID 10 setup better for data protection? So if I have 8 x 1.5TB drives, wouldn't I: - mirror drive 1 and 5 - mirror drive 2 and 6 - mirror drive 3 and 7 - mirror drive 4 and 8 Then stripe 1,2,3,4 Then stripe 5,6,7,8 How does one do this with ZFS?
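The RAID 10 layout described above maps directly onto ZFS as a stripe of 2-way mirrors, all in one command (device names are hypothetical, paired as drive 1+5, 2+6, 3+7, 4+8):
# zpool create tank mirror c0t1d0 c0t5d0 mirror c0t2d0 c0t6d0 mirror c0t3d0 c0t7d0 mirror c0t4d0 c0t8d0
ZFS stripes across the four mirror vdevs automatically; there is no separate stripe step.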
2011 Mar 01
5
btrfs wishlist
Hi all Having managed ZFS for about two years, I want to post a wishlist. INCLUDED IN ZFS - Mirror existing single-drive filesystem, as in 'zfs attach' - RAIDz-stuff - single and hopefully multiple-parity RAID configuration with block-level checksumming - Background scrub/fsck - Pool-like management with multiple RAIDs/mirrors (VDEVs) - Autogrow as in ZFS autoexpand NOT
2010 Mar 27
4
Mixed ZFS vdev in same pool.
I have a question about using mixed vdevs in the same zpool and what the community's opinion is on the matter. Here is my setup: I have four 1TB drives and two 500GB drives. When I first set up ZFS I was under the assumption that it does not really care much about how you add devices to the pool and it assumes you are thinking things through. But when I tried to create a pool (called group) with four
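A sketch of one way the described hardware could be laid out in a single pool, with hypothetical device names (c1* for the 1TB drives, c2* for the 500GB drives):
# zpool create group raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 mirror c2t0d0 c2t1d0
zpool create will likely complain about the mismatched replication levels (raidz plus mirror) and require -f to proceed.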