similar to: zfs block allocation strategy

Displaying 20 results from an estimated 1100 matches similar to: "zfs block allocation strategy"

2011 Jan 29
19
multiple disk failure
Hi, I am using FreeBSD 8.2 and went to add 4 new disks today to expand my offsite storage. All was working fine for about 20min and then the new drive cage started to fail. Silly me for assuming new hardware would be fine :( The new drive cage started to fail, it hung the server and the box rebooted. After it rebooted, the entire pool is gone and in the state below. I had only written a few
2010 May 16
9
can you recover a pool if you lose the zil (b134+)
I was messing around with a ramdisk on a pool and I forgot to remove it before I shut down the server. Now I am not able to mount the pool. I am not concerned with the data in this pool, but I would like to try to figure out how to recover it. I am running Nexenta 3.0 NCP (b134+). I have tried a couple of the commands (zpool import -f and zpool import -FX llift) root at
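The import variants that usually come up for a missing-ZIL recovery, sketched with the pool name from the post ("llift"); the -m option for importing with a missing log device only exists on builds newer than b134, so treat that line as an assumption:

    zpool import                 # list pools visible for import and their state
    zpool import -f llift        # force import past the "pool was in use by another system" check
    zpool import -F llift        # rewind: discard the last few transactions and import an earlier txg
    zpool import -FX llift       # extreme rewind, last resort; may lose more recent writes
    zpool import -m llift        # newer builds only: import even though a log device is missing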
2011 Jun 06
3
Available space confusion
I recently created a raidz of four 2TB disks and moved a bunch of movies onto them. And then I noticed that I've somehow lost a full TB of space. Why?

    nebol at filez:/$ zfs list tank2
    NAME    USED  AVAIL  REFER  MOUNTPOINT
    tank2  3.12T   902G  32.9K  /tank2
    nebol at filez:/$ zpool list tank2
    NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
    tank2  5.44T  4.18T  1.26T  76%  ONLINE  -
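The apparent loss is consistent with raidz parity accounting, assuming zpool list reports raw capacity (data plus parity) while zfs list reports usable space: with four disks in one raidz vdev, roughly one disk's worth of space goes to parity, so usable space is about 5.44T x 3/4 ≈ 4.08T, which matches the 3.12T used plus 902G available that zfs list shows.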
2008 Jun 05
6
slog / log recovery is here!
(From the README) # Jeb Campbell <jebc at c4solutions.net> NOTE: This is a last resort if you need your data now. This worked for me, and I hope it works for you. If you have any reservations, please wait for Sun to release something official, and don't blame me if your data is gone. PS -- This worked for me b/c I didn't try and replace the log on a running system. My
2007 Feb 07
4
NFS share problem with mac os x client
Hello, I am currently testing the beauty of zfs. I have installed opensolaris on a spare server to test nfs exports. After creating tank1 with zpool and a sub-filesystem with zfs tank1/nfsshare, I set the option sharenfs=on on tank1/nfsshare. With Mac OS X as the client I can mount the filesystem in Finder.app via nfs://server/tank1/nfsshare, but if I copy a file an error occurs. Finder says "The
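One way to narrow a Finder copy error down is to check the share options on the server and repeat the mount from the Mac's command line, where the full error is reported. A rough sketch only; the mount point and the rw/resvport options are assumptions, not a confirmed fix:

    # on the Solaris server
    zfs get sharenfs tank1/nfsshare
    zfs set sharenfs=rw tank1/nfsshare       # or rw,anon=0 if root squashing is suspected

    # on the Mac OS X client
    sudo mkdir -p /Volumes/nfsshare
    sudo mount_nfs -o resvport server:/tank1/nfsshare /Volumes/nfsshare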
2010 Jun 02
11
ZFS recovery tools
Hi, I have just recovered from a ZFS crash. During the agonizing time this took, I was surprised to learn how undocumented the tools and options for ZFS recovery were. I managed to recover thanks to some great forum posts from Victor Latushkin; however, without his posts I would still be crying at night... I think the worst example is the zdb man page, where all it does is ask you
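For anyone hitting this thread later: the sparsely documented tooling being referred to is mostly zdb plus the zpool import rewind options. A few commands that typically come up, with the device and pool names as placeholders (zdb only reads on-disk state; import -F actually rewinds the pool):

    zdb -l /dev/dsk/c0t0d0s0     # print the vdev labels stored on a device
    zdb -C tank                  # display the cached pool configuration
    zpool import -F tank         # attempt import, rewinding to an earlier transaction group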
2009 Sep 08
4
Can ZFS simply concatenate LUNs (eg no RAID0)?
Hi, I do have a disk array that is providing striped LUNs to my Solaris box. Hence I'd like to simply concat those LUNs without adding another layer of striping. Is this possible with ZFS? As far as I understood, if I use zpool create myPool lun-1 lun-2 ... lun-n I will get a RAID0 striping where each data block is split across all "n" LUNs. If that's
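Worth spelling out for this question: ZFS has no concat mode, but its striping is also not fixed-width RAID0. Each block is allocated entirely on a single top-level vdev, and successive writes are spread across vdevs dynamically, so a plain pool over the LUNs behaves more like load-balanced concatenation than like classic RAID0. A sketch, with the LUN names from the post as placeholders:

    zpool create myPool lun-1 lun-2 lun-3    # one top-level vdev per LUN, dynamic striping
    zpool iostat -v myPool                   # shows how allocations spread across the vdevs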
2010 Mar 26
23
RAID10
Hi All, I am looking at ZFS and I get that they call it RAIDZ which is similar to RAID 5, but what about RAID 10? Isn't a RAID 10 setup better for data protection? So if I have 8 x 1.5tb drives, wouldn't I:
    - mirror drive 1 and 5
    - mirror drive 2 and 6
    - mirror drive 3 and 7
    - mirror drive 4 and 8
Then stripe 1,2,3,4. Then stripe 5,6,7,8. How does one do this with ZFS?
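This layout maps directly onto ZFS: each mirror pair becomes a top-level vdev, and ZFS stripes across top-level vdevs automatically, so no separate stripe step is needed. A sketch with placeholder device names standing in for drives 1-8:

    zpool create tank \
      mirror disk1 disk5 \
      mirror disk2 disk6 \
      mirror disk3 disk7 \
      mirror disk4 disk8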
2010 Aug 25
6
(preview) Whitepaper - ZFS Pools Explained - feedback welcome
Hello list, while following this list for more than 1 year, I feel that this list has been a great way to get insights into ZFS. Thank you all for contributing. Over the last months I have been writing a little "whitepaper" trying to consolidate the knowledge collected here. It has now reached a "beta" state and I would like to share the result with you. I call it -
2009 Nov 20
13
Data balance across vdevs
I'm migrating to ZFS and Solaris for cluster computing storage, and have some completely static data sets that need to be as fast as possible. One of the scenarios I'm testing is the addition of vdevs to a pool. Starting out, I populated a pool that had 4 vdevs. Then, I added 3 more vdevs and would like to balance this data across the pool for performance. The data may be
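One point that matters for this plan: ZFS does not rebalance existing data when vdevs are added; only new allocations favor the emptier vdevs. For static data sets the usual workaround is to rewrite the data after expanding the pool, for example with send/receive into a fresh dataset. A sketch with hypothetical dataset names:

    zfs snapshot tank/data@rebalance
    zfs send tank/data@rebalance | zfs recv tank/data_balanced
    # verify the copy, then re-point consumers and destroy the original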
2009 May 01
2
current zfs tuning in RELENG_7 (AMD64) suggestions ?
I gave the AMD64 version of 7.2 RC2 a spin and all installed as expected off the dvd. INTEL S3200SHV MB, Core2Duo, 4G of RAM. In the past it had been suggested that for zfs tuning, something like

    vm.kmem_size_max="1073741824"
    vm.kmem_size="1073741824"
    vfs.zfs.prefetch_disable=1

However doing a simple test with bonnie and dd, there does not seem to be very much difference in
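For reference, those tunables live in /boot/loader.conf on FreeBSD 7.x. A sketch of the file as the post describes it; the 1 GB values are the poster's, not a recommendation, and the arc_max line is an additional commonly tuned knob shown here only as an assumption:

    # /boot/loader.conf
    vm.kmem_size="1073741824"
    vm.kmem_size_max="1073741824"
    vfs.zfs.prefetch_disable=1
    #vfs.zfs.arc_max="512M"       # often capped as well on 4G machines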
2010 Jan 28
13
ZFS configuration suggestion with 24 drives
Replacing my current media server with another larger capacity media server. Also switching over to solaris/zfs. Anyhow we have 24 drive capacity. These are for large sequential access (large media files) used by no more than 3 or 5 users at a time. I'm inquiring as to what the best configuration for this is for vdevs. I'm considering the following configurations 4 x x6
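One layout that often comes up for 24 drives serving large sequential reads is four 6-disk raidz2 vdevs: moderate vdev width, double parity in each, and four vdevs to stripe across. A sketch only, with placeholder device names:

    zpool create media \
      raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
      raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
      raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 \
      raidz2 c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0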
2010 Jan 07
1
Trying to get Xen going with svn_130...
I'm trying to get linux domU going with opensolaris '130. The CPU is a plain P4 3.2Ghz (no hardware Virt) but all I'm trying is --paravirt. Here's the command line I'm trying.....

    virt-install --paravirt --name=dom1 --ram=1024 --vnc \
      --os-type=linux --os-variant=fedora8 \
      --network bridge \
      --file /dev/zvol/dsk/rpool/dom1 \
      --location
2012 Dec 20
3
Pool performance when nearly full
Hi, I know some of this has been discussed in the past but I can't quite find the exact information I'm seeking (and I'd check the ZFS wikis but the websites are down at the moment). Firstly, which is correct, free space shown by "zfs list" or by "zpool iostat"?

    zfs list:     used 50.3 TB, free 13.7 TB, total = 64 TB, free = 21.4%
    zpool iostat: used
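On the first question: the two numbers measure different things, which is why they rarely agree on raidz pools. zpool iostat (like zpool list) reports raw pool space including parity, while zfs list reports usable space after parity and reservations. A quick way to see both views side by side, assuming the pool is called tank:

    zfs list -o space tank                              # usable space and what is consuming it
    zpool list -o size,allocated,free,capacity tank     # raw space including parity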
2007 Dec 23
11
RAIDZ(2) expansion?
I skimmed the archives and found a thread from July earlier this year about RAIDZ expansion. Not adding more RAIDZ stripes to a pool, but adding more drives to the stripe itself. I'm wondering if an RFE has been submitted for this and if any progress has been made, or is expected? I find myself out of space on my current RAID5 setup and would love to flip over to a ZFS raidz2 solution
2010 Mar 03
6
Question about multiple RAIDZ vdevs using slices on the same disk
Hi all :) I've been wanting to make the switch from XFS over RAID5 to ZFS/RAIDZ2 for some time now, ever since I read about ZFS the first time. Absolutely amazing beast! I've built my own little hobby server at home and have a boatload of disks in different sizes that I've been using together to build a RAID5 array on Linux using mdadm in two layers; first layer is
2006 Nov 28
7
Convert Zpool RAID Types
Hello, Is it possible to non-destructively change RAID types in zpool while the data remains on-line? -J
2010 Nov 08
8
Any limit on pool hierarchy?
Folks, From zfs documentation, it appears that a "vdev" can be built from more vdevs. That is, a raidz vdev can be built across a bunch of mirrored vdevs, and a mirror can be built across a few raidz vdevs. Is my understanding correct? Also, is there a limit on the depth of a vdev? Thank you in advance for your help. Regards, Peter -- This message posted from opensolaris.org
2006 Oct 12
3
Best way to carve up 8 disks
Ok, previous threads have led me to believe that I want to make raidz vdevs [0] either 3, 5 or 9 disks in size [1]. Let's say I have 8 disks. Do I want to create a zfs pool with a 5-disk vdev and a 3-disk vdev? Are there performance issues with mixing differently sized raidz vdevs in a pool? If there *is* a performance hit to mix like that, would it be greater or lesser than building
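If the 5-disk plus 3-disk split is the route taken, it can be expressed in one command; zpool create may warn about the mismatched raidz widths and need -f to proceed. A sketch with placeholder device names:

    zpool create tank \
      raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 \
      raidz c0t5d0 c0t6d0 c0t7d0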
2011 Nov 09
3
Data distribution not even between vdevs
Hi list, my zfs write performance is poor and I need your help. I created a zpool with 2 raidz1 vdevs. When the space was about to be used up, I added 2 more raidz1 vdevs to extend the zpool. After some days the zpool was almost full, so I removed some old data. But now, as shown below, the first 2 raidz1 vdevs are about 78% used and the last 2 raidz1 vdevs are about 93% used. I have a line in /etc/system set
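To see the imbalance per vdev directly, zpool iostat with the verbose flag breaks capacity and I/O out by top-level vdev. Note that ZFS biases new writes toward the emptier vdevs but never migrates blocks that are already written, so rewriting the remaining data is the usual way to even a pool out:

    zpool iostat -v tank 5      # per-vdev capacity and I/O, sampled every 5 seconds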