similar to: ZFS + Raid-Z pool size incorrect?

Displaying 20 results from an estimated 3000 matches similar to: "ZFS + Raid-Z pool size incorrect?"

2006 Nov 01
0
RAID-Z1 pool became faulted when a disk was removed.
So I have attached to my system two 7-disk SCSI arrays, each of 18.2 GB disks. Each of them is a RAID-Z1 zpool. I had a disk I thought was a dud, so I pulled the fifth disk in my array and put the dud in. Sure enough, Solaris started spitting errors like there was no tomorrow in dmesg, and wouldn't use the disk. Ah well. Remove it, put the original back in - hey, Solaris still thinks
2006 Jun 13
4
ZFS panic while mounting lofi device?
I believe ZFS is causing a panic whenever I attempt to mount an iso image (SXCR build 39) that happens to reside on a ZFS file system. The problem is 100% reproducible. I'm quite new to OpenSolaris, so I may be incorrect in saying it's ZFS's fault. Also, let me know if you need any additional information or debug output to help diagnose things. Config: bash-3.00#
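A minimal sketch of the lofi mount sequence being described, for anyone trying to reproduce it (the ISO path and lofi device number here are hypothetical):

    # attach the ISO as a block device; lofiadm prints the device it creates
    lofiadm -a /tank/isos/build39.iso      # e.g. /dev/lofi/1
    # mount it read-only as an HSFS (ISO 9660) filesystem
    mount -F hsfs -o ro /dev/lofi/1 /mnt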
2007 Mar 07
0
anyone want a Solaris 10u3 core file...
I executed sync just before this happened.... ultra:ultra# mdb -k unix.0 vmcore.0 Loading modules: [ unix krtld genunix specfs dtrace ufs sd pcipsy md ip sctp usba fctl nca crypto zfs random nfs ptm cpc fcip sppp lofs ] > $c vpanic(7b653bd8, 7036fca0, 7036fc70, 7b652990, 0, 60002d0b480) zio_done+0x284(60002d0b480, 0, a8, 7036fca0, 0, 60000b08d80) zio_vdev_io_assess+0x178(60002d0b480, 8000,
2006 Aug 28
1
Sol 10 x86_64 intermittent SATA device locks up server
Hello All, I have an issue where I have two SATA cards with 5 drives each in one zfs pool. The issue is one of the devices has been intermittently failing. The problem is that the entire box seems to lock up on occasion when this happens. I currently have the SATA cable to that device disconnected in the hopes that the box will at least stay up for now. This is a new build that I am
2008 Jun 17
6
mirroring zfs slice
Hi All, I had a slice with a ZFS file system that I want to mirror. I followed the procedure mentioned in the admin guide but I am getting this error. Can you tell me what I did wrong? root # zpool list NAME SIZE USED AVAIL CAP HEALTH ALTROOT export 254G 230K 254G 0% ONLINE - root # echo |format Searching for disks...done
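For reference, the usual way to turn a single-device pool like this into a mirror is zpool attach; a minimal sketch, with hypothetical slice names:

    # attach a second slice of the same (or larger) size to the existing device
    zpool attach export c0t0d0s4 c1t0d0s4
    # watch the resilver complete
    zpool status export

A common cause of errors at this step is the new slice overlapping another partition or already carrying a filesystem, which zpool refuses without -f.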
2006 Apr 06
15
A few Newbie questions about RAIDZ
1. I have a 4x18GB drive setup as RAIDZ. Now when thinking about it in terms of RAID5 I would expect to get (4-1)x18 worth of drive space, but df -h shows 4x18. Is this a bug or do I not understand? 2. Once again thinking in RAID5 terms, if I have 4x18GB and 12x9GB drives and I want to make a RAIDZ of all of them, I would expect the 18GB drives to be treated as 9GB, so the RAIDZ would be 16x9GB. Is
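A worked version of the arithmetic in question 1, assuming the commonly cited explanation that zpool list reports raw capacity including parity while zfs list and df report usable space:

    raw capacity:    4 x 18 GB       = 72 GB   <- what 'zpool list' shows
    usable capacity: (4 - 1) x 18 GB = 54 GB   <- roughly what 'zfs list' shows

For question 2, a raidz vdev does size every member at the smallest disk, so 4x18GB + 12x9GB would give 16 x 9 GB = 144 GB raw and about 15 x 9 GB = 135 GB usable.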
2008 Jan 31
16
Hardware RAID vs. ZFS RAID
Hello, I have a Dell 2950 with a Perc 5/i, two 300GB 15K SAS drives in a RAID0 array. I am considering going to ZFS and I would like to get some feedback about which situation would yield the highest performance: using the Perc 5/i to provide a hardware RAID0 that is presented as a single volume to OpenSolaris, or using the drives separately and creating the RAID0 with OpenSolaris and ZFS? Or
2011 Mar 04
13
cannot replace c10t0d0 with c10t0d0: device is too small
In 2007 I bought 6 WD1600JS 160GB sata disks and used 4 to create a raidz storage pool and then shelved the other two for spares. One of the disks failed last night so I shut down the server and replaced it with a spare. When I tried to zpool replace the disk I get: zpool replace tank c10t0d0 cannot replace c10t0d0 with c10t0d0: device is too small The 4 original disk partition tables look like
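Same-capacity drives often differ by a few sectors between firmware or model revisions, and zpool replace needs the new device to be at least as large as the old one. One way to compare before replacing, assuming the whole-disk s2 slice convention on these disks:

    # print geometry and partition map; compare total sector counts
    prtvtoc /dev/rdsk/c10t0d0s2
    # or check the reported size per device
    iostat -En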
2006 May 19
11
tracking error to file
In my testing, I've found the following error: zpool status -v pool: local state: ONLINE status: One or more devices has experienced an error resulting in data corruption. Applications may be affected. action: Restore the file in question if possible. Otherwise restore the entire pool from backup. see: http://www.sun.com/msg/ZFS-8000-8A scrub: none requested
2009 Dec 18
1
part of active zfs pool error message reports incorrect device
I am seeing this issue posted a lot in the forums: A zpool add/replace command is run, for example: zpool add archive spare c2t0d2 invalid vdev specification use '-f' to override the following errors: /dev/dsk/c2t1d7s0 is part of active ZFS pool archive. Please see zpool(1M). (-f just says: the following errors must be manually repaired:) Also, when running format and
2006 Jul 18
1
file access algorithm within pools
Hello, What is the access algorithm used within multi-component pools for a given pool, and does it change when one or more members of the pool become degraded ? examples: zpool create mtank mirror c1t0d0 c2t0d0 mirror c3t0d0 c4t0d0 mirror c5t0d0 c6t0d0 or; zpool create ztank raidz c1t0d0 c2t0d0 c3t0d0 raidz c4t0d0 c5t0d0 c6t0d0 As files are created on the filesystem within these pools,
2007 Jun 13
5
drive displayed multiple times
So I just imported an old zpool onto this new system. The problem would be one drive (c4d0) is showing up twice. First it's displayed as ONLINE, then it's displayed as "UNAVAIL". This is obviously causing a problem as the zpool now thinks it's in a degraded state, even though all drives are there, and all are online. This pool should have 7 drives total,
2007 May 24
1
how do I revert back from ZFS partitioned disk to original partitions
I accidentally created a zpool on a boot disk; it panicked the system and now I can jumpstart and install the OS on it. This is what it looks like. partition> p Current partition table (original): Total disk sectors available: 17786879 + 16384 (reserved sectors) Part Tag Flag First Sector Size Last Sector 0 usr wm 34 8.48GB
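One frequently suggested, destructive way to get rid of the stray ZFS labels so the disk can be relabeled: ZFS keeps two 256 KB labels at the front of the device and two at the end, so zeroing the first 512 KB removes the front pair, after which format can write a fresh partition table (device name hypothetical):

    # DESTRUCTIVE: wipes the front ZFS labels on the named device
    dd if=/dev/zero of=/dev/rdsk/c0t0d0s2 bs=256k count=2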
2012 Nov 11
0
Expanding a ZFS pool disk in Solaris 10 on VMWare (or other expandable storage technology)
Hello all, This is not so much a question but rather a "how-to" for posterity. Comments and possible fixes are welcome, though. I'm toying (for work) with a Solaris 10 VM, and it has a dedicated virtual HDD for data and zones. The template VM had a 20Gb disk, but a particular application needs more. I hoped ZFS autoexpand would do the trick transparently, but it turned out
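For the record, on releases that support it the transparent path being hinted at is the pool's autoexpand property (pool name hypothetical):

    # allow the pool to grow automatically when its underlying LUN grows
    zpool set autoexpand=on datapool
    zpool get autoexpand datapool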
2007 Oct 29
9
zpool question
hello folks, I am running Solaris 10 U3 and I have a small problem that I don't know how to fix... I had a pool of two drives: bash-3.00# zpool status pool: mypool state: ONLINE scrub: none requested config: NAME STATE READ WRITE CKSUM mypool ONLINE 0 0 0 emcpower0a ONLINE 0 0 0 emcpower1a ONLINE
2008 Apr 01
29
OpenSolaris ZFS NAS Setup
If it's of interest, I've written up some articles on my experiences of building a ZFS NAS box which you can read here: http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/ I used CIFS to share the filesystems, but it will be a simple matter to use NFS instead: issue the command 'zfs set sharenfs=on pool/filesystem' instead of 'zfs set
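The NFS switch really is that small; a minimal sketch with a hypothetical dataset name (sharesmb being the OpenSolaris CIFS analogue):

    zfs set sharenfs=on tank/media     # export over NFS
    zfs get sharenfs tank/media        # verify the property took
    # the CIFS analogue:
    zfs set sharesmb=on tank/media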
2008 Mar 13
12
7-disk raidz achieves 430 MB/s reads and 220 MB/s writes on a $1320 box
I figured the following ZFS 'success story' may interest some readers here. I was interested to see how much sequential read/write performance it would be possible to obtain from ZFS running on commodity hardware with modern features such as PCI-E buses, SATA disks, well-designed SATA controllers (AHCI, SiI3132/SiI3124). So I made this experiment of building a fileserver by
2007 Aug 30
15
ZFS, XFS, and EXT4 compared
I have a lot of people whispering "zfs" in my virtual ear these days, and at the same time I have an irrational attachment to xfs based entirely on its lack of the 32000 subdirectory limit. I'm not afraid of ext4's newness, since really a lot of that stuff has been in Lustre for years. So a-benchmarking I went. Results at the bottom:
2007 Sep 14
3
Convert Raid-Z to Mirror
Is there a way to convert a 2-disk raid-z file system to a mirror without backing up the data and restoring? We have this: bash-3.00# zpool status pool: archives state: ONLINE scrub: none requested config: NAME STATE READ WRITE CKSUM archives ONLINE 0 0 0 raidz1 ONLINE 0 0 0 c1t2d0 ONLINE 0 0 0
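ZFS of this vintage has no in-place restripe, so the commonly suggested route is send/receive into a freshly created mirror (pool and disk names hypothetical; zfs send -R needs a release with recursive stream support):

    # build the target mirror from two spare disks
    zpool create archives2 mirror c1t4d0 c1t5d0
    # snapshot everything and replicate it across
    zfs snapshot -r archives@migrate
    zfs send -R archives@migrate | zfs recv -F -d archives2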
2011 Apr 01
15
Zpool resize
Hi, A LUN is connected to Solaris 10u9 from a NetApp FAS2020a over iSCSI. I'm changing the LUN size on the NetApp, and Solaris format sees the new value, but zpool still shows the old value. I tried zpool export and zpool import but it didn't resolve my problem. bash-3.00# format Searching for disks...done AVAILABLE DISK SELECTIONS: 0. c0d1 <DEFAULT cyl 6523 alt 2 hd 255 sec 63>
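If the OS already sees the larger LUN, one thing to try on releases that support the expand flag (device name taken from the format listing above, pool name hypothetical) is asking ZFS to grow onto it explicitly:

    # request expansion of the vdev to the new LUN size
    zpool online -e mypool c0d1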