Displaying 20 results from an estimated 11000 matches similar to: "OpenSolaris ZFS NAS Setup"
2006 Apr 06
15
A few Newbie questions about RAIDZ
1. I have a 4x18GB drive setup as RAIDZ. Now when thinking about it
in terms of RAID5 I would expect to get (4-1)x18 worth of drive
space, but df -h shows 4x18. Is this a bug or do I not understand?
2. Once again thinking in RAID5 terms, if I have 4x18GB and 12x9GB
drives and I want to make a RAIDZ of all of them, I would expect the
18GB drives to be treated as 9GB so the RAIDZ would be 16x9GB. Is
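The place the parity overhead usually shows up is zpool list versus zfs list: zpool list reports raw capacity with parity still counted, while zfs list and df are expected to report usable space after parity. A minimal sketch, with a hypothetical pool name and device names:
zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0   # 4-disk single-parity raidz
zpool list tank    # raw size, roughly 4 x 18GB (parity included)
zfs list tank      # usable space, roughly 3 x 18GB (parity removed)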
2011 Apr 01
15
Zpool resize
Hi,
A LUN is connected to Solaris 10u9 from a NetApp FAS2020a over iSCSI. I'm
changing the LUN size on the NetApp, and Solaris format sees the new value, but zpool
still shows the old value.
I tried zpool export and zpool import, but it didn't resolve my problem.
bash-3.00# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0d1 <DEFAULT cyl 6523 alt 2 hd 255 sec 63>
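On releases new enough to have it, a grown LUN is usually picked up either through the autoexpand pool property or by expanding the device explicitly; a sketch assuming the pool is named tank and the LUN is c0d1 (the pool name is hypothetical):
zpool set autoexpand=on tank   # grow the pool automatically when the underlying LUN grows
zpool online -e tank c0d1      # or expand just this device in place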
2009 Jul 13
7
OpenSolaris 2008.11 - resilver still restarting
Just look at this. I thought all the restarting resilver bugs were fixed, but it looks like something odd is still happening at the start:
Status immediately after starting resilver:
# zpool status
pool: rc-pool
state: DEGRADED
status: One or more devices has experienced an unrecoverable error. An
attempt was made to correct the error. Applications are unaffected.
action: Determine
2006 Oct 24
3
determining raidz pool configuration
Hi all,
Sorry for the newbie question, but I've looked at the docs and haven't
been able to find an answer for this.
I'm working with a system where the pool has already been configured and
want to determine what the configuration is. I had thought that'd be
with zpool status -v <poolname>, but it doesn't seem to agree with the
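For the vdev layout itself, plain zpool status (without -v) is usually what answers this, since it prints the tree of raidz/mirror groups and the disks inside each; a hedged sketch with a hypothetical pool name:
zpool status tank     # vdev tree: raidz/mirror groups and their member disks
zpool list tank       # total size and usage
zpool history tank    # on builds that have it, replays the original zpool create command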
2008 Apr 02
1
delete old zpool config?
Hi experts
zpool import shows some weird config of an old zpool
bash-3.00# zpool import
pool: data1
id: 7539031628606861598
state: FAULTED
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
devices and try again.
see: http://www.sun.com/msg/ZFS-8000-3C
config:
data1 UNAVAIL insufficient replicas
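When a destroyed or abandoned pool keeps showing up like this, stale ZFS labels on the old disks are the usual cause. A hedged sketch of the common cleanups, with an illustrative device name; note that ZFS keeps labels at both the front and the back of each device, and the dd approach is destructive:
zpool import -f data1 && zpool destroy data1             # cleanest, if the pool can still be imported
zpool labelclear -f /dev/dsk/c1t0d0s0                    # on builds that have labelclear
dd if=/dev/zero of=/dev/rdsk/c1t0d0s0 bs=1024k count=4   # otherwise wipe the front labels by hand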
2006 Mar 30
39
Proposal: ZFS Hot Spare support
As mentioned last night, we've been reviewing a proposal for hot spare
support in ZFS. Below you can find a current draft of the proposed
interfaces. This has not yet been submitted for ARC review, but
comments are welcome. Note that this does not include any enhanced FMA
diagnosis to determine when a device is "faulted". This will come in a
follow-on project, of which some
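For reference, the syntax that eventually shipped for hot spares looks roughly like this; the pool and disk names are illustrative only:
zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 spare c0t3d0   # create a pool with one hot spare
zpool add tank spare c0t4d0                                  # add another spare to an existing pool
zpool replace tank c0t1d0 c0t3d0                             # manually swap a failing disk for a spare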
2008 Jun 17
6
mirroring zfs slice
Hi All,
I had a slice with a zfs file system which I want to mirror. I
followed the procedure mentioned in the admin guide, but I am getting this
error. Can you tell me what I did wrong?
root # zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
export 254G 230K 254G 0% ONLINE -
root # echo |format
Searching for disks...done
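The usual way to mirror an existing single-device pool is zpool attach (zpool add would stripe instead, which is a frequent mistake here); a sketch assuming the pool export currently sits on c0t0d0s7 and the new slice is c1t0d0s7, both hypothetical. The new slice has to be at least as large as the existing one, or the attach is refused.
zpool attach export c0t0d0s7 c1t0d0s7   # existing device first, new device second; forms a mirror
zpool status export                      # watch the resilver; both slices should sit under a mirror vdev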
2009 Jun 03
7
"no pool_props" for OpenSolaris 2009.06 with old SPARC hardware
Hi,
yesterday evening I tried to upgrade my Ultra 60 to 2009.06 from SXCE snv_98.
I can't use the AI Installer because OpenPROM is version 3.27.
So I built IPS from source, then created a zpool on a spare drive and installed OS 2009.06 on it.
To make the disk bootable I used:
installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0
using the executable from my new
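Two checks that are usually part of getting a SPARC ZFS root to boot, sketched with a hypothetical boot environment name, in case they help narrow down the "no pool_props" message:
zpool set bootfs=rpool/ROOT/opensolaris rpool   # tell the boot block which dataset to boot
zpool upgrade -v                                 # list the pool versions the installed bits understand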
2009 Dec 03
5
L2ARC in clusters
Hi,
When deploying ZFS in a cluster environment it would be nice to be able
to have some SSDs as local drives (not on SAN), and when the pool switches
over to the other node zfs would pick up the node's local disk drives as
L2ARC.
To better clarify what I mean, let's assume there is a 2-node cluster with
1x 2540 disk array.
Now let's put 4x SSDs in each node (as internal/local drives). Now
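Cache devices are added and removed per pool with zpool add/remove, which is what makes node-local L2ARC across a failover awkward today; a sketch with hypothetical pool and SSD names:
zpool add tank cache c2t0d0    # attach a local SSD as L2ARC
zpool remove tank c2t0d0       # cache devices can be removed again, e.g. before a switchover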
2008 Jan 31
7
mounting a copy of a zfs pool /file system while original is still active
Hello Sun gurus, I do not know if this is supported. I have created a zpool consisting of the SAN resources and created a zfs file system. Using third-party software I have taken snapshots of all LUNs in the zfs pool. My question is: in a recovery situation, is there a way for me to mount the snapshots and import the pool while the original is still active? Right now all I am able to do is export
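A hedged sketch of how the copy is usually brought up on a second host, or once the snapshot LUNs are mapped somewhere the live pool is not imported; the pool name and altroot here are illustrative:
zpool import -d /dev/dsk                      # list pools visible on the cloned LUNs
zpool import -f -R /mnt/copy data data_copy   # import under a new name, with an altroot so mountpoints don't clash
zfs list -r data_copy                         # confirm the datasets came up under the altroot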
2006 May 19
11
tracking error to file
In my testing, I've found the following error:
zpool status -v
pool: local
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
entire pool from backup.
see: http://www.sun.com/msg/ZFS-8000-8A
scrub: none requested
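On builds that list affected files, the usual sequence is to scrub and then read the errors section of zpool status -v; file paths are shown where ZFS can map the damaged blocks back to a file:
zpool scrub local        # walk all data so errors get attributed
zpool status -v local    # the errors: section lists permanent errors, by path where possible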
2008 Jan 31
16
Hardware RAID vs. ZFS RAID
Hello,
I have a Dell 2950 with a Perc 5/i, two 300GB 15K SAS drives in a RAID0 array. I am considering going to ZFS and I would like to get some feedback about which situation would yield the highest performance: using the Perc 5/i to provide a hardware RAID0 that is presented as a single volume to OpenSolaris, or using the drives separately and creating the RAID0 with OpenSolaris and ZFS? Or
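The ZFS side of that comparison is simply a pool striped across both disks as OpenSolaris sees them; a sketch with hypothetical device names as the Perc might present them:
zpool create tank c1t0d0 c1t1d0   # dynamic stripe across both disks, no redundancy
zpool iostat -v tank 5            # per-disk throughput while running the benchmark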
2007 Sep 08
1
zpool degraded status after resilver completed
I am curious why zpool status reports a pool to be in the DEGRADED state
after a drive in a raidz2 vdev has been successfully replaced. In this
particular case drive c0t6d0 was failing so I ran,
zpool offline home c0t6d0
zpool replace home c0t6d0 c8t1d0
and after the resilvering finished the pool reports a degraded state.
Hopefully this is incorrect. At this point the vdev in question
now has
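When the old device is left hanging in the config after the resilver, it can usually be detached by hand and the error counters cleared; a sketch reusing the device names from the post:
zpool detach home c0t6d0   # drop the old half of the replacing vdev if it was not removed automatically
zpool clear home           # reset the error counters
zpool status -x home       # should now report the pool as healthy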
2007 Aug 30
15
ZFS, XFS, and EXT4 compared
I have a lot of people whispering "zfs" in my virtual ear these days,
and at the same time I have an irrational attachment to xfs based
entirely on its lack of the 32000 subdirectory limit. I'm not afraid of
ext4's newness, since really a lot of that stuff has been in Lustre for
years. So a-benchmarking I went. Results at the bottom:
2008 Jul 31
17
Can I trust ZFS?
Hey folks,
I guess this is an odd question to be asking here, but I could do with some feedback from anybody who's actually using ZFS in anger.
I'm about to go live with ZFS in our company on a new fileserver, but I have some real concerns about whether I can really trust ZFS to keep my data alive if things go wrong. This is a big step for us; we're a 100% Windows
2008 Jul 02
14
is it possible to add a mirror device later?
Ciao,
the root filesystem of my thumper is a ZFS with a single disk:
bash-3.2# zpool status rpool
pool: rpool
state: ONLINE
scrub: none requested
config:
        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          c5t0d0s0    ONLINE       0     0     0
        spares
          c0t7d0      AVAIL
          c1t6d0      AVAIL
          c1t7d0
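A single-disk root pool can be turned into a mirror after the fact with zpool attach; on an x86 box like a thumper the second disk also needs a GRUB boot block once the resilver finishes. A sketch assuming the second disk is c5t1d0s0 (hypothetical):
zpool attach rpool c5t0d0s0 c5t1d0s0   # existing device first, new device second
zpool status rpool                      # wait for the resilver to complete
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c5t1d0s0   # make the new half bootable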
2007 Feb 27
16
understanding zfs/thumper "bottlenecks"?
Currently I'm trying to figure out the best zfs layout for a thumper wrt. read AND write performance.
I did some simple mkfile 512G tests and found that on average ~500 MB/s seems to be the maximum one can reach (tried the initial default setup, all 46 HDDs as R0, etc.).
According to http://www.amd.com/us-en/assets/content_type/DownloadableAssets/ArchitectureWP_062806.pdf I would
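Watching per-vdev bandwidth while the mkfile runs usually shows whether the ~500 MB/s ceiling is the disks, one controller, or something above them; a sketch with a hypothetical pool name:
zpool iostat -v tank 5   # per-vdev and per-disk bandwidth every 5 seconds
iostat -xnz 5            # device-level throughput and service times alongside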
2009 Jan 27
5
Replacing HDD in x4500
The vendor wanted to come in and replace an HDD in the 2nd X4500, as it
was "constantly busy", and since our x4500 has always died miserably in
the past when a HDD dies, they wanted to replace it before the HDD
actually died.
The usual was done, HDD replaced, resilvering started and ran for about
50 minutes. Then the system hung, same as always, all ZFS related
commands would just
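For comparison, the sequence usually recommended on an x4500 releases the drive from both ZFS and the SATA framework before it is pulled; the pool, disk, and attachment-point names below are illustrative only:
zpool offline tank c4t3d0       # take the failing disk out of service
cfgadm -c unconfigure sata1/3   # release it from the SATA framework before pulling it
# physically swap the drive, then:
cfgadm -c configure sata1/3
zpool replace tank c4t3d0       # resilver onto the new drive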
2006 Jun 13
4
ZFS panic while mounting lofi device?
I believe ZFS is causing a panic whenever I attempt to mount an iso image (SXCR build 39) that happens to reside on a ZFS file system. The problem is 100% reproducible. I'm quite new to OpenSolaris, so I may be incorrect in saying it's ZFS' fault. Also, let me know if you need any additional information or debug output to help diagnose things.
Config:
bash-3.00#
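For reference, the usual lofi sequence for an ISO stored on ZFS looks like this, with a hypothetical image path; the mount step is where the reported panic occurs:
lofiadm -a /tank/images/sxcr_b39.iso    # returns a device such as /dev/lofi/1
mount -F hsfs -o ro /dev/lofi/1 /mnt    # mount the ISO read-only
umount /mnt && lofiadm -d /dev/lofi/1   # tear it down again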
2006 Nov 01
0
RAID-Z1 pool became faulted when a disk was removed.
So I have attached to my system two 7-disk SCSI arrays, each of 18.2 GB
disks.
Each of them is a RAID-Z1 zpool.
I had a disk I thought was a dud, so I pulled the fifth disk in my array and
put the dud in. Sure enough, Solaris started spitting errors like there was
no tomorrow in dmesg, and wouldn't use the disk. Ah well. Remove it, put the
original back in - hey, Solaris still thinks
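Once the original disk is back in its slot, the pool can usually be coaxed into using it again; a hedged sketch with an illustrative pool and device name:
zpool online tank c2t4d0    # tell ZFS the device is back
zpool clear tank            # clear the accumulated errors
zpool status -v tank        # the raidz1 should resilver and return to ONLINE
zpool replace tank c2t4d0   # if ZFS still refuses the disk, replace it with itself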