Displaying 20 results from an estimated 400 matches similar to: "no pool_props" for OpenSolaris 2009.06 with old SPARC hardware
2006 Apr 06
15
A few Newbie questions about RAIDZ
1. I have 4x18GB drives set up as RAIDZ. Now, thinking about it
in terms of RAID5, I would expect to get (4-1)x18GB worth of drive
space, but df -h shows 4x18GB. Is this a bug, or do I not understand?
2. Once again thinking in RAID5 terms: if I have 4x18GB and 12x9GB
drives and I want to make a RAIDZ of all of them, I would expect the
18GB drives to be treated as 9GB, so the RAIDZ would be 16x9GB. Is
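The usual explanation, shown here as a hedged sketch with hypothetical device names: zpool list reports raw capacity including parity, while zfs list and df report usable space after parity.
# A 4-disk RAID-Z pool (device names are hypothetical)
zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0
# Raw capacity, parity included: about 4x18GB
zpool list tank
# Usable capacity, parity excluded: about (4-1)x18GB
zfs list tank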
2009 May 20
2
zfs raidz questions
Hi there,
I'm building a small NAS with 5x1TB disks. The disks currently contain some data, are formatted NTFS, and aren't in a RAID.
Now I'm wondering if it's possible to add the parity later, so that I add one disk at a time to the pool, and when I add the last disk, I enable the parity.
(I have only one other 1TB disk to back up the files.)
Thank you for your replies and
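A point worth checking before starting, as a hedged sketch with hypothetical device names: a raidz vdev cannot have parity enabled after the fact, so the pool has to be created with all member disks at once.
# Parity is fixed when the vdev is created; all five disks go in together
zpool create tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0
# Adding a disk later creates a separate top-level vdev;
# it does not extend the existing raidz
# zpool add tank c2t5d0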
2008 Jun 17
6
mirroring zfs slice
Hi All,
I had a slice with a zfs file system which I want to mirror. I
followed the procedure mentioned in the admin guide, but I am getting this
error. Can you tell me what I did wrong?
root # zpool list
NAME     SIZE   USED   AVAIL   CAP   HEALTH   ALTROOT
export   254G   230K   254G    0%    ONLINE   -
root # echo |format
Searching for disks...done
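For reference, a minimal sketch of mirroring an existing single-device pool; the slice names below are assumptions, not taken from the post:
# Attach a second slice; zpool converts the single-device vdev into a mirror
zpool attach export c0t0d0s7 c0t1d0s7
# Watch the resilver progress
zpool status export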
2008 Apr 01
29
OpenSolaris ZFS NAS Setup
If it's of interest, I've written up some articles on my experiences of building a ZFS NAS box which you can read here:
http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/
I used CIFS to share the filesystems, but it will be a simple matter to use NFS instead: issue the command 'zfs set sharenfs=on pool/filesystem' instead of 'zfs set
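A minimal sketch of the NFS variant; the dataset name is hypothetical:
# Share a filesystem over NFS instead of CIFS
zfs set sharenfs=on tank/media
# Confirm the property took effect
zfs get sharenfs tank/media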
2009 Jul 01
14
can't boot 2009.06 domU on Xen 3.4.1 / CentOS 5.3 dom0
I've got a CentOS 5.3 dom0 with Xen 3.4.1-rc5 (or so). I've tried the same stuff below with 3.4.0, no difference. I'm trying to install a 2009.06 PV domU based on instructions from [1] and [2]. I can run the install fine, and I can also get the kernel and boot archive (from [2]) after the install. But for the life of me I can't get the installed domU to boot.
If I
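For orientation, a sketch of the kind of PV domU config such setups typically used; every path, the zfs-bootfs value, and the bridge name below are assumptions, not details from the post:
# osol.cfg - hypothetical PV config; the kernel and boot_archive are
# the ones copied out of the installed domU image
name    = "osol-200906"
memory  = 1024
kernel  = "/var/lib/xen/osol/unix"
ramdisk = "/var/lib/xen/osol/boot_archive"
extra   = "/platform/i86xpv/kernel/unix -B zfs-bootfs=rpool/ROOT/opensolaris"
disk    = ['file:/var/lib/xen/images/osol.img,xvda,w']
vif     = ['bridge=xenbr0']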
2006 May 19
11
tracking error to file
In my testing, I've found the following error:
zpool status -v
pool: local
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
entire pool from backup.
see: http://www.sun.com/msg/ZFS-8000-8A
scrub: none requested
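The usual way to get from this status to a file name, as a hedged sketch using the pool name from the post (the file path shown is hypothetical):
# Scrub so ZFS can enumerate the damaged files
zpool scrub local
# After the scrub, -v prints the affected paths, e.g.:
zpool status -v local
#   errors: Permanent errors have been detected in the following files:
#           /local/path/to/file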
2008 Feb 08
4
List of supported multipath drivers
Where can I find a list of supported multipath drivers for ZFS?
Keith McAndrew
Senior Systems Engineer
Northern California
SUN Microsystems - Data Management Group
Keith.McAndrew@SUN.com
916 715 8352 Cell
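ZFS itself sits on top of whatever multipathing the OS provides; a hedged sketch of enabling the native Solaris stack (MPxIO), which is the usual answer here:
# Enable MPxIO on supported controllers (requires a reboot)
stmsboot -e
# List the multipathed logical units afterwards
mpathadm list lu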
2008 Jan 31
16
Hardware RAID vs. ZFS RAID
Hello,
I have a Dell 2950 with a Perc 5/i, two 300GB 15K SAS drives in a RAID0 array. I am considering going to ZFS and I would like to get some feedback about which situation would yield the highest performance: using the Perc 5/i to provide a hardware RAID0 that is presented as a single volume to OpenSolaris, or using the drives separately and creating the RAID0 with OpenSolaris and ZFS? Or
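A hedged sketch of the ZFS-side option, with hypothetical device names: present the two disks individually and let ZFS stripe across them, so its checksums cover the whole I/O path.
# Dynamic stripe (RAID0 equivalent) across both disks
zpool create fast c1t0d0 c1t1d0
zpool status fast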
2005 Oct 31
3
1.5TB ext3 partitions - mke2fs problems at 2^31 blocks
I am trying to get a 9550SX to support a 1.5TB RAID partition. I am unsure
whether this is a driver problem or an ext3 problem (as I am getting some
other weirdness detecting LUNs), but...
fdisk recognizes the disk OK. I make a single extended partition with a
single 1.5TB logical partition inside it. I then run
mke2fs -j /dev/sdb
It gets to writing inode tables, and wants to write 11176 block
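If the 2^31 block count is the culprit, one hedged workaround is forcing a larger block size; note also that the post runs mke2fs against the whole disk rather than the logical partition (typically /dev/sdb5), which may be unintended. The device name below is an assumption:
# ~1.5TB / 4096-byte blocks = ~366M blocks, well below 2^31
mke2fs -j -b 4096 /dev/sdb5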
2007 Aug 30
15
ZFS, XFS, and EXT4 compared
I have a lot of people whispering "zfs" in my virtual ear these days,
and at the same time I have an irrational attachment to xfs based
entirely on its lack of the 32000 subdirectory limit. I'm not afraid of
ext4's newness, since really a lot of that stuff has been in Lustre for
years. So a-benchmarking I went. Results at the bottom:
2006 Jun 13
4
ZFS panic while mounting lofi device?
I believe ZFS is causing a panic whenever I attempt to mount an iso image (SXCR build 39) that happens to reside on a ZFS file system. The problem is 100% reproducible. I'm quite new to OpenSolaris, so I may be incorrect in saying it's ZFS's fault. Also, let me know if you need any additional information or debug output to help diagnose things.
Config:
bash-3.00#
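For context, a minimal sketch of the lofi sequence that typically triggers this path; the image path is hypothetical:
# Map the ISO onto a lofi block device (prints e.g. /dev/lofi/1)
lofiadm -a /tank/isos/sxcr_b39.iso
# Mount it read-only as HSFS
mount -F hsfs -o ro /dev/lofi/1 /mnt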
2007 Sep 25
23
device alias
Hi. I'd like to request a feature be added to zfs. Currently, on
SAN attached disk, zpool shows up with a big WWN for the disk. If
ZFS (or the zpool command, in particular) had a text field for
arbitrary information, it would be possible to add something that
would indicate what LUN on what array the disk in question might be.
This would make troubleshooting and general
2006 Nov 01
0
RAID-Z1 pool became faulted when a disk was removed.
So I have attached to my system two 7-disk SCSI arrays, each made up of
18.2GB disks.
Each of them is a RAID-Z1 zpool.
I had a disk I thought was a dud, so I pulled the fifth disk in my array and
put the dud in. Sure enough, Solaris started spitting errors like there was
no tomorrow in dmesg, and wouldn't use the disk. Ah well. Remove it, put the
original back in - hey, Solaris still thinks
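A hedged sketch of the usual recovery steps once the original disk is back in place; pool and device names are hypothetical:
# Tell ZFS the device is usable again
zpool online tank c3t4d0
# Or, if the label was damaged, resilver the disk in place
zpool replace tank c3t4d0
# Check that everything comes back clean
zpool status -x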
2009 Jan 13
12
OpenSolaris better than Solaris 10 u6 with regards to ARECA RAID card
Under Solaris 10 u6, no matter how I configured my ARECA 1261ML RAID card,
I got errors on all drives resulting from SCSI timeouts.
yoda:~ # tail -f /var/adm/messages
Jan 9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice]
Requested Block: 239683776 Error Block: 239683776
Jan 9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice] Vendor:
Seagate
2009 Jun 23
6
recursive snapshot
I thought I recalled reading somewhere that in the situation where you
have several zfs filesystems under one top level directory like this:
rpool
rpool/ROOT/osol-112
rpool/export
rpool/export/home
rpool/export/home/reader
you could do a snapshot encompassing everything below rpool instead of
having to do it at each level.
(Maybe it was in a dream...)
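It does exist; a minimal sketch, with a hypothetical snapshot name:
# -r snapshots the dataset and every descendant in one atomic operation
zfs snapshot -r rpool@backup-20090623
# One snapshot per filesystem should appear
zfs list -t snapshot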
2009 Dec 11
7
Doing a ZFS rollback while preserving later-created clones/snapshots?
Hi.
Is it possible on Solaris 10 5/09, to rollback to a ZFS snapshot,
WITHOUT destroying later created clones or snapshots?
Example:
--($ ~)-- sudo zfs snapshot rpool/ROOT@01
--($ ~)-- sudo zfs snapshot rpool/ROOT@02
--($ ~)-- sudo zfs clone rpool/ROOT@02 rpool/ROOT-02
--($ ~)-- LC_ALL=C sudo zfs rollback rpool/ROOT@01
cannot rollback to 'rpool/ROOT@01': more
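A hedged sketch of the common workaround: rather than rolling back (which requires destroying everything created after @01), clone the old snapshot and work from the clone, leaving @02 and ROOT-02 intact.
# New filesystem at the @01 state; @02 and the ROOT-02 clone survive
zfs clone rpool/ROOT@01 rpool/ROOT-01
# Optionally make the clone independent of its origin
zfs promote rpool/ROOT-01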
2009 Mar 03
8
zfs list extentions related to pNFS
Hi,
I am soliciting input from the ZFS engineers and/or ZFS users on an
extension to "zfs list". Thanks in advance for your feedback.
Quick Background:
The pNFS project (http://opensolaris.org/os/project/nfsv41/) is adding
a new DMU object set type which is used on the pNFS data server to
store pNFS stripe DMU objects. A pNFS dataset gets created with the
"zfs
2009 Oct 15
8
sub-optimal ZFS performance
Hello,
ZFS is behaving strangely on an OSOL laptop; your thoughts are welcome.
I am running OSOL on my laptop, currently b124, and I found that the
performance of ZFS is not optimal in all situations. If I check
how much space the package cache for pkg(1) uses, it takes a bit
longer on this host than on a comparable machine to which I transferred
all the data.
user@host:/var/pkg$ time
2009 Nov 03
3
virsh troubling zfs!?
Hi and hello,
I have a problem confusing me. I hope someone can help me with it.
I followed a "best practice" - I think - using dedicated zfs filesystems for my virtual machines.
Commands (for completeness):
zfs create rpool/vms
zfs create rpool/vms/vm1
zfs create -V 10G rpool/vms/vm1/vm1-dsk
This command creates the file system /rpool/vms/vm1/vm1-dsk and the
2011 Apr 08
11
How to rename rpool. Is that recommended ?
Hello,
I have a situation where a host, which is booted off its ''rpool'', need
to temporarily import the ''rpool'' of another host, edit some files in
it, and export the pool back retaining its original name ''rpool''. Can
this be done ?
Here is what I am trying to do:
# zpool import -R /a rpool temp-rpool
# zfs set mountpoint=/mnt
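A hedged sketch of the full round trip; the rename back to 'rpool' has to happen from an environment where no active pool already holds that name (e.g. the pool's own host, or failsafe media):
# On the working host: import the foreign pool under a temporary name
zpool import -R /a rpool temp-rpool
# ... edit files under /a ...
zpool export temp-rpool
# From an environment with no imported 'rpool':
zpool import temp-rpool rpool
zpool export rpool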