Displaying 20 results from an estimated 8000 matches similar to: "upgrading to the latest zfs version"
2006 Jun 19
0
snv_42 zfs/zpool dump core and kernel/fs/zfs won't load.
I'm pretty sure this is my fault, but I need some help fixing the system.
It was installed at one point with snv_29 with the pre-integration
SUNWzfs package. I did a live upgrade to snv_42 but forgot to remove
the old SUNWzfs before I did so. When the system booted up I got
complaints about kstat install because I still had an old zpool kernel
module lying around.
So I did pkgrm
2007 Oct 30
1
Different Sized Disks Recommendation
Hi,
I was first attracted to ZFS (and therefore OpenSolaris) because I thought that ZFS allowed the use of different-sized disks in raidz pools without wasted disk space. Further research has confirmed that this isn't possible, at least by default.
I have seen a little bit of documentation around using ZFS with slices. I think this might be the answer, but I would like to be sure what the
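For reference, a minimal sketch of the slice approach (the disk names and slice layout below are hypothetical): after creating an equally sized s0 slice on each disk with format(1M), the raidz is built from the slices, and the leftover space on the larger disks stays available for other pools.
# zpool create tank raidz c0t1d0s0 c0t2d0s0 c0t3d0s0   # equal-sized slices, one per disk
# zpool list tank                                      # confirm the resulting capacity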
2010 Sep 07
3
zpool create using whole disk - do I add "p0"? E.g. c4t2d0 or c4t2d0p0
I have seen conflicting examples on how to create zpools using full disks. The zpool(1M) page uses "c0t0d0" but OpenSolaris Bible and others show "c0t0d0p0". E.g.:
zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
zpool create tank raidz c0t0d0p0 c0t1d0p0 c0t2d0p0 c0t3d0p0 c0t4d0p0 c0t5d0p0
I have not been able to find any discussion on whether (or when) to
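As a rough sketch (disk names hypothetical): when zpool is given the bare cXtYdZ name, ZFS labels the whole disk itself (EFI label) and can manage the disk write cache, whereas p0 addresses the whole disk through the x86 fdisk device node.
# zpool create tank raidz c0t0d0 c0t1d0 c0t2d0   # whole-disk names, no p0/s0 suffix
# zpool status tank                              # vdevs appear as the bare disk names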
2008 Oct 22
12
Hotplug issues on USB removable media.
Hi,
As a part of the next stages of the time-slider project we are looking into doing actual backups onto
removable media devices such as USB media. The goal is to be able to view snapshots stored on the
media and merge these into the list of viewable snapshots in Nautilus, giving the user a broader
selection of restore points. In an ideal world we would like to detect the insertion of the
2012 Jun 13
0
ZFS NFS service hanging on Sunday morning problem
>
> Shot in the dark here:
>
> What are you using for the sharenfs value on the ZFS filesystem? Something like rw=.mydomain.lan ?
They are IP blocks or hosts specified as FQDNs, e.g.,
pptank/home/tcrane sharenfs rw=@192.168.101/24,rw=serverX.xx.rhul.ac.uk:serverY.xx.rhul.ac.uk
>
> I've had issues where a ZFS server loses connectivity to the primary DNS server and
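As a point of reference, a sharenfs access list mixing a network and FQDN hosts is normally set with zfs set; the dataset and hosts below are taken from the snippet above, and the colon-separated form is a sketch of the usual share_nfs syntax rather than a verified copy of the original value.
# zfs set sharenfs='rw=@192.168.101/24:serverX.xx.rhul.ac.uk:serverY.xx.rhul.ac.uk' pptank/home/tcrane
# zfs get sharenfs pptank/home/tcrane   # verify the stored value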
2008 Jul 28
1
zpool status my_pool shows a pulled disk c1t6d0 as ONLINE ???
New server build with Solaris 10 u5/08,
on a SunFire T5220; this is our first rollout of ZFS and zpools.
We have 8 disks; the boot disk is hardware mirrored (c1t0d0 + c1t1d0).
Created zpool my_pool as raidz using 5 disks + 1 spare:
c1t2d0, c1t3d0, c1t4d0, c1t5d0, c1t6d0, and spare c1t7d0.
I am working on alerting & recovery plans for disk failures in the zpool.
As a test, I have pulled disk
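A minimal sketch of the commands commonly used to check whether the fault has actually been noticed (the pool name is from the snippet; nothing here is specific to the T5220):
# zpool status -xv my_pool   # only reports pools that are degraded or faulted
# fmdump -eV | tail          # recent FMA error telemetry for the pulled disk
# fmadm faulty               # faults FMA has diagnosed so far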
2008 Jan 31
3
I/O error: zpool metadata corrupted after power cut
In the last 2 weeks we have had 2 zpools corrupted.
The pool was visible via zpool import, but could not be imported anymore. During the import attempt we got an I/O error.
After the first power cut we lost our jumpstart/nfsroot zpool (another pool was still OK). Luckily the jumpstart data was backed up and easily restored; the nfsroot filesystems were not, but those were just test machines. We thought the metadata corruption
2009 Oct 01
4
RAIDZ v. RAIDZ1
So, I took four 1.5TB drives and made RAIDZ, RAIDZ1 and RAIDZ2 pools. The sizes for the pools were 5.3TB, 4.0TB, and 2.67TB respectively. The man page for RAIDZ states that "The raidz vdev type is an alias for raidz1." So why was there a difference between the sizes for RAIDZ and RAIDZ1? Shouldn't the size be the same for "zpool create raidz ..." and "zpool
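One plausible explanation, offered as a sketch rather than a diagnosis of this particular report: zpool list reports raw capacity including the raidz parity space, while zfs list reports usable space, so the same 4 x 1.5 TB raidz1 pool shows roughly 5.4T in one view and roughly 4.0T in the other (disk names hypothetical).
# zpool create tank raidz1 c0t0d0 c0t1d0 c0t2d0 c0t3d0
# zpool list tank   # SIZE ~5.4T: 4 x ~1.36 TiB, parity space included
# zfs list tank     # AVAIL ~4.0T: 3 x ~1.36 TiB usable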
2009 Aug 21
0
bug: zpool create allows a raw full-partition device as a pool member
If you run Solaris or OpenSolaris, you might, for example, use c0t0d0 (for a SCSI disk) or c0d0 (for an IDE/SATA disk) as the system disk.
By default, Solaris x86 and OpenSolaris use the raw device
c0t0d0s0 (/dev/rdsk/c0t0d0s0) as the member device of rpool.
In fact, there can be more than one Solaris2 fdisk partition on each hard disk, so we can also use a raw device such as c0t0d0p1 (/dev/rdsk/c0t0d0p1)
2009 Jun 15
0
100% kernel usage
Some more insight:
I have the following zpools set up:
aaa_zvol: 2 x 250 GB IDE disks in a RAID-0 stripe
storage raidZ1:
1 500 GB IDE
1 500 GB SATA
/..../aaa_zvol/aaa_zvol (the zvol exported from the aaa_zvol pool)
When I run the array in degraded mode, i.e. place one of the drives in the offline state, the kernel doesn't seem to spike. When I put the offline drive back online and resilver, the 100%
2008 Nov 03
0
Zpool with raidz+mirror = wrong size displayed?
Hi,
I installed a zpool consisting of:
zpool
  mirror
    disk1 500 GB
    disk2 500 GB
  raidz
    disk3 1 TB
    disk4 1 TB
    disk5 1 TB
It works fine, but it displays the wrong size (terminal -> zpool list). It should be 500 GB (mirrored) + 2 TB (usable from the 3 TB raidz) = 2.5 TB, right? But it displays 3.17 TB of disk space available.
I first created the mirror and then added the raidz to it (zpool add
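The likely arithmetic, hedged since only this snippet is visible (the pool name below is a placeholder): zpool list counts the raw raidz capacity including parity, so the pool reports roughly 0.46 TiB (mirror) + 2.73 TiB (3 x 1 TB raw) = ~3.17 TiB, while zfs list should show the expected usable space.
# zpool list mypool   # ~3.17T: 500 GB mirror + 3 x 1 TB raidz raw, parity included
# zfs list mypool     # ~2.3T usable: ~465 GiB mirrored + ~1.8 TiB from the raidz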
2009 Apr 08
0
zpool history coredump
Pawel,
another (though minor, I suppose) bug report: while playing with my poor
pool, I tried to interact with it on -current, thus importing it with -f
(without upgrading, of course).
After reverting to RELENG_7, I found I can no longer access the history:
root@moose:~# /usr/obj/usr/src/cddl/sbin/zpool/zpool history
History for 'm':
2008-10-14.23:04:28 zpool create m raidz ad4h ad6h
2008 Sep 16
1
Interesting Pool Import Failure
Hello... Since there has been much discussion about zpool import failures
resulting in loss of an entire pool, I thought I would illustrate a scenario
I just went through to recover a faulted pool that wouldn't import under
Solaris 10 U5. While this is a simple scenario, and the data was not
terribly important, I think the exercise should at least give some peace of
mind to those who
2010 Apr 27
2
ZFS version information changes (heads up)
Hi everyone,
Please review the information below regarding access to ZFS version
information.
Let me know if you have questions.
Thanks,
Cindy
CR 6898657:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6898657
ZFS commands zpool upgrade -v and zfs upgrade -v refer to URLs that
are no longer redirected to the correct location after April 30, 2010.
Description
The
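For context, the commands in question print the supported version list (the output on affected builds includes the URLs the CR refers to), and pools can be brought to the latest supported version afterwards. A minimal sketch:
# zpool upgrade -v   # list pool versions supported by this software
# zfs upgrade -v     # list filesystem versions
# zpool upgrade -a   # upgrade all pools to the latest supported version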
2006 Jun 26
2
raidz2 is alive!
Already making use of it, thank you!
http://www.justinconover.com/blog/?p=17
I took 6 x 250 GB disks and tried raidz2/raidz/none
# zpool create zfs raidz2 c0d0 c1d0 c2d0 c3d0 c7d0 c8d0
df -h zfs
Filesystem             size   used  avail capacity  Mounted on
zfs                    915G    49K   915G     1%    /zfs
# zpool destroy -f zfs
Plain old raidz (raid-5ish)
# zpool create zfs raidz c0d0
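As a rough sanity check on the raidz2 capacity shown above (not part of the original post): six 250 GB disks in raidz2 leave four data disks, about 4 x 232 GiB = ~931 GiB raw, which lines up with the ~915G df reports once ZFS overhead is subtracted.
# zpool list zfs   # raw: 6 x ~232 GiB, about 1.36T, both parity disks included
# zfs list zfs     # usable: 4 x ~232 GiB, about 915G, matching the df output above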
2007 Dec 13
0
zpool version 3 & Uberblock version 9 , zpool upgrade only half succeeded?
We are currently experiencing a huge performance drop on our ZFS storage server.
We have 2 pools: pool 1, stor, is a raidz built out of 7 iSCSI nodes; home is a local mirror pool. Recently we had some issues with one of the storage nodes, and because of that the pool was degraded. Since we did not succeed in bringing this storage node back online (at the ZFS level), we upgraded our NAS head from OpenSolaris b57
2006 Jun 21
2
ZFS and Virtualization
Hi experts,
I have a few questions about ZFS and virtualization:
Virtualization and performance
When filesystem traffic occurs on a zpool containing only spindles dedicated to this zpool, I/O can be distributed evenly. When the zpool is located on a LUN sliced from a RAID group shared by multiple systems, the capability of doing I/O from this zpool will be limited. Avoiding or limiting I/O to
2009 Oct 29
2
Difficulty testing an SSD as a ZIL
Hi all,
I received my SSD and wanted to test it out using fake zpools with files as backing stores before attaching it to my production pool. However, when I exported the test pool and imported it, I got an error. Here is what I did:
I created a file to use as a backing store for my new pool:
mkfile 1g /data01/test2/1gtest
Created a new pool:
zpool create ziltest2 /data01/test2/1gtest
Added the
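A hypothetical sketch of the remaining steps (the log-file name below is made up and this is not necessarily what the original poster typed): add a file-backed log vdev to the test pool, then export and import it to exercise the reported error path.
# mkfile 1g /data01/test2/1glog               # stand-in file for the SSD
# zpool add ziltest2 log /data01/test2/1glog  # attach it as a separate log (ZIL) device
# zpool export ziltest2
# zpool import -d /data01/test2 ziltest2      # -d tells import where to search for file-backed vdevs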
2007 Feb 13
1
Zpool complain about missing devices
Hello,
We had a situation at a customer site where one of the zpools complains about missing devices. We do not know which devices are missing. Here are the details:
The customer had a zpool created on a hardware RAID (SAN). There is no redundancy in the pool. The pool had 13 LUNs; the customer wanted to increase its size and added 5 more LUNs. During the zpool add process the system panicked with zfs
2006 Jun 12
3
ZFS + Raid-Z pool size incorrect?
I'm seeing odd behaviour when I create a ZFS raidz pool using three disks. The output of "zpool status" shows the pool size as the size of the three disks combined (as if it were a RAID-0 volume). This isn't expected behaviour, is it? When I create a mirrored volume in ZFS everything is as one would expect: the pool is the size of a single drive.
My setup:
Compaq