Displaying 20 results from an estimated 2000 matches similar to: "ZFS ontop of SVM - CKSUM errors"
2006 Mar 23
17
Poor performance on NFS-exported ZFS volumes
I'm seeing some pretty pitiful performance using ZFS on an NFS server, with a ZFS volume exported (only with rw=host.foo.com,root=host.foo.com opts) and mounted on a Linux host running kernel 2.4.31. The Linux kernel I'm working with is limited in that I can only do NFSv2 mounts... regardless of that aspect, I'm sure something's amiss.
I mounted the zfs-based
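For reference, a minimal sketch of the setup being described, assuming a pool named tank and a filesystem tank/export (both hypothetical names); the share options match those quoted above, and the Linux side is pinned to NFSv2 as the poster describes:

# on the Solaris/ZFS server: export the filesystem over NFS
zfs set sharenfs='rw=host.foo.com,root=host.foo.com' tank/export

# on the Linux 2.4 client: mount with NFSv2, which is all this kernel supports
mount -t nfs -o nfsvers=2 server:/tank/export /mnt/export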
2009 Sep 26
5
raidz failure, trying to recover
Long story short, my cat jumped on my server at my house, crashing two drives at the same time. It was a 7-drive raidz (next time I'll do raidz2).
The server crashed complaining about a drive failure, so I rebooted into single-user mode, not realizing that two drives had failed. I put in a new 500G replacement and had ZFS start a replace operation, which failed at about 2% because there were two broken
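For context, a replace on a raidz pool is normally kicked off roughly like this (pool and device names here are hypothetical); with two members of a single-parity raidz gone at once, though, the resilver has no redundancy left to read from, which is consistent with the failure at 2%:

# check which device is faulted
zpool status -x tank

# replace the dead disk with the new 500G drive
zpool replace tank c0t3d0 c0t7d0

# watch the resilver progress
zpool status tank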
2008 Apr 02
1
delete old zpool config?
Hi experts
zpool import shows some weird config of an old zpool
bash-3.00# zpool import
pool: data1
id: 7539031628606861598
state: FAULTED
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
devices and try again.
see: http://www.sun.com/msg/ZFS-8000-3C
config:
data1 UNAVAIL insufficient replicas
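If the stale pool's devices are still present, one way to get rid of the leftover configuration is to wipe the ZFS labels on those devices; newer builds have 'zpool labelclear' for this, while older ones usually fall back to overwriting the label areas. A sketch, with a hypothetical device name, and note that both forms destroy whatever is on that slice:

# newer releases: clear the ZFS label directly
zpool labelclear -f /dev/dsk/c1t2d0s0

# older releases: overwrite the front of the device, where two of the
# four labels live (the other two are at the end of the device)
dd if=/dev/zero of=/dev/rdsk/c1t2d0s0 bs=1024k count=1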
2006 Oct 18
5
ZFS and IBM sdd (vpath)
Hello, I am trying to configure ZFS with IBM sdd. IBM sdd is like powerpath, MPXIO or VxDMP.
Here is the error message when I try to create my pool:
bash-3.00# zpool create tank /dev/dsk/vpath1a
warning: device in use checking failed: No such device
internal error: unexpected error 22 at line 446 of ../common/libzfs_pool.c
bash-3.00# zpool create tank /dev/dsk/vpath1c
cannot open
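Error 22 is EINVAL coming back through libzfs; before digging further it may help to confirm that the vpath pseudo-device actually presents usable device nodes and a readable label. A small sanity-check sketch using the device names quoted above:

# check that both the block and raw device nodes exist
ls -lL /dev/dsk/vpath1* /dev/rdsk/vpath1*

# see whether the device has a readable VTOC/EFI label at all
prtvtoc /dev/rdsk/vpath1c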
2008 Dec 23
1
device error without a check sum error ?
I have a system running in a VM with a root pool. The root pool
occasionally shows a fairly stern warning. This warning comes with no
checksum errors.
bash-3.00# zpool status -vx
pool: rpool
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore
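The usual follow-up for that status message is to list the affected files, then clear the error counters and scrub so the pool can re-verify everything; a sketch against the rpool shown above:

# list the files (if any) that ZFS thinks are damaged
zpool status -v rpool

# clear the logged errors, then re-check every block
zpool clear rpool
zpool scrub rpool
zpool status -v rpool    # re-inspect once the scrub finishes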
2011 Nov 25
1
Recovering from kernel panic / reboot cycle importing pool.
Yesterday morning I awoke to alerts from my SAN that one of my OS disks was faulty; FMA reported it as a hardware failure. By the time I got to work (1.5 hours after the email) ALL of my pools were in a degraded state, and "tank", my primary pool, had kicked in two hot spares because it was so discombobulated.
------------------- EMAIL -------------------
List of faulty resources:
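When an import itself panics the machine, one commonly tried approach (on builds that have pool recovery support) is to keep the pool out of the boot-time import and then import it manually in recovery or read-only mode; a hedged sketch, assuming the pool is named tank as above:

# keep the pool from being auto-imported at boot by moving the cache file aside
mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bad

# then try a recovery-mode import (discards the last few transactions),
# or a read-only import on releases that support it
zpool import -F tank
zpool import -o readonly=on tank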
2006 Jan 27
2
Do I have a problem? (longish)
Hi,
To keep the story short, I'll describe the situation. I have 4 disks in a ZFS/SVM config:
c2t9d0 9G
c2t10d0 9G
c2t11d0 18G
c2t12d0 18G
c2t11d0 is divided in two:
selecting c2t11d0
[disk formatted]
/dev/dsk/c2t11d0s0 is in use by zpool storedge. Please see zpool(1M).
/dev/dsk/c2t11d0s1 is part of SVM volume stripe:d11. Please see metaclear(1M).
/dev/dsk/c2t11d0s2 is in use by zpool storedge. Please
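To see exactly which consumer owns which slice before touching anything, it is usually enough to cross-check the pool layout and the SVM metadevices; a quick sketch using the names from the output above:

# which slices the zpool is using
zpool status storedge

# which slices the SVM stripe is using
metastat d11

# the slice layout of the disk itself
prtvtoc /dev/rdsk/c2t11d0s2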
2012 Jan 31
0
(gang?)block layout question, and how to decipher ZDB output?
Hello, all
I''m "playing" with ZDB again on another test system,
the rpool being uncompressed with 512-byte sectors.
Here's some output that puzzles me (questions follow):
# zdb -dddddddd -bbbbbb rpool/ROOT/nightly-2012-01-31 260050
...
1e80000 L0 DVA[0]=<0:200972e00:20200>
DVA[1]=<0:391820a00:200> [L0 ZFS plain file] fletcher4 uncompressed
2006 Oct 05
0
Crash when doing rm -rf
Not a really good subject, I know, but that's kind of what happened.
I'm trying to build a backup-solution server: Windows users use OSCAR (which uses rsync) to sync their files to a folder, and when complete it takes a snapshot. It has worked before, but then I turned on the -R switch to rsync, and when I then removed the folder with rm -rf it crashed. I didn't save what
2007 Jan 08
11
NFS and ZFS, a fine combination
Just posted:
http://blogs.sun.com/roch/entry/nfs_and_zfs_a_fine
____________________________________________________________________________________
Performance, Availability & Architecture Engineering
Roch Bourbonnais Sun Microsystems, Icnc-Grenoble
Senior Performance Analyst 180, Avenue De L'Europe, 38330,
Montbonnot Saint
2007 Jun 19
0
Re: [storage-discuss] Performance expectations of iscsi targets?
Paul,
> While testing iscsi targets exported from thumpers via 10GbE and
> imported via 10GbE on T2000s, I am not seeing the throughput I expect,
> and more importantly there is a tremendous amount of read IO
> happening on a purely sequential write workload. (Note all systems
> have Sun 10GbE cards and are running Nevada b65.)
The read IO activity you are seeing is a direct
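The reply is cut off above; independent of the explanation, the read activity is easy to observe from the initiator side while the sequential write runs, for example:

# per-vdev read/write breakdown on the pool built from the iscsi LUNs
zpool iostat -v 1

# per-device throughput and service times for the underlying iscsi disks
iostat -xn 1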
2007 Dec 03
2
Help replacing dual identity disk in ZFS raidz and SVM mirror
Hi,
We have a number of 4200s set up using a combination of an SVM 4-way mirror and a ZFS raidz stripe.
Each disk (of 4) is divided up like this
/ 6GB UFS s0
Swap 8GB s1
/var 6GB UFS s3
Metadb 50MB UFS s4
/data 48GB ZFS s5
For SVM we do a 4 way mirror on /,swap, and /var
So we have 3 SVM mirrors
d0=root (sub mirrors d10, d20, d30, d40)
d1=swap (sub mirrors d11, d21,d31,d41)
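A hedged outline of replacing one of those dual-identity disks (SVM submirrors on s0/s1/s3 plus the raidz member on s5). The device names c0t2d0 (failed disk) and c0t0d0 (healthy disk), the submirror d30, and the pool name datapool are placeholders; treat the sequence as a sketch rather than a verified procedure:

# drop the state database replicas and submirror components on the bad disk
metadb -d c0t2d0s4
metadetach -f d0 d30          # repeat for the swap and /var mirrors

# after swapping the hardware, copy the slice layout from a healthy disk
prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t2d0s2

# recreate the replicas, resync the SVM sides, then fix the raidz member
metadb -a c0t2d0s4
metattach d0 d30
zpool replace datapool c0t2d0s5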
2008 Jul 23
0
Where is zpool status information kept?
The OS's / is on a mirror of /dev/dsk/c1t0d0s0 and /dev/dsk/c1t1d0s0; I then created home_pool as a mirror. Here is the mirror information.
pool: omp_pool
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
omp_pool ONLINE 0 0 0
mirror ONLINE 0 0 0
c1t3d0s0 ONLINE
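To the question in the subject: the pool configuration that 'zpool status' reports lives in the on-disk labels of each vdev, with a boot-time copy cached in /etc/zfs/zpool.cache; both can be inspected with zdb, for example:

# dump the cached configuration of the imported pools
zdb -C

# dump the ZFS labels stored on one of the mirror's disks
zdb -l /dev/dsk/c1t3d0s0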
2009 Jul 25
1
OpenSolaris 2009.06 - ZFS Install Issue
I've installed OpenSolaris 2009.06 on a machine with 5 identical 1TB WD green drives to create a ZFS NAS. The intended install is one drive dedicated to the OS and the remaining 4 drives in a raidz1 configuration. The install works fine, but creating the raidz1 pool and rebooting causes the machine to report "Cannot find active partition" upon reboot. Below is the command
2010 Jun 29
0
Processes hang in /dev/zvol/dsk/poolname
After multiple power outages caused by storms coming through, I can no
longer access /dev/zvol/dsk/poolname, which holds the l2arc and slog devices
for another pool. I don't think this is related, since the pools are offline
pending access to the volumes.
I tried running find /dev/zvol/dsk/poolname -type f and here is the stack;
hopefully this gives someone a hint at what the issue is. I have
2014 May 21
1
Dovecot ontop of glusterfs issue.
Hey,
I am testing GlusterFS as a storage backend for Dovecot as an LDA and
IMAP server.
I have seen similar lines in the logs to these:
May 21 10:46:01 mailgw dovecot: imap(eliezer at ngtech.co.il): Warning:
Created dotlock file's timestamp is different than current time
(1400658105 vs 1400658361):
/home/vmail/ngtech.co.il/eliezer/Maildir/.Mailing_lists.ceph_users/dovecot-uidlist
May 21
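That warning is Dovecot noticing that the dotlock file's timestamp, as reported back by the storage backend, disagrees with the local clock by a few minutes, which usually points at clock skew between the mail host and the GlusterFS brick servers (or stale attribute caching). A quick check, with placeholder hostnames for the bricks:

# compare clocks on the mail host and each gluster brick
date -u; ssh brick1 date -u; ssh brick2 date -u

# query-only check of the local clock against a public NTP pool
ntpdate -q pool.ntp.org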
2006 May 16
8
ZFS recovery from a disk losing power
Running b37 on amd64. After removing power from a disk configured as
part of a mirror, 10 minutes have passed and ZFS has still not offlined it.
# zpool status tank
pool: tank
state: ONLINE
status: One or more devices has experienced an unrecoverable error. An
attempt was made to correct the error. Applications are unaffected.
action: Determine if the device needs to be replaced, and clear
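Whatever the cause of ZFS not offlining it on its own, the device can be taken out of and returned to service by hand; a sketch against the tank mirror above, with a hypothetical device name:

# take the powered-off disk out of service so the mirror stops waiting on it
zpool offline tank c0t1d0

# after restoring power, bring it back and clear the error counters
zpool online tank c0t1d0
zpool clear tank
zpool status tank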
2007 Apr 28
4
What tags are supported on a zvol?
I assume that a zvol has a vtoc. What tags are supported?
Thanks,
Brian
2007 Dec 13
0
zpool version 3 & Uberblock version 9 , zpool upgrade only half succeeded?
We are currently experiencing a very large performance drop on our ZFS storage server.
We have 2 pools: pool 1, stor, is a raidz of 7 iscsi nodes; home is a local mirror pool. Recently we had some issues with one of the storage nodes, and because of that the pool was degraded. Since we did not succeed in bringing this storage node back online (at the ZFS level), we upgraded our NAS head from opensolaris b57
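To see which on-disk versions the pools and their labels actually carry, and to finish an upgrade that only half completed, the version can be checked both through zpool and directly in a vdev label; a sketch using the pool name quoted above and a hypothetical device name:

# show the current version of every pool and what the software supports
zpool upgrade
zpool upgrade -v

# upgrade a specific pool (or all of them with -a)
zpool upgrade stor

# read the version field straight out of a vdev label
zdb -l /dev/dsk/c2t0d0s0 | grep version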
2008 Nov 24
2
replacing disk
Somehow I am having an issue replacing my disk.
[20:09:29] root at adas: /root > zpool status mypooladas
pool: mypooladas
state: DEGRADED
status: One or more devices could not be opened. Sufficient replicas exist for
the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
see:
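The status text above suggests the two standard cases: if the original disk is back, online it; if it was physically swapped, run a replace so ZFS resilvers onto the new device. A sketch with a hypothetical device name:

# if the same disk has simply reappeared
zpool online mypooladas c0t4d0

# if the disk was physically replaced in the same slot
zpool replace mypooladas c0t4d0

# either way, watch the resilver
zpool status -v mypooladas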