Displaying 20 results from an estimated 4000 matches similar to: "zpool scrub checksum error"
2011 Jul 12
1
Can zpool permanent errors be fixed by a scrub?
Hi, we had a server that lost connection to a fibre-attached disk array where the data LUNs were housed, due to a 3510 power fault. After the connection was restored, a lot of the zpool status output had these permanent errors listed as per below. I checked the files in question and as far as I could see they were present and OK. I ran a zpool scrub against other zpools and they came back with no errors and the list of
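For reference, a minimal sketch (pool name hypothetical) of how permanent errors that name intact files are usually cleared and re-verified:

  # list the files affected by the permanent errors
  zpool status -v datapool
  # reset the error counters, then re-verify the whole pool
  zpool clear datapool
  zpool scrub datapool

If the errors are real, the scrub will simply report them again.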
2007 Apr 09
5
CAD application not working with zfs
Hello,
We use several CAD applications, and with one of them we have problems using ZFS.
OS and hardware are SunOS 5.10 Generic_118855-36 on a Sun Fire X4200; the CAD application is CATIA V4.
Several configuration and data files are stored on the server and shared via NFS to Solaris and AIX clients. The application crashes on the AIX client unless the server shares those files from a UFS
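As a sketch of the setup being described (dataset name hypothetical), the files would typically be shared from ZFS like this:

  # share a ZFS dataset over NFS to the Solaris and AIX clients
  zfs set sharenfs=on tank/cadfiles
  # confirm the share is active
  zfs get sharenfs tank/cadfiles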
2007 Apr 27
2
Scrubbing a zpool built on LUNs
I'm building a system with two Apple RAIDs attached. I have hardware RAID5 configured, so no RAIDZ or RAIDZ2, just a basic zpool pointing at the four LUNs representing the four RAID controllers. For ongoing maintenance, will a zpool scrub be of any benefit? From what I've read, with this layer of abstraction ZFS is only maintaining the metadata and not the actual data on the
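Even without ZFS-level redundancy, a scrub still verifies every block against its checksum, so it can detect (though not repair) corruption introduced below ZFS. A sketch, pool name hypothetical:

  # walk all data and metadata, verifying checksums
  zpool scrub tank
  # any checksum errors the scrub found show up here
  zpool status -v tank

On releases that support the copies property, 'zfs set copies=2 tank/fs' gives ZFS a redundant copy it can actually repair from.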
2009 Aug 31
0
zpool scrub results in pool deadlock
I just ran zpool scrub on an active pool on an X4170 running S10U7 with the latest patches. iostat immediately dropped to 0 for all the pool devices, and all processes associated with that device were hard locked; e.g., kill -9 on a zpool status process was ineffective. However, other zpools on the system, such as the root pool, continued to work.
Neither init 6 nor reboot were able to take
2013 Jan 19
0
zpool errors without fmdump or dmesg errors
Hi all,
I am running S11 on a Dell PE650. It has 5 zpools attached, made
up of 240 drives connected via fibre. On Thursday, all of a sudden,
two of the three zpools on one FC channel showed numerous errors, and one
of them showed this:
root at solaris11a:~# zpool status vsmPool01
pool: vsmPool01
state: SUSPENDED
status: One or more devices is currently being resilvered. The pool
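When zpool status reports errors that fmdump and dmesg do not, the FMA error log (as opposed to the fault log shown by plain fmdump) is worth checking; a sketch:

  # diagnosed faults (what fmdump shows by default)
  fmdump
  # low-level error telemetry, often populated even when the fault log is empty
  fmdump -eV | more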
2010 Aug 19
0
Zpool scrub and reboot.
Suppose I start a zpool scrub and reboot before the scrub is finished. On reboot, does the scrub carry on from where it left off, or does it start at the beginning again?
Thanks
--
This message posted from opensolaris.org
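At least on releases of this vintage, an interrupted scrub generally starts over from the beginning rather than resuming; the progress line in zpool status shows which happened. A sketch, pool name hypothetical:

  # before the reboot: note the percentage complete
  zpool status tank | grep scrub
  # after the reboot: if the percentage has reset, the scrub restarted
  zpool status tank | grep scrub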
2009 Feb 18
4
Zpool scrub in cron hangs u3/u4 server, stumps tech support.
I've got a server that freezes when I run a zpool scrub from cron.
Zpool scrub runs fine from the command line, no errors.
The freeze happens within 30 seconds of the zpool scrub starting.
The one core dump I succeeded in taking showed the ARC cache eating up
all the RAM.
The server's running Solaris 10 U3, kernel patch 127727-11, but it's
been patched and seems to have
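For reference, a typical crontab entry for a scheduled scrub looks like the sketch below (pool name hypothetical); cron runs with a minimal environment, so the full path to zpool matters:

  # root crontab: scrub the pool every Sunday at 02:00
  0 2 * * 0 /usr/sbin/zpool scrub tank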
2007 Mar 05
3
How to interrupt a zpool scrub?
Dear all
Is there a way to stop a running scrub on a ZFS pool? The same question applies to a running resilver.
Both render our fileserver unusable due to massive CPU load, so we'd like to postpone them.
The docs say that resilvering and scrubbing survive a reboot, so I am not even sure whether a reboot would stop the scrub or resilver.
Any help greatly appreciated!
Cheers, Thomas
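A scrub can be stopped with the -s flag; a resilver, by contrast, generally cannot be cancelled. A sketch, pool name hypothetical:

  # stop an in-progress scrub
  zpool scrub -s tank
  # confirm it is no longer running
  zpool status tank | grep scrub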
2008 Jul 28
1
zpool status my_pool shows a pulled disk c1t6d0 as ONLINE ???
New server build with Solaris 10 U5 (5/08)
on a SunFire T5220; this is our first rollout of ZFS and zpools.
We have 8 disks; the boot disk is hardware mirrored (c1t0d0 + c1t1d0).
Created zpool my_pool as RAIDZ using 5 disks + 1 spare:
c1t2d0, c1t3d0, c1t4d0, c1t5d0, c1t6d0, and spare c1t7d0.
I am working on alerting & recovery plans for disk failures in the zpool.
As a test, I have pulled disk
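ZFS often does not notice a pulled disk until I/O is directed at it; a scrub forces I/O to every vdev and usually makes the failure visible. A sketch using the names from the post:

  # force I/O to every device so the pulled disk is detected
  zpool scrub my_pool
  zpool status -x
  # see what FMA has diagnosed
  fmadm faulty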
2010 Sep 07
3
zpool create using whole disk - do I add "p0"? E.g. c4t2d0 or c4t2d0p0
I have seen conflicting examples of how to create zpools using full disks. The zpool(1M) man page uses "c0t0d0", but the OpenSolaris Bible and others show "c0t0d0p0". E.g.:
zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
zpool create tank raidz c0t0d0p0 c0t1d0p0 c0t2d0p0 c0t3d0p0 c0t4d0p0 c0t5d0p0
I have not been able to find any discussion on whether (or when) to
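For what it's worth, the zpool(1M) convention is the bare disk name; given a whole disk, ZFS writes an EFI label and manages the device itself. A sketch:

  # whole-disk vdevs: no slice (s0) or partition (p0) suffix
  zpool create tank raidz c0t0d0 c0t1d0 c0t2d0

The p0 node is the x86 whole-disk partition device; ZFS does not need it.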
2009 Feb 11
0
failmode=continue prevents zpool processes from hanging and being unkillable?
> Dear ZFS experts,
> somehow one of my zpools got corrupted. Symptom is that I cannot
> import it any more. To me it is of lesser interest why that happened.
> What is really challenging is the following.
>
> Any effort to import the zpool hangs and is unkillable. E.g. if I
> issue a "zpool import test2-app" the process hangs and cannot be
> killed. As this
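On releases that have the failmode pool property, setting it to continue makes ZFS return EIO to new I/O instead of blocking forever when the pool loses its devices, which avoids the unkillable-process symptom. A sketch, pool name hypothetical:

  # check the current behavior (the default is wait)
  zpool get failmode tank
  # return errors instead of hanging when the pool fails
  zpool set failmode=continue tank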
2008 Jul 02
0
Q: grow a zpool built on top of iSCSI devices
Hi all.
We are currently rolling out a number of iSCSI servers based on Thumpers
(x4500) running both Solaris 10 and OpenSolaris build 90+. The
targets on these machines are backed by ZVOLs. Some of the clients use those
iSCSI "disks" to build mirrored zpools. As a volume's size on the x4500
can easily be grown, I would like to know whether that growth in space can be
propagated to the client
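As a sketch of the two halves of that operation (names hypothetical): the ZVOL is grown on the x4500, and the client is then made to notice the larger LUN. On builds of this vintage that usually means re-importing the pool; later releases added zpool online -e and the autoexpand property.

  # on the x4500 target: grow the backing ZVOL
  zfs set volsize=200G tank/iscsivol01
  # on the client: pick up the new size by re-importing the pool
  zpool export clientpool
  zpool import clientpool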
2010 Aug 12
6
one ZIL SLOG per zpool?
I have three zpools on a server and want to add a mirrored pair of SSDs for the ZIL. Can the same pair of SSDs be used as the ZIL for all three zpools, or is it one ZIL SLOG device per zpool?
--
This message posted from opensolaris.org
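A log device belongs to exactly one pool, so sharing a pair of SSDs across three pools means slicing them (e.g. with format) and giving each pool its own mirrored pair of slices. A sketch with hypothetical slice names:

  # one mirrored log slice pair per pool
  zpool add pool1 log mirror c2t0d0s0 c2t1d0s0
  zpool add pool2 log mirror c2t0d0s1 c2t1d0s1
  zpool add pool3 log mirror c2t0d0s2 c2t1d0s2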
2007 Feb 13
1
Zpool complain about missing devices
Hello,
We had a situation at a customer site where one of the zpools complained about missing devices. We do not know which devices are missing. Here are the details:
The customer had a zpool created on hardware RAID (SAN). There is no redundancy in the pool. The pool had 13 LUNs; the customer wanted to increase its size and added 5 more LUNs. During the zpool add process the system panicked with zfs
2009 Jan 23
2
zpool import fails to find pool
Hi all,
I moved from Solaris 10 Update 4 to Update 6.
Before doing this I exported both of my zpools and replaced the disks containing the UFS root with two new disks (these disks did not have any zpool/zfs info and are RAID-mirrored in hardware).
Once I had installed Update 6 I did a zpool import, but it only showed (and was able to) import one of the two pools.
Looking at dmesg it appears as
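When zpool import finds only some of the pools, pointing it at a device directory explicitly is the usual next step; a sketch:

  # scan a device directory explicitly for importable pools
  zpool import -d /dev/dsk
  # if the pool name is ambiguous, import by the numeric id zpool import prints
  zpool import <pool-id>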
2009 Oct 01
1
cachefile for snail zpool import mystery?
Hi,
We are seeing more long delays in zpool import, say 4~5 or even
25~30 minutes, especially when backup jobs are running in the FC SAN
where the LUNs reside (no iSCSI LUNs yet). On the same node, for LUNs of the same array,
some pools take a few seconds but others take minutes; the pattern
seems random to me so far. It was first noticed soon after being upgraded to
Solaris 10 U6
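Pools recorded in a cache file are opened from their cached device paths instead of being found by a full scan of every visible LUN, which is where the minutes-long imports usually go. A sketch, names hypothetical:

  # record the pool in an alternate cache file at import time
  zpool import -o cachefile=/etc/zfs/san_pools.cache somepool
  # later imports can read the cache instead of rescanning the SAN
  zpool import -c /etc/zfs/san_pools.cache somepool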
2008 Feb 15
2
[storage-discuss] Preventing zpool imports on boot
On Thu, Feb 14, 2008 at 11:17 PM, Dave <dave-opensolaris at dubkat.com> wrote:
> I don't want Solaris to import any pools at bootup, even when there were
> pools imported at shutdown/at crash time. The process to prevent
> importing pools should be automatic and not require any human
> intervention. I want to *always* import the pools manually.
>
> Hrm... what
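On releases with the cachefile pool property, setting it to none keeps the pool out of /etc/zfs/zpool.cache, so it is never auto-imported at boot and must always be imported by hand, which is what is being asked for. A sketch, pool name hypothetical:

  # keep the pool out of the boot-time cache file
  zpool set cachefile=none tank
  # or import it that way in the first place
  zpool import -o cachefile=none tank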
2008 Jan 31
3
I/O error: zpool metadata corrupted after power cut
In the last 2 weeks we had 2 zpools corrupted.
The pool was visible via zpool import but could not be imported anymore; during the import attempt we got an I/O error.
After the first power cut we lost our jumpstart/nfsroot zpool (another pool was still OK). Luckily the jumpstart data was backed up and easily restored; the nfsroot filesystems were not, but those were just test machines. We thought the metadata corruption
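Later ZFS releases added a recovery mode for exactly this situation, rewinding the pool to the last consistent transaction group; it did not exist on builds of this vintage. A sketch, pool name hypothetical:

  # dry run: report whether recovery is possible and what it would discard
  zpool import -Fn tank
  # attempt the actual rewind recovery
  zpool import -F tank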
2011 Dec 18
0
Scrub found error in metadata:0x0, is that always fatal? No checksum errors now...
2011-12-17 21:59, Steve Gonczi wrote:
> Coincidentally, I am pretty sure entry 0 of these meta dnode objects is
> never used,
> so the block with the checksum error never comes into play.
> Steve
I wonder if this is indeed true - it seems so, because the pool
seems to work regardless of the seemingly deep metadata error.
Now, can someone else please confirm this guess? If I were
to
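The usual way to test whether such an error is stale is to clear the counters and scrub again; if <metadata>:<0x0> does not reappear, the damaged block is no longer referenced. A sketch, pool name hypothetical:

  # reset the error log, then re-verify every reachable block
  zpool clear tank
  zpool scrub tank
  zpool status -v tank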
2005 Nov 29
1
configure 3.0.21rc1 on solaris
Hello,
I got an error while configuring Samba on Solaris 5.8:
lib/smbldap.c: In function `smbldap_connect_system':
lib/smbldap.c:770: warning: passing arg 2 of `ldap_set_rebind_proc' from
incompatible pointer type
lib/smbldap.c:770: error: too few arguments to function `ldap_set_rebind_proc'
make: *** [lib/smbldap.o] Error 1
As 3.0.20b configures without this error, are there
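The messages are the classic 2-argument versus 3-argument ldap_set_rebind_proc() prototype mismatch between LDAP SDKs; a quick way to see which prototype the headers on a given box ship (header path is an assumption):

  # check how many arguments ldap_set_rebind_proc takes on this system
  grep -n "ldap_set_rebind_proc" /usr/include/ldap.h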