Displaying 20 results from an estimated 40000 matches similar to: "Zpool scrub and reboot."
2009 Aug 31
0
zpool scrub results in pool deadlock
I just ran zpool scrub on an active pool on an x4170 running S10U7 with the latest patches, and iostat immediately dropped to 0 for all the pool devices; all processes associated with that device were hard locked, e.g., kill -9 on a zpool status process was ineffective. However, other zpools on the system, such as the root pool, continued to work.
Neither init 6 nor reboot were able to take
2007 Apr 02
0
zpool scrub checksum error
Hello,
I've already read many posts about checksum errors on zpools, but I'd like some more information, please.
We use 2 Sun servers (AMD x64, SunOS 5.10, Generic_118855-36, hopefully all patches) with two hardware RAIDs (RAID 10) connected through Fibre Channel. Disk space is about 3 TB split into 4 pools, each including several (about 10-15) ZFS filesystems. After 22 days of uptime I got a first
2010 Jun 16
0
files lost in the zpool - retrieval possible ?
Greetings,
my OpenSolaris 2009.06 installation on a Thinkpad X60 notebook is a little unstable. From the symptoms during installation it seems there might be an issue with the ahci driver. No problem with the OpenSolaris LiveCD system.
Some weeks ago during copy of about 2 GB from a USB stick to the zfs filesystem, the system froze and afterwards refused to boot.
Now when investigating
2007 Mar 05
3
How to interrupt a zpool scrub?
Dear all
Is there a way to stop a running scrub on a zfs pool? Same question applies to a running resilver.
Both render our fileserver unusable due to massive CPU load, so we'd like to postpone them.
In the docs it says that resilvering and scrubbing survive a reboot, so I am not even sure if a reboot would stop scrubbing or resilvering.
Any help greatly appreciated!
Cheers, Thomas
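For the question above: on the Solaris releases of that era, a running scrub can be cancelled with the -s flag of zpool scrub, per the zpool(1M) man page; a resilver, by contrast, cannot be stopped directly and runs to completion (short of detaching or offlining the device). A minimal sketch, assuming a pool named tank:

```shell
# Stop a scrub that is currently running on the pool "tank" (name assumed):
zpool scrub -s tank

# Confirm it stopped; the scrub: line should report the stop
# (exact wording varies by release):
zpool status tank
```

Note that a stopped scrub does not resume where it left off on these releases; a later `zpool scrub tank` starts over from the beginning.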
2011 Jul 12
1
Can zpool permanent errors fixed by scrub?
Hi, we had a server that lost its connection to a fibre-attached disk array where the data LUNs were housed, due to a 3510 power fault. After the connection was restored, a lot of the zpool status output listed permanent errors, as per below. I checked the files in question and, as far as I could see, they were present and OK. I ran a zpool scrub against other zpools and they came back with no errors, and the list of
2007 Jul 25
3
Any fix for zpool import kernel panic (reboot loop)?
My system (a laptop with ZFS root and boot, SNV 64A) on which I was trying Opensolaris now has the zpool-related kernel panic reboot loop.
Booting into failsafe mode or another solaris installation and attempting:
'zpool import -F rootpool' results in a kernel panic and reboot.
A search shows this type of kernel panic has been discussed on this forum over the last year.
2009 Feb 18
4
Zpool scrub in cron hangs u3/u4 server, stumps tech support.
I've got a server that freezes when I run a zpool scrub from cron.
Zpool scrub runs fine from the command line, no errors.
The freeze happens within 30 seconds of the zpool scrub starting.
The one core dump I succeeded in taking showed the ARC cache eating up
all the RAM.
The server's running Solaris 10 u3, kernel patch 127727-11, but it's
been patched and seems to have
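As an aside on running scrubs from cron: cron jobs run with a minimal environment and PATH, so the usual practice is to invoke the binary by its full path. A hypothetical crontab entry (pool name and schedule assumed):

```shell
# crontab entry: scrub the pool "tank" every Sunday at 03:00.
# Full path used because cron's default PATH may not include /usr/sbin.
# 0 3 * * 0 /usr/sbin/zpool scrub tank
```

This is a config fragment, not a script; add it with `crontab -e` for a user with sufficient privileges.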
2007 Apr 27
2
Scrubbing a zpool built on LUNs
I'm building a system with two Apple RAIDs attached. I have hardware RAID-5 configured, so no RAIDZ or RAIDZ2, just a basic zpool pointing at the four LUNs representing the four RAID controllers. For ongoing maintenance, will a zpool scrub be of any benefit? From what I've read, with this layer of abstraction ZFS is only maintaining the metadata and not the actual data on the
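On the question of scrub value without ZFS-level redundancy: a scrub reads and checksums every allocated block, user data as well as metadata, so it still detects corruption on a pool built from plain LUNs; it just cannot repair damaged user data there (metadata is kept redundant via ditto copies and can still be repaired). A sketch, pool name assumed:

```shell
# Even on a non-redundant pool, a scrub verifies the checksums
# of all allocated blocks:
zpool scrub tank

# Corrupted files (if any) are listed even when ZFS cannot repair them:
zpool status -v tank
```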
2009 Dec 12
0
Messed up zpool (double device label)
Hi!
I tried to add another FireWire drive to my existing four devices, but it turned out that the OpenSolaris IEEE 1394 support doesn't seem to be well engineered.
After not recognizing the new device and exporting and importing the existing zpool, I get this zpool status:
pool: tank
state: DEGRADED
status: One or more devices could not be used because the label is missing or
2007 Nov 13
3
zpool status cannot detect the removed vdev?
I make a file zpool like this:
bash-3.00# zpool status
pool: filepool
state: ONLINE
scrub: none requested
config:
NAME              STATE   READ WRITE CKSUM
filepool          ONLINE     0     0     0
  /export/f1.dat  ONLINE     0     0     0
  /export/f2.dat  ONLINE     0     0     0
  /export/f3.dat  ONLINE     0     0     0
spares
2008 Aug 25
5
Unable to import zpool since system hang during zfs destroy
Hi all,
I have a RAID-Z zpool made up of 4 x SATA drives running on Nexenta 1.0.1 (OpenSolaris b85 kernel). It has on it some ZFS filesystems and a few volumes that are shared to various Windows boxes over iSCSI. On one particular iSCSI volume, I discovered that I had mistakenly deleted some files from the FAT32 partition on it. The files were still in a ZFS snapshot that was made earlier
2008 Jan 11
4
zpool remove problem
I have a pool with 3 partitions in it. However, one of them is no longer valid: the disk was removed and modified, so the original partition is no longer available. I cannot get zpool to remove it from the pool. How do I tell ZFS to take this item out of the pool, if not with "zpool remove"?
Thanks,
Wyllys
here is my pool:
zpool status
pool: bigpool
state: FAULTED
status:
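A note on the removal question above: in the ZFS versions of that period, zpool remove only worked for hot spares (and, later, cache devices); a top-level data vdev could not be removed from a pool at all, and only a member of a mirror could be taken out, with zpool detach. Hypothetical pool and device names throughout:

```shell
# Works only if c1t2d0 is one side of a mirror vdev:
zpool detach bigpool c1t2d0

# On those releases, works only for hot spares and cache devices:
zpool remove bigpool c1t3d0
```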
2010 Jan 03
2
"zpool import -f" not forceful enough?
I had to use the labelfix hack (and I had to recompile it at that) on 1/2 of an old zpool. I made this change:
/* zio_checksum(ZIO_CHECKSUM_LABEL, &zc, buf, size); */
zio_checksum_table[ZIO_CHECKSUM_LABEL].ci_func[0](buf, size, &zc);
and I'm assuming [0] is the correct endianness, since afterwards I saw it come up with "zpool import".
Unfortunately, I
2007 Nov 13
0
In a zpool consisting of regular files, when I remove a file vdev, why can't zpool status detect it?
I make a file zpool like this:
bash-3.00# zpool status
pool: filepool
state: ONLINE
scrub: none requested
config:
NAME              STATE   READ WRITE CKSUM
filepool          ONLINE     0     0     0
  /export/f1.dat  ONLINE     0     0     0
  /export/f2.dat  ONLINE     0     0     0
  /export/f3.dat  ONLINE     0     0     0
spares
2008 Aug 03
1
Scrubbing only checks used data?
Hi there,
I am currently evaluating OpenSolaris as a replacement for my Linux installations. I installed it as a Xen domU, so there is a remote chance that my observations are caused by Xen.
First, my understanding of "zpool scrub" is "OK, go ahead, and rewrite each block of each device of the zpool".
Whereas "resilvering" means "Make
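For reference against the understanding quoted above: a scrub does not rewrite every block of every device; it reads only the blocks that are currently allocated, verifies their checksums, and repairs from redundancy where possible. Its runtime therefore tracks the pool's used space, not its raw capacity. A sketch, pool name assumed:

```shell
# Scrub duration scales with the pool's used space, not its raw size:
zpool list tank

zpool scrub tank
zpool status tank   # progress is reported against data examined so far
```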
2011 Feb 05
0
40MB repaired on a disk during scrub but no errors
Hey folks,
While scrubbing, zpool status shows nearly 40MB "repaired" but 0 in each of the read/write/checksum columns for each disk. One disk has "(repairing)" to the right, but once the scrub completes there's no mention that anything ever needed fixing. Any idea what would need to be repaired on that disk? Are there any other types of errors besides
2009 Aug 02
1
zpool status showing wrong device name (similar to: ZFS confused about disk controller )
Hi All,
over the last couple of weeks, I had to boot from my rpool from various physical
machines because some component on my laptop mainboard blew up (you know that
burned electronics smell?). I can't retrospectively document all I did, but I am
sure I recreated the boot archive, ran devfsadm -C, and deleted
/etc/zfs/zpool.cache several times.
Now zpool status is referring to a
2008 Jul 06
14
confusion and frustration with zpool
I have a zpool which has grown "organically". I had a 60Gb disk, I added a 120, I added a 500, I got a 750 and sliced it and mirrored the other pieces.
The 60 and the 120 are internal PATA drives, the 500 and 750 are Maxtor OneTouch USB drives.
The original system I created the 60+120+500 pool on was Solaris 10 update 3, patched to use ZFS sometime last fall (November I believe). In
2008 Jan 22
0
zpool attach problem
On a V240 running s10u4 (no additional patches), I had a pool which looked like this:
<pre>
> # zpool status
> pool: pool01
> state: ONLINE
> scrub: none requested
> config:
>
> NAME STATE READ WRITE CKSUM
> pool01 ONLINE 0 0 0
> mirror
2008 Mar 13
3
[Bug 759] New: 'zpool create -o keysource=,' hanged
http://defect.opensolaris.org/bz/show_bug.cgi?id=759
Summary: 'zpool create -o keysource=,' hanged
Classification: Development
Product: zfs-crypto
Version: unspecified
Platform: i86pc/i386
OS/Version: Solaris
Status: NEW
Severity: minor
Priority: P3
Component: other