similar to: Kernel panic receiving incremental snapshots

Displaying 20 results from an estimated 400 matches similar to: "Kernel panic receiving incremental snapshots"

2007 Nov 25
2
Corrupted pool
Howdy, We are using ZFS on one of our Solaris 10 servers, and the box panicked this evening with the following stack trace: Nov 24 04:03:35 foo unix: [ID 100000 kern.notice] Nov 24 04:03:35 foo genunix: [ID 802836 kern.notice] fffffe80004a14d0 fffffffffb9b49f3 () Nov 24 04:03:35 foo genunix: [ID 655072 kern.notice] fffffe80004a1550 zfs:space_map_remove+239 () Nov 24 04:03:35 foo genunix: [ID
2007 Mar 21
4
HELP!! I can't mount my zpool!!
Hi all. One of our servers had a panic and now can't mount the zpool anymore! Here is what I get at boot: Mar 21 11:09:17 SERVER142 ^Mpanic[cpu1]/thread=ffffffff90878200: Mar 21 11:09:17 SERVER142 genunix: [ID 603766 kern.notice] assertion failed: ss->ss_start <= start (0x670000b800 <= 0x6700009000), file: ../../common/fs/zfs/space_map.c, line: 126 Mar 21 11:09:17 SERVER142
2006 Oct 25
4
Panic while scrubbing
Hello, I am not sure if I am posting in the correct forum, but it seems somewhat zfs related, so I thought I'd share it. While the machine was idle, I started a scrub. Around the time the scrubbing was supposed to be finished, the machine panicked. This might be related to the 'metadata corruption' that happened earlier to me. Here is the log, any ideas? Oct 24
2009 Oct 17
3
zvol used apparently greater than volsize for sparse volume
What does it mean for the reported value of a zvol volsize to be less than the product of used and compressratio? For example, # zfs get -p all home1/home1mm01 NAME PROPERTY VALUE SOURCE home1/home1mm01 type volume - home1/home1mm01 creation 1254440045 - home1/home1mm01 used 14902492672
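A quick way to look at the numbers behind that question is to pull only the relevant properties in parseable form; a minimal sketch, reusing the dataset name from the excerpt (adjust to your own zvol), on the assumption that refreservation and snapshot space are the usual suspects (the usedby* breakdown only exists on relatively recent builds):

    # Raw byte values for the properties that feed the used-vs-volsize comparison
    zfs get -p used,referenced,volsize,refreservation,usedbysnapshots,compressratio home1/home1mm01

If used exceeds volsize here, snapshots and metadata overhead are the usual places the extra space hides.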
2008 Apr 24
0
panic on zfs scrub on builds 79 & 86
This just started happening to me. It's a striped, non-mirrored pool (I know, I know). A zfs scrub causes a panic in under a minute. I can also trigger a panic by doing tars, etc. x86 64-bit kernel ... any ideas? Just to help rule out some things, I changed the motherboard, memory and cpu and it still happens ... I also think it happens on a 32-bit kernel. genunix: [ID 335743 kern.notice] BAD
2007 Sep 14
9
Possible ZFS Bug - Causes OpenSolaris Crash
I'd like to report the ZFS-related crash/bug described below. How do I go about reporting the crash and what additional information is needed? I'm using my own very simple test app that creates numerous directories and files of randomly generated data. I have run the test app on two machines, both 64 bit. OpenSolaris crashes a few minutes after starting my test app. The crash has occurred on
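For a report like this, the panic string and stack from the saved crash dump are usually what gets asked for first; a minimal sketch, assuming the dump was written to the default /var/crash/<hostname> directory and is dump number 0:

    # Extract the dump from the dump device after the reboot (often automatic)
    savecore
    cd /var/crash/`hostname`
    # Panic string, panicking thread's stack, and recent console messages
    echo "::status" | mdb unix.0 vmcore.0
    echo "::stack"  | mdb unix.0 vmcore.0
    echo "::msgbuf" | mdb unix.0 vmcore.0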
2007 Oct 10
6
server-reboot
Hi. Just migrated to zfs on opensolaris. I copied data to the server using rsync and got this message: Oct 10 17:24:04 zetta ^Mpanic[cpu1]/thread=ffffff0007f1bc80: Oct 10 17:24:04 zetta genunix: [ID 683410 kern.notice] BAD TRAP: type=e (#pf Page fault) rp=ffffff0007f1b640 addr=fffffffecd873000 Oct 10 17:24:04 zetta unix: [ID 100000 kern.notice] Oct 10 17:24:04 zetta unix: [ID 839527 kern.notice]
2007 Apr 23
3
ZFS panic caused by an exported zpool??
Apr 23 02:02:21 SERVER144 offline or reservation conflict Apr 23 02:02:21 SERVER144 scsi: [ID 107833 kern.warning] WARNING: /scsi_vhci/disk@g60001fe100118db00009119074440055 (sd82): Apr 23 02:02:21 SERVER144 i/o to invalid geometry Apr 23 02:02:21 SERVER144 scsi: [ID 107833 kern.warning] WARNING: /scsi_vhci/disk@g60001fe100118db00009119074440055 (sd82): Apr 23 02:02:21 SERVER144
2008 May 26
2
indiana as nfs server: crash due to zfs
Hello all, I have Indiana freshly installed on a Sun Ultra 20 machine. It only acts as an NFS server. One night, the kernel crashed, and I got these messages: " May 22 02:18:57 ultra20 unix: [ID 836849 kern.notice] May 22 02:18:57 ultra20 ^Mpanic[cpu0]/thread=ffffff0003d06c80: May 22 02:18:57 ultra20 genunix: [ID 603766 kern.notice] assertion failed: sm->sm_space == 0 (0x40000000 ==
2006 May 09
3
Possible corruption after disk hiccups...
I'm not sure exactly what happened with my box here, but something caused a hiccup on multiple sata disks... May 9 16:40:33 sol scsi: [ID 107833 kern.warning] WARNING: /pci@0,0/pci10de,5c@9/pci-ide@a/ide@0 (ata6): May 9 16:47:43 sol scsi: [ID 107833 kern.warning] WARNING: /pci@0,0/pci-ide@7/ide@1 (ata3): May 9 16:47:43 sol timeout: abort request, target=0
2003 Jun 05
2
rsync behind NAT
Hi, I use rsync as a client to copy a directory structure from a remote server, once every month. Until a few weeks ago, we had direct access to the internet - and the way it would work is that the guy maintaining the rsync server would add my machine's IP to an access list and I would run the rsync command from my machine. Recently however, our network was 'upgraded' to a NAT
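When the client sits behind NAT, the usual workaround is to stop relying on source-IP access lists and tunnel the transfer over ssh instead; a minimal sketch with made-up host and path names, assuming the server side allows ssh logins:

    # Pull the directory tree over ssh; no rsync daemon port or IP-based
    # access list is involved, so the client's NATed address no longer matters
    rsync -avz -e ssh backupuser@remote.example.com:/export/data/ /local/copy/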
2010 Feb 08
5
zfs send/receive : panic and reboot
<copied from opensolaris-discuss as this probably belongs here.> I kept on trying to migrate my pool with children (see previous threads) and had the (bad) idea to try the -d option on the receive part. The system reboots immediately. Here is the log in /var/adm/messages Feb 8 16:07:09 amber unix: [ID 836849 kern.notice] Feb 8 16:07:09 amber ^Mpanic[cpu1]/thread=ffffff014ba86e40: Feb 8
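For reference, the two receive forms being compared look roughly like this; a sketch with made-up pool and snapshot names, not the poster's exact command line:

    # Explicit target dataset name on the receiving side
    zfs send -R oldpool/home@migrate | zfs receive newpool/home
    # With -d the received dataset names are taken from the sent path instead,
    # so only the target pool is named (the form that triggered the panic here)
    zfs send -R oldpool/home@migrate | zfs receive -d newpool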
2007 Sep 08
1
zpool degraded status after resilver completed
I am curious why zpool status reports a pool to be in the DEGRADED state after a drive in a raidz2 vdev has been successfully replaced. In this particular case drive c0t6d0 was failing, so I ran 'zpool offline home c0t6d0' and 'zpool replace home c0t6d0 c8t1d0', and after the resilvering finished the pool still reports a degraded state. Hopefully this is incorrect. At this point the vdev in question now has
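If the pool is still DEGRADED once the resilver reports completion, a common follow-up (a sketch, not a guaranteed fix for this report) is to check which device is holding the state and then clear or detach as appropriate:

    # Show exactly which device keeps the pool in DEGRADED state
    zpool status -v home
    # Clear stale error counters; the pool often returns to ONLINE if the
    # resilver really did finish cleanly
    zpool clear home
    # If the old disk is still shown under a "replacing" vdev, detach it
    zpool detach home c0t6d0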
2008 Jun 04
2
panic on `zfs export` with UNAVAIL disks
hi list, initial situation: SunOS alusol 5.11 snv_86 i86pc i386 i86pc SunOS Release 5.11 Version snv_86 64-bit 3 USB HDDs on 1 USB hub: zpool status: state: ONLINE NAME STATE READ WRITE CKSUM usbpool ONLINE 0 0 0 mirror ONLINE 0 0 0 c7t0d0p0 ONLINE 0 0 0 c8t0d0p0
2007 Sep 02
8
DTraceTools Update
Is there any work on keeping DTraceTools up to date with the latest snv builds? These scripts are pretty useful and help get a novice dtrace user like me doing useful work quickly. Specifically, the tcp stack tools like tcptop and tcpsnoop don't work with later OpenSolaris builds. Thanks, Gary -- This message posted from opensolaris.org
2015 Nov 12
10
[Bug 2495] New: add GSI GSSAPI SSO authentication to OpenSSH
https://bugzilla.mindrot.org/show_bug.cgi?id=2495 Bug ID: 2495 Summary: add GSI GSSAPI SSO authentication to OpenSSH Product: Portable OpenSSH Version: 7.1p1 Hardware: amd64 OS: Linux Status: NEW Severity: enhancement Priority: P5 Component: Kerberos support Assignee:
2013 May 09
1
Crossrealm Kerberos problems
I am running dovecot 2.1.7 on Debian Squeeze 64 bit, config information at the end of the email. I am working on a Kerberos/GSSAPI based setup that requires cross-realm authentication. I have regular GSSAPI working, I can log in using pam_krb5 with password based logins or with the GSSAPI support when using a kerberos ticket in the default realm. However, when I attempt to authenticate using
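Two checks that usually matter in a setup like this (the keytab path and setting names shown are the common Dovecot 2.x ones, not taken from the poster's config): that the imap service principal is actually in the keytab Dovecot reads, and that the GSSAPI-related settings are what you think they are.

    # Confirm the imap/host@REALM principal is present in the keytab
    klist -k /etc/dovecot/dovecot.keytab
    # Show the effective GSSAPI-related auth settings
    doveconf -n | grep -E 'auth_(mechanisms|gssapi_hostname|krb5_keytab|default_realm)'

The cross-realm trust itself (e.g. [capaths] in krb5.conf) also has to be in place on the server, but that lives outside Dovecot's configuration.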
2007 Aug 09
5
Unremovable file in ZFS filesystem.
I managed to create a link in a ZFS directory that I can't remove. Session as follows: # ls bayes.lock.router.3981 bayes_journal user_prefs # ls -li bayes.lock.router.3981 bayes.lock.router.3981: No such file or directory # ls bayes.lock.router.3981 bayes_journal user_prefs # /usr/sbin/unlink bayes.lock.router.3981 unlink: No such file or directory # find . -print
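Before assuming the directory entry itself is corrupt, it is worth ruling out a non-printable character in the name; a sketch of the usual checks, where the inode number is a placeholder for whatever ls actually reports:

    # Print inode numbers and make non-printable characters in names visible
    ls -lib
    # If the name only looks like bayes.lock.router.3981, remove it by inode
    # (replace 12345 with the inode number printed above)
    find . -inum 12345 -exec rm {} \;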
2007 Jul 25
3
Any fix for zpool import kernel panic (reboot loop)?
My system (a laptop with ZFS root and boot, SNV 64A) on which I was trying OpenSolaris now has the zpool-related kernel panic reboot loop. Booting into failsafe mode or another Solaris installation and attempting 'zpool import -F rootpool' results in a kernel panic and reboot. A search shows this type of kernel panic has been discussed on this forum over the last year.
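One commonly suggested way to at least break the reboot loop (a sketch from a failsafe or alternate boot environment, not a recovery guarantee for this pool) is to stop the pool from being imported automatically at boot and, when importing by hand, keep it under an alternate root:

    # With the broken install's root mounted at /a, move the cache file aside
    # so the pool is not auto-imported on the next boot
    mv /a/etc/zfs/zpool.cache /a/etc/zfs/zpool.cache.bad
    # Later, attempt a manual import under an alternate root
    zpool import -f -R /a rootpool

If the manual import still panics, as reported here, this only stops the loop rather than repairing the pool.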
2007 Jan 10
1
Solaris 10 11/06
Now that Solaris 10 11/06 is available, I wanted to post the complete list of ZFS features and bug fixes that were included in that release. I'm also including the necessary patches for anyone wanting to get all the ZFS features and fixes via patches (NOTE: later patch revisions may already be available): Solaris 10 Update 3 (11/06) Patches sparc Patches * 118833-36 SunOS 5.10: