Displaying 20 results from an estimated 100 matches similar to: "savecore: warning: /kernel version mismatch: ..."
2003 Jun 26
1
changes in kernel affecting savecore/dumps ...
David gave me some suggestions to check out on the servers, but so far,
it's all drawing a blank ... I have two servers right now that are updated
to recent 4.8-STABLE kernels ... one was June 22nd, and the other was
upgraded June 20th ... both of them have crashed since that date, and both
of them tell me that they are unable to produce a core file, with the same
errors:
Jun 26 04:27:14 jupiter
2003 Jun 25
0
savecore no longer works ... /kernel version mismatch
'k, now one of my other machines is getting the same thing on a crash:
savecore: warning: /kernel version mismatch: "FreeBSD 4.8-STABLE #2: Fri Jun 20 18:34:14 ADT 2003 " and ""
It was working, last successful savecore on that machine was Jun 6th:
-rw------- 1 root wheel 4227792896 Jun 6 23:58 vmcore.1
-rw------- 1 root wheel 4227792896 Jun 6 10:37
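For mismatch errors like the one above, a first diagnostic step is to compare the version string compiled into the kernel binary with what savecore reports; an empty second string ("") usually means savecore could not read a valid header from the dump device at all. A minimal sketch, assuming the classic FreeBSD 4.x layout (the device name is a hypothetical example, not from the original posts):

```shell
# Version string compiled into the booted kernel
strings /kernel | grep 'FreeBSD 4.8-STABLE'
# Make sure a dump device is actually armed (device name is an example)
dumpon -v /dev/ad0s1b
# After the next panic and reboot, collect the dump by hand:
savecore /var/crash
```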
2007 Jan 29
3
dumpadm and using dumpfile on zfs?
Hi All,
I'd like to set up dumping to a file. This file is on a mirrored pool
using zfs. It seems that the dump setup doesn't work with zfs. This
worked for both a standard UFS slice and an SVM mirror using zfs.
Is there something that I'm doing wrong, or is this not yet supported on
ZFS?
Note this is Solaris 10 Update 3, but I don't think that should
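Dumping to a plain file on ZFS was indeed not supported on Solaris 10 Update 3; on releases that do support ZFS dump devices, the usual approach is a zvol rather than a file. A hedged sketch, with hypothetical pool and volume names:

```shell
# Carve out a 4 GB zvol to act as the dump device (names are examples)
zfs create -V 4G rpool/dump
# Point the dump configuration at the zvol's raw device path
dumpadm -d /dev/zvol/dsk/rpool/dump
# Show the resulting dump configuration
dumpadm
```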
2001 Mar 16
1
ssh_exchange_identification: Connection closed by remote host
hello,
i built an ssh 2.5.1p2 package for solaris. it's installed into
/usr/local (with sysconfdir=/etc) on an administrative host with write
access to /usr/local. other hosts nfs mount /usr/local. i had a
script copy the following files generated from the package install
into each host's /etc directory:
primes ssh_prng_cmds sshd_config ssh_config
then ran
2013 Feb 14
2
i386: vm.pmap kernel local race condition
Hi!
I've got FreeBSD 8.3-STABLE/i386 server that can be reliably panicked
using just 'squid -k rotatelog' command. It seems the system suffers
from the problem described here:
http://cxsecurity.com/issue/WLB-2010090156
I could not find any FreeBSD Security Advisory containing a fix.
My server has 4G physical RAM (about 3.2G available) and runs
squid (about 110M VSS) with 500
2008 May 19
2
Suspend/resume on IBM X31
Greetings
I am having trouble with suspend/resume on my Thinkpad X31, running
7.0-STABLE as of April 23. Any help would be appreciated.
First problem: When I run "acpiconf -s3" from multiuser mode, the
system suspends immediately, without executing /etc/rc.suspend (which
has mode 755); then on resume, I get a panic.
Second problem: When the system panics, I don't get a dump (or
2009 Nov 02
2
How do I protect my zfs pools?
Hi,
I may have lost my first zpool, due to ... well, we're not yet sure.
The 'zpool import tank' causes a panic -- one which I'm not even
able to capture via savecore.
I'm glad this happened when it did.
At home I am in the process of moving all my data from a Linux NFS
server to OpenSolaris. It's something I'd been meaning to do
2009 Jan 13
4
zfs null pointer deref, getting data out of single-user mode
My home NAS box, that I'd upgraded to Solaris 2008.11 after a series of
crashes leaving the smf database damaged, and which ran for 4 days
cleanly, suddenly fell right back to where the old one had been before.
Looking at the logs, I see something similar to (this is manually
transcribed to paper and retyped):
Bad trap: type=e (page fault) rp=f..f00050e3250 addr=28 module ZFS null
2012 Feb 04
2
zpool fails with panic in zio_ddt_free()
Hello all,
I am not sure my original mail got through to the list
(I haven't received it back), so I attach it below.
Anyhow, now I have a saved kernel crash dump of the system
panicking when it tries to - I believe - deferred-release
the corrupted deduped blocks which are no longer referenced
by the userdata/blockpointer tree.
As I previously wrote in my thread on unfixeable
2007 Sep 18
5
ZFS panic in space_map.c line 125
One of our Solaris 10 update 3 servers panicked today with the following error:
Sep 18 00:34:53 m2000ef savecore: [ID 570001 auth.error] reboot after
panic: assertion failed: ss != NULL, file:
../../common/fs/zfs/space_map.c, line: 125
The server saved a core file, and the resulting backtrace is listed below:
$ mdb unix.0 vmcore.0
> $c
vpanic()
0xfffffffffb9b49f3()
space_map_remove+0x239()
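A typical first pass over a saved Solaris crash dump like this one, as a sketch (the dcmds are standard mdb ones; the crash directory path is the conventional savecore location):

```shell
cd /var/crash/`hostname`       # where savecore leaves unix.N / vmcore.N
mdb unix.0 vmcore.0 <<'EOF'
::status                       # panic string, OS release, dump reason
::stack                        # panicking thread's stack, like $c above
::msgbuf                       # console messages leading up to the panic
EOF
```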
2007 Nov 09
3
Major problem with a new ZFS setup
We recently installed a 24 disk SATA array with an LSI controller attached
to a box running Solaris X86 10 Release 4. The drives were set up in one
big pool with raidz, and it worked great for about a month. On the 4th, we
had the system kernel panic and crash, and it's now behaving very badly.
Here's what diagnostic data I've been able to collect so far:
In the
2006 Apr 19
0
AHC Panic
I've finally been able to capture the panic, as now it occurs even with DDB
configured. Of the six machines I have running 6.1-RC (CVSupped today),
this is the only one that does this.
/boot/kernel/kernel text=0x30c488 data=0x3b6a0+0x3170c syms=[0x4+0x46430+0x4+0x58da4]
no such file or directory
-
Hit [Enter] to boot immediately, or any other key for command prompt.
Booting
2003 Jul 02
0
union_lookup panics ...
grep union /var/log/messages
Jul 2 12:53:01 jupiter savecore: reboot after panic: union_lookup returning . (0xc68e9e90) not same as startdir (0xc5e062c0)
Jul 2 14:35:07 jupiter savecore: reboot after panic: union_lookup returning . (0xbf6fee90) not same as startdir (0xbb6d58c0)
had two of them today, dumping nice cores ... I'm suspecting it's someone
trying to remove a file that is
2009 Jan 27
5
Replacing HDD in x4500
The vendor wanted to come in and replace an HDD in the 2nd X4500, as it
was "constantly busy", and since our x4500 has always died miserably in
the past when a HDD dies, they wanted to replace it before the HDD
actually died.
The usual was done, HDD replaced, resilvering started and ran for about
50 minutes. Then the system hung, same as always, all ZFS related
commands would just
2012 Nov 27
6
How to clean up /
Hello.
I recently upgraded to 9.1-RC3, everything went fine, however the / partition is about to get full. I'm really new to FreeBSD so I don't know what files can be deleted safely.
# find -x / -size +10000 -exec du -h {} \;
16M /boot/kernel/kernel
60M /boot/kernel/kernel.symbols
6.7M /boot/kernel/if_ath.ko.symbols
6.4M /boot/kernel/vxge.ko.symbols
9.4M
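On the cleanup question: find's -size +10000 counts 512-byte blocks, so the command above lists files larger than roughly 5 MB, and the .ko.symbols/kernel.symbols files it turns up are debug data that are commonly the first candidates for removal. A small self-contained demonstration of how the size predicate behaves (the file names are throwaway examples created in a scratch directory):

```shell
tmp=$(mktemp -d)
# ~6 MB file: 12288 512-byte blocks, so it exceeds -size +10000
dd if=/dev/zero of="$tmp/kernel.symbols" bs=1024 count=6000 2>/dev/null
# ~10 KB file: well under the threshold, so it is not listed
dd if=/dev/zero of="$tmp/loader.conf" bs=1024 count=10 2>/dev/null
find "$tmp" -size +10000 -exec du -h {} \;
rm -rf "$tmp"
```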
2003 Jun 07
0
FFS related panic in 4.8-STABLE
Every few days now I get a panic like the one shown in the attached
backtrace. I've made sure my filesystem is clean with fsck and even
turned off write caching on my IDE drive. Maybe someone here can
figure out what is wrong. Let me know if you need more information.
Michael
-------------- next part --------------
root@taco /usr/src/sys/compile/ZOE> gdb -k -c /opt/savecore/vmcore.0
2009 Dec 20
0
On collecting data from "hangs"
There seems to be a rash of posts lately where people are resetting or
rebooting without getting any data, so I thought I'd post a quick
overview on collecting crash dumps. If you think you've got a hang
problem with ZFS and you want to gather data for someone to look at,
then here are a few steps you should take.
If you already know all about gathering crash dumps on
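As a rough sketch of the kind of steps such an overview covers on Solaris (the commands are standard, but check your release's man pages before relying on them):

```shell
dumpadm                        # confirm dump device and savecore directory
dumpadm -y -d swap             # run savecore on boot, dump to swap
# For a hard hang, force a panic so a dump gets written, e.g. from kmdb:
#   $<systemdump
# After reboot, retrieve the dump by hand if it was not saved automatically:
savecore -v /var/crash/`hostname`
```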
2003 Sep 29
3
FreeBSD 4.9 RC1 (more bad news)
A few hours ago I downloaded .../i386/ISO-IMAGES/4.9-RC1-i386-disc1.iso
and made kern/mfsroot floppies from it. If I enable my ICH5R SATA
controller in "native" mode, the 4.9-RC1 GENERIC kernel hangs solidly
(only the system reset button can unwedge it) during device configuration.
The last line of bootstrap monologue written by the kernel is:
plip0: <PLIP network
2008 Jul 22
3
6.3-RELEASE-p3 recurring panics on multiple SM PDSMi+
We have 10 SuperMicro PDSMi+ 5015M-MTs that are panicking every few
days. This started shortly after upgrade from 6.2-RELEASE to
6.3-RELEASE with freebsd-update.
Other than switching to a debugging kernel, a little sysctl tuning,
and patching with freebsd-update, they are stock. The debugging
kernel was built from source that is also being patched with
freebsd-update.
These systems are
2003 May 14
0
System totally borked after installworld and mergemaster at 16:20:03 MSK
Well, 4 days after previous upgrade i cvs'ed RELENG_4 from
local cvs tree (updated hourly), made buildworld buildkernel
installkernel, rebooted into single user, made installworld
and mergemaster.
Mergemaster updated newsyslog.conf, syslogd.conf, ok, rebooted...
Wow...
All scripts from /usr/local/etc/rc.d refused to load.
And the system is in "Amnesiac" mode
Attempt to run