Displaying 20 results from an estimated 400 matches similar to: "Crash dump in umount"
2003 Jun 23 | 2 | Kernel core dump in recent 4.8-STABLE
Today my system coredumped (4.8-STABLE from Saturday); I believe it's somehow
X11 related:
X11 crashed first (signal 11). I was running it as root (I know I shouldn't).
I didn't think about it and restarted X11. While it was starting, I had a look
at the console; there was a bright white message: issignal.
This shows up at X11 startup.
Then the system coredumped.
Below is more
2003 Apr 22 | 0 | kmem_map too small: 260046848 total allocated
After about a day and a half or so of uptime, I'm getting the
aforementioned panic on the server ... better than having it hang solid,
but right now I'm not sure if this is replacing it, or just one being
triggered earlier than the other ...
On a first scan through Google, I came across some posts talking about
NMBCLUSTERS ... since it's at the same settings as my other server (the
default)
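For context, a minimal sketch of how mbuf cluster usage is usually inspected and how NMBCLUSTERS is set on 4.x; the value 32768 below is only an illustrative placeholder, not a recommendation from this thread:
# netstat -m
# sysctl kern.ipc.nmbclusters
and, if the cluster limit is to be changed, a line like the following in the kernel config before rebuilding:
options NMBCLUSTERS=32768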
2003 Aug 07 | 0 | understanding a panic / crash dump
Just trying to understand if anything might be going on with this crash
dump, or if it's just faulty hardware? dmesg at the end
---Mike
# gdb -k /kernel.debug vmcore.0
GNU gdb 4.18 (FreeBSD)
Copyright 1998 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type
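For reference, a minimal sketch of the usual next steps once gdb has the kernel and core loaded (standard gdb commands; the frame number is a placeholder):
(kgdb) bt
(kgdb) frame 8
(kgdb) list
(kgdb) info locals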
2003 Sep 29 | 4 | panics on 24 hour boundaries
Hi stable, nice to see you again. I was one of those guys who was seeing
constant panics on 24 hour boundaries but couldn't provide a backtrace due
to the ar device not taking a dump. I installed a dedicated drive just to
take the dump, and then didn't have a panic for a couple weeks. Now I am
back, and I have traces to share.
The first two, from 2003-09-27 and 2003-09-28
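For context, a minimal sketch of the rc.conf setup that sends panics to a dedicated dump device and has savecore(8) pick them up at boot; /dev/ad1b is only a placeholder for the swap/dump partition on the dedicated drive:
dumpdev="/dev/ad1b"    # placeholder: dump/swap partition on the dedicated drive
dumpdir="/var/crash"   # where savecore(8) writes kernel.N / vmcore.N at boot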
2006 Mar 13 | 2 | panic: ffs_valloc: dup alloc
I get the above panic after nfs clients attach to this nfs server and
begin read/write ops on it after an unclean shutdown. I've fsck'ed the
fs, and it marks it as clean, but I get this every time. It's an NFS
share of a GEOM stripe (about 2TB).
mode = 0100600, inum = 58456203, fs = /mnt
panic: ffs_valloc: dup alloc
I do have dumps from two crashes so far.
This is
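One step often worth trying with this symptom is a forced full fsck, since a preen pass will skip a filesystem that is already marked clean; a sketch only, with /dev/stripe/st0 standing in as a placeholder for whatever device backs /mnt:
# umount /mnt
# fsck -f -y /dev/stripe/st0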
2003 Jun 26 | 5 | apache panics on a recent 4.8-STABLE
Yesterday I did a couple of updates to the latest 4.8-STABLE.
After that, the two boxes continue to panic as soon as Apache (1.3
from the ports, also freshly recompiled; 2.0.x seems NOT to hang) starts.
I don't know if it is related to the other thread: "Kernel core dump in
recent 4.8-STABLE", but it is easily reproducible by cvsupping to today's
-STABLE and then running
2003 Oct 31 | 2 | vinum question: how could one correctly delete vinum module?
Dear colleagues,
[I'm under 4-STABLE]
What is the correct sequence to delete an existing vinum module (for example,
raid10) and *not* use the -f flag for vinum?
In my case t is a raid10 volume:
vinum -> l -r t
V t State: up Plexes: 2 Size: 8191 MB
P t.p0 S State: up Subdisks: 2 Size: 8191 MB
P t.p1 S State:
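One sequence sometimes suggested is to stop the object first and then remove it recursively; this is a sketch only, assuming the volume is unmounted, and whether vinum accepts it without -f depends on the object states:
vinum -> stop t
vinum -> rm -r t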
2003 Aug 04 | 4 | bootstrapping vinum root
Well, colleagues, I'm stuck a bit.
I tried many different ways to set up a system with vinum root (the only
reference I found yet, besides the old "bootstrapping vinum" article, is Joerg's
commit message: http://freebsd.rambler.ru/bsdmail/cvs-all_2003/msg01225.html).
I failed. I have a 4-STABLE system set up on ad0, and tried to set up a pair of
drives for the new system on ad2 and ad3 (actually,
2003 Jun 12 | 0 | panic possibly related to soft updates? (4.8-STABLE, Jun 12 2003)
Hello list,
I have been fighting this problem for a few days now. I have changed memory
and opened the case and monitored for heat. I have been getting the same
panic about every 12 to 24 hours. I can let the system sit idle, or run it
under a heavy load (cpu and disk), but the panics don't seem to be related to
system load. It looks to me like a dangling pointer in
softdep_update_inodeblock,
2006 Apr 07 | 1 | PAE and gvinum
Hi all,
I got a machine with 8GB of RAM and plenty of disk space. I need gvinum to
manage a big number of file systems, but a PAE-enabled kernel does not compile
modules. I couldn't figure out how to get vinum statically compiled into the
kernel, if that is the way to go. I am running 6-STABLE.
Please advise on how to get PAE kernel and gvinum working together!
TIA,
Stoyan
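A sketch of the usual direction here: build a custom kernel that starts from the stock PAE configuration and compiles the vinum GEOM class in statically; GEOM_VINUM is assumed to be the option name listed in NOTES for 6.x:
# cp /usr/src/sys/i386/conf/PAE /usr/src/sys/i386/conf/MYKERNEL
# echo 'options GEOM_VINUM' >> /usr/src/sys/i386/conf/MYKERNEL
# cd /usr/src && make buildkernel KERNCONF=MYKERNEL && make installkernel KERNCONF=MYKERNEL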
2003 Jul 20 | 2 | mismatching vinum configurations
Hi,
I had a power failure, and the on-disk configuration for vinum went
bizarre. The logs read from disks are at http://biaix.org/pk/debug/
(log.$DEVICE files). The logs in da0 (barracuda) are the ones obviously
wrong; I'm pretty sure the others are ok. Is this a 'virtually' dead
drive? Can I force vinum to use the other drives' configuration? What's
the least traumatic
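A heavily hedged sketch of one direction vinum(8)'s read command offers: pull the configuration only from the drives believed to be good and leave the suspect drive out (device names are placeholders, and saving copies of the on-disk configs first would be prudent):
vinum -> read /dev/ad2 /dev/ad4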
2003 Jun 29 | 1 | vinum drive referenced / disklabel inconsistency
I am trying to set up vinum on a box using 4.8 RELENG_4 (as of about a
week ago snapshot). This box was running 4.6 w/ vinum on the same hard
drives for the last 4 months wonderfully... but since it is my
current 'scratch/backup' box, I just reinstalled with -STABLE.
# uname -a
FreeBSD polya.axista.com 4.8-STABLE FreeBSD 4.8-STABLE #22: Tue Jun 24
17:01:07 EDT 2003
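For reference, a minimal sketch of the disklabel side of this: vinum expects its drive to live on a partition whose fstype is set to vinum, edited with disklabel -e (device, partition letter and sizes below are placeholders):
# disklabel -e ad2s1
  h: 39999969 16 vinum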
2003 Apr 11 | 1 | Vinum crash, advice needed
I made an array out of 3 IBM 20 gigs and 3 Maxtor 20 gigs.
I used striping to make one big drive.
A couple of days ago I was copying a movie onto it and the computer
decided to reboot.
When it came back up I was greeted with this message.
Can someone please tell me if I can recover somehow? Or should I just
rebuild the array? Or is one of my disks physically bad?
Detects all 7 drives (1 10gig
2003 Jul 29 | 1 | kern/53717: 4.8-RELEASE kernel panic (page fault)
Some more crashes of 4.8-RELEASE.
Let's see what I have today:
# ls -l /var/crash
total 1576676
-rw-r--r-- 1 root wheel 2 Jul 30 13:31 bounds
-rw-r--r-- 1 root wheel 2193252 Jun 25 17:30 kernel.0
-rw-r--r-- 1 root wheel 2193252 Jul 4 00:08 kernel.1
-rw-r--r-- 1 root wheel 2193252 Jul 15 19:28 kernel.2
-rw-r--r-- 1 root wheel 2193252 Jul 16 17:50 kernel.3
2003 Jun 28 | 1 | 'vinum list' weird output
Hi,
I'm not sure this is supposed to happen (my computer rebooted halfway
through adding another drive to a volume):
[long lines needed]
(02:44:50 <~>) 0 $ sudo vinum ld
D storage State: up Device /dev/ad0d Avail: 38166/38166 MB (100%)
D worthless State: up Device /dev/ad2d Avail: 38038/38038 MB (100%)
D barracuda
2003 Aug 11 | 1 | vinum (root on vinum too) throw_rude_remark crash: endless loop
Dear colleagues,
experimenting with vinum stripes and mirrors, I got myself stuck with the
following:
panic: throw_rude_remark: called without config lock
(from vinumconfig.c:throw_rude_remark:103)
The system before this panic has two 160G drives with two vinum partitions on
each (one for mirrored root and one for the rest with swap between); these were
ad0 and ad2.
For the experiments,
2003 May 15 | 0 | panic under 4.8...?
Hi, all--
I've got a 1997 Dell XPS D300 which has been rock-solid over the years and
which I'd just upgraded via a PowerLeap iP3/T-1400C. The system seemed
stable for several days, so I cvsup'ed and updated this machine from
4.7p10 to 4.8-STABLE, only to get a panic a few hours later.
Can anyone make an educated guess as to whether the panic below is
related to this upgrade, or
2003 Jul 01 | 2 | Okay, looks like I might have a *good* one here ... inode hang
neptune# ps -M vmcore.1 -N kernel.debug -axl | grep inode | wc -l
961
and I have a vmcore to work on here !! :)
(kgdb) proc 99643
(kgdb) bt
#0 mi_switch () at machine/globals.h:119
#1 0x8014a1f9 in tsleep (ident=0x8a4ef600, priority=8, wmesg=0x80263d4a "inode", timo=0) at /usr/src/sys/kern/kern_synch.c:479
#2 0x80141507 in acquire (lkp=0x8a4ef600, extflags=16777280,
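A sketch of how that session might continue: the tsleep ident is the address of the lock being waited on, so casting it can show the holder; the lk_lockholder field name is assumed from the 4.x lockmgr lock structure:
(kgdb) print *(struct lock *)0x8a4ef600
(kgdb) print ((struct lock *)0x8a4ef600)->lk_lockholder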
2003 Apr 05 | 1 | Weird EINVAL reading *large* file
Hi All,
I'm about to clean and re-initialize a Maxtor 160GB disk that
shows a weird problem, but I think I've found a bug in RELENG_4_7
somewhere, so if preserving my disk can help conquer one more bug,
I'm happy to wait a couple of days and help solve it.
Here's the story:
While upgrading from 4.3-STABLE to 4.7p4 and at the same time
reorganizing my vinum volumes I copied
2013 Sep 12 | 1 | 9.2-RC1 panic at shutdown
Hello folks,
I have a panic at shutdown related to FUSE.
#0 doadump (textdump=<value optimized out>) at pcpu.h:234
234 pcpu.h: No such file or directory.
in pcpu.h
(kgdb) bt full
#0 doadump (textdump=<value optimized out>) at pcpu.h:234
No locals.
#1 0xffffffff8090d9a6 in kern_reboot (howto=260) at
/usr/src/sys/kern/kern_shutdown.c:449
_ep = (struct