Displaying 20 results from an estimated 800 matches similar to: "BAD TRAP with Crossbow Beta October 31 2008"
2008 Dec 28
2
zfs mount hangs
Hi,
System: Netra 1405, 4x450Mhz, 4GB RAM and 2x146GB (root pool) and
2x146GB (space pool). snv_98.
After a panic the system hangs on boot and manual attempts to mount
(at least) one dataset in single user mode, hangs.
The Panic:
Dec 27 04:42:11 base ^Mpanic[cpu0]/thread=300021c1a20:
Dec 27 04:42:11 base unix: [ID 521688 kern.notice] [AFT1] errID
0x00167f73.1c737868 UE Error(s)
Dec 27
2007 Jun 16
5
zpool mirror faulted
I have a strange problem with a faulted zpool (two way mirror):
[root@einstein;0]~# zpool status poolm
pool: poolm
state: FAULTED
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
poolm UNAVAIL 0 0 0 insufficient replicas
mirror UNAVAIL 0 0 0 corrupted data
c2t0d0s0 ONLINE 0
2007 Sep 19
3
ZFS panic when trying to import pool
I have a raid-z zfs filesystem with 3 disks. The disks were starting to have read and write errors.
The disks got so bad that I started to see trans_err. The server locked up and was reset. Now, when trying to import the pool, the system panics.
I installed the latest Recommended patch cluster on my Solaris U3 and also installed the latest kernel patch (120011-14).
But still when trying to do zpool import
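Import-panic reports like the two above usually start with the same triage steps. A minimal sketch, assuming a hypothetical pool name "tank"; note that read-only import only exists on builds much newer than the ones discussed in these posts:

```shell
# List pools visible for import without actually importing anything:
zpool import

# On newer builds only: import read-only so no log replay or
# transaction writes (common panic triggers) are attempted.
# The pool name "tank" is an assumption.
zpool import -o readonly=on tank

# Examine the exported pool's on-disk state without importing it:
zdb -e tank
```

These are diagnostic commands against real pool hardware, so they are shown as a transcript rather than a runnable script.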
2007 Apr 23
3
ZFS panic caused by an exported zpool??
Apr 23 02:02:21 SERVER144 offline or reservation conflict
Apr 23 02:02:21 SERVER144 scsi: [ID 107833 kern.warning] WARNING: /scsi_vhci/disk@g60001fe100118db00009119074440055 (sd82):
Apr 23 02:02:21 SERVER144 i/o to invalid geometry
Apr 23 02:02:21 SERVER144 scsi: [ID 107833 kern.warning] WARNING: /scsi_vhci/disk@g60001fe100118db00009119074440055 (sd82):
Apr 23 02:02:21 SERVER144
2012 Apr 17
10
kernel panic during zfs import [UPDATE]
Hello everybody,
just to let you know what happened in the meantime:
I was able to open a Service Request at Oracle.
The issue is a known bug (Bug 6742788 : assertion panic at: zfs:zap_deref_leaf)
The bug has been fixed (according to Oracle support) since build 164, but there is no fix for Solaris 11 available so far (will be fixed in S11U7?).
There is a workaround available that works
2007 Sep 14
9
Possible ZFS Bug - Causes OpenSolaris Crash
I'd like to report the ZFS-related crash/bug described below. How do I go about reporting the crash and what additional information is needed?
I'm using my own very simple test app that creates numerous directories and files of randomly generated data. I have run the test app on two machines, both 64-bit.
OpenSolaris crashes a few minutes after starting my test app. The crash has occurred on
2007 Mar 21
4
HELP!! I can't mount my zpool!!
Hi all.
One of our servers had a panic and now can't mount the zpool anymore!
Here is what I get at boot:
Mar 21 11:09:17 SERVER142 ^Mpanic[cpu1]/thread=ffffffff90878200:
Mar 21 11:09:17 SERVER142 genunix: [ID 603766 kern.notice] assertion failed: ss->ss_start <= start (0x670000b800 <= 0x6700009000), file: ../../common/fs/zfs/space_map.c, line: 126
Mar 21 11:09:17 SERVER142
2007 Oct 10
6
server-reboot
Hi.
Just migrated to zfs on opensolaris. I copied data to the server using
rsync and got this message:
Oct 10 17:24:04 zetta ^Mpanic[cpu1]/thread=ffffff0007f1bc80:
Oct 10 17:24:04 zetta genunix: [ID 683410 kern.notice] BAD TRAP:
type=e (#pf Page fault) rp=ffffff0007f1b640 addr=fffffffecd873000
Oct 10 17:24:04 zetta unix: [ID 100000 kern.notice]
Oct 10 17:24:04 zetta unix: [ID 839527 kern.notice]
2007 Apr 30
4
B62 AHCI and ZFS
Hardware Supermicro X7DAE (AHCI BIOS) dual Intel Woodcrest processors, 6 x Western Digital Raptor SATA drives.
I have installed b62 running 64-bit successfully on a PATA drive. The BIOS is configured to access the SATA drives in native mode using the AHCI BIOS.
I have 6 SATA II drives accessed via the Solaris AHCI driver. I have created a ZFS file system across all 6 drives. This works fine until
2008 May 26
2
indiana as nfs server: crash due to zfs
Hello all,
I have Indiana freshly installed on a Sun Ultra 20 machine. It only acts as an NFS server. During one night the kernel crashed, and I got these messages:
"
May 22 02:18:57 ultra20 unix: [ID 836849 kern.notice]
May 22 02:18:57 ultra20 ^Mpanic[cpu0]/thread=ffffff0003d06c80:
May 22 02:18:57 ultra20 genunix: [ID 603766 kern.notice] assertion failed: sm->sm_space == 0 (0x40000000 ==
2010 Feb 08
5
zfs send/receive : panic and reboot
<copied from opensolaris-discuss as this probably belongs here.>
I kept on trying to migrate my pool with children (see previous threads) and had the (bad) idea to try the -d option on the receive part.
The system reboots immediately.
Here is the log in /var/adm/messages
Feb 8 16:07:09 amber unix: [ID 836849 kern.notice]
Feb 8 16:07:09 amber ^Mpanic[cpu1]/thread=ffffff014ba86e40:
Feb 8
2007 Feb 13
2
zpool export consumes whole CPU and takes more than 30 minutes to complete
Hi.
T2000 1.2GHz 8-core, 32GB RAM, S10U3, zil_disable=1.
Command 'zpool export f3-2' has been hung for 30 minutes now and is still going.
Nothing else is running on the server. I can see one CPU being 100% in SYS like:
bash-3.00# mpstat 1
[...]
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 0 0 67 220 110 20 0 0 0 0
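For CPU-spin reports like this one, it helps to pull the pegged CPUs out of the mpstat noise automatically. A small sketch (the helper name and the 90% threshold are arbitrary choices, not from the thread):

```shell
# Hypothetical helper: flag CPUs that mpstat reports at >= 90% in SYS.
# mpstat's "sys" column is the 14th field; the header line is skipped.
flag_busy_sys() {
  awk 'NR > 1 && $14 >= 90 { print "cpu " $1 " sys=" $14 "%" }'
}

# Usage (on the affected box):
#   mpstat 1 1 | flag_busy_sys
```

With the poster's output, this would single out the CPU sitting at 100% in SYS while `zpool export` spins.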
2006 Jun 13
4
ZFS panic while mounting lofi device?
I believe ZFS is causing a panic whenever I attempt to mount an iso image (SXCR build 39) that happens to reside on a ZFS file system. The problem is 100% reproducible. I'm quite new to OpenSolaris, so I may be incorrect in saying it's ZFS's fault. Also, let me know if you need any additional information or debug output to help diagnose things.
Config:
bash-3.00#
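The config section is truncated, but the reported setup can be reconstructed with the standard lofi workflow (the image path here is an assumption, not from the post):

```shell
# Attach the ISO (which resides on a ZFS filesystem) to a lofi device;
# lofiadm prints the device it allocated, e.g. /dev/lofi/1.
lofiadm -a /tank/images/sxcr-b39.iso

# Mount the lofi device as an HSFS (ISO 9660) filesystem;
# this is the step at which the reporter's panic would occur.
mount -F hsfs -o ro /dev/lofi/1 /mnt
```

Shown as a transcript, since the commands need a live Solaris box and a real image.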
2004 Jul 31
3
one extention, multiple phones
Is it possible to get a few 7960's and Asterisk to allow all
of the 7960 phones to use one extension that can only be used
by one person at a time, and have it indicate on the other 7960's
when one of the others has the line engaged? Basically, so that
I can set up a rule so when an incoming call comes from IAX it
diverts to this extension, rings the extension (thus all
phones), and allows me to
2007 Aug 26
3
Kernel panic receiving incremental snapshots
Before I open a new case with Sun, I am wondering if anyone has seen this
kernel panic before? It happened on an X4500 running Sol10U3 while it was
receiving incremental snapshot updates.
Thanks.
Aug 25 17:01:50 ldasdata6 ^Mpanic[cpu0]/thread=fffffe857d53f7a0:
Aug 25 17:01:50 ldasdata6 genunix: [ID 895785 kern.notice] dangling dbufs (dn=fffffe82a3532d10, dbuf=fffffe8b4e338b90)
Aug 25 17:01:50
2006 Dec 07
6
cannot load kqemu modul on solaris 5.10 3/05
Hi all,
I want to load the precompiled kqemu module on my Solaris 5.10 3/05, but it doesn't work.
MAKE
---
-bash-3.00# make install
cp kqemu-solaris-i386 kqemu
/usr/sbin/install -f /usr/kernel/drv -m 755 -u root -g sys kqemu
new owner is root
kqemu installed as /usr/kernel/drv/kqemu
cp kqemu-solaris-x86_64 kqemu
/usr/sbin/install -f /usr/kernel/drv/amd64 -m 755 -u root -g sys kqemu
new owner is
2006 Oct 26
2
experiences with zpool errors and glm flipouts
Tonight I've been moving some of my personal data around on my
desktop system and have hit some on-disk corruption. As you may
know, I'm cursed, and so this had a high probability of ending badly.
I have two SCSI disks and use live upgrade, and I have a partition,
/aux0, where I tend to keep personal stuff. This is on an SB2500
running snv_46.
The upshot is that I have a slice
2012 Jun 06
24
Occasional storm of xcalls on segkmem_zio_free
So I have this dual 16-core Opteron Dell R715 with 128G of RAM attached
to a SuperMicro disk enclosure with 45 2TB Toshiba SAS drives (via two
LSI 9200 controllers and MPxIO) running OpenIndiana 151a4 and I'm
occasionally seeing a storm of xcalls on one of the 32 VCPUs (>100000
xcalls a second). The machine is pretty much idle, only receiving a
bunch of multicast video streams and
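A storm like this is usually attributed by asking DTrace which kernel code paths are generating the cross-calls; a minimal sketch (run as root on the affected machine; this command is an illustration, not quoted from the thread):

```shell
# Aggregate kernel stacks by how many cross-calls they trigger;
# let it run a few seconds, then Ctrl-C prints the counts.
dtrace -n 'sysinfo:::xcalls { @[stack()] = count(); }'
```

The busiest stack typically points straight at the subsystem (here, segkmem_zio_free) responsible for the xcal spike seen in mpstat.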
2009 Sep 02
6
SXCE 121 Kernel Panic while installing NetBSD 5.0.1 PVM DomU
Hi all!
I am running SXCE 121 on a dual quad-core X2200M2 (64 bit of course).
During an installation of a NetBSD 5.0.1 PVM domU, the entire machine
crashed with a kernel panic. Here's what I managed to salvage from
the LOM console of the machine:
Sep 2 18:55:19 glaurung genunix: /xpvd/xdb@41,51712 (xdb5) offline
Sep 2 18:55:19 glaurung genunix: /xpvd/xdb@41,51728 (xdb6) offline
2006 Sep 11
7
installing a pseudo driver in a Solaris DOM U and DOM U reboot
Hello,
on a v20z, we have as Dom0 a Solaris Xen on snv_44, 64-bit,
and we have as DomU a Solaris Xen on snv_44, 64-bit.
We then installed a pseudo driver in the Solaris Dom1 (Xen snv_44):
installation is OK and the driver works as expected.
But on reboot of Dom1, the driver is no longer
there (in modinfo, the driver is not found).
Is there something special to do after a pseudo driver installation in
a Solaris