Displaying 20 results from an estimated 120 matches similar to: "9.2PRERELEASE ZFS panic in lzjb_compress"
2013 Sep 27
1
lock order reversal in 10-alpha2
After booting from a 10-alpha2 disk I am seeing "lock order reversal"
messages show up from time to time; the current logs have 35 entries.
The machine normally runs 9.1 from a ZFS root, and I have set up a
separate disk (an eSATA case connected through a backplane port to an
onboard SATA port) onto which I installed 10-alpha amd64 on a UFS
partition, to test port building. I started by
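For reference, the lock-order-reversal reports come from the WITNESS code, and a few sysctls control how they are emitted; a minimal sketch of how to inspect them (commands assumed, not from the original post):

  sysctl debug.witness.watch      # 1 = report lock order reversals, 0 = off
  sysctl debug.witness.trace=1    # include a stack trace with each report
  dmesg | grep -A10 "lock order reversal"   # review the logged reports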
2007 May 02
41
gzip compression throttles system?
I just had a quick play with gzip compression on a filesystem and the
result was the machine grinding to a halt while copying some large
(.wav) files to it from another filesystem in the same pool.
The system became very unresponsive, taking several seconds to echo
keystrokes. The box is a maxed-out AMD QuadFX, so it should have plenty
of grunt for this.
Comments?
Ian
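For context, the setup being described is roughly the following; a hedged sketch with hypothetical pool and dataset names:

  zfs set compression=gzip tank/music   # gzip-6; gzip-1 through gzip-9 also accepted
  zfs get compression,compressratio tank/music
  cp /tank/other/*.wav /tank/music/     # gzip then runs in the write path, which
                                        # is what can starve the rest of the system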
2008 Nov 24
1
RELENG_7 panic under load: vm_page_unwire: invalid wire count: 0
A box with a fresh RELENG_7 panics under heavy network load (more than 50k connections).
The panics seem to be sendfile(2)-related, because with sendfile disabled in nginx I can't reproduce the problem.
The backtrace in all cases looks like this:
# kgdb kernel /spool/crash/vmcore.1
GNU gdb 6.1.1 [FreeBSD]
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General
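A sketch of the usual crash-dump workflow from here (the vmcore path is from the post, the commands are assumed):

  kgdb /boot/kernel/kernel /spool/crash/vmcore.1
  (kgdb) bt            # backtrace of the panicking thread
  (kgdb) frame 5       # pick a frame of interest, e.g. inside vm_page_unwire()
  (kgdb) info locals   # inspect the page and its wire count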
2010 Jun 28
23
zpool import hangs indefinitely (retry post in parts; too long?)
Now at 36 hours since zdb process start and:
PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
827 root 4936M 4931M sleep 59 0 0:50:47 0.2% zdb/209
Idling at 0.2% processor for nearly the past 24 hours... feels very stuck. Thoughts on how to determine where and why?
--
This message posted from opensolaris.org
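One way to answer the "where and why" question on Solaris (a sketch; the PID is from the prstat output above, the rest is assumed):

  pstack 827 | head -40    # user-level stacks of the 209 zdb threads
  prstat -mL -p 827 5      # per-thread microstates: sleeping vs. on-CPU
  echo "::pgrep zdb | ::walk thread | ::findstack" | mdb -k
                           # kernel-side stacks, to see what the threads block on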
2012 Nov 13
1
thread taskq / unp_gc() using 100% cpu and stalling unix socket IPC
Hi there
We have a pair of servers running FreeBSD 9.1-RC3 that act as a transparent layer-7 load balancer (relayd) and POP/IMAP proxy (dovecot). Only one of them is active at a given time; it's a failover setup. From time to time the active one gets into a state in which the 'thread taskq' thread uses up 100% of one CPU on its own, like here:
----
PID USERNAME PRI NICE SIZE
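A sketch of how such a spinning kernel thread is usually examined on FreeBSD (commands assumed, not from the original post):

  top -SH          # -S shows system processes, -H one line per thread
  procstat -kk 0   # kernel stacks of pid 0's threads; 'thread taskq'
                   # and unp_gc() should show up in the stack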
2012 Jul 13
2
stable/9 panic Bad tailq NEXT(0xffffffff80e52660->tqh_last) != NULL
Well, this is new. I haven't a clue what Dell has done on this R620, but
this popped up today after I did a boatload of BIOS updates and tried
to install stable/9 from our yahoo tree. If anyone sees the obvious
solution here, I'd love to figure it out.
found-> vendor=0x14e4, dev=0x165f, revid=0x00
domain=0, bus=2, slot=0, func=1
class=02-00-00, hdrtype=0x00, mfdev=1
2012 Feb 04
2
zpool fails with panic in zio_ddt_free()
Hello all,
I am not sure my original mail got through to the list
(I haven't received it back), so I attach it below.
Anyhow, now I have a saved kernel crash dump of the system
panicking when it tries to - I believe - deferred-release
the corrupted deduped blocks which are no longer referenced
by the userdata/blockpointer tree.
As I previously wrote in my thread on unfixeable
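A sketch of how to look at the dedup tables from userland before letting the kernel touch the damaged entries again (pool name hypothetical):

  zdb -e -DD tank      # DDT histogram on the exported pool: how many deduped blocks
  zdb -e -bcsvL tank   # walk all block pointers, verifying checksums,
                       # without the leak check (-L)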
2007 Sep 14
5
ZFS Space Map optimization
I have a huge problem with space maps on a Thumper. The space maps take over 3GB,
and write operations generate massive read operations.
Before every spa sync phase, ZFS reads the space maps from disk.
I decided to turn on compression for the pool (only for the pool, not the filesystems), and it helps.
Now the space maps, intent log, and spa history are compressed.
Now I'm thinking about disabling checksums. All
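A sketch of the change being described, with a hypothetical pool name (compression is set on the pool's root dataset only, and explicitly kept off on the child filesystems):

  zfs set compression=on tank         # root dataset: pool-wide metadata benefits
  zfs set compression=off tank/data   # keep the user filesystems uncompressed
  zfs get -r compression tank         # confirm what inherits what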
2012 Sep 21
3
tws bug ? (LSI SAS 9750)
Hi,
I have been trying out a nice new tws controller and decided to enable
debugging in the kernel and run some stress tests. With a regular
GENERIC kernel, it boots up fine. But with debugging, it panics on
boot. Anyone know what's up? Is this something that should be sent
directly to LSI?
pcib0: <ACPI Host-PCI bridge> port 0xcf8-0xcff on acpi0
pci0: <ACPI PCI bus> on pcib0
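For context, "enable debugging in the kernel" on FreeBSD usually means a config along these lines (a sketch; the option set is assumed, not taken from the poster's actual config):

  include GENERIC
  ident   DEBUG
  options INVARIANTS         # extra run-time consistency checks
  options INVARIANT_SUPPORT
  options WITNESS            # lock order verification
  options WITNESS_SKIPSPIN   # skip spin mutexes to cut the overhead
  options DEBUG_LOCKS        # record lockmgr lock holders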
2010 Aug 30
5
pool died during scrub
I have a bunch of Sol10U8 boxes with ZFS pools, almost all raidz2 8-disk
stripes. They're all Supermicro-based with retail LSI cards.
I've noticed a tendency for things to go a little bonkers during the
weekly scrub (they all scrub over the weekend), and that's when I'll
lose a disk here and there. OK, fine, that's sort of the point, and
they're
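The weekend scrubs described are typically just a root cron entry plus a status check afterwards; a sketch with a hypothetical pool name:

  # root crontab: scrub every Saturday at 22:00
  0 22 * * 6 /usr/sbin/zpool scrub tank
  # afterwards, look for the disks that went bonkers:
  zpool status -v tank   # scrub progress plus per-device read/write/cksum errors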
2005 Oct 19
2
Automatic rounding of values after factors, converted to numeric, are multiplied by a real number
I am wondering if someone might have any suggestions about my issue.
I have the following code:
wgts<-aggregate(subset(lendata,select=c(Length)),
  list(lendata$Cruise,lendata$Station,lendata$Region,lendata$Total),mean)
wgts<-wgts[order(wgts$Group.3,wgts$Group.1,wgts$Group.1),]
names(wgts)<-c("Cruise","Station","Region","Total","MLen")
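A hedged guess at the usual culprit here: calling as.numeric() on a factor returns the internal level codes rather than the original values, which then look "rounded" after multiplication. A minimal R illustration (data made up):

  f <- factor(c(2.5, 10, 2.5))
  as.numeric(f)                  # 1 2 1        -- level codes, not the values
  as.numeric(as.character(f))    # 2.5 10.0 2.5 -- the actual values
  as.numeric(levels(f))[f]       # same result, and faster on long vectors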
2003 May 23
1
Opening a file in mode "r+" or "r+b"
While using the file functions I have found a few more issues (on both Unix
and Windows), i.e.:
R documentation:
In the "close" description (base package) we see that among the possible
values for the mode 'open', the "r+" and "r+b" values are repeated, and are
incorrect the second time. The second set actually corresponds to "w+" /
"w+b"; see
2014 Apr 05
0
[PATCH] Use EVP_Digest
Hi,
It would be preferable to use EVP_Digest for one-shot digest calculation:
- one calloc/free pair less
- EVP_Digest properly sets the one-shot flag (certain hardware accelerators
work only if the flag is set)
Please consider applying the following patch:
diff -ru openssh-6.6p1.orig/digest-openssl.c openssh-6.6p1/digest-openssl.c
--- openssh-6.6p1.orig/digest-openssl.c 2014-02-04 02:25:45.000000000
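For reference, the one-shot call the patch switches to has this shape; a self-contained sketch, not part of the patch (the input string is made up; EVP_Digest() and EVP_sha256() are standard OpenSSL, link with -lcrypto):

  #include <openssl/evp.h>
  #include <stdio.h>
  #include <string.h>

  int main(void) {
      const char *data = "hello";          /* hypothetical input */
      unsigned char md[EVP_MAX_MD_SIZE];
      unsigned int mdlen;
      /* one call replaces the ctx-create / DigestInit / DigestUpdate /
       * DigestFinal / ctx-destroy sequence -- and sets the one-shot flag */
      if (!EVP_Digest(data, strlen(data), md, &mdlen, EVP_sha256(), NULL))
          return 1;
      for (unsigned int i = 0; i < mdlen; i++)
          printf("%02x", md[i]);
      printf("\n");
      return 0;
  }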
2013 Jul 01
1
ZFS Panic after freebsd-update
Hello,
I have not had much time to research this problem yet, so please let me
know what further information I might be able to provide.
This weekend I attempted to upgrade a computer from 8.2-RELEASE-p3 to 8.4
using freebsd-update. After I rebooted to test the new kernel, I got a
panic. I had to take a picture of the screen. Here's a condensed version:
panic: page fault
cpuid = 1
KDB:
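For context, the upgrade path being described is the standard freebsd-update one (a sketch; the release names are from the post):

  freebsd-update -r 8.4-RELEASE upgrade   # fetch and merge the new release
  freebsd-update install                  # install the new kernel, then reboot
  freebsd-update install                  # after the reboot: install the new world
  # if the new kernel panics, the old one is kept as /boot/kernel.old;
  # at the loader prompt: boot kernel.old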
2009 Feb 26
13
o2dlm mle hash patches - round 2
The changes from the last drop are:
1. Patch 11 removes struct dlm_lock_name.
2. Patch 12 is an unrelated bugfix. Actually, it is related to a bugfix
that we are currently retracting in mainline. The patch may need more testing:
while I did hit the condition in my testing, Marcos hasn't. I am sending it
because it can be queued for 2.6.30, which gives us more time to test.
3. Patch 13 will be useful
2013 Jun 30
1
locks under printf(9) and WITNESS = panic?
When booting stable/9 with a debug kernel with WITNESS
enabled and verbose logging, I get the following panic.
It seems very much like the discussion from a year back on
current: http://lists.freebsd.org/pipermail/freebsd-current/2012-January/031375.html
Any ideas?
uhub1: <Intel EHCI root HUB, class 9/0, rev 2.00/1.00, addr 1> on usbus1
uhub0: 2 ports with 2 removable, self powered
uhub1: 2
2009 Jan 24
4
panic in callout_reset: bad link in callwheel
System: FreeBSD 7.1-STABLE i386 (revision 187025)
Panic message:
kernel trap 12 with interrupts disabled
Fatal trap 12: page fault while in kernel mode
fault virtual address = 0xd2006ad0
fault code = supervisor write, page not present
instruction pointer = 0x20:0xc05623aa
stack pointer = 0x28:0xdd4f6c34
frame pointer = 0x28:0xdd4f6c40
code segment
2008 May 07
0
Kernel panic - em0 culprit?
Hello,
My server is experiencing occasional kernel panics when under moderate
load. I'm attaching a crash dump and the dmesg output. I'm not sure how
to read the kernel backtrace, but it looks like the Intel NIC (em0)
caused the problem. Occasionally I used to get an "em0: watchdog timeout
-- resetting" error message, but I never had a kernel panic. The problem
started last
2008 Jun 26
3
[Bug 2334] New: zpool destroy panics after zfs_force_umount_stress
http://defect.opensolaris.org/bz/show_bug.cgi?id=2334
Summary: zpool destroy panics after zfs_force_umount_stress
Classification: Development
Product: zfs-crypto
Version: unspecified
Platform: Other
OS/Version: Solaris
Status: NEW
Severity: major
Priority: P2
Component: other
AssignedTo:
2008 Apr 24
0
panic on zfs scrub on builds 79 & 86
This just started happening to me. It's a striped, non-mirrored pool (I know, I know). A zfs scrub causes a panic in under a minute. I can also trigger a panic by doing tars etc. x86 64-bit kernel ... any ideas? Just to help rule out some things: I changed the motherboard, memory, and CPU, and it still happens. I also think it happens on a 32-bit kernel.
genunix: [ID 335743 kern.notice] BAD