similar to: dumpadm and using dumpfile on zfs?

Displaying 20 results from an estimated 400 matches similar to: "dumpadm and using dumpfile on zfs?"

2009 Jan 13
4
zfs null pointer deref, getting data out of single-user mode
My home NAS box, which I'd upgraded to Solaris 2008.11 after a series of crashes left the smf database damaged, ran cleanly for 4 days and then suddenly fell right back to where the old one had been. Looking at the logs, I see something similar to this (manually transcribed to paper and retyped): Bad trap: type=e (page fault) rp=f..f00050e3250 addr=28 module ZFS null
2007 Nov 13
2
Creating a manifests ''release'' under SVN; trouble with SVN headers
Dear all, I've gotten into the habit of including SVN headers in my templates etc. so it is easy to see where a file installed into /etc/puppet/ came from. Furthermore, we use svn cp to create release branches. Therefore, you'll see something like this: # $Id: dumpadm.conf 1239 2007-10-23 16:04:06Z sa_dewha $ # $URL:
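For headers like these to be filled in, the svn:keywords property has to be set on each file. A minimal sketch, assuming hypothetical repository URLs and file names:

    # enable Id and URL expansion on a template (file name is an example)
    svn propset svn:keywords "Id URL" dumpadm.conf
    svn commit -m "expand Id/URL keywords" dumpadm.conf
    # cut a release branch with a cheap server-side copy
    svn cp https://svn.example.com/repo/manifests/trunk \
           https://svn.example.com/repo/manifests/branches/release-1.0 \
           -m "release 1.0 branch"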
2009 Dec 20
0
On collecting data from "hangs"
There seems to be a rash of posts lately where people are resetting or rebooting without getting any data, so I thought I'd post a quick overview on collecting crash dumps. If you think you've got a hang problem with ZFS and you want to gather data for someone to look at, then here are a few steps you should take. If you already know all about gathering crash dumps on
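The usual Solaris starting point is dumpadm plus a forced dump; a minimal sketch, assuming a zvol-backed dump device and default paths:

    dumpadm                               # show the current dump configuration
    dumpadm -d /dev/zvol/dsk/rpool/dump   # point dumps at a dedicated zvol
    savecore -L /var/crash                # take a live dump of the running system

On a box that is truly wedged, reboot -d is the other common route: it forces a panic so a crash dump gets written on the way down.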
2012 Feb 04
2
zpool fails with panic in zio_ddt_free()
Hello all, I am not sure my original mail got through to the list (I haven't received it back), so I attach it below. Anyhow, now I have a saved kernel crash dump of the system panicking when it tries to - I believe - deferred-release the corrupted deduped blocks which are no longer referenced by the userdata/blockpointer tree. As I previously wrote in my thread on unfixable
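With the pool exported, the dedup table can at least be inspected offline, and the saved dump can confirm where the panic lives; a sketch, with the pool name assumed:

    zdb -e -DD tank        # -e reads an exported pool; -DD prints DDT statistics
    mdb unix.0 vmcore.0    # then walk the saved crash dump
    > ::stack              # panic stack, to confirm zio_ddt_free() is on it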
2003 Jun 22
1
savecore: warning: /kernel version mismatch: ...
Morning all ... 'K, this one is a first for me ... server crashed this aft and savecore wouldn't dump the resultant core: pluto# savecore -v /vm/crash dumplo = 4362141696 (8519808 * 512) savecore: warning: /kernel version mismatch: "FreeBSD 4.8-STABLE #1: Sat May 31 22:57:04 ADT 2003 " and " #(#(#(" savecore: reboot savecore: dump time is zero
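On FreeBSD the dump path is dumpon plus savecore, and savecore compares the version string recorded in the dump header against the installed /kernel; the garbled second string above may suggest the dump header itself was damaged. A minimal sketch, with the swap device assumed:

    dumpon /dev/ad0s1b                # aim crash dumps at the swap partition
    savecore /var/crash /dev/ad0s1b   # recover the dump after reboot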
2003 Jun 26
1
changes in kernel affecting savecore/dumps ...
David gave me some suggestions to check out on the servers, but so far, it's all drawing a blank ... I have two servers right now that are updated to recent 4.8-STABLE kernels ... one was June 22nd, and the other was upgraded June 20th ... both of them have crashed since that date, and both of them tell me that they are unable to produce a core file, with the same errors: Jun 26 04:27:14 jupiter
2008 May 19
2
Suspend/resume on IBM X31
Greetings I am having trouble with suspend/resume on my Thinkpad X31, running 7.0-STABLE as of April 23. Any help would be appreciated. First problem: When I run "acpiconf -s3" from multiuser mode, the system suspends immediately, without executing /etc/rc.suspend (which has mode 755); then on resume, I get a panic. Second problem: When the system panics, I don't get a dump (or
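If panics never produce a dump, the usual first check is that a dump device is configured before the crash; a minimal /etc/rc.conf sketch (the device name is an assumption):

    dumpdev="/dev/ad0s1b"   # or dumpdev="AUTO" to use the first swap device
    dumpdir="/var/crash"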
2013 Feb 14
2
i386: vm.pmap kernel local race condition
Hi! I've got a FreeBSD 8.3-STABLE/i386 server that can be reliably panicked using just the 'squid -k rotatelog' command. It seems the system suffers from the problem described here: http://cxsecurity.com/issue/WLB-2010090156 I could not find any FreeBSD Security Advisory containing a fix. My server has 4G physical RAM (about 3.2G available) and runs squid (about 110M VSS) with 500
2010 Mar 27
16
zpool split problem?
Zpool split is a wonderful feature and it seems to work well, and the choice of which disk got which name was perfect! But there seems to be an odd anomaly (at least with b132). Started with c0t1d0s0 running b132 (root pool is called rpool). Attached c0t0d0s0 and waited for it to resilver. Rebooted from c0t0d0s0. zpool split rpool spool. Rebooted from c0t0d0s0; both rpool and spool were mounted
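That sequence, as commands (a sketch; device names taken from the post):

    zpool attach rpool c0t1d0s0 c0t0d0s0   # mirror the root disk
    zpool status rpool                     # wait until the resilver completes
    zpool split rpool spool                # split the mirror off as new pool "spool"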
2009 Jan 27
5
Replacing HDD in x4500
The vendor wanted to come in and replace an HDD in the 2nd X4500, as it was "constantly busy", and since our X4500 has always died miserably in the past when an HDD dies, they wanted to replace it before the HDD actually died. The usual was done, HDD replaced, resilvering started and ran for about 50 minutes. Then the system hung, same as always; all ZFS-related commands would just
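For reference, the usual replacement flow on a Thumper looks roughly like this (pool, bay, and disk names are assumptions):

    zpool status -x tank            # identify the failing disk
    cfgadm -c unconfigure sata1/3   # release the drive bay before pulling the disk
    # physically swap the drive, then:
    cfgadm -c configure sata1/3
    zpool replace tank c1t3d0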
2009 Nov 02
2
How do I protect my zfs pools?
Hi, I may have lost my first zpool, due to ... well, we're not yet sure. The 'zpool import tank' causes a panic -- one which I'm not even able to capture via savecore. I'm glad this happened when it did. At home I am in the process of moving all my data from a Linux NFS server to OpenSolaris. It's something I'd been meaning to do
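On builds recent enough to have them, two import variants are worth trying before anything destructive; a sketch, with the pool name from the post:

    zpool import -o readonly=on tank   # read-only import, skips some write paths
    zpool import -F tank               # rewind to an earlier txg, discarding the last few seconds of writes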
2009 Mar 09
1
Other zvols for swap and dump?
Can you use a different zvol for dump and swap rather than using the swap and dump zvol created by liveupgrade? Casper
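For reference, dump and swap can generally be pointed at any suitably sized zvols rather than the liveupgrade-created ones; a minimal sketch with assumed names and sizes:

    zfs create -V 2G rpool/dump2
    zfs create -V 2G rpool/swap2
    dumpadm -d /dev/zvol/dsk/rpool/dump2   # switch the dump device
    swap -a /dev/zvol/dsk/rpool/swap2      # add the new swap device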
2011 May 17
3
Reboots when importing old rpool
I have a fresh install of Solaris 11 Express on a new SSD. I have inserted the old hard disk, and tried to import it, with: # zpool import -f <long id number> Old_rpool but the computer reboots. Why is that? On my old hard disk, I have 10-20 BE, starting with OpenSolaris 2009.06 and upgraded to b134 up to snv_151a. I also have a WinXP entry in GRUB. This hard disk is partitioned, with a
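One lower-risk variant is importing under an alternate root without mounting anything, so the old BEs cannot collide with the running system; a sketch (the id placeholder is kept from the post):

    zpool import -f -R /mnt -N <long id number> Old_rpool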
2008 Jun 09
6
Dtrace on OpenSolaris/VirtualBox
I'm running OpenSolaris 2008.05 in VirtualBox on a Windows XP host. Playing around with various probes, I found that trying to load any probe associated with bdev_strategy dumps core. I can think of one or two likely and reasonable causes for this, but am assuming it's undesirable behavior. Anyone know what's happening here?
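For concreteness, the kind of probe that reportedly triggers the crash; a sketch:

    dtrace -n 'fbt::bdev_strategy:entry { @[execname] = count(); }'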
2007 Sep 18
5
ZFS panic in space_map.c line 125
One of our Solaris 10 update 3 servers panicked today with the following error: Sep 18 00:34:53 m2000ef savecore: [ID 570001 auth.error] reboot after panic: assertion failed: ss != NULL, file: ../../common/fs/zfs/space_map.c, line: 125 The server saved a core file, and the resulting backtrace is listed below: $ mdb unix.0 vmcore.0 > $c vpanic() 0xfffffffffb9b49f3() space_map_remove+0x239()
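A few other mdb dcmds usually worth running on such a dump; a sketch:

    mdb unix.0 vmcore.0
    > ::status   # panic string and dump metadata
    > ::stack    # full panic stack (same as $c)
    > ::msgbuf   # console messages leading up to the panic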
2012 Nov 27
6
How to clean up /
Hello. I recently upgraded to 9.1-RC3; everything went fine, however the / partition is about to get full. I'm really new to FreeBSD so I don't know what files can be deleted safely. # find -x / -size +10000 -exec du -h {} \; 16M /boot/kernel/kernel 60M /boot/kernel/kernel.symbols 6.7M /boot/kernel/if_ath.ko.symbols 6.4M /boot/kernel/vxge.ko.symbols 9.4M
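The .symbols files in that listing are debug symbols and are a common first target; a sketch of cleanup steps (verify paths before deleting anything):

    rm /boot/kernel/*.symbols   # debug symbols; not needed to boot or run
    du -kxd 1 / | sort -n       # then see where the remaining space is going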
2003 Sep 29
3
FreeBSD 4.9 RC1 (more bad news)
A few hours ago I downloaded .../i386/ISO-IMAGES/4.9-RC1-i386-disc1.iso and made kern/mfsroot floppies from it. If I enable my ICH5R SATA controller in "native" mode, the 4.9-RC1 GENERIC kernel hangs solidly (only the system reset button can unwedge it) during device configuration. The last line of bootstrap monologue written by the kernel is: plip0: <PLIP network
2008 Jul 22
3
6.3-RELEASE-p3 recurring panics on multiple SM PDSMi+
We have 10 SuperMicro PDSMi+ 5015M-MTs that are panicking every few days. This started shortly after an upgrade from 6.2-RELEASE to 6.3-RELEASE with freebsd-update. Other than switching to a debugging kernel, a little sysctl tuning, and patching with freebsd-update, they are stock. The debugging kernel was built from source that is also being patched with freebsd-update. These systems are
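For reference, one way to build the kind of debugging kernel described; a sketch with an assumed config name:

    cd /usr/src/sys/i386/conf
    cp GENERIC DEBUG
    printf 'makeoptions DEBUG=-g\noptions KDB\noptions DDB\n' >> DEBUG
    cd /usr/src
    make buildkernel KERNCONF=DEBUG && make installkernel KERNCONF=DEBUG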
2004 Aug 06
0
dumpfile with libshout2/icecast2
On Wednesday 10 March 2004 06:56, Andrew Taylor wrote: > Heyas, > > I'm trying to get setDumpfile working with the java libshout bindings. > I am calling shout_set_dumpfile(shout,char*) after specifying the port, > host, mount and password, yet, the dumpfile is not created on the server > side. To be more specific, I'm trying this: > > bin/streamAdmin -d
2004 Aug 06
1
dumpfile with libshout2/icecast2
Thanks for the reply, Mike. Is there currently any way to accomplish the same thing (i.e., recording of a dumpfile for a mount for a given duration) with icecast2 as it stands? Perhaps through the admin interface, or via a config change and reload? I'm surprised this feature has not been more requested; it would certainly be welcome here. If not, no biggie, I could just use a local
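Icecast2's mount settings do include a per-mount dump-file option in icecast.xml, though without duration control; a sketch, with the mount name and path assumed:

    <mount>
        <mount-name>/stream.ogg</mount-name>
        <dump-file>/var/log/icecast/stream-dump.ogg</dump-file>
    </mount>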