Displaying 12 results from an estimated 12 matches for "dumpadm".
2007 Jan 29
3
dumpadm and using dumpfile on zfs?
...SVM mirror using zfs.
Is there something that I'm doing wrong, or is this not yet supported on
ZFS?
Note this is Solaris 10 Update 3, but I don't think that should matter.
thanks,
peter
Using ZFS
========
HON hcb116 ~ $ mkfile -n 1g /var/adm/crash/dump-file
HON hcb116 ~ $ dumpadm -d /var/adm/crash/dump-file
dumpadm: dumps not supported on /var/adm/crash/dump-file
Using UFS
========
HON hcb115 ~ $ mkfile -n 1g /data/0/test
HON hcb115 ~ $ dumpadm -d /data/0/test
Dump content: kernel pages
Dump device: /data/0/test (dedicated)
Savecore directory: /var/crash/st...
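For context on the failure above: a plain file on ZFS was not a supported dump target at that time. Later releases added support for dumping to a dedicated ZFS volume instead. A hedged sketch, assuming a root pool named rpool (the pool name and size are illustrative, not from the post):

```shell
# Sketch for later releases (Solaris 10 U6+/OpenSolaris), where dumpadm
# accepts a dedicated zvol rather than a file on a ZFS filesystem.
zfs create -V 1g rpool/dump            # dedicated 1 GB volume for dumps
dumpadm -d /dev/zvol/dsk/rpool/dump    # point dumpadm at the zvol
```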
2007 Nov 13
2
Creating a manifests 'release' under SVN; trouble with SVN headers
Dear all
I've gotten into the habit of including SVN headers in my templates, etc.,
so it is easy to see where the file installed into /etc/puppet/ came
from. Furthermore, we use svn cp to create release branches.
Therefore, you'll see something like this:
# $Id: dumpadm.conf 1239 2007-10-23 16:04:06Z sa_dewha $
# $URL:
svn://engsun05/bootstrap/rel/release-20071030/puppet/modules/crashdump/t
emplates/dumpadm.conf $
All well and good until I create another release and the svn export
looks like this:
# $Id: dumpadm.conf 1239 2007-10-23 16:04:06Z sa_dewha $
# $URL:...
2009 Dec 20
0
On collecting data from "hangs"
...d you want to gather data for someone to look at,
then here are a few steps you should take.
If you already know all about gathering crash dumps on Solaris, feel
free to delete now.
1) Make sure crash dumps are enabled
Enable saving of crash dumps by executing as root or with pfexec
'dumpadm -y'.
The most reasonable trade-off of information vs. size in the crash dump
is probably 'dumpadm -c curproc'.
If you're running OpenSolaris you likely already have a dedicated zvol
as a dump device. If you're running SXCE you may need to dedicate a...
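The enabling step above can be sketched as a short command sequence, run as root or via pfexec:

```shell
dumpadm -y           # save crash dumps automatically via savecore on reboot
dumpadm -c curproc   # dump kernel pages plus the currently running process
dumpadm              # print the configuration to verify the changes took
```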
2010 Mar 27
16
zpool split problem?
Zpool split is a wonderful feature and it seems to work well,
and the choice of which disk got which name was perfect!
But there seems to be an odd anomaly (at least with b132).
Started with c0t1d0s0 running b132 (root pool is called rpool)
Attached c0t0d0s0 and waited for it to resilver
Rebooted from c0t0d0s0
zpool split rpool spool
Rebooted from c0t0d0s0, both rpool and spool were mounted
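The steps described above can be summarized as commands (device and pool names taken from the post; waiting for the resilver is a manual step):

```shell
zpool attach rpool c0t1d0s0 c0t0d0s0  # mirror the root pool onto the second disk
zpool status rpool                    # repeat until resilvering reports complete
zpool split rpool spool               # detach one side as a new pool named spool
```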
2009 Mar 09
1
Other zvols for swap and dump?
Can you use a different zvol for dump and swap rather than using the swap
and dump zvol created by liveupgrade?
Casper
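One possible approach to the question above, sketched with illustrative zvol names (dump2/swap2 are assumptions, not the names liveupgrade creates):

```shell
zfs create -V 2g rpool/dump2              # alternate dump zvol
zfs create -V 2g rpool/swap2              # alternate swap zvol
dumpadm -d /dev/zvol/dsk/rpool/dump2      # switch the dump device
swap -a /dev/zvol/dsk/rpool/swap2         # bring the new swap device online
swap -d /dev/zvol/dsk/rpool/swap          # then retire the old swap zvol
```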
2011 May 17
3
Reboots when importing old rpool
I have a fresh install of Solaris 11 Express on a new SSD. I have inserted the old hard disk, and tried to import it, with:
# zpool import -f <long id number> Old_rpool
but the computer reboots. Why is that? On my old hard disk, I have 10-20 BE, starting with OpenSolaris 2009.06 and upgraded to b134 up to snv_151a. I also have a WinXP entry in GRUB.
This hard disk is partitioned, with a
2008 Jun 09
6
Dtrace on OpenSolaris/VirtualBox
I'm running OpenSolaris 2008.05 in VirtualBox on a Windows XP host. Playing around with various probes, I found that trying to load any probe associated with bdev_strategy dumps core.
I can think of one or two likely and reasonable causes for this, but am assuming it's undesirable behavior.
Anyone know what''s happening here?
2012 Feb 04
2
zpool fails with panic in zio_ddt_free()
...core: [ID 570001 auth.error] reboot after
> panic: BAD TRAP: type=e (#pf Page fault) rp=ffffff0010a5e920 addr=30
> occurred in module "zfs" due to a NULL pointer dereference
> Feb 4 04:11:11 bofh-sol savecore: [ID 564761 auth.error] Panic crashdump
> pending on dump device but dumpadm -n in effect; run savecore(1M)
> manually to extract. Image UUID 4f6725c1-509f-eba4-8774-e627e1925461.
> Feb 4 04:11:13 bofh-sol unix: [ID 954099 kern.info] NOTICE: IRQ17 is
> being shared by drivers with different interrupt levels.
> Feb 4 04:11:13 bofh-sol This may result in reduced s...
2009 Jan 13
4
zfs null pointer deref, getting data out of single-user mode
My home NAS box, that I'd upgraded to Solaris 2008.11 after a series of
crashes leaving the smf database damaged, and which ran for 4 days
cleanly, suddenly fell right back to where the old one had been before.
Looking at the logs, I see something similar to (this is manually
transcribed to paper and retyped):
Bad trap: type=e (page fault) rp=f..f00050e3250 addr=28 module ZFS null
2011 Apr 08
11
How to rename rpool. Is that recommended ?
Hello,
I have a situation where a host, which is booted off its 'rpool', needs
to temporarily import the 'rpool' of another host, edit some files in
it, and export the pool back retaining its original name 'rpool'. Can
this be done?
Here is what I am trying to do:
# zpool import -R /a rpool temp-rpool
# zfs set mountpoint=/mnt
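The open question is getting the original name back, since a rename on import persists in the pool labels. A hedged sketch of one possible round trip, using the names from the post (whether the other host boots cleanly afterward is a separate concern):

```shell
zpool import -R /a rpool temp-rpool   # import the foreign rpool under a temporary name
# ... edit the files under /a ...
zpool export temp-rpool               # export; the pool keeps the name temp-rpool
# Later, a second import-with-rename restores the original name:
zpool import temp-rpool rpool
```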
2013 Mar 20
11
System started crashing hard after zpool reconfigure and OI upgrade
I have two identical Supermicro boxes with 32GB ram. Hardware details at
the end of the message.
They were running OI 151.a.5 for months. The zpool configuration was one
storage zpool with 3 vdevs of 8 disks in RAIDZ2.
The OI installation is absolutely clean. Just next-next-next until done.
All I do is configure the network after install. I don't install or enable
any other services.
2010 Nov 11
8
zpool import panics
Hi,
I just had my Dell R610 reboot with a kernel panic when I threw a couple
of zfs clone commands in the terminal at it.
Now, after the system had rebooted, zfs will not import my pool any longer
and instead the kernel will panic again.
I have had the same symptom on my other host, for which this one is
basically the backup, so this one is my last line of defense.
I tried to run zdb -e