Displaying 16 results from an estimated 16 matches for "panic_on_oop".
2009 Jun 05
2
Default Values of heartbeat dead threshold
Hi Guys
I have a two-node RAC cluster running on RHEL 5.0. I just want to know what Oracle's recommended default values are for the parameters mentioned below:
Thanks for your help
Heartbeat dead threshold
network idle timeout
network keepalive delay in ms
network reconnect delay in ms
kernel.panic_on_oops
kernel.panic
Regards,
Devender
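For context, these knobs typically live in /etc/sysconfig/o2cb and /etc/sysctl.conf on an OCFS2 node. A minimal sketch, assuming the stock OCFS2 1.4 variable names; the values shown are the commonly shipped defaults, not an Oracle recommendation from this thread:

  # /etc/sysconfig/o2cb (sketch; commonly shipped defaults)
  O2CB_HEARTBEAT_THRESHOLD=31     # heartbeat dead threshold (heartbeat iterations)
  O2CB_IDLE_TIMEOUT_MS=30000      # network idle timeout, in ms
  O2CB_KEEPALIVE_DELAY_MS=2000    # network keepalive delay, in ms
  O2CB_RECONNECT_DELAY_MS=2000    # network reconnect delay, in ms

  # /etc/sysctl.conf
  kernel.panic_on_oops = 1        # escalate any oops to a panic
  kernel.panic = 30               # reboot 30 seconds after a panic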
2008 Aug 11
0
[patch] kexec and kdump documentation for xen
...depending on the kernel version
+On ia64
+ Either a vmlinuz or vmlinux.gz image may be used
+
+
+2. Execute
+----------
+
+Once the second kernel is loaded, the crash kernel will be executed if the
+hypervisor panics. It will also be executed if dom0 panics or if dom0
+oopses and /proc/sys/kernel/panic_on_oops is set to a non-zero value.
+
+echo 1 > /proc/sys/kernel/panic_on_oops
+
+Kdump may also be triggered (for testing)
+
+ a. From Domain 0
+
+ echo c > /proc/sysrq-trigger
+
+ b. From Xen
+
+ Enter the xen console
+
+ ctrl^a ctrl^a (may be bound to a different key, this is the defau...
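To make the panic_on_oops setting from the excerpt survive reboots, the usual approach is a sysctl entry rather than a raw echo. A minimal sketch, using only standard sysctl usage:

  # persist the setting across reboots
  echo "kernel.panic_on_oops = 1" >> /etc/sysctl.conf
  sysctl -p                        # apply it immediately

  # then trigger a test crash from dom0, as the document describes
  echo c > /proc/sysrq-trigger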
2013 Feb 14
2
[PATCH] x86/xen: don't assume %ds is usable in xen_iret for 32-bit PVOPS.
...ets called after all registers other than those handled by
> IRET got already restored, hence a null selector in %ds or a non-null
> one that got loaded from a code or read-only data descriptor would
> cause a kernel mode fault (with the potential of crashing the kernel
> as a whole, if panic_on_oops is set)."
>
> The way to fix this is to realize that we can only rely on the
> registers that IRET restores. The two that are guaranteed are the
> %cs and %ss as they are always fixed GDT selectors. Also they are
> inaccessible from user mode - so they cannot be altered....
2010 Oct 27
0
OCFS2 CLUSTER HANG
...aiting each other for writing. The suspected deadlock was resolved simply by restarting the application servers.
Question: Is ocfs2 capable of detecting the deadlock and fencing one of the nodes in this situation?
The configuration is:
# 2010-02-10: OCFS2 cluster-aware filesystem configuration
kernel.panic_on_oops = 1
kernel.panic = 30
[oracle@mapcms1 ocfs2]$ /etc/init.d/o2cb status
Driver for "configfs": Loaded
Filesystem "configfs": Mounted
Driver for "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
Checking O2CB cluster ucmcluster: Online
Hea...
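The two sysctls quoted in this configuration work as a pair: the first turns any kernel oops into a full panic, and the second makes the node reboot itself a fixed number of seconds after panicking, which is what lets the surviving cluster nodes recover. A sketch of setting them at runtime:

  sysctl -w kernel.panic_on_oops=1   # any oops becomes a full panic
  sysctl -w kernel.panic=30          # reboot 30 seconds after a panic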
2018 Feb 08
4
Re: Nested KVM: L0 guest produces kernel BUG on wakeup from managed save (while a nested VM is running)
...ite admin
> about that.
>
> Now, here's a bit more information on my continued testing. As I
> mentioned on IRC, one of the things that struck me as odd was that if
> I ran into the issue previously described, the L1 guest would enter a
> reboot loop if configured with kernel.panic_on_oops=1. In other words,
> I would savevm the L1 guest (with a running L2), then loadvm it, and
> then the L1 would stack-trace, reboot, and then keep doing that
> indefinitely. I found that weird because on the second reboot, I would
> expect the system to come up cleanly.
Guess the L1 sta...
2018 Feb 08
0
Re: Nested KVM: L0 guest produces kernel BUG on wakeup from managed save (while a nested VM is running)
...explanation. :)
>> Now, here's a bit more information on my continued testing. As I
>> mentioned on IRC, one of the things that struck me as odd was that if
>> I ran into the issue previously described, the L1 guest would enter a
>> reboot loop if configured with kernel.panic_on_oops=1. In other words,
>> I would savevm the L1 guest (with a running L2), then loadvm it, and
>> then the L1 would stack-trace, reboot, and then keep doing that
>> indefinitely. I found that weird because on the second reboot, I would
>> expect the system to come up cleanly.
...
2016 May 18
2
[PATCH RFC 0/1] virtio-balloon vs. endianness
...xd6/0x270
[ 3.273679] [<000003ff9d2f82c4>] 0x3ff9d2f82c4
[ 3.273680] INFO: lockdep is turned off.
[ 3.273681] Last Breaking-Event-Address:
[ 3.273683] [<0000000000769a60>] io_int_handler+0x17c/0x298
[ 3.273686]
[ 3.273688] Kernel panic - not syncing: Fatal exception: panic_on_oops
The crash is gone by either forcing the device to legacy (max_revision=0)
or by applying the patch below in the guest.
[There also have been reports of people getting immediate "Out of puff!"
messages, but I don't know how to reproduce that.]
Problems should presumably also arise...
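The legacy-forcing workaround mentioned above maps to a device property on the QEMU command line. A hypothetical sketch, assuming the s390x virtio-ccw transport implied by the trace; only the max_revision=0 part comes from the post:

  # force the balloon device to legacy (pre-virtio-1.0) mode
  qemu-system-s390x ... -device virtio-balloon-ccw,max_revision=0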
2010 Mar 13
5
reboot guest on panic
I have a guest that keeps crashing, and I want to automatically reboot it
when it crashes. See:
xen PV guest kernel 2.6.32 processes lock up in D state
https://bugzilla.redhat.com/show_bug.cgi?id=550724
if you want to look at the details on the crashing.
Anyway, I boot the guest with the kernel command line parameter:
hung_task_panic=1
I have kernel.panic = 15 in the guest /etc/sysctl.conf
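Putting the pieces from this post together, the guest configuration looks roughly like this (kernel.panic_on_oops is an addition for completeness, not something the post sets):

  # guest /etc/sysctl.conf
  kernel.panic = 15            # reboot 15 seconds after a panic
  kernel.panic_on_oops = 1     # optional: escalate oopses to panics too

  # guest kernel command line
  hung_task_panic=1            # panic when a task is stuck in D state too long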
2018 Feb 08
0
Re: Nested KVM: L0 guest produces kernel BUG on wakeup from managed save (while a nested VM is running)
...aid he was going to poke the site admin
about that.
Now, here's a bit more information on my continued testing. As I
mentioned on IRC, one of the things that struck me as odd was that if
I ran into the issue previously described, the L1 guest would enter a
reboot loop if configured with kernel.panic_on_oops=1. In other words,
I would savevm the L1 guest (with a running L2), then loadvm it, and
then the L1 would stack-trace, reboot, and then keep doing that
indefinitely. I found that weird because on the second reboot, I would
expect the system to come up cleanly.
I've now changed my L2 guest'...
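The reproduction sequence described above corresponds to two QEMU monitor commands on the L0 side; the snapshot name here is made up for illustration:

  (qemu) savevm l1snap    # snapshot L1 while an L2 guest is running inside it
  (qemu) loadvm l1snap    # restore it; L1 then oopses, and with
                          # kernel.panic_on_oops=1 it reboots in a loop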
2018 Feb 08
5
Re: Nested KVM: L0 guest produces kernel BUG on wakeup from managed save (while a nested VM is running)
>> In short: there is no (live) migration support for nested VMX yet. So as
>> soon as your guest is using VMX itself ("nVMX"), this is not expected to
>> work.
>
> Hi David, thanks for getting back to us on this.
Hi Florian,
(somebody please correct me if I'm wrong)
>
> I see your point, except the issue Kashyap and I are describing does
> not
2007 Mar 20
15
How to bypass failed OST without blocking?
Hi
I want my Lustre setup to behave as follows when an OST fails: if a file
has stripe data on the failed OST, any operation on that file should
return an IO error without blocking; at the same time I should still be
able to create and read/write new files, or read/write files that have no
stripe data on the failed OST, without blocking.
What should I do? How should I configure this?
thanks!
swin
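One approach often used for this (an assumption, not an answer taken from this thread) is to deactivate the client-side OSC for the failed OST, so requests fail fast instead of blocking; the device index below is hypothetical:

  lctl dl                       # list devices; note the osc entry for the failed OST
  lctl --device 7 deactivate    # stop issuing new requests to that OST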
2012 Nov 20
12
[PATCH v2 00/11] xen: Initial kexec/kdump implementation
Hi,
This set of patches contains initial kexec/kdump implementation for Xen v2
(previous version were posted to few people by mistake; sorry for that).
Currently only dom0 is supported, however, almost all infrustructure
required for domU support is ready.
Jan Beulich suggested to merge Xen x86 assembler code with baremetal x86 code.
This could simplify and reduce a bit size of kernel code.
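For reference, loading a crash kernel on a running system uses the standard kexec -p invocation; the paths and append string below are illustrative, not taken from the patch series:

  kexec -p /boot/vmlinuz-$(uname -r) \
        --initrd=/boot/initrd-$(uname -r)-kdump.img \
        --append="root=/dev/sda1 irqpoll maxcpus=1"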