I set up GFS2 on a shared fibre-channel disk array, but after every 1-2 hours of
running, the directory where the GFS2 filesystem is mounted can no longer be
listed or accessed, although both hosts themselves keep running normally.
*We ran into the same problems with OCFS2 before, which is why we switched to GFS2.
When I booted the stock Red Hat kernel instead, a week of testing showed no problem;
the problems above only occur with the Xen kernel. How can I fix this? Could some
kernel parameter be the cause?*
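
Since the hang only shows up with the Xen dom0 kernel and not with the stock
RHEL 6.0 kernel, one thing worth checking is whether the two .config files differ
in the cluster-related options. A minimal sketch (the file names below are
assumptions for wherever the attached configs are saved):

    # Compare the cluster-relevant options of the two kernel configs.
    # config-2.6.32.46-xen / config-2.6.32-71.29.1.el6 are placeholder file names.
    for opt in GFS2_FS GFS2_FS_LOCKING_DLM DLM CONFIGFS_FS DETECT_HUNG_TASK; do
        echo "== CONFIG_${opt} =="
        grep "CONFIG_${opt}[= ]" config-2.6.32.46-xen config-2.6.32-71.29.1.el6
    done
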
2012/3/7 任我飞 <renwofei423@gmail.com>
> Hardware environment
> CPU: Intel E5410, 8 cores
> MEM: 24 GB
> Fibre Channel disk array: IBM, 5.7 TB (formatted with mkfs.gfs2)
> 3 NICs: 1000 Mbit/s
>
>
> -------------------------------------------------------------
> Software environment
> 2 nodes, hostnames: vm1, vm2.
> Both have 2 kernels:
> XEN:
> Linux vm1 2.6.32.46-xen #2 SMP Mon Mar 5 16:05:01 CST 2012 x86_64
> x86_64 x86_64 GNU/Linux
> RHEL6.0:
> Linux vm1 2.6.32-71.29.1.el6.x86_64 #1 SMP Thu May 19 13:15:41 CST
> 2011 x86_64 x86_64 x86_64 GNU/Linux
> The RHEL 6.0 kernel is just for testing.
> The kernel's .config is attached.
> Xen: 4.1.2
> ifconfig information:
> eth0 Link encap:Ethernet HWaddr 00:1E:90:66:8E:F8
> inet addr:172.19.0.81 Bcast:172.19.255.255 Mask:255.255.0.0
> inet6 addr: fe80::21e:90ff:fe66:8ef8/64 Scope:Link
> UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
> RX packets:1124960 errors:0 dropped:0 overruns:0 frame:0
> TX packets:618733 errors:0 dropped:0 overruns:0 carrier:0
> collisions:0 txqueuelen:1000
> RX bytes:140496518 (133.9 MiB) TX bytes:115003293 (109.6 MiB)
> Memory:fd780000-fd7a0000
>
> eth1 Link encap:Ethernet HWaddr 00:1E:90:66:8E:F9
> inet addr:172.19.0.91 Bcast:172.19.255.255 Mask:255.255.0.0
> inet6 addr: fe80::21e:90ff:fe66:8ef9/64 Scope:Link
> UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
> RX packets:527352 errors:0 dropped:0 overruns:0 frame:0
> TX packets:27 errors:0 dropped:0 overruns:0 carrier:0
> collisions:0 txqueuelen:1000
> RX bytes:34900470 (33.2 MiB) TX bytes:4015 (3.9 KiB)
> Memory:fd7c0000-fd7e0000
>
> eth2 Link encap:Ethernet HWaddr 00:1B:21:14:A0:21
> inet addr:10.0.0.1 Bcast:10.255.255.255 Mask:255.255.255.0
> inet6 addr: fe80::21b:21ff:fe14:a021/64 Scope:Link
> UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
> RX packets:527232 errors:0 dropped:0 overruns:0 frame:0
> TX packets:27 errors:0 dropped:0 overruns:0 carrier:0
> collisions:0 txqueuelen:1000
> RX bytes:34892627 (33.2 MiB) TX bytes:4000 (3.9 KiB)
>
> lo Link encap:Local Loopback
> inet addr:127.0.0.1 Mask:255.0.0.0
> inet6 addr: ::1/128 Scope:Host
> UP LOOPBACK RUNNING MTU:16436 Metric:1
> RX packets:239 errors:0 dropped:0 overruns:0 frame:0
> TX packets:239 errors:0 dropped:0 overruns:0 carrier:0
> collisions:0 txqueuelen:0
> RX bytes:36123 (35.2 KiB) TX bytes:36123 (35.2 KiB)
>
> cat /etc/cluster/cluster.conf
> <?xml version="1.0"?>
> <cluster config_version="3" name="gfscluster">
> <fence_daemon post_fail_delay="0" post_join_delay="3"/>
> <clusternodes>
> <clusternode name="vm1" nodeid="1" votes="1">
> <fence>
> <method name="single" >
> <device name="human" nodename="vm1" />
> </method>
> </fence>
> </clusternode>
> <clusternode name="vm2" nodeid="2" votes="1">
> <fence>
> <method name="single" >
> <device name="human" nodename="vm2" />
> </method>
> </fence>
> </clusternode>
> </clusternodes>
> <cman expected_votes="1" two_node="1"/>
> <fencedevices>
> <fencedevice name="human" agent="fence_manual" />
> </fencedevices>
> <rm>
> <failoverdomains/>
> <resources/>
> </rm>
> <dlm plock_ownership="1" plock_rate_limit="0"/>
> <gfs_controld plock_rate_limit="0"/>
> </cluster>
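
Note that this cluster.conf relies on fence_manual, and the log further down shows
that fence attempt failing ("fence vm2 dev 0.0 agent fence_manual result: error
from agent"); until a failed node is actually fenced (or the manual fence is
acknowledged), DLM/GFS2 recovery stays blocked, which matches the hung tasks below.
For comparison, a sketch of what a real power-based fence entry could look like:
the attribute names are standard fence_ipmilan ones, but the addresses and
credentials here are placeholders, not taken from this setup:

    <clusternode name="vm1" nodeid="1" votes="1">
            <fence>
                    <method name="ipmi">
                            <device name="ipmi-vm1"/>
                    </method>
            </fence>
    </clusternode>
    <fencedevices>
            <!-- ipaddr/login/passwd are placeholders -->
            <fencedevice name="ipmi-vm1" agent="fence_ipmilan"
                         ipaddr="10.0.0.11" login="admin" passwd="secret" lanplus="1"/>
    </fencedevices>
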
>
> cat /etc/xen/xend-config.sxp
> # -*- sh -*-
>
> #
> # Xend configuration file.
> #
>
> # This example configuration is appropriate for an installation that
> # utilizes a bridged network configuration. Access to xend via http
> # is disabled.
>
> # Commented out entries show the default for that entry, unless otherwise
> # specified.
>
> #(logfile /var/log/xen/xend.log)
> #(loglevel DEBUG)
>
> # Uncomment the line below. Set the value to flask, acm, or dummy to
> # select a security module.
>
> #(xsm_module_name dummy)
>
> # The Xen-API server configuration.
> #
> # This value configures the ports, interfaces, and access controls for the
> # Xen-API server. Each entry in the list starts with either unix, a port
> # number, or an address:port pair.  If this is "unix", then a UDP socket is
> # opened, and this entry applies to that.  If it is a port, then Xend will
> # listen on all interfaces on that TCP port, and if it is an address:port
> # pair, then Xend will listen on the specified port, using the interface with
> # the specified address.
> #
> # The subsequent string configures the user-based access control for the
> # listener in question.  This can be one of "none" or "pam", indicating either
> # that users should be allowed access unconditionally, or that the local
> # Pluggable Authentication Modules configuration should be used. If this
> # string is missing or empty, then "pam" is used.
> #
> # The final string gives the host-based access control for that listener.  If
> # this is missing or empty, then all connections are accepted. Otherwise,
> # this should be a space-separated sequence of regular expressions; any host
> # with a fully-qualified domain name or an IP address that matches one of
> # these regular expressions will be accepted.
> #
> # Example: listen on TCP port 9363 on all interfaces, accepting connections
> # only from machines in example.com or localhost, and allow access through
> # the unix domain socket unconditionally:
> #
> # (xen-api-server ((9363 pam '^localhost$ example\\.com$')
> #                   (unix none)))
> #
> # Optionally, the TCP Xen-API server can use SSL by specifying the private
> # key and certificate location:
> #
> # (9367 pam '' xen-api.key xen-api.crt)
> #
> # Default:
> # (xen-api-server ((unix)))
>
>
> (xend-http-server yes)
> (xend-unix-server yes)
> (xend-tcp-xmlrpc-server yes)
> (xend-unix-xmlrpc-server yes)
> #(xend-relocation-server no)
> (xend-relocation-server yes)
> #(xend-relocation-ssl-server no)
> #(xend-udev-event-server no)
>
> (xend-unix-path /var/lib/xend/xend-socket)
>
>
> # Address and port xend should use for the legacy TCP XMLRPC interface,
> # if xend-tcp-xmlrpc-server is set.
> #(xend-tcp-xmlrpc-server-address 'localhost')
> #(xend-tcp-xmlrpc-server-port 8006)
>
> # SSL key and certificate to use for the legacy TCP XMLRPC interface.
> # Setting these will mean that this port serves only SSL connections as
> # opposed to plaintext ones.
> #(xend-tcp-xmlrpc-server-ssl-key-file xmlrpc.key)
> #(xend-tcp-xmlrpc-server-ssl-cert-file xmlrpc.crt)
>
>
> # Port xend should use for the HTTP interface, if xend-http-server is set.
> (xend-port 8000)
>
> # Port xend should use for the relocation interface, if xend-relocation-server
> # is set.
> (xend-relocation-port 8002)
>
> # Port xend should use for the ssl relocation interface, if
> # xend-relocation-ssl-server is set.
> #(xend-relocation-ssl-port 8003)
>
> # SSL key and certificate to use for the ssl relocation interface, if
> # xend-relocation-ssl-server is set.
> #(xend-relocation-server-ssl-key-file xmlrpc.key)
> #(xend-relocation-server-ssl-cert-file xmlrpc.crt)
>
> # Whether to use ssl as default when relocating.
> #(xend-relocation-ssl no)
>
> # Address xend should listen on for HTTP connections, if xend-http-server is
> # set.
> # Specifying 'localhost' prevents remote connections.
> # Specifying the empty string '' (the default) allows all connections.
> (xend-address '')
> #(xend-address localhost)
>
> # Address xend should listen on for relocation-socket connections, if
> # xend-relocation-server is set.
> # Meaning and default as for xend-address above.
> # Also, interface name is allowed (e.g. eth0) there to get the
> # relocation address to be bound on.
> (xend-relocation-address '')
>
> # The hosts allowed to talk to the relocation port. If this is empty (the
> # default), then all connections are allowed (assuming that the connection
> # arrives on a port and interface on which we are listening; see
> # xend-relocation-port and xend-relocation-address above). Otherwise, this
> # should be a space-separated sequence of regular expressions. Any host with
> # a fully-qualified domain name or an IP address that matches one of these
> # regular expressions will be accepted.
> #
> # For example:
> # (xend-relocation-hosts-allow '^localhost$ ^.*\\.example\\.org$')
> #
> (xend-relocation-server yes)
> (xend-relocation-hosts-allow '')
> #(xend-relocation-hosts-allow '^localhost$ ^localhost\\.localdomain$')
>
> # The limit (in kilobytes) on the size of the console buffer
> #(console-limit 1024)
>
> ##
> # To bridge network traffic, like this:
> #
> # dom0: ----------------- bridge -> real eth0 -> the network
> # |
> # domU: fake eth0 -> vifN.0 -+
> #
> # use
> #
> # (network-script network-bridge)
> #
> # Your default ethernet device is used as the outgoing interface, by default.
> # To use a different one (e.g. eth1) use
> #
> # (network-script 'network-bridge netdev=eth1')
> #
> # The bridge is named eth0, by default (yes, really!)
> #
>
> # It is normally much better to create the bridge yourself in
> # /etc/network/interfaces. network-bridge start does nothing if you
> # already have a bridge, and network-bridge stop does nothing if the
> # default bridge name (normally eth0) is not a bridge. See
> # bridge-utils-interfaces(5) for full information on the syntax in
> # /etc/network/interfaces, but you probably want something like this:
> # iface xenbr0 inet static
> # address [etc]
> # netmask [etc]
> # [etc]
> # bridge_ports eth0
> #
> # To have network-bridge create a differently-named bridge, use:
> # (network-script 'network-bridge bridge=<name>')
> #
> # It is possible to use the network-bridge script in more complicated
> # scenarios, such as having two outgoing interfaces, with two bridges, and
> # two fake interfaces per guest domain. To do things like this, write
> # yourself a wrapper script, and call network-bridge from it, as appropriate.
> #
> #(network-script network-bridge)
> #(network-script 'network-bridge netdev=bond0')
> #(network-script my-network-script)
> (network-script 'network-bridge netdev=eth0')
> #(network-script 'network-bridge netdev=eth1')
> #(network-script 'network-bridge netdev=eth2')
> #(network-script /bin/true)
> #(network-script muti-2-network-script)
>
> # The script used to control virtual interfaces.  This can be overridden on a
> # per-vif basis when creating a domain or a configuring a new vif.  The
> # vif-bridge script is designed for use with the network-bridge script, or
> # similar configurations.
> #
> # If you have overridden the bridge name using
> # (network-script 'network-bridge bridge=<name>') then you may wish to do the
> # same here. The bridge name can also be set when creating a domain or
> # configuring a new vif, but a value specified here would act as a default.
> #
> # If you are using only one bridge, the vif-bridge script will discover that,
> # so there is no need to specify it explicitly.  The default is to use
> # the bridge which is listed first in the output from brctl.
> #
> (vif-script vif-bridge)
>
>
> ## Use the following if network traffic is routed, as an alternative to the
> # settings for bridged networking given above.
> #(network-script network-route)
> #(vif-script vif-route)
>
>
> ## Use the following if network traffic is routed with NAT, as an alternative
> # to the settings for bridged networking given above.
> #(network-script network-nat)
> #(vif-script vif-nat)
>
> # dom0-min-mem is the lowest permissible memory level (in MB) for dom0.
> # This is a minimum both for auto-ballooning (as enabled by
> # enable-dom0-ballooning below) and for xm mem-set when applied to dom0.
> (dom0-min-mem 196)
>
> # Whether to enable auto-ballooning of dom0 to allow domUs to be created.
> # If enable-dom0-ballooning = no, dom0 will never balloon out.
> (enable-dom0-ballooning yes)
>
> # 32-bit paravirtual domains can only consume physical
> # memory below 168GB. On systems with memory beyond that address,
> # they'll be confined to memory below 128GB.
> # Using total_available_memory (in GB) to specify the amount of memory reserved
> # in the memory pool exclusively for 32-bit paravirtual domains.
> # Additionally you should use dom0_mem = <-Value> as a parameter in
> # xen kernel to reserve the memory for 32-bit paravirtual domains, default
> # is "0" (0GB).
> (total_available_memory 0)
>
> # In SMP system, dom0 will use dom0-cpus # of CPUS
> # If dom0-cpus = 0, dom0 will take all cpus available
> (dom0-cpus 0)
>
> # Whether to enable core-dumps when domains crash.
> #(enable-dump no)
>
> # The tool used for initiating virtual TPM migration
> #(external-migration-tool '')
>
> # The interface for VNC servers to listen on. Defaults
> # to 127.0.0.1  To restore old 'listen everywhere' behaviour
> # set this to 0.0.0.0
> #(vnc-listen '127.0.0.1')
>
> # The default password for VNC console on HVM domain.
> # Empty string is no authentication.
> (vncpasswd '')
>
> # The VNC server can be told to negotiate a TLS session
> # to encryption all traffic, and provide x509 cert to
> # clients enabling them to verify server identity. The
> # GTK-VNC widget, virt-viewer, virt-manager and VeNCrypt
> # all support the VNC extension for TLS used in QEMU. The
> # TightVNC/RealVNC/UltraVNC clients do not.
> #
> # To enable this create x509 certificates / keys in the
> # directory ${XEN_CONFIG_DIR} + vnc
> #
> # ca-cert.pem - The CA certificate
> # server-cert.pem - The Server certificate signed by the CA
> # server-key.pem - The server private key
> #
> # and then uncomment this next line
> # (vnc-tls 1)
>
> # The certificate dir can be pointed elsewhere..
> #
> # (vnc-x509-cert-dir vnc)
>
> # The server can be told to request & validate an x509
> # certificate from the client. Only clients with a cert
> # signed by the trusted CA will be able to connect. This
> # is more secure the password auth alone. Passwd auth can
> # used at the same time if desired. To enable client cert
> # checking uncomment this:
> #
> # (vnc-x509-verify 1)
>
> # The default keymap to use for the VM's virtual keyboard
> # when not specified in the VM's configuration
> #(keymap 'en-us')
>
> # Script to run when the label of a resource has changed.
> #(resource-label-change-script '')
>
> # Rotation count of qemu-dm log file.
> #(qemu-dm-logrotate-count 10)
>
> # Path where persistent domain configuration is stored.
> # Default is /var/lib/xend/domains/
> #(xend-domains-path /var/lib/xend/domains)
>
> # Number of seconds xend will wait for device creation and
> # destruction
> #(device-create-timeout 100)
> #(device-destroy-timeout 100)
>
> # When assigning device to HVM guest, we use the strict check for HVM guest by
> # default. (For PV guest, we use loose check automatically if necessary.)
> # When we assign device to HVM guest, if we meet with the co-assignment
> # issues or the ACS issue, we could try changing the option to 'no' -- however,
> # we have to realize this may incur security issue and we can't make sure the
> # device assignment could really work properly even after we do this.
> #(pci-passthrough-strict-check yes)
>
> # If we have a very big scsi device configuration, start of xend is slow,
> # because xend scans all the device paths to build its internal PSCSI device
> # list. If we need only a few devices for assigning to a guest, we can reduce
> # the scan to this device. Set list of device paths in same syntax like in
> # command lsscsi, e.g. ('16:0:0:0' '15:0')
> # (pscsi-device-mask ('*'))
>
>
>
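
Given that dom0 networking here goes through the xend network-bridge script on
eth0, and that the log further down shows ipwatchd complaining that eth0's MAC
answers for eth1's address 172.19.0.91, it is worth confirming on both nodes which
interface the cluster traffic really uses. A small check sketch (standard tools,
nothing specific to this setup):

    # Show which physical NIC is enslaved to the eth0 bridge created by network-bridge
    brctl show
    # Confirm which interface actually carries 172.19.0.81/91 and 10.0.0.1
    ip addr show eth0 eth1 eth2
    # Show which address corosync/totem is bound to on this node
    corosync-cfgtool -s
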
------------------------------------------------------------------------------------------------------------
> I set up GFS2 on a shared fibre-channel disk array, but after every 1-2 hours
> of running, the directory where the GFS2 filesystem is mounted can no longer be
> listed or accessed, although both hosts keep running normally. Looking at the
> logs, the error messages are as follows (/var/log/messages):
>
> Mar 7 06:23:09 vm1 corosync[2152]: [QUORUM] Members[1]: 1
> Mar 7 06:23:09 vm1 corosync[2152]: [TOTEM ] A processor joined or left
> the membership and a new membership was formed.
> Mar 7 06:23:09 vm1 kernel: dlm: closing connection to node 2
> Mar 7 06:23:09 vm1 corosync[2152]: [CPG ] downlist received
> left_list: 1
> Mar 7 06:23:09 vm1 corosync[2152]: [CPG ] chosen downlist from node
> r(0) ip(172.19.0.81)
> Mar 7 06:23:09 vm1 corosync[2152]: [MAIN ] Completed service
> synchronization, ready to provide service.
> Mar 7 06:23:09 vm1 rgmanager[3108]: State change: vm2 DOWN
> Mar 7 06:23:09 vm1 kernel: GFS2: fsid=gfscluster:data1.1: jid=0: Trying
> to acquire journal lock...
> Mar 7 06:23:09 vm1 fenced[2206]: fencing node vm2
> Mar 7 06:23:09 vm1 fenced[2206]: fence vm2 dev 0.0 agent fence_manual
> result: error from agent
> Mar 7 06:23:09 vm1 fenced[2206]: fence vm2 failed
> Mar 7 06:23:09 vm1 corosync[2152]: [TOTEM ] A processor joined or left
> the membership and a new membership was formed.
> Mar 7 06:23:09 vm1 corosync[2152]: [QUORUM] Members[2]: 1 2
> Mar 7 06:23:09 vm1 corosync[2152]: [QUORUM] Members[2]: 1 2
> Mar 7 06:23:09 vm1 corosync[2152]: [CPG ] downlist received
> left_list: 0
> Mar 7 06:23:09 vm1 corosync[2152]: [CPG ] downlist received
> left_list: 0
> Mar 7 06:23:09 vm1 corosync[2152]: [CPG ] chosen downlist from node
> r(0) ip(172.19.0.81)
> Mar 7 06:23:09 vm1 corosync[2152]: [MAIN ] Completed service
> synchronization, ready to provide service.
> Mar 7 06:23:09 vm1 gfs_controld[2247]: receive_start 2:4 add node with
> started_count 3
> Mar 7 06:23:09 vm1 rgmanager[3108]: State change: vm2 UP
> Mar 7 06:23:12 vm1 fenced[2206]: fencing node vm2
> Mar 7 06:23:12 vm1 corosync[2152]: cman killed by node 2 because we were
> killed by cman_tool or other application
> Mar 7 06:23:12 vm1 gfs_controld[2247]: daemon cpg_dispatch error 2
> Mar 7 06:23:12 vm1 gfs_controld[2247]: cluster is down, exiting
> Mar 7 06:23:12 vm1 rgmanager[3108]: #67: Shutting down uncleanly
> Mar 7 06:23:12 vm1 dlm_controld[2222]: cluster is down, exiting
> Mar 7 06:23:12 vm1 dlm_controld[2222]: daemon cpg_dispatch error 2
> Mar 7 06:23:12 vm1 fenced[2206]: fence vm2 dev 0.0 agent none result:
> error from ccs
> Mar 7 06:23:12 vm1 fenced[2206]: fence vm2 failed
> Mar 7 06:23:20 vm1 kernel: dlm: closing connection to node 2
> Mar 7 06:23:20 vm1 kernel: dlm: closing connection to node 1
> Mar 7 06:25:14 vm1 kernel: INFO: task clvmd:2376 blocked for more than
> 120 seconds.
> Mar 7 06:25:14 vm1 kernel: "echo 0 >
> /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> Mar 7 06:25:14 vm1 kernel: clvmd D ffff8800280de180 0 2376
> 1 0x00000080
> Mar 7 06:25:14 vm1 kernel: ffff8805c8413ce0 0000000000000282
> 0000000000000000 ffff8800280841e8
> Mar 7 06:25:14 vm1 kernel: 00001e756d9fc134 ffff8805c318d460
> 0000000000000001 0000000101fc1a63
> Mar 7 06:25:14 vm1 kernel: ffff8805c8411a10 ffff8805c8413fd8
> 000000000000fc08 ffff8805c8411a10
> Mar 7 06:25:14 vm1 kernel: Call Trace:
> Mar 7 06:25:14 vm1 kernel: [<ffffffff8149e715>]
> rwsem_down_failed_common+0x85/0x1c0
> Mar 7 06:25:14 vm1 kernel: [<ffffffff8149ead6>] ?
> _spin_unlock_irqrestore+0x16/0x20
> Mar 7 06:25:14 vm1 kernel: [<ffffffff8105d639>] ?
> try_to_wake_up+0x2f9/0x480
> Mar 7 06:25:14 vm1 kernel: [<ffffffff8149e8a6>]
> rwsem_down_read_failed+0x26/0x30
> Mar 7 06:25:14 vm1 kernel: [<ffffffff81239a14>]
> call_rwsem_down_read_failed+0x14/0x30
> Mar 7 06:25:14 vm1 kernel: [<ffffffff8149ddd4>] ?
down_read+0x24/0x30
> Mar 7 06:25:14 vm1 kernel: [<ffffffff8100922a>] ?
> hypercall_page+0x22a/0x1010
> Mar 7 06:25:14 vm1 kernel: [<ffffffffa0422fdd>]
> dlm_clear_proc_locks+0x3d/0x260 [dlm]
> Mar 7 06:25:14 vm1 kernel: [<ffffffff8101027d>] ?
> xen_force_evtchn_callback+0xd/0x10
> Mar 7 06:25:14 vm1 kernel: [<ffffffff81010b22>] ?
check_events+0x12/0x20
> Mar 7 06:25:14 vm1 kernel: [<ffffffff8149e9b0>] ?
_spin_lock_irq+0x10/0x30
> Mar 7 06:25:14 vm1 kernel: [<ffffffffa042e556>]
device_close+0x66/0xc0
> [dlm]
> Mar 7 06:25:14 vm1 kernel: [<ffffffff811533a5>] __fput+0xf5/0x210
> Mar 7 06:25:14 vm1 kernel: [<ffffffff811534e5>] fput+0x25/0x30
> Mar 7 06:25:14 vm1 kernel: [<ffffffff8114eabd>] filp_close+0x5d/0x90
> Mar 7 06:25:14 vm1 kernel: [<ffffffff8114eb8a>] sys_close+0x9a/0xf0
> Mar 7 06:25:14 vm1 kernel: [<ffffffff810140f2>]
> system_call_fastpath+0x16/0x1b
> Mar 7 06:25:14 vm1 kernel: INFO: task kslowd001:2523 blocked for more
> than 120 seconds.
> Mar 7 06:25:14 vm1 kernel: "echo 0 >
> /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> Mar 7 06:25:14 vm1 kernel: kslowd001 D ffff880028066180 0 2523
> 2 0x00000080
> Mar 7 06:25:14 vm1 kernel: ffff8805c7ecf998 0000000000000246
> 0000210000002923 0001f5052c320000
> Mar 7 06:25:14 vm1 kernel: 3200000000211000 211800000113052d
> 51052e3200000000 0000002120000000
> Mar 7 06:25:14 vm1 kernel: ffff8805c7ec86b0 ffff8805c7ecffd8
> 000000000000fc08 ffff8805c7ec86b0
> Mar 7 06:25:14 vm1 kernel: Call Trace:
> Mar 7 06:25:14 vm1 kernel: [<ffffffff8149e715>]
> rwsem_down_failed_common+0x85/0x1c0
> Mar 7 06:25:14 vm1 kernel: [<ffffffff8149e8a6>]
> rwsem_down_read_failed+0x26/0x30
> Mar 7 06:25:14 vm1 kernel: [<ffffffff81239a14>]
> call_rwsem_down_read_failed+0x14/0x30
> Mar 7 06:25:14 vm1 kernel: [<ffffffff8149ddd4>] ?
down_read+0x24/0x30
> Mar 7 06:25:14 vm1 kernel: [<ffffffffa04240d2>] dlm_lock+0x62/0x1e0
[dlm]
> Mar 7 06:25:14 vm1 kernel: [<ffffffff81238c94>] ?
vsnprintf+0x484/0x5f0
> Mar 7 06:25:14 vm1 kernel: [<ffffffff81010b22>] ?
check_events+0x12/0x20
> Mar 7 06:25:14 vm1 kernel: [<ffffffffa0471de2>] gdlm_lock+0xe2/0x120
> [gfs2]
> Mar 7 06:25:14 vm1 kernel: [<ffffffffa0471ee0>] ? gdlm_ast+0x0/0xe0
[gfs2]
> Mar 7 06:25:14 vm1 kernel: [<ffffffffa0471e20>] ? gdlm_bast+0x0/0x50
> [gfs2]
> Mar 7 06:25:14 vm1 kernel: [<ffffffffa0456ca3>] do_xmote+0x163/0x250
> [gfs2]
> Mar 7 06:25:14 vm1 kernel: [<ffffffffa045715a>] run_queue+0xfa/0x170
> [gfs2]
> Mar 7 06:25:14 vm1 kernel: [<ffffffffa045735a>]
gfs2_glock_nq+0x11a/0x330
> [gfs2]
> Mar 7 06:25:14 vm1 kernel: [<ffffffffa0457d79>]
> gfs2_glock_nq_num+0x69/0x90 [gfs2]
> Mar 7 06:25:14 vm1 kernel: [<ffffffff81010b22>] ?
check_events+0x12/0x20
> Mar 7 06:25:14 vm1 kernel: [<ffffffffa046a753>]
> gfs2_recover_work+0x93/0x7b0 [gfs2]
> Mar 7 06:25:14 vm1 kernel: [<ffffffff81010b0f>] ?
> xen_restore_fl_direct_end+0x0/0x1
> Mar 7 06:25:14 vm1 kernel: [<ffffffff8100c676>] ?
xen_mc_flush+0x106/0x250
> Mar 7 06:25:14 vm1 kernel: [<ffffffff8100c2dd>] ?
xen_write_cr0+0x4d/0xa0
> Mar 7 06:25:14 vm1 kernel: [<ffffffff8100b4be>] ?
> xen_end_context_switch+0x1e/0x30
> Mar 7 06:25:14 vm1 kernel: [<ffffffff81012776>] ?
__switch_to+0x166/0x320
> Mar 7 06:25:14 vm1 kernel: [<ffffffff810584b3>] ?
> finish_task_switch+0x53/0xe0
> Mar 7 06:25:14 vm1 kernel: [<ffffffffa0457d71>] ?
> gfs2_glock_nq_num+0x61/0x90 [gfs2]
> Mar 7 06:25:14 vm1 kernel: [<ffffffff81010b22>] ?
check_events+0x12/0x20
> Mar 7 06:25:14 vm1 kernel: [<ffffffff810f87d2>]
> slow_work_execute+0x232/0x310
> Mar 7 06:25:14 vm1 kernel: [<ffffffff810f8a07>]
> slow_work_thread+0x157/0x370
> Mar 7 06:25:14 vm1 kernel: [<ffffffff8108deb0>] ?
> autoremove_wake_function+0x0/0x40
> Mar 7 06:25:14 vm1 kernel: [<ffffffff810f88b0>] ?
> slow_work_thread+0x0/0x370
> Mar 7 06:25:14 vm1 kernel: [<ffffffff8108db56>] kthread+0x96/0xa0
> Mar 7 06:25:14 vm1 kernel: [<ffffffff8101514a>] child_rip+0xa/0x20
> Mar 7 06:25:14 vm1 kernel: [<ffffffff81014311>] ?
> int_ret_from_sys_call+0x7/0x1b
> Mar 7 06:25:14 vm1 kernel: [<ffffffff81014a9d>] ?
> retint_restore_args+0x5/0x6
> Mar 7 06:25:14 vm1 kernel: [<ffffffff81015140>] ? child_rip+0x0/0x20
> Mar 7 06:25:14 vm1 kernel: INFO: task rgmanager:3108 blocked for more
> than 120 seconds.
> Mar 7 06:25:14 vm1 kernel: "echo 0 >
> /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> Mar 7 06:25:14 vm1 kernel: rgmanager D ffff8800280a2180 0 3108
> 3106 0x00000080
> Mar 7 06:25:14 vm1 kernel: ffff8805c65d9ce0 0000000000000286
> ffff8805c65d9ca8 ffff8805c65d9ca4
> Mar 7 06:25:14 vm1 kernel: ffffffff81b52fa8 ffff8805e6c24500
> ffff880028066180 0000000101fc1a6d
> Mar 7 06:25:14 vm1 kernel: ffff8805c8f2c6b0 ffff8805c65d9fd8
> 000000000000fc08 ffff8805c8f2c6b0
> Mar 7 06:25:14 vm1 kernel: Call Trace:
> Mar 7 06:25:14 vm1 kernel: [<ffffffff8149e715>]
> rwsem_down_failed_common+0x85/0x1c0
> Mar 7 06:25:14 vm1 kernel: [<ffffffff8149e8a6>]
> rwsem_down_read_failed+0x26/0x30
> Mar 7 06:25:14 vm1 kernel: [<ffffffff81239a14>]
> call_rwsem_down_read_failed+0x14/0x30
> Mar 7 06:25:14 vm1 kernel: [<ffffffff8149ddd4>] ?
down_read+0x24/0x30
> Mar 7 06:25:14 vm1 kernel: [<ffffffffa0422fdd>]
> dlm_clear_proc_locks+0x3d/0x260 [dlm]
> Mar 7 06:25:14 vm1 kernel: [<ffffffff8149e9b0>] ?
_spin_lock_irq+0x10/0x30
> Mar 7 06:25:14 vm1 kernel: [<ffffffffa042e556>]
device_close+0x66/0xc0
> [dlm]
> Mar 7 06:25:14 vm1 kernel: [<ffffffff811533a5>] __fput+0xf5/0x210
> Mar 7 06:25:14 vm1 kernel: [<ffffffff811534e5>] fput+0x25/0x30
> Mar 7 06:25:14 vm1 kernel: [<ffffffff8114eabd>] filp_close+0x5d/0x90
> Mar 7 06:25:14 vm1 kernel: [<ffffffff8114eb8a>] sys_close+0x9a/0xf0
> Mar 7 06:25:14 vm1 kernel: [<ffffffff810140f2>]
> system_call_fastpath+0x16/0x1b
> Mar 7 06:27:14 vm1 kernel: INFO: task clvmd:2376 blocked for more than
> 120 seconds.
> Mar 7 06:27:14 vm1 kernel: "echo 0 >
> /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> Mar 7 06:27:14 vm1 kernel: clvmd D ffff8800280de180 0 2376
> 1 0x00000080
> Mar 7 06:27:14 vm1 kernel: ffff8805c8413ce0 0000000000000282
> 0000000000000000 ffff8800280841e8
> Mar 7 06:27:14 vm1 kernel: 00001e756d9fc134 ffff8805c318d460
> 0000000000000001 0000000101fc1a63
> Mar 7 06:27:14 vm1 kernel: ffff8805c8411a10 ffff8805c8413fd8
> 000000000000fc08 ffff8805c8411a10
> Mar 7 06:27:14 vm1 kernel: Call Trace:
> Mar 7 06:27:14 vm1 kernel: [<ffffffff8149e715>]
> rwsem_down_failed_common+0x85/0x1c0
> Mar 7 06:27:14 vm1 kernel: [<ffffffff8149ead6>] ?
> _spin_unlock_irqrestore+0x16/0x20
> Mar 7 06:27:14 vm1 kernel: [<ffffffff8105d639>] ?
> try_to_wake_up+0x2f9/0x480
> Mar 7 06:27:14 vm1 kernel: [<ffffffff8149e8a6>]
> rwsem_down_read_failed+0x26/0x30
> Mar 7 06:27:14 vm1 kernel: [<ffffffff81239a14>]
> call_rwsem_down_read_failed+0x14/0x30
> Mar 7 06:27:14 vm1 kernel: [<ffffffff8149ddd4>] ?
down_read+0x24/0x30
> Mar 7 06:27:14 vm1 kernel: [<ffffffff8100922a>] ?
> hypercall_page+0x22a/0x1010
> Mar 7 06:27:14 vm1 kernel: [<ffffffffa0422fdd>]
> dlm_clear_proc_locks+0x3d/0x260 [dlm]
> Mar 7 06:27:14 vm1 kernel: [<ffffffff8101027d>] ?
> xen_force_evtchn_callback+0xd/0x10
> Mar 7 06:27:14 vm1 kernel: [<ffffffff81010b22>] ?
check_events+0x12/0x20
> Mar 7 06:27:14 vm1 kernel: [<ffffffff8149e9b0>] ?
_spin_lock_irq+0x10/0x30
> Mar 7 06:27:14 vm1 kernel: [<ffffffffa042e556>]
device_close+0x66/0xc0
> [dlm]
> Mar 7 06:27:14 vm1 kernel: [<ffffffff811533a5>] __fput+0xf5/0x210
> Mar 7 06:27:14 vm1 kernel: [<ffffffff811534e5>] fput+0x25/0x30
> Mar 7 06:27:14 vm1 kernel: [<ffffffff8114eabd>] filp_close+0x5d/0x90
> Mar 7 06:27:14 vm1 kernel: [<ffffffff8114eb8a>] sys_close+0x9a/0xf0
> Mar 7 06:27:14 vm1 kernel: [<ffffffff810140f2>]
> system_call_fastpath+0x16/0x1b
> Mar 7 06:27:14 vm1 kernel: INFO: task kslowd001:2523 blocked for more
> than 120 seconds.
> Mar 7 06:27:14 vm1 kernel: "echo 0 >
> /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> Mar 7 06:27:14 vm1 kernel: kslowd001 D ffff880028066180 0 2523
> 2 0x00000080
> Mar 7 06:27:14 vm1 kernel: ffff8805c7ecf998 0000000000000246
> 0000210000002923 0001f5052c320000
> Mar 7 06:27:14 vm1 kernel: 3200000000211000 211800000113052d
> 51052e3200000000 0000002120000000
> Mar 7 06:27:14 vm1 kernel: ffff8805c7ec86b0 ffff8805c7ecffd8
> 000000000000fc08 ffff8805c7ec86b0
> Mar 7 06:27:14 vm1 kernel: Call Trace:
> Mar 7 06:27:14 vm1 kernel: [<ffffffff8149e715>]
> rwsem_down_failed_common+0x85/0x1c0
> Mar 7 06:27:14 vm1 kernel: [<ffffffff8149e8a6>]
> rwsem_down_read_failed+0x26/0x30
> Mar 7 06:27:14 vm1 kernel: [<ffffffff81239a14>]
> call_rwsem_down_read_failed+0x14/0x30
> Mar 7 06:27:14 vm1 kernel: [<ffffffff8149ddd4>] ?
down_read+0x24/0x30
> Mar 7 06:27:14 vm1 kernel: [<ffffffffa04240d2>] dlm_lock+0x62/0x1e0
[dlm]
> Mar 7 06:27:14 vm1 kernel: [<ffffffff81238c94>] ?
vsnprintf+0x484/0x5f0
> Mar 7 06:27:14 vm1 kernel: [<ffffffff81010b22>] ?
check_events+0x12/0x20
> Mar 7 06:27:14 vm1 kernel: [<ffffffffa0471de2>] gdlm_lock+0xe2/0x120
> [gfs2]
> Mar 7 06:27:14 vm1 kernel: [<ffffffffa0471ee0>] ? gdlm_ast+0x0/0xe0
[gfs2]
> Mar 7 06:27:14 vm1 kernel: [<ffffffffa0471e20>] ? gdlm_bast+0x0/0x50
> [gfs2]
> Mar 7 06:27:14 vm1 kernel: [<ffffffffa0456ca3>] do_xmote+0x163/0x250
> [gfs2]
> Mar 7 06:27:14 vm1 kernel: [<ffffffffa045715a>] run_queue+0xfa/0x170
> [gfs2]
> Mar 7 06:27:14 vm1 kernel: [<ffffffffa045735a>]
gfs2_glock_nq+0x11a/0x330
> [gfs2]
> Mar 7 06:27:14 vm1 kernel: [<ffffffffa0457d79>]
> gfs2_glock_nq_num+0x69/0x90 [gfs2]
> Mar 7 06:27:14 vm1 kernel: [<ffffffff81010b22>] ?
check_events+0x12/0x20
> Mar 7 06:27:14 vm1 kernel: [<ffffffffa046a753>]
> gfs2_recover_work+0x93/0x7b0 [gfs2]
> Mar 7 06:27:14 vm1 kernel: [<ffffffff81010b0f>] ?
> xen_restore_fl_direct_end+0x0/0x1
> Mar 7 06:27:14 vm1 kernel: [<ffffffff8100c676>] ?
xen_mc_flush+0x106/0x250
> Mar 7 06:27:14 vm1 kernel: [<ffffffff8100c2dd>] ?
xen_write_cr0+0x4d/0xa0
> Mar 7 06:27:14 vm1 kernel: [<ffffffff8100b4be>] ?
> xen_end_context_switch+0x1e/0x30
> Mar 7 06:27:14 vm1 kernel: [<ffffffff81012776>] ?
__switch_to+0x166/0x320
> Mar 7 06:27:14 vm1 kernel: [<ffffffff810584b3>] ?
> finish_task_switch+0x53/0xe0
> Mar 7 06:27:14 vm1 kernel: [<ffffffffa0457d71>] ?
> gfs2_glock_nq_num+0x61/0x90 [gfs2]
> Mar 7 06:27:14 vm1 kernel: [<ffffffff81010b22>] ?
check_events+0x12/0x20
> Mar 7 06:27:14 vm1 kernel: [<ffffffff810f87d2>]
> slow_work_execute+0x232/0x310
> Mar 7 06:27:14 vm1 kernel: [<ffffffff810f8a07>]
> slow_work_thread+0x157/0x370
> Mar 7 06:27:14 vm1 kernel: [<ffffffff8108deb0>] ?
> autoremove_wake_function+0x0/0x40
> Mar 7 06:27:14 vm1 kernel: [<ffffffff810f88b0>] ?
> slow_work_thread+0x0/0x370
> Mar 7 06:27:14 vm1 kernel: [<ffffffff8108db56>] kthread+0x96/0xa0
> Mar 7 06:27:14 vm1 kernel: [<ffffffff8101514a>] child_rip+0xa/0x20
> Mar 7 06:27:14 vm1 kernel: [<ffffffff81014311>] ?
> int_ret_from_sys_call+0x7/0x1b
> Mar 7 06:27:14 vm1 kernel: [<ffffffff81014a9d>] ?
> retint_restore_args+0x5/0x6
> Mar 7 06:27:14 vm1 kernel: [<ffffffff81015140>] ? child_rip+0x0/0x20
> Mar 7 06:27:14 vm1 kernel: INFO: task gfs2_quotad:2531 blocked for more
> than 120 seconds.
> Mar 7 06:27:14 vm1 kernel: "echo 0 >
> /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> Mar 7 06:27:14 vm1 kernel: gfs2_quotad D ffff880028084180 0 2531
> 2 0x00000080
> Mar 7 06:27:14 vm1 kernel: ffff8805c57e9a98 0000000000000246
> 0000000000000000 0000000000000008
> Mar 7 06:27:14 vm1 kernel: 0000000000000001 0000000000016180
> 0000000000016180 0000000101fc6633
> Mar 7 06:27:14 vm1 kernel: ffff8805c85fc6f0 ffff8805c57e9fd8
> 000000000000fc08 ffff8805c85fc6f0
> Mar 7 06:27:14 vm1 kernel: Call Trace:
> Mar 7 06:27:14 vm1 kernel: [<ffffffff81059356>] ?
update_curr+0xe6/0x1e0
> Mar 7 06:27:14 vm1 kernel: [<ffffffff8149e715>]
> rwsem_down_failed_common+0x85/0x1c0
> Mar 7 06:27:14 vm1 kernel: [<ffffffff8149e8a6>]
> rwsem_down_read_failed+0x26/0x30
> Mar 7 06:27:14 vm1 kernel: [<ffffffff81239a14>]
> call_rwsem_down_read_failed+0x14/0x30
> Mar 7 06:27:14 vm1 kernel: [<ffffffff8149ddd4>] ?
down_read+0x24/0x30
> Mar 7 06:27:14 vm1 kernel: [<ffffffff8101027d>] ?
> xen_force_evtchn_callback+0xd/0x10
> Mar 7 06:27:14 vm1 kernel: [<ffffffffa04240d2>] dlm_lock+0x62/0x1e0
[dlm]
> Mar 7 06:27:14 vm1 kernel: [<ffffffff81010b0f>] ?
> xen_restore_fl_direct_end+0x0/0x1
> Mar 7 06:27:14 vm1 kernel: [<ffffffff8100c676>] ?
xen_mc_flush+0x106/0x250
> Mar 7 06:27:14 vm1 kernel: [<ffffffffa0471de2>] gdlm_lock+0xe2/0x120
> [gfs2]
> Mar 7 06:27:14 vm1 kernel: [<ffffffffa0471ee0>] ? gdlm_ast+0x0/0xe0
[gfs2]
> Mar 7 06:27:14 vm1 kernel: [<ffffffffa0471e20>] ? gdlm_bast+0x0/0x50
> [gfs2]
> Mar 7 06:27:14 vm1 kernel: [<ffffffffa0456ca3>] do_xmote+0x163/0x250
> [gfs2]
> Mar 7 06:27:14 vm1 kernel: [<ffffffff81010b22>] ?
check_events+0x12/0x20
> Mar 7 06:27:14 vm1 kernel: [<ffffffffa045715a>] run_queue+0xfa/0x170
> [gfs2]
> Mar 7 06:27:14 vm1 kernel: [<ffffffffa045735a>]
gfs2_glock_nq+0x11a/0x330
> [gfs2]
> Mar 7 06:27:14 vm1 kernel: [<ffffffffa046f234>]
> gfs2_statfs_sync+0x54/0x1a0 [gfs2]
> Mar 7 06:27:14 vm1 kernel: [<ffffffffa046f22c>] ?
> gfs2_statfs_sync+0x4c/0x1a0 [gfs2]
> Mar 7 06:27:14 vm1 kernel: [<ffffffffa0467006>]
> quotad_check_timeo+0x46/0xb0 [gfs2]
> Mar 7 06:27:14 vm1 kernel: [<ffffffffa0467134>]
gfs2_quotad+0xc4/0x1f0
> [gfs2]
> Mar 7 06:27:14 vm1 kernel: [<ffffffff8101027d>] ?
> xen_force_evtchn_callback+0xd/0x10
> Mar 7 06:27:14 vm1 kernel: [<ffffffff8108deb0>] ?
> autoremove_wake_function+0x0/0x40
> Mar 7 06:27:14 vm1 kernel: [<ffffffffa0467070>] ?
gfs2_quotad+0x0/0x1f0
> [gfs2]
> Mar 7 06:27:14 vm1 kernel: [<ffffffff8108db56>] kthread+0x96/0xa0
> Mar 7 06:27:14 vm1 kernel: [<ffffffff8101514a>] child_rip+0xa/0x20
> Mar 7 06:27:14 vm1 kernel: [<ffffffff81014311>] ?
> int_ret_from_sys_call+0x7/0x1b
> Mar 7 06:27:14 vm1 kernel: [<ffffffff81014a9d>] ?
> retint_restore_args+0x5/0x6
> Mar 7 06:27:14 vm1 kernel: [<ffffffff81015140>] ? child_rip+0x0/0x20
> Mar 7 06:27:14 vm1 kernel: INFO: task rgmanager:3108 blocked for more
> than 120 seconds.
> Mar 7 06:27:14 vm1 kernel: "echo 0 >
> /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> Mar 7 06:27:14 vm1 kernel: rgmanager D ffff8800280a2180 0 3108
> 3106 0x00000080
> Mar 7 06:27:14 vm1 kernel: ffff8805c65d9ce0 0000000000000286
> ffff8805c65d9ca8 ffff8805c65d9ca4
> Mar 7 06:27:14 vm1 kernel: ffffffff81b52fa8 ffff8805e6c24500
> ffff880028066180 0000000101fc1a6d
> Mar 7 06:27:14 vm1 kernel: ffff8805c8f2c6b0 ffff8805c65d9fd8
> 000000000000fc08 ffff8805c8f2c6b0
> Mar 7 06:27:14 vm1 kernel: Call Trace:
> Mar 7 06:27:14 vm1 kernel: [<ffffffff8149e715>]
> rwsem_down_failed_common+0x85/0x1c0
> Mar 7 06:27:14 vm1 kernel: [<ffffffff8149e8a6>]
> rwsem_down_read_failed+0x26/0x30
> Mar 7 06:27:14 vm1 kernel: [<ffffffff81239a14>]
> call_rwsem_down_read_failed+0x14/0x30
> Mar 7 06:27:14 vm1 kernel: [<ffffffff8149ddd4>] ?
down_read+0x24/0x30
> Mar 7 06:27:14 vm1 kernel: [<ffffffffa0422fdd>]
> dlm_clear_proc_locks+0x3d/0x260 [dlm]
> Mar 7 06:27:14 vm1 kernel: [<ffffffff8149e9b0>] ?
_spin_lock_irq+0x10/0x30
> Mar 7 06:27:14 vm1 kernel: [<ffffffffa042e556>]
device_close+0x66/0xc0
> [dlm]
> Mar 7 06:27:14 vm1 kernel: [<ffffffff811533a5>] __fput+0xf5/0x210
> Mar 7 06:27:14 vm1 kernel: [<ffffffff811534e5>] fput+0x25/0x30
> Mar 7 06:27:14 vm1 kernel: [<ffffffff8114eabd>] filp_close+0x5d/0x90
> Mar 7 06:27:14 vm1 kernel: [<ffffffff8114eb8a>] sys_close+0x9a/0xf0
> Mar 7 06:27:14 vm1 kernel: [<ffffffff810140f2>]
> system_call_fastpath+0x16/0x1b
> Mar 7 06:27:14 vm1 kernel: INFO: task flush-253:5:4169 blocked for more
> than 120 seconds.
> Mar 7 06:27:14 vm1 kernel: "echo 0 >
> /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> Mar 7 06:27:14 vm1 kernel: flush-253:5 D ffff880028084180 0 4169
> 2 0x00000080
> Mar 7 06:27:14 vm1 kernel: ffff880533b31a80 0000000000000246
> 0000000000000000 000000000000002a
> Mar 7 06:27:14 vm1 kernel: ffffffff8101027d ffff880533b31a50
> ffffffff81010b22 0000000101fc64fb
> Mar 7 06:27:14 vm1 kernel: ffff8805bee57080 ffff880533b31fd8
> 000000000000fc08 ffff8805bee57080
> Mar 7 06:27:14 vm1 kernel: Call Trace:
> Mar 7 06:27:14 vm1 kernel: [<ffffffff8101027d>] ?
> xen_force_evtchn_callback+0xd/0x10
> Mar 7 06:27:14 vm1 kernel: [<ffffffff81010b22>] ?
check_events+0x12/0x20
> Mar 7 06:27:14 vm1 kernel: [<ffffffff81010b0f>] ?
> xen_restore_fl_direct_end+0x0/0x1
> Mar 7 06:27:14 vm1 kernel: [<ffffffffa0455380>] ?
> gfs2_glock_holder_wait+0x0/0x20 [gfs2]
> Mar 7 06:27:14 vm1 kernel: [<ffffffffa045538e>]
> gfs2_glock_holder_wait+0xe/0x20 [gfs2]
> Mar 7 06:27:14 vm1 kernel: [<ffffffff8149d37f>]
__wait_on_bit+0x5f/0x90
> Mar 7 06:27:14 vm1 kernel: [<ffffffffa045e175>] ?
> gfs2_aspace_writepage+0x105/0x170 [gfs2]
> Mar 7 06:27:14 vm1 kernel: [<ffffffffa0455380>] ?
> gfs2_glock_holder_wait+0x0/0x20 [gfs2]
> Mar 7 06:27:14 vm1 kernel: [<ffffffff8149d428>]
> out_of_line_wait_on_bit+0x78/0x90
> Mar 7 06:27:14 vm1 kernel: [<ffffffff8108def0>] ?
> wake_bit_function+0x0/0x50
> Mar 7 06:27:14 vm1 kernel: [<ffffffffa0456506>]
gfs2_glock_wait+0x36/0x40
> [gfs2]
> Mar 7 06:27:14 vm1 kernel: [<ffffffffa04574c0>]
gfs2_glock_nq+0x280/0x330
> [gfs2]
> Mar 7 06:27:14 vm1 kernel: [<ffffffffa046fe22>]
> gfs2_write_inode+0x82/0x190 [gfs2]
> Mar 7 06:27:14 vm1 kernel: [<ffffffffa046fe1a>] ?
> gfs2_write_inode+0x7a/0x190 [gfs2]
> Mar 7 06:27:14 vm1 kernel: [<ffffffff8117807c>]
> writeback_single_inode+0x22c/0x310
> Mar 7 06:27:14 vm1 kernel: [<ffffffff81178434>]
> writeback_inodes_wb+0x194/0x540
> Mar 7 06:27:14 vm1 kernel: [<ffffffff811788ea>]
wb_writeback+0x10a/0x1d0
> Mar 7 06:27:14 vm1 kernel: [<ffffffff8149d019>] ?
> schedule_timeout+0x199/0x2f0
> Mar 7 06:27:14 vm1 kernel: [<ffffffff81178b44>]
> wb_do_writeback+0x194/0x1a0
> Mar 7 06:27:14 vm1 kernel: [<ffffffff81178ba3>]
> bdi_writeback_task+0x53/0xf0
> Mar 7 06:27:14 vm1 kernel: [<ffffffff8111b660>] ?
bdi_start_fn+0x0/0xe0
> Mar 7 06:27:14 vm1 kernel: [<ffffffff8111b6d1>]
bdi_start_fn+0x71/0xe0
> Mar 7 06:27:14 vm1 kernel: [<ffffffff8111b660>] ?
bdi_start_fn+0x0/0xe0
> Mar 7 06:27:14 vm1 kernel: [<ffffffff8108db56>] kthread+0x96/0xa0
> Mar 7 06:27:14 vm1 kernel: [<ffffffff8101514a>] child_rip+0xa/0x20
> Mar 7 06:27:14 vm1 kernel: [<ffffffff81014311>] ?
> int_ret_from_sys_call+0x7/0x1b
> Mar 7 06:27:14 vm1 kernel: [<ffffffff81014a9d>] ?
> retint_restore_args+0x5/0x6
> Mar 7 06:27:14 vm1 kernel: [<ffffffff81015140>] ? child_rip+0x0/0x20
> Mar 7 06:27:14 vm1 kernel: INFO: task dd:21764 blocked for more than 120
> seconds.
> Mar 7 06:27:14 vm1 kernel: "echo 0 >
> /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> Mar 7 06:27:14 vm1 kernel: dd D ffff880028066180 0 21764
> 4165 0x00000080
> Mar 7 06:27:14 vm1 kernel: ffff8804850c1820 0000000000000282
> 0000000000000000 000000000000000b
> Mar 7 06:27:14 vm1 kernel: 00000040ffffffff 0000000000000000
> ffff880000031908 0000000101fc5c6f
> Mar 7 06:27:14 vm1 kernel: ffff8805bf8650c0 ffff8804850c1fd8
> 000000000000fc08 ffff8805bf8650c0
> Mar 7 06:27:14 vm1 kernel: Call Trace:
> Mar 7 06:27:14 vm1 kernel: [<ffffffff8149e715>]
> rwsem_down_failed_common+0x85/0x1c0
> Mar 7 06:27:14 vm1 kernel: [<ffffffff8149e8a6>]
> rwsem_down_read_failed+0x26/0x30
> Mar 7 06:27:14 vm1 kernel: [<ffffffff81239a14>]
> call_rwsem_down_read_failed+0x14/0x30
> Mar 7 06:27:14 vm1 kernel: [<ffffffff8149ddd4>] ?
down_read+0x24/0x30
> Mar 7 06:27:14 vm1 kernel: [<ffffffff8101027d>] ?
> xen_force_evtchn_callback+0xd/0x10
> Mar 7 06:27:14 vm1 kernel: [<ffffffffa04240d2>] dlm_lock+0x62/0x1e0
[dlm]
> Mar 7 06:27:14 vm1 kernel: [<ffffffff8108dd77>] ?
bit_waitqueue+0x17/0xd0
> Mar 7 06:27:14 vm1 kernel: [<ffffffff8108de9f>] ?
wake_up_bit+0x2f/0x40
> Mar 7 06:27:14 vm1 kernel: [<ffffffffa0471de2>] gdlm_lock+0xe2/0x120
> [gfs2]
> Mar 7 06:27:14 vm1 kernel: [<ffffffffa0471ee0>] ? gdlm_ast+0x0/0xe0
[gfs2]
> Mar 7 06:27:14 vm1 kernel: [<ffffffffa0471e20>] ? gdlm_bast+0x0/0x50
> [gfs2]
> Mar 7 06:27:14 vm1 kernel: [<ffffffffa0456ca3>] do_xmote+0x163/0x250
> [gfs2]
> Mar 7 06:27:14 vm1 kernel: [<ffffffffa045715a>] run_queue+0xfa/0x170
> [gfs2]
> Mar 7 06:27:14 vm1 kernel: [<ffffffffa045735a>]
gfs2_glock_nq+0x11a/0x330
> [gfs2]
> Mar 7 06:27:14 vm1 kernel: [<ffffffffa046d895>]
> gfs2_inplace_reserve_i+0x385/0x820 [gfs2]
> Mar 7 06:27:14 vm1 kernel: [<ffffffff8101027d>] ?
> xen_force_evtchn_callback+0xd/0x10
> Mar 7 06:27:14 vm1 kernel: [<ffffffff811404c5>] ?
> kmem_cache_alloc_notrace+0x115/0x130
> Mar 7 06:27:14 vm1 kernel: [<ffffffffa0459a07>]
gfs2_createi+0x1a7/0xbb0
> [gfs2]
> Mar 7 06:27:14 vm1 kernel: [<ffffffff811e7261>] ?
avc_has_perm+0x71/0x90
> Mar 7 06:27:14 vm1 kernel: [<ffffffffa046559e>]
gfs2_create+0x7e/0x1b0
> [gfs2]
> Mar 7 06:27:14 vm1 kernel: [<ffffffffa04598ff>] ?
gfs2_createi+0x9f/0xbb0
> [gfs2]
> Mar 7 06:27:14 vm1 kernel: [<ffffffff8115ee64>] vfs_create+0xb4/0xe0
> Mar 7 06:27:14 vm1 kernel: [<ffffffff811627b2>]
do_filp_open+0x922/0xc70
> Mar 7 06:27:14 vm1 kernel: [<ffffffff8116df95>] ?
alloc_fd+0x95/0x160
> Mar 7 06:27:14 vm1 kernel: [<ffffffff8114ec49>]
do_sys_open+0x69/0x140
> Mar 7 06:27:14 vm1 kernel: [<ffffffff8114ed60>] sys_open+0x20/0x30
> Mar 7 06:27:14 vm1 kernel: [<ffffffff810140f2>]
> system_call_fastpath+0x16/0x1b
> Mar 7 06:29:14 vm1 kernel: INFO: task clvmd:2376 blocked for more than
> 120 seconds.
> Mar 7 06:29:14 vm1 kernel: "echo 0 >
> /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> Mar 7 06:29:14 vm1 kernel: clvmd D ffff8800280de180 0 2376
> 1 0x00000080
> Mar 7 06:29:14 vm1 kernel: ffff8805c8413ce0 0000000000000282
> 0000000000000000 ffff8800280841e8
> Mar 7 06:29:14 vm1 kernel: 00001e756d9fc134 ffff8805c318d460
> 0000000000000001 0000000101fc1a63
> Mar 7 06:29:14 vm1 kernel: ffff8805c8411a10 ffff8805c8413fd8
> 000000000000fc08 ffff8805c8411a10
> Mar 7 06:29:14 vm1 kernel: Call Trace:
> Mar 7 06:29:14 vm1 kernel: [<ffffffff8149e715>]
> rwsem_down_failed_common+0x85/0x1c0
> Mar 7 06:29:14 vm1 kernel: [<ffffffff8149ead6>] ?
> _spin_unlock_irqrestore+0x16/0x20
> Mar 7 06:29:14 vm1 kernel: [<ffffffff8105d639>] ?
> try_to_wake_up+0x2f9/0x480
> Mar 7 06:29:14 vm1 kernel: [<ffffffff8149e8a6>]
> rwsem_down_read_failed+0x26/0x30
> Mar 7 06:29:14 vm1 kernel: [<ffffffff81239a14>]
> call_rwsem_down_read_failed+0x14/0x30
> Mar 7 06:29:14 vm1 kernel: [<ffffffff8149ddd4>] ?
down_read+0x24/0x30
> Mar 7 06:29:14 vm1 kernel: [<ffffffff8100922a>] ?
> hypercall_page+0x22a/0x1010
> Mar 7 06:29:14 vm1 kernel: [<ffffffffa0422fdd>]
> dlm_clear_proc_locks+0x3d/0x260 [dlm]
> Mar 7 06:29:14 vm1 kernel: [<ffffffff8101027d>] ?
> xen_force_evtchn_callback+0xd/0x10
> Mar 7 06:29:14 vm1 kernel: [<ffffffff81010b22>] ?
check_events+0x12/0x20
> Mar 7 06:29:14 vm1 kernel: [<ffffffff8149e9b0>] ?
_spin_lock_irq+0x10/0x30
> Mar 7 06:29:14 vm1 kernel: [<ffffffffa042e556>]
device_close+0x66/0xc0
> [dlm]
> Mar 7 06:29:14 vm1 kernel: [<ffffffff811533a5>] __fput+0xf5/0x210
> Mar 7 06:29:14 vm1 kernel: [<ffffffff811534e5>] fput+0x25/0x30
> Mar 7 06:29:14 vm1 kernel: [<ffffffff8114eabd>] filp_close+0x5d/0x90
> Mar 7 06:29:14 vm1 kernel: [<ffffffff8114eb8a>] sys_close+0x9a/0xf0
> Mar 7 06:29:14 vm1 kernel: [<ffffffff810140f2>]
> system_call_fastpath+0x16/0x1b
> 7 Mar 08:00:02 ntpdate[21981]: adjust time server 172.19.0.175 offset
> 0.090922 sec
> Mar 7 08:38:35 vm1 ipwatchd[2801]: MAC address 0:1e:90:66:8e:f8 causes IP
> conflict with address 172.19.0.91 set on interface eth1 - passive mode -
> reply not sent
> Mar 7 08:44:20 vm1 avahi-daemon[2364]: Invalid query packet.
> Mar 7 08:44:20 vm1 avahi-daemon[2364]: Invalid query packet.
> Mar 7 08:44:21 vm1 avahi-daemon[2364]: Invalid query packet.
> Mar 7 08:44:21 vm1 avahi-daemon[2364]: Invalid query packet.
> Mar 7 08:44:21 vm1 avahi-daemon[2364]: Invalid query packet.
> Mar 7 08:44:24 vm1 avahi-daemon[2364]: Invalid query packet.
> Mar 7 08:44:33 vm1 avahi-daemon[2364]: Invalid query packet.
> Mar 7 08:45:00 vm1 avahi-daemon[2364]: Invalid query packet.
> Mar 7 08:47:40 vm1 ipwatchd[2801]: MAC address 0:1e:90:66:8e:f8 causes IP
> conflict with address 172.19.0.91 set on interface eth1 - passive mode -
> reply not sent
> Mar 7 08:52:30 vm1 ipwatchd[2801]: MAC address 0:1e:90:66:8e:f8 causes IP
> conflict with address 172.19.0.91 set on interface eth1 - passive mode -
> reply not sent
> Mar 7 09:06:13 vm1 ipwatchd[2801]: MAC address 0:1e:90:66:8e:f8 causes IP
> conflict with address 172.19.0.91 set on interface eth1 - passive mode -
> reply not sent
> Mar 7 09:19:55 vm1 ipwatchd[2801]: MAC address 0:1e:90:66:8e:f8 causes IP
> conflict with address 172.19.0.91 set on interface eth1 - passive mode -
> reply not sent
> Mar 7 09:33:16 vm1 ipwatchd[2801]: MAC address 0:1e:90:66:8e:f8 causes IP
> conflict with address 172.19.0.91 set on interface eth1 - passive mode -
> reply not sent
> Mar 7 09:36:10 vm1 ipwatchd[2801]: MAC address 0:1e:90:66:8e:f8 causes IP
> conflict with address 172.19.0.91 set on interface eth1 - passive mode -
> reply not sent
>
>
> clustat:
> Cluster Status for gfscluster @ Wed Mar 7 10:04:48 2012
> Member Status: Quorate
>
> Member Name ID Status
> ------ ---- ---- ------
> vm1 1 Offline
> vm2 2 Online, Local
>
> cat /etc/fstab
>
> #
> # /etc/fstab
> # Created by anaconda on Fri Dec 16 10:16:46 2011
> #
> # Accessible filesystems, by reference, are maintained under '/dev/disk'
> # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
> #
> /dev/mapper/VolGroup-lv_root                  /       ext4    defaults         1 1
> UUID=3e3d4927-c9ce-4543-96b7-940eaa429a49     /boot   ext4    defaults         1 2
> # UUID=cda393e2-cd8c-471a-9a42-11373d467e8e   /vm     ocfs2   _netdev,nointr   0 0
> UUID="ae2f16af-f5c7-d867-75ad-c62f031a0b5f"   /vm     gfs2    _netdev,noatime  0 0
> /dev/mapper/VolGroup-lv_home                  /home   ext4    defaults         1 2
> /dev/mapper/VolGroup-lv_swap                  swap    swap    defaults         0 0
> tmpfs /dev/shm tmpfs defaults 0 0
> devpts /dev/pts devpts gid=5,mode=620 0 0
> sysfs /sys sysfs defaults 0 0
> proc /proc proc defaults 0 0
>
>
> *We ran into the same problems with OCFS2 before, which is why we switched to
> GFS2. When I booted the stock Red Hat kernel instead, a week of testing showed
> no problem; the problems above only occur with the Xen kernel. How can I fix
> this? Could some kernel parameter be the cause?*
>
_______________________________________________
Xen-users mailing list
Xen-users@lists.xen.org
http://lists.xen.org/xen-users