search for: node2

Displaying 20 results from an estimated 327 matches for "node2".

2010 Oct 23
1
Reg: ocfs2 two node cluster crashed, node2 crashed, when I rebooted node1 for maintenance.
Hi All, We have an OCFS2 two-node cluster with Oracle 11g RAC running. node2 crashed automatically when I rebooted node1 for maintenance. Please check the log from node2 from just before the crash. Oct 23 15:42:25 node2 kernel: ocfs2_dlm: Nodes in domain ("029C02C993E44E90879922E268FB161A"): 2 Oct 23 15:42:29 node2 kernel: ocfs2_dlm: Node 1 leaves domain 2...
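When the surviving node dies as soon as its peer goes away, the o2cb heartbeat and network timeouts are a common first suspect. A minimal diagnostic sketch, assuming a stock ocfs2-tools install (file paths and variable names may differ by distro):

    # Inspect the configured o2cb timeouts on the surviving node
    cat /etc/sysconfig/o2cb    # O2CB_HEARTBEAT_THRESHOLD, O2CB_IDLE_TIMEOUT_MS, ...
    service o2cb status        # confirm the cluster stack and heartbeat state
    service o2cb configure     # interactively raise the thresholds if needed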
2008 Nov 17
1
[LLVMdev] Question about ExtractLoop
...Loop() in CodeExtractor.cpp. The sample code is a simple list traversal, as attached. The generated bitcode (from llvm-gcc -O1) is shown below. -------------------------------------------------------------------------------------------------------------------------------- define i32 @walk(%struct.node2* %list2) nounwind { entry: %0 = icmp eq %struct.node2* %list2, null ; <i1> [#uses=1] br i1 %0, label %bb2, label %bb bb: ; preds = %bb, %entry %list2_addr.05 = phi %struct.node2* [ %list2, %entry ], [ %5, %bb ] ; <%struct.node...
2010 Sep 15
0
problem with gfs_controld
...error -1 Sep 13 08:19:11 NODE1 gfs_controld[3101]: cpg_mcast_joined error 2 handle 2846d7ad00000000 MSG_PLOCK Sep 13 08:19:11 NODE1 gfs_controld[3101]: send plock message error -1 When this happens, access to Samba services on the other node begins to freeze and this error appears: Sep 13 08:08:22 NODE2 kernel: INFO: task smbd:23084 blocked for more than 120 seconds. Sep 13 08:08:22 NODE2 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. Sep 13 08:08:22 NODE2 kernel: smbd D ffff810001576420 0 23084 6602 23307 19791 (NOTLB) Sep 13...
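cpg_mcast_joined errors from gfs_controld usually mean the node has lost its openais/cman membership, so plock traffic can no longer reach the group. A hedged first-pass check on both nodes, using the standard RHCS tools (no cluster-specific names assumed):

    cman_tool status    # quorum, votes and membership state
    cman_tool nodes     # is the peer still listed as a member?
    group_tool ls       # state of the fence, dlm and gfs groups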
2003 Oct 09
4
howto enable ssh on connect to rsync daemon
Hi, I'm trying to set up automatic syncing of files over SSH from node2 to node1. node2 is the "server" and node1 is the "client". I have set up rsync like this on the nodes, which run AIX (4.3.3 and 5.1): 1. Installed rsync from the Linux Toolbox (rsync-2.5.4-1.aix4.3.ppc.rpm) on both nodes. 2. Added the following to both nodes' /etc/services file: rsy...
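For the daemon-over-SSH setup this thread asks about, rsync can spawn the remote daemon through SSH itself, so no listening rsync port is needed. A minimal sketch, assuming a daemon module named "data" on node2 (the module name and paths are illustrative):

    # Double colon selects a daemon module; -e ssh tunnels the connection
    rsync -av -e ssh node2::data/ /local/dest/
    # Plain rsync-over-SSH (no daemon or module at all) also works:
    rsync -av -e ssh node2:/some/path/ /local/dest/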
2010 Aug 16
1
WG: HOWTO samba4 centos5.5 named dnsupdate drbd simple failover
...s 5.5 and do replication: links: http://wiki.samba.org/index.php/Samba4_joining_a_domain First of all, do all the same as for the first CentOS samba4 box, but do not provision; no smb.conf in /usr/local/samba/etc. Important things: both servers must know each other. So if named is installed on the second server (node2), you need to tell it in its named.conf that the first server (node1) is a forwarder for, for example, "tuebingen.tst.loc". Example: my named.conf on node2 - 192.168.135.27 is node1, options { listen-on port 53 { 127.0.0.1; 192.168.134.28; }; listen-on-v6 port 53 { ::1; };...
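One hedged way to express "node1 is the forwarder for tuebingen.tst.loc" in node2's named.conf is a forward zone; the IP below is node1's address from the excerpt, and the zone name is the example domain the post uses:

    # Appended to /etc/named.conf on node2 (BIND syntax)
    zone "tuebingen.tst.loc" {
        type forward;
        forward only;
        forwarders { 192.168.135.27; };   # node1
    };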
2010 Aug 09
2
HOWTO samba4 centos5.5 named dnsupdate drbd simple failover
...s 5.5 and do replication: links: http://wiki.samba.org/index.php/Samba4_joining_a_domain First of all, do all the same as for the first CentOS samba4 box, but do not provision; no smb.conf in /usr/local/samba/etc. Important things: both servers must know each other. So if named is installed on the second server (node2), you need to tell it in its named.conf that the first server (node1) is a forwarder for, for example, "tuebingen.tst.loc". Example: my named.conf on node2 - 192.168.135.27 is node1, options { listen-on port 53 { 127.0.0.1; 192.168.134.28; }; listen-on-v6 port 53 { ::1; };...
2010 Oct 05
0
WG: HOWTO samba4 centos5.5 named dnsupdate drbd simple failover
...s 5.5 and do replication: links: http://wiki.samba.org/index.php/Samba4_joining_a_domain First of all, do all the same as for the first CentOS samba4 box, but do not provision; no smb.conf in /usr/local/samba/etc. Important things: both servers must know each other. So if named is installed on the second server (node2), you need to tell it in its named.conf that the first server (node1) is a forwarder for, for example, "tuebingen.tst.loc". Example: my named.conf on node2 - 192.168.135.27 is node1, options { listen-on port 53 { 127.0.0.1; 192.168.134.28; }; listen-on-v6 port 53 { ::1; };...
2013 Jun 06
0
cross link connection fall down
...X/TX igb: eth4 NIC Link is Down igb: eth4 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX igb: eth4 NIC Link is Down If I bring the connection up manually (ethtool -r eth4), I just have to wait and the status shows "down" again. $ grep eth4 /var/log/messages Jun 5 18:35:17 node2 kernel: igb 0000:41:00.2: added PHC on eth4 Jun 5 18:35:17 node2 kernel: igb 0000:41:00.2: eth4: (PCIe:5.0Gb/s:Width x4) Jun 5 18:35:17 node2 kernel: igb 0000:41:00.2: eth4: PBA No: G13158-000 Jun 5 18:35:17 node2 kernel: 8021q: adding VLAN 0 to HW filter on device eth4 Jun 5 18:35:17 node2 ker...
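Before suspecting the igb driver, it is worth ruling out negotiation and cabling problems on the cross link. A hedged diagnostic sketch (standard ethtool invocations; the interface name is from the excerpt):

    ethtool eth4                          # link state, speed, duplex, autoneg
    ethtool -S eth4 | grep -iE 'err|crc'  # are error counters climbing?
    dmesg | grep -i eth4                  # driver-level link up/down messages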
2011 Mar 03
1
OCFS2 1.4 + DRBD + iSCSI problem with DLM
2013 Nov 21
3
Sync data
...ys! I have 2 servers in replicate mode; node 1 has all the data and node 2 is empty. I created a volume (gv0) and started it. Now, how can I synchronize all the files from node 1 to node 2? Steps that I followed: gluster peer probe node1 gluster volume create gv0 replica 2 node1:/data node2:/data gluster volume start gv0 thanks!
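With a 2-way replica volume, the empty brick is normally repopulated by self-heal rather than by a manual copy. A hedged sketch using the volume name from the excerpt (this command set assumes GlusterFS 3.3 or later):

    gluster volume heal gv0 full   # trigger a full self-heal sweep
    gluster volume heal gv0 info   # watch the remaining entries drain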
2016 Mar 10
0
different uuids, but still "Attempt to migrate guest to same host" error
...odes I've put SELinux into permissive mode and disabled firewalld. Interesting bit: Debugging a bit further, I put the VM into an unmanaged state and then try with virsh, from the node currently running the VM: [root@node1 ~]# virsh migrate --live --verbose testvm qemu+ssh://node2/system error: internal error: Attempt to migrate guest to the same host node1.example.tld A quick google points toward UUID problems; however, the two nodes are, afaict, working with different UUIDs. (Substantiating info shown toward the end.) I thought that since `hostname` only returns the nod...
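libvirt decides "same host" from the host UUID rather than the hostname, so colliding SMBIOS UUIDs (common on cloned machines) produce exactly this error. A hedged check, run on both nodes:

    virsh sysinfo | grep -i uuid   # the UUID libvirt sees via SMBIOS
    dmidecode -s system-uuid       # what the firmware itself reports
    # If they collide, setting a distinct host_uuid in
    # /etc/libvirt/libvirtd.conf and restarting libvirtd is one way out.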
2016 Oct 12
2
Selection DAG adding node question
I am having trouble adding a node to the selection DAG (e.g. during combine). E.g. node1 -> use1, use2. Now if you add a node2, with node1 -> node2 and node2's number of output values equal to node1's number of output values, then combine (as well as e.g. the promotion pass) will replace all of node1's uses with node2, leaving node1 dead. While this is kind of expected, does this mean a) always create a new node (can you clone n...
2007 Oct 04
0
Dom0 crash at 40Mbps Iperf traffic with only 80% CPU utilization ??
...n Emulab (https://www.emulab.net/) Here is how the 3-node topology looks (this topology is specified by means of an NS-2 file): Node0:eth0 ---- [ Node1:eth0 | Node1:eth1 ] ---- Node2:eth0 Node0 and Node2 run some standard 2.6.* kernel whereas Node1 runs para-virtualized Xen 3.0 using LVM-created root & swap partitions for DomU (2.6.12-xenU) Traffic flow...
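For reference, one way to reproduce the kind of generator implied by the subject line is classic iperf2; the 40 Mbit/s rate is from the subject, and the UDP flags below are an assumption since the post does not say whether the run was TCP or UDP:

    iperf -s                         # on Node2, the receiver
    iperf -u -c node2 -b 40M -t 60   # on Node0: ~40 Mbit/s UDP for 60 s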
2007 Feb 02
1
Default router on different subnet - Is it possible?
In the below example, normal Internet traffic is routed as shown from node1 to internet1. The node1 defaultrouter points to the firewall and the firewall points to internet1. For node2, is it possible to add its own link to the Internet passing through the same default router and firewall as node1? The links do not need to fail over or provide any redundancy. node1---node1 defaultrouter--firewall--internet1 (default route node1) node2--/ \-internet2...
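One hedged way to do this, assuming a Linux firewall with iproute2, is policy routing: traffic sourced from node2 keeps the shared next hop but gets its own routing table whose default route exits via internet2. The addresses below are illustrative:

    # On the firewall
    ip route add default via 203.0.113.1 table 100   # internet2's gateway
    ip rule add from 192.0.2.50 table 100            # node2's source address
    ip route flush cache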
2017 Jul 31
2
Possible stale .glusterfs/indices/xattrop file?
To quickly summarize my current situation: on node2 I have found the following xattrop indices file, which matches the GFID from the "heal info" command (below is the output of "ls -lai"): 2798404 ---------- 2 root root 0 Apr 28 22:51 /data/myvolume/brick/.glusterfs/indices/xattrop/29e0d13e-1217-41cc-9bda-1fbbf781c397 As you c...
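One hedged way to dig further: compare the hard-link count on the index entry with the AFR changelog xattrs on the backing GFID path (paths are from the excerpt; the exact xattr names vary per volume):

    stat -c '%h %n' /data/myvolume/brick/.glusterfs/indices/xattrop/29e0d13e-1217-41cc-9bda-1fbbf781c397
    getfattr -d -m . -e hex /data/myvolume/brick/.glusterfs/29/e0/29e0d13e-1217-41cc-9bda-1fbbf781c397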
2010 Mar 24
3
mounting gfs partition hangs
...d two machines for testing GFS filesystems. They are attached to an iSCSI device and the CentOS versions are: CentOS release 5.4 (Final) Linux node1.fib.upc.es 2.6.18-164.el5 #1 SMP Thu Sep 3 03:33:56 EDT 2009 i686 i686 i386 GNU/Linux The problem is that if I try to mount a GFS partition, it hangs. [root@node2 ~]# cman_tool status Version: 6.2.0 Config Version: 29 Cluster Name: gfs-test Cluster Id: 25790 Cluster Member: Yes Cluster Generation: 4156 Membership state: Cluster-Member Nodes: 2 Expected votes: 2 Quorum device votes: 2 Total votes: 4 Quorum: 3 Active subsystems: 9 Flags: Ports Bound: 0 Node n...
2005 Feb 02
1
Migration Errors
Hi, I am trying to migrate a domain (testDomain) from node1 to node2. Here is some information about the problem: node1#: xm migrate testDomain node2 After the command is executed, there is some communication between the node1 and node2 Xen hosts. The xfrd daemon (which is forked by xend) is primarily responsible for VM migration, and there is an error in its log (/...
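Migration failures at this stage are often a relocation-server problem rather than an xfrd one: the destination's xend must accept relocation connections. A hedged sketch of the usual checks (Xen 3.x option names; 8002 is the default relocation port):

    # On node2, in /etc/xen/xend-config.sxp:
    #   (xend-relocation-server yes)
    #   (xend-relocation-port 8002)
    #   (xend-relocation-hosts-allow '')   # or a regex allowing node1
    xm migrate --live testDomain node2     # --live avoids a long pause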
2013 Dec 17
1
Speed issue in only one direction
Hi all, I'm back again with my speed issues. The past issues were dependent on the network I used. Now I run my tests in a lab, with 2 configurations linked by a Gigabit switch: node1: Intel Core i5-2400 with Debian 7.2 node2: Intel Core i5-3570 with Debian 7.2 Both have AES and PCLMULQDQ announced in /proc/cpuinfo. I use tinc 1.1 from Git. When I run an iperf test from node2 (client) to node1 (server) with default options, I get 600+ Mbit/s. When I run an iperf test from node1 (client) to node2 (server) with defaul...
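To localize whether the asymmetry comes from tinc or from the underlying path, the same test can be run over both the tunnel addresses and the bare NIC addresses. A hedged sketch (iperf2 syntax; host names from the excerpt):

    node1$ iperf -s              # receiver
    node2$ iperf -c node1        # the fast direction in the report
    node2$ iperf -s
    node1$ iperf -c node2        # the slow direction; compare tunnel vs. bare NIC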
2017 Jul 31
2
Possible stale .glusterfs/indices/xattrop file?
.../brick -samefile /data/myvolume/brick/.glusterfs/29/e0/29e0d13e-1217-41cc-9bda-1fbbf781c397 -ls node1: 8404683 0 lrwxrwxrwx 1 root root 66 Jul 27 15:43 /data/myvolume/brick/.glusterfs/29/e0/29e0d13e-1217-41cc-9bda-1fbbf781c397 -> ../../fe/c0/fec0e4f4-38d2-4e2e-b5db-fdc0b9b54810/OC_DEFAULT_MODULE node2: 8394638 0 lrwxrwxrwx 1 root root 66 Jul 27 15:43 /data/myvolume/brick/.glusterfs/29/e0/29e0d13e-1217-41cc-9bda-1fbbf781c397 -> ../../fe/c0/fec0e4f4-38d2-4e2e-b5db-fdc0b9b54810/OC_DEFAULT_MODULE arbiternode: find: '/data/myvolume/brick/.glusterfs/29/e0/29e0d13e-1217-41cc-9bda-1fbbf781c397...
2024 Jan 01
1
Replacing Failed Server Failing
Hi All (and Happy New Year), We had to replace one of our Gluster servers in our trusted pool this week (node1). The new server is now built, with empty folders for the bricks, peered to the old nodes (node2 & node3). We basically followed this guide: https://docs.rackspace.com/docs/recover-from-a-failed-server-in-a-glusterfs-array We are using the same/old IP address. So when we try to do a `gluster volume sync node2 all` we get a `volume sync node2 all : FAILED : Staging failed on node2. Ple...
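A staging failure here is usually because the rebuilt node1 came up with a fresh UUID, so the pool treats it as a stranger. A hedged sketch of the recovery the linked guide describes; the placeholder UUID is whatever node2 still has on file for the old node1:

    # On node2: note the old node1 UUID
    gluster peer status
    # On the rebuilt node1: adopt that UUID, then restart and sync
    sed -i 's/^UUID=.*/UUID=<old-node1-uuid>/' /var/lib/glusterd/glusterd.info
    systemctl restart glusterd      # or: service glusterd restart
    gluster volume sync node2 all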