
Displaying 20 results from an estimated 4000 matches similar to: "Default router on different subnet - Is it possible?"

2005 Jan 30
5
simple questions about imq
Hi! I have read all the information I could find, but some things are still not clear. My setup is: ---INTERNET1(eth0)-\ /- Local net1 (eth2) GW ---INTERNET2(eth1)-/ \- Local net2 (eth3) I have NAT and a working setup using HTB and SFQ, classifying with iptables -j CLASSIFY. I shape only the traffic coming from the internet heading to the intranet. I would like
2010 Aug 16
1
WG: HOWTO samba4 centos5.5 named dnsupdate drbd simple failover
CentOS 5.5/samba4/named: here is a short guide to setting it up to work. First of all, do not install the bind package that comes with CentOS 5.5!! Install what Samba needs: yum install libacl* gnutls* readline* python* gdb* autoconf* Named installation: here is a description of what to do: http://jason.roysdon.net/2009/10/16/building-bind-9-6-on-rhel5-centos5-for-dnssec-nsec3-support/ The steps, yum
2010 Aug 09
2
HOWTO samba4 centos5.5 named dnsupdate drbd simple failover
CentOS 5.5/samba4/named: here is a short guide to setting it up to work. First of all, do not install the bind package that comes with CentOS 5.5!! Install what Samba needs: yum install libacl* gnutls* readline* python* gdb* autoconf* Named installation: here is a description of what to do: http://jason.roysdon.net/2009/10/16/building-bind-9-6-on-rhel5-centos5-for-dnssec-nsec3-support/ The steps, yum
2011 Mar 03
1
OCFS2 1.4 + DRBD + iSCSI problem with DLM
2016 Oct 12
2
Selection DAG adding node question
I am having trouble adding a node to the selection DAG (e.g. during combine). E.g. node1 -> use1, use2. Now if you add a node2, with node1 -> node2, where node2's number of output values equals node1's number of output values, then combine as well as e.g. the promotion pass will replace all of node1's uses with node2, leaving node1 dead. While this is kind of expected, does this mean a) always
2003 Oct 09
4
howto enable ssh on connect to rsync daemon
Hi, I'm trying to set up automatic sync of files over SSH from node2 to node1. node2 is the "server" and node1 is the "client". I have set up rsync like this on the nodes, which run AIX (4.3.3 and 5.1): 1. Installed rsync from the Linux Toolbox (rsync-2.5.4-1.aix4.3.ppc.rpm) on both nodes. 2. Added the following to both nodes' /etc/services file: rsync 873/tcp 3. Added
2010 Oct 23
1
Reg: ocfs2 two node cluster crashed, node2 crashed, when I rebooted node1 for maintenance.
Hi All, We have an ocfs2 two-node cluster running Oracle 11g RAC. node2 crashed automatically when I rebooted node1 for maintenance. Please check the log from node2 from just before it crashed: Oct 23 15:42:25 node2 kernel: ocfs2_dlm: Nodes in domain ("029C02C993E44E90879922E268FB161A"): 2 Oct 23 15:42:29 node2 kernel: ocfs2_dlm: Node 1 leaves domain
2012 Nov 14
2
How to filter xml value in R?
Hi, I have one XML file: <Class> <Node1 code ="1"> First node </Node1> <Node2 code ="1"> Second node </Node2> <Node3 code ="1"> Third node </Node3> <Node1 code ="2"> Fourth node </Node1> </Class> for (i in 1:xmlSize()) { print(Class[i]) # how can I filter Node1? } by
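A minimal sketch of one way to do the filtering with the XML package, assuming the document above is saved as class.xml (the file name is made up for illustration; getNodeSet, xmlValue and xmlGetAttr are the standard XML-package accessors):

    library(XML)

    doc <- xmlParse("class.xml")
    # XPath selects every <Node1> element, wherever it sits under <Class>
    node1s <- getNodeSet(doc, "//Node1")
    for (n in node1s) {
        print(xmlValue(n))            # element text, e.g. "First node"
        print(xmlGetAttr(n, "code"))  # attribute value, e.g. "1"
    }

The XPath route avoids indexing the children by position entirely, so it keeps working no matter where the Node1 elements appear in the document.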
2010 Mar 24
3
mounting gfs partition hangs
Hi, I have configured two machines for testing GFS filesystems. They are attached to an iSCSI device, and the CentOS version is: CentOS release 5.4 (Final), Linux node1.fib.upc.es 2.6.18-164.el5 #1 SMP Thu Sep 3 03:33:56 EDT 2009 i686 i686 i386 GNU/Linux. The problem is that if I try to mount a GFS partition, it hangs. [root@node2 ~]# cman_tool status Version: 6.2.0 Config Version: 29 Cluster Name:
2005 Feb 02
1
Migration Errors
Hi, I am trying to migrate a domain (testDomain) from node1 to node2. Here is some information about the problem: node1# xm migrate testDomain node2 After the command is executed, there is some communication between the node1 and node2 Xen hosts. The xfrd daemon (which is forked by xend) is primarily responsible for VM migration, and there is an error in its log (/var/log/xfrd.log). The exact
2008 Feb 19
1
DLMFS on OracleVM 2.1 (OEL5.0 based)
Hi List, I want to use DLMFS from OCFS2 to avoid multiple starts of virtual machines on OracleVM. I want to use a wrapper around xm that spawns a daemon that keeps a file open in /dlm/DOMAIN. Now I played around a bit and followed the procedure in the document http://oss.oracle.com/projects/ocfs2/src/branches/ocfs2-1.2/dlmfs.txt for DLMFS. There's one problem: the "O_NONBLOCK" option is
2017 Aug 06
1
State: Peer Rejected (Connected)
Hi Ji-Hyeon, Thanks to your help I could find the problematic file. It is the quota file of my volume: it has a different checksum on node1, whereas node2 and the arbiter node have the same checksum. This is expected, as I had issues with my quota file and had to fix it manually with a script (more details on this mailing list in a previous post), and I only did that on node1. So what I now
2024 Jan 01
1
Replacing Failed Server Failing
Hi All (and Happy New Year), We had to replace one of our Gluster servers in our trusted pool this week (node1). The new server is now built, with empty folders for the bricks, and peered to the old nodes (node2 & node3). We basically followed this guide: https://docs.rackspace.com/docs/recover-from-a-failed-server-in-a-glusterfs-array We are using the same/old IP address. So when we try
2013 Dec 17
1
Speed issue in only one direction
Hi all, I'm back again with my speed issues. The past issues were dependent on the network I used. Now I run my tests in a lab, with 2 configurations linked by a Gigabit switch: node1: Intel Core i5-2400 with Debian 7.2; node2: Intel Core i5-3570 with Debian 7.2. Both have AES and PCLMULQDQ announced in /proc/cpuinfo. I use tinc 1.1 from Git. When I run an iperf test from node2 (client) to
2017 Jul 31
2
Possible stale .glusterfs/indices/xattrop file?
To quickly summarize my current situation: on node2 I have found the following xattrop/indices file, which matches the GFID from the "heal info" command (below is the output of "ls -lai"): 2798404 ---------- 2 root root 0 Apr 28 22:51 /data/myvolume/brick/.glusterfs/indices/xattrop/29e0d13e-1217-41cc-9bda-1fbbf781c397 As you can see this file has inode number 2798404, so I ran
2018 Mar 21
2
how to add a child to a child in XML
I am trying to add a child to a child using the XML package in R. The following fails: library(XML) node1 <- c("val1","val2","val3") names(node1) <- c("att1","att2","att3") root <- xmlNode("root", attrs=node1) node2 <- LETTERS[1:3] names(node2) <- paste("name",1:3,sep="") root <-
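One way that sidesteps the failing assignment is to build each grandchild first and hand the finished children to xmlNode(), so the nesting is constructed bottom-up. A hedged sketch reusing the variables from the snippet above (the "entry" wrapper tag is invented for illustration):

    library(XML)

    node1 <- c(att1 = "val1", att2 = "val2", att3 = "val3")
    node2 <- LETTERS[1:3]
    names(node2) <- paste("name", 1:3, sep = "")

    # Build each child with its own grandchild already nested inside it
    kids <- lapply(seq_along(node2), function(i)
        xmlNode("entry", xmlNode(names(node2)[i], node2[[i]])))

    # Attach the finished children while creating the root
    root <- xmlNode("root", attrs = node1, .children = kids)
    print(root)

Alternatively, addChildren(root, newkid) should add to an existing R-level node; since these nodes are copied on modification, remember to assign its return value back to root.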
2017 Jul 31
0
Possible stale .glusterfs/indices/xattrop file?
On 07/31/2017 02:00 PM, mabi wrote: > To quickly summarize my current situation: > > on node2 I have found the following xattrop/indices file, which > matches the GFID from the "heal info" command (below is the output of > "ls -lai"): > > 2798404 ---------- 2 root root 0 Apr 28 22:51 >
2010 May 06
10
No connection between nodes on same LAN
Hi all, I am currently deploying tinc as an alternative to OpenVPN. My setup includes a lot of nodes, and some of them sit together behind the same router on the same network segment (e.g. connected to the same switch). I noticed that those nodes never talk directly to each other via their private IP addresses, but instead use the NATed address they got from the router.
2017 Aug 06
2
State: Peer Rejected (Connected)
Hi, I have a 3-node replica volume (including an arbiter) with GlusterFS 3.8.11, and last night one of my nodes (node1) hit an out-of-memory condition for some unknown reason, and as such the Linux OOM killer killed the glusterd and glusterfs processes. I restarted the glusterd process, but now that node is in "Peer Rejected" state from the other nodes, and from itself it rejects the two other nodes
2017 Jul 31
2
Possible stale .glusterfs/indices/xattrop file?
Now I understand what you mean by the "-samefile" parameter of "find". As requested, I have now run the following command on all 3 nodes, with the output of all 3 nodes below: sudo find /data/myvolume/brick -samefile /data/myvolume/brick/.glusterfs/29/e0/29e0d13e-1217-41cc-9bda-1fbbf781c397 -ls node1: 8404683 0 lrwxrwxrwx 1 root root 66 Jul 27 15:43