search for: ofcause

Displaying 8 results from an estimated 8 matches for "ofcause".

2008 Aug 04
1
2.6.18 removed from Lenny
Hello, I saw that the Linux 2.6.18 kernel was removed from Lenny, so there is no dom0 in Lenny anymore. Maybe it would be an idea to keep the 2.6.18 kernel in Lenny for use as a dom0, only for i386 and amd64 of course. It would then also be possible to add the latest code from xen.org to it. I saw Linux 2.6.27 already has a 64-bit Xen dom0, so the code is ready... maybe not well enough tested. I also saw that Lenny will perhaps ship a 2.6.26 kernel. Kind regards, Paul van der Vlis. -- http://www...
2018 Feb 07
1
geo-replication
...on node 1 at all? I did a small test on my testing machines: I turned one of the geo machines off and created 10000 files, each containing one short string, on the master nodes. Nothing was synced to the geo slaves. When I turned the geo machine on again, all 10000 files were synced to the geo slaves, divided between the two machines of course. Is this the right/expected behavior of geo-replication with a distributed cluster? Many thanks in advance! Regards Marcus On Wed, Feb 07, 2018 at 06:39:20PM +0530, Kotresh Hiremath Ravishankar wrote: > We are happy to help you out. Please find the answers in...
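A rough reproduction of the test described above, sketched under assumed names (the master volume "mastervol" mounted at /mnt/mastervol on a client, slave volume "geovol" on host geo1; none of these names appear in the original thread):

    # Create 10000 small files on a client mount of the master volume
    for i in $(seq 1 10000); do echo "short string" > /mnt/mastervol/file$i; done

    # Check how far the session has caught up once the stopped geo machine is back
    gluster volume geo-replication mastervol geo1::geovol status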
2018 Mar 02
1
geo-replication
...s containing > > one > > > > short string in the master nodes. > > > > Nothing was synced to the geo slaves. > > > > When I turned on the geo machine again all 10000 files were synced to > > the > > > > geo slaves. > > > > Of course, divided between the two machines. > > > > Is this the right/expected behavior of geo-replication with a > > distributed > > > > cluster? > > > > > > > > > > Yes, it's correct. As I said earlier, CREATE itself would have failed > >...
2018 Feb 07
0
geo-replication
We are happy to help you out. Please find the answers inline. On Tue, Feb 6, 2018 at 4:39 PM, Marcus Pedersén <marcus.pedersen at slu.se> wrote: > Hi all, > > I am planning my new gluster system and tested things out in > a bunch of virtual machines. > I need a bit of help to understand how geo-replication behaves. > > I have a master gluster cluster replica 2 > (in
2018 Mar 02
0
geo-replication
...in the master nodes. > > > > > > Nothing was synced to the geo slaves. > > > > > > When I turned on the geo machine again all 10000 files were synced > > to > > > > the > > > > > > geo slaves. > > > > > > Of course, divided between the two machines. > > > > > > Is this the right/expected behavior of geo-replication with a > > > > distributed > > > > > > cluster? > > > > > > > > > > > > > > > > Yes, it's correct....
2005 Jul 02
6
Loadbalancing how to ? ? ? ?
...eth0 of PC ) In /etc/network/options I enable forwarding (forwardable = 1). I use iptables to NAT the outgoing traffic of eth1 and eth2: iptables -t nat -A POSTROUTING -s 192.168.60.0/24 -o eth1 -j SNAT --to-source 10.0.1.2 and iptables -t nat -A POSTROUTING -s 192.168.60.0/24 -o eth2 -j SNAT --to-source 10.0.2.2. Of course, by default traffic from the LAN is always forwarded via eth1 and never via the second ADSL line. My idea is to write a load-balancing program for n ADSL lines connected to one PC acting as gateway, but when a packet reaches eth0, how to control whether it is forwarded to eth1 or eth2 is my problem. If I cou...
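One classic answer, sketched here as an assumption rather than from this thread (the gateway addresses 10.0.1.1 and 10.0.2.1 are invented): let the kernel balance new connections over both uplinks with a multipath default route, keeping the SNAT rules above so replies leave each link with the right source address:

    # Spread outgoing connections across both ADSL lines
    ip route add default scope global \
        nexthop via 10.0.1.1 dev eth1 weight 1 \
        nexthop via 10.0.2.1 dev eth2 weight 1

The kernel picks one nexthop per route-cache entry, so a single flow stays on one line while the overall load is shared between the two.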
2018 Feb 06
4
geo-replication
Hi all, I am planning my new gluster system and have tested things out in a bunch of virtual machines. I need a bit of help to understand how geo-replication behaves. I have a master gluster cluster with replica 2 (in production I will use an arbiter and replicated/distributed) and the geo cluster is distributed over 2 machines (in production the geo cluster will also be distributed). Everything is up
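For reference, a minimal sketch of the layout described above (host and volume names such as master1, geo1, mastervol and geovol are invented for illustration):

    # Master volume: replica 2 across two nodes
    gluster volume create mastervol replica 2 master1:/bricks/b1 master2:/bricks/b1
    gluster volume start mastervol

    # Slave volume: distributed over the two geo machines
    gluster volume create geovol geo1:/bricks/b1 geo2:/bricks/b1
    gluster volume start geovol

    # Geo-replication session from master to slave
    gluster volume geo-replication mastervol geo1::geovol create push-pem
    gluster volume geo-replication mastervol geo1::geovol start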
2007 Feb 15
4
I keep getting password mismatches
Hey, This is the debug information:

auth(default): client in: AUTH 1 PLAIN service=IMAP secured lip=127.0.0.1 rip=127.0.0.1 resp=AG1hcmsAbWFyaw==
auth(default): passwd(mark,127.0.0.1): password mismatch
auth(default): client out: FAIL 1 user=mark
imap-login: Disconnected: user=<mark>, method=PLAIN, rip=127.0.0.1, lip=127.0.0.1, secured

The strange thing is that i
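The resp= value is the base64-encoded SASL PLAIN message (authzid, username and password separated by NUL bytes), so it can be decoded to see exactly what the client sent. A quick check, assuming a shell with base64 and od available:

    # Decode the SASL PLAIN response from the debug line above
    echo 'AG1hcmsAbWFyaw==' | base64 -d | od -c
    # prints:  \0   m   a   r   k  \0   m   a   r   k
    # i.e. empty authzid, user "mark", password "mark"

If the decoded password is what you expect, the mismatch points at the passdb side rather than at what the client is sending.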