similar to: Teaming vs Bond?

Displaying 18 results from an estimated 1000 matches similar to: "Teaming vs Bond?"

2017 Jun 19
0
Teaming vs Bond?
I haven't done any testing of performance differences, but on my oVirt/RHEV I use standard bonding as that's what it supports. On the stand-alone Gluster nodes I use teaming for bonding. Teaming may be slightly easier to manage, but not by much if you are already used to bond setups. I haven't noticed any bugs or issues using teaming. *David Gossage* *Carousel Checks Inc. | System
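For reference, a minimal sketch of both approaches with nmcli (connection and interface names such as team0/bond0/eth1/eth2 are placeholders, and the lacp runner is chosen here as the rough equivalent of 802.3ad):

    # teaming
    nmcli con add type team con-name team0 ifname team0 config '{"runner": {"name": "lacp"}}'
    nmcli con add type team-slave ifname eth1 master team0
    nmcli con add type team-slave ifname eth2 master team0

    # bonding, same topology
    nmcli con add type bond con-name bond0 ifname bond0 mode 802.3ad
    nmcli con add type bond-slave ifname eth1 master bond0
    nmcli con add type bond-slave ifname eth2 master bond0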
2017 Jun 19
1
Teaming vs Bond?
OK, at least it's not an *issue* with Gluster. I didn't expect any, but you never know. I have been amused at the 'lack' of discussion on Teaming performance found on Google searches. There are lots of 'here it is and here is how to set it up' articles/posts, but no 'ooh-wee-wow it is awesome' comments. It seems that for most people Bonding has worked its kinks out
2017 Aug 30
1
Reboot node steps
I've had no issues with mass power outages; nodes just came up and were healed by the time I could start VMs. I've done rolling updates before as well. However I must have forgotten a step, as yesterday I shut down 1 server in my 3-node replicate setup and I have tons of files/shards not healing yet. I've seen it happen in the past and I likely just need a relatively short downtime this
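Pending heals can be watched while waiting for self-heal to catch up; a sketch, with VOLNAME as a placeholder:

    gluster volume heal VOLNAME info                    # list files/shards awaiting heal
    gluster volume heal VOLNAME statistics heal-count   # per-brick pending counts only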
2017 Aug 07
2
Volume hacked
Interesting problem... Did you consider an insider job? (verelox.com's recent troubles come to mind: http://verelox.com <https://t.co/dt1c78VRxA>) On Mon, Aug 7, 2017 at 3:30 AM, W Kern <wkmail at bneit.com> wrote: > > > On 8/6/2017 4:57 PM, lemonnierk at ulrar.net wrote: > > > Gluster already uses a vlan, the problem is that there is no easy way > that I know of to tell
2017 Aug 24
1
GlusterFS as virtual machine storage
On 8/23/2017 10:44 PM, Pavel Szalbot wrote: > Hi, > > On Thu, Aug 24, 2017 at 2:13 AM, WK <wkmail at bneit.com> wrote: >> The default timeout for most OS versions is 30 seconds and the Gluster >> timeout is 42, so yes you can trigger an RO event. > I get read-only mount within approximately 2 seconds after failed IO. Hmm, we don't see that, even on busy VMs. We
2017 Aug 24
2
GlusterFS as virtual machine storage
That really isn't an arbiter issue, or for that matter a Gluster issue. We have seen that with vanilla NAS servers that had some issue or another. Arbiter simply makes it less likely to be an issue than replica 2, but in turn arbiter is less 'safe' than replica 3. However, in regards to Gluster and RO behaviour: the default timeout for most OS versions is 30 seconds and the Gluster
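The 42-second figure is Gluster's network.ping-timeout, which can be inspected or changed per volume (VOLNAME is a placeholder; lowering it aggressively is generally discouraged):

    gluster volume get VOLNAME network.ping-timeout
    gluster volume set VOLNAME network.ping-timeout 42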
2017 Oct 05
2
data corruption - any update?
On 4 October 2017 at 23:34, WK <wkmail at bneit.com> wrote: > Just so I know. > > Is it correct to assume that this corruption issue is ONLY involved if you > are doing rebalancing with sharding enabled. > > So if I am not doing rebalancing I should be fine? > That is correct. > -bill > > > > On 10/3/2017 10:30 PM, Krutika Dhananjay wrote: > >
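Whether a rebalance has run, or is still running, can be checked per volume (VOLNAME is a placeholder):

    gluster volume rebalance VOLNAME status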
2017 Aug 24
0
GlusterFS as virtual machine storage
Hi, On Thu, Aug 24, 2017 at 2:13 AM, WK <wkmail at bneit.com> wrote: > The default timeout for most OS versions is 30 seconds and the Gluster > timeout is 42, so yes you can trigger an RO event. I get read-only mount within approximately 2 seconds after failed IO. > Though it is easy enough to raise as Pavel mentioned > > # echo 90 > /sys/block/sda/device/timeout AFAIK
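The echo only lasts until reboot; one way to make the longer timeout persistent is a udev rule (filename and value are examples; ATTR{type}=="0" matches SCSI disks):

    # /etc/udev/rules.d/99-scsi-timeout.rules
    ACTION=="add", SUBSYSTEM=="scsi", ATTR{type}=="0", ATTR{timeout}="90"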
2017 Aug 07
0
Volume hacked
On Mon, Aug 07, 2017 at 10:40:08AM +0200, Arman Khalatyan wrote: > Interesting problem... > Did you consider an insider job? (verelox.com's recent troubles come to mind: http://verelox.com > <https://t.co/dt1c78VRxA>) I would be really, really surprised; we are only 5 / 6 with access and as far as I know no one has a problem with the company. The last person to leave did so last year, and we
2018 Feb 20
2
[RFC PATCH v3 0/3] Enable virtio_net to act as a backup for a passthru device
On Tue, 20 Feb 2018 21:14:10 +0100, Jiri Pirko wrote: > Yeah, I can see it now :( I guess that the ship has sailed and we are > stuck with this ugly thing forever... > > Could you at least make some common code that is shared between > netvsc and virtio_net so this is handled in exactly the same way in both? IMHO netvsc is a vendor-specific driver which made a mistake on what
2011 Jan 11
1
Bonding performance question
I have a Dell server with four bonded gigabit interfaces. Bonding mode is 802.3ad, xmit_hash_policy=layer3+4. When testing this setup with iperf, I never get more than a total of about 3Gbps throughput. Is there anything to tweak to get better throughput? Or am I running into other limits (e.g. I was reading about TCP retransmit limits for mode 0)? The iperf test was run with iperf -s on the
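A likely explanation: with 802.3ad each flow hashes to a single slave, so one TCP stream tops out at 1Gbps, and layer3+4 hashing only spreads distinct flows. Measuring aggregate throughput therefore needs parallel streams, e.g. (server address is a placeholder):

    iperf -s                    # on the server
    iperf -c 192.0.2.10 -P 4    # on the client: 4 parallel streams

Even then, streams can hash onto the same link, so roughly 3Gbps across 4 links is plausible.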
2007 May 31
2
4.5 ALB Bonding Hang on Shutdown
Since I upgraded from 4.4 to 4.5, my system, which has 2 sets of ALB-bonded interfaces, hangs on shutdown while doing an ifdown on these interfaces. Has anyone else seen this issue with 4.5 and bonding? Ross S. W. Walker Information Systems Manager Medallion Financial, Corp. 437 Madison Avenue 38th Floor New York, NY 10022 Tel: (212) 328-2165 Fax: (212) 328-2125 WWW: http://www.medallion.com
2018 Feb 21
2
[RFC PATCH v3 0/3] Enable virtio_net to act as a backup for a passthru device
On Wed, Feb 21, 2018 at 1:51 AM, Jiri Pirko <jiri at resnulli.us> wrote: > Tue, Feb 20, 2018 at 11:33:56PM CET, kubakici at wp.pl wrote: >>On Tue, 20 Feb 2018 21:14:10 +0100, Jiri Pirko wrote: >>> Yeah, I can see it now :( I guess that the ship has sailed and we are >>> stuck with this ugly thing forever... >>> >>> Could you at least make some
2017 Aug 07
0
Volume hacked
On 8/6/2017 4:57 PM, lemonnierk at ulrar.net wrote: > > Gluster already uses a VLAN, the problem is that there is no easy way > that I know of to tell gluster not to listen on an interface, and I > can't not have a public IP on the server. I really wish there was a > simple "listen only on this IP/interface" option for this What about this?
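One partial answer, assuming your glusterd release supports the transport.socket.bind-address option (worth verifying against your version), is to bind the management daemon to the storage-VLAN address in /etc/glusterfs/glusterd.vol; 10.0.0.10 is a placeholder:

    volume management
        type mgmt/glusterd
        option working-directory /var/lib/glusterd
        option transport.socket.bind-address 10.0.0.10
    end-volume

Brick processes may still bind more widely, so firewalling ports 24007-24008 and the brick port range to the storage interface remains the safer complement.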
2017 Sep 09
3
GlusterFS as virtual machine storage
Pavel, is there a difference between native client (fuse) and libgfapi in regards to the crashing/read-only behaviour? We use Rep2 + Arb and can shut down a node cleanly, without issue, on our VMs. We do it all the time for upgrades and maintenance. However we are still on native client, as we haven't had time to work on libgfapi yet. Maybe that is more tolerant. We have Linux VMs mostly
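For context, the two access paths differ only in how qemu reaches the image; a sketch of the -drive lines (host, volume, and image names are placeholders, and the gluster:// form requires qemu built with glusterfs support):

    # fuse: qemu goes through the mounted filesystem
    -drive file=/mnt/gv0/vm1.qcow2,format=qcow2,cache=none
    # libgfapi: qemu talks to gluster directly
    -drive file=gluster://gluster1.example.com/gv0/vm1.qcow2,format=qcow2,cache=none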
2017 Oct 04
0
data corruption - any update?
Just so I know. Is it correct to assume that this corruption issue is ONLY involved if you are doing rebalancing with sharding enabled. So if I am not doing rebalancing I should be fine? -bill On 10/3/2017 10:30 PM, Krutika Dhananjay wrote: > > > On Wed, Oct 4, 2017 at 10:51 AM, Nithya Balachandran > <nbalacha at redhat.com <mailto:nbalacha at redhat.com>> wrote:
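Whether sharding is enabled at all is quick to confirm (VOLNAME is a placeholder):

    gluster volume get VOLNAME features.shard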
2017 Apr 18
2
anaconda/kickstart: bonding device not created as expected
Hi, I am currently struggling with the right way to configure a bonding device via kickstart (via PXE). I am installing servers which have "eno" network interfaces. Instead of the expected bonding device with two active slaves (bonding mode is balance-alb), I get a bonding device with only one active slave and an independent, non-bonded network device. Also the bonding device
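For comparison, a kickstart network line that should produce this layout on RHEL/CentOS 7 anaconda (interface names and addresses are placeholders):

    network --device=bond0 --bondslaves=eno1,eno2 --bondopts=mode=balance-alb,miimon=100 --bootproto=static --ip=192.0.2.10 --netmask=255.255.255.0 --gateway=192.0.2.1 --activate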
2017 Oct 04
2
data corruption - any update?
On Wed, Oct 4, 2017 at 10:51 AM, Nithya Balachandran <nbalacha at redhat.com> wrote: > > > On 3 October 2017 at 13:27, Gandalf Corvotempesta < > gandalf.corvotempesta at gmail.com> wrote: > >> Any update about multiple bugs regarding data corruptions with >> sharding enabled ? >> >> Is 3.12.1 ready to be used in production? >> > >