Displaying 20 results from an estimated 10000 matches similar to: "bonding mode"
2018 May 09
2
Some more questions
Ok, some more questions as I'm still planning our SDS (but I'm inclined to use
LizardFS; gluster is too inflexible)
Let's assume a replica 3:
1) Currently, it is not possible to add a single server and rebalance like any
other SDS (Ceph, Lizard, Moose, DRBD, ....), right? In replica 3, I have
to add 3 new servers.
2) The same should apply when adding disks to spare slots on existing servers.
2018 May 09
0
Some more questions
On Wed, 2018-05-09 at 18:26 +0000, Gandalf Corvotempesta wrote:
> Ok, some more questions as I'm still planning our SDS (but I'm inclined
> to use
> LizardFS; gluster is too inflexible)
>
> Let's assume a replica 3:
>
> 1) Currently, it is not possible to add a single server and rebalance
> like any
> other SDS (Ceph, Lizard, Moose, DRBD, ....), right? In replica
2011 Jan 11
1
Bonding performance question
I have a Dell server with four bonded, gigabit interfaces. Bonding mode is
802.3ad, xmit_hash_policy=layer3+4. When testing this setup with iperf,
I never get more than a total of about 3Gbps throughput. Is there anything
to tweak to get better throughput? Or am I running into other limits (e.g. I
was reading about TCP retransmit limits for mode 0)?
The iperf test was run with iperf -s on the
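With 802.3ad and a layer3+4 hash, each flow is carried by a single slave, so one
TCP stream can never exceed one gigabit link, and getting only ~3Gbps out of
four links is often just uneven hashing across the slaves. A minimal test
sketch; the server address and stream count below are illustrative assumptions,
not taken from the post:

# server side
iperf -s

# client side: run several parallel streams so the layer3+4 hash can
# spread flows across different slaves; a single stream stays pinned
# to one gigabit link regardless of the hash policy
iperf -c 192.0.2.10 -P 8 -t 60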
2010 Jan 21
1
KVM virtio bonding bandwidth problem
(first post)
Dear all,
I have been wrestling with this issue for the past few days; googling
around doesn't seem to yield anything useful, hence this cry for help.
Setup:
- I am running several RHEL5.4 KVM virtio guest instances on a Dell PE
R805 RHEL5.4 host. Host and guests are fully updated; I am using iperf
to test available bandwidth from 3 different locations (clients) in the
2017 Oct 10
4
ZFS with SSD ZIL vs XFS
Has anyone made a performance comparison between XFS and ZFS with ZIL
on SSD, in a gluster environment?
I've tried to compare both on another SDS (LizardFS) and I haven't
seen any tangible performance improvement.
Is gluster different ?
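For context, attaching an SSD as a separate intent-log (SLOG) device is a
one-line operation, and the ZIL only accelerates synchronous writes, which is
one reason a comparison can show little difference on a mostly asynchronous
workload. A sketch with made-up pool and device names:

# add a single SSD as a log device (pool and device names are placeholders)
zpool add tank log /dev/disk/by-id/nvme-example-ssd

# or a mirrored log, if two SSDs are available
zpool add tank log mirror /dev/sdx /dev/sdy

# watch per-vdev activity to see whether the log device is actually used
zpool iostat -v tank 5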
2017 Oct 10
0
ZFS with SSD ZIL vs XFS
On Tue, Oct 10, 2017, at 11:19 AM, Gandalf Corvotempesta wrote:
> Has anyone made a performance comparison between XFS and ZFS with ZIL
> on SSD, in a gluster environment?
>
> I've tried to compare both on another SDS (LizardFS) and I haven't
> seen any tangible performance improvement.
>
> Is gluster different ?
Probably not. If there is, it would probably favor
2017 Oct 10
1
ZFS with SSD ZIL vs XFS
I've had good results using SSD as LVM cache for gluster bricks (
http://man7.org/linux/man-pages/man7/lvmcache.7.html). I still use XFS on
bricks.
On Tue, Oct 10, 2017 at 12:27 PM, Jeff Darcy <jeff at pl.atyp.us> wrote:
> On Tue, Oct 10, 2017, at 11:19 AM, Gandalf Corvotempesta wrote:
> > Anyone made some performance comparison between XFS and ZFS with ZIL
> > on
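For reference, the lvmcache setup mentioned above boils down to creating a
cache pool on the SSD and attaching it to the brick LV; the volume group, LV
and device names below are placeholders:

# carve a cache pool out of the SSD
lvcreate --type cache-pool -L 100G -n brick1_cache vg_bricks /dev/nvme0n1

# attach it to the brick logical volume (which keeps its XFS filesystem)
lvconvert --type cache --cachepool vg_bricks/brick1_cache vg_bricks/brick1

# to detach later, flushing dirty blocks back to the slow device:
# lvconvert --uncache vg_bricks/brick1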
2018 May 01
0
Finding performance bottlenecks
Hi,
So is it KVM or VMware as the host(s)? I basically have the same setup, ie
3 x 1TB "raid1" nodes and VMs, but 1Gb networking. I do notice that with vmware
using NFS, disk was pretty slow (40% of a single disk), but this was over 1Gb
networking, which was clearly saturating. Hence I am moving to KVM to use
glusterfs, hoping for better performance and bonding; it will be interesting
to see
2007 Aug 29
0
poor performance with bonding in round-robin mode (only samba affected)
Hi,
samba 3.0.24, debian etch
I'm seeing a strange effect with samba and traffic over a bond0
interface in round robin mode.
2 servers, each with 2 GbE interfaces as a bond0 device in rr mode.
netio benchmark:
NETIO - Network Throughput Benchmark, Version 1.26
(C) 1997-2005 Kai Uwe Rommel
TCP connection established.
Packet size 1k bytes: 182840 KByte/s Tx, 197599 KByte/s Rx.
Packet size
2017 May 24
0
local ephemeral ports usage and distribution / inet_csk_get_port()
Hello
I'm using CentOS Linux release 7.3.1611 (Core) with
kernel 3.10.0-514.16.1.el7.x86_64
Using iperf for bond benchmarking and opening several sockets, I noticed
some strange behavior.
My CentOS box uses iperf as a client to connect to an iperf server (running
either CentOS or Debian), requesting N parallel TCP connections.
I notice that the local ephemeral ports used are not consecutive and
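The ports the kernel hands out can be inspected directly; a quick sketch to
reproduce and look at the distribution (the server address is an example, not
from the post):

# range the kernel draws ephemeral ports from
sysctl net.ipv4.ip_local_port_range

# open N parallel TCP connections
iperf -c 192.0.2.20 -P 16 -t 30

# list the local ports actually chosen
ss -tn dst 192.0.2.20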
2018 May 03
0
Finding performance bottlenecks
It worries me how many threads talk about low performance. I'm about to
build out a replica 3 setup and run oVirt with a bunch of Windows VMs.
Are the issues Tony is experiencing "normal" for Gluster? Does anyone here
have a system with Windows VMs and get good performance?
*Vincent Royer*
*778-825-1057*
<http://www.epicenergy.ca/>
*SUSTAINABLE MOBILE ENERGY SOLUTIONS*
2008 Sep 06
1
bonding theory question
Hello All,
I am currently using bonding with 2 NICs (using mode 0). It's been
working well, but I am trying to understand how it works (I am a total
newbie).
mode=0 (balance-rr)
Round-robin policy: Transmit packets in sequential order from the
first available slave through the last. This mode provides load
balancing and fault tolerance.
So I have 2 NICs (1 NIC attached to switch A, 2nd NIC
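For reference, a minimal balance-rr setup on a RHEL/CentOS-style box looks
roughly like the sketch below; the file location varies by release, and
miimon=100 is just the usual link-monitoring interval, not something from the
post:

# /etc/modprobe.conf (older releases) or /etc/modprobe.d/bonding.conf
alias bond0 bonding
options bond0 mode=0 miimon=100

# balance-rr transmits successive packets on alternating slaves, so packets
# of a single TCP flow can arrive out of order at the far end; the switch
# side usually needs the ports grouped (static EtherChannel or similar) for
# inbound traffic to be balanced as well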
2017 Sep 25
0
Shift the normal curve to the top or near to the top of the histogram
Hi Abou,
Try this:
library(plotrix)
curve(rescale(dnorm(x, mean = mean(Lizard.tail.lengths),
                    sd = sd(Lizard.tail.lengths)), c(0, 6)),
      add = TRUE, col = 2, lwd = 2)
Jim
On Mon, Sep 25, 2017 at 9:35 AM, AbouEl-Makarim Aboueissa
<abouelmakarim1962 at gmail.com> wrote:
> Dear All:
>
> One more thing.
>
> I want to add the normal curve to the histogram. Is there a way to stretch
> the
2018 May 01
3
Finding performance bottlenecks
On 01/05/2018 02:27, Thing wrote:
> Hi,
>
> So is it KVM or VMware as the host(s)? I basically have the same setup
> ie 3 x 1TB "raid1" nodes and VMs, but 1Gb networking. I do notice with
> vmware using NFS, disk was pretty slow (40% of a single disk) but this
> was over 1Gb networking, which was clearly saturating. Hence I am moving
> to KVM to use glusterfs
2007 Apr 18
0
[Bridge] Bridging and bonding
Hi,
I'm trying to set up a bridge with a bonded device (2 links,
balance-rr). The problem is that after attaching the bonded device to
the bridge, the network throughput drops from 110MB/s to 100KB/s. This
seems to be due to the MAC address of internal devices of the bridge
being seen on the external ports, where the bonded device exists (see
also the URLs below).
An arp packet from some
2006 Jun 22
0
HP DL360, tg3 driver, bonding and link flapping
Hi *,
I'm running into a problem configuring bonding on an HP DL 360 G4p,
running 4.3 + tg3 driver version 3.43f. I'm connecting eth0 and eth1
to a Cisco 2948 (CatOS 8.1(3)) and receiving flapping notices. The
ethernet address is that of the primary interface. I have tried
several different modes, including balance-rr (0), active-backup (1),
and balance-alb (6). All have the
2018 Jul 02
0
dhcpd.conf for balance-rr bonding
Hi,
I am trying to configure Linux bonding using the balance-rr configuration. I
could not find a way to configure dhcpd.conf for this purpose, and failed
miserably when I tried the following (dhcpd just failed):
host servername {
    hardware ethernet [mac address interface #1];
    hardware ethernet [mac address interface #2];
    fixed-address 10.0.10.6;
}
In my understanding balance-rr would be
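A host declaration carries a single hardware address as far as dhcpd is
concerned, which matches the failure above. Two common workarounds, sketched
with placeholder MACs: declare one host entry per interface, both handing out
the same fixed address, or, since the bonding driver normally gives every slave
the bond's own MAC in balance-rr, key a single entry on that one address.

host servername-nic1 {
    hardware ethernet 00:11:22:33:44:55;
    fixed-address 10.0.10.6;
}
host servername-nic2 {
    hardware ethernet 00:11:22:33:44:56;
    fixed-address 10.0.10.6;
}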
2017 Sep 24
3
Shift the normal curve to the top or near to the top of the histogram
Dear All:
One more thing.
I want to add the normal curve to the histogram. Is there a way to stretch
the peak of the curve to the top of the histogram, or at least near the
top of the histogram?
Please see the code below.
Lizard.tail.lengths <- c(6.2, 6.6, 7.1, 7.4, 7.6, 7.9, 8, 8.3, 8.4, 8.5,
8.6, 8.8, 8.8, 9.1, 9.2, 9.4, 9.4, 9.7, 9.9, 10.2, 10.4, 10.8, 11.3, 11.9)
x <- seq(5, 12, 0.001)
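One way to pull the curve up to the bars is to scale the normal density so its
peak matches the tallest histogram bin; a sketch using only the data above and
assuming the default hist() breaks:

h <- hist(Lizard.tail.lengths, col = "grey")
dens <- dnorm(x, mean = mean(Lizard.tail.lengths),
              sd = sd(Lizard.tail.lengths))
# stretch the density so its maximum touches the tallest bar
lines(x, dens * max(h$counts) / max(dens), col = 2, lwd = 2)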
2009 Sep 14
2
Opinions on bonding modes?
I am working on setting up an NFS server, which will mainly serve files
to web servers, and I want to setup two bonds. I have a question
regarding *which* bonding mode to use. None of the documentation I have
read suggests any mode is "better" than another, with the exception of
specific use cases (e.g. switch does not support 802.3ad, active-backup).
Since my switch *does* support
2011 Jul 10
2
bond0 performance issues in 5.6
Hi all,
I've got two gigabit ethernet interfaces bonded in CentOS 5.6. I've
set "miimode=1000" and I've tried "mode=" 0, 4 and 6. I've not been able
to get better than 112MB/sec, which is the same as the non-bonded
interfaces.
My config files are:
===
cat /etc/sysconfig/network-scripts/ifcfg-{eth1,eth2,bond0}
# SN1
HWADDR=00:30:48:fd:26:71
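For reference, a mode=4 (802.3ad) bond on CentOS 5 usually looks roughly like
the sketch below; the addresses are placeholders, not values from the listing.
Note that 112MB/sec is exactly one saturated gigabit link: with modes 4 and 6 a
single TCP stream is hashed to one slave, and mode 0 typically needs matching
channel configuration on the switch, so single-stream tests rarely show more
than one link's worth of throughput.

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
IPADDR=192.0.2.30
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=4 miimon=100 xmit_hash_policy=layer3+4"

# /etc/sysconfig/network-scripts/ifcfg-eth1 (and likewise eth2)
DEVICE=eth1
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none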