Displaying 20 results from an estimated 22 matches for "10ge".
2016 Feb 03
6
10GE performance issues
Hi,
we have network performance issues with Samba (Version 4.2.7-SerNet-RedHat-19.el6) on one of our servers. The maximum throughput from client to server is 110 MB/s (read/write) under Windows 7 x64 (single 10GE NIC). When using NFS on a different Linux workstation we're getting much higher rates, around 500-700 MB/s. I still can't find the problem.
Setup server:
CentOS 6.6
Samba 4.7.2
Mellanox 10GE NIC with LACP bonding
locally attached storage: XFS, 1.5 GB/s read/write
[global]
workgroup =...
2016 Feb 04
0
10GE performance issues
On 03/02/16 03:27 PM, Martin Markert wrote:
> Hi,
> we have network performance issues with Samba (Version 4.2.7-SerNet-RedHat-19.el6) on one of our servers. The maximum throughput from client to server is 110 MB/s (read/write) under Windows 7 x64 (single 10GE NIC). When using NFS on a different Linux workstation we're getting much higher rates, around 500-700 MB/s. I still can't find the problem.
>
> Setup server:
> CentOS 6.6
> Samba 4.7.2
> Mellanox 10GE NIC with LACP bonding
> locally attached storage: XFS, 1.5 GB/s read/write...
2016 Feb 04
0
10GE performance issues
...23:27 GMT+03:00 Martin Markert <martinmarkert at mac.com>:
> Hi,
> we have network performance issues with Samba (Version
> 4.2.7-SerNet-RedHat-19.el6) on one of our servers. The maximum throughput
> from client to server is 110 MB/s (read/write) under Windows 7 x64 (single
> 10GE NIC). When using NFS on a different Linux workstation we're getting
> much higher rates, around 500-700 MB/s. I still can't find the problem.
>
> Setup server:
> CentOS 6.6
> Samba 4.7.2
> Mellanox 10GE NIC with LACP bonding
> locally attached storage: XFS, 1.5 GB/s read/...
2016 Feb 04
2
10GE performance issues
On Thu, Feb 04, 2016 at 09:03:21AM +0300, Владимир Терентьев wrote:
> Hi. Try adding this to your config in the [global] section.
>
> socket options = SO_KEEPALIVE TCP_NODELAY IPTOS_THROUGHPUT SO_RCVBUF=262140
> SO_SNDBUF=262140
Sorry to step in, but SO_SNDBUF and SO_RCVBUF are almost always bad for
performance, unless you know *EXACTLY* what you're doing at a packet
level. A general
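Following up on that advice, a minimal [global] fragment that keeps the latency-oriented options while leaving buffer sizing to the kernel's TCP autotuning might look like this (a sketch, not a tested tuning recommendation):

```ini
[global]
    # Keep the latency/throughput hints, but omit SO_RCVBUF/SO_SNDBUF so
    # the kernel's TCP autotuning can size the socket buffers itself.
    socket options = SO_KEEPALIVE TCP_NODELAY IPTOS_THROUGHPUT
```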
2014 May 20
2
Samba 4 + Windows XP very slow - especially noticeable with many files
...mance issues. Today, I've been able to nail it down to a simple
test case.
What strikes me is that Windows XP takes about 25 seconds for one job,
while it only costs 3 seconds on Windows 7.
I have two tests:
1000 files of 10 kB each (total ~10 MB):
* Linux on localhost or a remote host (1GE or 10GE): 5.5 seconds
* Windows XP - Machine #1: 24.5 seconds
* Windows XP - Machine #2: 60 sec
* Windows 7 - Identical hardware as XP machine #1: 3 seconds
One 500MB file:
* Expected result: 4-5 seconds on 1Gbit (100-125Mbyte/s)
* Linux on localhost: 0.5 sec
* Other Linux machine with 10GE connect: 0.9 s...
2010 Apr 23
1
client mount fails on boot under debian lenny...
Hi
Is there a clean way to ensure that a glusterfs mount point specified in
/etc/fstab is mounted automatically on boot under Debian lenny when
referencing a remote node for the volfile? In my test case, every time I
reboot, the system tries to mount the filesystem before the backend 10GE
interface comes up, so it gets a "No route to host" and immediately
aborts.
I know I can dump a mount -a into /etc/rc.local but I'm hoping there's a
more elegant way to handle this scenario. The fstab entry contains
options noatime,_netdev already.
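One way to get that more elegant behavior with Debian's ifupdown is a hook script in /etc/network/if-up.d/, which runs with $IFACE set whenever an interface comes up. A sketch only, not tested on lenny; the interface name "eth2" and mount point "/mnt/gluster" are placeholders, not taken from the original post:

```sh
#!/bin/sh
# /etc/network/if-up.d/glusterfs-mount (must be executable)
# Mount the glusterfs volume once the backend storage interface is up.
[ "$IFACE" = "eth2" ] || exit 0
mountpoint -q /mnt/gluster || mount /mnt/gluster
```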
Thanks.
Mohan
2020 May 06
2
Parallel transfers with sftp (call for testing / advice)
On Tue, May 5, 2020 at 4:31 AM Peter Stuge <peter at stuge.se> wrote:
>
> Matthieu Hautreux wrote:
> > The change proposed by Cyril in sftp is a very pragmatic approach to
> > deal with parallelism at the file transfer level. It leverages the
> > already existing sftp protocol and its capability to write/read file
> > content at specified offsets. This enables
2020 Aug 14
2
Teo En Ming's Learning Achievements on 14 August 2020 Friday
...e number of AnyConnect Premium Peers has been increased from 2 to 50
after license activation.
[2] Configuring NIC teaming/bonding on CentOS 8.2 (2004) Linux Server
Today I had to configure NIC teaming/bonding on CentOS 8.2 (2004) Linux
Server. The server hardware is Dell PowerEdge R640 with 2x 10GE NIC
ports and 6x 1GE NIC ports. NIC teaming/bonding was configured using 2x
10GE NIC ports.
All I had to do was follow the reference guide below.
Reference Guide: How to Configure NIC Teaming on CentOS 8 / RHEL 8
Link: https://www.linuxtechi.com/configure-nic-teaming-centos-8-rhel-8/
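For reference, the nmcli steps in that sort of guide boil down to a few commands. A hedged sketch: the interface names (ens1f0/ens1f1), runner choice, and IP address below are placeholders, not details from the original post:

```sh
# Create the team device (runner "activebackup" is just an example choice)
nmcli connection add type team con-name team0 ifname team0 \
    team.config '{"runner": {"name": "activebackup"}}'
# Enslave the two 10GE ports
nmcli connection add type team-slave con-name team0-port1 \
    ifname ens1f0 master team0
nmcli connection add type team-slave con-name team0-port2 \
    ifname ens1f1 master team0
# Address the team and bring it up
nmcli connection modify team0 ipv4.addresses 192.0.2.10/24 ipv4.method manual
nmcli connection up team0
```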
In...
2013 Nov 12
2
Expanding legacy gluster volumes
Hi there,
This is a hypothetical problem, not one that describes specific hardware
at the moment.
As we all know, gluster currently usually works best when each brick is
the same size, and each host has the same number of bricks. Let's call
this a "homogeneous" configuration.
Suppose you buy the hardware to build such a pool. Two years go by, and
you want to grow the pool. Changes
2013 Oct 17
3
Speeding up configuration log regeneration?
Hi,
We run four-node Lustre 2.3, and I needed to both change hardware
under MGS/MDS and reassign an OSS ip. Just the same, I added a brand
new 10GE network to the system, which was the reason for MDS hardware
change.
I ran tunefs.lustre --writeconf as per chapter 14.4 in Lustre Manual,
and everything mounts fine. Log regeneration apparently works, since
it seems to do something, but exceedingly slowly. Disks show all but
no activity,...
2010 Jan 06
1
wiki down?
...3.nlr.net (216.24.186.21) 64.592 ms 64.755 ms
64.966 ms
11 newy-wash-98.layer3.nlr.net (216.24.186.22) 70.267 ms 70.299 ms
70.251 ms
12 216.24.184.86 (216.24.184.86) 153.551 ms 153.544 ms 153.535 ms
13 belnet-gw.rt1.ams.nl.geant2.net (62.40.124.162) 156.964 ms 156.927
ms 156.883 ms
14 10ge.ar1.mon.belnet.net (193.191.17.154) 158.453 ms 158.415 ms
158.367 ms
15 umh-1.customer.mons.belnet.net (193.191.11.138) 158.589 ms 158.690
ms 158.649 ms
16 193.190.193.10 (193.190.193.10) 159.595 ms 160.831 ms 160.799 ms
17 * * *
18 * * *
19 * * *
20 * * *
21 * * *
22 * * *
23 * *...
2017 Oct 14
1
NIC requirement for tiering glusterfs
Hi everybody, I have a question about the network interface used for
tiering in glusterfs. If I have a 1G NIC on the glusterfs servers and
clients, can I get more performance by setting up glusterfs tiering? Or
does the network interface need to be 10G?
2008 May 07
7
questions from a 10GbE driver author
Hi,
I maintain a driver for a 10GbE NIC which supports multiple hardware tx/rx rings. We can steer rx packets into rings using the "standard" NDIS6 Toeplitz hashing on TCP port numbers, IP addresses, etc. We can also steer packets based on MAC address. Would this NIC be considered capable of supporting Crossbow?
Also, can crossbow do things like steer outgoing packets to the
2020 Nov 14
0
ssacli start rebuild?
...I/O
>
> It doesn't matter what I expect.
It *does* matter if you know what the hardware's capable of.
TLS is a much harder problem than XOR checksumming for traditional RAID, yet it imposes [approximately zero][1] performance penalty on modern server hardware, so if your CPU can fill a 10GE pipe with TLS, then it should have no problem dealing with the simpler calculations needed by the ~2 Gbit/sec flat-out max data rate of a typical RAID-grade 4 TB spinning HDD.
Even with 8 in parallel in the best case where they're all reading linearly, you're still within a small multiple of the E...
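The arithmetic behind that claim is easy to check. A few lines, using only the 10GE line rate and the ~2 Gbit/s per-disk figure quoted in the post:

```python
# Back-of-envelope check of the throughput comparison above.
GBIT = 1e9  # bits per second

pipe = 10 * GBIT   # 10GE line rate the CPU can already fill with TLS
disk = 2 * GBIT    # ~2 Gbit/s flat-out for a RAID-grade 4 TB spinning HDD
array = 8 * disk   # 8 drives, best case, all reading linearly in parallel

print(disk / pipe)   # one disk is a fifth of the pipe: 0.2
print(array / pipe)  # even 8 disks are only 1.6x the pipe: 1.6
```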
2012 Jun 24
11
Xen 10GBit Ethernet network performance (was: Re: Experience with Xen & AMD Opteron 4200 series?)
...rdware instance and
the other way around.
I tried Debian Xen and vanilla Xen and also using Debian Xen kernel
and Debian bpo kernel as a dom0 kernel.
I did try setting the dom0 memory to various sizes and iommu=1 as well.
Finally I retried most test scenarios with irqbalance enabled, as
those Intel 10GE cards expose between 8 and 16 queues (IRQs) for
TX/RX, depending on the kernel version.
The result is always the same: I am limited to 250MByte/s max. unless
both boxes run on bare hardware.
Even Debian Xen kernel on bare hardware on both boxes does about 550MByte/s.
When testing dom0 performance,...
2020 Nov 14
6
ssacli start rebuild?
On Wed, 2020-11-11 at 16:38 -0700, Warren Young wrote:
> On Nov 11, 2020, at 2:01 PM, hw <hw at gc-24.de> wrote:
> > I have yet to see software RAID that doesn't kill the performance.
>
> When was the last time you tried it?
I'm currently using it, and the performance sucks. Perhaps it's
not the software itself or the CPU but the on-board controllers
or other
2020 May 04
3
Parallel transfers with sftp (call for testing / advice)
...ase the compute throughput. The future does
not seem brighter in that area.
In the meantime, network bandwidth has kept increasing at a regular
pace. As a result, a CPU frequency that was once sufficient to fill the
network pipe now reaches only a fraction of what the network can really
deliver. 10GE Ethernet cards are common nowadays on datacenter servers,
and no OpenSSH cipher and MAC combination can deliver the available
bandwidth for single transfers.
Introducing parallelism is thus necessary to leverage what the network
hardware can offer.
The change proposed by Cyril in sftp is a very pragmatic...
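The offset-based scheme described there can be sketched in a few lines. This is only an illustration of the idea, not the actual OpenSSH code: local files stand in for the sftp channels, and the chunk size and worker count are arbitrary placeholders.

```python
# Sketch: split a transfer into fixed-size chunks and let several workers
# each write their chunk at its own offset, mirroring sftp's ability to
# read/write file content at specified offsets.
import os
from concurrent.futures import ThreadPoolExecutor

CHUNK = 4096  # arbitrary chunk size for the example

def copy_chunk(src_path, dst_path, offset, length):
    # Each worker opens its own handles so seeks don't interfere.
    with open(src_path, "rb") as src, open(dst_path, "r+b") as dst:
        src.seek(offset)
        dst.seek(offset)
        dst.write(src.read(length))

def parallel_copy(src_path, dst_path, workers=4):
    size = os.path.getsize(src_path)
    # Pre-size the destination so every worker can write independently.
    with open(dst_path, "wb") as dst:
        dst.truncate(size)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for off in range(0, size, CHUNK):
            pool.submit(copy_chunk, src_path, dst_path, off, CHUNK)
```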
2012 Feb 07
1
Recommendations for busy static web server replacement
Hi all
after being a silent reader for some time and not very successful in getting
good performance out of our test set-up, I'm finally getting to the list with
questions.
Right now, we are operating a web server serving out 4MB files for a
distributed computing project. Data is requested from all over the world at a
rate of about 650k to 800k downloads a day. Each data file is usually
2012 Jun 07
2
Performance optimization tips Gluster 3.3? (small files / directory listings)
Hi,
I'm using Gluster 3.3.0-1.el6.x86_64, on two storage nodes, replicated mode
(fs1, fs2)
Node specs: CentOS 6.2 Intel Quad Core 2.8Ghz, 4Gb ram, 3ware raid, 2x500GB
sata 7200rpm (RAID1 for os), 6x1TB sata 7200rpm (RAID10 for /data), 1Gbit
network
I've it mounted data partition to web1 a Dual Quad 2.8Ghz, 8Gb ram, using
glusterfs. (also tried NFS -> Gluster mount)
We have 50Gb of
2020 Apr 08
6
Parallel transfers with sftp (call for testing / advice)
Hello, I'd like to share with you an evolution I made on sftp.
1. The need
I'm working at CEA (Commissariat à l'énergie atomique et aux énergies
alternatives) in France. We have a compute cluster complex, and our customers
regularly need to transfer big files from and to the cluster. Each of our front
nodes has an outgoing bandwidth limit (let's say 1Gb/s each, generally more