Displaying 20 results from an estimated 76 matches for "10gbit".
2008 Mar 13
4
10Gbit ethernet
If I could ask a question about 10Gbit ethernet...
We have a 70-node cluster built on CentOS 5.1, using NFS to mount user home
areas. At the moment the network is a bottleneck, and it's going to get worse
as we add another 112 CPUs in the form of two blade servers.
To help things breathe better, we are considering building thre...
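Before adding hardware, it can help to confirm where the NFS load actually
lands. A minimal sketch of the usual first checks (interface name and
interval are placeholders):

nfsstat -s        # per-operation counters on the NFS server: do reads,
                  # writes, or getattrs dominate?
sar -n DEV 1 10   # per-NIC throughput over 10 seconds; shows whether
                  # the link itself is saturated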
2011 Jan 17
2
Question on how to get Samba to use larger pread/write calls.
We are testing Samba 3 (and 4) on Fedora Core 13,
over a 10Gbit connection, with a Mac OS 10.6.4 system
as the client. We will be adding some Windows
machines with 10Gbit interfaces sooner or later.
We are seeing 100-150 MBytes/sec read or write
performance between the Mac and the FC13 system
over the 10Gbit interface, but it should be capable of
400-500 MBytes/sec...
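The pread/pwrite sizes are negotiated by the client, but a few server-side
smb.conf options usually matter at these speeds. A hedged sketch for Samba
3.x (buffer values are illustrative, not tuned; check each option against
your version):

[global]
    # let the kernel copy file data straight to the socket on reads
    use sendfile = yes
    # hand I/O of 16K and larger to POSIX AIO (Samba 3.x options)
    aio read size = 16384
    aio write size = 16384
    # larger socket buffers for a high-bandwidth, low-loss LAN
    socket options = TCP_NODELAY SO_RCVBUF=262144 SO_SNDBUF=262144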
2017 Feb 03
0
CentOS 7.3, SFP+ 10GBit network
Hello,
our server uses a 1GBit NIC and a 10GBit SFP+ NIC.
When both NICs are configured ONBOOT=yes, both come up fine.
When the 1GBit NIC is set to ONBOOT=no, the network does not come up.
A missing driver? What else?
--
Kind regards
Helmut Drodofsky
Internet XS Service GmbH
Heßbrühlstraße 15
70565 Stuttgart
Management: Dr.-Ing. Roswitha Hahn-Drodofsky
HRB 21091 Stuttgart
VAT ID: DE190582774
Tel.
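For reference, a minimal ifcfg file that should bring the SFP+ port up on
its own, plus a check that a driver is bound at all (device name and
addresses are placeholders):

# /etc/sysconfig/network-scripts/ifcfg-ens1f0
DEVICE=ens1f0
TYPE=Ethernet
BOOTPROTO=none
IPADDR=192.0.2.10
PREFIX=24
ONBOOT=yes

# confirm the kernel bound a driver to the 10GBit port
ethtool -i ens1f0
lspci | grep -i ethernet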
2012 Jun 24
11
Xen 10GBit Ethernet network performance (was: Re: Experience with Xen & AMD Opteron 4200 series?)
...en setup(s) running on top of Opterons 4274HE, Opteron
> 4200 or Dell R515 machines and is willing to share some experience?
Meanwhile, I got myself a two machine test setup for evaluation.
2 machines:
- 1x Opteron 4274HE (8C)
- 32GByte RAM
- 2x 1GBit Eth on board
- 1x Intel X520-DA2 (dual port 10GBit Ethernet)
I installed Debian squeeze with Xen on both machines.
"Debian kernel": 2.6.32-5-amd64
"Debian Xen kernel": 2.6.32-5-xen-amd64
"Debian bpo kernel": 3.2.0-0.bpo.2-amd64
"Debian Xen": 4.0.1
"Vanilla Xen": 4.1.2
TCP/IP transfer test:
user@lx...
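For comparison, a typical raw-TCP baseline for a setup like this would be
iperf between the two machines (address and stream count are placeholders):

iperf -s                       # on the receiver
iperf -c 192.0.2.1 -P 4 -t 30  # on the sender; several parallel streams,
                               # since a single TCP stream is often
                               # CPU-bound on 10GbE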
2018 Mar 18
4
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
...5m51.862s
user 0m0.862s
sys 0m8.334s
root at gluster-client:/mnt/gluster_perf_test/ # time rm -rf private_perf_test/
real 0m49.702s
user 0m0.087s
sys 0m0.958s
## Hosts
- 16x Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz per Gluster host / client
- Storage: iSCSI provisioned (via 10Gbit DAC/Fibre), SSD disk, 50K R/RW 4k IOP/s, 400MB/s per Gluster host
- Volumes are replicated across two hosts and one arbiter only host
- Networking is 10Gbit DAC/Fibre between Gluster hosts and clients
- 18GB DDR4 ECC memory
## Volume Info
root at gluster-host-01:~ # gluster pool list
UUID...
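Small-file workloads on Gluster are usually metadata-bound. A hedged sketch
of commonly suggested volume options (volume name gv0 is a placeholder;
option availability depends on the Gluster version):

gluster volume set gv0 cluster.lookup-optimize on
gluster volume set gv0 performance.parallel-readdir on
gluster volume set gv0 network.inode-lru-limit 200000
gluster volume set gv0 performance.cache-size 1GB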
2018 Mar 18
0
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
...ot at gluster-client:/mnt/gluster_perf_test/ # time rm -rf private_perf_test/
>
> real    0m49.702s
> user    0m0.087s
> sys     0m0.958s
>
>
> ## Hosts
>
> - 16x Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz per Gluster host / client
> - Storage: iSCSI provisioned (via 10Gbit DAC/Fibre), SSD disk, 50K R/RW
> 4k IOP/s, 400MB/s per Gluster host
> - Volumes are replicated across two hosts and one arbiter only host
> - Networking is 10Gbit DAC/Fibre between Gluster hosts and clients
> - 18GB DDR4 ECC memory
>
> ## Volume Info
>
> root at gluster-h...
2012 Nov 18
6
Xen + IPv6 + Netapp = NFS read problem
...ernel,
newer xen, newer nfs commons etc)
One of the virtual servers is a backup server which has mounted a big NFS
share from a Netapp. It copies changed data to the Netapp share (we also
use Netapp's excellent snapshot technology to keep older versions of the
data).
Our network is mixed 1GBit/10GBit ethernet with a Juniper switch as the
central point of the network. Internally we use the IPv6 protocol.
The problem is the following:
The read performance of the backup server is dramatically low: typical
sequential reads from the Netapp share fall to 5-10 MBytes/s!
On the other hand WRITE...
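A first step for a problem like this is to check what read size and
transport the NFS client actually negotiated, then force large sizes
explicitly. A sketch (server, export, and sizes are placeholders):

nfsstat -m        # shows rsize/wsize, NFS version, and TCP vs UDP
                  # for each mount
mount -t nfs -o rsize=65536,wsize=65536,proto=tcp \
    netapp:/vol/backup /mnt/backup
# (proto=tcp6 for a mount over IPv6)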
2020 Sep 01
1
Rsync 3.2.2/OSX/ on high bandwidth
Hello,
First, thanks for your amazing job !
I have used rsync on OS X on a 10Gbit network.
Now I have moved to a 50Gbit network, but rsync stays at a maximum transfer
rate of 130 MB/s.
I believe that the limit comes from the read block size, fixed at 256K.
Could it be changed to 512K, or even to 1000K, to enjoy the maximum
bandwidth available?
Thanks
Guillaume B
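As far as I know, rsync exposes no command-line knob for that internal read
size, so the usual workarounds at high bandwidth are to skip the delta
algorithm and to run several transfers in parallel. A hedged sketch (paths,
host, and job count are placeholders; GNU parallel is an assumed helper):

# whole-file mode: no rolling checksums, just a straight copy
rsync -a --whole-file /source/ user@dest:/target/

# one rsync per top-level directory, four at a time
ls /source | parallel -j4 rsync -a --whole-file /source/{}/ user@dest:/target/{}/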
2009 Jul 21
5
File Size Limit - Why/How?
Hello Samba Lists:
I am trying to read a 22TB file from a system running OpenSuSE 10.3/x64
(using whatever version of Samba came out with 10.3/x64). The file is on a
30TB XFS volume. I'm connecting over 10GBit Ethernet from a Windows Server
2003/x64 client. If I try to read the 22TB file, I get the message "Access
Denied", but if I try to read 100GB files from the same volume, they read
with no problems - please help... Note that I'm not trying to write - only
read...
Lance
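Two checks that narrow this down, as a sketch (paths are placeholders):
first confirm the file reads cleanly on the server itself, which rules out
XFS; then raise the Samba log level and watch which call fails when the
Windows client opens the file:

dd if=/data/bigfile of=/dev/null bs=1M count=100   # local read, rules out XFS

# in smb.conf:  log level = 3
tail -f /var/log/samba/log.smbd                    # watch the failing open/read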
2013 Nov 13
1
strange speed of mkfs Centos vs. Debian
Hi,
I'm testing a storage system and different network settings, and I'm
faced with a strange phenomenon.
A mkfs.ext4 on the CentOS server takes 11 minutes.
The same mkfs.ext4 command on the Debian installation is done in 20 seconds.
It is formatting a 14TB 10Gbit iSCSI target.
It is the same server; CentOS and Debian are installed on different
internal hard disks.
Any explanation why Debian is so much faster? Any hints?
Regards, Götz
--
Götz Reinicke
IT-Koordinator
Tel. +49 7141 969 82 420
Fax +49 7141 969 55 420
E-Mail goetz.reinicke at filmakademie....
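One plausible explanation (an assumption, not confirmed in the thread):
newer e2fsprogs defaults to lazy inode-table and journal initialization,
deferring most of the writes to first mount, while older e2fsprogs, as on
CentOS, writes every inode table during mkfs. It is easy to test by forcing
the behaviour both ways (device name is a placeholder):

mkfs.ext4 -E lazy_itable_init=1,lazy_journal_init=1 /dev/sdb1  # should be fast
mkfs.ext4 -E lazy_itable_init=0,lazy_journal_init=0 /dev/sdb1  # should be slow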
2018 Mar 19
0
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
...ot at gluster-client:/mnt/gluster_perf_test/ # time rm -rf private_perf_test/
>
> real    0m49.702s
> user    0m0.087s
> sys     0m0.958s
>
>
> ## Hosts
>
> - 16x Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz per Gluster host / client
> - Storage: iSCSI provisioned (via 10Gbit DAC/Fibre), SSD disk, 50K R/RW
> 4k IOP/s, 400MB/s per Gluster host
> - Volumes are replicated across two hosts and one arbiter only host
> - Networking is 10Gbit DAC/Fibre between Gluster hosts and clients
> - 18GB DDR4 ECC memory
>
> ## Volume Info
>
> root at gluster-ho...
2008 Dec 19
3
dom0 using only a single CPU (networking)
Hello,
I'm using a server with 10Gbps network interfaces and 4 CPUs, running
several domUs. The problem is that in this setup, with high network load,
dom0 turns out to be the bottleneck, using only a single CPU which is
saturated at 100%. So the network speed is bounded to much less than
10Gbps. How could I make dom0 use more CPUs in parallel? I checked my VCPU
setup; I have 4 VCPUs
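Giving dom0 more usable VCPUs is set on the hypervisor command line rather
than in the guest configs. A hedged sketch for a GRUB entry of that era
(the rest of the kernel line is elided):

# in the Xen line of /boot/grub/menu.lst:
kernel /boot/xen.gz dom0_max_vcpus=4 dom0_vcpus_pin ...

On top of that, spreading the NIC's interrupts across those VCPUs
(irqbalance, or writing masks to /proc/irq/*/smp_affinity) is usually
needed before dom0 network processing scales past one CPU.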
2010 Oct 11
1
Problems with gpxelinux and Broadcom 57711
Hello.
We are using gpxelinux from SYSLINUX 4.02. We recently purchased some HP
Proliant BL460c G6 servers with Broadcom BCM 57711 10Gbit NICs.
We have a very simple pxelinux.gpxe script compiled into gpxelinux.0:
#!gpxe
set use-cached 1
dhcp net0
chain http://webserver/gpxe/gpxe.php?IP=${net0/ip}
The PHP script dynamically creates a config file that looks something
like this:
#!gpxe
set 209:string pxelinux.cfg/default
set 210:st...
2017 Oct 27
5
Poor gluster performance on large files.
...erfs 3.10.5
32GB RAM
36-drive array on LSI RAID
Sustained >2.5GB/s to XFS (164TB)
Speed tests are done locally with a single thread (dd) or 4 threads
(iozone), using my standard 64k IO size against 20G or 5G files (20G for
local drives, 5G for Gluster).
Servers have Intel X520-DA2 dual-port 10Gbit NICs bonded together with an
802.3ad LAG to a Quanta LB6-M switch. Iperf throughput numbers are single
stream >9000Mbit/s
Here is my current gluster performance:
Single brick on server 1 (server 2 was similar):
Fuse mount:
1000MB/s write
325MB/s read
Distributed only servers 1+2:
Fuse mount...
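Gluster's own profiler is a quick way to see whether read latency is spent
in the bricks or in the client stack. A sketch (volume name is a
placeholder):

gluster volume profile gv0 start
# ... repeat the dd / iozone run ...
gluster volume profile gv0 info   # per-brick latency and call counts
gluster volume profile gv0 stop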
2015 Jan 23
8
network copy performance is poor (rsync) - debugging suggestions?
...MB/s
The options I use are:
rsync -aHAXxv --numeric-ids --progress -e "ssh -T -c arcfour -o
Compression=no -x"
If I copy files via SMB to/from the servers I do get 60-80 MB/s; a dd
(r/w) on the attached storage gives 90 MB/s on the 1Gbit iSCSI (source
server) and up to 600 MB/s on the 10Gbit iSCSI (destination server) storage.
Both servers have plenty of memory and CPU usage looks low.
Currently we don't use jumbo frames. Overall network usage is moderate
to low. There are no special sysctl tweaks in use yet.
As mentioned, I'm confused that even with SMB I do get 3 to 4 times be...
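One way to split the problem: measure the ssh pipe itself with the same
cipher and no rsync involved. If this tops out at the same rate, the tunnel
is the limit, not rsync (host name is a placeholder):

dd if=/dev/zero bs=1M count=2000 | \
    ssh -T -c arcfour -o Compression=no desthost 'cat > /dev/null'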
2015 Jul 06
2
Live migration using shared storage in different networks
Hi!
I am building a KVM cluster that needs VM live migration.
My shared storage as well as the KVM hosts will be running
CentOS.
Because 10 Gbps Ethernet switches are very expensive at the
moment I will connect the KVM hosts to the storage by
cross-over cables and create private networks for each
connection (10.0.0.0/30 and 10.0.0.4/30).
The following diagram shows the topology
Management
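With shared storage, only the guest's memory travels during migration, and
libvirt lets you point that traffic at a specific network. A hedged sketch
(host names and the migration URI are placeholders):

# migrate vm1 live; control connection over the management network,
# memory stream over the 10.0.0.0/30 link
virsh migrate --live --persistent vm1 qemu+ssh://kvm2-mgmt/system tcp://10.0.0.2/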
2014 Mar 22
4
suggestions for a "fast" fileserver - 1G / 10G
...eservers as our requirements changed over time.
The "main" problem we are faced with is, that with smb (windows 7 and OS
X) clients we never get really close to GBit speed on reads or writes.
Using the same servers/storages with ftp, ssh, rsync, nfs we are on the
max. of GBit or with the 10Gbit Storage/server on the max of the storage
we currently own. (about 400MB/s)
E.g. from my Mac Pro I get smb r/w +- 40MB/s, with ftp I get 90MB/s on a
1Gbit Server.
So I try to eliminate some bottle necks. But where are they?
I know there are some protocol overheads etc. comparing smb and e.g. ftp....
2011 May 12
1
Slow reading speed over RDMA
...results did not improve.
Moreover, I have tested all the suggested optimization hacks/parameters with
little change in the results.
A little more insight on the setup: we have 24 HD machines with dual Xeon
CPUs and 16GByte of RAM, working over an Areca RAID card using JBOD, all
boxes connected via 10Gbit/s InfiniBand. The CPUs are far from being maxed
out (top reports around 50% idle), as is the network.
I'm using CentOS 5.5 on all the machines. Let me know if you need more
information on what's going on (configuration files, setup, etc, etc).
Thanks in advance,
Daniel
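Before touching Gluster parameters further, the fabric itself can be
checked with the InfiniBand perftest tools, which measure raw RDMA read
bandwidth with no filesystem in the path (host name is a placeholder):

ib_read_bw                  # on the server side
ib_read_bw storage-node-01  # on the client side; prints the MB/s achieved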
2017 Oct 30
0
Poor gluster performance on large files.
...rray on LSI raid
> Sustained >2.5GB/s to XFS (164TB)
>
> Speed tests are done with local with single thread (dd) or 4 threads
> (iozone) using my standard 64k io size to 20G or 5G files (20G for local
> drives, 5G for gluster) files.
>
> Servers have Intel X520-DA2 dual-port 10Gbit NICs bonded together with
> an 802.3ad LAG to a Quanta LB6-M switch. Iperf throughput numbers are single
> stream >9000Mbit/s
>
> Here is my current gluster performance:
>
> Single brick on server 1 (server 2 was similar):
> Fuse mount:
> 1000MB/s write
> 325MB/s read
&...
2018 Mar 19
2
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
...r-client:/mnt/gluster_perf_test/ # time rm -rf
> private_perf_test/
>
> real    0m49.702s
> user    0m0.087s
> sys     0m0.958s
>
>
> ## Hosts
>
> - 16x Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz per Gluster host /
> client
> - Storage: iSCSI provisioned (via 10Gbit DAC/Fibre), SSD disk, 50K
> R/RW 4k IOP/s, 400MB/s per Gluster host
> - Volumes are replicated across two hosts and one arbiter only host
> - Networking is 10Gbit DAC/Fibre between Gluster hosts and clients
> - 18GB DDR4 ECC memory
>
> ## Volume Info
>
> root at gluster-h...