Displaying 14 results from an estimated 14 matches for "2gbit".
2016 Sep 06
2
No increased throughput with SMB Multichannel and two NICs
...n't try and second guess the
kernel tcp tuning.
> min receivefile size = 16384
> use sendfile = Yes
2 above not needed. With aio enabled sendfile is disabled.
Receive file to kernel isn't implemented in Linux.
> Transferring a file from share(linux,tmpfs) to Windows SSD hits >2GBit/s
> now.
>
> But transferring from Windows SSD to linux-tmpfs share still only hits 1
> GBit/s (~500MBit per interface).
>
> The SSD is fast enough to deliver 2GBit/s and on samba-side no disks
> involved (tmpfs).
>
> Is there maybe another option required?
Delete all...
2016 Sep 06
2
No increased throughput with SMB Multichannel and two NICs
On Tue, Sep 06, 2016 at 08:06:48PM +0200, Volker Lendecke via samba wrote:
> On Tue, Sep 06, 2016 at 07:58:27PM +0200, Daniel Vogelbacher via samba wrote:
> > I don't have these options in my smb.conf.
> > Do you recommend any specific values?
>
> aio read size = 1
> aio write size = 1
>
> You might try with current master. There we have improved async I/O
>
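For reference, a minimal smb.conf sketch combining the multichannel and async I/O settings discussed in this thread might look like the following (the option names are real Samba parameters; the share name and path are placeholders, not taken from the thread):

```ini
[global]
    # SMB3 multichannel (still experimental in the Samba 4.4/4.5 era
    # this thread dates from)
    server multi channel support = yes
    # Force async I/O for all read/write sizes; note that with AIO
    # enabled sendfile is disabled, so "use sendfile" is redundant
    aio read size = 1
    aio write size = 1

[fastshare]
    path = /mnt/tmpfs
    read only = no
```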
2016 Sep 06
0
No increased throughput with SMB Multichannel and two NICs
...el tcp tuning.
>
>> min receivefile size = 16384
>> use sendfile = Yes
>
> 2 above not needed. With aio enabled sendfile is disabled.
> Receive file to kernel isn't implemented in Linux.
>
>> Transferring a file from share(linux,tmpfs) to Windows SSD hits >2GBit/s
>> now.
>>
>> But transferring from Windows SSD to linux-tmpfs share still only hits 1
>> GBit/s (~500MBit per interface).
>>
>> The SSD is fast enough to deliver 2GBit/s and on samba-side no disks
>> involved (tmpfs).
>>
>> Is there maybe ano...
2016 Sep 06
2
No increased throughput with SMB Multichannel and two NICs
...al copy speed it.
> >
>
> Now I have:
>
> server multi channel support = yes
> vfs objects = aio_pthread,recycle
> aio read size = 1
> aio write size = 1
> strict locking = No
>
> But read (linux->windows) transfer rate is now again 1GBit/s instead of
> 2GBit/s?!
OK, very strange. But at least you can now
add in the things I told you to remove one
by one to see which one I was wrong about :-).
Don't keep adding, add one - then remove and
add another until we discover which makes
the difference (if it's indeed one, which
I'm guessing) !
2016 Sep 06
0
No increased throughput with SMB Multichannel and two NICs
...ery important!!!
aio read size = 1
aio write size = 1
read raw = Yes
write raw = Yes
strict locking = No
socket options = TCP_NODELAY IPTOS_LOWDELAY SO_RCVBUF=131072
SO_SNDBUF=131072
min receivefile size = 16384
use sendfile = Yes
Transferring a file from share(linux,tmpfs) to Windows SSD hits >2GBit/s
now.
But transferring from Windows SSD to linux-tmpfs share still only hits 1
GBit/s (~500MBit per interface).
The SSD is fast enough to deliver 2GBit/s and on samba-side no disks
involved (tmpfs).
Is there maybe another option required?
Regards
Daniel Vogelbacher
2007 Mar 19
2
TC not working well with bonded nics please help
...load balancing) mode. I created a qdisc, class
and a filter as follows:
tc qdisc add dev bond0 root handle 1: htb
tc class add dev bond0 parent 1: classid 1:1 htb rate 240mbps
tc class add dev bond0 parent 1:1 classid 1:2 htb rate 50 ceil 50 quantum
1500
I started a TCP traffic between this bond (2gbit bandwidth) and a remote
nic (1gbit bandwidth).
Without qos, bond was transmitting at 960Mbps.
After I executed above mentioned commands, it was expected that the bond
will transmit at 400Mbps but it was transmitting only at 70Mbps.
Same thing was observed with different qos rates for class 1:2, out...
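One plausible explanation for the low throughput above (a guess, not confirmed in the thread): in tc, a bare number in `rate` is parsed as bytes per second, and `mbps` means megaBYTES per second while `mbit` means megabits, so `rate 50` does not mean 50 Mbit/s. A sketch of the same setup with explicit units, reusing the poster's bond0 device, would be:

```shell
# HTB root qdisc on the bonded interface
tc qdisc add dev bond0 root handle 1: htb
# Parent class at roughly the bond's 2 Gbit/s capacity
# ("mbit" = megabits/s; "mbps" in tc means megabytes/s)
tc class add dev bond0 parent 1: classid 1:1 htb rate 2000mbit
# Child class explicitly limited to 400 Mbit/s
tc class add dev bond0 parent 1:1 classid 1:2 htb rate 400mbit ceil 400mbit
```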
2008 Aug 20
44
GPL PV drivers for Windows 0.9.11-pre12
...ust uploaded 0.9.11-pre12 of the GPL PV drivers for Windows.
Since -pre10 (and -pre11) I've fixed a heap of crashes that were
plaguing xennet under load, and also rewritten the interrupt/event
distribution logic to improve performance.
Under windows 2003 I can now get network speeds of 1-2Gbit/second TX and
600Mbit/second RX, which is considerably better than I was getting
before.
Please download it and give it a go.
http://www.meadowcourt.org/downloads
James
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com...
2007 Mar 18
0
Doubt...
...load balancing) mode. I created a qdisc, class
and a filter as follows:
tc qdisc add dev bond0 root handle 1: htb
tc class add dev bond0 parent 1: classid 1:1 htb rate 240mbps
tc class add dev bond0 parent 1:1 classid 1:2 htb rate 50 ceil 50 quantum
1500
I started a TCP traffic between this bond (2gbit bandwidth) and a remote
nic (1gbit bandwidth).
Without qos, bond was transmitting at 960Mbps.
After I executed above mentioned commands, it was expected that the bond
will transmit at 400Mbps but it was transmitting only at 70Mbps.
Same thing was observed with different qos rates for class 1:2, out...
2006 Nov 03
2
Filebench, X4200 and Sun Storagetek 6140
Hi there
I'm busy with some tests on the above hardware and will post some scores soon.
For those that do _not_ have the above available for tests, I'm open to suggestions on potential configs that I could run for you.
Pop me a mail if you want something specific _or_ you have suggestions concerning filebench (varmail) config setup.
Cheers
2011 Sep 05
0
Slow performance
....com/bugzilla/show_bug.cgi?id=1281
http://oss.oracle.com/bugzilla/show_bug.cgi?id=1300
on the mailing list archive, this thread also shows similar behavior:
http://www.mail-archive.com/ocfs2-users at oss.oracle.com/msg02509.html
The cluster is formed by two Dell PE 1950 with 8G ram, attached via
2Gbit FC to a Dell EMC AX/100 storage. The network between them is
running at 1Gbit.
Using CentOS 5.5, OCFS2 1.6.4 and ULEK 2.6.32-100.0.19.el5.
Tests so far:
* We have changed mount option data from ordered to writeback -- no
success;
* We have added mount option localalloc=16 -- no success;
* We ha...
2010 Jan 31
1
poor network performance to one of two guests
G'day, I have a host running two kvm guests. One of them gets very poor network
performance, testing with iperf I get ~10MBit/sec to guest A, >400MBit/sec to
guest B (running iperf between the host/guest). Both guests are using the same
bridge:
Guest A:
<interface type='bridge'>
<mac address='54:52:00:75:24:91'/>
<source
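The `<interface>` XML above is cut off mid-element. For comparison, a complete libvirt bridge interface definition typically looks like the sketch below (the bridge name `br0` and the virtio model are illustrative assumptions, not values from the post):

```xml
<interface type='bridge'>
  <mac address='54:52:00:75:24:91'/>
  <source bridge='br0'/>
  <!-- virtio usually performs far better than an emulated NIC,
       and a mismatch here is a common cause of one slow guest -->
  <model type='virtio'/>
</interface>
```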
2007 Dec 18
1
System hangs up every day
Hello everybody
Unfortunately my problem still doesn't have any solution.
But I have an interesting observation. The gateway freezes very quickly, if torrent client programs are running on workstations.
I assume the cause of the problem consists in many number of TCP/IP connections that torrent client establishes.
Any ideas?
Maybe I can tune somehow a TCP/IP via kernel, sysctl or pf settings?
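Since the poster mentions pf, the gateway is presumably a BSD box, where the usual knob for torrent-style connection floods is the pf state table. A hedged pf.conf sketch (the limit value is illustrative; defaults and appropriate values vary by release):

```
# pf.conf: raise the state-table limit, since every torrent peer
# connection consumes one state entry and the default is often
# low enough to exhaust under BitTorrent load
set limit states 100000
set optimization aggressive
```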
2009 Jan 17
25
GPLPV network performance
Just reporting some iperf results. In each case, Dom0 is iperf server,
DomU is iperf client:
(1) Dom0: Intel Core2 3.16 GHz, CentOS 5.2, xen 3.0.3.
DomU: Windows XP SP3, GPLPV 0.9.12-pre13, file based.
Iperf: 1.17 Gbits/sec
(2) Dom0: Intel Core2 2.33 GHz, CentOS 5.2, xen 3.0.3.
DomU: Windows XP SP3, GPLPV 0.9.12-pre13, file based.
Iperf: 725 Mbits/sec
(3) Dom0: Intel Core2 2.33 GHz,