similar to: slow throughput on 1gbit lan

Displaying 20 results from an estimated 1000 matches similar to: "slow throughput on 1gbit lan"

2010 Oct 07
2
Truncating leading zeros in strings
I am new to R. I think this will be simple, but I don't yet know my way around. I am generating character strings from the system clock that represent integers, and I want to convert them to integer values. strtoi() works well, except when there are leading zeros in the string. Could anyone suggest a way to remove those leading zeros? Thanks Paul -- E. Paul Wileyto, Ph.D.
2006 Feb 10
0
OpenSSH ControlAllowUsers, et al Patch
Attached (and inline) is a patch to add the following config options: ControlBindMask ControlAllowUsers ControlAllowGroups ControlDenyUsers ControlDenyGroups It pulls the peer credential check from client_process_control() in ssh.c, and expounds upon it in a new function, client_control_grant(). Supplemental groups are not checked in this patch. I didn't feel comfortable taking a shot
1998 Feb 22
0
resource starvation against passwd(1)
Standard apology if old... This demonstrates a resource starvation attack on the setuid root passwd(1) program. In the case I tested it was the Red Hat Linux passwd-0.50-7 program without shadowing. #include <stdio.h> #include <sys/time.h> #include <stdlib.h> #include <unistd.h> #include <sys/resource.h> main () { struct rlimit rl, *rlp; rlp=&rl;
2006 Aug 04
2
route mail through different gateway
Hi All, I've got a server with one LAN card, eth0, ip=10.0.0.5. Default access to the internet is done through an ADSL router, gw 10.0.0.1. We have a second internet access through another ADSL router, gw 10.0.0.2. I want to send all outgoing e-mail through gw 10.0.0.2. How can it be done? I've tried to mark packets: iptables -t mangle -A OUTPUT -p tcp --dport 25 -j MARK --set-mark 0x1 and ip ru add
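
A minimal sketch of the fwmark plus policy-routing approach the poster is attempting, using the interface and gateways from the post; the table number 100 and the mark value 0x1 are arbitrary choices:

  # mark outgoing SMTP traffic
  iptables -t mangle -A OUTPUT -p tcp --dport 25 -j MARK --set-mark 0x1
  # a routing table whose default route is the second ADSL router
  ip route add default via 10.0.0.2 dev eth0 table 100
  # send marked packets through that table
  ip rule add fwmark 0x1 table 100
  ip route flush cache
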
2018 Jan 24
1
fault tolerancy in glusterfs distributed volume
I have made a distributed replica3 volume with 6 nodes. I mean this: Volume Name: testvol Type: Distributed-Replicate Volume ID: f271a9bd-6599-43e7-bc69-26695b55d206 Status: Started Snapshot Count: 0 Number of Bricks: 2 x 3 = 6 Transport-type: tcp Bricks: Brick1: 10.0.0.2:/brick Brick2: 10.0.0.3:/brick Brick3: 10.0.0.1:/brick Brick4: 10.0.0.5:/brick Brick5: 10.0.0.6:/brick Brick6:
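
A hedged sketch of how a 2 x 3 distributed-replicate volume like the one shown is typically created; the sixth brick is truncated in the excerpt, so 10.0.0.7 below is only a placeholder. Bricks are grouped into replica sets in the order they are listed:

  # the first three bricks form one replica-3 set, the last three the other
  gluster volume create testvol replica 3 \
      10.0.0.2:/brick 10.0.0.3:/brick 10.0.0.1:/brick \
      10.0.0.5:/brick 10.0.0.6:/brick 10.0.0.7:/brick
  gluster volume start testvol
  gluster volume info testvol

With this layout each file lives on exactly one of the two replica-3 sets, so the volume stays available for a file as long as a quorum of that file's set (two of its three bricks) is up.
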
2005 Jan 02
1
Dnat problems with adsl-box
Hello! So I've got this problem. I have a Debian sarge (with 2.6 kernel) box with shorewall up and a network something like this: (net-ip)adsl-router(10.0.0.2)->(10.0.0.5)debian(192.168.0.1)->(192.168.0.x)lan-machines Everything works just great but I can't get port forwarding to work. shorewall show nat shows the traffic (to port 2002) but the machine (192.168.0.3) isn't getting it. I have
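
For reference, a hedged sketch of roughly the raw iptables equivalent of the DNAT rule the poster needs; port and addresses are taken from the post, and the external interface name eth0 is an assumption. Note that in this topology the ADSL router itself also has to forward port 2002 on to 10.0.0.5, otherwise the packets never reach the Shorewall box:

  # forward incoming connections on port 2002 to the LAN machine
  iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 2002 \
      -j DNAT --to-destination 192.168.0.3
  # let the forwarded traffic through the FORWARD chain
  iptables -A FORWARD -p tcp -d 192.168.0.3 --dport 2002 -j ACCEPT
  # forwarding must be enabled
  echo 1 > /proc/sys/net/ipv4/ip_forward
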
2017 Apr 07
1
Slow write times to gluster disk
Hi, We noticed a dramatic slowness when writing to a gluster disk when compared to writing to an NFS disk. Specifically when using dd (data duplicator) to write a 4.3 GB file of zeros: * on NFS disk (/home): 9.5 Gb/s * on gluster disk (/gdata): 508 Mb/s The gluster disk is 2 bricks joined together, no replication or anything else. The hardware is (literally) the same: * one server with
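
When benchmarking with dd, the client page cache can inflate the NFS number while the FUSE-mounted gluster volume pays for network round trips on every write. A hedged sketch of a more comparable test, with mount points from the post and arbitrary block size and count:

  # write 4 GiB of zeros, including the final flush in the timing
  dd if=/dev/zero of=/gdata/ddtest bs=1M count=4096 conv=fdatasync
  dd if=/dev/zero of=/home/ddtest  bs=1M count=4096 conv=fdatasync
  # or bypass the page cache entirely (a FUSE mount may need direct-io-mode for this)
  dd if=/dev/zero of=/gdata/ddtest bs=1M count=4096 oflag=direct
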
2017 Jul 02
3
Re: virtual drive performance
Hi again, just today an issue I thought had been resolved popped up again. We back up the machine by doing: virsh snapshot-create-as --domain domain --name backup --no-metadata --atomic --disk-only --diskspec hda,snapshot=external # backup hda.qcow2 virsh blockcommit domain hda --active --pivot Every now and then this process fails with the following error message: error: failed to pivot
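
For reference, the backup cycle the poster describes as a hedged, commented sketch; domain and disk names are taken from the post, and the backup destination is a placeholder:

  # take a disk-only external snapshot: guest writes now go to a new overlay file
  virsh snapshot-create-as --domain domain --name backup \
      --no-metadata --atomic --disk-only --diskspec hda,snapshot=external
  # the base image is now read-only and safe to copy
  cp hda.qcow2 /backup/hda.qcow2
  # merge the overlay back into the base image and switch the guest to it
  virsh blockcommit domain hda --active --pivot --verbose
  # confirm no block job is left attached to the disk
  virsh blockjob domain hda --info
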
2009 Dec 03
3
Xen DomU with high IOWAIT and low disk performance (lvm raid1)
Hello list! My setup: Dom0: Debain 5.0.3 with xen-hypervisor-3.2-1-i386 (2.6.26-2-xen-686) DomU: Ubuntu 8.04 2.6.26-2-xen-686 System is running on two hard drives mirrored with raid1 and organized by LVM. Dom0 and DomU are running on logical volumes. Partitions for DomUs are connected via ''phy:/dev/lvm/disk1,sda1,w'' for example. Here are some scenarios I testet, where you
2017 Jul 02
2
Re: Reply: virtual drive performance
Just a little catch-up. This time I was able to resolve the issue by doing: virsh blockjob domain hda --abort virsh blockcommit domain hda --active --pivot Last time I had to shut down the virtual machine and do this while being offline. Thanks Wang for your valuable input. As far as the memory goes, there's plenty of head room: $ free -h total used free
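
A hedged sketch of the recovery sequence described, with a status check first so a stuck job can be confirmed before it is aborted:

  # show whether a block job from an earlier commit is still attached to hda
  virsh blockjob domain hda --info
  # abort the stuck job, then retry the commit and pivot
  virsh blockjob domain hda --abort
  virsh blockcommit domain hda --active --pivot --verbose
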
2018 Feb 04
2
halo not work as desired!!!
I have 2 data centers in two different regions; each DC has 3 servers. I have created a glusterfs volume with 4 replicas. This is the glusterfs volume info output: Volume Name: test-halo Type: Replicate Status: Started Snapshot Count: 0 Number of Bricks: 1 x 4 = 4 Transport-type: tcp Bricks: Brick1: 10.0.0.1:/mnt/test1 Brick2: 10.0.0.3:/mnt/test2 Brick3: 10.0.0.5:/mnt/test3 Brick4: 10.0.0.6:/mnt/test4
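
Halo replication only kicks in once it is switched on and tuned per volume. A hedged sketch follows, with option names as I recall them for the halo feature; verify them against gluster volume set help on your version, and the 10 ms latency threshold is just an example value:

  gluster volume set test-halo cluster.halo-enabled yes
  gluster volume set test-halo cluster.halo-max-latency 10
  gluster volume set test-halo cluster.halo-min-replicas 2
  # confirm what the volume actually uses
  gluster volume get test-halo all | grep -i halo
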
2016 Sep 04
2
No increased throughput with SMB Multichannel and two NICs
Hello, I'm running Samba 4.4.5 with enabled SMB Multichannel. The Linux server has two 1GBit/s NICs and for testing purposes I've shared a tmpfs mountpoint with 2GiB and ~2GiB large test-file. My Windows 10 host has one dual-port 1GBit/s NIC, and if both interfaces are enabled, Get-SmbMultichannelConnection lists active multichannel connections to my Linux SMB server. If I disable one
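
On the Samba side, multichannel has to be enabled explicitly (it was still experimental in 4.4). A hedged sketch for checking the setting and watching whether both links carry traffic; eth0/eth1 are placeholder interface names:

  # smb.conf [global] needs: server multi channel support = yes
  testparm -sv 2>/dev/null | grep -i "multi channel"
  # during a copy, watch whether traffic is spread across both NICs
  watch -n1 'ip -s link show eth0; ip -s link show eth1'
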
2018 Feb 05
0
halo not work as desired!!!
I have mounted the halo glusterfs volume in debug mode, and the output is as follows: . . . [2018-02-05 11:42:48.282473] D [rpc-clnt-ping.c:211:rpc_clnt_ping_cbk] 0-test-halo-client-1: Ping latency is 0ms [2018-02-05 11:42:48.282502] D [MSGID: 0] [afr-common.c:5025:afr_get_halo_latency] 0-test-halo-replicate-0: Using halo latency 10 [2018-02-05 11:42:48.282525] D [MSGID: 0]
2016 Sep 06
2
No increased throughput with SMB Multichannel and two NICs
On 2016-09-06 10:41, Anoop C S via samba wrote: > On Sun, 2016-09-04 at 11:42 +0200, Daniel Vogelbacher via samba wrote: >> Hello, >> >> I'm running Samba 4.4.5 with enabled SMB Multichannel. The Linux >> server >> has two 1GBit/s NICs and for testing purposes I've shared a tmpfs >> mountpoint with 2GiB and ~2GiB large test-file. >> >>
2011 Mar 10
1
Dovecot + Kerberos
Hi All. I have a problem with authorizing AD users via Kerberos in Dovecot & Postfix. Windows SRV 2008 Standard - AD mail server: Gentoo + cyrus-sasl + postfix + dovecot with support for LDAP & Kerberos. I created 4 keytabs on the Windows box. C:\Users\Admin>ktpass -princ host/srv-mail.cn.energy at CN.ENERGY -mapuser ldapmail at CN.ENERGY -pass "superpasswd" -crypto RC4-HMAC-NT
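
A hedged sketch for verifying the keytab on the Gentoo mail server with the MIT Kerberos tools; the keytab path is an assumption and the principal is taken from the ktpass command above:

  # list the principals stored in the keytab
  klist -kt /etc/krb5.keytab
  # check that the host key really authenticates against AD
  kinit -k -t /etc/krb5.keytab host/srv-mail.cn.energy@CN.ENERGY
  klist
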
2016 Sep 06
0
No increased throughput with SMB Multichannel and two NICs
On Sun, 2016-09-04 at 11:42 +0200, Daniel Vogelbacher via samba wrote: > Hello, > > I'm running Samba 4.4.5 with enabled SMB Multichannel. The Linux > server > has two 1GBit/s NICs and for testing purposes I've shared a tmpfs > mountpoint with 2GiB and ~2GiB large test-file. > > My Windows 10 host has one dual-port 1GBit/s NIC, and if both > interfaces > are
2016 Sep 06
0
No increased throughput with SMB Multichannel and two NICs
On Tue, Sep 06, 2016 at 03:56:14PM +0200, Daniel Vogelbacher via samba wrote: > > On 2016-09-06 10:41, Anoop C S via samba wrote: > >On Sun, 2016-09-04 at 11:42 +0200, Daniel Vogelbacher via samba wrote: > >>Hello, > >> > >>I'm running Samba 4.4.5 with enabled SMB Multichannel. The Linux > >>server > >>has two 1GBit/s NICs and for
2006 Jun 01
13
Not understanding network setup!!
Hi to all, +-------+ eth1 +-------+ | |==========| | 'network 1' ----| A | | B |---- 'network 2' | |==========| | +-------+ eth2 +-------+ A and B are routers # tc qdisc add dev eth1 root teql0 # tc qdisc add dev eth2 root teql0 # ip link set
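
A hedged sketch completing the teql setup the excerpt starts, following the LARTC recipe; the teql0 address is a placeholder and the same configuration is needed on router B as well:

  # load the trivial link equalizer and attach both links to teql0
  modprobe sch_teql
  tc qdisc add dev eth1 root teql0
  tc qdisc add dev eth2 root teql0
  # bring the bonded device up and address it (placeholder address)
  ip link set dev teql0 up
  ip addr add 10.0.1.1/24 dev teql0
  # reverse-path filtering usually has to be relaxed on the member links
  echo 0 > /proc/sys/net/ipv4/conf/eth1/rp_filter
  echo 0 > /proc/sys/net/ipv4/conf/eth2/rp_filter
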
2016 Sep 06
2
No increased throughput with SMB Multichannel and two NICs
On 06.09.2016 19:39, Jeremy Allison via samba wrote: > On Tue, Sep 06, 2016 at 03:56:14PM +0200, Daniel Vogelbacher via samba wrote: >> >> On 2016-09-06 10:41, Anoop C S via samba wrote: >>> On Sun, 2016-09-04 at 11:42 +0200, Daniel Vogelbacher via samba wrote: >>>> Hello, >>>> >>>> I'm running Samba 4.4.5 with enabled SMB
2006 Mar 29
1
How to define class type hierarchy of speeds?
Hi I''m very very new to tc iproute etc and have read the LARTC howto. What I want to do is create some "master" classes of bandwidth limit and below that per ip address which "inherits" from this master class. Example: one queue for 128Kbps other queue for 256Kbps What I want now is that for example in "class" 128Kbps the ip 10.0.0.5, 10.0.0.8 etc. goes