Displaying 20 results from an estimated 22 matches for "500mbps".
2005 Mar 20
4
I/O descriptor ring size bottleneck?
Hi everyone,
I'm doing some networking experiments over high BDP topologies. Right
now the configuration is quite simple -- two Xen boxes connected via a
dummynet router. The dummynet router is set to limit bandwidth to
500Mbps and simulate an RTT of 80ms.
I'm using the following sysctl values:
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 65536 4194304
net.core.rmem_max = 8388608
net.core.wmem_max = 8388608
net.ipv4.tcp_bic = 0
(tcp westwood and vegas are also turned off for...
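For context, the bandwidth-delay product of that link is 500 Mbit/s x 80 ms = 40 Mbit, or roughly 5 MB, which is above the 4 MB tcp_rmem/tcp_wmem ceilings shown above. A minimal sketch of raising those ceilings (the 16 MB values are illustrative, not from the thread):
# BDP ~ 5 MB; let the autotuned socket buffers grow well past it, e.g. to 16 MB
sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216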
2008 Jan 06
1
DRBD NFS load issues
...n rsync drives the NFS box through the roof and forces a failover.
I can do my backup using --bwlimit=1500, but then I'm not anywhere close
to a fast backup, just 1.5MBps. My backups are probably 40G. (The
database has fast disks and between database copies I see rates of up to
60MBps - close to 500Mbps). I obviously do not have a networking issue.
The processor loads up like this:
bwlimit 1500 load 2.3
bwlimit 2500 load 3.5
bwlimit 4500 load 5.5+
The DRBD secondary seems to run at about 1/2 the load of the primary.
What I'm wondering is--why is this thing *so* load sensi...
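For reference, rsync's --bwlimit is given in KB/s, so --bwlimit=1500 caps the transfer at roughly 1.5 MB/s, matching the figure above. A minimal sketch of the kind of throttled run being described (paths and host are placeholders, not from the post):
# cap rsync at ~1.5 MB/s to keep the load on the DRBD/NFS box down
rsync -a --bwlimit=1500 /srv/data/ backuphost:/srv/backup/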
2015 Jan 08
2
Intel NUC? Any experience
...lly rocks.
>
>I'm using one with pfSense (a FreeBSD-based firewall distribution)
>and it's very slick, routing tons of connections (bittorrent) to
>my 30Mbps internet, it uses only 3-5% of its CPU. I've been told it
>can handle AES IPsec VPNs up to about 100Mbps, and 400-500Mbps of
>simple NAT routing.
>
>--
>john r pierce 37N 122W
>somewhere on the middle of the left coast
John
Thanks for your comments. In the particular application, I used the
word "server" only in the sense that GUI is only rarely used, and...
2015 Jan 08
5
Intel NUC? Any experience
Folks
The price point of Intel's NUC unit makes it attractive to use as a
server that doesn't have significant computational load. In my
environment, a USB-connected hard drive could provide all the storage
needed. I wonder if anyone has had experience with it, and can answer:
1) Does CentOS 6 and/or CentOS 7 install from a USB-connected optical
drive? or a USB flash drive? I'd
2015 Jan 08
0
Intel NUC? Any experience
...num case, and basically rocks.
I'm using one with pfSense (a FreeBSD-based firewall distribution) and
it's very slick, routing tons of connections (bittorrent) to my 30Mbps
internet, it uses only 3-5% of its CPU. I've been told it can handle AES
IPsec VPNs up to about 100Mbps, and 400-500Mbps of simple NAT routing.
--
john r pierce 37N 122W
somewhere on the middle of the left coast
2006 Jun 21
1
Linux Qos : PRIO qdisc works
...ou test PRIO qdisc with TCP having high priority and UDP having low priority?
My configuration is below. Is there something wrong?
Second Question:
When I test with one TCP stream having high priority and one TCP stream having
low priority, the throughput of each TCP stream is about 500Mbps.
When I increase the number of TCP streams, the TCP streams having high priority
get more bandwidth than the TCP streams with low priority (see below, B. TCP vs TCP).
Is this correct?
Thanks in advance,
Sincerely,
Sangho Lee
--------------------------------------------------------------------------------...
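For reference, a minimal PRIO sketch along the lines being discussed (the interface name and band mapping are assumptions, not the poster's elided configuration):
# three-band PRIO qdisc; band 1:1 is always serviced before band 1:3
tc qdisc add dev eth0 root handle 1: prio
# send TCP (IP protocol 6) to the highest-priority band
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip protocol 6 0xff flowid 1:1
# send UDP (IP protocol 17) to the lowest-priority band
tc filter add dev eth0 parent 1: protocol ip prio 2 u32 match ip protocol 17 0xff flowid 1:3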
2017 Sep 06
2
Slow performance of gluster volume
...rformance.readdir-ahead: on
>> nfs.disable: on
>> nfs.export-volumes: on
>>
>>
>> I observed that when testing with dd if=/dev/zero of=testfile bs=1G
>> count=1 I get 65MB/s on the vms gluster volume (and the network traffic
>> between the servers reaches ~ 500Mbps), while when testing with dd
>> if=/dev/zero of=testfile bs=1G count=1 oflag=direct I get a consistent
>> 10MB/s and the network traffic hardly reaching 100Mbps.
>>
>> Any other things one can do?
>>
>> On Tue, Sep 5, 2017 at 5:57 AM, Krutika Dhananjay <kdha...
2017 Sep 05
3
Slow performance of gluster volume
...performance.quick-read: off
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
nfs.export-volumes: on
I observed that when testing with dd if=/dev/zero of=testfile bs=1G count=1
I get 65MB/s on the vms gluster volume (and the network traffic between the
servers reaches ~ 500Mbps), while when testing with dd if=/dev/zero
of=testfile bs=1G count=1 oflag=direct I get a consistent 10MB/s and the
network traffic hardly reaching 100Mbps.
Any other things one can do?
On Tue, Sep 5, 2017 at 5:57 AM, Krutika Dhananjay <kdhananj at redhat.com>
wrote:
> I'm assuming...
2016 Feb 06
1
Why is my rsync transfer slow?
On 2016-01-26 at 09:06, Simon Hobson wrote:
>>> >>The other option is
>>> >>
>>> >>HD <--FW800--> Computer <--USB2 or Ethernet 1000Mbit --> NAS
>> >
>> >If you use a network connection then you've still got that network layer.
> Just thinking a bit more about that ...
>
> Is your normal setup :
>
2017 Sep 06
0
Slow performance of gluster volume
...>>> nfs.disable: on
>>> nfs.export-volumes: on
>>>
>>>
>>> I observed that when testing with dd if=/dev/zero of=testfile bs=1G
>>> count=1 I get 65MB/s on the vms gluster volume (and the network traffic
>>> between the servers reaches ~ 500Mbps), while when testing with dd
>>> if=/dev/zero of=testfile bs=1G count=1 oflag=direct I get a
>>> consistent 10MB/s and the network traffic hardly reaching 100Mbps.
>>>
>>> Any other things one can do?
>>>
>>> On Tue, Sep 5, 2017 at 5:57 AM, Kr...
2017 Sep 06
2
Slow performance of gluster volume
...: on
>>>> nfs.export-volumes: on
>>>>
>>>>
>>>> I observed that when testing with dd if=/dev/zero of=testfile bs=1G
>>>> count=1 I get 65MB/s on the vms gluster volume (and the network traffic
>>>> between the servers reaches ~ 500Mbps), while when testing with dd
>>>> if=/dev/zero of=testfile bs=1G count=1 oflag=direct I get a
>>>> consistent 10MB/s and the network traffic hardly reaching 100Mbps.
>>>>
>>>> Any other things one can do?
>>>>
>>>> On Tue, Se...
2017 Sep 10
2
Slow performance of gluster volume
...dress-family: inet
> performance.readdir-ahead: on
> nfs.disable: on
> nfs.export-volumes: on
>
>
> I observed that when testing with dd if=/dev/zero of=testfile bs=1G count=1 I
> get 65MB/s on the vms gluster volume (and the network traffic between the
> servers reaches ~ 500Mbps), while when testing with dd if=/dev/zero
> of=testfile bs=1G count=1 oflag=direct I get a consistent 10MB/s and the
> network traffic hardly reaching 100Mbps.
>
> Any other things one can do?
>
> On Tue, Sep 5, 2017 at 5:57 AM, Krutika Dhananjay < kdhananj at redhat.com >...
2017 Sep 08
0
Slow performance of gluster volume
...: on
>>>> nfs.export-volumes: on
>>>>
>>>>
>>>> I observed that when testing with dd if=/dev/zero of=testfile bs=1G
>>>> count=1 I get 65MB/s on the vms gluster volume (and the network traffic
>>>> between the servers reaches ~ 500Mbps), while when testing with dd
>>>> if=/dev/zero of=testfile bs=1G count=1 oflag=direct I get a
>>>> consistent 10MB/s and the network traffic hardly reaching 100Mbps.
>>>>
>>>> Any other things one can do?
>>>>
>>>> On Tue, Se...
2017 Sep 11
2
Slow performance of gluster volume
...addir-ahead: on
> > nfs.disable: on
> > nfs.export-volumes: on
> >
> >
> > I observed that when testing with dd if=/dev/zero of=testfile bs=1G
> count=1 I
> > get 65MB/s on the vms gluster volume (and the network traffic between the
> > servers reaches ~ 500Mbps), while when testing with dd if=/dev/zero
> > of=testfile bs=1G count=1 oflag=direct I get a consistent 10MB/s and the
> > network traffic hardly reaching 100Mbps.
> >
> > Any other things one can do?
> >
> > On Tue, Sep 5, 2017 at 5:57 AM, Krutika Dhananjay <...
2017 Sep 11
0
Slow performance of gluster volume
...address-family: inet
> performance.readdir-ahead: on
> nfs.disable: on
> nfs.export-volumes: on
>
>
> I observed that when testing with dd if=/dev/zero of=testfile bs=1G
count=1 I
> get 65MB/s on the vms gluster volume (and the network traffic between the
> servers reaches ~ 500Mbps), while when testing with dd if=/dev/zero
> of=testfile bs=1G count=1 oflag=direct I get a consistent 10MB/s and the
> network traffic hardly reaching 100Mbps.
>
> Any other things one can do?
>
> On Tue, Sep 5, 2017 at 5:57 AM, Krutika Dhananjay < kdhananj at redhat.com >
&...
2017 Sep 11
0
Slow performance of gluster volume
...n
> > > nfs.export-volumes: on
> > >
> > >
> > > I observed that when testing with dd if=/dev/zero of=testfile bs=1G
> > count=1 I
> > > get 65MB/s on the vms gluster volume (and the network traffic between
> the
> > > servers reaches ~ 500Mbps), while when testing with dd if=/dev/zero
> > > of=testfile bs=1G count=1 oflag=direct I get a consistent 10MB/s and
> the
> > > network traffic hardly reaching 100Mbps.
> > >
> > > Any other things one can do?
> > >
> > > On Tue, Sep 5, 2017...
2017 Sep 05
0
Slow performance of gluster volume
...address-family: inet
> performance.readdir-ahead: on
> nfs.disable: on
> nfs.export-volumes: on
>
>
> I observed that when testing with dd if=/dev/zero of=testfile bs=1G
> count=1 I get 65MB/s on the vms gluster volume (and the network traffic
> between the servers reaches ~ 500Mbps), while when testing with dd
> if=/dev/zero of=testfile bs=1G count=1 oflag=direct I get a consistent
> 10MB/s and the network traffic hardly reaching 100Mbps.
>
> Any other things one can do?
>
> On Tue, Sep 5, 2017 at 5:57 AM, Krutika Dhananjay <kdhananj at redhat.com>
&...
2017 Sep 05
0
Slow performance of gluster volume
I'm assuming you are using this volume to store vm images, because I see
shard in the options list.
Speaking from shard translator's POV, one thing you can do to improve
performance is to use preallocated images.
This will at least eliminate the need for shard to perform multiple steps
as part of the writes - such as creating the shard and then writing to it
and then updating the
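One way to preallocate an image as suggested above, as a hedged sketch (path, size and preallocation mode are placeholders, not from the thread):
# create a preallocated qcow2 image instead of a sparse one
qemu-img create -f qcow2 -o preallocation=falloc /rhev/data/vm-disk.qcow2 100G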
2017 Sep 04
2
Slow performance of gluster volume
Hi all,
I have a gluster volume used to host several VMs (managed through oVirt).
The volume is a replica 3 with arbiter and the 3 servers use 1 Gbit network
for the storage.
When testing with dd (dd if=/dev/zero of=testfile bs=1G count=1
oflag=direct) out of the volume (e.g. writing at /root/) the performance of
the dd is reported to be ~ 700MB/s, which is quite decent. When testing the
dd on
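As a side note, a buffered dd with no flush mostly measures the page cache; a sketch of a fairer comparison (the mount point is a placeholder):
# buffered write, but force a flush so the figure reflects storage, not cache
dd if=/dev/zero of=/mnt/vms/testfile bs=1G count=1 conv=fsync
# direct I/O, bypassing the page cache entirely
dd if=/dev/zero of=/mnt/vms/testfile bs=1G count=1 oflag=direct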
2010 Jan 28
13
ZFS configuration suggestion with 24 drives
Replacing my current media server with another larger-capacity media server. Also switching over to Solaris/ZFS.
Anyhow we have 24 drive capacity. These are for large sequential access (large media files) used by no more than 3 or 5 users at a time. I'm inquiring as to what the best configuration for this is for vdevs. I'm considering the following configurations
4 x x6
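By way of illustration only (the raidz level and device names are assumptions, not from the post), a 4 x 6-disk raidz2 layout for 24 drives would look roughly like:
# four 6-disk raidz2 vdevs striped into one pool
zpool create tank \
  raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 \
  raidz2 c0t6d0 c0t7d0 c1t0d0 c1t1d0 c1t2d0 c1t3d0 \
  raidz2 c1t4d0 c1t5d0 c1t6d0 c1t7d0 c2t0d0 c2t1d0 \
  raidz2 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0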