similar to: Slow Writes on GlusterFS Backed Xen DomU's

Displaying 20 results from an estimated 10000 matches similar to: "Slow Writes on GlusterFS Backed Xen DomU's"

2005 Aug 03
0
networking is slow/fails for more than one domU
Up to now I've been using only one domU and networking has been fine. Now I'm trying to use several domUs simultaneously (up to 14) and having many networking problems. In particular I'd like to fire off commands on all the domUs at the same time (or at least within a couple seconds). Even something as simple as: ssh dom1 hostname & ssh dom2 hostname &
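A minimal sketch of the fan-out the poster describes, assuming passwordless SSH and domUs named dom1 through dom14 (the names are placeholders):

    #!/bin/sh
    # Fire the same command at every domU near-simultaneously,
    # then wait for all of them to finish.
    for i in $(seq 1 14); do
        ssh "dom$i" hostname &
    done
    wait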
2008 Apr 22
0
slow traffic over bridged interface dom0/domU
Hi, I am using Xen 3.1.3 on a Celeron, a NetBSD 4.0 dom0, and NetBSD 4.0 and Slackware 11 (with a 2.6.18.8-xen kernel) domU's, and a bridge to communicate between those. I noticed that traffic between the NetBSD dom0 and domU (both ways) is rather slow (4Mb/s max), traffic from the Linux domU to dom0 is about 18Mb/s, and traffic from dom0 to the Linux domU is rather slow again (4Mb/s max). Any
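For slow bridged traffic with 2.6.18-xen guests, a common first check of that era was TX checksum offload on the guest's virtual NIC; a hedged sketch for the Linux domU side (the interface name is an assumption):

    # Disable TX checksum offload on the assumed eth0, then re-test;
    # broken offload was a frequent cause of slow bridged Xen traffic.
    ethtool -K eth0 tx off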
2009 Sep 16
4
Slow network
Hi, I have a problem with a CentOS 5.3 computer. The networking is very slow. The network card is a RealTek 1GigE. I plugged an Ubuntu laptop with an Intel GigE into the same cable and port, and it's about 8 to 10 times faster. So the problem is likely to be with the CentOS config. Any suggestions on how to track down the speed problem? I have checked the packets with Wireshark, no
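One way to separate a host problem from a cable/switch problem is to measure raw TCP throughput from both machines against the same server; the hostname here is a placeholder:

    # On some third test box:
    iperf -s
    # On the CentOS machine, then on the Ubuntu laptop, same cable and port:
    iperf -c testserver -t 30
    # Also confirm the negotiated speed and duplex on the CentOS side:
    ethtool eth0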
2007 Mar 18
1
Samba / NFS performance
I have the following network configuration. Server: FreeBSD 6.2, P4 3GHz, 1GB RAM, Samba 3.0.24 (options: WITH_ADS, WITH_PAM, WITH_SENDFILE, WITH_UTMP, WITH_WINBIND), standard FreeBSD NFS server, Adaptec 2410SA controller with 4 drives running RAID5, Broadcom GigE. Client: Windows XP MCE, Microsoft SFU 3.5 running the NFS client over UDP. The client and the server are
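Since the client mounts over UDP, one obvious experiment is TCP with larger transfer sizes. SFU has its own mount syntax, so as an illustration here is the equivalent on a Linux NFS client (host and paths are placeholders):

    # TCP plus 32k read/write sizes often outperforms NFS-over-UDP defaults:
    mount -t nfs -o tcp,rsize=32768,wsize=32768 server:/export /mnt/export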
2015 Jan 08
0
Intel NUC? Any experience
On 1/8/2015 11:32 AM, david wrote: > The price point of Intel's NUC unit makes it attractive to use as a > server that doesn't have significant computational load. In my > environment, a USB connected hard-drive could provide all the storage > needed. I wonder if anyone has had experience with it, and can answer: IMHO, it's totally unsuitable as a server, there are
2016 Oct 11
0
gigE -> 100Mb problems
On 10/10/2016 5:33 PM, John R Pierce wrote: > I've got a pair of identical CentOS 6.7 servers, with SuperMicro > X8DTE-F motherboards; these each have two Intel 82574L Ethernet ports. > The eth0 ports are plugged in with 10' runs of brand-new Cat 5e cable > to a Cisco Nexus 9000 switch (provided by the data center). > > These servers keep coming up at 100baseT rather
2016 Oct 12
0
gigE -> 100Mb problems
Hi, On Tue, Oct 11, 2016 at 6:03 AM, John R Pierce <pierce at hogranch.com> wrote: > I've got a pair of identical CentOS 6.7 servers, with SuperMicro X8DTE-F > motherboards; these each have two Intel 82574L Ethernet ports. The eth0 > ports are plugged in with 10' runs of brand-new Cat 5e cable to a Cisco > Nexus 9000 switch (provided by the data center). > >
2005 Feb 25
2
samba 3 performance
Yes, I get more than 30MB/s performance. The benchmark I use (NetBench) is essentially CPU-bound, such that a faster processor = faster performance. With a very fast hardware config (dual 3.2GHz processors), I've been able to hit around 100MB/s. Changing the RAM or other attributes does not buy me much; it seems that processor power is the bottleneck (at least in my case). When doing your
2016 Oct 11
5
gigE -> 100Mb problems
I've got a pair of identical CentOS 6.7 servers with SuperMicro X8DTE-F motherboards; these each have two Intel 82574L Ethernet ports. The eth0 ports are plugged in with 10' runs of brand-new Cat 5e cable to a Cisco Nexus 9000 switch (provided by the data center). These servers keep coming up at 100baseT rather than gigE. I've swapped ports and cables with a different server,
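When a port keeps negotiating down like this, ethtool shows what each side advertised and can force a retry; the interface name is an assumption:

    # See the negotiated speed/duplex and both sides' advertised modes:
    ethtool eth0
    # Restart autonegotiation:
    ethtool -r eth0
    # As a test, advertise 1000/full only (gigabit requires autoneg,
    # so leave it on):
    ethtool -s eth0 speed 1000 duplex full autoneg on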
2006 Jan 23
1
OFF TOPIC: Core router upgrade for a voip colocation center
Hello, hope this isn't too far off-topic here, but having lurked here for a long time I've realized there is a great knowledge base, so I wanted to at least see if I could get some tips. I help run a small colocation company in California and I am in the middle of recommending a new 'core router' platform for our network. We offer mainly colo and dedicated servers, and several of
2004 Jan 06
1
Traffic going to wrong interface?
I have a Samba server with 2 Ethernet ports, one of which is a gigabit port. When connecting from a Windows client that has a crossover cable to the gigabit port and a crossover cable to the 100Meg port: if I connect via \\gige.ethernet.address\foo and copy a large file, Windows reports outbound traffic on the GigE port and return traffic on the 100Meg port. Thus, it seems the Samba server sees the
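If the server is Linux, this pattern is often ARP flux: by default the kernel answers ARP for any local address on any interface, so the client's return traffic can land on the 100Meg NIC. A sketch of the usual sysctl fix, assuming a Linux server:

    # Make each interface answer ARP only for its own addresses:
    sysctl -w net.ipv4.conf.all.arp_filter=1
    # Persist across reboots:
    echo "net.ipv4.conf.all.arp_filter = 1" >> /etc/sysctl.conf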
2015 Jan 08
2
Intel NUC? Any experience
At 01:54 PM 1/8/2015, John R Pierce wrote: >On 1/8/2015 11:32 AM, david wrote: >>The price point of Intel's NUC unit makes it attractive to use as a >>server that doesn't have significant computational load. In my >>environment, a USB connected hard-drive could provide all the >>storage needed. I wonder if anyone has had experience with it, and can answer:
2013 May 12
0
Glusterfs with Infiniband tips
Hello guys, I was wondering if someone could share their GlusterFS volume and system settings if you are running GlusterFS with InfiniBand networking. In particular I am interested in using GlusterFS + InfiniBand + KVM for virtualisation. However, any other implementation would also be useful to me. I've tried various versions of GlusterFS (3.2, 3.3 and 3.4beta) over the past
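For reference, GlusterFS 3.x can put a volume on RDMA transport at creation time; a minimal sketch, with the hostnames, brick paths and volume name all placeholders (RDMA support in the 3.x series was uneven, so treat this as a starting point):

    # Replicated two-node volume carried over RDMA instead of TCP:
    gluster volume create vmstore replica 2 transport rdma \
        server1:/export/brick1 server2:/export/brick1
    gluster volume start vmstore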
2008 Feb 07
2
Lustre behaviour when multiple network paths are available?
Hi there, When Lustre is configured in an environment where there are multiple paths of the same length to the same destination (i.e. two paths, each one hop away), which path(s) will be used for sending and receiving data? I have my cluster configured with two OSTs, each with two GigE NICs. I am seeing identical performance metrics when I use LACP to aggregate and when I use two separate
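As a point of comparison to LACP, Lustre's LNET layer can be told about both NICs explicitly so that each becomes its own network; a sketch of the module option, assuming Linux nodes and interfaces named eth0/eth1:

    # /etc/modprobe.d/lustre.conf (modprobe.conf on older distributions):
    # expose each GigE port as a separate LNET network
    options lnet networks=tcp0(eth0),tcp1(eth1)

Whether traffic then actually balances across the two networks depends on how clients and servers are assigned to them.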
2003 Jul 28
1
Win2k - samba-2.2.8a - can't get above 4MB/sec
I'm trying to maximize throughput between my Samba server and my Win2k (SP3 and SP4) clients. Samba is configured to allow up to 64k packets and the server is on a GigE network (3Com card, acenic driver) with SCSI UW2 drives (software RAID1 and RAID5 volumes). All clients are 100Mb/sec FDX on a switched LAN. I can get sustained 35MB/sec transfers (read and write) on the disk subsystem
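The era-typical first tweak for this class of Samba problem was the socket options and maximum packet size in smb.conf; the buffer sizes below are illustrative values, not tuned ones:

    # smb.conf [global] excerpt
    socket options = TCP_NODELAY SO_RCVBUF=65536 SO_SNDBUF=65536
    max xmit = 65535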
2005 Dec 25
2
OT: SUSE 9.3 and NICs
Folks, I realize this is off topic, and if anyone can suggest a better source for the question, I'd be glad to go there. Novell SUSE's support is unresponsive, however. My problem is this: I'm running 9.3 Pro on an Intel server board that has two NIC chips built in (a 10/100 and a GigE). I've since added a Netgear GigE NIC. However, every time I reboot, the NICs assigned to
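A generic way to pin NIC names to hardware is a udev rule keyed on MAC address; whether the udev shipped with SUSE 9.3 is new enough for this is not something I can confirm, and the MAC below is a placeholder:

    # /etc/udev/rules.d/10-network.rules -- udev of that era matched with
    # SYSFS{address}; later versions spell it ATTR{address}:
    KERNEL=="eth*", SYSFS{address}=="00:11:22:33:44:55", NAME="eth0"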
2004 Oct 21
1
Throughput to a single client
I have a Linux server (3.06GHz Xeon) with a very fast RAID array -- it reads at around 500MB/sec as clocked by Bonnie++. I have 6 GigE NICs on my machine, on two 133MHz PCI-X bus segments (not on the same bus as the RAID drives). I have noticed two puzzling things and I'm wondering if anybody has any ideas about why I'm seeing these: 1) transfer speeds over a single NIC from a single
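To tell whether such a ceiling is per-stream or per-NIC, compare one TCP stream against several in parallel over the same interface; the hostname is a placeholder:

    # Single stream:
    iperf -c fileserver -t 30
    # Four parallel streams over the same NIC:
    iperf -c fileserver -t 30 -P 4

If four streams together go much faster than one, the limit is per-connection (TCP windowing, a single-threaded sender) rather than the NIC or the bus.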
2004 Jan 16
1
Any (known) scaling issues?
I'm considering using rsync in our data center but I'm worried about whether it will scale to the numbers and sizes we deal with. We would be moving up to a terabyte in a typical sync, consisting of about a million files. Our data mover machines are RedHat Linux Advanced Server 2.1 and all the sources and destinations are NFS mounts. The data is stored on big NFS file servers. The
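At this scale the common mitigation was to break the single million-file run into one rsync per subtree, which bounds the file list each process must build; the paths are placeholders, and --whole-file disables the delta algorithm, which is usually a win when both ends are NFS mounts:

    #!/bin/sh
    # One rsync per top-level subtree instead of one million-file run.
    for d in /data/src/*/; do
        rsync -a --whole-file "$d" "/data/dst/$(basename "$d")/"
    done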
2008 Mar 29
0
Migration Problem
Hi all! I've been experimenting with Xen on CentOS 5 and RHAS 5 for a while now with mixed success, so I thought I'd describe my latest challenge. I'll describe this from memory since all the equipment is at work and not contactable from here. I think I've described this config to the list before, but here it is again: I have 2 x HP DL585 servers, each with 4 dual-core Opterons
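For reference, live migration on CentOS 5 Xen needs the relocation server enabled on the destination before xm migrate will work; the host and domain names below are placeholders:

    # /etc/xen/xend-config.sxp on the destination host:
    (xend-relocation-server yes)
    (xend-relocation-port 8002)
    (xend-relocation-hosts-allow '^sourcehost$')
    # restart xend, then from the source host:
    xm migrate --live mydomu desthost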
2004 Feb 02
2
rsync 2.6.0 causing incredible load on Linux 2.4.x?
Hi everyone. Has anyone experienced rsync 2.6.0 causing huge amounts of system load, especially on Linux 2.4? We recently upgraded our "push" machine to rsync 2.6.0, and the next push that went out (rsyncing about 3GB of data to 15 servers sequentially over gigabit Ethernet) caused the box to hit a load average of 110.59. We only know the load because snmpd was still working, but nothing else in userspace
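Until the cause is understood, the blunt workaround is to throttle the transfer itself; rsync's --bwlimit takes KB/s, and the value and paths below are illustrative:

    # Cap each push at roughly 20MB/s so one transfer cannot bury the box:
    rsync -a --bwlimit=20000 /push/data/ server1:/data/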