search for: 55mb

Displaying 20 results from an estimated 28 matches for "55mb".

2011 Mar 07
2
connection speeds between nodes
...). Now we've estimated that the average file sent to each node will be about 90MB, so that's what I'd like the average connection to handle; I know that gigabit Ethernet should be able to do that (testing with iperf confirms it), but testing the speed to already existing NFS shares gives me a 55MB/s max. As I'm not familiar with network share performance tweaking, I was wondering if anybody here is and could give me some info on this? Also, I thought of giving all the nodes 2x1Gb Ethernet ports and putting those in a bond; will this do any good, or do I have to take a look at the NFS server...
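A quick way to separate raw network capacity from NFS overhead is to measure both, then retry a bulk copy with larger NFS transfer sizes. A minimal sketch, assuming the server is reachable as nfsserver and exports /export (both hypothetical names):
--
# raw TCP throughput between node and NFS server (run "iperf -s" on the server first)
iperf -c nfsserver -t 30

# mount the share with larger read/write sizes and retest a bulk write
mount -o rsize=32768,wsize=32768 nfsserver:/export /mnt/share
dd if=/dev/zero of=/mnt/share/testfile bs=1M count=1024
--
One caveat on bonding: the common modes hash traffic per connection, so a single NFS stream will still be limited to one 1Gb link; bonding mainly helps aggregate throughput across many clients.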
2009 Sep 24
1
xen & iSCSI
...0 and if I use them from dom0 I obtain good performance with simple dd tests (~100MB/s both reading and writing). I then use the block devices for the domU, and if I repeat the same dd test from within the domU the write performance is still good (~100MB/s), but the read performance is cut in half (~55MB/s). I tried changing several parameters like read_ahead and such, but I cannot obtain good read performance in the domU. Any idea? Thanks, Daniel.
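To check whether the guest's read-ahead window is the limiter, it can help to raise it inside the domU and rerun the sequential read with the page cache bypassed. A hedged sketch, assuming the block device appears as /dev/xvdb in the domU (hypothetical name):
--
# check and raise the read-ahead window (in 512-byte sectors) inside the domU
blockdev --getra /dev/xvdb
blockdev --setra 8192 /dev/xvdb

# repeat the sequential read test, bypassing the guest page cache
dd if=/dev/xvdb of=/dev/null bs=1M count=1024 iflag=direct
--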
2012 Oct 16
0
Free space cache writeback issue
Hi, I've hit an issue with the free space cache. It looks like we miss writing everything to disk on unmount under rough conditions. Setup: git head cmason/master, /dev/sdv1 is a 55MB partition on an SSD. Run the following script:
--
DEV=/dev/sdv1
MP=/mnt/scratch
umount $MP
mkfs.btrfs -M $DEV
mount -o inode_cache $DEV $MP
cat /dev/urandom | head -c 654321 > $MP/1
mkdir $MP/2
mv $MP/1 $MP/2/1
btrfs subvol snap $MP $MP/@1
rm $MP/2/1
umount $MP
mount -o inode_cache $DEV $MP...
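For what it's worth, btrfs can be told to discard the on-disk free space cache and rebuild it on the next mount, which is one way to confirm whether a stale cache is the culprit. A minimal sketch, reusing the $DEV/$MP variables from the script above:
--
# throw away the persisted free space cache and let btrfs regenerate it
umount $MP
mount -o clear_cache,inode_cache $DEV $MP
--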
2009 Mar 02
1
slow throughput on 1gbit lan
...'GET 1GB' WARNING: The "write cache size" option is deprecated Domain=[EPW] OS=[Unix] Server=[Samba 3.3.1] getting file \1GB of size 1048576000 as 1GB (331177.2 kb/s) (average 331177.2 kb/s) Now, when I try to get this file from another server (10.0.0.5), the maximum transfer drops to 55MB/s... 10.0.0.5 # smbclient '\\10.0.0.2\test\' xxx -U xxx -c 'GET 1GB' Domain=[EPW] OS=[Unix] Server=[Samba 3.3.1] getting file \1GB of size 1048576000 as 1GB (55664.3 kb/s) (average 55664.3 kb/s) So far I have been turning these knobs: socket options = TCP_NODELAY SO_RCVBUF=262144 SO...
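When tuning that line, it is worth confirming what smbd actually parsed, since a typo in smb.conf fails silently. A sketch with testparm (part of standard Samba installs):
--
# verify what smbd actually parsed for the socket options line
testparm -s 2>/dev/null | grep -i "socket options"

# note: hard-coding SO_RCVBUF/SO_SNDBUF disables kernel TCP autotuning on
# recent kernels; testing with only TCP_NODELAY set is worth a try
--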
2015 Jun 29
2
Slow network performance issues
...ng fine under normal network activity, but trying to perform a full backup of the 300GB filesystem is taking forever because the network speed appears to be very slow. Using rsync, the transfer speeds reach a maximum of about 180kB/s. Using rsync to copy files on the local filesystem achieves more than 55MB/s, so I don't think it's a disk issue. What kind of network speed can I expect copying data across the network from the guest to another host on the same gigabit network? I'm using the virtio driver: # lsmod|grep virtio virtio_console 28672 0 virtio_balloon 16384...
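One way to rule the disk out entirely is to measure the raw guest-to-host TCP path with no filesystem involved. A hedged sketch, assuming iperf is installed on both ends and the backup target is reachable as backuphost (hypothetical name):
--
# on the backup host
iperf -s

# in the guest: raw TCP throughput, no disk I/O involved
iperf -c backuphost -t 30

# confirm the guest really sees a virtio NIC
lspci | grep -i virtio
--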
2002 Jul 25
0
Thanks!
I'd like to thank, as an end-user, all the people who have contributed to such a great program as ISOLINUX. For the FreeDOS operating system (an open-source MS-DOS clone, freedos.org) we've been putting together a 55MB distribution on CD-ROM, and your ISOLINUX project proves it's very easy to add it to existing CD-ROMs to make them (multi)bootable. Thanks to both your efforts and those of the FreeDOS people, the smallest multiboot CD-ROM is now 1.71MB big [http://members.home.nl/bblaauw/fdbootcd.iso] Th...
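For anyone wanting to reproduce this, the invocation documented for ISOLINUX at the time looked like the following; a sketch, assuming the CD tree lives in ./cdroot (hypothetical path) with isolinux.bin under isolinux/:
--
# build a bootable El Torito image from a directory tree containing ISOLINUX
mkisofs -o fdbootcd.iso \
        -b isolinux/isolinux.bin -c isolinux/boot.cat \
        -no-emul-boot -boot-load-size 4 -boot-info-table \
        ./cdroot
--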
2008 Jul 18
0
nfs high load issues
...stat reports nothing unusual - the CPU idle percentage hovers around the high 80's to 90's. I am not seeing any network errors or collisions when viewed via ifconfig. These servers are on a gigabit network, connected via a good 3Com switch. I am getting transfer rates averaging around 55MB/sec using scp, so I don't think I have a network/wiring issue. The NICs are Intel 82541GI NICs on a Supermicro serverboard X6DH8G. I am not currently bonding, although I want to if the bandwidth will actually increase. Both servers' filesystems are ext3, 9TB, running RAID 50 on a 3Wa...
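Since scp adds encryption overhead, ~55MB/s over scp does suggest the wire itself is fine; comparing NFS server statistics before and after a load spike can narrow things down further. A minimal sketch using standard nfs-utils tools:
--
# per-operation NFS server counters; look for badcalls and unusual op mixes
nfsstat -s

# client-side RPC counters; retransmissions point at the transport
nfsstat -c

# nfsd thread usage; the "th" line shows how often all threads were busy
cat /proc/net/rpc/nfsd
--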
2017 Jul 02
3
Re: virtual drive performance
...her test by copying a 3GB file on the guest. What I > can observe on my computer is that the copy process is not at a constant > rate but rather starts at 90MB/s, then drops down to 30MB/s, goes up to > 70MB/s, drops down to 1MB/s, goes up to 75MB/s, drops to 1MB/s, goes up to > 55MB/s, and the pattern continues. Please note that the drive is still > configured as: > > <driver name='qemu' type='qcow2' cache='none' io='threads'/> > > and I would expect a constant rate that is either high or low since there > is no caching in...
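One plausible contributor to such a sawtooth pattern is qcow2 cluster allocation happening during the copy; preallocating metadata when the image is created smooths this out. A hedged sketch with qemu-img (image name and size are hypothetical):
--
# create a qcow2 image with preallocated metadata, so clusters need no
# allocation work during large sequential writes
qemu-img create -f qcow2 -o preallocation=metadata guest.qcow2 40G
--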
2015 Jan 21
0
updated R-cairo bridge, official R-3.1.*-mavericks.pkg crippled, snpMatrix 1.19.0.20
....2-mavericks/R.framework/Versions/3.1/Resources/library/grDevices/libs/cairo.so" is so much smaller, not because it is built with a better and newer compiler, but because tiff functionality is missing. The official R-3.1.*-mavericks.pkg is about 14MB smaller than R-3.1.*-snowleopard.pkg (55MB-ish versus 70MB-ish). Looking carefully, the size difference did not come from better compilation - it is in 3 unrelated areas, one of them problematic: - the snowleopard builds come with the manuals in PDFs, the mavericks ones don't. That's about 14MB. - a few bundled gcc runtime libraries (libgcc/lib...
2015 Jun 29
0
Re: Slow network performance issues
...nning fine under normal network activity, but trying to perform a full backup of the 300GB filesystem is taking forever because the network speed appears to be very slow. Using rsync, the transfer speeds reach a maximum of about 180kB/s. Using rsync to copy files on the local filesystem achieves more than 55MB/s, so I don't think it's a disk issue. What kind of network speed can I expect copying data across the network from the guest to another host on the same gigabit network? I'm using the virtio driver: # lsmod|grep virtio virtio_console 28672 0 virtio_balloon 16384 0...
2017 Jul 02
2
Re: Reply: virtual drive performance
...her test by copying a 3GB file on the guest. What I > can observe on my computer is that the copy process is not at a constant > rate but rather starts at 90MB/s, then drops down to 30MB/s, goes up to > 70MB/s, drops down to 1MB/s, goes up to 75MB/s, drops to 1MB/s, goes up to > 55MB/s, and the pattern continues. Please note that the drive is still > configured as: > > <driver name='qemu' type='qcow2' cache='none' io='threads'/> > > and I would expect a constant rate that is either high or low since there > is no caching in...
2017 Jun 21
2
Re: virtual drive performance
On Tue, Jun 20, 2017 at 04:24:32PM +0200, Gianluca Cecchi wrote: > On Tue, Jun 20, 2017 at 3:38 PM, Dominik Psenner <dpsenner@gmail.com> wrote: > > > > > to the following: > > > > <disk type='file' device='disk'> > > <driver name='qemu' type='qcow2' cache='none'/> > > <source
2002 Feb 23
1
wish: postscript-device - LaTeX ec-fonts (PR#1322)
Full_Name: Christof Boeckler Version: 1.4.1 OS: linux 2.4, x86 Submission from: (NULL) (217.233.100.207) Hello R-gurus, I have a little contribution to the wishlist: Since I am using the more recent ec font family with LaTeX instead of the original cm family, I did not manage to use the new feature for using (TeX's) cm fonts for PS output. I got the following errors when trying to
2017 Jul 02
0
Re: virtual drive performance
...es. I just ran another test by copying a 3GB file on the guest. What I can observe on my computer is that the copy process is not at a constant rate but rather starts at 90MB/s, then drops down to 30MB/s, goes up to 70MB/s, drops down to 1MB/s, goes up to 75MB/s, drops to 1MB/s, goes up to 55MB/s, and the pattern continues. Please note that the drive is still configured as: <driver name='qemu' type='qcow2' cache='none' io='threads'/> and I would expect a constant rate that is either high or low since there is no caching involved and the underlying ha...
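To take the guest page cache out of the measurement and see the steady-state rate the virtual disk can actually sustain, a direct-I/O write is useful; the burst/stall pattern of cache flushes then no longer distorts the numbers. A sketch, assuming GNU dd inside the guest (test file path hypothetical):
--
# write 3GB with O_DIRECT to observe the sustained rate of the virtual disk
dd if=/dev/zero of=/tmp/testfile bs=1M count=3072 oflag=direct
--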
2005 Apr 20
1
IPC$ entries not deleted from connections.tdb?
Back to this problem. Here is proof of it: 1. smbd version 3.0.11 started yaberge2@sda6 ==> p smbd root 13820 20662 0 08:05:39 - 0:00 /usr/local/samba/sbin/smbd -D -s/usr/local/samba/lib/smb.conf root 20662 1 0 08:05:39 - 0:00 /usr/local/samba/sbin/smbd -D -s/usr/local/samba/lib/smb.conf yaberge2@sda6 ==> /usr/local/samba/bin/smbstatus Samba version 3.0.11
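One way to watch whether the IPC$ rows really linger after the client disconnects is to poll smbstatus and sanity-check the tdb itself. A hedged sketch, assuming the default tdb location for this prefix build (the path may differ per build):
--
# brief listing of connections as smbd currently sees them
/usr/local/samba/bin/smbstatus -b

# verify the connections database is internally consistent
/usr/local/samba/bin/tdbbackup -v /usr/local/samba/var/locks/connections.tdb
--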
2009 Jan 10
3
Poor RAID performance new Xeon server?
I have just purchased an HP ProLiant ML110 G5 server and installed CentOS 5.2 x86_64 on it. It has the following spec: Intel(R) Xeon(R) CPU 3065 @ 2.33GHz, 4GB ECC memory, 4 x 250GB SATA hard disks running at 1.5Gb/s. The onboard RAID controller is enabled, but at the moment I have used mdadm to configure the array. RAID bus controller: Intel Corporation 82801 SATA RAID Controller. For a simple
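For a baseline on such a box, it helps to measure a single member disk first and then the array, so any RAID overhead becomes visible. A minimal sketch, assuming one member is /dev/sda and the array is /dev/md0 (hypothetical names):
--
# sequential read speed of one member disk
hdparm -t /dev/sda

# sequential read of the md array, bypassing the page cache
dd if=/dev/md0 of=/dev/null bs=1M count=1024 iflag=direct

# array state plus rebuild/initialisation progress and speed
cat /proc/mdstat
--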
2017 Jul 02
0
Reply: virtual drive performance
...es. I just ran another test by copying a 3GB file on the guest. What I can observe on my computer is that the copy process is not at a constant rate but rather starts at 90MB/s, then drops down to 30MB/s, goes up to 70MB/s, drops down to 1MB/s, goes up to 75MB/s, drops to 1MB/s, goes up to 55MB/s, and the pattern continues. Please note that the drive is still configured as: <driver name='qemu' type='qcow2' cache='none' io='threads'/> and I would expect a constant rate that is either high or low since there is no caching involved and the underlying ha...
2017 Jul 07
0
Re: Reply: virtual drive performance
...ing a 3GB file on the guest. What I >> can observe on my computer is that the copy process is not at a constant >> rate but rather starts at 90MB/s, then drops down to 30MB/s, goes up to >> 70MB/s, drops down to 1MB/s, goes up to 75MB/s, drops to 1MB/s, goes up to >> 55MB/s, and the pattern continues. Please note that the drive is still >> configured as: >> >> <driver name='qemu' type='qcow2' cache='none' io='threads'/> >> >> and I would expect a constant rate that is either high or low since there...
2016 Nov 05
3
Avago (LSI) SAS-3 controller, poor performance on CentOS 7
...so I installed CentOS 6 on one of them and reloaded CentOS 7 on the other. Immediately after install, a difference is apparent in the RAID rebuild speed. The CentOS 6 system is initializing its software RAID5 array at somewhere around 120MB/s, while the CentOS 7 system is rebuilding at around 55MB/s. It took a while to get an older system installed, and I wasn't able to boot CentOS 6 under UEFI, so that's one difference between the systems. I also set the elevator on CentOS 7 to cfq, to match the CentOS 6 system. That didn't have any apparent effect. Other than that, the k...
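The md rebuild rate is throttled by two sysctls, and for RAID5 the stripe cache size also matters; comparing these across the two installs is a cheap first check. A sketch, assuming the array is md0 (hypothetical name):
--
# rebuild throttle in KB/s per device; raising the minimum forces a faster resync
sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max
sysctl -w dev.raid.speed_limit_min=100000

# RAID5/6 stripe cache, in pages; larger values often help rebuild throughput
cat /sys/block/md0/md/stripe_cache_size
echo 4096 > /sys/block/md0/md/stripe_cache_size
--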
2009 Jan 27
20
Xen SAN Questions
Hello Everyone, I recently had a question that got no responses about GFS+DRBD clusters for Xen VM storage, but after some consideration (and a lot of Googling) I have a couple of new questions. Basically what we have here are two servers that will each have a RAID-5 array filled up with 5 x 320GB SATA drives; I want to have these as usable file systems on both servers (as they will both be
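For the dual-primary setup that GFS on DRBD requires, DRBD must be explicitly allowed to promote both nodes. A hedged sketch of bringing such a resource up, assuming it is already defined as r0 in drbd.conf with allow-two-primaries enabled (resource name hypothetical; DRBD 8.x syntax):
--
# initialise metadata and bring the resource up (run on both nodes)
drbdadm create-md r0
drbdadm up r0

# on ONE node only, to seed the first full sync
drbdadm -- --overwrite-data-of-peer primary r0

# once both nodes report UpToDate, promote the second node as well
drbdadm primary r0
--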