Displaying 9 results from an estimated 9 matches for "112mb".
2013 Aug 12
3
Speed differences for windows clients
...th samba 3.5.10
server 2: centos 6.4 with samba 3.6.9
both servers are configured as BDCs and have - aside from the netbios name -
identical smb.conf files which use ldapsam as the backend; all other
parameters are left unset (i.e. defaults)
When I mount a share from a linux client, the transfer speed is
~112MB/sec to either server from any linux client. However, when I mount
a share from Windows clients, the speed to server 1 is ~95MB/s and to
server 2 ~85MB/s. We tested this with several windows clients (all
running Windows 7 with all updates).
The speed difference between linux client and windows c...
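A hedged smb.conf sketch of socket tuning sometimes tried when chasing Windows-client throughput differences on samba 3.x; the buffer sizes and the SMB2 line are illustrative assumptions, not settings from the post:

```ini
[global]
    # Disable Nagle and enlarge socket buffers (values are examples only)
    socket options = TCP_NODELAY SO_RCVBUF=131072 SO_SNDBUF=131072
    # samba 3.6 adds (optional) SMB2 support, which changes Windows-client behaviour
    max protocol = SMB2
```

Whether SMB2 can be negotiated at all (server 2 runs 3.6.9, while server 1's 3.5.10 has no SMB2 support) is one plausible source of a per-server difference that only Windows clients would see.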
2008 Sep 22
7
performance of pv drivers for windows
...he xensource drivers, the speed was
about 78 MB/s.
The Windows system was a XP SP2.
hdparm on the dom0 gives about 60MB/s.
The network test was an FTP transfer: downloading a 500MB file without
writing it to disk (writing to nul). The same test in the dom0, writing the
file to /dev/null, gave me 112MB/s.
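The test described above can be sketched as follows; the FTP URL and the elapsed time are placeholders, not figures from the post:

```shell
# Time a download that is discarded instead of written to disk
# (placeholder URL; mirrors "writing to nul" / /dev/null in the post).
time wget -q -O /dev/null ftp://server.example/pub/test500m.bin

# Turn a known size and elapsed time into a throughput figure,
# e.g. 500MB transferred in 6 seconds (integer arithmetic):
size_mb=500
elapsed_s=6
echo "$((size_mb / elapsed_s)) MB/s"
```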
So I am wondering, what are the expected speed gains for the gplpv drivers?
Is the performance of the drivers better with different Windows versions,
e.g. windows server 2003?
kind regards
Sebastian
_______________________________________________
Xen-users mailing list
Xen-users@lists.xenso...
2011 Jul 10
2
bond0 performance issues in 5.6
Hi all,
I've got two gigabit ethernet interfaces bonded in CentOS 5.6. I've
set "miimode=1000" and I've tried "mode=" 0, 4 and 6. I've not been able
to get better than 112MB/sec, which is the same as the non-bonded
interfaces.
My config files are:
===
cat /etc/sysconfig/network-scripts/ifcfg-{eth1,eth2,bond0}
# SN1
HWADDR=00:30:48:fd:26:71
DEVICE=eth1
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
# SN2
HWADDR=00:1B:21:87:80:CE
DEVICE=eth2
BOOTPROTO=none
MASTER=bond0
SLA...
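Two notes, added as editor's assumptions rather than content from the truncated thread: the bonding driver's link-monitor parameter is miimon (an interval in milliseconds), not miimode; and with modes 0, 4 and 6 a single TCP flow is normally hashed onto one slave, so ~112MB/sec per stream is the expected ceiling even when the bond is working. An illustrative ifcfg-bond0 sketch:

```ini
# /etc/sysconfig/network-scripts/ifcfg-bond0 -- illustrative sketch
DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.1.10
NETMASK=255.255.255.0
# mode=4 is 802.3ad/LACP (needs switch support); miimon is in milliseconds
BONDING_OPTS="mode=4 miimon=100 xmit_hash_policy=layer3+4"
```

Aggregate gains only show up across multiple concurrent flows (distinct IP/port pairs under layer3+4 hashing), not in a single-stream benchmark.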
2007 Sep 13
3
3Ware 9550SX and latency/system responsiveness
...partitioning as
suggested by the installer (ext3, small /boot on /dev/sda1, remainder
as / on LVM VolGroup with 2GB swap).
Firmware from 3Ware codeset 9.4.1.2 in use, firmware/driver details:
//serv1> /c0 show all
/c0 Driver Version = 2.26.05.007
/c0 Model = 9550SX-8LP
/c0 Memory Installed = 112MB
/c0 Firmware Version = FE9X 3.08.02.005
/c0 Bios Version = BE9X 3.08.00.002
/c0 Monitor Version = BL9X 3.01.00.006
I initially noticed something odd while installing 4.4, that writing
the inode tables took a longer time than I expected (I thought the
installer had frozen) and the system overall...
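A mitigation commonly tried for writeback-induced stalls on systems of that era (an editor's assumption, not advice from the truncated post) is shrinking the dirty page cache so flushes to the controller stay small:

```shell
# Reduce how much dirty data the kernel buffers before writeback
# (illustrative values; the defaults were much larger)
sysctl -w vm.dirty_ratio=5
sysctl -w vm.dirty_background_ratio=2

# Read the current values back without changing anything:
sysctl -n vm.dirty_ratio
```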
2020 May 04
2
tinc performance relatively slow
...need
to use NFS over a VPN. Our current tinc network seems to be able to
transmit at around 30-40 MB/s. (I used a 1GB random testfile to copy
to/from /dev/shm/, using netcat and http.) In comparison, HTTP and scp
are 300MB/s and 200MB/s respectively (over a 10Gb link; over a 1Gb link,
both are around 112MB/s).
By observing _top_ output, it seems that the CPU usage is around 90% for
the tinc process on at least one of the transmitting machines.
I tried to change the cipher to aes-128-cbc, but it did not have any
significant effect on transmit speed.
How can I tell why the tinc operation eats up my CPU? Are...
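A hedged way to answer the question above, assuming the daemon's process name is tincd and that pidof and ps are available:

```shell
# Report CPU usage for the tinc daemon by name (tincd assumed).
pid=$(pidof tincd)
ps -o pid,%cpu,comm -p "$pid"

# The same query works for any PID, e.g. the current shell:
ps -o %cpu= -p $$
```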
2017 Sep 22
0
Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
...ine as the drives hit the 5 year age mark. So I took the 12 drives
out, added 24 drives to the machine (we had unused slots),
reconfigured raid 6 and left it initializing in the background and
started the heal of 13.1TB of data. My servers are connected via
10Gbit (I am not seeing reads/writes over 112MB/s) and this process
started last Monday at 7:20PM and it is not done yet. It is missing
healing about 40GB still. Now my servers are used as a file server,
which means lots of small files which take longer to heal. I would
think your VM images will heal much faster.
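The week-long heal is consistent with the small-file explanation above: at the observed 112MB/s per-link ceiling, 13.1TB is only about 32 hours of raw transfer, so most of the elapsed time is per-file overhead rather than bandwidth. Back-of-envelope sketch (decimal units assumed):

```shell
total_mb=13100000       # 13.1 TB expressed in MB (decimal units assumed)
rate_mb_s=112           # observed per-link ceiling from the post
secs=$((total_mb / rate_mb_s))
echo "$((secs / 3600)) hours of raw transfer"   # about 32 hours
```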
> I want to turn every VM of...
2014 Nov 19
3
Tunning samba for better read performance
Hi,
I'm running a samba server on a board and the client is Windows 7.
I did the steps below for performance tests.
+ formatted /dev/sda1 with ext4
+ mounted the drive on the server at the [media] path of /etc/samba/smb.conf
+ created a root password
$ smbpasswd -a root
+ 1Gb ethernet interface from the board.
+ mapped the drive in Windows
+ did a 4GB robocopy
+ reads got 13MBps and writes got 105MBps
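The large read/write asymmetry above usually points at the server's read path rather than the network. A hedged smb.conf fragment sometimes tried for slow reads on embedded boards; the aio threshold is an illustrative value, not a setting from the post:

```ini
[global]
    # Allow large raw reads from clients (normally on by default)
    read raw = yes
    # Hand reads above this size (bytes) to async I/O; example value only
    aio read size = 16384
```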
2017 Sep 22
2
Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
Hi,
thanks for the suggestions. Yes, "gluster peer probe node3" will be the first command in order to discover the 3rd node by Gluster.
I am running on the latest 3.7.x - there is 3.7.6-1ubuntu1 installed and the latest 3.7.x according to https://packages.ubuntu.com/xenial/glusterfs-server is 3.7.6-1ubuntu1, so this should be OK.
> If you are *not* on
2017 Oct 01
0
Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
...t the 5 year age mark. So I took the 12 drives
> out, added 24 drives to the machine (we had unused slots),
> reconfigured raid 6 and left it initializing in the background and
> started the heal of 13.1TB of data. My servers are connected via
> 10Gbit (I am not seeing reads/writes over 112MB/s) and this process
> started last Monday at 7:20PM and it is not done yet. It is missing
> healing about 40GB still. Now my servers are used as a file server,
> which means lots of small files which take longer to heal. I would
> think your VM images will heal much faster.
>
>&g...