Displaying 20 results from an estimated 4000 matches similar to: "Transfer speed"
2009 Jan 01
5
Samba performance issue
Hello, I sent the following message to the Debian folks.
They don't think that the Debian packaging could be responsible for the
issue described there.
> Well, I'm not completely convinced that we will have very useful input
> for you. I don't really see any reason for this to be caused by the
> Debian packaging. To check this, why not compile samba from sources,
2010 Mar 29
2
Samba SMB throughput
Hello everyone,
Quoting from Samba Team Blog #2 (25 Sept 2009):
"Volker showed how to get more than 700MB/sec from Samba using
smbclient and a modern Samba server, which shows what you can
really do when you understand the protocol thoroughly and don't feel
you have to invent a new one (SMB2 :-)."
Would it be possible to get a complete accounting of how this was achieved?
Thanks,
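A minimal sketch of how such a throughput number could be measured, assuming a hypothetical share, credentials, and test file (this is not a claim about the exact setup behind the 700MB/sec figure): time a large smbclient put and divide bytes by seconds.

# Hedged sketch: time a large put through smbclient and report MB/s.
# The share, credentials, and test file are hypothetical placeholders.
import os
import subprocess
import time

SHARE = "//server/bench"           # hypothetical share
CREDS = "benchuser%benchpass"      # hypothetical user%password
LOCAL_FILE = "/tmp/testfile.bin"   # large local test file

size_bytes = os.path.getsize(LOCAL_FILE)

start = time.monotonic()
subprocess.run(
    ["smbclient", SHARE, "-U", CREDS,
     "-c", f"put {LOCAL_FILE} testfile.bin"],
    check=True,
)
elapsed = time.monotonic() - start

print(f"~{size_bytes / elapsed / 1e6:.0f} MB/s")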
2013 Jul 30
3
SMB throughput inquiry, Jeremy, and James' bow tie
I went to the site to subscribe again and ended up watching some of
Jeremy's Google interviews. I particularly enjoyed the interview with
James and the bow tie lesson at the end. :)
So anyway, I recently upgraded my home network to end-to-end GbE. My
clients are Windows XP SP3 w/hot fixes, and my Samba server is 3.5.6
atop vanilla kernel.org Linux 3.2.6 and Debian 6.0.6.
With FDX fast
2011 Aug 04
3
Very slow samba performance on Centos 6
Hello all,
I have 2 identical Dell R510 servers with 10GigE cards, running CentOS 6 with
samba-3.5.4-68.el6_0.2.x86_64.
I set up a 16G ramdisk Samba share on both and ran cp from a local ramdisk to
the Samba ramdisk mount.
If I cp 12 1-gig files, I get a combined 100MB/s transfer rate. A single-file cp
maxes out at about 15MB/s.
FTP transfers give me over 300MB/s.
Running with 9000 MTU. Most smb.conf is
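Those numbers hint at a per-stream limit: 100MB/s spread across 12 concurrent copies is roughly 8.3MB/s per file, while a lone cp reaches about 15MB/s. A hedged sketch for reproducing the single-stream vs. parallel comparison, with hypothetical paths and an assumed 1 GiB file size:

# Hedged sketch: compare single-stream vs. 12-way parallel copy throughput
# to a mounted share. Paths and the 1 GiB file size are assumptions.
import shutil
import time
from concurrent.futures import ThreadPoolExecutor

SRC_FILES = [f"/mnt/ramdisk/file{i}.bin" for i in range(12)]  # hypothetical
DST_DIR = "/mnt/smbshare"                                     # hypothetical
FILE_MB = 1024                                                # assumed 1 GiB each

def copy_one(src: str) -> None:
    shutil.copy(src, DST_DIR)

start = time.monotonic()
copy_one(SRC_FILES[0])
single = FILE_MB / (time.monotonic() - start)

start = time.monotonic()
with ThreadPoolExecutor(max_workers=12) as pool:
    list(pool.map(copy_one, SRC_FILES))
parallel = len(SRC_FILES) * FILE_MB / (time.monotonic() - start)

print(f"single stream: ~{single:.0f} MB/s, 12 parallel: ~{parallel:.0f} MB/s")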
2011 May 19
2
[PATCH] arch/tile: add /proc/tile, /proc/sys/tile, and a sysfs cpu attribute
(adding virtualization mailing list)
On Thursday 19 May 2011, Chris Metcalf wrote:
> On 5/19/2011 9:41 AM, Arnd Bergmann wrote:
> >> /proc/tile/hvconfig
> >> Detailed configuration description of the hypervisor config
>
> I'm concerned about moving this one out of /proc, since it's just (copious)
> free text. An "hvconfig" (hypervisor config)
2005 Jul 08
1
Re: Hot swap CPU -- shared memory (1 NUMA/UPA) v. clustered (4 MCH)
From: Bruno Delbono <bruno.s.delbono at mail.ac>
> I'm really sorry to start this thread again but I found something very
> interesting I thought everyone should ^at least^ have a look at:
> http://uadmin.blogspot.com/2005/06/4-dual-xeon-vs-e4500.html
> This article takes into account a comparison of 4 dual xeon vs. e4500.
> The author (not me!) talks about "A
2009 Oct 14
2
Best practice settings for channel bonding interface mode?
Hi,
maybe there are some best-practice suggestions for the "best mode" for
a channel bonding interface?
Or in other words, when should/would I use which mode?
E.g. I have some file servers connected to the users' LAN and to some
iSCSI storages. Or some web servers only connected to the LAN. The
switches are all new Cisco models.
I've read some docs (1), (2) and (3) so the theory
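Not an answer to which mode is best, but a minimal sketch for checking which mode a bond is actually running and whether its slaves are up; the interface name bond0 is an assumption.

# Hedged sketch: print the active bonding mode and slave status from
# /proc/net/bonding. The interface name "bond0" is an assumption.
from pathlib import Path

def bond_summary(iface: str = "bond0") -> None:
    text = Path(f"/proc/net/bonding/{iface}").read_text()
    for line in text.splitlines():
        if line.startswith(("Bonding Mode:", "Slave Interface:", "MII Status:")):
            print(line)

bond_summary()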
2005 Aug 26
5
OT: CentOS server with 2 GbE links to 2 GbE switches
Hi all,
I am trying to come up with an architecture that has some redundancy.
The idea is to hook up the two GbE LAN interfaces of a CentOS server to
two Gigabit Ethernet switches. In case one switch goes down, there is a
redundant path (the server is redundant too). Here is the idea:
-----------
| GbE |
PCs
2010 Oct 08
74
Performance issues with iSCSI under Linux
Hi! We're trying to pinpoint our performance issues and we could use all the help the community can provide. We're running the latest version of Nexenta on a pretty powerful machine (4x Xeon 7550, 256GB RAM, 12x 100GB Samsung SSDs for the cache, 50GB Samsung SSD for the ZIL, 10GbE on a dedicated switch, 11x pairs of 15K HDDs for the pool). We're connecting a single Linux
2020 Aug 12
2
[Sharing] CentOS 8.2 (2004) Linux Server is Compatible with Dell PowerEdge R640 1U Server
Good day from Singapore,
I have just installed CentOS 8.2 (2004) Linux Server on Dell PowerEdge
R640 1U Server for "Donald Trump and Xi Jinping Investment Company LLP"
(fictitious/fictional company name used) in Singapore on 11 August 2020
Tuesday.
I can confirm that CentOS 8.2
2008 Apr 07
2
virtual gigabit
Is there any way to get gigabit networking for a fully virtualized
guest? I've tried searching for this one, but all I get are results
about gigabit networking for the host / dom0, nothing for domU.
Thanks,
Gordon
2012 Oct 19
6
Large Corosync/Pacemaker clusters
Hi,
We're setting up fairly large Lustre 2.1.2 filesystems, each with 18
nodes and 159 resources all in one Corosync/Pacemaker cluster as
suggested by our vendor. We're getting mixed messages from our vendor
and others on how large a Corosync/Pacemaker cluster will work well.
1. Are there Lustre Corosync/Pacemaker clusters out there of this
size or larger?
2.
2005 Jun 22
11
Opteron Mobo Suggestions
I've been planning to build a dual Opteron server for a while. I'd like
to get people's suggestions on a suitable motherboard.
I've looked at the Tyan K8SE (S2892) and K8SRE (S2891) but would like to
find more Linux-specific experiences with these boards.
Some features I expect are at least 4 SATA (SATA-300?) ports, serial
console support in the BIOS, USB 2.0 and IEEE-1394
2016 May 26
3
Problems with OS X 10.11.5
Hello,
I just wanted to check in on this list and see what folks know about the
new severe performance problems with OS X 10.11.5.
There's a comment on Reddit claiming that 10.11.5 is requiring SMB signing,
but I haven't found documentation on that.
I myself saw performance on my 10 GbE go from 800 MB/s on 10.11.4 to 60
MB/s on 10.11.5. My NAS is running Samba 4.3.6 on FreeNAS, which is
2011 Jan 25
1
10 GbE PCI passthrough
Dear all,
we are trying to get our virtual machines to perform well with our 10
GbE cards (lspci says: NetXen Incorporated NX3031 Multifunction
1/10-Gigabit Server Adapter (rev 42)).
When doing some benchmarks we noticed that, especially on the
receiving side, the virtualization layer imposes a significant performance
penalty, so we went for PCI passthrough, so that the VMs can
2008 Dec 17
2
Segmentation fault in smbc_getxattr()->...->convert_sid_to_string() in samba-3.2.6
Hi,
Got a segmentation fault with the following stack trace while running "examples/libsmbclient/testacl13.c".
#0 0x00000033a866f200 in strlen () from /lib64/tls/libc.so.6
#1 0x00000033a8642c51 in vfprintf () from /lib64/tls/libc.so.6
#2 0x00000033a8661bb4 in vsnprintf () from /lib64/tls/libc.so.6
#3 0x00000033a86481a1 in snprintf () from /lib64/tls/libc.so.6
#4 0x00002aaaaaaebf36
2007 Sep 26
9
Rule of Thumb for zfs server sizing with (192) 500 GB SATA disks?
I'm trying to get maybe 200 MB/sec over NFS for large movie files (need large capacity to hold all of them). Are there any rules of thumb on how much RAM is needed to handle this (probably RAIDZ for all the disks) with ZFS, and how large a server should be used? The throughput required is not so large, so I am thinking an X4100 M2 or X4150 should be plenty.
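A hedged back-of-envelope sketch for the streaming side of that question; the per-disk rate is an assumption, not a measurement, and it says nothing about how much RAM the ARC would want.

# Hedged back-of-envelope: aggregate sequential throughput of 192 disks vs.
# the 200 MB/sec target. The ~60 MB/s per-disk figure for 500 GB SATA
# drives of that era is an assumption.
DISKS = 192
PER_DISK_MBS = 60      # assumed sequential rate per disk
TARGET_MBS = 200       # required NFS throughput from the post

aggregate = DISKS * PER_DISK_MBS
print(f"raw streaming capacity: ~{aggregate} MB/s "
      f"(~{aggregate // TARGET_MBS}x the {TARGET_MBS} MB/s target)")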
2017 Sep 04
3
poor performance when copying files with windows client
Hello everyone,
in my setup I have two Samba file servers with clustered Samba 4.6.7 and
GlusterFS 3.10. The servers are connected via a 10 Gb network (one for
clients and an extra network for Gluster). When I copy files from a
different server (CentOS 7 1611, Samba 4.4.4, connected with 10Gb) to
the file servers with scp, the maximum speed is 70 MB/s (and lower still
for small files).
2013 Jun 28
3
Bandwidth limited when shorewall is enabled
Hi,
I've been having a really strange thing happen. I can't remember when it
happened, or if it coincided with a Shorewall update, but if I have Shorewall
"running", my 100mbps connection is limited to about 1-6mbps per connection.
This is with TC/Shaping/QoS disabled or enabled.
I have no idea if it's Shorewall doing something funky or iptables or what, but
if I