Displaying 20 results from an estimated 40000 matches similar to: "Disable nagle algorithm"
2011 Oct 23
2
GlusterFS over lessfs/opendedupe
Hi,
I'm currently running GlusterFS over XFS, and it works quite well.
I'm wondering if it's possible to add data deduplication into the mix by:
glusterfs --> lessfs --> xfs or
glusterfs --> opendedupe --> xfs
Has anybody tried doing this? We're running VM images on gluster, and I figure we could get a bit of space saving by deduplicating the data.
Gerald
2012 Apr 19
2
Gluster 3.2.6 for XenServer
Hi,
I have Gluster 3.2.6 RPMs for Citrix XenServer 6.0. I've installed them and mounted exports, but that's where I stopped.
My issues are:
1. XenServer mounts the NFS server's SR subdirectory, not the export. Gluster won't do that.
-- I can, apparently, mount the gluster export somewhere else, and then 'mount --bind' the subdir to the right place
2. I don't really know
2012 Sep 18
1
New release of Gluster?
Hi,
Are there any proposed dates for a new release of Gluster? I'm currently running 3.3, and the gluster heal info commands all segfault.
Gerald
2013 Dec 05
2
Ubuntu GlusterFS in Production
Hi,
Is anyone using GlusterFS on Ubuntu in production? Specifically, I'm looking at using the NFS portion of it over a bonded interface. I believe I'll get better speed than using the gluster client across a single interface.
Setup:
3 servers running KVM (about 24 VMs)
2 NAS boxes running Ubuntu (13.04 and 13.10)
Since Gluster NFS does server-side replication, I'll put
2012 Jan 05
1
Can't stop or delete volume
Hi,
I can't stop or delete a replica volume:
# gluster volume info
Volume Name: sync1
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: thinkpad:/gluster/export
Brick2: quad:/raid/gluster/export
# gluster volume stop sync1
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
Volume sync1 does not exist
# gluster volume
2018 Jun 15
1
[PATCH] v2v: rhv-upload: Disable Nagle algorithm
When sending a PUT request, the HTTP header may be sent in the first
packet when calling con.endheaders(). When we send the first chunk, the
kernel may delay the send because the header packet was not acked yet.
We have seen PUT requests delayed by 40 milliseconds on the server side
during virt-v2v upload to oVirt. Here is an example log from the current RHEL
virt-v2v version, uploading to RHV 4.2.3:
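The fix itself boils down to a socket option. A minimal sketch of the idea in C (the actual patch touches virt-v2v's Python rhv-upload code; the helper name here is hypothetical):

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <stdio.h>
#include <sys/socket.h>

/* Disable Nagle on a connected client socket so small writes (such as
 * an HTTP PUT header) go out immediately instead of waiting for the
 * peer's delayed ACK. */
static int disable_nagle(int sockfd)
{
    int on = 1;
    if (setsockopt(sockfd, IPPROTO_TCP, TCP_NODELAY, &on, sizeof on) == -1) {
        perror("setsockopt(TCP_NODELAY)");
        return -1;
    }
    return 0;
}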
2019 May 30
0
[PATCH nbdkit 2/2] server: Disable Nagle's algorithm.
Unlike the equivalent change on the client side which caused a
dramatic performance improvement, there is no noticeable difference
from this patch in my testing.
---
server/sockets.c | 14 +++++++++++---
1 file changed, 11 insertions(+), 3 deletions(-)
diff --git a/server/sockets.c b/server/sockets.c
index 2c71970..b25405c 100644
--- a/server/sockets.c
+++ b/server/sockets.c
@@ -37,13 +37,15 @@
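The hunk body is truncated above; as a hedged guess at its shape, the server-side change amounts to applying TCP_NODELAY to each accepted connection, but only for TCP sockets, since nbdkit also serves Unix-domain sockets where the option does not apply:

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Sketch only, not the actual diff: disable Nagle on an accepted
 * socket when (and only when) it is a TCP socket. */
static void set_nodelay_if_tcp(int fd)
{
    struct sockaddr_storage ss;
    socklen_t len = sizeof ss;

    if (getsockname(fd, (struct sockaddr *)&ss, &len) == -1)
        return;
    if (ss.ss_family == AF_INET || ss.ss_family == AF_INET6) {
        int on = 1;
        setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &on, sizeof on);
    }
}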
2012 Oct 17
6
SuSE Linux Enterprise Server OpenSSH 5.1p1 nagle issue?
I have a system in place where it appears that TCP will make a massive
change in behavior mid-stream with existing SSH sessions. We noticed the
issue first with an application using an SSH forward. However, we were
able to rule that out by generating the same TCP characteristics by
having a Perl script dump text out to a terminal, simulating a large data
flow from the far end (ssh server) back
2011 Sep 21
0
Gluster NFS vs iSCSI in a XenServer Environment
Hi,
I just completed testing on using Gluster via its NFS server in a XenServer environment. My comparison was iSCSI in the same environment.
You can see the results at http://majentis.com/2011/09/21/xenserver-iscsi-and-glusterfsnfs/
Gerald
2006 Dec 20
1
Nagle & delayed ACK strike again
This time the problem is that the ssh server only sets TCP_NODELAY for
interactive (tty) sessions or if X11 forwarding is enabled, neither of
which is true for the sftp subsystem. This hurts upload performance
for sftp/sshfs.
I'm not sure why this hasn't cropped up earlier. Were there any
TCP_NODELAY related changes in the sshd code recently?
Is there a reason not to
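A sketch of the behavior described above, not OpenSSH's actual code, to make the gap concrete:

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <stdbool.h>
#include <sys/socket.h>

/* Hypothetical condensation of the reported logic: Nagle is disabled
 * only when the session looks interactive. */
static void maybe_set_nodelay(int fd, bool have_tty, bool x11_forwarding)
{
    if (have_tty || x11_forwarding) {
        int on = 1;
        setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &on, sizeof on);
    }
    /* sftp/sshfs sessions take neither branch, so their transfers run
     * with Nagle enabled -- the slowdown reported above. */
}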
2004 Aug 06
1
Second patch against CVS version
On Sun, Feb 24, 2002 at 09:04:03AM +0100, Ricardo Galli wrote:
> Sorry, didn't explain well.
>
> Nagle's algorithm (RFC 896) buffers user data until there are no pending ACKs
> or it can send a full segment (RFC 1122).
>
> icecast doesn't need it at all, because it already sends large buffers and
> the time to send the next buffer is relatively long.
IMO
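The write-write-read pattern behind all of these reports is easy to reproduce. A self-contained C demo follows (loopback echo server on a kernel-chosen port; the roughly 40 ms stall is the usual Linux delayed-ACK timer, a figure assumed here rather than taken from this thread). Flip nodelay to 1 and the round trips drop to microseconds:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <stdio.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>

static long elapsed_us(struct timeval a, struct timeval b)
{
    return (b.tv_sec - a.tv_sec) * 1000000L + (b.tv_usec - a.tv_usec);
}

int main(void)
{
    /* loopback listener on a kernel-chosen port */
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { 0 };
    socklen_t alen = sizeof addr;

    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    bind(lfd, (struct sockaddr *)&addr, sizeof addr);
    getsockname(lfd, (struct sockaddr *)&addr, &alen);
    listen(lfd, 1);

    if (fork() == 0) {           /* child: reply only after both writes */
        int c = accept(lfd, NULL, NULL);
        char buf[8];
        for (;;) {
            size_t got = 0;
            while (got < sizeof buf) {
                ssize_t n = read(c, buf + got, sizeof buf - got);
                if (n <= 0)
                    _exit(0);
                got += (size_t)n;
            }
            write(c, "ok", 2);
        }
    }

    int s = socket(AF_INET, SOCK_STREAM, 0);
    connect(s, (struct sockaddr *)&addr, sizeof addr);

    int nodelay = 0;             /* flip to 1 and compare the timings */
    setsockopt(s, IPPROTO_TCP, TCP_NODELAY, &nodelay, sizeof nodelay);

    for (int i = 0; i < 5; i++) {
        struct timeval t0, t1;
        char reply[2];
        gettimeofday(&t0, NULL);
        write(s, "head", 4);     /* small write #1: sent immediately */
        write(s, "body", 4);     /* small write #2: Nagle holds it back
                                    until the peer ACKs write #1 */
        read(s, reply, sizeof reply);
        gettimeofday(&t1, NULL);
        printf("round trip %d: %ld us\n", i, elapsed_us(t0, t1));
    }
    close(s);
    return 0;
}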
2011 Sep 13
1
read and write speeds
Hi,
I'm testing gluster/nfs as a replacement for an existing DRBD/iSCSI system.
Speed tests show gluster NFS to be pretty close to iSCSI, but I have some questions.
If I do a sequential write of data, I get ~118 MB/s. A sequential read of data gets about 65 MB/s.
If I do a sequential read and write at the same time, write speed drops to ~100 MB/s while read speed drops to about 10 MB/s.
2009 Sep 28
1
is glusterfs DHT really distributed?
Hi All,
I noticed a very weird phenomenon when I'm copying data (200 KB image
files) to our glusterfs storage. When I run only one client, it copies
roughly 20 files per second, and as soon as I start a second client on
another machine, the copy rate of the first client immediately degrades
to 5 files per second. When I stop the second client, the first client
will immediately speed up
2012 Mar 10
0
XFS inode64 and Gluster 3.2.5 NFS export
Hi,
I've recently had data loss on an XFS (inode64) glusterfs (3.2.5) NFS-exported file system. I was using the gluster NFS server.
On the XFS FAQ page, they have this:
Q: Why doesn't NFS-exporting subdirectories of an inode64-mounted filesystem work?
The default fsid type encodes only 32 bits of the inode number for subdirectory exports. However, exporting the root of the filesystem
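An illustration of the truncation the FAQ describes, assuming the default fsid encoding keeps only the low 32 bits of the inode number:

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    uint64_t ino_a = 0x100000001ULL;   /* two distinct inode64 numbers */
    uint64_t ino_b = 0x200000001ULL;   /* differing only in the high bits */
    uint32_t fsid_a = (uint32_t)ino_a; /* truncation keeps the low 32 bits */
    uint32_t fsid_b = (uint32_t)ino_b;

    printf("%" PRIx64 " -> %" PRIx32 "\n", ino_a, fsid_a);
    printf("%" PRIx64 " -> %" PRIx32 " (%s)\n", ino_b, fsid_b,
           fsid_a == fsid_b ? "collision" : "distinct");
    return 0;
}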
2023 Aug 07
2
Packet Timing and Data Leaks
On Mon, 7 Aug 2023, Chris Rapier wrote:
> > The broader issue of hiding all potential keystroke timing is not yet fixed.
>
> Could some level of obfuscation come from enabling Nagle for interactive
> sessions that have an associated TTY? Though that would be of limited
> usefulness in low RTT environments. I don't like the idea of having a steady
> drip of packets as that
2005 Jul 27
1
icecast performance on many concurrent low-bitrate streams
Hi all,
I'm running an Icecast-2.2 server with, at peak times, some 50 sources and 500 concurrent listeners, all using low-bitrate 16 kbps streams. I'm experiencing some connection losses at these peak times ("Client connection died" message in error.log).
The machine running Icecast has a 100 Mbit connection to the internet. It is a Celeron 2.4 GHz machine with 1 GB of main
2020 Nov 04
0
parallel PSOCK connection latency is greater on Linux?
Please check a tcpdump session on localhost while running the following script:
library(parallel)
library(tictoc)
cl <- makeCluster(1)              # one local PSOCK worker
Sys.sleep(1)                      # pause so the evaluation traffic stands out
for (i in 1:10) {
  tic()
  x <- clusterEvalQ(cl, iris)     # one request/response round trip
  toc()
}
The initialization phase comprises 7 packets. Then, the 1-second sleep
will help you see where the evaluation starts. Each clusterEvalQ
generates 6 packets:
1. main ->
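One quick check in a case like this is whether Nagle is still active on the cluster connection. The same check from C (a hypothetical helper, not part of R):

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <stdio.h>
#include <sys/socket.h>

/* Returns 1 if Nagle is disabled (TCP_NODELAY set) on fd, 0 if it is
 * active, -1 on error. */
static int nodelay_enabled(int fd)
{
    int flag = 0;
    socklen_t len = sizeof flag;

    if (getsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &flag, &len) == -1) {
        perror("getsockopt(TCP_NODELAY)");
        return -1;
    }
    return flag;
}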
2005 Dec 28
0
Use of TCP_CORK instead of TCP_NODELAY
> As a streaming server, it's fairly crucial for icecast to
> send out data with as low a delay as possible (many clients
> don't care, but some do). That's why we use TCP_NODELAY - we
> actually WANT to send out data as soon as we can.
Nagle is inherently unsuited for streams. NODELAY was (imho) meant for
connections for which Nagle isn't sufficient and CORK is not
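For contrast, the TCP_CORK pattern being referred to looks roughly like this on Linux (a sketch, not icecast's code):

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>
#include <unistd.h>

/* Coalesce a small header and a payload into full segments with
 * TCP_CORK (Linux-specific), then flush by uncorking. */
static void send_corked(int fd, const void *hdr, size_t hlen,
                        const void *body, size_t blen)
{
    int on = 1, off = 0;

    setsockopt(fd, IPPROTO_TCP, TCP_CORK, &on, sizeof on);
    write(fd, hdr, hlen);    /* held in the kernel, not sent on its own */
    write(fd, body, blen);   /* packed together with the header */
    setsockopt(fd, IPPROTO_TCP, TCP_CORK, &off, sizeof off); /* flush */
}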
2017 Jul 31
0
Elastic Hashing Algorithm
Good Morning!
I'm writing my thesis about distributed storage systems and I came across GlusterFS.
I need more information/details about the Elastic Hashing Algorithm which is used by GlusterFS. Unfortunately, I couldn't find any details about it.
Please help me with a reference to this algorithm so I can study it.
------------------
Best Regards
Moustafa Hammouda
System Engineer
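For orientation until a proper reference turns up: Gluster's DHT places a file by hashing its name into a 32-bit space that is divided into per-brick ranges. A toy C illustration of that placement idea (the FNV-1a hash and equal ranges here are stand-ins; my understanding is that Gluster actually uses a Davies-Meyer hash with per-directory layout ranges stored in xattrs):

#include <stdint.h>
#include <stdio.h>

/* Toy stand-in hash (FNV-1a), not what GlusterFS uses. */
static uint32_t hash_name(const char *s)
{
    uint32_t h = 2166136261u;
    while (*s) {
        h ^= (uint8_t)*s++;
        h *= 16777619u;
    }
    return h;
}

int main(void)
{
    const char *bricks[] = { "brick0", "brick1", "brick2", "brick3" };
    const char *files[]  = { "vm1.img", "vm2.img", "notes.txt" };
    const uint32_t n = 4;

    for (int i = 0; i < 3; i++) {
        /* split the 32-bit hash space into n equal ranges */
        uint32_t h = hash_name(files[i]);
        printf("%s -> %s\n", files[i], bricks[h / (UINT32_MAX / n + 1)]);
    }
    return 0;
}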
2006 Jan 24
4
sftp performance problem, cured by TCP_NODELAY
In certain situations sftp download speed can be much less than that
of scp.
After many days of trying to find the cause, I finally found it to be
the TCP Nagle algorithm, which, if turned off with TCP_NODELAY,
eliminates the problem.
Now I see it was being discussed back in 2002, but it's still unresolved in
openssh-4.2 :(
A simple solution would be to add a NoDelay option to ssh, which sftp
would set.