Displaying 20 results from an estimated 68 matches for "sockopts".
2005 Nov 01
2
request: add TCP buffer options to rsync CLI?
Dear rsync folks,
I'd like to request/suggest that CLI options to set the TCP send/receive buffers
be added to the rsync client side.
Summary:
I'm aware that a daemon's config-file can set socket options for
the server side
(e.g. SO_SNDBUF, SO_RCVBUF). That is useful.
But when trying to get high-throughput rsync over
long paths (i.e. large bandwidth*delay product), since
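For a rough idea of what such a client-side option would amount to (an illustrative sketch only, not rsync code; apply_buffer_opts and the example values are made up): the buffers have to be applied to the outgoing socket before connect(), since the TCP window scale is negotiated during the handshake.

    #include <sys/socket.h>

    /* Illustrative sketch: apply requested send/receive buffer sizes to a
     * socket before the TCP connection is established,
     * e.g. apply_buffer_opts(fd, 262144, 262144). */
    static int apply_buffer_opts(int fd, int sndbuf, int rcvbuf)
    {
        if (setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof sndbuf) < 0)
            return -1;
        if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof rcvbuf) < 0)
            return -1;
        return 0;
    }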
2010 Jan 28
4
Latency and Rsync Transfers
Hello,
Working with a few servers that are transferring data across the country with a 75ms
delay on a GigE connection. We can tune the TCP buffers on Linux to improve
the connections, as verified with iperf. Does rsync use the OS's TCP buffer settings, or
does it override them?
Thanks,
Neal
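For a rough sense of scale (illustrative arithmetic, not measurements from this thread): the bandwidth*delay product of a 1 Gbit/s path with 75 ms of latency is 1 Gbit/s x 0.075 s = 75 Mbit, i.e. roughly 9.4 MB, so a single TCP stream whose send/receive buffers are only a few hundred KB will sit far below line rate no matter what the application does.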
2008 Apr 02
3
DO NOT REPLY [Bug 5370] New: sockopts attempts to set options after the socket is open
https://bugzilla.samba.org/show_bug.cgi?id=5370
Summary: sockopts attempts to set options after the socket is
open
Product: rsync
Version: 2.6.9
Platform: x86
URL: http://people.freebsd.org/~gordon/rsync-socket.diff
OS/Version: Linux
Status: NEW
Severity: normal...
2009 Aug 24
0
CIFS slow on gigabit, doesn't support sockopt=TCP_NODELAY?
Hi, everyone. I originally sent this to the cifs-vfs mailing list,
but upon reading the descriptions of the lists, I think that might
have been the wrong place to ask. My apologies for the repeat. I
hope I got the right place this time. :)
I've noticed that the cifs client for Linux is slow over gigabit
ethernet. It seems to max out at about 10 megs/sec, while the drives
can go a lot
2019 Oct 11
1
[RFC PATCH 07/13] vsock: handle buffer_size sockopts in the core
On Thu, Oct 10, 2019 at 11:32:54AM +0200, Stefano Garzarella wrote:
> On Wed, Oct 09, 2019 at 01:30:26PM +0100, Stefan Hajnoczi wrote:
> > On Fri, Sep 27, 2019 at 01:26:57PM +0200, Stefano Garzarella wrote:
> > Another issue is that this patch drops the VIRTIO_VSOCK_MAX_BUF_SIZE
> > limit that used to be enforced by virtio_transport_set_buffer_size().
> > Now the limit
2019 Oct 10
0
[RFC PATCH 07/13] vsock: handle buffer_size sockopts in the core
On Wed, Oct 09, 2019 at 01:30:26PM +0100, Stefan Hajnoczi wrote:
> On Fri, Sep 27, 2019 at 01:26:57PM +0200, Stefano Garzarella wrote:
> > @@ -140,18 +145,11 @@ struct vsock_transport {
> > struct vsock_transport_send_notify_data *);
> > int (*notify_send_post_enqueue)(struct vsock_sock *, ssize_t,
> > struct vsock_transport_send_notify_data *);
> > + int
2018 Apr 09
2
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
Hey All,
In a two-node glusterfs setup, with one node down, we can't use the second
node to mount the volume. I understand this is expected behaviour?
Is there any way to allow the secondary node to keep functioning and then replicate what
changed to the first (primary) node when it's back online? Or should I just
go for a third node to allow for this?
Also, how safe is it to set the following to none?
2019 Oct 09
2
[RFC PATCH 07/13] vsock: handle buffer_size sockopts in the core
On Fri, Sep 27, 2019 at 01:26:57PM +0200, Stefano Garzarella wrote:
> @@ -140,18 +145,11 @@ struct vsock_transport {
> struct vsock_transport_send_notify_data *);
> int (*notify_send_post_enqueue)(struct vsock_sock *, ssize_t,
> struct vsock_transport_send_notify_data *);
> + int (*notify_buffer_size)(struct vsock_sock *, u64 *);
Is ->notify_buffer_size() called under
2018 Apr 09
0
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
Hi,
You need at least 3 nodes to have quorum enabled. In a 2-node setup you need
to disable quorum so that you can still use the volume when one of the
nodes goes down.
On Mon, Apr 9, 2018, 09:02 TomK <tomkcpr at mdevsys.com> wrote:
> Hey All,
>
> In a two node glusterfs setup, with one node down, can't use the second
> node to mount the volume. I understand this is
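For reference (not from the original reply, and worth double-checking against the docs for your version): the quorum behaviour being discussed is controlled by the cluster.quorum-type and cluster.server-quorum-type volume options, so "disabling quorum" in a 2-node setup usually means something like the following, at the cost of losing split-brain protection:

    gluster volume set gv01 cluster.server-quorum-type none
    gluster volume set gv01 cluster.quorum-type none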
2011 Nov 28
0
RFC: [PATCH] Add TCP congestion control and Diffserv options
...sion = PROTOCOL_VERSION;
@@ -776,6 +778,8 @@ void usage(enum logcode F)
rprintf(F," --address=ADDRESS bind address for outgoing socket to daemon\n");
rprintf(F," --port=PORT specify double-colon alternate port number\n");
rprintf(F," --sockopts=OPTIONS specify custom TCP options\n");
+ rprintf(F," --diffserv=[0-63] specify diffserv setting \n");
+ rprintf(F," --congestion-alg=STRING choose a congestion algo\n");
rprintf(F," --blocking-io use blocking I/O for the remote sh...
2009 Mar 11
1
bandwidth issue
...processes in parallel I get 340 KB/s for EACH process, which
is 5.4 Mbps in total. Starting a third parallel process bandwidth is going
down, but the sum is still about 5.4 Mbps.
I think the problem is related to the buffer size. Is it OK to change
the buffer size with this command-line option:
--sockopts=SO_SNDBUF=130000,SO_RCVBUF=130000
Is it sufficient to change only those 2 parameters?
Will this also change the buffer size of the other host (via rsync
communication)? In the logs I do not get any information that
the sockopts parameter is changing anything (although I am using
"vvvv...
2019 Sep 27
0
[RFC PATCH 07/13] vsock: handle buffer_size sockopts in the core
virtio_transport and vmci_transport handle the buffer_size
sockopts in a very similar way.
In order to support multiple transports, this patch moves this
handling into the core to allow the user to change the options
even if the socket is not yet assigned to any transport.
This patch also adds the '.notify_buffer_size' callback in the
'struct virtio_tra...
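A rough sketch of what such a per-transport callback could look like (hypothetical code, not the actual patch; it assumes the vsock_sock type and the VIRTIO_VSOCK_MAX_BUF_SIZE constant from the virtio-vsock headers): the core stores the value requested via the sockopt and gives the transport a chance to clamp it, which is also where a ceiling like VIRTIO_VSOCK_MAX_BUF_SIZE could be re-applied.

    /* Hypothetical .notify_buffer_size implementation for a transport:
     * the core passes the user-requested value and the transport may
     * clamp it before it takes effect. */
    static int example_notify_buffer_size(struct vsock_sock *vsk, u64 *val)
    {
        if (*val > VIRTIO_VSOCK_MAX_BUF_SIZE)
            *val = VIRTIO_VSOCK_MAX_BUF_SIZE;
        return 0;
    }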
2019 Oct 23
0
[PATCH net-next 07/14] vsock: handle buffer_size sockopts in the core
virtio_transport and vmci_transport handle the buffer_size
sockopts in a very similar way.
In order to support multiple transports, this patch moves this
handling into the core to allow the user to change the options
even if the socket is not yet assigned to any transport.
This patch also adds the '.notify_buffer_size' callback in the
'struct virtio_tra...
2019 Oct 30
1
[PATCH net-next 07/14] vsock: handle buffer_size sockopts in the core
> From: Stefano Garzarella [mailto:sgarzare at redhat.com]
> Sent: Wednesday, October 23, 2019 11:56 AM
> Subject: [PATCH net-next 07/14] vsock: handle buffer_size sockopts in the
> core
>
> virtio_transport and vmci_transport handle the buffer_size sockopts in a
> very similar way.
>
> In order to support multiple transports, this patch moves this handling in the
> core to allow the user to change the options also if the socket is not yet
>...
2005 Dec 25
4
Use of TCP_CORK instead of TCP_NODELAY
We're abusing icecast in a true narrowcasting setup (personalized stream per
mountpoint). The streams themselves are created in a piece of proprietary
software; icecast merely relays them.
However, the intended endpoint is an embedded device. This device has
trouble with TCP/IP packets not matching the max. packet size (MSS or MSS
minus header). After elaborate testing,
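For reference, the usual TCP_CORK pattern under discussion looks roughly like the following (a generic sketch, not icecast's actual code; send_corked is a made-up helper): cork the socket, queue the small writes, then uncork so the kernel emits full-MSS segments.

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Generic sketch: coalesce several small writes into full-sized TCP
     * segments by corking the socket around them. */
    static void send_corked(int fd, const char *hdr, size_t hlen,
                            const char *body, size_t blen)
    {
        int on = 1, off = 0;

        setsockopt(fd, IPPROTO_TCP, TCP_CORK, &on, sizeof on);
        write(fd, hdr, hlen);
        write(fd, body, blen);
        setsockopt(fd, IPPROTO_TCP, TCP_CORK, &off, sizeof off); /* flush */
    }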
2019 Oct 14
1
[PATCH v4 1/5] vsock/virtio: limit the memory used per-socket
...4:17, Stefan Hajnoczi wrote:
> SO_VM_SOCKETS_BUFFER_SIZE might have been useful for VMCI-specific
> applications, but we should use SO_RCVBUF and SO_SNDBUF for portable
> applications in the future. Those socket options also work with other
> address families.
>
> I guess these sockopts are bypassed by AF_VSOCK because it doesn't use
> the common skb queuing code in net/core/sock.c:(. But one day we might
> migrate to it...
>
> Stefan
+1, we should really consider reusing the existing socket mechanism
instead of reinventing the wheel.
Thanks
2003 Oct 26
1
getsockopt TCP_NODELAY: Socket operation on non-socket
We get the warning above whenever we use a ProxyCommand. We _know_ it's
a pipe, so we can't use sockopts on it. So we shouldn't bitch about it.
This breaks all kinds of things which use SSH transparently, including
pine, which really wants the first thing it receives from an IMAP server
to be a valid IMAP greeting... which $subject is not.
$ ssh -o "proxycommand sh -c '( echo CONNECT %...
2003 Jan 11
0
SMBmount in daemon mode slow to write
Hi,
We have a Linux server which has the Samba client running in daemon mode
(SMB shares auto-mounted via fstab). The source SMB share is on a W2K
box, and both machines are on the same 100Mb Ethernet LAN. Whenever we
try to copy a file from the Linux machine's local disk to the SMB share,
we get speeds of around 5,000-6,000 kbps, yet if we try to ftp the
same file from the Linux box to
2018 Feb 23
1
Error IPV6_V6ONLY
When I run
gluster v heal datastore full
the error in the log is
gfapi: Error disabling sockopt IPV6_V6ONLY
The bricks are up.
Version is 3.13.2
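For context on what gfapi is attempting when it logs that message (an illustrative sketch, not gluster's code; disable_v6only is a made-up name): it clears IPV6_V6ONLY on an AF_INET6 socket so the socket also handles IPv4 peers, and the call typically fails, producing this warning, when the socket is not an IPv6 socket in the first place or IPv6 is disabled on the host.

    #include <netinet/in.h>
    #include <sys/socket.h>

    /* Sketch: allow an AF_INET6 socket to handle IPv4-mapped addresses
     * as well by clearing IPV6_V6ONLY. */
    static int disable_v6only(int fd)
    {
        int off = 0;
        return setsockopt(fd, IPPROTO_IPV6, IPV6_V6ONLY, &off, sizeof off);
    }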