Displaying 20 results from an estimated 68 matches for "sockopt".
2005 Nov 01 | 2 | request: add TCP buffer options to rsync CLI?
...(latest Linux, and probably Vista).
But for the rest, I think the most straightforward way to enable high
throughput would be to also let the client side make TCP buffer requests.
Request in a nutshell: something like --tcp_sndbuf and --tcp_rcvbuf options
that result in the same setsockopt calls that rsync's socket.c code already
makes available via rsyncd.conf.
If I've totally missed something, and such functionality is already
available, my apologies (but I'd appreciate a pointer!)
******
More detail:
I was helping resolve a throughput issue between a research network...
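For reference, the calls such options would map to look roughly like this. A minimal sketch, assuming the proposed (hypothetical) --tcp_sndbuf/--tcp_rcvbuf values have already been parsed into ints; this is not rsync's actual code:

```c
#include <assert.h>
#include <sys/socket.h>
#include <unistd.h>

/* Sketch of what a client-side --tcp_sndbuf/--tcp_rcvbuf (hypothetical
 * option names from the request above) would boil down to.  Must run
 * between socket() and connect(), since the TCP window scale is
 * negotiated at connection setup.  Returns 0 on success, -1 on error. */
int set_tcp_buffers(int fd, int sndbuf, int rcvbuf)
{
    if (sndbuf > 0 &&
        setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof sndbuf) < 0)
        return -1;
    if (rcvbuf > 0 &&
        setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof rcvbuf) < 0)
        return -1;
    return 0;
}
```

Note that values above the net.core.wmem_max/rmem_max sysctls are silently clamped by the kernel, so raising those caps is part of the same tuning.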
2010 Jan 28 | 4 | Latency and Rsync Transfers
Hello,
Working with a few servers that are transferring data across the country with
a 75ms delay on a GigE connection. We can tune the TCP buffers on Linux to
improve the connections using iperf. Does rsync use the TCP buffers of the OS,
or does it override these settings?
Thanks,
Neal
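As far as socket options go, rsync uses the OS defaults unless told otherwise (e.g. via its --sockopts option), so OS-level tuning does apply. The buffer size to aim for follows from the bandwidth-delay product; a small worked sketch (the function name is just for illustration):

```c
#include <assert.h>

/* Bandwidth-delay product: the number of bytes that must be in flight
 * to keep the link full.  For the 1 Gbit/s, 75 ms path above this is
 * about 9.4 MB, well beyond typical untuned Linux buffer limits, which
 * is why untuned transfers stall far below line rate.  Integer math
 * keeps the result exact. */
long long bdp_bytes(long long bits_per_sec, long long rtt_ms)
{
    return bits_per_sec / 8 * rtt_ms / 1000;
}
```

Both ends of the connection need buffers at least this large for TCP to fill the pipe.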
2008 Apr 02 | 3 | DO NOT REPLY [Bug 5370] New: sockopts attempts to set options after the socket is open
https://bugzilla.samba.org/show_bug.cgi?id=5370
Summary: sockopts attempts to set options after the socket is open
Product: rsync
Version: 2.6.9
Platform: x86
URL: http://people.freebsd.org/~gordon/rsync-socket.diff
OS/Version: Linux
Status: NEW
Severity: normal...
2009 Aug 24 | 0 | CIFS slow on gigabit, doesn't support sockopt=TCP_NODELAY ?
...I mount the same network share from a
Mac, it's a lot faster. When _sharing_ via samba, one can set the
TCP_NODELAY option (among others), which fixes the problem. But with
the cifs client, I find that there appears to be no way to set the
option. When mounting manually, you can use "-o sockopt=TCP_NODELAY",
and you can also put that into /etc/fstab. Either way, the option
appears to be ignored.
I filed this bug with Gentoo: http://bugs.gentoo.org/265183
Am I doing this wrong? Is there a work-around? Or plans to fix it?
Thanks!
--
Timothy Normand Miller
http://www.cse.ohio-...
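For what it's worth, what the mount flag is asking for is a one-line setsockopt on the transport socket. A sketch of the underlying call (not the actual CIFS client code):

```c
#include <assert.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>
#include <unistd.h>

/* Disable Nagle's algorithm: small writes go out immediately instead of
 * waiting to be coalesced with later data.  This is the call that
 * sockopt=TCP_NODELAY is meant to trigger on the mount's TCP socket. */
int disable_nagle(int fd)
{
    int one = 1;
    return setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof one);
}
```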
2019 Oct 11 | 1 | [RFC PATCH 07/13] vsock: handle buffer_size sockopts in the core
On Thu, Oct 10, 2019 at 11:32:54AM +0200, Stefano Garzarella wrote:
> On Wed, Oct 09, 2019 at 01:30:26PM +0100, Stefan Hajnoczi wrote:
> > On Fri, Sep 27, 2019 at 01:26:57PM +0200, Stefano Garzarella wrote:
> > Another issue is that this patch drops the VIRTIO_VSOCK_MAX_BUF_SIZE
> > limit that used to be enforced by virtio_transport_set_buffer_size().
> > Now the limit
2019 Oct 10 | 0 | [RFC PATCH 07/13] vsock: handle buffer_size sockopts in the core
On Wed, Oct 09, 2019 at 01:30:26PM +0100, Stefan Hajnoczi wrote:
> On Fri, Sep 27, 2019 at 01:26:57PM +0200, Stefano Garzarella wrote:
> > @@ -140,18 +145,11 @@ struct vsock_transport {
> > struct vsock_transport_send_notify_data *);
> > int (*notify_send_post_enqueue)(struct vsock_sock *, ssize_t,
> > struct vsock_transport_send_notify_data *);
> > + int
2018 Apr 09 | 2 | volume start: gv01: failed: Quorum not met. Volume operation not allowed.
...9 05:08:13.711255] W [MSGID: 101002]
[options.c:995:xl_opt_validate] 0-glusterfs: option 'address-family' is
deprecated, preferred is 'transport.address-family', continuing with
correction
[2018-04-09 05:08:13.728297] W [socket.c:3216:socket_connect]
0-glusterfs: Error disabling sockopt IPV6_V6ONLY: "Protocol not available"
[2018-04-09 05:08:13.729025] I [MSGID: 101190]
[event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started thread
with index 1
[2018-04-09 05:08:13.737757] I [MSGID: 101190]
[event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started thre...
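The IPV6_V6ONLY warning above is usually harmless: it comes from an attempt like the sketch below, and "Protocol not available" (ENOPROTOOPT) is what the kernel returns when the option is applied to a socket that is not AF_INET6 (or when IPv6 support is absent). This is an illustrative reconstruction, not gluster's socket.c:

```c
#include <assert.h>
#include <errno.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

/* Disable IPV6_V6ONLY so a single AF_INET6 listener also accepts
 * IPv4-mapped connections.  On a plain AF_INET socket the option does
 * not exist and setsockopt() fails with ENOPROTOOPT, i.e. "Protocol
 * not available", the warning seen in the log above. */
int accept_both_families(int fd)
{
    int off = 0;
    return setsockopt(fd, IPPROTO_IPV6, IPV6_V6ONLY, &off, sizeof off);
}
```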
2019 Oct 09 | 2 | [RFC PATCH 07/13] vsock: handle buffer_size sockopts in the core
On Fri, Sep 27, 2019 at 01:26:57PM +0200, Stefano Garzarella wrote:
> @@ -140,18 +145,11 @@ struct vsock_transport {
> struct vsock_transport_send_notify_data *);
> int (*notify_send_post_enqueue)(struct vsock_sock *, ssize_t,
> struct vsock_transport_send_notify_data *);
> + int (*notify_buffer_size)(struct vsock_sock *, u64 *);
Is ->notify_buffer_size() called under
2018 Apr 09 | 0 | volume start: gv01: failed: Quorum not met. Volume operation not allowed.
...[MSGID: 101002]
> [options.c:995:xl_opt_validate] 0-glusterfs: option 'address-family' is
> deprecated, preferred is 'transport.address-family', continuing with
> correction
> [2018-04-09 05:08:13.728297] W [socket.c:3216:socket_connect]
> 0-glusterfs: Error disabling sockopt IPV6_V6ONLY: "Protocol not available"
> [2018-04-09 05:08:13.729025] I [MSGID: 101190]
> [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started thread
> with index 1
> [2018-04-09 05:08:13.737757] I [MSGID: 101190]
> [event-epoll.c:613:event_dispatch_epoll_worker]...
2011 Nov 28 | 0 | RFC: [PATCH] Add TCP congestion control and Diffserv options
...sion = PROTOCOL_VERSION;
@@ -776,6 +778,8 @@ void usage(enum logcode F)
rprintf(F," --address=ADDRESS bind address for outgoing socket to daemon\n");
rprintf(F," --port=PORT specify double-colon alternate port number\n");
rprintf(F," --sockopts=OPTIONS specify custom TCP options\n");
+ rprintf(F," --diffserv=[0-63] specify diffserv setting \n");
+ rprintf(F," --congestion-alg=STRING choose a congestion algo\n");
rprintf(F," --blocking-io use blocking I/O for the remote s...
2009 Mar 11 | 1 | bandwidth issue
...processes in parallel I get 340 KB/s for EACH process, which
is 5.4 Mbps in total. Starting a third parallel process bandwidth is going
down, but the sum is still about 5.4 Mbps.
I think the problem is related to the buffer size. Is it ok to change the
buffer size with this command line option:
--sockopts=SO_SNDBUF=130000,SO_RCVBUF=130000
Is it sufficient to change only those 2 parameters?
Will this also change the buffer size of the other host (via rsync
communication)? In the logs I do not get any information that
the sockopts parameter is changing something (although I am using
"vvv...
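One way to answer the "is it changing anything" question is to read the value back: rsync won't log it, but the kernel will report what it actually granted. A sketch (note that SO_SNDBUF/SO_RCVBUF only affect the local end; the peer has to set its own buffers, e.g. via sockopts in rsyncd.conf):

```c
#include <assert.h>
#include <sys/socket.h>
#include <unistd.h>

/* Read back the buffer size the kernel actually granted.  On Linux the
 * value returned is double what was requested (half is bookkeeping
 * overhead), and requests above net.core.wmem_max/rmem_max are silently
 * clamped, so reading it back is the only reliable check.
 * Returns the effective size, or -1 on error. */
int effective_bufsize(int fd, int optname)
{
    int val = 0;
    socklen_t len = sizeof val;
    if (getsockopt(fd, SOL_SOCKET, optname, &val, &len) < 0)
        return -1;
    return val;
}
```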
2019 Sep 27 | 0 | [RFC PATCH 07/13] vsock: handle buffer_size sockopts in the core
virtio_transport and vmci_transport handle the buffer_size
sockopts in a very similar way.
In order to support multiple transports, this patch moves this
handling in the core to allow the user to change the options
also if the socket is not yet assigned to any transport.
This patch also adds the '.notify_buffer_size' callback in the
'struct virtio_tr...
2019 Oct 23 | 0 | [PATCH net-next 07/14] vsock: handle buffer_size sockopts in the core
virtio_transport and vmci_transport handle the buffer_size
sockopts in a very similar way.
In order to support multiple transports, this patch moves this
handling in the core to allow the user to change the options
also if the socket is not yet assigned to any transport.
This patch also adds the '.notify_buffer_size' callback in the
'struct virtio_tr...
2019 Oct 30 | 1 | [PATCH net-next 07/14] vsock: handle buffer_size sockopts in the core
> From: Stefano Garzarella [mailto:sgarzare at redhat.com]
> Sent: Wednesday, October 23, 2019 11:56 AM
> Subject: [PATCH net-next 07/14] vsock: handle buffer_size sockopts in the
> core
>
> virtio_transport and vmci_transport handle the buffer_size sockopts in a
> very similar way.
>
> In order to support multiple transports, this patch moves this handling in the
> core to allow the user to change the options also if the socket is not yet
>...
2005 Dec 25 | 4 | Use of TCP_CORK instead of TCP_NODELAY
...in a piece of proprietary
software; icecast merely relays them.
However, the intended endpoint is an embedded device. This device has
trouble with TCP/IP packets not matching the max packet size (MSS, or MSS
minus header). After elaborate testing, we found that using the sockopt
'TCP_CORK' instead of 'TCP_NODELAY' produces far better results in the field
on reconnects etc. Also, with streaming media, TCP_CORK is more efficient
than TCP_NODELAY.
Patching icecast to use TCP_CORK is a piece of cake; it involves no more
than 10 lines of code. My question woul...
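The corking pattern described above, as a sketch rather than the actual icecast patch:

```c
#include <assert.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>
#include <unistd.h>

/* TCP_CORK (Linux-specific): while the cork is set, the kernel holds
 * partial frames and sends only full-MSS segments; clearing it flushes
 * the remainder.  Unlike TCP_NODELAY, which pushes every write out
 * immediately, this keeps outgoing packets at the maximum segment size,
 * which is what the embedded endpoint described above needs. */
int set_cork(int fd, int on)
{
    return setsockopt(fd, IPPROTO_TCP, TCP_CORK, &on, sizeof on);
}
```

Typical use: set_cork(fd, 1), make several small write() calls, then set_cork(fd, 0) to flush whatever is left.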
2019 Oct 14 | 1 | [PATCH v4 1/5] vsock/virtio: limit the memory used per-socket
...4:17, Stefan Hajnoczi wrote:
> SO_VM_SOCKETS_BUFFER_SIZE might have been useful for VMCI-specific
> applications, but we should use SO_RCVBUF and SO_SNDBUF for portable
> applications in the future. Those socket options also work with other
> address families.
>
> I guess these sockopts are bypassed by AF_VSOCK because it doesn't use
> the common skb queuing code in net/core/sock.c:(. But one day we might
> migrate to it...
>
> Stefan
+1, we should really consider reusing the existing socket mechanism
instead of reinventing the wheel.
Thanks
2003 Oct 26 | 1 | getsockopt TCP_NODELAY: Socket operation on non-socket
We get the warning above whenever we use a ProxyCommand. We _know_ it's
a pipe, so we can't use sockopts on it. So we shouldn't bitch about it.
This breaks all kinds of things which use SSH transparently; including
pine, which really wants the first thing it receives from an IMAP server
to be a valid imap greeting... which $subject is not.
$ ssh -o "proxycommand sh -c '( echo CONNECT...
2003 Jan 11 | 0 | SMBmount in daemon mode slow to write
...unt's write to the W2K
share is particularly slow.
My questions:
1. Are there ways to optimize the smbmount writing performance to an
SMB share? I have tried adding the following command line options while
using smbmount and found that they didn't have any impact on the write
speed:
sockopt=IPTOS_LOWDELAY,sockopt=TCP_NODELAY
2. Are there ways to monitor the smbmount's performance/process other
than the standard network sniffing tools?
Any light on this issue would be greatly appreciated. TIA.
-Jeffrey Chen
=====
Jeffrey Chen
digiMine Corp.
2018 Feb 23 | 1 | Error IPV6_V6ONLY
When I run
gluster v heal datastore full
the error in the log is
gfapi: Error disabling sockopt IPV6_V6ONLY
The bricks are up.
Version is 3.13.2