Displaying 20 results from an estimated 25 matches for "sendq".
2006 Nov 20
10
[Bug 1263] connection sharing often freezes
http://bugzilla.mindrot.org/show_bug.cgi?id=1263
Summary: connection sharing often freezes
Product: Portable OpenSSH
Version: v4.5p1
Platform: PPC
OS/Version: Mac OS X
Status: NEW
Severity: major
Priority: P2
Component: ssh
AssignedTo: bitbucket at mindrot.org
ReportedBy: vincent at
2010 Jan 13
7
[Bug 1697] New: scp transfers from remote cygwin machine fail with ssh versions >= 4.6
https://bugzilla.mindrot.org/show_bug.cgi?id=1697
Summary: scp transfers from remote cygwin machine fail with ssh
versions >= 4.6
Product: Portable OpenSSH
Version: 4.6p1
Platform: ix86
OS/Version: Linux
Status: NEW
Severity: major
Priority: P2
Component: scp
AssignedTo:
2006 Jan 04
1
[Bug 1143] connections with "sshd: root@notty" is established but not closed
http://bugzilla.mindrot.org/show_bug.cgi?id=1143
Summary: connections with "sshd: root@notty" is established but
not closed
Product: Portable OpenSSH
Version: 3.9p1
Platform: ix86
OS/Version: Linux
Status: NEW
Severity: critical
Priority: P2
Component: Kerberos support
2002 May 30
0
update on hung rsyncs
...It seems like some delay/glitch/issue with
NFS on the destination might be causing occasional/random trouble for
my rsync processes. This NFS factor is something that people
are bringing up more and more lately. Ideas? I'll try 2.5.5 with the
generator patch on the destinations.
SendQ and RecvQ are 0 on the source sockets.
strace shows the parent rsync process on source is stuck in this endless
loop:
gettimeofday({1022796482, 605543}, NULL) = 0
wait4(8783, 0xbffffc48, WNOHANG, NULL) = 0
gettimeofday({1022796482, 605602}, NULL) = 0
gettimeofday({1022796482, 605626}, NULL) = 0...
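For context, a minimal C sketch (not rsync's actual code, just the shape the
strace implies): a parent that polls a live child with wait4(..., WNOHANG, ...)
and never blocks in between spins at 100% CPU, with wait4() returning 0 on
every pass, exactly as traced above.

#include <sys/time.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
	pid_t pid = fork();
	if (pid == 0) {			/* child: stand-in for a stuck receiver */
		pause();
		_exit(0);
	}
	int status;
	struct timeval tv;
	for (;;) {
		gettimeofday(&tv, NULL);	/* timestamps, as in the trace */
		if (wait4(pid, &status, WNOHANG, NULL) == pid)
			break;			/* child finally exited */
		/* no sleep/select here -> the endless loop above */
	}
	return 0;
}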
2009 Aug 21
1
problem with asterisk hylafax and sangoma A200D
...]: SEND FAX: JOB 7 DEST 922668982
COMMID 000000013 DEVICE '/dev/
ttyIAX' FROM 'root <user at domain.com.ni>' USER root
Aug 20 16:11:21 voz FaxSend[6715]: SEND FAILED: JOB 7 DEST 922668982
ERR [2] No carrier detected
Aug 20 16:11:22 voz FaxQueuer[3504]: NOTIFY: bin/notify "sendq/q7"
"requeued" "" "16:16"
Aug 20 16:11:23 voz FaxQueuer[3504]: NOTIFY exit status: 0 (6721)
Aug 20 16:11:30 voz FaxGetty[3666]: MODEM set DTR OFF
Aug 20 16:11:30 voz FaxGetty[3666]: MODEM set baud rate: 0 baud (flow
control unchanged)
Aug 20 16:11:30 voz FaxGetty[...
2017 Mar 29
5
[PATCH 1/6] virtio: wrap find_vqs
...d;
diff --git a/drivers/rpmsg/virtio_rpmsg_bus.c b/drivers/rpmsg/virtio_rpmsg_bus.c
index 5e66e08..f7cade0 100644
--- a/drivers/rpmsg/virtio_rpmsg_bus.c
+++ b/drivers/rpmsg/virtio_rpmsg_bus.c
@@ -869,7 +869,7 @@ static int rpmsg_probe(struct virtio_device *vdev)
init_waitqueue_head(&vrp->sendq);
/* We expect two virtqueues, rx and tx (and in this order) */
- err = vdev->config->find_vqs(vdev, 2, vqs, vq_cbs, names, NULL);
+ err = virtio_find_vqs(vdev, 2, vqs, vq_cbs, names, NULL);
if (err)
goto free_vrp;
diff --git a/drivers/scsi/virtio_scsi.c b/drivers/scsi/virtio_scsi....
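Judging from the call sites in the hunks, the wrapper this series introduces
is a thin inline over the transport's config op; a plausible reconstruction
(the real definition would live in include/linux/virtio_config.h):

/* Forward to the transport's find_vqs op; the trailing argument is the
 * IRQ-affinity descriptor, NULL at call sites that don't use it. */
static inline int virtio_find_vqs(struct virtio_device *vdev, unsigned nvqs,
				  struct virtqueue *vqs[],
				  vq_callback_t *callbacks[],
				  const char * const names[],
				  struct irq_affinity *desc)
{
	return vdev->config->find_vqs(vdev, nvqs, vqs, callbacks, names, desc);
}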
2019 Nov 09
4
SSH hang question
Very rarely, but it has repeated, we see OpenSSH on the client side
hanging. On the server side there is no indication of the connection in
the logs. These are always scripted remote commands with no user
interaction when we find them. This seems to be happening only in VM
environments, but I could be wrong. It seems surprising to me that there
would not be timeouts and retries in the protocol,
2012 Jan 25
2
Server/Client Alive mechanism issues
Hello,
I have a bandwidth-constrained connection that I'd like to run rsync
over, through an SSH tunnel. I also want to detect any network drops
pretty rapidly.
On the servers I'm setting (via sshd_config):
ClientAliveCountMax 5
ClientAliveInterval 1
TCPKeepAlive no
and on the clients I'm setting (via ssh_config):
ServerAliveCountMax 5
ServerAliveInterval 1
TCPKeepAlive no
After
2015 Feb 09
3
Connection stalls at debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
...ion data /home/meta/.ssh/config
[...]
debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<7680<8192) sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
On Mon Feb 09 2015 at 1:29:17 PM Darren Tucker <dtucker at zip.com.au> wrote:
> I'd add: if you run netstat on both ends and see "SendQ" non-zero and
> not decreasing, then this is likely your problem.
>
With the -m parameter as above, running ss on the client, I see Send-Q go
to 1208 and then sit there until I Ctrl-C out the client, when it increases
by 1. On the server side, I see nothing. Is that plausible, that the c...
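For what that Send-Q figure is: the bytes queued on the socket that the peer
has not yet acknowledged, and on Linux it can be read directly off a socket
(sketch below; SIOCOUTQ is Linux-specific):

#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/sockios.h>	/* SIOCOUTQ */

/* Print the socket's outstanding send-queue bytes, the same number
 * netstat/ss report in the Send-Q column. A value that stays non-zero
 * and never drains suggests the path (e.g. an MTU black hole) or the
 * peer is not accepting more data. */
int print_sendq(int sockfd)
{
	int unsent = 0;
	if (ioctl(sockfd, SIOCOUTQ, &unsent) < 0) {
		perror("ioctl(SIOCOUTQ)");
		return -1;
	}
	printf("Send-Q: %d bytes\n", unsent);
	return 0;
}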
2015 Sep 20
4
OpenSSH Always Hangs When Connecting to Remote
On 09/20/2015 03:25 AM, Darren Tucker wrote:
> I suspect a path mtu problem. The key exchange packet is one of the
> first large ones in an SSH connection so it tends to show up such problems.
>
> See http://www.snailbook.com/faq/mtu-mismatch.auto.html
Has this been changed? SSH used to work fine on my old machine. My
2015 Feb 09
3
Connection stalls at debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
Trying to connect from Fedora 21 to CentOS 6.6, OpenSSH on both ends.
Connection is via a VPN.
Initially the connection seems good, but OpenSSH stalls at
debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP.
Software version on servers:
openssh-server-5.3p1-104.el6_6.1.x86_64
openssh-5.3p1-104.el6_6.1.x86_64
Software version on client:
openssh-6.6.1p1-11.1.fc21.x86_64
also duplicated problem using
2016 May 20
4
Directory listing fails for specific user
Hello,
We have recently had a new problem with one of the users on one of our servers.
FileZilla claims it connects and authenticates, but then fails to list the directory (no error message is output; it just eventually times out). The full output of FileZilla is located here: http://pastebin.com/tAVcSP8Y
From the server side, the most verbose output I can make it print can
2017 Jan 27
0
[PATCH 5/9] virtio: allow drivers to request IRQ affinity when creating VQs
...;
diff --git a/drivers/rpmsg/virtio_rpmsg_bus.c b/drivers/rpmsg/virtio_rpmsg_bus.c
index 3090b0d3..5e66e08 100644
--- a/drivers/rpmsg/virtio_rpmsg_bus.c
+++ b/drivers/rpmsg/virtio_rpmsg_bus.c
@@ -869,7 +869,7 @@ static int rpmsg_probe(struct virtio_device *vdev)
init_waitqueue_head(&vrp->sendq);
/* We expect two virtqueues, rx and tx (and in this order) */
- err = vdev->config->find_vqs(vdev, 2, vqs, vq_cbs, names);
+ err = vdev->config->find_vqs(vdev, 2, vqs, vq_cbs, names, NULL);
if (err)
goto free_vrp;
diff --git a/drivers/s390/virtio/kvm_virtio.c b/drivers/s390...
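The hunk above shows find_vqs gaining a final NULL argument; the implied
shape of the extended config op (as declared in struct virtio_config_ops):

/* Extra struct irq_affinity descriptor so transports can spread MSI-X
 * vectors across CPUs; callers that don't care pass NULL and keep the
 * old behavior. */
int (*find_vqs)(struct virtio_device *vdev, unsigned nvqs,
		struct virtqueue *vqs[], vq_callback_t *callbacks[],
		const char * const names[], struct irq_affinity *desc);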
2005 Feb 24
1
SAMBA + LDAP : Unable to Login on a domain
...ory mode = 0777
[Faxs recus]
comment = Incoming faxes
path = /home/services/fax/recvq
read only = yes
writable = no
users = @users
force group = users
force create mode = 0664
force directory mode = 0775
[Faxs env]
comment = Outgoing faxes
path = /home/services/fax/sendq
read only = yes
writable = no
users = @users
force group = users
force create mode = 0664
force directory mode = 0775
[Propositions]
comment = Commercial proposals directory
path = /home/propositions
read only = no
writable = yes
users = @users
valid users...
2008 Dec 14
5
[PATCH] AF_VMCHANNEL address family for guest<->host communication.
...ff_q.lock);
+ while ((skb = skb_peek(&vmc_dev.tx_skbuff_q))) {
+ if (vmchannel_try_send_one(skb))
+ break;
+ __skb_unlink(skb, &vmc_dev.tx_skbuff_q);
+ sent++;
+ }
+ spin_unlock(&vmc_dev.tx_skbuff_q.lock);
+ if (sent)
+ vmc_dev.sq->vq_ops->kick(vmc_dev.sq);
+}
+
+static void sendq_notify(struct virtqueue *sendq)
+{
+ tasklet_schedule(&vmc_dev.tx_tasklet);
+}
+
+static int vmchannel_send_skb(struct sk_buff *skb, const __u32 id)
+{
+ struct vmchannel_desc *desc;
+
+ desc = skb_vmchannel_desc(skb);
+ desc->id = cpu_to_le32(id);
+ desc->len = cpu_to_le32(skb->len);...
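The pattern in this excerpt: the virtqueue callback runs in interrupt
context, so it only schedules a tasklet, and the tasklet later drains the
pending-skb queue and kicks the host once per batch. A minimal self-contained
sketch of that deferral (names illustrative, not the patch's API):

#include <linux/interrupt.h>
#include <linux/virtio.h>

static struct tasklet_struct tx_tasklet;

/* virtqueue callback: interrupt context, so defer the real work */
static void sendq_cb(struct virtqueue *sendq)
{
	tasklet_schedule(&tx_tasklet);
}

/* tasklet body: softirq context, safe to walk queues and kick the device */
static void tx_flush(unsigned long data)
{
	/* drain the pending queue here, then kick once for the whole batch */
}

static void tx_init(void)
{
	tasklet_init(&tx_tasklet, tx_flush, 0);
}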
2014 Dec 26
8
[RFC PATCH 0/3] Sharing MSIX irq for tx/rx queue pairs
Hi all:
This series tries to share an MSIX irq for each tx/rx queue pair. This is
done through:
- introducing a virtio pci channel, which is a group of virtqueues
sharing a single MSIX irq (Patch 1)
- exposing the channel setting to the virtio core api (Patch 2)
- trying to use the channel setting in virtio-net (Patch 3)
For transports that do not support channels, the channel parameters
are simply ignored. For
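A sketch of the sharing idea, grounded in how virtio already services a vq
from an interrupt via vring_interrupt(); the grouping struct and handler
names here are illustrative, not the patch's API:

#include <linux/interrupt.h>
#include <linux/virtio.h>
#include <linux/virtio_ring.h>

struct vq_channel {
	struct virtqueue *vqs[2];	/* e.g. the rx and tx of one pair */
	unsigned int nvqs;
};

/* One shared MSI-X handler: poll each vq in the group and report
 * handled if any of them had work pending. */
static irqreturn_t channel_interrupt(int irq, void *data)
{
	struct vq_channel *ch = data;
	irqreturn_t ret = IRQ_NONE;
	unsigned int i;

	for (i = 0; i < ch->nvqs; i++)
		if (vring_interrupt(irq, ch->vqs[i]) == IRQ_HANDLED)
			ret = IRQ_HANDLED;
	return ret;
}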
2016 Nov 17
13
automatic IRQ affinity for virtio
Hi Michael,
this series contains a couple of cleanups for the virtio_pci interrupt
handling code, including a switch to the new pci_alloc_irq_vectors
helper, and support for automatic affinity by the PCI layer if the
consumers ask for it. It then converts virtio_blk to use this
functionality so that its blk-mq queues are aligned to the MSI-X
vector routing. I have a similar patch in the
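For reference, a minimal sketch of the helper in question: requesting MSI-X
vectors with automatic affinity spreading from the PCI core (the surrounding
driver code is assumed):

#include <linux/pci.h>

/* Ask for up to 'want' MSI-X vectors and let the PCI core spread them
 * across CPUs (PCI_IRQ_AFFINITY). Returns the number actually
 * allocated, or a negative errno. */
static int alloc_affine_vectors(struct pci_dev *pdev, int want)
{
	return pci_alloc_irq_vectors(pdev, 1, want,
				     PCI_IRQ_MSIX | PCI_IRQ_AFFINITY);
}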