Displaying 20 results from an estimated 355 matches for "sendmsg".
2015 Mar 15
2
virtio-net: tx queue was stopped
...p_seq=21 ttl=64 time=0.094 ms
> 64 bytes from 9.62.1.2: icmp_seq=22 ttl=64 time=0.098 ms
> 64 bytes from 9.62.1.2: icmp_seq=23 ttl=64 time=0.097 ms
> 64 bytes from 9.62.1.2: icmp_seq=24 ttl=64 time=0.095 ms
> 64 bytes from 9.62.1.2: icmp_seq=25 ttl=64 time=0.095 ms
> ....
> ping: sendmsg: No buffer space available
> ping: sendmsg: No buffer space available
> ping: sendmsg: No buffer space available
> ping: sendmsg: No buffer space available
> ping: sendmsg: No buffer space available
> ping: sendmsg: No buffer space available
> ....
>
> --
> R...
2006 Oct 31
1
Fw: domU network problem , 10/30 progress
...ytes from 9.2.78.83: icmp_seq=8 ttl=64 time=10.0 ms
64 bytes from 9.2.78.83: icmp_seq=9 ttl=64 time=3.96 ms
64 bytes from 9.2.78.83: icmp_seq=10 ttl=64 time=10.0 ms
64 bytes from 9.2.78.83: icmp_seq=11 ttl=64 time=3.89 ms
64 bytes from 9.2.78.83: icmp_seq=12 ttl=64 time=2.73 ms
ping: sendmsg: No buffer space available
ping: sendmsg: No buffer space available
ping: sendmsg: No buffer space available
ping: sendmsg: No buffer space available
ping: sendmsg: No buffer space available
ping: sendmsg: No buffer space available
ping: sendmsg: No buffer space available
2. brct...
2006 Oct 31
0
6262586 truss should print data written/read for sendmsg/recvmsg
Author: ja97890
Repository: /hg/zfs-crypto/gate
Revision: 4e8bdca5612d8e39ecb94db81ca5272fe9872d89
Log message:
6262586 truss should print data written/read for sendmsg/recvmsg
Files:
update: usr/src/cmd/truss/actions.c
update: usr/src/cmd/truss/expound.c
2001 Dec 28
0
multiple kernel routing tables & sendmsg
...tiple routing tables (or FIBs) that I am hoping someone can answer.
This support is also used by the iproute2 package. My question is:
For applications sending raw packets, is it possible to specify the routing table to use when sending? Currently, I am finding that calls to sendmsg() use the main table by default, but I can't find a way to make it use a different one (or a different call).
Thanks in advance,
Robert.
2002 Jan 16
1
Kernel boot problem using PXELinux boot
...v/linux/driver/eepro100.html
eepro100. $Revision: 1.20.2.10 $ 2000/05/31 Modified by Andrey V. Savochkin <
aw at sw.com.sg> and others
eepro100.c: VA Linux custom, Dragan Stancevic visitor at valinux.com 2000/11/15
Partition Check:
hda: hda1
Looking up port of RPC 100003/2 on 140.111.161.1
RPC: sendmsg returned error 101
portmap: RPC call returned error 101
Root-NFS: Unable to get nfsd port number from server, using default
Looking up port of RPC 100005/1 on 140.111.161.1
RPC: sendmsg returned error 101
portmap: RPC call returned error 101
Root-NFS: Unable to get mountd port number from server, us...
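Error 101 in these RPC sendmsg logs is ENETUNREACH ("Network is unreachable") on x86/ARM Linux: the interface had no usable IP configuration when the NFS-root code tried to reach the portmapper. A quick sanity check of the mapping (a sketch, not part of the original report):

```c
#include <errno.h>
#include <string.h>

/* Map the numeric code from a "sendmsg returned error 101" log line
 * back to its errno message.  On x86/ARM Linux, 101 == ENETUNREACH. */
const char *rpc_log_error(int code)
{
    return strerror(code);   /* 101 -> "Network is unreachable" (glibc) */
}
```

So the repeated "RPC: sendmsg returned error 101" lines mean the kernel could not find any route to the server, which usually traces back to the incomplete IP-Config step earlier in the boot log.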
2018 Nov 15
3
[RFC] Discuss about an new idea "Vsock over Virtio-net"
...+------------+
>> |
>> |
>> +------------------------------------------------------------------+
>> |VSOCK Core Module |
>> |ops->sendmsg; (vsock_stream_sendmsg) |
>> | -> alloc_skb; /* it will packet a skb buffer, and include vsock |
>> | * hdr and payload */ |
>> | -> dev_queue_xmit(); /* it will call start_xmit(virtio-net.c) */|
>...
2005 Aug 23
6
NFS-root problem
...nd searching the archive, haven't got anything
helpful. Would appreciate any help.
Got the follow error when trying to start a domain using NFS root
IP-Config: Incomplete network configuration information.
Looking up port of RPC 100003/2 on 10.10.24.141 <http://10.10.24.141>
RPC: sendmsg returned error 101
portmap: RPC call returned error 101
Root-NFS: Unable to get nfsd port number from server, using default
Looking up port of RPC 100005/1 on 10.10.24.141 <http://10.10.24.141>
RPC: sendmsg returned error 101
--
My domain configuration
***********************
kernel = "...
2015 Mar 16
1
virtio-net: tx queue was stopped
...94 ms
>> 64 bytes from 9.62.1.2: icmp_seq=22 ttl=64 time=0.098 ms
>> 64 bytes from 9.62.1.2: icmp_seq=23 ttl=64 time=0.097 ms
>> 64 bytes from 9.62.1.2: icmp_seq=24 ttl=64 time=0.095 ms
>> 64 bytes from 9.62.1.2: icmp_seq=25 ttl=64 time=0.095 ms
>> ....
>> ping: sendmsg: No buffer space available
>> ping: sendmsg: No buffer space available
>> ping: sendmsg: No buffer space available
>> ping: sendmsg: No buffer space available
>> ping: sendmsg: No buffer space available
>> ping: sendmsg: No buffer space available
>> .....
2015 Mar 16
0
virtio-net: tx queue was stopped
...rom 9.62.1.2: icmp_seq=22 ttl=64 time=0.098 ms
> >> 64 bytes from 9.62.1.2: icmp_seq=23 ttl=64 time=0.097 ms
> >> 64 bytes from 9.62.1.2: icmp_seq=24 ttl=64 time=0.095 ms
> >> 64 bytes from 9.62.1.2: icmp_seq=25 ttl=64 time=0.095 ms
> >> ....
> >> ping: sendmsg: No buffer space available
> >> ping: sendmsg: No buffer space available
> >> ping: sendmsg: No buffer space available
> >> ping: sendmsg: No buffer space available
> >> ping: sendmsg: No buffer space available
> >> ping: sendmsg: No buffer sp...
2004 Aug 24
2
Unmounting Errors On Reboot
I am receiving these errors when rebooting my system.
Unmounting file systems: (1785) ERROR: unable to sendmsg, error=-101,
Linux/ocfsipc.c, 394
(1785) ERROR: status = -999, Linux/ocfsipc.c, 206
ocfs: Unmounting device (104,33) on rac1-priv1.collo.corp.net (node 2)
(1785) ERROR: unable to sendmsg, error=-101, Linux/ocfsipc.c, 394
(1785) ERROR: status = -999, Linux/ocfsipc.c, 206
It looks like the no...
2018 Sep 06
0
[PATCH net-next 10/11] tap: accept an array of XDP buffs through sendmsg()
...+ } else {
+ kfree_skb(skb);
+ }
+ rcu_read_unlock();
+
+ return 0;
+
+err_kfree:
+ kfree_skb(skb);
+err:
+ rcu_read_lock();
+ tap = rcu_dereference(q->tap);
+ if (tap && tap->count_tx_dropped)
+ tap->count_tx_dropped(tap);
+ rcu_read_unlock();
+ return err;
+}
+
static int tap_sendmsg(struct socket *sock, struct msghdr *m,
size_t total_len)
{
struct tap_queue *q = container_of(sock, struct tap_queue, sock);
struct tun_msg_ctl *ctl = m->msg_control;
+ struct xdp_buff *xdp;
+ int i;
- if (ctl && ctl->type != TUN_MSG_UBUF)
- return -EINVAL;
+ if (ctl...
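The dispatch the truncated diff is adding can be pictured in user space. The sketch below is a hypothetical analogue, not the kernel code: when the control block says the payload is a pointer array (the patch's TUN_MSG_PTR case), each XDP buffer in the batch is queued in a loop; otherwise the old single-buffer path runs. All names here are illustrative:

```c
#include <stddef.h>

/* Hypothetical user-space analogue of the batched tap_sendmsg()
 * dispatch: msg_control may carry an array of XDP buffers instead of a
 * single ubuf.  Names (msg_ctl, MSG_PTR, batched_send) are illustrative,
 * not the kernel's. */
enum msg_ctl_type { MSG_UBUF, MSG_PTR };

struct msg_ctl {
    enum msg_ctl_type type;
    void **ptr;              /* array of buffers when type == MSG_PTR */
    size_t num;
};

/* Collect the buffers the real code would transmit; returns the count. */
size_t batched_send(const struct msg_ctl *ctl, void *single,
                    void *out[], size_t cap)
{
    size_t done = 0;

    if (ctl && ctl->type == MSG_PTR) {
        for (size_t i = 0; i < ctl->num && done < cap; i++)
            out[done++] = ctl->ptr[i];   /* batch path: one pass per buffer */
    } else if (done < cap) {
        out[done++] = single;            /* legacy single-buffer path */
    }
    return done;
}
```

The point of the patch is exactly this shape: a single sendmsg() call amortizes the per-call overhead across a whole batch of XDP buffers instead of paying it per packet.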
2017 May 09
1
答复: The memory maybe leak in samba 4.3.11
...==2796353== by 0x7171207: ctdbd_migrate (in /usr/lib/x86_64-linux-gnu/libsmbconf.so.0)
> ==2796353== by 0x716BD6E: ??? (in /usr/lib/x86_64-linux-gnu/libsmbconf.so.0)
> ==2796353== by 0xAE6692F: ??? (in /usr/lib/x86_64-linux-gnu/samba/libdbwrap.so.0)
>
> I found that sendmsg always fails with errno=EINTR, but smbd still needs to malloc for the new messages, so the RES of smbd grows quickly.
>
> I added some code in unix_dgram_send_job to retry the send up to 10 times when sendmsg fails with EINTR, and the RES no longer grows.
> Another, keep the max queue length...
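The workaround described above, a bounded retry when sendmsg is interrupted by a signal, can be sketched as a standalone helper. This is a hypothetical illustration, not Samba's actual unix_dgram_send_job code:

```c
#include <errno.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Sketch of the fix described above: retry sendmsg() a bounded number
 * of times when it is interrupted by a signal (EINTR), instead of
 * giving up and leaving the queued copy to accumulate.  Hypothetical
 * helper, not Samba's unix_dgram_send_job. */
ssize_t sendmsg_retry(int fd, const struct msghdr *msg, int flags,
                      int max_tries)
{
    ssize_t n;

    do {
        n = sendmsg(fd, msg, flags);
    } while (n < 0 && errno == EINTR && --max_tries > 0);

    return n;   /* on failure the caller still sees the final errno */
}
```

Bounding the retries (rather than looping forever) keeps a persistently interrupted sender from stalling, which matches the "just send 10 times" behavior the poster describes.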
2018 Sep 06
0
[PATCH net-next 09/11] tuntap: accept an array of XDP buffs through sendmsg()
...(tun->pcpu_stats);
+ u64_stats_update_begin(&stats->syncp);
+ stats->rx_packets++;
+ stats->rx_bytes += skb->len;
+ u64_stats_update_end(&stats->syncp);
+ put_cpu_ptr(stats);
+
+ if (rxhash)
+ tun_flow_update(tun, rxhash, tfile);
+
+out:
+ return err;
+}
+
static int tun_sendmsg(struct socket *sock, struct msghdr *m, size_t total_len)
{
- int ret;
+ int ret, i;
struct tun_file *tfile = container_of(sock, struct tun_file, socket);
struct tun_struct *tun = tun_get(tfile);
struct tun_msg_ctl *ctl = m->msg_control;
+ struct xdp_buff *xdp;
if (!tun)
return -EBA...
2018 Nov 15
2
[RFC] Discuss about an new idea "Vsock over Virtio-net"
...>> |
>>>> |
>>>> +------------------------------------------------------------------+
>>>> |VSOCK Core Module |
>>>> |ops->sendmsg; (vsock_stream_sendmsg) |
>>>> | -> alloc_skb; /* it will packet a skb buffer, and include vsock |
>>>> | * hdr and payload */ |
>>>> | -> dev_queue_xmit(); /* it will call start_xmi...