search for: rmem

Displaying 20 results from an estimated 24 matches for "rmem".

2006 Jan 03
3
ip_queue module issue
...cal/snort-lib/lib \ --with-libnet-includes=/usr/local/snort-lib/include \ --with-libnet-libraries=/usr/local/snort-lib/lib \ --with-libipq-includes=/usr/local/iptables/include \ --with-libipq-libraries=/usr/local/iptables/lib \ --enable-inline cat /proc/net/netlink> sk Eth Pid Groups Rmem Wmem Dump Locks c11c8040 0 0 00000000 0 0 00000000 2 c7ec0140 3 0 00000000 0 0 00000000 7 c11c8780 4 0 00000000 0 0 00000000 2 c7e74c40 5 0 00000000 0 0 00000000 2 Starting SNORT now: /usr/local/snort/b...
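In the /proc/net/netlink table quoted above, the Rmem and Wmem columns are the bytes currently queued in each netlink socket's receive and send buffers; the ip_queue socket used by snort-inline shows up as one of those rows. A minimal C sketch, not from the original thread and assuming the older column layout shown above, that prints those columns:

#include <stdio.h>

int main(void)
{
        char sk[32], groups[16];
        int eth, pid;
        long rmem, wmem;
        FILE *f = fopen("/proc/net/netlink", "r");

        if (!f)
                return 1;

        /* Skip the header line: "sk Eth Pid Groups Rmem Wmem Dump Locks". */
        if (fscanf(f, "%*[^\n]%*c") == EOF) {
                fclose(f);
                return 1;
        }

        /* Print each socket's address and its queued receive/send bytes. */
        while (fscanf(f, "%31s %d %d %15s %ld %ld%*[^\n]%*c",
                      sk, &eth, &pid, groups, &rmem, &wmem) == 6)
                printf("sock %s proto %d pid %d: Rmem=%ld Wmem=%ld\n",
                       sk, eth, pid, rmem, wmem);

        fclose(f);
        return 0;
}

If the Rmem figure for the queue's socket keeps growing, userspace is not draining it fast enough.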
2020 Apr 28
0
[PATCH 5/5] virtio: Add bounce DMA ops
...aration] 144 | if (!of_get_flat_dt_prop(node, "no-map", NULL)) | ^~~~~~~~~~~~~~~~~~~ cc1: some warnings being treated as errors vim +/of_get_flat_dt_prop +144 drivers/virtio/virtio_bounce.c 139 140 static int __init virtio_bounce_setup(struct reserved_mem *rmem) 141 { 142 unsigned long node = rmem->fdt_node; 143 > 144 if (!of_get_flat_dt_prop(node, "no-map", NULL)) 145 return -EINVAL; 146 147 return virtio_register_bounce_buffer(rmem->base, rmem->size); 148 } 149 --- 0-DAY CI Kernel Test Service, Int...
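The 0-day warning above ("implicit declaration of function of_get_flat_dt_prop") just means the function is called without a visible prototype in virtio_bounce.c. In mainline kernels that helper is declared in <linux/of_fdt.h>, so the likely fix is adding the missing includes at the top of the file; an untested sketch, not part of the posted patch:

#include <linux/of_fdt.h>          /* declares of_get_flat_dt_prop() */
#include <linux/of_reserved_mem.h> /* struct reserved_mem, RESERVEDMEM_OF_DECLARE() */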
2020 Apr 28
0
[PATCH 5/5] virtio: Add bounce DMA ops
...ckend driver is to make use of a bounce buffer. The bounce > buffer is accessible to both backend and frontend drivers. All IO > buffers that are in private space of guest VM are bounced to be > accessible to backend. [...] > +static int __init virtio_bounce_setup(struct reserved_mem *rmem) > +{ > + unsigned long node = rmem->fdt_node; > + > + if (!of_get_flat_dt_prop(node, "no-map", NULL)) > + return -EINVAL; > + > + return virtio_register_bounce_buffer(rmem->base, rmem->size); > +} > + > +RESERVEDMEM_OF_DECLARE(virtio, "virtio_...
2020 Apr 28
0
[PATCH 5/5] virtio: Add bounce DMA ops
...r(phys_addr_t base, size_t size) > +{ > + if (bounce_buf_paddr || !base || size < PAGE_SIZE) > + return -EINVAL; > + > + bounce_buf_paddr = base; > + bounce_buf_size = size; > + > + return 0; > +} > + > +static int __init virtio_bounce_setup(struct reserved_mem *rmem) > +{ > + unsigned long node = rmem->fdt_node; > + > + if (!of_get_flat_dt_prop(node, "no-map", NULL)) > + return -EINVAL; > + > + return virtio_register_bounce_buffer(rmem->base, rmem->size); > +} > + > +RESERVEDMEM_OF_DECLARE(virtio, "virtio_...
2017 Jan 24
2
[PATCH v2] virtio_net: fix PAGE_SIZE > 64k
...in drivers use at the > moment as well. It bothers me that this becomes a part of userspace ABI. Apps will see that everyone does 256 and will assume it, we'll never be able to go back. This does mean that XDP_PASS will use much more memory for small packets and by extension need a higher rmem limit. Would all admins be comfortable with this? Why would they want to if all their XDP does is DROP? Why not teach applications to query the headroom? Or even better, do what we do with skbs and do data copies whenever you run out of headroom instead of a failure. Anyone using build_skb already...
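The "rmem limit" referred to here is socket receive-buffer accounting: every buffered packet is charged at its truesize against the socket's SO_RCVBUF, which is itself capped by net.core.rmem_max, so 256 bytes of headroom plus page-per-packet allocation exhausts the same budget with far fewer packets. A small, generic C illustration of that cap, using only the standard socket API and nothing taken from the patch:

#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        int req = 4 * 1024 * 1024;   /* ask for a 4 MiB receive buffer */
        int got = 0;
        socklen_t len = sizeof(got);

        if (fd < 0)
                return 1;

        setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &req, sizeof(req));
        getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &got, &len);

        /* The kernel doubles the request for overhead accounting and clamps
         * it at net.core.rmem_max; the printed value reflects both. */
        printf("requested %d, effective SO_RCVBUF %d bytes\n", req, got);

        close(fd);
        return 0;
}

Comparing the printed value with /proc/sys/net/core/rmem_max shows the clamping in effect.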
2017 Jan 24
1
[PATCH v2] virtio_net: fix PAGE_SIZE > 64k
...> If you are trying to do buffering differently for virtio_net, well... > that's a self inflicted wound as far as I can tell. Right but I was wondering about the fact that this makes XDP_PASS much slower than processing skbs without XDP, as truesize is huge so we'll quickly run out of rmem space. When XDP is used to fight DOS attacks, why isn't this a concern? -- MST
2003 Aug 23
0
[ANNOUNCE] ulogd-1.01 released
...://ftp.netfilter.org/pub/ulogd/ulogd-1.01.tar.bz2 There is a GPG signature for verification of the authenticity of the archive: ftp://ftp.netfilter.org/pub/ulogd/ulogd-1.01.tar.bz2.sig Changes (since Version 1.00) - use $(LD) macro in order to provide cross-compiling/linking support - add 'rmem' configuration key to set the netlink socket rmem buffsize - don't use kernel header files for IP/TCP header definitions - various cosmetic cleanup to compile with -Wall - fix usage of libmysqlclient: call mysql_init() before mysql_real_connect() - don't have LOGEMU read the system time...
2011 Sep 09
1
Slow performance - 4 hosts, 10 gigabit ethernet, Gluster 3.2.3
Hi everyone, I am seeing slower-than-expected performance in Gluster 3.2.3 between 4 hosts with 10 gigabit eth between them all. Each host has 4x 300GB SAS 15K drives in RAID10, 6-core Xeon E5645 @ 2.40GHz and 24GB RAM running Ubuntu 10.04 64-bit (I have also tested with Scientific Linux 6.1 and Debian Squeeze - same results on those as well). All of the hosts mount the volume using the FUSE
2017 Jan 24
0
[PATCH v2] virtio_net: fix PAGE_SIZE > 64k
...as well. > > It bothers me that this becomes a part of userspace ABI. > Apps will see that everyone does 256 and will assume it, > we'll never be able to go back. > > This does mean that XDP_PASS will use much more memory > for small packets and by extension need a higher rmem limit. > Would all admins be comfortable with this? Why would they want > to if all their XDP does is DROP? > Why not teach applications to query the headroom? This works in the regimen that XDP packets always live in exactly one page. That will be needed to mmap the RX ring into userspa...
2017 Jan 25
0
[PATCH v2] virtio_net: fix PAGE_SIZE > 64k
...ux stack to work > > reasonably well together. > > btw the micro benchmarks showed that page per packet approach > that xdp took in mlx4 should be 10% slower vs normal operation > for tcp/ip stack. Interesting. TCP only or UDP too? What's the packet size? Are you tuning your rmem limits at all? The slowdown would be more noticeable with UDP with default values and small packet sizes. > We thought that for our LB use case > it will be an acceptable slowdown, but turned out that overall we > got a performance boost, since xdp model simplified user space > and go...
2017 Jan 25
1
[PATCH v2] virtio_net: fix PAGE_SIZE > 64k
On Tue, Jan 24, 2017 at 7:48 PM, John Fastabend <john.fastabend at gmail.com> wrote: > > It is a concern on my side. I want XDP and Linux stack to work > reasonably well together. btw the micro benchmarks showed that page per packet approach that xdp took in mlx4 should be 10% slower vs normal operation for tcp/ip stack. We thought that for our LB use case it will be an acceptable
2010 May 03
0
TCP Tuning/Apache question (possibly OT)
Hello All: I've been requested to add some TCP tuning parameters to some CentOS 5.4 systems. These tunings are for the TCP receive buffer windows: net.core.rmem_max net.core.wmem_max Information on this tuning is broadly available: http://fasterdata.es.net/TCP-tuning/linux.html http://www.speedguide.net/read_articles.php?id=121 Potential downsides are available: http://www.29west.com/docs/THPM/udp-buffer-sizing.html From the above, the rmem size is...
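Those two sysctls set the ceilings that applications can request with SO_RCVBUF/SO_SNDBUF; they live under /proc/sys/net/core/ and can be changed with sysctl -w or by writing the files directly. A minimal C sketch of the latter follows; the 16 MiB figure is only a placeholder, not a recommended value, and root is required:

#include <stdio.h>

static int write_sysctl(const char *path, const char *value)
{
        FILE *f = fopen(path, "w");

        if (!f)
                return -1;
        fprintf(f, "%s\n", value);
        return fclose(f);
}

int main(void)
{
        /* Raise the ceilings applications may request via SO_RCVBUF/SO_SNDBUF.
         * 16777216 (16 MiB) is only an illustrative value. */
        int r = write_sysctl("/proc/sys/net/core/rmem_max", "16777216");
        int w = write_sysctl("/proc/sys/net/core/wmem_max", "16777216");

        printf("rmem_max: %s, wmem_max: %s\n",
               r ? "failed" : "set", w ? "failed" : "set");
        return (r || w) ? 1 : 0;
}

For the values to survive a reboot they would go in /etc/sysctl.conf (net.core.rmem_max = ... and net.core.wmem_max = ...).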
2018 Jan 27
1
[Bug 1218] New: ULOGD PCAP Plugin Missing Ethernet Headers
...rity: P5 Component: ulogd Assignee: netfilter-buglog at lists.netfilter.org Reporter: djcanadianjeff at gmail.com With these settings the pcap file is created but missing headers so can not use with wireshark? [global] logfile="/var/log/ulogd.log" loglevel=5 rmem=131071 bufsize=150000 plugin="/usr/lib/ulogd/ulogd_inppkt_NFLOG.so" #plugin="/usr/lib/ulogd/ulogd_inpflow_NFCT.so" plugin="/usr/lib/ulogd/ulogd_filter_IFINDEX.so" plugin="/usr/lib/ulogd/ulogd_filter_IP2STR.so" plugin="/usr/lib/ulogd/ulogd_filter_IP2BIN....
2006 Jun 04
4
Maximum samba file transfer speed on gigabit...
...e upgrade from 2x 10K RPM SATA 1.5Gbps drives in RAID-0 to 4x 15K RPM SAS 3.0Gbps drives in RAID-10. That should do it. Nope. No difference, no change whatsoever (that was an expensive mistake). Then it must be the network card is the bottleneck. So we get PCI-E Gigabit NICs, I learn all about rmem and wmem and tcp window sizes, set a bunch of those settings (rmem & wmem = 25000000, tcp window size on Windows = 262800 as well as so_sndbuf, so_rcvbuf, max xmit, and read size in smb.conf = 262800), still no change. No change! I can run 944 Mb/s or higher in iperf. Why can't I even ge...
2016 Feb 12
3
Experimental 6502 backend; memory operand folding problem
Greetings, LLVM devs, For the past few weeks, I have been putting together a 6502 backend for LLVM. The 6502 and its derivatives, of course, have powered countless microcomputers, game consoles and arcade machines over the past 40 years. The backend is just an experimental hobby project right now. The code is available here: <https://github.com/beholdnec/llvm-m6502>. This branch introduces
2017 Jan 24
2
[PATCH v2] virtio_net: fix PAGE_SIZE > 64k
On Tue, Jan 24, 2017 at 03:09:59PM -0500, David Miller wrote: > From: "Michael S. Tsirkin" <mst at redhat.com> > Date: Tue, 24 Jan 2017 21:53:13 +0200 > > > I didn't realise. Why can't we? I thought that adjust_header is an > > optional feature that userspace can test for, so no rush. > > No, we want the base set of XDP features to be present in