Displaying 20 results from an estimated 45 matches for "fanout".
2014 May 13
0
[Bug 939] New: extensions: NFQUEUE: missing cpu-fanout
https://bugzilla.netfilter.org/show_bug.cgi?id=939
Summary: extensions: NFQUEUE: missing cpu-fanout
Product: netfilter/iptables
Version: linux-2.6.x
Platform: x86_64
OS/Version: Debian GNU/Linux
Status: NEW
Severity: enhancement
Priority: P5
Component: ip_tables (kernel)
AssignedTo: netfilter-buglog at lists....
2014 May 13
1
[Bug 933] New: queue: Incorrect use of option with queue
...ersion: Debian GNU/Linux
Status: NEW
Severity: normal
Priority: P5
Component: nft
AssignedTo: pablo at netfilter.org
ReportedBy: anarey at gmail.com
Estimated Hours: 0.0
The correct use of options in a queue statement is "[..] queue [..] options fanout, bypass
[...]"
But you can add the following rule to a table without any error message being
shown:
$ sudo nft add rule ip test input queue num 2 total 3 options fanout options
bypass counter
$ sudo nft list table test
table ip test {
chain input {
queue num 2 total 3 options bypas...
2018 Mar 12
4
Expected performance for WORM scenario
Heya fellas.
I've been struggling quite a lot to get glusterfs to perform even
half-decently with a write-intensive workload. Test numbers are from gluster
3.10.7.
We store a bunch of small files in a doubly-tiered sha1 hash fanout
directory structure. The directories themselves aren't overly full. Most of
the data we write to gluster is "write once, read probably never", so 99%
of all operations are of the write variety.
The network between servers is sound. 10gb network cards run over a 10gb
(doh) switch. ipe...
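The doubly-tiered SHA-1 hash fan-out layout described in this thread can be sketched as follows. This is an illustrative assumption, not the poster's actual code; the function name and the two-character tier width are hypothetical:

```python
import hashlib
import os

def fanout_path(name: str, levels: int = 2, width: int = 2) -> str:
    """Map a name to a tiered fan-out path derived from its SHA-1 digest.

    With two tiers of two hex characters each, files spread across at
    most 256 * 256 leaf directories, keeping any single directory small.
    """
    digest = hashlib.sha1(name.encode("utf-8")).hexdigest()
    tiers = [digest[i * width:(i + 1) * width] for i in range(levels)]
    return os.path.join(*tiers, digest)

# A digest starting "ab12..." lands under ab/12/.
print(fanout_path("example.dat"))
```

The point of such a layout is that directory size stays bounded even with millions of files, which matters for the small-file workload described above.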
2018 Mar 12
0
Expected performance for WORM scenario
...s.ericsson at findity.com>
wrote:
2014 Apr 14
0
[ANNOUNCE]: Release of nftables 0.2
...e used as symbolic
constants in combination with the "ct label" expression.
- nft filter input ct label clients,servers accept
will accept packets of connections labeled with either clients or servers.
* Queue load balancing
The queue statement now supports load balancing, CPU fanout, queue bypass
etc.
- nft filter output queue num 3 total 2 options fanout
will queue packets to queue numbers 3 and 4 using CPU fanout.
* XML/JSON ruleset export
Using "nft export <xml|json>", the ruleset can be exported in either format.
A corresponding import facilit...
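The CPU fan-out behaviour announced above ("queue num 3 total 2 options fanout" queuing packets to queues 3 and 4) can be modelled with a one-line sketch. This is a simplified model of the kernel's queue selection, not its actual code:

```python
def fanout_queue(base: int, total: int, cpu: int) -> int:
    """Simplified model of NFQUEUE CPU fan-out: the index of the CPU
    that received the packet selects one queue out of the range
    base .. base + total - 1."""
    return base + (cpu % total)

# "queue num 3 total 2 options fanout": CPUs alternate between 3 and 4.
queues = [fanout_queue(3, 2, cpu) for cpu in range(4)]
print(queues)  # [3, 4, 3, 4]
```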
2018 Mar 13
5
Expected performance for WORM scenario
...io
2018 Mar 12
0
Expected performance for WORM scenario
...r.org
Subject: [Gluster-users] Expected performance for WORM scenario
2006 Oct 31
0
PSARC 2005/654 Nemo soft rings
Author: krgopi
Repository: /hg/zfs-crypto/gate
Revision: a813fd7825c4b1d3fb282c08cdf80bc9ffa88a1a
Log message:
PSARC 2005/654 Nemo soft rings
6306717 For Nemo based drivers, IP can ask dls to do the fanout
Files:
create: usr/src/uts/common/io/dls/dls_soft_ring.c
create: usr/src/uts/common/sys/dls_soft_ring.h
update: usr/src/uts/common/Makefile.files
update: usr/src/uts/common/inet/ip.h
update: usr/src/uts/common/inet/ip/ip.c
update: usr/src/uts/common/inet/ip/ip_if.c
update: usr/src/uts/commo...
2002 Dec 17
2
Profiles and Win2000
Hi
We're running Samba 2.2.7a as a PDC for Windows2000 SP2 Clients.
Everything works just fine, but there is one problem:
The clients save their profiles on the PDC (~/.Profiles), but don't
delete the local copy of the profile (C:\My Documents\$user).
This causes a really big problem, since the clients' HDs aren't that
big and these profiles grow every day.
Is there any way
2013 Aug 06
0
[ANNOUNCE] iptables 1.4.20 release
...xml: Fix various parsing bugs
Russell Senior (1):
libxt_recent: restore reap functionality to recent module
Willem de Bruijn (1):
build: fail in configure on missing dependency with --enable-bpf-compiler
holger at eitzenberger.org (1):
extensions: libxt_NFQUEUE: add --queue-cpu-fanout parameter
2016 Feb 17
0
[Bug 980] BUG!!! nf_queue: full at 1024 entries, dropping packets(s). Dropped: 998887
...CC| |pablo at netfilter.org
Resolution|--- |WONTFIX
--- Comment #2 from Pablo Neira Ayuso <pablo at netfilter.org> ---
You can use nfq_set_queue_maxlen() to get a larger queue size. Moreover, have a
look at queue balancing and fanout options to parallelize processing.
There's also a bypass option so packets are not dropped in case they cannot be
enqueued to userspace.
You can also set the zero copy option.
Please, have a look at the documentation for performance options. Otherwise,
refer to the netfilter at vger.kernel.o...
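The interplay between queue length and the bypass option mentioned in this comment can be sketched as a toy model, assuming the 1024-entry default from the bug title; the function and its names are hypothetical, not the kernel's code:

```python
from collections import deque

def nf_enqueue(queue: deque, maxlen: int, packet: object, bypass: bool) -> str:
    """Toy model of nf_queue when userspace lags behind:
    - room left: the packet waits for a userspace verdict
    - queue full, bypass on: the packet is accepted in-kernel
    - queue full, bypass off: the packet is dropped (this bug report)"""
    if len(queue) < maxlen:
        queue.append(packet)
        return "queued"
    return "accepted" if bypass else "dropped"

q: deque = deque()
verdicts = [nf_enqueue(q, 1024, i, bypass=False) for i in range(1025)]
print(verdicts[-1])  # "dropped": the queue is full at 1024 entries
```

Raising the limit with nfq_set_queue_maxlen() delays the overflow; bypass changes what happens once it occurs.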
2017 Mar 20
1
[Bug 1134] New: snat and dnat should accept mapping concatenated values for address and port
...ement.
so...
table example {
    dnat_info {
        type inet_service : ipv4_addr . inet_service
        elements = { 80 : 192.168.13.5 . 8080 }
    }
    chain foo {
        dnat tcp port @dnat_info
    }
}
Intervals for all three values would be nice too.
P.S. intervals of addresses to achieve fanout behavior in dnat would be a
different new feature. 8-)
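The requested map type (a service keyed to a concatenated address . service value) behaves like a plain dictionary lookup. A hypothetical model for illustration, not nft syntax:

```python
# Model of the proposed map: type inet_service : ipv4_addr . inet_service
dnat_info = {80: ("192.168.13.5", 8080)}

def dnat_lookup(dport: int):
    """Return the (address, port) DNAT rewrite for a destination port,
    or None when no map entry matches."""
    return dnat_info.get(dport)

print(dnat_lookup(80))   # ('192.168.13.5', 8080)
print(dnat_lookup(443))  # None
```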
2014 Jun 25
0
[ANNOUNCE] nftables 0.3 release
...or all new features contained in the recent 3.15 kernel release.
Syntax changes
==============
* More compact syntax for the queue action, e.g.
nft add rule test input queue num 1
You can also express the multiqueue as a range, followed by options.
nft add rule test input queue num 1-3 bypass fanout
Or just simply the options:
nft add rule test input queue bypass
New features
============
* Match input and output bridge interface name through 'meta ibriport'
and 'meta obriport', e.g.
nft add rule bridge filter input meta ibriport br0 counter
* netlink event monitor, t...
2018 Mar 14
0
Expected performance for WORM scenario
...>
2018 Mar 13
0
Expected performance for WORM scenario
...g>
Subject: [Gluster-users] Expected performance for WORM scenario
2018 Mar 13
3
Expected performance for WORM scenario
...io
2010 Feb 04
4
best parallel / cluster SSH
Hey folks,
I stumbled upon this while looking for something else
http://www.linux.com/archive/feature/151340
And it is something I could actually really make use of. But right on
that site they list 3 different ones, and so I'm wondering what all is
out there and what I should use.
Is there one that is part of the standard CentOS?
thanks,
-Alan
--
"Don't eat anything you've
2008 Oct 09
11
Crossbow Code Review Phase III ready
(networking-discuss Bcc'ed)
Good morning,
The Crossbow team would like to invite you to participate in the project's
third and last phase of code review.
The third phase of the review starts today, and will last for
two weeks. It covers the following
parts of the code:
VLANs
Link Aggregations
Xen
mac_datapath_setup
All drivers
2018 Mar 13
0
Expected performance for WORM scenario
...g>
Subject: [Gluster-users] Expected performance for WORM scenario
2018 Mar 14
2
Expected performance for WORM scenario
...io