Displaying 9 results from an estimated 9 matches for "2048b".
2018 Mar 20 · 0 · Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
...E: bw=17.4MiB/s (18.2MB/s), 17.4MiB/s-17.4MiB/s (18.2MB/s-18.2MB/s), io=256MiB (268MB), run=14748-14748msec
## 2k randwrite
# fio --randrepeat=1 --ioengine=libaio --gtod_reduce=1 --name=test --filename=test --bs=2k --iodepth=32 --size=256MB --readwrite=randwrite
test: (g=0): rw=randwrite, bs=(R) 2048B-2048B, (W) 2048B-2048B, (T) 2048B-2048B, ioengine=libaio, iodepth=32
fio-3.1
Starting 1 process
Jobs: 1 (f=1): [w(1)][100.0%][r=0KiB/s,w=8624KiB/s][r=0,w=4312 IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=42781: Tue Mar 20 15:05:57 2018
write: IOPS=4439, BW=8880KiB/s (9093kB/s)(256MiB...
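(As a quick sanity check on the figures above: 4439 IOPS at a 2 KiB block size works out to 4439 x 2 KiB = 8878 KiB/s, which matches the reported 8880 KiB/s bandwidth to within rounding of the averaged IOPS.)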
2018 Mar 20 · 2 · Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
On Tue, Mar 20, 2018 at 8:57 AM, Sam McLeod <mailinglists at smcleod.net>
wrote:
> Hi Raghavendra,
>
>
> On 20 Mar 2018, at 1:55 pm, Raghavendra Gowdappa <rgowdapp at redhat.com>
> wrote:
>
> Aggregating a large number of small writes by write-behind into large writes
> has been merged on master:
> https://github.com/gluster/glusterfs/issues/364
>
>
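For context, write-behind behaviour is tuned per volume; a minimal sketch using the standard GlusterFS volume options (the volume name is a placeholder, and whether small writes are actually aggregated into larger ones depends on running a release that carries the change referenced above):

    # enable write-behind and give it a 1 MB window (sketch, not from the thread)
    gluster volume set <volname> performance.write-behind on
    gluster volume set <volname> performance.write-behind-window-size 1MB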
2004 Apr 06 · 11 · htb2 -> htb3 problems
Hello!
I need to switch from htb2 to htb3 because of speed issues (for me,
htb2 is unable to handle more than 100mbit duplex with ~550 classes);
kernel profiling shows htb_dequeue_prio in 1st place with 3x isolation.
So I've moved from the 2.4.19 to the 2.4.25 kernel (hi-pac for classification/marking
and htb3 for queueing), and the traffic rate dropped from 100 to 20mbit.
What can be wrong? The
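For readers who haven't used HTB: the kind of configuration being migrated here is built with tc, roughly along these lines (a generic sketch with placeholder device and rates, not the poster's ~550-class setup):

    tc qdisc add dev eth0 root handle 1: htb default 20
    tc class add dev eth0 parent 1:  classid 1:1  htb rate 100mbit
    tc class add dev eth0 parent 1:1 classid 1:10 htb rate 50mbit ceil 100mbit
    tc class add dev eth0 parent 1:1 classid 1:20 htb rate 10mbit ceil 100mbit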
2012 Oct 29 · 3 · mbox vs. maildir storage block waste
...640
Available_begin: 47592193024
Available_end: 7721119744
mdir exact used space: 39683908608
mdir guess used space: 39871086592
mdir num mails: 3425033
delta: 1.561232384 G
delta / mail: 455 B
As you can see, the delta per mail is rather close to the statistically
expected values of 2048B, 1024B and 512B.
In the end I probably changed my opinion.
~7GB of wasted block space for all my mails is actually quite a lot, but
in these days of cheap disk space it's acceptable.
And with mbox one has IMHO the major disadvantage that mailservers
(including dovecot) store some meta-data _in_ it...
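(Checking the arithmetic in this excerpt: 455 B of average waste over 3,425,033 mails is 455 x 3,425,033 ≈ 1.56 GB, matching the reported delta; at the 2048 B figure the same mail count would waste roughly 2048 x 3,425,033 ≈ 7 GB, which appears to be where the "~7GB" number comes from.)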
2017 Jul 06 · 3 · [RFC][SVE] Supporting Scalable Vector Architectures in LLVM IR (take 2)
On 6 July 2017 at 23:13, Chris Lattner <clattner at nondot.org> wrote:
>> Yes, as an extension to VectorType they can be manipulated and passed
>> around like normal vectors, load/stored directly, phis, put in llvm
>> structs etc. Address computation generates expressions in terms of vscale
>> and it seems to work well.
>
> Right, that works out through
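(For orientation: in present-day LLVM IR, the scalable vector types that grew out of this proposal are written as <vscale x 4 x i32>, i.e. an unknown-at-compile-time multiple of a fixed base vector, and the runtime multiple is exposed to address computation through the llvm.vscale intrinsic; the exact spelling used in the 2017 RFC itself may differ.)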
2014 Oct 28 · 22 · [Bug 2302] New: ssh (and sshd) should not fall back to deselected KEX algos
...e.g. if the RFC would mandate it for a
conforming implementation).
If a user/admin removes it from his KEX algo preference list, then he
probably does so intentionally, and thus this shouldn't be silently
reverted again by ssh/sshd.
Further, according to e.g. the ECRYPT II recommendations,... a 2048bit
group as in group14 is only suggested for something between "legacy
standard level" and "Medium term protection",... which may not be
enough for some people.
Since it's typically those people who try to disable the algo by
removing it from their preference lists, that fallback...
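For context, the preference list being discussed is the KexAlgorithms option; pinning it explicitly in sshd_config looks roughly like this (a sketch of the kind of configuration the reporter describes; the algorithm choice is illustrative, not a recommendation from the bug):

    # sshd_config: accept only the listed key exchange methods (sketch)
    KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256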
2019 Jun 03 · 2 · [EXT] Re: [RFC][SVE] Supporting SIMD instruction sets with variable vector lengths
...th 256b vectors, you have the potential to double the work done per cycle on the second CPU without changing the code (there are lots of factors that could prevent performance from scaling nicely, but that's not directly related to the vector length). SVE's current maximum defined size is 2048b, though I suspect it'll be quite a while before we see vectors of that size in commodity hardware. Fujitsu's A64FX will use 512b vectors.
We used predication in the example to show a loop without a scalar tail, but it's not necessary to use the ISA in that manner.
The RISC-V V exten...
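(In terms of 32-bit lanes: a 2048b vector holds 2048/32 = 64 elements per register, versus 16 at the A64FX's 512b and 8 at 256b.)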
2019 May 27 · 2 · [EXT] Re: [RFC][SVE] Supporting SIMD instruction sets with variable vector lengths
Hi All,
I have read the links from Joel. It seems one of their main focuses is the vectorization of loops with a vector predicate register. I am not sure we need the scalable vector type for it. Let's look at a simple example from the white paper.
void example01(int *restrict a, const int *b, const int *c, long N)
{
    long i;
    for (i = 0; i < N; ++i)
        a[i] = b[i] + c[i];
}
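For reference, a minimal sketch of how this loop is commonly written with SVE predication via the ACLE intrinsics (assumes arm_sve.h and an SVE-capable compiler; an illustration of the predicated, tail-less style discussed above, not code from the thread):

#include <arm_sve.h>

/* Predicated SVE version of example01: the whilelt predicate masks off
 * lanes past N on the final iteration, so no scalar tail loop is needed. */
void example01_sve(int *restrict a, const int *b, const int *c, long N)
{
    for (long i = 0; i < N; i += svcntw()) {
        svbool_t pg = svwhilelt_b32(i, N);      /* active lanes for this iteration */
        svint32_t vb = svld1(pg, &b[i]);        /* predicated loads */
        svint32_t vc = svld1(pg, &c[i]);
        svst1(pg, &a[i], svadd_x(pg, vb, vc));  /* predicated add and store */
    }
}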