Displaying 20 results from an estimated 200 matches similar to: "ls performance on directories with small number of items"
2017 Nov 29
0
ls performance on directories with small number of items
The -l flag causes a metadata lookup for every file in the directory: ls does that with an individual stat call per directory entry. That's a lot of tiny network round trips, with fops that don't even fill a standard frame, so each frame carries a high percentage of TCP overhead. Add to that the replica check to ensure you're not getting stale data
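A minimal sketch in Python of the difference (the helper names are illustrative, not from ls itself): the listing is one readdir stream, but the long format adds one lstat per entry, and on a network filesystem each of those is a separate request.

```python
import os

def plain_ls(path):
    # What a bare `ls` needs: one readdir stream, names only.
    return sorted(os.listdir(path))

def long_ls(path):
    # What `ls -l` effectively does: the same readdir, plus one
    # lstat() per entry. On a network filesystem every lstat is a
    # separate tiny round trip that comes nowhere near filling a frame.
    entries = []
    for name in sorted(os.listdir(path)):
        st = os.lstat(os.path.join(path, name))  # extra call per file
        entries.append((name, st.st_mode, st.st_size, int(st.st_mtime)))
    return entries
```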
2017 Nov 27
1
ls performance on directories with small number of items
Also note, Sam's example is comparing apples and orchards. Feeding one person from an orchard is not as efficient as feeding one person an apple, but if you're feeding 10000 people...
Also in question with the NFS example: how long until that chown was flushed? How long until another client could see those changes? And that's ignoring the biggie: what happens when the NFS server goes down?
2017 Nov 27
0
ls performance on directories with small number of items
Hi Aaron,
We also find that Gluster is perhaps not the most performant when performing actions on directories containing large numbers of files.
For example, with a single NFS server a recursive chown on (many!) files took about 18 seconds on the client side; our simple two-replica gluster servers took over 15 minutes.
Having said that, while I'm new to the gluster world, things seem to be
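For context on those numbers, a recursive chown boils down to one walk plus one chown fop per entry, so the per-file round trips (times the replica count) dominate on gluster; a rough sketch in Python (the helper name is assumed, not from any of the tools mentioned):

```python
import os

def recursive_chown(root, uid, gid):
    # One chown fop for the root, then one per entry the walk finds;
    # on a replicated network filesystem each call is at least one
    # round trip per replica, which is where the minutes go.
    os.chown(root, uid, gid)
    calls = 1
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            os.chown(os.path.join(dirpath, name), uid, gid)
            calls += 1
    return calls  # number of chown calls issued
```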
2009 Nov 27
1
Proxy, using checkpassword
Hi all,
I think I may be doing something wrong, but is it possible to proxy POP and IMAP users when using a checkpassword script as the passdb?
I'm trying to write a perl script to handle authentication to a mix of SQL and POP3 sources whilst logging user passwords at the same time for a migration.
At the moment, I'm trying to set environment variables to tell dovecot what to do:
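For what it's worth, a hedged sketch of such a script in Python, following the classic checkpassword interface as Dovecot documents it (credentials arrive on fd 3, extra passdb fields are exported via the environment and listed in EXTRA, then the reply binary in argv[1] is exec'd); the backend address and field values here are hypothetical:

```python
import os
import sys

def read_credentials(fd=3):
    # Dovecot writes "user\0password\0" on file descriptor 3.
    data = b""
    while True:
        chunk = os.read(fd, 512)
        if not chunk:
            break
        data += chunk
    user, password = data.split(b"\0")[:2]
    return user.decode(), password.decode()

def authenticate_and_reply(reply_binary):
    # reply_binary is argv[1], the checkpassword-reply helper.
    try:
        user, password = read_credentials()
    except (OSError, ValueError):
        sys.exit(111)  # temporary failure

    # ... verify against the SQL/POP3 sources and log for the
    # migration here (site-specific, left out of this sketch) ...

    # Hand extra passdb fields back to dovecot via the environment;
    # EXTRA names the variables dovecot should pick up.
    os.environ["USER"] = user
    os.environ["EXTRA"] = "proxy host"
    os.environ["proxy"] = "y"
    os.environ["host"] = "10.0.0.1"  # hypothetical backend to proxy to
    os.execv(reply_binary, [reply_binary])
```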
2008 Mar 08
3
1.1 master auth not expanding static userdb variables
Can dovecot-1.1 deliver work with static userdb? I'm currently running
dovecot-1.0.12 and postfix-2.4.6, with virtual users' maildirs all owned
by vmail and mail_location = maildir:/var/mail/%Lu. The following
definition of the dovecot transport in postfix/master.conf works fine with
dovecot-1.0:
dovecot unix - n n - 1 pipe
flags=DRh
2017 Jul 19
2
[ovirt-users] ovirt 4.1 hosted engine hyperconverged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
[Adding gluster-users]
On Wed, Jul 19, 2017 at 2:52 PM, yayo (j) <jaganz at gmail.com> wrote:
> Hi all,
>
> We have an oVirt cluster, hyperconverged with hosted engine, on 3 fully
> replicated nodes. This cluster has 2 gluster volumes:
>
> - data: volume for the Data (Master) Domain (for VMs)
> - engine: volume for the hosted_storage Domain (for hosted engine)
>
>
2007 Aug 26
1
Winbind deadlock with AD and nss
Hi,
I'm testing out Samba 3.0.25c with Active Directory using the rid
idmap backend. In certain cases there seems to be a repeatable
deadlock in winbind.
I have a local user "ed" created with uid 100 and no user exists with
uid 1001. Here's the behavior I'm seeing with wbinfo:
# time wbinfo -U 100
S-1-22-1-100
real 0m0.047s
user 0m0.014s
sys 0m0.007s
# time
2018 Jan 20
2
PDFs getting mangled
> On 19 Jan, 2018, at 4:39, Aki Tuomi <aki.tuomi at dovecot.fi> wrote:
>
>
>
> On 19.01.2018 04:35, Adam Weinberger wrote:
>> Since upgrading to 2.3.0 / 0.5.0.1, incoming PDFs are getting mangled.
>> It seems to be happening when I use vnd.dovecot.filter. When I comment
>> out the block, things come through fine.
>>
>> My filter block looks like
2018 Jan 21
2
PDFs getting mangled
Op 1/20/2018 om 11:01 PM schreef Adam Weinberger:
>> On 20 Jan, 2018, at 10:05, Adam Weinberger <adamw at adamw.org> wrote:
>>
>>
>>> On 19 Jan, 2018, at 4:39, Aki Tuomi <aki.tuomi at dovecot.fi> wrote:
>>>
>>>
>>>
>>> On 19.01.2018 04:35, Adam Weinberger wrote:
>>>> Since upgrading to 2.3.0 / 0.5.0.1, incoming
2019 Aug 13
3
winbind - frequent high CPU utilization
Hi.
I use winbind + squid on Debian Buster to authenticate users + authorize
them based on groups they are in. It all works, well, good, but winbind's
CPU utilization peaks can reach up to 100%. The same solution ran OK on
Debian Jessie with up to 20% CPU utilization at most.
Buster's configuration must have changed with the samba version jump from Jessie.
On
2004 Aug 06
2
preprocessor performance (was Re: Memory leak in denoiser + a few questions)
Jean-Marc Valin wrote:
>If you set the denoiser to "on" and the VAD to "off", what difference
>does it make in CPU time?
>
Same program, running on Athlon XP 1700+:
Test 1, using VAD, but AGC, denoise off:
tevek@canarsie:~/work/hms/app_conference $ time ./vad_test
/tmp/demo-instruct.sw 5
reading from /tmp/demo-instruct.sw, repeating 5 times
read 537760
2018 Jan 19
3
PDFs getting mangled
Since upgrading to 2.3.0 / 0.5.0.1, incoming PDFs are getting mangled.
It seems to be happening when I use vnd.dovecot.filter. When I comment
out the block, things come through fine.
My filter block looks like this:
require "vnd.dovecot.filter";
filter "bogofilter_filter";
if header :contains "X-Bogosity" [
"Spam,
2017 Jul 19
0
[ovirt-users] ovirt 4.1 hosted engine hyperconverged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
On 07/19/2017 08:02 PM, Sahina Bose wrote:
> [Adding gluster-users]
>
> On Wed, Jul 19, 2017 at 2:52 PM, yayo (j) <jaganz at gmail.com> wrote:
>
> Hi all,
>
> We have an oVirt cluster, hyperconverged with hosted engine, on 3
> fully replicated nodes. This cluster has 2 gluster volumes:
>
> - data: volume for
2003 Jan 12
10
Shorewall on a file/webserver/router Help
Hi,
I have an install of shorewall with 2 interfaces (I think):
ppp0 [connection device] and eth0 [LAN device].
I want to allow all traffic from the internet in, or at least port 80,
CVS, webmin, mail, and everything normal to the main machine with
shorewall on it.
I changed the policy file but it just gave me errors about duplicate interfaces.
I also still want to allow connection sharing
2006 Aug 28
1
Strange slowdown calling Wine from PHP
I'm getting some odd behaviour from Wine when I call it from PHP, and
I was wondering if someone who has the command line version of PHP
installed would be willing to give the following test a try to see if
it is something local to me or not.
When I call Wine from a PHP script, it runs perfectly quickly. When I
call that PHP script from another one, the whole thing slows down
dramatically.
2016 Mar 08
7
Strange behaviour of iptables in centos 7
Hi
strange behaviour of iptables on a centos 7.0 machine:
The following rule is in the iptables of said machine:
[root at myserver ~]# iptables -L -v -n --line-numbers |grep 175\.
9        9   456 DROP       all  --  *      *       175.44.0.0/16        0.0.0.0/0
[root at myserver ~]#
The corresponding entry in /etc/sysconfig/iptables looks like:
[root at myserver ~]# grep 175 /etc/sysconfig/iptables
2009 Apr 22
4
Problem with "apply"
Hi R users,
I am trying to assign ages to age classes for a large data set (123,000 records), and using a for-loop was too slow, so I wrote a function and used apply. However, the function does not properly assign the first two classes (the rest are fine). It appears that when age is one digit, it does not get assigned properly.
I tried to provide a small-scale work-up (at the end of the
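A likely culprit is that apply() coerces the data frame to a character matrix, so one-digit ages compare lexicographically ("9" sorts after "10" as strings) and land in the wrong class. The fix in any language is to compare numerically; a small Python sketch with hypothetical breakpoints:

```python
from bisect import bisect_right

# Hypothetical breakpoints: class 0 is age < 5, class 1 is 5-9,
# class 2 is 10-14, and so on.
BREAKS = [5, 10, 15, 20]

def age_class(age):
    # Convert to a number first; compared as strings, "9" would sort
    # after "10", misassigning every one-digit age.
    return bisect_right(BREAKS, int(age))

classes = [age_class(a) for a in [3, 9, 10, 17, 42]]  # [0, 1, 2, 3, 4]
```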
2009 Oct 26
17
[Bug 1667] New: sshd slow connect with 'UseDNS yes'
https://bugzilla.mindrot.org/show_bug.cgi?id=1667
Summary: sshd slow connect with 'UseDNS yes'
Product: Portable OpenSSH
Version: 5.2p1
Platform: All
OS/Version: Linux
Status: NEW
Severity: major
Priority: P2
Component: sshd
AssignedTo: unassigned-bugs at mindrot.org
ReportedBy:
2012 Mar 25
1
how to speed up OpenSSH command execution (and a speed analysis)
Hi.
I recently did some investigation into how to squeeze the last
microseconds out of executing commands via OpenSSH on remote hosts
(of course I'm using ControlMaster).
MOTIVATION:
I'm introducing Nagios (well, actually Icinga) at the local institute.
We have many active checks that must run locally on the remote hosts.
The "best" way to do this is using NRPE (Nagios Remote
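The usual OpenSSH answer to per-connection latency is connection multiplexing; a minimal ssh_config sketch (the option names are standard OpenSSH ones, the host pattern is hypothetical):

```
Host monitored-*
    # Reuse one TCP/SSH session for all commands to the same host.
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    # Keep the master alive for 10 minutes after the last client exits.
    ControlPersist 10m
```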
2011 Aug 13
1
[Bug 8375] New: rsync with bandwidth limit sometimes expend extra time
https://bugzilla.samba.org/show_bug.cgi?id=8375
Summary: rsync with bandwidth limit sometimes expend extra time
Product: rsync
Version: 3.0.8
Platform: All
OS/Version: All
Status: NEW
Severity: normal
Priority: P5
Component: core
AssignedTo: wayned at samba.org
ReportedBy: fbmoser at